{"id":5842,"date":"2023-03-27T22:00:00","date_gmt":"2023-03-27T22:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=5842"},"modified":"2023-03-10T03:11:48","modified_gmt":"2023-03-10T03:11:48","slug":"carnegie-mellons-alan-robot-learns-to-explore-and-complete-tasks-autonomously","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/carnegie-mellons-alan-robot-learns-to-explore-and-complete-tasks-autonomously\/","title":{"rendered":"Carnegie Mellon&#8217;s \u201cALAN\u201d Robot Learns to Explore and Complete Tasks Autonomously"},"content":{"rendered":"\n<figure class=\"wp-block-video aligncenter\"><video controls src=\"https:\/\/robo-explorer.github.io\/resources\/achievers\/k2_achiever_2x.mp4\"><\/video><figcaption class=\"wp-element-caption\">(Mendonca\/Bahl\/Pathak, 2023)<\/figcaption><\/figure>\n\n\n\n<p>Researchers at <a href=\"https:\/\/www.cmu.edu\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Carnegie Mellon University<\/a> have developed a robot agent called <a href=\"https:\/\/robo-explorer.github.io\/\" target=\"_blank\" rel=\"noopener\" title=\"\">ALAN<\/a> that can autonomously explore unfamiliar environments and complete tasks without human guidance. ALAN is part of a framework that could be used to make physical robots better at exploring their surroundings and learning how to do new things. It uses visual cues to increase the efficiency of robot learning. 
It can learn to manipulate objects with only around 100 trajectories in 1-2 hours in two separate play kitchens, without any rewards.<\/p>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Curiosity-driven Robots in the Real World\" width=\"1200\" height=\"675\" src=\"https:\/\/www.youtube.com\/embed\/F-AyAXcqfM4?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Developed by researchers <a href=\"https:\/\/russellmendonca.github.io\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Russell Mendonca<\/a>, <a href=\"https:\/\/www.cs.cmu.edu\/~sbahl2\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Shikhar Bahl<\/a>, and <a href=\"https:\/\/www.cs.cmu.edu\/~dpathak\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Deepak Pathak<\/a>, ALAN learns a model of the world that it uses to plan its actions, and it guides itself by setting both agent-centric and environment-centric goals. It also narrows the workspace to the area of interest using off-the-shelf, pre-trained detectors. After exploring, the robot can compose the skills it has learned to perform single-step or multi-step tasks specified as goal images. 
ALAN uses its learned world model to seek out actions whose effects on objects it is uncertain about, and then executes those actions in the real world.<\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video controls src=\"https:\/\/robo-explorer.github.io\/resources\/explorers\/alan_fridge_filmora_out.mp4\"><\/video><figcaption class=\"wp-element-caption\">(Mendonca\/Bahl\/Pathak, 2023)<\/figcaption><\/figure>\n\n\n\n<p>The researchers&#8217; robot features an optical module that estimates the movements of objects in its surroundings. The module uses these motion estimates as a signal to maximize change in object state, encouraging the robot to interact with objects. The researchers&#8217; proposed learning strategy enables ALAN to continuously and autonomously learn to complete tasks while exploring its surroundings. ALAN and the framework underpinning it could pave the way for better-performing autonomous robotic systems for environmental exploration.<\/p>\n\n\n\n<figure class=\"wp-block-video aligncenter\"><video controls src=\"https:\/\/robo-explorer.github.io\/resources\/achievers\/k1_achiever_2x.mp4\"><\/video><figcaption class=\"wp-element-caption\">(Mendonca\/Bahl\/Pathak, 2023)<\/figcaption><\/figure>\n\n\n\n<p>The research is currently available as a preprint on <a href=\"https:\/\/arxiv.org\/abs\/2302.06604\" target=\"_blank\" rel=\"noopener\" title=\"\">arXiv<\/a>.<\/p>\n\n\n\n<h1 id=\"references\" class=\"wp-block-heading\">References<\/h1>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Fadelli, I. &amp; Tech Xplore. (2023, March 9). <em>A robot that can autonomously explore real-world environments<\/em>. Tech Xplore. 
<a href=\"https:\/\/techxplore.com\/news\/2023-03-robot-autonomously-explore-real-world-environments.html\" target=\"_blank\" rel=\"noopener\" title=\"\">https:\/\/techxplore.com\/news\/2023-03-robot-autonomously-explore-real-world-environments.html<\/a><\/li>\n\n\n\n<li>Mendonca, R., Bahl, S., &amp; Pathak, D. (2023a). <em>ALAN: Autonomously Exploring Robotic Agents in the Real World<\/em>. <a href=\"http:\/\/robo-explorer.github.io\" target=\"_blank\" rel=\"noopener\" title=\"\">http:\/\/robo-explorer.github.io<\/a><\/li>\n\n\n\n<li>Mendonca, R., Bahl, S., &amp; Pathak, D. (2023b). <em>ALAN: Autonomously Exploring Robotic Agents in the Real World<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2302.06604\" target=\"_blank\" rel=\"noopener\" title=\"\">https:\/\/doi.org\/10.48550\/arXiv.2302.06604<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"Researchers at Carnegie Mellon University have developed a robot agent called ALAN that can autonomously explore unfamiliar 
environments&hellip;\n","protected":false},"author":4,"featured_media":5841,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[15,16],"tags":[334,370],"class_list":{"0":"post-5842","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-engineering","8":"category-tech","9":"tag-artificial-intelligence","10":"tag-robotics","11":"cs-entry","12":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5842","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=5842"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5842\/revisions"}],"predecessor-version":[{"id":5843,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5842\/revisions\/5843"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/5841"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=5842"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=5842"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=5842"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}