{"id":5659,"date":"2023-02-26T22:00:00","date_gmt":"2023-02-26T22:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=5659"},"modified":"2023-02-10T05:57:47","modified_gmt":"2023-02-10T05:57:47","slug":"scientists-designed-an-ai-powered-robot-that-can-paint-errors-and-all","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/scientists-designed-an-ai-powered-robot-that-can-paint-errors-and-all\/","title":{"rendered":"Scientists Designed an AI-Powered Robot That Can Paint\u2014Errors and All"},"content":{"rendered":"\n<p><a href=\"https:\/\/www.cmu.edu\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Carnegie Mellon University<\/a>&#8216;s <a href=\"https:\/\/www.ri.cmu.edu\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Robotics Institute<\/a> has welcomed a new artist-in-residence called FRIDA, which is a robotic arm with a paintbrush attached to it. Framework and Robotics Initiative for Developing Arts (FRIDA) is a way for robots and people to work together on works of art. The project is led by <a href=\"https:\/\/www.lti.cs.cmu.edu\/people\/222227085\/peter-schaldenbrand\" target=\"_blank\" rel=\"noopener\" title=\"\">Peter Schaldenbrand<\/a>, a Ph.D. student at the <a href=\"https:\/\/www.cs.cmu.edu\/\" target=\"_blank\" rel=\"noopener\" title=\"\">School of Computer Science<\/a>, with RI faculty members <a href=\"https:\/\/www.cs.cmu.edu\/~.\/jeanoh\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Jean Oh<\/a> and <a href=\"http:\/\/www.cs.cmu.edu\/~jmccann\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Jim McCann<\/a>. 
The robot makes paintings using AI models similar to those behind <a href=\"https:\/\/openai.com\/\" target=\"_blank\" rel=\"noopener\" title=\"\">OpenAI<\/a>&#8217;s <a href=\"https:\/\/openai.com\/blog\/chatgpt\/\" target=\"_blank\" rel=\"noopener\" title=\"\">ChatGPT<\/a> and <a href=\"https:\/\/openai.com\/dall-e-2\/\" target=\"_blank\" rel=\"noopener\" title=\"\">DALL-E 2<\/a>: it learns, simulates, and then paints in the real world.<\/p>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"550\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">Some of Frida&#39;s first paintings! More to come <a href=\"https:\/\/t.co\/4ZsW95GnFn\">pic.twitter.com\/4ZsW95GnFn<\/a><\/p>&mdash; FRIDA Robot Painter (@FridaRobot) <a href=\"https:\/\/twitter.com\/FridaRobot\/status\/1562467005038415872?ref_src=twsrc%5Etfw\" target=\"_blank\" rel=\"noopener\">August 24, 2022<\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<\/div><\/figure>\n\n\n\n<p>Users can direct FRIDA by typing a description or uploading a picture, and the team is experimenting with other inputs, including audio. The resulting works are whimsical and impressionistic, with bold, imprecise brushstrokes that leave the robot&#8217;s mistakes on the canvas. 
The team is focused on exploring the intersection of human and robotic creativity.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter\"><img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/lh3.googleusercontent.com\/1rVWX06X3j3zE_shqvPRPV_cDWDLUUovDRpAnOzxW7rAB5Hhin8A1mIValFfYTHv6LdGLSwVs2pk1rzZIHyo6KMVrl3Ha-Rt51VyqqgeBNpeqYZapx_T8Yv6uo1WcSf7AGRowYTDwOzQWrMMkPljMgE\" ><figcaption class=\"wp-element-caption\">FRIDA can be seen here painting an impression of the late U.S. Supreme Court Justice Ruth Bader Ginsburg. (Carnegie Mellon University, 2023)<\/figcaption><\/figure>\n<\/div>\n\n\n<p>FRIDA uses AI and machine learning at several points in the artistic process. First, the robot learns how to use its paintbrush. It then interprets the user&#8217;s input with large vision-language models similar to those used in OpenAI&#8217;s DALL-E 2. The biggest technical challenge in producing a physical painting is that simulation and the real world never match exactly. 
To address this, FRIDA uses a process known as &#8220;Real2Sim2Real,&#8221; where the robot&#8217;s actual brush strokes are used to train the simulator.<\/p>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"FRIDA: A Framework and Robotics Initiative for Developing Arts\" width=\"1200\" height=\"675\" src=\"https:\/\/www.youtube.com\/embed\/e2vHvYgjiYg?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/div><figcaption class=\"wp-element-caption\">(CMUComputerScience, 2023)<\/figcaption><\/figure>\n\n\n\n<p>The team is also working to address limitations in current large vision-language models. They fed the models headlines from news articles to give them a sense of the world and further trained them on images and text from diverse cultures to avoid bias. 
The international group is led by master&#8217;s students from Carnegie Mellon and <a href=\"https:\/\/www.dongguk.edu\/eng\/main\" target=\"_blank\" rel=\"noopener\">Dongguk University<\/a> in Korea, with contributors from many different countries.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/lh4.googleusercontent.com\/IvkXbXLupppr3ScWWMihoevGVx8DfPqpIBmVm3Nd-bvd67iS0a9C-Fu05zODMxkWZVfltHRb4-5OBcEZKOKT3JVBkLi7xdTipubDHPP6jlPK-dMiY9HuOfgM9VB33YKc1juVAfGZGtGFTdSJfxJJvVE\" ><figcaption class=\"wp-element-caption\">Pictured, left to right: Peter Schaldenbrand, Jean Oh, and Jim McCann, who spearheaded the work on developing FRIDA. (Carnegie Mellon University, 2023)<\/figcaption><\/figure>\n\n\n\n<p>Lastly, once the user has described what the painting should be about, FRIDA uses machine learning to simulate the result and plan the brushstrokes needed to reach it. The robot displays a color palette on a computer screen so that a person can mix the paints and hand them over; automatic paint mixing is currently in development. The robot then works on a painting for several hours, using an overhead camera to monitor its progress and revise its plan. With FRIDA, the team hopes to promote human creativity. Its latest research will be presented at <a href=\"https:\/\/www.icra2023.org\/\" target=\"_blank\" rel=\"noopener\" title=\"\">London&#8217;s 2023 IEEE International Conference on Robotics and Automation<\/a>.<\/p>\n\n\n\n<h1 id=\"references\" class=\"wp-block-heading\">References<\/h1>\n\n\n\n<p>Aupperlee, A. (2023, February 7). <em>Carnegie Mellon\u2019s AI-Powered FRIDA Robot Collaborates With Humans To Create Art<\/em>. 
Carnegie Mellon School of Computer Science; Carnegie Mellon University. <a href=\"https:\/\/www.cs.cmu.edu\/news\/2023\/frida-robot\" target=\"_blank\" rel=\"noopener\" title=\"\">https:\/\/www.cs.cmu.edu\/news\/2023\/frida-robot<\/a><\/p>\n\n\n\n<p>Lin, C. (2023, February 8). <em>What happens when you mash generative AI with robotics? Carnegie Mellon scientists dream up an answer<\/em>. Fast Company; Fast Company. <a href=\"https:\/\/www.fastcompany.com\/90847320\/generative-ai-art-robot-carnegie-mellon-frida-scientists\" target=\"_blank\" rel=\"noopener\" title=\"\">https:\/\/www.fastcompany.com\/90847320\/generative-ai-art-robot-carnegie-mellon-frida-scientists<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Carnegie Mellon University&#8217;s Robotics Institute has welcomed a new artist-in-residence called FRIDA, which is a robotic arm with&hellip;\n","protected":false},"author":2,"featured_media":5658,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[16],"tags":[334,693],"class_list":{"0":"post-5659","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech","8":"tag-artificial-intelligence","9":"tag-chatgpt","10":"cs-entry","11":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5659","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=5659"}],"version-history":[{"count":1,"href":"https:\/\/mode
rnsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5659\/revisions"}],"predecessor-version":[{"id":5660,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/5659\/revisions\/5660"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/5658"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=5659"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=5659"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=5659"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}