{"id":6623,"date":"2023-07-13T22:00:00","date_gmt":"2023-07-13T22:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=6623"},"modified":"2023-06-30T04:22:10","modified_gmt":"2023-06-30T04:22:10","slug":"how-should-a-robot-explore-the-moon-a-simple-question-shows-the-limits-of-current-ai-systems","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/how-should-a-robot-explore-the-moon-a-simple-question-shows-the-limits-of-current-ai-systems\/","title":{"rendered":"How should a robot explore the Moon? A simple question shows the limits of current AI systems"},"content":{"rendered":"\n  <figure>\n    <img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/534227\/original\/file-20230627-21-19eu48.jpg?ixlib=rb-1.1.0&#038;rect=0%2C0%2C2880%2C1621&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" >\n      <figcaption>\n        \n        <span class=\"attribution\"><span class=\"source\">University of Alberta<\/span><\/span>\n      <\/figcaption>\n  <\/figure>\n\n<span><a href=\"https:\/\/theconversation.com\/profiles\/sally-cripps-1443684\" target=\"_blank\" rel=\"noopener\">Sally Cripps<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/alex-fischer-1441919\" target=\"_blank\" rel=\"noopener\">Alex Fischer<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/australian-national-university-877\" target=\"_blank\" rel=\"noopener\">Australian National University<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/edward-santow-1380913\" 
target=\"_blank\" rel=\"noopener\">Edward Santow<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/hadi-mohasel-afshar-1443992\" target=\"_blank\" rel=\"noopener\">Hadi Mohasel Afshar<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>, and <a href=\"https:\/\/theconversation.com\/profiles\/nicholas-davis-1378484\" target=\"_blank\" rel=\"noopener\">Nicholas Davis<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em><\/span>\n\n<p>Rapid progress in artificial intelligence (AI) has spurred some leading voices in the field to <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\">call for a research pause<\/a>, raise the possibility of <a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk\" target=\"_blank\" rel=\"noopener\">AI-driven human extinction<\/a>, and even <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/may\/24\/openai-leaders-call-regulation-prevent-ai-destroying-humanity\" target=\"_blank\" rel=\"noopener\">ask for government regulation<\/a>. At the heart of their concern is the idea AI might become so powerful we lose control of it. <\/p>\n\n<p>But have we missed a more fundamental problem? <\/p>\n\n<p>Ultimately, AI systems should help humans make better, more accurate decisions. Yet even the most impressive and flexible of today\u2019s AI tools \u2013 such as the large language models behind the likes of ChatGPT \u2013 can have the opposite effect. <\/p>\n\n<p>Why? They have two crucial weaknesses. 
They do not help decision-makers understand causation or uncertainty. And they create incentives to collect huge amounts of data, which may encourage a lax attitude to privacy and to legal and ethical risks.<\/p>\n\n<h2 id=\"cause-effect-and-confidence\">Cause, effect and confidence<\/h2>\n\n<p>ChatGPT and other \u201cfoundation models\u201d use an approach called deep learning to trawl through enormous datasets and identify associations between factors contained in that data, such as the patterns of language or links between images and descriptions. Consequently, they are great at interpolating \u2013 that is, predicting or filling in the gaps between known values. <\/p>\n\n<p>Interpolation is not the same as creation. It does not generate knowledge, nor the insights necessary for decision-makers operating in complex environments. <\/p>\n\n<p>However, these approaches require huge amounts of data. As a result, they encourage organisations to assemble enormous repositories of data \u2013 or trawl through existing datasets collected for other purposes. Dealing with \u201cbig data\u201d brings considerable risks around security, privacy, legality and ethics.<\/p>\n\n\n\n<p>In low-stakes situations, predictions based on \u201cwhat the data suggest will happen\u201d can be incredibly useful. But when the stakes are higher, there are two more questions we need to answer. <\/p>\n\n<p>The first is about how the world works: \u201cwhat is driving this outcome?\u201d The second is about our knowledge of the world: \u201chow confident are we about this?\u201d<\/p>\n\n<h2 id=\"from-big-data-to-useful-information\">From big data to useful information<\/h2>\n\n<p>Perhaps surprisingly, AI systems designed to infer causal relationships don\u2019t need \u201cbig data\u201d. Instead, they need <em>useful information<\/em>. The usefulness of the information depends on the question at hand, the decisions we face, and the value we attach to the consequences of those decisions. 
<\/p>\n\n<p>To paraphrase the US statistician and writer Nate Silver, the <a href=\"https:\/\/www.google.com.au\/books\/edition\/The_Signal_and_the_Noise\/udSFU9G49AcC?hl=en&amp;gbpv=1&amp;dq=%22a%20relatively%20constant%20amount%20of%20objective%20truth%22&amp;pg=PT16&amp;printsec=frontcover\" target=\"_blank\" rel=\"noopener\">amount of truth<\/a> is approximately constant irrespective of the volume of data we collect.<\/p>\n\n<p>So, what is the solution? The process starts with developing AI techniques that tell us what we genuinely don\u2019t know, rather than producing variations of existing knowledge. <\/p>\n\n<p>Why? Because this helps us identify and acquire the minimum amount of valuable information, in a sequence that will enable us to disentangle causes and effects.<\/p>\n\n<h2 id=\"a-robot-on-the-moon\">A robot on the Moon<\/h2>\n\n<p>Such knowledge-building AI systems exist already.<\/p>\n\n<p>As a simple example, consider a robot sent to the Moon to answer the question, \u201cWhat does the Moon\u2019s surface look like?\u201d <\/p>\n\n<p>The robot\u2019s designers may give it a prior \u201cbelief\u201d about what it will find, along with an indication of how much \u201cconfidence\u201d it should have in that belief. The degree of confidence is as important as the belief, because it is a measure of what the robot doesn\u2019t know. <\/p>\n\n<p>The robot lands and faces a decision: which way should it go?<\/p>\n\n\n\n<p>Since the robot\u2019s goal is to learn as quickly as possible about the Moon\u2019s surface, it should go in the direction that maximises its learning. This can be measured by which new knowledge will reduce the robot\u2019s uncertainty about the landscape \u2013 or how much it will increase the robot\u2019s confidence in its knowledge. <\/p>\n\n<p>The robot goes to its new location, records observations using its sensors, and updates its belief and associated confidence. 
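<\/p>\n\n<p>The explore-and-update loop just described can be sketched as a toy Bayesian example in Python. Everything here \u2013 the candidate sites, the Gaussian beliefs, the noise level \u2013 is an illustrative assumption, not the actual active-SLAM algorithm:<\/p>\n\n<pre><code># Toy sketch of the robot's explore-and-update loop: illustrative only.\n# Each candidate site gets a Gaussian belief over terrain height:\n# a mean and a variance (the variance measures uncertainty).\nimport random\n\nrandom.seed(0)\nNOISE_VAR = 0.25  # assumed sensor noise variance (hypothetical)\ntrue_height = {\"north\": 2.0, \"south\": -1.0, \"east\": 0.5}\nbelief = {site: {\"mean\": 0.0, \"var\": 4.0} for site in true_height}\n\ndef pick_site(belief):\n    # Go where uncertainty is largest: the biggest expected learning.\n    return max(belief, key=lambda s: belief[s][\"var\"])\n\ndef update(b, obs, noise_var=NOISE_VAR):\n    # Conjugate Gaussian update: precision-weighted average\n    # of the prior mean and the new observation.\n    precision = 1.0 \/ b[\"var\"] + 1.0 \/ noise_var\n    mean = (b[\"mean\"] \/ b[\"var\"] + obs \/ noise_var) \/ precision\n    return {\"mean\": mean, \"var\": 1.0 \/ precision}\n\nfor step in range(6):\n    site = pick_site(belief)\n    obs = true_height[site] + random.gauss(0.0, NOISE_VAR ** 0.5)\n    belief[site] = update(belief[site], obs)\n\n# Every site has now been visited, and each variance has shrunk.<\/code><\/pre>\n\n<p>At each step the robot goes wherever its uncertainty is greatest, so the variances shrink as fast as possible. 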
In doing so it learns about the Moon\u2019s surface in the most efficient manner possible.<\/p>\n\n<p>Robotic systems like this \u2013 known as \u201cactive SLAM\u201d (Active Simultaneous Localisation and Mapping) \u2013 were first proposed <a href=\"https:\/\/ieeexplore.ieee.org\/document\/1041446\" target=\"_blank\" rel=\"noopener\">more than 20 years ago<\/a>, and they are still an <a href=\"https:\/\/arxiv.org\/abs\/2207.00254\" target=\"_blank\" rel=\"noopener\">active area of research<\/a>. This approach of steadily gathering knowledge and updating understanding is based on a statistical technique called <a href=\"https:\/\/en.wikipedia.org\/wiki\/Bayesian_optimization\" target=\"_blank\" rel=\"noopener\">Bayesian optimisation<\/a>.<\/p>\n\n<h2 id=\"mapping-unknown-landscapes\">Mapping unknown landscapes<\/h2>\n\n<p>A decision-maker in government or industry faces more complexity than the robot on the Moon, but the thinking is the same. Their jobs involve exploring and mapping unknown social or economic landscapes.<\/p>\n\n<p>Suppose we wish to develop policies to encourage all children to thrive at school and finish high school. We need a conceptual map of which actions, at what time, and under what conditions, will help to achieve these goals. <\/p>\n\n<p>Using the robot\u2019s principles, we formulate an initial question: \u201cWhich intervention(s) will most help children?\u201d<\/p>\n\n<p>Next, we construct a draft conceptual map using existing knowledge. We also need a measure of our confidence in that knowledge.<\/p>\n\n<p>Then we develop a model that incorporates different sources of information. 
These won\u2019t be from robotic sensors, but from communities, lived experience, and any useful information from recorded data.<\/p>\n\n<p>After this, informed by the analysis and by community and stakeholder preferences, we make a decision: \u201cWhich actions should be implemented and under which conditions?\u201d <\/p>\n\n<p>Finally, we discuss, learn, update beliefs and repeat the process.<\/p>\n\n<h2 id=\"learning-as-we-go\">Learning as we go<\/h2>\n\n<p>This is a \u201clearning as we go\u201d approach. As new information comes to hand, new actions are chosen to maximise some pre-specified criteria.<\/p>\n\n<p>Where AI can be useful is in identifying what information is most valuable, via algorithms that quantify what we don\u2019t know. Automated systems can also gather and store that information at a rate, and in places, that would be difficult for humans.<\/p>\n\n<p>AI systems like this apply what is called a <a href=\"https:\/\/royalsocietypublishing.org\/doi\/10.1098\/rsta.2022.0156\" target=\"_blank\" rel=\"noopener\">Bayesian decision-theoretic framework<\/a>. Their models are explainable and transparent, built on explicit assumptions. They are mathematically rigorous and can offer guarantees. <\/p>\n\n<p>They are designed to estimate causal pathways, to help make the best intervention at the best time. And they incorporate human values by being co-designed and co-implemented by the communities that are impacted.<\/p>\n\n<p>We do need to reform our laws and create new rules to guide the use of potentially dangerous AI systems. But it\u2019s just as important to choose the right tool for the job in the first place.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. 
--><img  loading=\"lazy\"  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"The Conversation\"  width=\"1\"  height=\"1\"  style=\"border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important\"  referrerpolicy=\"no-referrer-when-downgrade\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/counter.theconversation.com\/content\/199180\/count.gif?distributor=republish-lightbox-basic\" ><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/sally-cripps-1443684\" target=\"_blank\" rel=\"noopener\">Sally Cripps<\/a>, Director of Technology, UTS Human Technology Institute, Professor of Mathematics and Statistics, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/alex-fischer-1441919\" target=\"_blank\" rel=\"noopener\">Alex Fischer<\/a>, Honorary Fellow, <em><a href=\"https:\/\/theconversation.com\/institutions\/australian-national-university-877\" target=\"_blank\" rel=\"noopener\">Australian National University<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/edward-santow-1380913\" target=\"_blank\" rel=\"noopener\">Edward Santow<\/a>, Professor &#038; Co-Director, Human Technology Institute, <em><a 
href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>; <a href=\"https:\/\/theconversation.com\/profiles\/hadi-mohasel-afshar-1443992\" target=\"_blank\" rel=\"noopener\">Hadi Mohasel Afshar<\/a>, Lead Research Scientist, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em>, and <a href=\"https:\/\/theconversation.com\/profiles\/nicholas-davis-1378484\" target=\"_blank\" rel=\"noopener\">Nicholas Davis<\/a>, Industry Professor of Emerging Technology and Co-Director, Human Technology Institute, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-technology-sydney-936\" target=\"_blank\" rel=\"noopener\">University of Technology Sydney<\/a><\/em><\/span><\/p>\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/how-should-a-robot-explore-the-moon-a-simple-question-shows-the-limits-of-current-ai-systems-199180\" target=\"_blank\" rel=\"noopener\">original article<\/a>.<\/p>\n\n","protected":false},"excerpt":{"rendered":"University of Alberta Sally Cripps, University of Technology Sydney; Alex Fischer, Australian National University; Edward Santow, University of&hellip;\n","protected":false},"author":523,"featured_media":6600,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[15,14,16],"tags":[334,196,363,370,474],"class_list":{"0":"post-6623","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-engineering","8":"category-space","9":"category-tech","10":"tag-artificial-intelligence","11":"tag-moon","12":"tag-robot","13":"tag-robotics","14":"tag-the-conversation","15":"cs-entry","16":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6623","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/523"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=6623"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6623\/revisions"}],"predecessor-version":[{"id":6624,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6623\/revisions\/6624"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/medi
a\/6600"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=6623"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=6623"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=6623"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}