{"id":6380,"date":"2023-05-31T10:00:00","date_gmt":"2023-05-31T10:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=6380"},"modified":"2023-05-19T09:32:29","modified_gmt":"2023-05-19T09:32:29","slug":"chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai\/","title":{"rendered":"ChatGPT can\u2019t think \u2013 consciousness is something entirely different to today\u2019s AI"},"content":{"rendered":"\n  <figure>\n    <img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/525711\/original\/file-20230511-10496-d2f8t7.jpg?ixlib=rb-1.1.0&#038;rect=16%2C0%2C5631%2C3988&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" >\n      <figcaption>\n        \n        <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-vector\/low-polygon-brain-wireframe-mesh-on-686888194\" target=\"_blank\" rel=\"noopener\">Illus_man \/ Shutterstock<\/a><\/span>\n      <\/figcaption>\n  <\/figure>\n\n<span><a href=\"https:\/\/theconversation.com\/profiles\/philip-goff-761089\" target=\"_blank\" rel=\"noopener\">Philip Goff<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/durham-university-867\" target=\"_blank\" rel=\"noopener\">Durham University<\/a><\/em><\/span>\n\n<p>There has been shock around the world at the rapid rate of progress with <a href=\"https:\/\/openai.com\/blog\/chatgpt\" target=\"_blank\" rel=\"noopener\">ChatGPT<\/a> and other artificial intelligence created with what\u2019s known as large language models (LLMs). 
These systems can produce text that seems to display thought, understanding and even creativity.<\/p>\n\n<p>But can these systems really think and understand? This is not a question that can be settled by technological advance alone; careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.<\/p>\n\n<p>In 1950, the father of modern computing, Alan Turing, <a href=\"https:\/\/www.cs.ox.ac.uk\/activities\/ieg\/e-library\/sources\/t_article.pdf\" target=\"_blank\" rel=\"noopener\">published a paper<\/a> which laid out a way of determining whether a computer thinks. This is now called \u201cthe Turing test\u201d. Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one a fellow human being, the other a computer. The game is to work out which is which. <\/p>\n\n<p>Turing proposed that if, after a five-minute conversation, the average judge has no better than a 70% chance of telling the computer from the person \u2013 that is, if the computer fools judges at least 30% of the time \u2013 the computer passes the test. Would passing the Turing test \u2013 something which now seems imminent \u2013 show that an AI has achieved thought and understanding? <\/p>\n\n<h2 id=\"chess-challenge\">Chess challenge<\/h2>\n\n<p>Turing dismissed this question as hopelessly vague, and replaced it with a pragmatic definition of \u201cthought\u201d, whereby to think just means passing the test.<\/p>\n\n<p>Turing was wrong, however, when he said the only clear notion of \u201cunderstanding\u201d is the purely behavioural one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of \u201cunderstanding\u201d that\u2019s tied to consciousness. To understand in this sense is to consciously grasp some truth about reality. 
<\/p>\n\n<p>In 1997, the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Deep_Blue_versus_Garry_Kasparov\" target=\"_blank\" rel=\"noopener\">Deep Blue AI beat chess grandmaster Garry Kasparov<\/a>. On a purely behavioural conception of understanding, Deep Blue had knowledge of chess strategy that surpassed that of any human being. But it was not conscious: it didn\u2019t have any feelings or experiences. <\/p>\n\n<p>Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person.<\/p>\n\n<p>It doesn\u2019t consciously understand the meaning of the words it\u2019s spitting out. If \u201cthought\u201d means the act of conscious reflection, then ChatGPT has no thoughts about anything. <\/p>\n\n<h2 id=\"time-to-pay-up\">Time to pay up<\/h2>\n\n<p>How can I be so sure that ChatGPT isn\u2019t conscious? In the 1990s, neuroscientist Christof Koch <a href=\"https:\/\/www.newscientist.com\/article\/mg23831830-300-consciousness-how-were-solving-a-mystery-bigger-than-our-minds\/\" target=\"_blank\" rel=\"noopener\">bet philosopher David Chalmers a case of fine wine<\/a> that scientists would have entirely pinned down the \u201cneural correlates of consciousness\u201d within 25 years. <\/p>\n\n<p>By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It\u2019s about time Koch paid up, as there is zero consensus that this has happened.<\/p>\n\n<p>This is because consciousness can\u2019t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects\u2019 testimony, or on external markers of consciousness. 
But there are multiple ways of interpreting the data.<\/p>\n\n<figure class=\"align-center \">\n            <img  decoding=\"async\"  alt=\"Chess player\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-ls-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"  data-pk-srcset=\"https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=400&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/525841\/original\/file-20230512-27-bghacs.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=503&amp;fit=crop&amp;dpr=3 2262w\" >\n            <figcaption>\n              <span class=\"caption\">Unlike computers, humans consciously understand the rules of chess and the underlying strategy.<\/span>\n              <span class=\"attribution\"><a 
class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/concentrated-beautiful-girl-playing-chess-on-1740304379\" target=\"_blank\" rel=\"noopener\">LightField Studios \/ Shutterstock<\/a><\/span>\n            <\/figcaption>\n          <\/figure>\n\n<p><a href=\"https:\/\/philarchive.org\/rec\/MICCPA-6\" target=\"_blank\" rel=\"noopener\">Some scientists<\/a> believe there is a close connection between consciousness and reflective cognition \u2013 the brain\u2019s ability to access and use information to make decisions. This leads them to think that the brain\u2019s prefrontal cortex \u2013 where the high-level processes of acquiring knowledge take place \u2013 is essentially involved in all conscious experience. Others deny this, <a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fncel.2019.00302\/full\" target=\"_blank\" rel=\"noopener\">arguing instead that<\/a> it happens in whichever local brain region the relevant sensory processing takes place. <\/p>\n\n<p>Scientists have a good understanding of the brain\u2019s basic chemistry. We have also made progress in understanding the high-level functions of various bits of the brain. But we are almost clueless about the bit in between: how the high-level functioning of the brain is realised at the cellular level.<\/p>\n\n<p>People get very excited about the potential of scans to reveal the workings of the brain. 
But fMRI (functional magnetic resonance imaging) has a very low resolution: <a href=\"https:\/\/www.nature.com\/articles\/nature06976\" target=\"_blank\" rel=\"noopener\">every pixel<\/a> on a brain scan corresponds to 5.5 million neurons, which means there\u2019s a limit to how much detail these scans are able to show.<\/p>\n\n<p>I believe progress on consciousness will come when we better understand how the brain works.<\/p>\n\n<h2 id=\"pause-in-development\">Pause in development<\/h2>\n\n<p>As I argue in my forthcoming book <a href=\"https:\/\/global.oup.com\/academic\/product\/why-the-purpose-of-the-universe-9780198883760?lang=en&amp;cc=jp\" target=\"_blank\" rel=\"noopener\">\u201cWhy? The Purpose of the Universe\u201d<\/a>, consciousness must have evolved because it made a behavioural difference. Systems with consciousness must behave differently, and hence survive better, than systems without consciousness. <\/p>\n\n<p>If all behaviour were determined by underlying chemistry and physics, natural selection would have no reason to make organisms conscious; we would have evolved as unfeeling survival mechanisms. <\/p>\n\n<p>My bet, then, is that as we learn more about the brain\u2019s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behaviour that can\u2019t be explained by currently known chemistry and physics. Already, <a href=\"http:\/\/www.wiringthebrain.com\/2019\/09\/beyond-reductionism-systems-biology.html\" target=\"_blank\" rel=\"noopener\">some neuroscientists<\/a> are seeking potential new explanations for consciousness to supplement the basic equations of physics. <\/p>\n\n<p>While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious. 
<\/p>\n\n<p>There are many dangers posed by AI, and I fully support the recent call by tens of thousands of people, including tech leaders Steve Wozniak and Elon Musk,<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\" target=\"_blank\" rel=\"noopener\"> to pause<\/a> development to address safety concerns. The potential for fraud, for example, is immense. However, the argument that near-term descendants of current AI systems will be super-intelligent, and hence a major threat to humanity, is premature. <\/p>\n\n<p>This doesn\u2019t mean current AI systems aren\u2019t dangerous. But we can\u2019t correctly assess a threat unless we accurately categorise it. LLMs aren\u2019t intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img  loading=\"lazy\"  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"The Conversation\"  width=\"1\"  height=\"1\"  style=\"border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important\"  referrerpolicy=\"no-referrer-when-downgrade\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/counter.theconversation.com\/content\/204823\/count.gif?distributor=republish-lightbox-basic\" ><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. 
More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/philip-goff-761089\" target=\"_blank\" rel=\"noopener\">Philip Goff<\/a>, Associate Professor of Philosophy, <em><a href=\"https:\/\/theconversation.com\/institutions\/durham-university-867\" target=\"_blank\" rel=\"noopener\">Durham University<\/a><\/em><\/span><\/p>\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823\" target=\"_blank\" rel=\"noopener\">original article<\/a>.<\/p>\n\n","protected":false},"excerpt":{"rendered":"Illus_man \/ Shutterstock Philip Goff, Durham University There has been shock around the world at the rapid rate&hellip;\n","protected":false},"author":475,"featured_media":6360,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[12,16],"tags":[334,693,792,497,474],"class_list":{"0":"post-6380","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-health-and-body","8":"category-tech","9":"tag-artificial-intelligence","10":"tag-chatgpt","11":"tag-consciousness","12":"tag-neural-network","13":"tag-the-conversation","14":"cs-entry","15":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6380","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\
/wp-json\/wp\/v2\/users\/475"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=6380"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6380\/revisions"}],"predecessor-version":[{"id":6381,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6380\/revisions\/6381"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/6360"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=6380"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=6380"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=6380"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}