{"id":4502,"date":"2022-07-21T22:00:00","date_gmt":"2022-07-21T22:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=4502"},"modified":"2022-07-06T07:01:27","modified_gmt":"2022-07-06T07:01:27","slug":"googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought\/","title":{"rendered":"Google\u2019s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought"},"content":{"rendered":"\n  <figure>\n    <img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/470388\/original\/file-20220622-7895-m4o7lp.jpg?ixlib=rb-1.1.0&#038;rect=0%2C7%2C4928%2C3245&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" >\n      <figcaption>\n        Words can have a powerful effect on people, even when they\u2019re generated by an unthinking machine.\n        <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.gettyimages.com\/detail\/photo\/words-this-is-my-story-typed-on-paper-with-a-royalty-free-image\/1359861887\" target=\"_blank\" rel=\"noopener\">iStock via Getty Images<\/a><\/span>\n      <\/figcaption>\n  <\/figure>\n\n<span><a href=\"https:\/\/theconversation.com\/profiles\/kyle-mahowald-1354171\" target=\"_blank\" rel=\"noopener\">Kyle Mahowald<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/the-university-of-texas-at-austin-college-of-liberal-arts-4975\" target=\"_blank\" rel=\"noopener\">The University of Texas at Austin College of Liberal Arts<\/a><\/em> and <a 
href=\"https:\/\/theconversation.com\/profiles\/anna-a-ivanova-1354170\" target=\"_blank\" rel=\"noopener\">Anna A. Ivanova<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/massachusetts-institute-of-technology-mit-1193\" target=\"_blank\" rel=\"noopener\">Massachusetts Institute of Technology (MIT)<\/a><\/em><\/span>\n\n<p>When you read a sentence like this one, your past experience tells you that it\u2019s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text. <\/p>\n\n<p>People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural \u2013 but potentially misleading \u2013 to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do. <\/p>\n\n<p>Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google\u2019s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. 
This event and <a href=\"https:\/\/www.washingtonpost.com\/technology\/2022\/06\/11\/google-ai-lamda-blake-lemoine\/\" target=\"_blank\" rel=\"noopener\">the subsequent media coverage<\/a> led to a <a href=\"https:\/\/www.washingtonpost.com\/opinions\/2022\/06\/17\/google-ai-ethics-sentient-lemoine-warning\/\" target=\"_blank\" rel=\"noopener\">number<\/a> of rightly skeptical <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2022\/jun\/14\/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune\" target=\"_blank\" rel=\"noopener\">articles<\/a> and <a href=\"https:\/\/garymarcus.substack.com\/p\/nonsense-on-stilts?s=r\" target=\"_blank\" rel=\"noopener\">posts<\/a> about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing. <\/p>\n\n<p>The question of what it would mean for an AI model to be sentient is complicated (<a href=\"https:\/\/threadreaderapp.com\/thread\/1536829311562354688.html\" target=\"_blank\" rel=\"noopener\">see, for instance, our colleague\u2019s take<\/a>), and our goal here is not to settle it. But as <a href=\"https:\/\/scholar.google.com\/citations?user=XUmFLVUAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">language<\/a> <a href=\"https:\/\/scholar.google.com\/citations?user=hBUjCB0AAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">researchers<\/a>, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.<\/p>\n\n<h2 id=\"using-ai-to-generate-humanlike-language\">Using AI to generate humanlike language<\/h2>\n\n<p>Text generated by models like Google\u2019s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language. 
<\/p>\n\n<figure class=\"align-center zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"noopener\"><img  decoding=\"async\"  alt=\"a screenshot showing a text dialog\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-ls-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"  data-pk-srcset=\"https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=328&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=413&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=413&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/470359\/original\/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=413&amp;fit=crop&amp;dpr=3 2262w\" ><\/a>\n            <figcaption>\n              
<span class=\"caption\">The first computer system to engage people in dialogue was psychotherapy software called Eliza, built more than half a century ago.<\/span>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.flickr.com\/photos\/rosenfeldmedia\/49467507798\" target=\"_blank\" rel=\"noopener\">Rosenfeld Media\/Flickr<\/a>, <a class=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by\/4.0\/\" target=\"_blank\" rel=\"noopener\">CC BY<\/a><\/span>\n            <\/figcaption>\n          <\/figure>\n\n<p>Early versions dating back to at least the 1950s, known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, it\u2019s easy to know that \u201cpeanut butter and jelly\u201d is a more likely phrase than \u201cpeanut butter and pineapples.\u201d If you have enough English text, you will see the phrase \u201cpeanut butter and jelly\u201d again and again but might never see the phrase \u201cpeanut butter and pineapples.\u201d<\/p>\n\n<p>Today\u2019s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by a huge number of internal \u201cknobs\u201d \u2013 so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.<\/p>\n\n<p>The models\u2019 task, however, remains the same as in the 1950s: determine which word is likely to come next. 
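The counting-based approach described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration using a toy corpus of invented sentences, not the actual models discussed in the article: a bigram model that counts which word follows which, then guesses the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a large collection of English text (hypothetical data).
corpus = (
    "i like peanut butter and jelly . "
    "she ate peanut butter and jelly . "
    "he bought peanut butter and pineapples once ."
).split()

# Count bigrams: how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    follow_counts[context][nxt] += 1

def predict_next(context_word):
    """Guess the most likely next word given the previous word."""
    counts = follow_counts[context_word]
    return counts.most_common(1)[0][0] if counts else None

# "jelly" follows "and" twice in the corpus, "pineapples" only once,
# so the model prefers "jelly" -- just as the article describes.
print(predict_next("and"))  # prints "jelly"
```

Modern large language models replace these raw counts with billions of learned parameters and can condition on distant context, but the prediction task they are trained on is the same one this sketch performs.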
Today, they are so good at this task that almost all sentences they generate seem fluid and grammatical.<\/p>\n\n<h2 id=\"peanut-butter-and-pineapples\">Peanut butter and pineapples?<\/h2>\n\n<p>We asked a large language model, <a href=\"https:\/\/theconversation.com\/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591\" target=\"_blank\" rel=\"noopener\">GPT-3<\/a>, to complete the sentence \u201cPeanut butter and pineapples___\u201d. It said: \u201cPeanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.\u201d If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.<\/p>\n\n<p>But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples \u2013 it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind \u2013 even that of a Google engineer \u2013 to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.<\/p>\n\n<figure>\n            <iframe loading=\"lazy\" width=\"440\" height=\"260\" src=\"https:\/\/www.youtube.com\/embed\/a6jt3Vufa9U?wmode=transparent&amp;start=0\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n            <figcaption><span class=\"caption\">Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes, extracted from the texts they were trained on. For instance, if prompted with the topic \u201cthe nature of love,\u201d the model might generate sentences about believing that love conquers all. 
The human brain primes the viewer to interpret these words as the model\u2019s opinion on the topic, but they are simply a plausible sequence of words.<\/span><\/figcaption>\n          <\/figure>\n\n<p>The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person\u2019s goals, feelings and beliefs.<\/p>\n\n<p>The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. <\/p>\n\n<p>However, in the case of AI systems, it misfires \u2013 building a mental model out of thin air.<\/p>\n\n<p>A little more probing can reveal the severity of this misfire. Consider the following prompt: \u201cPeanut butter and feathers taste great together because___\u201d. GPT-3 continued: \u201cPeanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather\u2019s texture.\u201d<\/p>\n\n<p>The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.<\/p>\n\n<h2 id=\"ascribing-intelligence-to-machines-denying-it-to-humans\">Ascribing intelligence to machines, denying it to humans<\/h2>\n\n<p>A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. 
Sociocultural linguistics \u2013 the study of language in its social and cultural context \u2013 shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently. <\/p>\n\n<p>For instance, people with a foreign accent are often <a href=\"https:\/\/theconversation.com\/heres-why-people-might-discriminate-against-foreign-accents-new-research-172539\" target=\"_blank\" rel=\"noopener\">perceived as less intelligent<\/a> and are less likely to get the jobs they are qualified for. Similar biases exist against <a href=\"https:\/\/theconversation.com\/british-people-still-think-some-accents-are-smarter-than-others-what-that-means-in-the-workplace-126964\" target=\"_blank\" rel=\"noopener\">speakers of dialects<\/a> that are not considered prestigious, <a href=\"https:\/\/doi.org\/10.1080%2F17470218.2012.731695\" target=\"_blank\" rel=\"noopener\">such as Southern English<\/a> in the U.S., against <a href=\"https:\/\/doi.org\/10.1177%2F0160597613481731\" target=\"_blank\" rel=\"noopener\">deaf people using sign languages<\/a> and against people with speech impediments <a href=\"https:\/\/doi.org\/10.1016\/j.jfludis.2004.08.001\" target=\"_blank\" rel=\"noopener\">such as stuttering<\/a>. <\/p>\n\n<p>These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.<\/p>\n\n<h2 id=\"fluent-language-alone-does-not-imply-humanity\">Fluent language alone does not imply humanity<\/h2>\n\n<p>Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have <a href=\"https:\/\/news.northeastern.edu\/2022\/06\/16\/google-sentient-ai-concerns\/\" target=\"_blank\" rel=\"noopener\">pondered<\/a> it <a href=\"https:\/\/link.springer.com\/article\/10.1007\/BF00360578\" target=\"_blank\" rel=\"noopener\">for decades<\/a>. 
What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img  loading=\"lazy\"  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"The Conversation\"  width=\"1\"  height=\"1\"  style=\"border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/counter.theconversation.com\/content\/185099\/count.gif?distributor=republish-lightbox-basic\" ><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/kyle-mahowald-1354171\" target=\"_blank\" rel=\"noopener\">Kyle Mahowald<\/a>, Assistant Professor of Linguistics, <em><a href=\"https:\/\/theconversation.com\/institutions\/the-university-of-texas-at-austin-college-of-liberal-arts-4975\" target=\"_blank\" rel=\"noopener\">The University of Texas at Austin College of Liberal Arts<\/a><\/em> and <a href=\"https:\/\/theconversation.com\/profiles\/anna-a-ivanova-1354170\" target=\"_blank\" rel=\"noopener\">Anna A. 
Ivanova<\/a>, PhD Candidate in Brain and Cognitive Sciences, <em><a href=\"https:\/\/theconversation.com\/institutions\/massachusetts-institute-of-technology-mit-1193\" target=\"_blank\" rel=\"noopener\">Massachusetts Institute of Technology (MIT)<\/a><\/em><\/span><\/p>\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099\" target=\"_blank\" rel=\"noopener\">original article<\/a>.<\/p>\n\n","protected":false},"excerpt":{"rendered":"Words can have a powerful effect on people, even when they\u2019re generated by an unthinking machine. iStock via&hellip;\n","protected":false},"author":135,"featured_media":4503,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[16],"tags":[334,474],"class_list":{"0":"post-4502","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech","8":"tag-artificial-intelligence","9":"tag-the-conversation","10":"cs-entry","11":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/4502","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/135"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=4502"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\
/4414\/wp-json\/wp\/v2\/posts\/4502\/revisions"}],"predecessor-version":[{"id":4504,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/4502\/revisions\/4504"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/4503"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=4502"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=4502"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=4502"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}