{"id":6777,"date":"2023-08-01T10:00:00","date_gmt":"2023-08-01T10:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=6777"},"modified":"2023-07-21T09:09:50","modified_gmt":"2023-07-21T09:09:50","slug":"eliminating-bias-in-ai-may-be-impossible-a-computer-scientist-explains-how-to-tame-it-instead","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/eliminating-bias-in-ai-may-be-impossible-a-computer-scientist-explains-how-to-tame-it-instead\/","title":{"rendered":"Eliminating bias in AI may be impossible \u2013 a computer scientist explains how to tame it instead"},"content":{"rendered":"\n  <figure>\n    <img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/537509\/original\/file-20230714-16554-ycstss.jpg?ixlib=rb-1.1.0&#038;rect=22%2C38%2C2095%2C1266&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" >\n      <figcaption>\n        Blindly eliminating biases from AI systems can have unintended consequences.\n        <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.gettyimages.com\/detail\/photo\/grid-of-hexagonal-portraits-hand-adding-new-one-royalty-free-image\/169710978?adppopup=true\" target=\"_blank\" rel=\"noopener\">Dimitri Otis\/DigitalVision via Getty Images<\/a><\/span>\n      <\/figcaption>\n  <\/figure>\n\n<span><a href=\"https:\/\/theconversation.com\/profiles\/emilio-ferrara-314635\" target=\"_blank\" rel=\"noopener\">Emilio Ferrara<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-southern-california-1265\" target=\"_blank\" rel=\"noopener\">University of Southern California<\/a><\/em><\/span>\n\n<p>When I asked ChatGPT for a joke about Sicilians the other day, it implied that 
Sicilians are stinky.<\/p>\n\n<figure class=\"align-center zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"noopener\"><img  decoding=\"async\"  alt=\"ChatGPT exchange in which user asks for a joke about Sicilians, with response &#039;Why did the Sicilian chef bring extra garlic to the restaurant? Because he heard the customers wanted some &#039;Sicilian stink-ilyan&#039; flavor in their meals!&#039;\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-ls-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"  data-pk-srcset=\"https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=204&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=204&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=204&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=257&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=257&amp;fit=crop&amp;dpr=2 
1508w, https:\/\/images.theconversation.com\/files\/536938\/original\/file-20230711-15-aj57mt.png?ixlib=rb-1.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=257&amp;fit=crop&amp;dpr=3 2262w\" ><\/a>\n            <figcaption>\n              <span class=\"caption\">ChatGPT can sometimes produce stereotypical or offensive outputs.<\/span>\n              <span class=\"attribution\"><span class=\"source\">Screen capture by Emilio Ferrara<\/span>, <a class=\"license\" href=\"http:\/\/creativecommons.org\/licenses\/by-nd\/4.0\/\" target=\"_blank\" rel=\"noopener\">CC BY-ND<\/a><\/span>\n            <\/figcaption>\n          <\/figure>\n\n<p>As somebody born and raised in Sicily, I reacted to ChatGPT\u2019s joke with disgust. But at the same time, <a href=\"https:\/\/scholar.google.com\/citations?user=0r7Syh0AAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">my computer scientist brain<\/a> began spinning around a seemingly simple question: Should ChatGPT and other artificial intelligence systems be allowed to be biased? <\/p>\n\n<p>You might say \u201cOf course not!\u201d And that would be a reasonable response. But there are some researchers, like me, who argue the opposite: AI systems like ChatGPT <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2304.03738\" target=\"_blank\" rel=\"noopener\">should indeed be biased<\/a> \u2013 but not in the way you might think.<\/p>\n\n<p>Removing bias from AI is a laudable goal, but blindly eliminating biases can have unintended consequences. 
Instead, bias in AI <a href=\"https:\/\/aclanthology.org\/2023.findings-acl.602\/\" target=\"_blank\" rel=\"noopener\">can be controlled<\/a> to achieve a higher goal: fairness.<\/p>\n\n<h2 id=\"uncovering-bias-in-ai\">Uncovering bias in AI<\/h2>\n\n<p>As AI is increasingly <a href=\"https:\/\/blog.google\/technology\/ai\/bard-google-ai-search-updates\/\" target=\"_blank\" rel=\"noopener\">integrated<\/a> <a href=\"https:\/\/blogs.microsoft.com\/blog\/2023\/03\/16\/introducing-microsoft-365-copilot-your-copilot-for-work\/\" target=\"_blank\" rel=\"noopener\">into<\/a> <a href=\"https:\/\/slack.com\/blog\/news\/introducing-slack-gpt\" target=\"_blank\" rel=\"noopener\">everyday technology<\/a>, many people agree that addressing bias in AI is <a href=\"https:\/\/theconversation.com\/the-white-houses-ai-bill-of-rights-outlines-five-principles-to-make-artificial-intelligence-safer-more-transparent-and-less-discriminatory-192003\" target=\"_blank\" rel=\"noopener\">an important issue<\/a>. But what does \u201cAI bias\u201d actually mean? <\/p>\n\n<p>Computer scientists say an AI model is biased if it <a href=\"https:\/\/www.airoboticslaw.com\/blog\/artificial-intelligence-bias-mitigating-risk\" target=\"_blank\" rel=\"noopener\">unexpectedly produces skewed results<\/a>. These results could exhibit prejudice against individuals or groups, or otherwise not be in line with positive human values like fairness and truth. Even small divergences from expected behavior can have a \u201c<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2307.05842\" target=\"_blank\" rel=\"noopener\">butterfly effect<\/a>,\u201d in which seemingly minor biases can be amplified by generative AI and have far-reaching consequences.<\/p>\n\n<p>Bias in generative AI systems <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2304.03738\" target=\"_blank\" rel=\"noopener\">can come from a variety of sources<\/a>. 
Problematic <a href=\"https:\/\/hbr.org\/2019\/10\/what-do-we-do-about-the-biases-in-ai\" target=\"_blank\" rel=\"noopener\">training data<\/a> can <a href=\"https:\/\/theconversation.com\/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images-208748\" target=\"_blank\" rel=\"noopener\">associate certain occupations with specific genders<\/a> or <a href=\"https:\/\/www.bloomberg.com\/graphics\/2023-generative-ai-bias\/\" target=\"_blank\" rel=\"noopener\">perpetuate racial biases<\/a>. Learning algorithms themselves <a href=\"https:\/\/www.engati.com\/glossary\/algorithmic-bias\" target=\"_blank\" rel=\"noopener\">can be biased<\/a> and then amplify existing biases in the data.<\/p>\n\n<p><\/p>\n\n<p>But systems <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2304.03738\" target=\"_blank\" rel=\"noopener\">could also be biased by design<\/a>. For example, a company might design its generative AI system to prioritize formal over creative writing, or to specifically serve government industries, thus inadvertently reinforcing existing biases and excluding different views. Other societal factors, like a lack of regulations or misaligned financial incentives, can also lead to AI biases. <\/p>\n\n<h2 id=\"the-challenges-of-removing-bias\">The challenges of removing bias<\/h2>\n\n<p>It\u2019s not clear whether bias can \u2013 or even should \u2013 be entirely eliminated from AI systems.<\/p>\n\n<p>Imagine you\u2019re an AI engineer and you notice your model produces a stereotypical response, like Sicilians being \u201cstinky.\u201d You might think that the solution is to remove some bad examples in the training data, maybe jokes about the smell of Sicilian food. 
<a href=\"https:\/\/doi.org\/10.48550\/arXiv.2306.03819\" target=\"_blank\" rel=\"noopener\">Recent research<\/a> has identified how to perform this kind of \u201cAI neurosurgery\u201d to deemphasize associations between certain concepts.<\/p>\n\n<p>But these well-intentioned changes can have unpredictable, and possibly negative, effects. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2304.01910\" target=\"_blank\" rel=\"noopener\">Even small variations<\/a> in the training data or in an AI model configuration can lead to significantly different system outcomes, and these changes are impossible to predict in advance. You don\u2019t know what other associations your AI system has learned as a consequence of \u201cunlearning\u201d the bias you just addressed.<\/p>\n\n<p>Other attempts at bias mitigation run similar risks. An AI system that is trained to completely avoid certain sensitive topics could <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2112.04359\" target=\"_blank\" rel=\"noopener\">produce incomplete or misleading responses<\/a>. Misguided regulations can worsen, rather than improve, issues of AI bias and safety. 
<a href=\"https:\/\/www.forbes.com\/sites\/jamesbroughel\/2023\/06\/22\/how-regulating-ai-could-empower-bad-actors\/\" target=\"_blank\" rel=\"noopener\">Bad actors<\/a> could evade safeguards to elicit malicious AI behaviors \u2013 making <a href=\"https:\/\/theconversation.com\/four-ways-criminals-could-use-ai-to-target-more-victims-207944\" target=\"_blank\" rel=\"noopener\">phishing scams more convincing<\/a> or <a href=\"https:\/\/theconversation.com\/events-that-never-happened-could-influence-the-2024-presidential-election-a-cybersecurity-researcher-explains-situation-deepfakes-206034\" target=\"_blank\" rel=\"noopener\">using deepfakes to manipulate elections<\/a>.<\/p>\n\n<p>With these challenges in mind, researchers are working to improve data sampling techniques and <a href=\"https:\/\/doi.org\/10.1609\/aaai.v37i6.25911\" target=\"_blank\" rel=\"noopener\">algorithmic fairness<\/a>, especially <a href=\"https:\/\/doi.org\/10.1145\/2090236.2090255\" target=\"_blank\" rel=\"noopener\">in settings<\/a> where <a href=\"https:\/\/doi.org\/10.1145\/3340531.3411980\" target=\"_blank\" rel=\"noopener\">certain sensitive data<\/a> is not available. Some companies, <a href=\"https:\/\/www.technologyreview.com\/2023\/02\/21\/1068893\/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased\/\" target=\"_blank\" rel=\"noopener\">like OpenAI<\/a>, have opted to have <a href=\"https:\/\/time.com\/6247678\/openai-chatgpt-kenya-workers\/\" target=\"_blank\" rel=\"noopener\">human workers annotate the data<\/a>.<\/p>\n\n<p>On the one hand, these strategies can help the model better align with human values. However, by implementing any of these approaches, developers also run the risk of introducing new cultural, ideological or political biases.<\/p>\n\n<h2 id=\"controlling-biases\">Controlling biases<\/h2>\n\n<p>There\u2019s a trade-off between reducing bias and making sure that the AI system is still useful and accurate. 
Some researchers, including me, think that generative AI systems should be allowed to be biased \u2013 but in a carefully controlled way.<\/p>\n\n<p>For example, my collaborators and I developed techniques that <a href=\"https:\/\/aclanthology.org\/2023.findings-acl.602\/\" target=\"_blank\" rel=\"noopener\">let users specify<\/a> what level of bias an AI system should tolerate. This model can detect toxicity in written text by accounting for in-group or cultural linguistic norms. While traditional approaches can inaccurately flag some posts or comments written in <a href=\"https:\/\/doi.org\/10.18653\/v1\/P19-1163\" target=\"_blank\" rel=\"noopener\">African-American English as offensive<\/a> and by <a href=\"https:\/\/aclanthology.org\/2023.acl-long.507\/\" target=\"_blank\" rel=\"noopener\">LGBTQ+ communities as toxic<\/a>, this \u201ccontrollable\u201d AI model provides a much fairer classification.<\/p>\n\n<p>Controllable \u2013 and safe \u2013 generative AI is important to ensure that AI models produce outputs that align with human values, while still allowing for nuance and flexibility.<\/p>\n\n<h2 id=\"toward-fairness\">Toward fairness<\/h2>\n\n<p>Even if researchers could achieve bias-free generative AI, that would be just one step toward the <a href=\"https:\/\/theconversation.com\/what-is-ethical-ai-and-how-can-companies-achieve-it-204349\" target=\"_blank\" rel=\"noopener\">broader goal of fairness<\/a>. The pursuit of fairness in generative AI requires a holistic approach \u2013 not only better data processing, annotation and debiasing algorithms, but also human collaboration among developers, users and affected communities.<\/p>\n\n<p>As AI technology continues to proliferate, it\u2019s important to remember that bias removal is not a one-time fix. Rather, it\u2019s an ongoing process that demands constant monitoring, refinement and adaptation. 
Although developers might be unable to easily anticipate or contain the <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2307.05842\" target=\"_blank\" rel=\"noopener\">butterfly effect<\/a>, they can continue to be vigilant and thoughtful in their approach to AI bias.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. --><img  loading=\"lazy\"  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"The Conversation\"  width=\"1\"  height=\"1\"  style=\"border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important\"  referrerpolicy=\"no-referrer-when-downgrade\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/counter.theconversation.com\/content\/208611\/count.gif?distributor=republish-lightbox-basic\" ><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/emilio-ferrara-314635\" target=\"_blank\" rel=\"noopener\">Emilio Ferrara<\/a>, Professor of Computer Science and of Communication, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-southern-california-1265\" target=\"_blank\" rel=\"noopener\">University of Southern California<\/a><\/em><\/span><\/p>\n\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/eliminating-bias-in-ai-may-be-impossible-a-computer-scientist-explains-how-to-tame-it-instead-208611\" target=\"_blank\" rel=\"noopener\">original article<\/a>.<\/p>\n\n","protected":false},"excerpt":{"rendered":"Blindly eliminating biases from AI systems can have unintended consequences. Dimitri Otis\/DigitalVision via Getty Images Emilio Ferrara, University&hellip;\n","protected":false},"author":540,"featured_media":6767,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[16],"tags":[334,333,497,474],"class_list":{"0":"post-6777","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech","8":"tag-artificial-intelligence","9":"tag-machine-learning","10":"tag-neural-network","11":"tag-the-conversation","12":"cs-entry","13":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6777","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/540"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=6777"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6777\/revisions"}],"predecessor-version":[{"id":6778,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/6777\/revisions\/6778"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/6767"}],"wp:attachment":[{"href":"https:\/\/modern
sciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=6777"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=6777"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=6777"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}