{"id":13829,"date":"2025-03-30T22:00:00","date_gmt":"2025-03-30T22:00:00","guid":{"rendered":"https:\/\/modernsciences.org\/staging\/4414\/?p=13829"},"modified":"2025-03-22T06:19:53","modified_gmt":"2025-03-22T06:19:53","slug":"ai-in-the-workplace-productivity-tools-limitations-march-2025","status":"publish","type":"post","link":"https:\/\/modernsciences.org\/staging\/4414\/ai-in-the-workplace-productivity-tools-limitations-march-2025\/","title":{"rendered":"How AI can (and can\u2019t) help lighten your load at work"},"content":{"rendered":"\n<div class=\"theconversation-article-body\">\n    <figure>\n      <img  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/656423\/original\/file-20250319-56-9glrsn.jpg?ixlib=rb-4.1.0&#038;rect=0%2C35%2C5991%2C3952&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" >\n        <figcaption>\n          \n          <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/ai-artificial-intelligence-generated-content-concept-2390780411\" target=\"_blank\" rel=\"noopener\">SObeR 9426\/Shutterstock<\/a><\/span>\n        <\/figcaption>\n    <\/figure>\n\n  <span><a href=\"https:\/\/theconversation.com\/profiles\/akhil-bhardwaj-1547067\" target=\"_blank\" rel=\"noopener\">Akhil Bhardwaj<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-bath-1325\" target=\"_blank\" rel=\"noopener\">University of Bath<\/a><\/em><\/span>\n\n  <p>Legend has it that <a href=\"https:\/\/www.smithsonianmag.com\/history\/in-search-of-william-tell-2198511\/\" target=\"_blank\" rel=\"noopener\">William Tell<\/a> shot an apple from his young son\u2019s head. 
While there are many interpretations of the tale, from the perspective of the theory of technology, a few are especially salient. <\/p>\n\n<p>First, Tell was an expert marksman. Second, he knew his bow was reliable but understood it was just a tool with no independent agency. Third, Tell chose the target.<\/p>\n\n<p>What does all this have to do with artificial intelligence? Metaphorically, AI (think large language models or LLMs, such as ChatGPT) can be thought of as a bow, the user is the archer, and the apple represents the user\u2019s goal. Viewed this way, it\u2019s easier to work out how AI can be used effectively in the workplace.<\/p>\n\n<p>To that end, it\u2019s helpful to consider what is known about the limitations of AI before working out where it can \u2013 and can\u2019t \u2013 help with efficiency and productivity.<\/p>\n\n<p>First, LLMs tend to produce output that is not tethered to reality. A recent study showed that as much as <a href=\"https:\/\/www.cjr.org\/tow_center\/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php\" target=\"_blank\" rel=\"noopener\">60%<\/a> of their answers can be incorrect. Premium versions can even give incorrect answers more confidently than their free counterparts. <\/p>\n\n<p>Second, some LLMs are <a href=\"https:\/\/nexla.com\/ai-infrastructure\/data-drift\/\" target=\"_blank\" rel=\"noopener\">closed systems<\/a> \u2013 that is, they do not update their \u201cbeliefs\u201d as the world changes. In a <a href=\"https:\/\/doi.org\/10.5465\/amr.2021.0488\" target=\"_blank\" rel=\"noopener\">mutable world<\/a>, the static nature of such LLMs can be misleading: they drift away from reality and may not be reliable.<\/p>\n\n<p>What\u2019s more, there is some evidence that interactions with users lead to a degradation in performance. 
For example, researchers have found that LLMs become more <a href=\"https:\/\/www.technologyreview.com\/2024\/03\/11\/1089683\/llms-become-more-covertly-racist-with-human-intervention\/\" target=\"_blank\" rel=\"noopener\">covertly racist<\/a> over time. Consequently, their output is not predictable. <\/p>\n\n<p>Third, LLMs have no goals and are not capable of independently discovering the world. They are, at best, just tools to which a user can <a href=\"https:\/\/www.researchgate.net\/publication\/389889194_Science_as_a_vocation_redux_Outsourcing_the_logic_of_discovery_to_AI\" target=\"_blank\" rel=\"noopener\">outsource<\/a> their exploration of the world. <\/p>\n\n<p>Finally, LLMs do not \u2013 to borrow a term from the 1960s sci-fi novel <a href=\"https:\/\/www.britannica.com\/topic\/Stranger-in-a-Strange-Land\" target=\"_blank\" rel=\"noopener\">Stranger in a Strange Land<\/a> \u2013 \u201cgrok\u201d (understand) the world they are embedded in. They are far more like jabbering <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noopener\">parrots<\/a> that give the impression of being smart. <\/p>\n\n<p>Think of how LLMs mine data and track statistical associations between words, which they use to mimic human speech. The AI does not know what those statistical associations mean. It does not know, for example, that the crowing of a rooster does not cause the sunrise. <\/p>\n\n<p>Of course, an LLM\u2019s ability to mimic speech is impressive. But the ability to mimic something does not mean it has the attributes of the original.<\/p>\n\n<h2 id=\"lightening-the-workload\">Lightening the workload<\/h2>\n\n<p>So how can you use AI more effectively? One thing it can be useful for is critiquing ideas. Very often, people prefer not to hear criticism and feel a loss of face when their ideas are criticised \u2013 especially when it happens in public. 
<\/p>\n\n<p>But LLM-generated critiques are private and can be useful. I asked an LLM to critique a recent essay of mine and found its feedback reasonable. Pre-testing ideas in this way can also help you avoid blind spots and obvious errors. <\/p>\n\n<p>Second, you can use AI to crystallise your understanding of the world. What does this mean? Well, because AI does not understand the causes of events, asking it questions can force you to engage in sense-making. For example, I asked an LLM whether my university (Bath) should widely adopt the use of AI. <\/p>\n\n<p>While the LLM pointed to efficiency advantages, it clearly did not understand how resources are allocated. For example, administrative staff who are freed up cannot be redeployed to make high-level strategic decisions or teach courses. AI has no experience of the world from which to understand that. <\/p>\n\n<p>Third, AI can help with mundane tasks such as editing and writing emails. But here, of course, lies a danger \u2013 users may end up using LLMs to write emails at one end and summarise them at the other. <\/p>\n\n<p>You should consider when a clumsily written personal email might be a better option (especially if you need to persuade someone of something). Authenticity is likely to count for more as the use of LLMs becomes more widespread. A personal email that uses the <a href=\"https:\/\/doi.org\/10.1177\/14761270251316028\" target=\"_blank\" rel=\"noopener\">right language and appeals to shared values<\/a> is more likely to resonate. <\/p>\n\n<p>Fourth, AI is best used for low-stakes tasks where there is no liability. For example, it could be used to summarise a lengthy customer review, answer customer questions that are not related to policy or finance, generate social media posts, or help with employee inductions. 
<\/p>\n\n<figure class=\"align-center zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\" target=\"_blank\" rel=\"noopener\"><img  decoding=\"async\"  alt=\"two colleagues having a discussion in a warehouse.\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-ls-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\"  data-pk-src=\"https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"  data-pk-srcset=\"https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=338&amp;fit=crop&amp;dpr=1 600w, https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=338&amp;fit=crop&amp;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=338&amp;fit=crop&amp;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=424&amp;fit=crop&amp;dpr=1 754w, https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=424&amp;fit=crop&amp;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/656576\/original\/file-20250320-56-k08pw9.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=424&amp;fit=crop&amp;dpr=3 2262w\" ><\/a>\n            <figcaption>\n 
             <span class=\"caption\">Where decisions might have serious consequences, human input is better.<\/span>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/two-professional-engineer-man-woman-manager-2287150141\" target=\"_blank\" rel=\"noopener\">M Stocker\/Shutterstock<\/a><\/span>\n            <\/figcaption>\n          <\/figure>\n\n<p>Consider the opposite case. In 2022, an LLM used by Air Canada misinformed a passenger about a fee \u2013 and the passenger sued. The judge held the airline <a href=\"https:\/\/www.bbc.com\/travel\/article\/20240222-air-canada-chatbot-misinformation-what-travellers-should-know\" target=\"_blank\" rel=\"noopener\">liable<\/a> for the bad advice. So always think about liability issues.<\/p>\n\n<p>Fans of AI often advocate it for everything under the sun. Yet frequently, AI comes across as a solution looking for a problem. The trick is to consider very carefully if there is a case for using AI and what the costs involved might be. <\/p>\n\n<p>Chances are, the more creative your task is, or the more unique it is, and the more understanding it requires of how the world works, the less likely it is that AI will be useful. In fact, outsourcing creative work to AI can take away some of the <a href=\"https:\/\/www.researchgate.net\/publication\/389889194_Science_as_a_vocation_redux_Outsourcing_the_logic_of_discovery_to_AI\" target=\"_blank\" rel=\"noopener\">\u201cmagic\u201d<\/a>. AI can mimic humans \u2013 but only humans \u201cgrok\u201d what it is to be human.<!-- Below is The Conversation's page counter tag. Please DO NOT REMOVE. 
--><img  loading=\"lazy\"  decoding=\"async\"  src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAP+KeNJXAAAAAXRSTlMAQObYZgAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\"  alt=\"The Conversation\"  width=\"1\"  height=\"1\"  style=\"border: none !important; box-shadow: none !important; margin: 0 !important; max-height: 1px !important; max-width: 1px !important; min-height: 1px !important; min-width: 1px !important; opacity: 0 !important; outline: none !important; padding: 0 !important\"  referrerpolicy=\"no-referrer-when-downgrade\"  class=\" pk-lazyload\"  data-pk-sizes=\"auto\"  data-pk-src=\"https:\/\/counter.theconversation.com\/content\/252663\/count.gif?distributor=republish-lightbox-basic\" ><!-- End of code. If you don't see any code above, please get new code from the Advanced tab after you click the republish button. The page counter does not collect any personal data. More info: https:\/\/theconversation.com\/republishing-guidelines --><\/p>\n\n  <p><span><a href=\"https:\/\/theconversation.com\/profiles\/akhil-bhardwaj-1547067\" target=\"_blank\" rel=\"noopener\">Akhil Bhardwaj<\/a>, Associate Professor (Strategy and Organisation), School of Management, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-bath-1325\" target=\"_blank\" rel=\"noopener\">University of Bath<\/a><\/em><\/span><\/p>\n\n  <p>This article is republished from <a href=\"https:\/\/theconversation.com\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/how-ai-can-and-cant-help-lighten-your-load-at-work-252663\" target=\"_blank\" rel=\"noopener\">original article<\/a>.<\/p>\n<\/div>\n\n\n\n\n<p class=\"\"><\/p>\n\n\n\n<p class=\"\"><\/p>\n","protected":false},"excerpt":{"rendered":"SObeR 9426\/Shutterstock Akhil Bhardwaj, University of Bath Legend has it that William Tell shot an apple from his&hellip;\n","protected":false},"author":1119,"featured_media":13831,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","fifu_image_url":"https:\/\/live.staticflickr.com\/65535\/52617511739_779ceb2341_h.jpg","fifu_image_alt":"","footnotes":""},"categories":[16],"tags":[6322,6318,6332,6331,6325,6315,6327,6320,6329,6317,6326,6319,6330,6333,6316,6328,6321,6323,6314,474,6324],"class_list":{"0":"post-13829","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech","8":"tag-ai-and-liability-risks","9":"tag-ai-critiques-for-writing","10":"tag-ai-for-admin-tasks","11":"tag-ai-for-customer-service","12":"tag-ai-for-email-writing","13":"tag-ai-in-the-workplace","14":"tag-ai-job-automation-limits","15":"tag-ai-productivity-tools","16":"tag-ai-summarizing-tools","17":"tag-ai-work-efficiency","18":"tag-ai-generated-errors","19":"tag-authenticity-vs-ai-generated-content","20":"tag-creative-tasks-vs-ai","21":"tag-dangers-of-overusing-ai","22":"tag-effective-ai-use-cases","23":"tag-how-to-use-ai-at-work","24":"tag-human-vs-ai-decision-making","25":"tag-large-language-model-accuracy","26":"tag-limitations-of-ai-tools","27":"tag-the-conversation","28":"tag-when-not-to-use-ai","29":"cs-entry","30":"cs-video-wrap"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/13829","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts"}],"about":[{"hre
f":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/users\/1119"}],"replies":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/comments?post=13829"}],"version-history":[{"count":1,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/13829\/revisions"}],"predecessor-version":[{"id":13830,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/posts\/13829\/revisions\/13830"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media\/13831"}],"wp:attachment":[{"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/media?parent=13829"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/categories?post=13829"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/modernsciences.org\/staging\/4414\/wp-json\/wp\/v2\/tags?post=13829"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}