News | May 12, 2023
When AI Goes Wrong: 4 Cautionary Case Studies
ChatGPT and other generative AI can help with productivity, but there are mounting examples of organizations using the technology in ways that have landed them in hot water.
ChatGPT and other generative AI tools can help businesses become more efficient, but as with any technology, they’re not suited to every occasion. In fact, they can sometimes cause more problems than they solve. Consider these four examples of what not to do with AI.
1. Using AI Models To ‘Add Diversity’
Levi’s recently announced a partnership with Lalaland.ai, a digital fashion studio that builds customized AI-generated models. The denim brand said it would be testing the technology by using AI-generated models to supplement human models, “increasing the number and diversity of our models for our products in a sustainable way,” according to a press release. The move quickly drew fire from critics who wondered why Levi’s wouldn’t hire real models to promote diversity, calling the decision to use AI instead problematic, lazy and racist.
Can't wait til @LEVIS starts using AI models so I can just never buy anything from them again. https://t.co/I0V2S70K4h
— Patrick Lucas Austin (@patbits) March 28, 2023
The backlash was so rapid that Levi’s had to add an editor’s note to its original press release on the topic, noting that the AI would potentially allow the company to publish more images of its products on a range of body types more quickly. “That being said, we are not scaling back our plans for live photo shoots, the use of live models or our commitment to working with diverse models,” according to the editor’s note.
Lesson: Don’t use AI as a shortcut or to paper over real issues – like a company’s lack of diversity.
2. A Soulless Response to a Tragedy
After a mass shooting at Michigan State University that killed three students and injured five more, Vanderbilt University’s Peabody College sent an email to its students addressing the tragedy. It was quickly discovered, however, that the message had been written using ChatGPT.
Wow. Vanderbilt University officials had to issue an apology recently after they used ChatGPT to generate an email about the mass shooting at Michigan State University.https://t.co/sY2hgjtBuw
— Caroline Orr Bueno, Ph.D (@RVAwonk) March 5, 2023
The message talked about the importance of creating an inclusive environment and noted that “one of the key ways to promote a culture of care on our campus is through building strong relationships with one another.” In small print at the end of the message were the words: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.” The email was also signed by two administrators.
Students expressed anger and confusion that administrators would use AI to write an email about the tragedy, and school officials had to send out a follow-up message apologizing for the “poor judgment” they’d exercised.
Lesson: AI can be a great tool for brainstorming or overcoming writer’s block, but there are many instances where real, human empathy is what’s required.
3. Don’t Let the Facts Get in the Way of an Automated Story
Last year, CNET, a tech-focused news site, started experimenting with AI-written articles on money-related topics. The publication never made a formal announcement, initially using bylines that said the articles were written by “CNET Money Staff.” A dropdown description elaborated that the articles were generated using automation technology and edited and fact-checked by the company’s editorial staff.
New from me: the work of CNET's article-writing AI isn't just riddled with errors. It also appears to be substantially plagiarized from pieces that had been published previously, with no citation pic.twitter.com/mjz3XChXbI
— Jon Christian (@Jon_Christian) January 23, 2023
After news of the AI experiment broke, competing news outlets quickly discovered numerous errors in the AI-written copy. CNET ended up issuing corrections for 41 of the 77 stories it had published using an internally developed AI tool – including rewriting some phrases “that were not entirely original.” Company leadership also said it would temporarily pause the use of AI-generated content at CNET and at other websites owned by parent company Red Ventures.
Lesson: Don’t trust every fact that AI provides. A skilled editor with subject matter expertise should review AI-generated content before it’s released to the public.
4. ‘Hallucinations’ of Crimes That People Never Committed
Making factual errors about how concepts like compound interest work – as AI tools did in the CNET example – is one thing, but accusing people of crimes they never committed is another can of worms entirely. Yet there are more and more examples of ChatGPT doing just that: A California lawyer recently asked the AI chatbot to generate a list of legal scholars who had sexually harassed someone. The AI added law professor Jonathan Turley to the list, writing that he had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in the Washington Post as the source.
The AI chatbot fabricated a sexual harassment scandal involving a law professor — and cited a fake Washington Post article as evidence. https://t.co/LwnrjS0s06
— The Washington Post (@washingtonpost) April 5, 2023
Turley, however, has never taken students to Alaska and has never been accused of harassment, and the article ChatGPT cited doesn’t exist. “It was quite chilling,” Turley told the Washington Post.
In another example, Brian Hood, regional mayor of Hepburn Shire in Australia, says ChatGPT made false claims that he had served time in prison for bribery. Hood is threatening to file what would be the first defamation lawsuit against OpenAI, according to Reuters.
Lesson: Be wary of the information ChatGPT and other generative AI tools spew out. These tools work by predicting the most likely strings of words in response to a prompt, but they don’t reason like humans and have no built-in way to determine whether a response is logical or factually accurate. Sometimes that results in a phenomenon known as “hallucination,” where the AI makes up its own facts. Check and double-check to make sure you’re not repeating false claims if you’re taking information directly from a chatbot.
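To see why hallucinations happen, it helps to look at what a language model actually computes. Below is a minimal sketch – using the open-source Hugging Face transformers library and the small GPT-2 model, chosen here purely for illustration and unrelated to any company mentioned above – showing that the model simply ranks candidate next words by statistical plausibility, with no step anywhere that checks whether a continuation is true.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (illustrative only).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The law professor was accused of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Probabilities for the next token: a ranking of plausible-sounding
# continuations, not a judgment about which one is factually correct.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")

Every one of the top-ranked continuations would read fluently, and a chatbot strings thousands of such choices together in a row. That is how a model can produce a confident, well-written accusation – complete with a plausible-looking citation – that has no basis in fact.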