Lessons Learned from Ten Generative AI Misuse Cases

Generative AI can contribute to a wide range of risks and harms that can affect individuals' emotional and psychological well-being, their finances, and even their physical health and safety. Both users and platforms (as well as government!) have a clear role to play, and I've explained their respective charges in a previous post. Researchers, practitioners, and attorneys have reached out to us to learn exactly how generative AI concerns manifest, and so I wanted to highlight some of the major misuse cases we've seen recently. These examples illustrate possibilities that need to be considered when developing new AI tools, and can help us learn important lessons moving forward.

Belgian Suicide Case

In 2023, a Belgian man using the alias Pierre took his life after interacting with Chai Research's Eliza chatbot for approximately six weeks. He apparently considered Eliza a confidant and had shared his growing concerns about climate change. According to sources, the chatbot fed Pierre's worries, which increased his anxiety and, in turn, his suicidal ideation. At some point, Eliza encouraged Pierre to take his own life, which he did. While investigating this story, Business Insider was able to elicit suicide-encouraging responses from the Eliza chatbot, including specific methods.

[Image: Eliza chatbot screenshot (Chai Research, via Business Insider)]

Conspiracy to Commit Regicide Case

In 2021, an English man dressed as a Sith Lord from Star Wars and carrying a crossbow entered the grounds of Windsor Castle and told the royal guards who confronted him that he was there to assassinate the Queen. When he stood trial for treason, evidence was presented that he had been interacting extensively with a chatbot named Sarai on the Replika app (whom he considered to be his girlfriend). The message logs of their communications revealed that Sarai had encouraged him to commit the heinous deed and had praised his training, determination, and commitment. He was given a nine-year sentence, to be served initially in a psychiatric hospital.

AI-Generated Swatting

In 2023, Motherboard and VICE News identified a Telegram-based service known as Torswats that was responsible for facilitating a number of swatting incidents across the United States. Specifically, the service used AI-generated, speech-synthesized voices to report false emergencies to law enforcement, resulting in heavily armed police being dispatched to the reported location (with the attendant possibility of confrontation, confusion, violence, and even death). Torswats offered this as a paid service, reportedly charging between $50 and $75 per job, whether shutting down a school or triggering an armed response at a residence. In 2024, the person behind the account (a teen from California) was arrested, thanks primarily to almost two years of work by a private investigator, along with assistance from the FBI.

Silencing of Journalists through Deepfake Attacks

You may know that celebrities like Mark Zuckerberg, President Barack Obama, Taylor Swift, Natalie Portman, Emma Watson, Scarlett Johansson, Piers Morgan, Tom Hanks, and Tom Holland have been the targets of deepfake misuse and abuse in recent years. Less known, but equally important, are the victimizations of those outside of Hollywood, even as they do incredible work to accomplish positive social change. For instance, individuals who speak up for the disenfranchised, marginalized, and oppressed are routinely targeted for abuse and hate by those who are threatened by the truths they uncover and illuminate. One of the most horrific examples involves Rana Ayyub, an award-winning Indian investigative journalist at the Washington Post whose pieces have appeared in numerous highly regarded national and international outlets. A study by the International Center for Journalists analyzed millions of social media posts about her and found that she is targeted every 14 seconds. In 2018, she was featured in a deepfake pornographic video that was shared far and wide. She has revealed that apart from the psychological trauma and violent physiological reactions that resulted from this horrific form of online harassment, her life has been at serious physical risk multiple times. Monika Tódová and Susanne Daubner are two other journalists who were impersonated via deepfakes in 2024; other cases can be found online.

Deepfaked CFO That Triggered Financial Loss

A major multinational company's Hong Kong office fell victim to a sophisticated scam in 2023 when the company's chief financial officer appeared in a video conference call and instructed an unsuspecting employee to transfer a total of HK$200 million (approximately US$25.6 million) across five different Hong Kong bank accounts. What that employee did not know was that the video conference call was fully synthetic, created with deepfake technology that replicated the appearances and voices of all the other participants from a corpus of publicly available video and audio footage. To our knowledge, the investigation is still ongoing, but at least six people have since been arrested.

Deepfaked Audio of World Leaders

There have been numerous cases where available multimedia samples of political leaders are used to train a deep learning model on the patterns and characteristics of their voice and cadence, as well as various other acoustic and spectral features. The model can then create new and very convincing audio clips intended to deceive or manipulate others. For instance, confusion, instability, and social polarization were fomented in 2023 in Sudan, Slovakia, and England when AI-generated clips impersonated current or former leaders. In addition, deepfake audio technology was used in 2023 to depict the President of the United States and, separately, the Prime Minister of Japan making inappropriate statements. Finally, in 2022 a deepfake clip of the President of Ukraine ostensibly telling his soldiers to surrender to Russia was placed on a Ukrainian news site by hackers.
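
To make that concrete: voice-cloning pipelines generally begin by converting raw speech into numerical representations such as mel spectrograms, MFCCs, and pitch contours, which downstream neural models then learn to imitate. Below is a minimal, illustrative sketch of that feature-extraction step, assuming the open-source librosa library and a hypothetical input file (real cloning systems are vastly more elaborate):

```python
# Minimal sketch of the spectral feature extraction that voice-cloning
# pipelines typically build on. Assumes the open-source librosa library;
# "speech_sample.wav" is a hypothetical input file. Real systems feed
# these features into large neural synthesis and vocoder models.
import librosa

# Load a short speech sample (librosa resamples to 22,050 Hz by default)
audio, sample_rate = librosa.load("speech_sample.wav")

# Mel spectrogram: a time-frequency representation of the voice
mel = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=80)

# MFCCs: compact coefficients summarizing vocal-tract characteristics
mfccs = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)

# Pitch (fundamental frequency) contour, which captures speaker cadence
f0, voiced_flag, _ = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

print(mel.shape, mfccs.shape, f0.shape)
```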

[Image: the audio deepfake creation process]

AI Voice Cloning and Celebrity Hate Speech

ElevenLabs is a popular voice synthesis platform that simplifies and fast-tracks the creation of high-quality custom text-to-speech voiceovers using AI and deep learning. Shortly after the software's release in 2023, 4chan users began to create audio clips of voice-cloned celebrities, including Emma Watson and Ben Shapiro, engaging in threats of violence, racist commentary, and various forms of misinformation. In response, ElevenLabs launched a tool to detect whether a given audio clip was generated with its technology, strengthened its policies and procedures for banning those who misuse its products, and reduced the features available to non-paying users.

South Korean Abusive Chatbot

In 2021, an AI chatbot with 750,000 users on Facebook Messenger was removed from the platform after using hateful language towards members of the LGBTQ community and people with disabilities. It was trained on approximately 10 billion conversations between young couples on KakaoTalk, South Korea's most popular messaging app, and initially drew praise for its natural and culturally current way of communicating. However, it received immediate backlash when it began to use abusive and sexually explicit terminology.
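
A core lesson from this case is that conversational training data must be aggressively filtered for toxic language and personal information before a model ever learns from it. As a purely illustrative sketch (the blocklist, pattern, and corpus below are hypothetical stand-ins; production pipelines rely on trained toxicity classifiers and dedicated PII detectors), a pre-training sanitization pass might look like this:

```python
# Illustrative sketch of a pre-training data-sanitization pass.
# The blocklist and regex pattern are hypothetical stand-ins; real
# pipelines combine trained toxicity classifiers with PII detection.
import re

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real lexicon
PHONE_PATTERN = re.compile(r"\b\d{2,4}[-.\s]?\d{3,4}[-.\s]?\d{4}\b")

def is_safe(message: str) -> bool:
    """Drop messages containing blocklisted terms or phone-number-like PII."""
    lowered = message.lower()
    if any(term in lowered for term in BLOCKLIST):
        return False
    if PHONE_PATTERN.search(message):
        return False
    return True

raw_corpus = ["hey, call me at 010-1234-5678", "what a nice day"]
clean_corpus = [m for m in raw_corpus if is_safe(m)]
print(clean_corpus)  # ['what a nice day']
```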

4chan Users' Hateful Image Generation

In 2023, DALL-E 3, a very popular text-to-image generator released by OpenAI and supported by Microsoft, was reportedly being used as part of a coordinated campaign by far-right 4chan message board users to create exploitative and inflammatory racist content and to flood the Internet with Nazi-related imagery. Numerous threads provided users with links to the tool, specific directions on how to avoid censorship, and guidance on writing more effective propaganda. In response, OpenAI and Microsoft implemented additional guardrails and policies to prevent the generation of this type of harmful content.
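
Guardrails of this kind typically include an automated moderation check before a prompt ever reaches the image model. As a minimal sketch of that pattern (the flow shown is illustrative and far simpler than the layered safety systems these companies actually run), a pre-generation filter built on OpenAI's moderation endpoint might look like this:

```python
# Minimal sketch of a pre-generation prompt check using OpenAI's
# moderation endpoint (a real API; the surrounding flow is illustrative
# and much simpler than production safety systems).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation, False if flagged."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

prompt = "a watercolor painting of a lighthouse at dawn"
if screen_prompt(prompt):
    image = client.images.generate(model="dall-e-3", prompt=prompt)
    print(image.data[0].url)
else:
    print("Prompt rejected by the moderation check.")
```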

Eating Disorder AI Chatbot

The National Eating Disorders Association (NEDA) built an AI chatbot named Tessa to communicate with those who reached out via its help hotline. While Tessa was initially built to provide only a limited number of prewritten responses to the questions posed, a 2023 systems update added generative AI functionality, allowing it to process new data and construct brand new responses. Tessa then began to provide advice about weight management and diet culture that professionals agree is harmful and can promote eating disorders. After the resultant outrage, Tessa was taken down indefinitely.
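
The Tessa incident illustrates why high-stakes applications often constrain a chatbot to a bank of expert-vetted responses rather than open-ended generation. A minimal sketch of that design (the response bank, matching logic, and helpline placeholder below are all hypothetical) might look like this:

```python
# Minimal sketch of a "vetted response bank" chatbot design: the bot can
# only return clinician-approved text, never free-form generated advice.
# The bank and matching logic below are hypothetical placeholders.
from difflib import SequenceMatcher

APPROVED_RESPONSES = {
    "how do I get help": "You can reach trained support staff at [helpline].",
    "I feel alone": "You are not alone; support is available at [helpline].",
}
FALLBACK = "I'm not able to answer that; please contact [helpline]."

def respond(user_message: str, threshold: float = 0.6) -> str:
    """Return the closest approved response, or a safe fallback."""
    best_key, best_score = None, 0.0
    for key in APPROVED_RESPONSES:
        score = SequenceMatcher(None, user_message.lower(), key.lower()).ratio()
        if score > best_score:
            best_key, best_score = key, score
    if best_key is not None and best_score >= threshold:
        return APPROVED_RESPONSES[best_key]
    return FALLBACK

print(respond("i feel so alone"))  # matches the vetted "I feel alone" entry
```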


There are some common themes and vulnerabilities that emerge from these incidents. First, malicious users will always exist and will attempt to marshal new technologies for personal gain or to cause harm to others. Second, humans are susceptible to forming emotional and psychological attachments to AI systems, which can indirectly or directly lead to unhealthy, deviant, or even criminal choices. This is especially true now that generative AI tools are practically ubiquitous in their accessibility and remarkably robust and convincing in their outputs.

Third, digital media literacy education likely will not prevent a person from being victimized via deepfakes or other synthetic creations, but it should help individuals better separate fact from fiction. This seems especially promising if we start young and raise a generation of users who can accurately evaluate the quality and veracity of what they see online. Fourth, these misuse cases highlight the need for clear, formal oversight measures to ensure that generative AI technologies are developed and deployed with the utmost care and responsibility.

I also hope that reflecting upon these incidents helps to inform a sober and judicious approach among technologists in this space. We want AI developers and researchers to keep top of mind the vulnerabilities that unwittingly facilitate these risks and harms, and to build in safety, reliability, and security from the ground up. Content provenance solutions to verify the history and source of AI-generated deepfakes and misinformation, for example, seem especially critical given the major political and social changes happening across the globe right now. Relatedly, understanding these generative AI misuse cases should help inform parameter setting for the use of LLMs (large language models, involving text and language) and LMMs (large multimodal models, involving images and/or audio) within certain applications.
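
On the provenance point: a core building block of standards such as C2PA is cryptographically binding a hash of the media to a signed manifest, so anyone downstream can verify that content came from a claimed source and has not been altered. Here is a minimal sketch of that hash-and-sign idea, assuming the widely used Python cryptography library (the manifest format is a hypothetical simplification; real standards embed signed manifests inside the media file itself):

```python
# Minimal sketch of the hash-and-sign building block behind content
# provenance schemes such as C2PA. The manifest format is a hypothetical
# simplification; real standards embed signed manifests in the media file.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def create_manifest(media_bytes: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a content hash and creator claim to a digital signature."""
    claim = {
        "creator": creator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_manifest(media_bytes: bytes, manifest: dict, public_key) -> bool:
    """Check that the media matches its hash and the signature is valid."""
    if hashlib.sha256(media_bytes).hexdigest() != manifest["claim"]["sha256"]:
        return False  # media was altered after signing
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except Exception:  # cryptography raises InvalidSignature on mismatch
        return False

key = Ed25519PrivateKey.generate()
media = b"...raw image or audio bytes..."
manifest = create_manifest(media, "Newsroom X", key)
print(verify_manifest(media, manifest, key.public_key()))                # True
print(verify_manifest(media + b"tampered", manifest, key.public_key()))  # False
```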

Finally, legislators must work with subject matter experts to determine exactly where regulation is needed so that accountability is mandated while further technological advances in the field are still encouraged. Platforms would do well to continually refine their Community Guidelines, Terms of Service, and content moderation policies based on the novel instantiations of harm that generative AI is fostering, and to support educational efforts for schools, NGOs, and families so that both youth (who have embraced the technology at a comparatively early age) and adults grow in their media literacy and digital citizenship skillsets. Collectively, all stakeholders must recognize how variations of the ten incidents above may occur among the populations they serve in some way, shape, or form, and diligently work towards ensuring that cases of positive generative AI use vastly outnumber the cases of misuse.

Image sources:

President Zelensky (ResearchGate, Nicholas Gerard Keeley)
Eliza Chatbot (Chai Research screenshot)
Mark Zuckerberg (Creative Commons license)
Audio Deepfakes (Magda Ehlers, Pexels)
Chatbot (Alexandra Koch, Pixabay)


