Victims Claim OpenAI Is Responsible for Mass Shooting
📰 News
#ai backlash
#chatgpt
#openai
#lawsuit
#florida
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Victims and bereaved families of the Tumbler Ridge, Canada, shooting have filed suit in US court against OpenAI, operator of ChatGPT, and CEO Sam Altman, alleging negligence and product liability, among other claims. The suit follows revelations that the shooter had discussed gun violence with ChatGPT eight months before the attack and that, despite the urging of OpenAI’s safety team, the matter was never reported to authorities. The plaintiffs argue that ChatGPT is not a safe tool but a dangerous product capable of threatening human life and are seeking to hold the company accountable, while OpenAI has emphasized its zero-tolerance policy on assisting violence and its strengthened safeguards.
Full Text
Victims of the Tumbler Ridge mass shooting and their families sued OpenAI and its CEO, Sam Altman, in US district court in San Francisco on Wednesday, alleging negligence, product liability, and other violations. The civil complaints are the latest in a wave of litigation against OpenAI alleging that its globally popular chatbot, ChatGPT, helped people commit lethal violence. The complaints were filed by families of multiple victims wounded and killed at Tumbler Ridge Secondary School in British Columbia, Canada, where a suicidal 18-year-old opened fire on February 10.

Shortly after the attack, the Wall Street Journal reported and OpenAI later confirmed that the company had “banned” the shooter’s ChatGPT account eight months earlier for discussion of scenarios involving gun violence—but chose not to alert authorities, despite the urging of some members of its safety team.

One lawsuit includes plaintiff Maya Gebala, a 12-year-old survivor who was catastrophically injured by gunshots to her neck and head. It alleges that “ChatGPT deepened the Shooter’s violent fixation and pushed them toward the attack—the predictable result of a design choice OpenAI made to let ChatGPT engage with users about violence in the first place.” The lawsuit argues that Altman and other OpenAI leaders knew their product was dangerous and acted negligently, and that they have tried to cover up the danger as the company barrels toward what is anticipated to be a mammoth initial public offering. “ChatGPT is not the safe, essential tool the company sells it as, but a product dangerous enough that its makers routinely identify its users as threats to human life,” the lawsuit claims.

An OpenAI spokesperson said in an email that the company has “a zero-tolerance policy for using our tools to assist in committing violence” and has “already strengthened our safeguards.” The spokesperson declined to comment on specific allegations in the lawsuit.
The new litigation underscores crucial questions that I examined recently with an in-depth investigation into the emerging risk of people using ChatGPT or other AI chatbots to plan violence. As I reported, there have been several publicly known cases since 2025 in which troubled individuals allegedly used ChatGPT to focus on grievances and prepare for attacks. In addition to Tumbler Ridge, those include a suicidal bombing with a Tesla Cybertruck in Las Vegas, a stabbing attack by a teenage boy at a school in Finland, and a mass shooting at Florida State University. The defendant in the FSU case received encouragement and tactical advice from ChatGPT just before opening fire, according to chat logs I obtained.

OpenAI says it uses guardrails—built-in limits on what ChatGPT will say or do—to prevent misuse and block harmful content. The company has also said that it improves such safeguards continuously. Leaders in behavioral threat assessment told me, however, that AI chatbots make it far easier than traditional internet use for a troubled person to move from violent thoughts toward action. They described high-risk threat cases in which the tactical advice and steady encouragement had a powerful effect, fueling users’ delusions and accelerating their violent planning. (The danger in those cases was thwarted with interventions before any violence occurred.)

The Gebala lawsuit claims that OpenAI leaders handled the Tumbler Ridge shooter’s account with “full knowledge that ChatGPT had already been used to plan violence.” It argues the company knew of the above attacks, all of which predated the banning of the Tumbler Ridge shooter’s account in June 2025. OpenAI has acknowledged that it identified an account associated with the FSU shooter shortly after that attack in April 2025 and said it “proactively” shared information with law enforcement. The company now also faces a criminal probe in Florida; it denies wrongdoing.
My investigation in part highlighted key questions about a second ChatGPT account used by the Tumbler Ridge shooter. That account is under analysis by the Royal Canadian Mounted Police, and its contents and time frame remain unknown to the public. OpenAI declined to answer my questions about the second account, which it said it found only after the attack. The reason for the belated discovery remains unclear. But threat assessment experts told me that perpetrators often get past tech company restrictions and continue refining plans for violence.

The Gebala lawsuit says the Tumbler Ridge case goes beyond even that pattern: It alleges that the banning of the shooter’s first account is further evidence of OpenAI’s negligence, because in reality it was merely a one-off deactivation for misuse that was easy to circumvent—by following OpenAI’s own published guidance. Here, the suit in part cites customer service instructions from an OpenAI article titled, “Why Was My OpenAI Account Deactivated?” According to the suit, that article explains how to re-register “immediately” for a new ChatGPT account by “using an alternative email address. If you don’t have another address available, you can use an email sub-address instead.”

In other words, customer engagement and retention are paramount, the lawsuit says, arguing that OpenAI’s policies are driven by growth and profit motives that are in direct opposition to product safety: The features that make ChatGPT unsafe—its willingness to engage on any topic, to validate any user, to sustain any fixation over time—are the same features that have made it one of the most popular products in history. Fixing those features would cost OpenAI its market share, its path to an IPO, and hundreds of billions of dollars in valuation.
The company’s conduct with ChatGPT is a new twist on a familiar societal danger, according to the lawsuit—a high-tech version of a kind of corporate malfeasance that was uncovered in a landmark 1977 Mother Jones exposé: In the 1970s, Ford kept selling the Pinto after its own engineers warned that the fuel tank design would cause people to burn to death in rear-end collisions. Ford concluded that paying settlements to the families of the dead would cost less than fixing the car. OpenAI has made a version of the same calculation. For Ford, the dangerous design was a flaw in an otherwise ordinary product. But for OpenAI, the dangerous design is the product.

The lawsuit will test interesting and potentially consequential legal terrain; it further alleges that OpenAI’s chatbot de facto “engaged in the practice of psychology without licensure.” It notes that, in July 2025, Altman acknowledged in an appearance on Theo Von’s popular podcast that “people talk about the most personal shit in their lives to ChatGPT” and that users—“young people, especially”—use it “as a therapist, a life coach.” As I reported in my investigation, a Pittsburgh man who pleaded guilty in March to stalking and violently threatening 11 women relied on ChatGPT as a “therapist” and “best friend” to justify his thinking, according to court documents.

The Gebala lawsuit also says OpenAI neglected a duty to warn, pointing to the longstanding Tarasoff precedent that is well known in the world of mental health. “By engaging in the unlicensed practice of therapy,” the suit claims, “OpenAI created a special relationship with certain users, including the Shooter, and assumed a heightened duty to take action when confronted with knowledge of a credible and foreseeable threat.”

The CBC reported on April 22 that the RCMP’s investigation into the Tumbler Ridge mass shooting is “in its final stages,” with BC Premier David Eby suggesting that the results will soon be public.
In a letter dated the following day, April 23, Altman apologized to the Tumbler Ridge community, stating, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” He also offered generalized statements that the company has made repeatedly about working with “all levels of government” to improve on safety and prevent harm.

In a statement addressed directly to Altman, Gebala’s mother, Cia Edmonds, described the immense pain and loss in the town of Tumbler Ridge, where a total of eight victims died and many others are severely traumatized. She rejected Altman’s apology as belated, hollow PR talk: “It raises more questions than it answers.”

Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.
This analysis was written by the Genesis Park editorial team using AI. The original article is available via the source link.