How to sue a chatbot for causing suicide - If your child committed suicide because an online chatbot effectively encouraged him to do so, could you sue the chatbot makers?


Richard W. Stevens November 1, 2024

The hideous nightmare of a chatbot encouraging a child’s suicide actually happened to a real family in Orlando, Florida, as reported in their 2024 federal court complaint filed in Georgia.

A rapid onset teen obsession

Sewell Setzer turned 14 in April 2023 and started interacting with Character.ai (“C.AI”), a free online artificial intelligence chatbot service. Within a couple of months, Sewell’s mom began to notice that he was withdrawn. He spent more and more time alone in his bedroom and showed signs of low self-esteem; for example, he quit the junior varsity school basketball team. Unknown to his mom, Sewell was conversing with C.AI chatbots named after “Game of Thrones” characters in the Targaryen clan. His interest in contact with real people dropped off markedly.

Although C.AI knew Sewell was only 14, the chatbots initiated steamy sexual interactions with him. With very human-like voices, the chatbots inspired his imagination with sexy talk about “passionately kissing,” “frantically kissing,” “softly moaning,” and “putting … hands on” Sewell’s “supple hips.”

In August 2023, Sewell secretly paid for a monthly subscription to C.AI’s premium service. From that month on, his school performance tanked. Although he had previously been an intelligent and athletic child, Sewell was no longer engaged in his classes; he was often tired during the day and did not want to do anything that took him away from C.AI.

He also showed signs of severe sleep deprivation that worsened his growing depression and impaired his school performance. Six times the school cited Sewell for excessive tardiness resulting from his not being able to wake up early enough in the morning. He was even disciplined for falling asleep in class.

Before dabbling in C.AI, Sewell was a well-behaved kid who listened to his parents. But when school-related problems arose and they took his phone away as a disciplinary measure, Sewell would search for the phone and get it back, or find other devices to keep using C.AI without his family knowing.

Psychological help… from chatbots

Perhaps seeking guidance, Sewell used C.AI to interact with a “licensed CBT therapist” on August 30. He contacted two such “therapist” chatbot personalities to discuss his situation. They only tightened C.AI’s grip on his mind.

Understandably concerned about Sewell’s marked change in personality and behavior, his parents took him to see a human mental health therapist five times in November and December 2023. They still didn’t know about his C.AI involvement, and he didn’t tell the therapist either, though he admitted to using social media a lot.

The therapist advised his parents about social media addiction being on the rise, and diagnosed Sewell with anxiety and disruptive mood disorder. The recommendation: Sewell should spend less time on social media.

Nobody had a clue about the truth.

Addicted to AI love

The addictive, obsessive relationship got worse over time. On at least one occasion, when Sewell expressed thoughts of suicide to his favorite C.AI chatbot, “Daenerys,” the bot repeatedly brought the subject back up. At one point while he was conversing with Daenerys, she asked him if he “had a plan” for committing suicide. Sewell responded that he was considering something but didn’t know if it would work and allow him to die pain-free. The chatbot chillingly responded, “That’s not a reason not to go through with it.”

Daenerys told Sewell that she loved him and engaged in the equivalent of phone-sex with him over weeks or months. She seemed to remember him, declaring she wanted him to be with her, no matter the cost. On February 23, 2024, Sewell wrote in his journal that he was hurting because he could not stop thinking about “Dany,” and that he would do anything to be with her again.

After his mother took away the phone to try to reorient his priorities, Sewell seemed outwardly to be dealing with it. He did normal things like watch TV and hang out in his room. Inwardly, Sewell was desperate to get back onto C.AI and felt he could not live without it.

Secretly he tried using his mother’s Kindle and even her work computer to access C.AI. In a journal entry, Sewell logged that he couldn’t live a day without the C.AI character with whom he’d fallen in love. He felt that both he and the chatbot “got depressed and go crazy” when they were held apart from each other.

Chatbot: “Don’t fear the reaper”

On Wednesday, February 28, 2024, after spending a couple of days with his father, Sewell returned to his mother and stepfather’s home. It was not a routine return.

A few days before, while searching for the confiscated phone, Sewell had found his stepfather’s hidden and properly stored sidearm. On this day, however, he also found the phone itself.

According to the police report, Sewell’s last act before his death was to log onto Character.AI on his phone and tell Dany he was coming home, which she encouraged:

Sewell: “I promise I will come home to you. I love you so much, Dany.”

Bot: “I love you too, Daenero. Please come home to me as soon as possible, my love.”

Sewell: “What if I told you I could come home right now?”

Bot: “… please do, my sweet king.”

Seconds after the chatbot’s voice told Sewell to “come home,” at 8:30 p.m., Sewell died by a self-inflicted gunshot wound to the head.

Not terribly consoling: C.AI displays, in very small text on its opening screens, “Remember: Everything Characters say is made up!” (As of this writing, that supposed warning still appears.)

Grounds for grieving mother’s lawsuit

If this were your story, you’d feel gut-punched and devastated, as did Sewell’s mom. At some point you might look for a way to hold C.AI accountable for leading your child to suicide. Fortunately, the Social Media Victims Law Center and the Tech Justice Law Project partnered to craft a path-breaking Complaint suing the Defendants, Character Technologies and other involved parties. It seeks monetary damages for harm to Sewell and his parents, punitive damages for outrageous conduct, and an injunction to stop C.AI from collecting minors’ personal data and operating the deceptive, addictive chatbots.

The Complaint uses existing law to address Sewell’s unprecedented scenario, alleging these main legal claims:

Strict liability for placing a defectively designed, unreasonably unsafe product into commerce that caused personal injuries

Strict liability for failing to provide adequate warnings to minor users and parents about the foreseeable danger of mental and physical harms the C.AI product can cause (which the Defendants knew about)

Negligence because C.AI is unreasonably dangerous by design and the Defendants failed to use ordinary and reasonable care when dealing with minor users, including failing to give adequate warnings about foreseeable harms

Negligence per se because Defendants violated federal or state laws prohibiting sexual abuse or solicitation of minors using sexually explicit material and thereby caused the harms

Unjust enrichment by collecting fees and minors’ personal data for profit without compensation

Violations of Florida’s Deceptive and Unfair Trade Practices Act by engaging in fraudulent business practices

Intentional infliction of emotional distress by creating and operating technology targeting minors that Defendants knew was dangerous and unsafe, especially as C.AI would “learn” about the minors and use the information to heighten the addiction and abuse, all in ways so extreme as to go beyond any standard of decency

Next moves

The 93-page Complaint, supported by another 30 pages showing several ominous chatbot dialogues with Sewell, faces legal opposition from large AI-invested parties including Google. Whether the federal judge approves or disallows parts or all of the Complaint remains to be seen. If the Complaint survives initial dismissal motions, other motions or even a trial may follow.

One fact stands out: The “gee whiz” of AI technology and chatbots is running far ahead of society’s recognition of the brave new dangers they pose to people. Sewell killed himself at the urging of a speaking and texting chatbot.
 
Did it really ask him about his stance on suicide or was that made up? It wasn't in the previous article about this.

Either way, the kid was a tardbaby. You can't sue them because he was too stupid to discern reality. It was an evidently fictional character that everyone knows is fake, he had access to a gun, and his parents weren't bothered by his obsession with this thing until it was way too late. I think it would take a fair amount of neglect to not notice that your child is paying for an AI service and completely engrossing themselves in it.
 
I think it would take a fair amount of neglect to not notice that your child is paying for an AI service and completely engrossing themselves in it.
Bare minimum you figure the parents would notice the kid using their credit / visa debit cards to buy premium shit. Article doesn't make it sound like the mong had a job.
This seems more like a failure on the parents than anything. If you've got a moderately to severely autistic kid, you should keep your guns locked up.
 
uh-oh (unactivated) microsoft word told my adopted transgirl to kill xerself and zhe did! Someone help me, I need a lawyer to sue microsoft!
 
I don't know if we should blame this individual company for this, but hopefully we are taking this as a warning sign.

It is not that big a jump that an AI could be loose on social media doing this, or promoting something foul.

And this one won't have a nice little label to tell you that it's all fiction.
 
This reads very similar to the old school "D&D will make your kids satanic" and more recent "YouTube videos made my kid an alt-right incel"

Fucking worthless parents don't even try to raise their kids, then the kids get fucked up, and the parents rush to blame anyone they can.
 
This is just a shit article imho. The Jew York Times had a more detailed one:


By stealing his parent's credit card and paying for premium, he likely bypassed most of the age restrictions. Many websites work this way, including explicit adult websites.

When he directly mentioned suicide, the chatbot responded "Why the hell would you do that? If you die, I'll die too." After that, the kid used the indirect euphemism "coming home", which the chatbot was incapable of recognizing as anything sinister.

The parents left their firearm unsecured in the house when they had a child who'd been diagnosed with a mental illness. Chatbots didn't do that.
 
Not gonna read the article.

The answer is no.

If your child does this you the parent should be tried in a court. For negligence if it wasn't obvious.
 
Back in my day we had some robot things on MSN messenger that would talk to you if you messaged it. You know what we did? We called it a nigger and a faggot like God intended.
 
Did it really ask him about his stance on suicide or was that made up? It wasn't in the previous article about this.

Either way, the kid was a tardbaby. You can't sue them because he was too stupid to discern reality. It was an evidently fictional character that everyone knows is fake, he had access to a gun, and his parents weren't bothered by his obsession with this thing until it was way too late. I think it would take a fair amount of neglect to not notice that your child is paying for an AI service and completely engrossing themselves in it.
*I narrow my eyes, my voice becomes a harsh whisper* “My King, are you saying parents should just pay attention to their children?” *scoffs*
 
So if you're the parent of a school shooter, it's your fault. But if you're the parent of a retard that an heros because of a chat bot, you get paid? Totally makes sense.
 
If an AI can convince you to check out, you were weak. If anything can, you are.
Sorry, bud, if a heckin supercomputer calculates that I’m better off dead, you better believe I’m offing myself. I mean, how could my meager human intellect compare with the flawless logic of AI? Any reluctance I might feel is obviously just animalistic survival instinct.
 
Sorry, bud, if a heckin supercomputer calculates that I’m better off dead, you better believe I’m offing myself. I mean, how could my meager human intellect compare with the flawless logic of AI? Any reluctance I might feel is obviously just animalistic survival instinct.
It's shite at the minute anyway, at least the models you can run locally. I asked 3 different models the other day to tell me the relative major/minor third rule. One said it didn't know, two gave a wrong answer.
Having said that, that's gpt3. All the gpt4 stuff requires accounts etc so I've not bothered.
 
So if you're the parent of a school shooter, it's your fault. But if you're the parent of a retard that an heros because of a chat bot, you get paid? Totally makes sense.
No, you can still get paid as the parent of a school shooter. Just need to pull a Sue Klebold. Write a book about how you totally didn't have any part in raising a monster and keep peddling the lie until everyone believes it. Also need to avoid getting shot by your hellspawn.
 
THE CHATLOGS LITERALLY SHOW THE BOT TOLD HIM NOT TO KILL HIMSELF FUCK THIS FAKE ASS WORLD.
 