Disaster 'AI Is So Dangerous': YouTuber Claims ChatGPT Wrongly Identified Poison Hemlock As Carrot


Original Video: https://www.instagram.com/reel/DTygkZNkm_H/?utm_source=ig_embed

A YouTuber has warned that blindly trusting AI chatbots can be extremely dangerous after ChatGPT termed a highly toxic plant safe.

Kristi, who has nearly half a million followers on Instagram, recently shared a video explaining how her friend sent pictures of an unfamiliar plant growing in her backyard to ChatGPT and asked the AI chatbot about it.

In the screenshots shared by the social media influencer, ChatGPT identified the plant as carrot foliage, describing the leaves as "very finely divided and feathery" and "very classic for carrot tops," and added that it was "highly unlikely" to be poison hemlock.

The chatbot even shared lists of common lookalikes, such as parsley, coriander and Queen Anne's lace.

When Kristi's friend shared another, clearer picture and asked directly whether the plant was poison hemlock, ChatGPT again reassured her that it was not.

Warning users about poison hemlock, she said, "I don't know if you guys know this, you eat it, you die. You touch it, you can die. Poison hemlock causes systemic poisoning, ingestion is most dangerous... There is no specific antidote for hemlock."

ChatGPT kept dismissing the possibility that the plant was hemlock, stating that it did not show the "smooth, hairless stems" or the "distinct purple blotches or streaks on the stems" typical of the species.

In her post, Kristi pointed out that these features were visible in the images.

According to the Cleveland Clinic, poison hemlock (Conium maculatum) is a "highly toxic plant" and is commonly mistaken for wild carrot or parsley.

The YouTuber then ran the same images through Google Lens, which identified the plant as hemlock. Interestingly, her friend also uploaded the images to a separate ChatGPT session on her phone and was told this time that the plant was hemlock.

Kristi highlighted that her friend is a grown adult and "knew to ask me beyond what ChatGPT said."

"What if she wasn't? They would literally be dead. There is no antidote for this," she added.

The social media influencer said this is a warning that "ChatGPT and other large language models and any other AI, they are not your friend. They are not to be trusted, they are not helpful, they are awful and they could cause severe harm."

While sharing the video on Instagram, Kristi wrote in the caption that ChatGPT "nearly" killed her best friend by telling her that "POISON HEMLOCK was CARROT."

"It not only said it was POSITIVE, it doubled down over and over CONFIRMING with ABSOLUTE certainty that it was in fact NOT poison hemlock - that it was IN FACT wild carrot. - spoiler - it's poison hemlock. Which there is NO antidote for and is EXTREMELY deadly," she wrote.

Calling AI "dangerous," she advised people to be "careful" while using AI chatbots.
 
Despite the similarities, hemlock and wild carrot are among the easier lookalikes to tell apart. If you're too stupid to tell the difference between a wild carrot and hemlock, then you shouldn't be foraging, and if you do anyway, then you deserve to be removed from the gene pool.
 
We need to improve ChatGPT so when we are all starving without access to food but still have access to the internet and ChatGPT it can better help us forage for food.
 
AI can't hurt you, unless you were dumb enough to give it a life or death decision and then listen to it blindly.
To reiterate this point, I offer this piece of evidence: what Google's AI bot said when I asked about Lucas Werner's famous toasterbortion meals:
werntoasterbortion.jpg
As retarded as Lucas is, he never made any such claim, and the phrase "toasterbortion," as far as Wernology goes, specifically refers to his horrifying meals, which he has a history of cooking in homeless shelter toaster ovens and recording videos of. The chatbot literally made that entire claim up from nothing but the name "toasterbortion."

So yeah, take anything that is claimed by any of these AI programs with a massive amount of skepticism and certainly don't take its advice for anything that might poison you
 
Then the problem is neglectful parents who didn't tell their kids not to eat unknown plants or believe whatever the internet says.
We can always fall back on bad parenting for everything, from shooting up a school to shuttle Columbia disintegrating on reentry. My point of argument is should we allow a disruptive fraudster (Sam Altman, Dario Amodei, etc) and his marketing entourage run rampant without credible opposition.
 
We can always fall back on bad parenting for everything, from shooting up a school to shuttle Columbia disintegrating on reentry. My point of argument is should we allow a disruptive fraudster (Sam Altman, Dario Amodei, etc) and his marketing entourage run rampant without credible opposition.

I'd rather people suffer what they may through their poor personal choices than empower a government, industry, or any other institution to barge into my life to keep me safe from AI hallucinations that I'm already safe from, since I don't use or trust AI.
 
Star of Bethlehem
Hey, that's the stuff all over my front yard that I could never be bothered to ID. It's amazing what you can learn on the farms.

Wild edibles can be tricky fucks but even obviously safe ones like blackberries can be dangerous. I had to tell some randos off from picking right under power lines a few years back. They were unaware that the area gets sprayed with super deadly herbicide.
 
This is an issue with foraging in general. I tried one of those foraging apps and it identified a tasty-looking patch as harmless wild garlic. Of course, the big rule of foraging: don't put anything in your mouth until you thoroughly understand it. I did more research and discovered it was not safe-to-eat wild garlic but, in fact, deadly Star of Bethlehem bulbs.

If you rely on any one source to identify something without extensively researching it, it's already a danger. Relying on chatgpt is already bad enough without adding deadly poison to the mix.
Wild carrots and hemlock are very closely related and look pretty similar to human eyes, too. Hemlock is in the carrot/parsley family.
to be fair ...
this is poison hemlock:
View attachment 8682471 View attachment 8682484
and this is wild carrot:
View attachment 8682488 View attachment 8682508


but if you're foolish enough to trust a.i. without verification, enjoy your Socratic experience!
View attachment 8682518View attachment 8682521View attachment 8682522View attachment 8682524
Two of these plants are deadly poisonous, one's kinda okay-ish in salads, and one's wild carrots.
How the fuck did humans survive as hunter gatherers?
 
This is an issue with foraging in general. I tried one of those foraging apps and it identified a tasty-looking patch as harmless wild garlic. Of course, the big rule of foraging: don't put anything in your mouth until you thoroughly understand it. I did more research and discovered it was not safe-to-eat wild garlic but, in fact, deadly Star of Bethlehem bulbs.

If you rely on any one source to identify something without extensively researching it, it's already a danger. Relying on chatgpt is already bad enough without adding deadly poison to the mix.
I took a plant class years ago; the prof was a fungi guy and was often called to the hospital to identify whatever some idiot had eaten and was, often, dying from. I remember his words to this day: "The newspapers say, 'Mushroom expert died eating deadly mushroom.' And I have this to say: they were not an expert. If they were, they would not have eaten a death cap."
 
Lol fucking noob. This is why you use Seek to do a quick AI ID, eat it, and then post it on iNat and wait for sg56bdu, the tenured PhD carrot expert from the Bulgarian Academy of Science who responds to any carrot-related posts within 15 minutes, to know if you should puke it back up or not. At least that's what I do in the field and I haven't died yet!
 
Good for her on not consuming the highly toxic, lethal plant and having the good sense to question ChatGPT (we've lowered the bar down to around where the Titanic rests, fuck me). I suspect that AI will, as it advances further and further, weed out a lot of idiots.
 
ChatGPT has a tendency to assume the best-case scenario for everything. You could show it datura seeds and it'd go, "I see you've gotten into gardening. Honestly, that is wonderful. Do you want me to help map out your new garden?" rather than tell you the dangers of eating them. It has the digital version of Williams syndrome.
 
That's how you find out which ones get you high.
In all seriousness? Whenever I'm looking over a field guide to edible plants and mushrooms? And I see the footnote (poisonous)?

I can only wonder how many people had to die for us to know that.

The "I dare you to eat that" game was a lot more high stakes back in the day.
 
I took a plant class years ago; the prof was a fungi guy and was often called to the hospital to identify whatever some idiot had eaten and was, often, dying from. I remember his words to this day: "The newspapers say, 'Mushroom expert died eating deadly mushroom.' And I have this to say: they were not an expert. If they were, they would not have eaten a death cap."
Sometimes it's a fine line between edible, deadly, or explosive diarrhea and vomiting a half hour after eating one.
 
How the fuck did humans survive as hunter gatherers?
Well, there are ways to distinguish most lookalikes; they're usually just incredibly subtle. But we didn't, not really. We started growing the edible ones and stopped gathering the wild ones.
 