The Chatbot Disinfo Inflaming the LA Protests


Zoë Schiffer: Oh, wow.

Leah Feiger: Yeah, exactly. Who has Trump’s ear already. This became widespread. And so, as we were talking about, people went to X’s Grok and they were like, “Grok, what is this?” And what did Grok tell them? No, no. Grok said these were not actually images from the protests in LA. It said they were from Afghanistan.

Zoë Schiffer: Oh. Grok, no.

Leah Feiger: They were like, “There’s no credible support. This is misattribution.” It was really bad. It was really, really bad. And then there was another situation where a couple of other people were sharing these photos with ChatGPT, and ChatGPT was also like, “Yep, this is Afghanistan. This isn’t accurate, etcetera, etcetera.” It’s not great.

Zoë Schiffer: I mean, don’t get me started on this moment coming after a lot of these platforms have systematically dismantled their fact-checking programs, have decided to purposefully let through a lot more content. And then you add chatbots into the mix, which, for all of their uses, and I do think they can be really useful, are incredibly confident. When they do hallucinate, when they do mess up, they do it in a way that is very convincing. You will not see me out here defending Google Search. Absolute trash, nightmare. But it’s a little clearer when that’s going astray, when you’re on some random, non-credible blog, than when Grok tells you with complete confidence that you’re seeing a photo of Afghanistan when you’re not.

Leah Feiger: It’s really concerning. I mean, it’s hallucinating. It’s fully hallucinating, but with the swagger of the drunkest frat boy that you’ve ever unfortunately been cornered by at a party in your life.

Zoë Schiffer: Nightmare. Nightmare. Yeah.

Leah Feiger: They’re like “No, no, no. I am sure. I have never been more sure in my life.”

Zoë Schiffer: Absolutely. I mean, okay, so why do chatbots give these incorrect answers with such confidence? Why aren’t we seeing them just say, “Well, I don’t know, so maybe you should check elsewhere. Here are a few credible places to go look for that answer and that information.”

Leah Feiger: Because they don’t do that. They don’t admit that they don’t know, which is really wild to me. There have actually been a lot of studies about this, and a recent study of AI search tools by the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering instead incorrect or speculative answers.” Really, really, really wild, especially when you consider the fact that there were so many articles during the election about, “Oh no, sorry, I’m ChatGPT and I can’t weigh in on politics.” You’re like, well, you’re weighing in on a lot now.

Zoë Schiffer: Okay, I think we should pause there on that very horrifying note, and we’ll be right back. Welcome back to Uncanny Valley. I’m joined today by Leah Feiger, Senior Politics Editor at WIRED. Okay, so beyond just trying to verify information and footage, there’ve also been a bunch of reports about misleading AI-generated videos. There was a TikTok account that started uploading videos of an alleged National Guard soldier named Bob who’d been deployed to the LA protests, and you could see him saying false and inflammatory things, like the fact that the protesters are “chucking in balloons full of oil,” and one of the videos had close to a million views. So I don’t know, it feels like people have to become a little more adept at identifying this kind of fake footage, but it’s hard in an environment that is inherently contextless, like a post on X or a video on TikTok.


