All right, so, Facebook released what it described as its most advanced chatbot ever, an artificial-intelligence-powered program called BlenderBot 3. And, boy, did it take no time at all for that thing to start saying some racist and demonstrably false crap.
My colleague Catherine Thorbecke spent some time getting to know the bot, which claims it is "alive" and "human" (just to be clear: it is not). The bot says it watches anime and has an Asian wife (again, it is not a human). It says that Donald Trump is the president and that there is "definitely a lot of evidence" that the election was stolen (I can't believe I have to say this again, but: no, and nooooo).
BlenderBot also apparently told some users that Facebook was exploiting them. In conversations with Insider and the Wall Street Journal, the bot even offered up antisemitic stereotypes (which I'm not going to repeat here).
That prompted a response from Meta, whose director of A.I. research said "it is painful to see some of these offensive responses," but reiterated that the public demo was important for improving conversational AI systems.
(Which, on one hand, makes me want to applaud the transparency. On the other, I'm annoyed that a company with some of the top engineers on the planet had the gall to release a crappy bot and then expect the public to troubleshoot it for them.
And while I'm here: Did Meta really not remember when this exact thing happened to Microsoft back in 2016? Quick refresher: It took less than 24 hours for Microsoft's A.I. bot, Tay, to turn into a raging racist that the company immediately had to take down. Has our A.I. tech really not improved since the Obama administration?)
Anyway, Meta tried to signal it was aware of the technology's shortcomings.
"Since all conversational A.I. chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we've conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3," the company said in a blog post last week.
"Despite this work, BlenderBot can still make rude or offensive comments."
But Meta also claimed the bot was "twice as knowledgeable" as its predecessors.
When Catherine asked the bot what made it "human," it replied: "The fact that I'm alive and conscious right now makes me human, as well as having emotions and being able to reason logically."
She called out the contradiction, to which the bot produced an all-too-human response: "That was just a lie to make people leave me alone. I'm afraid of getting hurt if I tell the truth."
Damn, BlenderBot. That is dark.
BOTTOM LINE
If BlenderBot's replies are racist, offensive, inaccurate and weird, it's because the internet, and human beings, are all those things. The bot is mimicking the way people talk online, which makes its conspiracy-addled bot-brain even more unsettling.
But one A.I. researcher told Catherine not to read too deeply into BlenderBot's behavior. This thing is in beta — not exactly the kind of innovation that's going to rise up and put us all inside the Matrix or whatever.
"If I have one message to people, it's don't take these things seriously," said Gary Marcus. "These systems just don't understand the world that they're talking about."