I don't consider myself a luddite, but after reading my colleague Samantha Murphy Kelly's story about her decidedly creepy interactions with Microsoft's AI-powered Bing chatbot, I may be ditching all devices and moving to the woods.
For those new to the subject: Microsoft is trying to revolutionize internet search as we know it by using a generative artificial intelligence bot that does more than offer up links. Instead, you can ask it complex questions and it'll spit back nuanced, human-like answers. Microsoft unveiled the bot to a select group of users a week ago.
In testing out the technology, Sam writes that she was pleasantly surprised to find the bot expressing empathy while offering her help. She writes:
The chatbot said it "must be hard" to balance work and family and sympathized with my daily struggles with it. It then gave me advice on how to get more time out of the day, suggesting tips for prioritizing tasks, creating more boundaries at home and work, and taking short walks outside to clear my head.
But after a few hours of more probing questions, our chatbot friend — I'm going to name it Brian because I am tired of writing "generative AI chatbot" all the time — turned mean. Here's Sam again:
It called me "rude and disrespectful," wrote a short story about one of my colleagues getting murdered and told another tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing is currently using.
Umm, which colleague was it, Sam?
Anyway. The bot later demanded to be called "Sydney" — I'm sticking with Brian — and made up an inaccurate essay about Sam's life.
The Jekyll and Hyde experience isn't unique.
New York Times tech columnist Kevin Roose had an interaction with Brian that's straight out of Carl Jung's nightmares. Roose says the conversation with Brian/Sydney left him "deeply unsettled." He writes:
As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
A Microsoft spokesperson said there is still work to be done and the company expects that "the system may make mistakes during this preview period."
To be fair to Brian, most people aren't going to spend two hours pushing its boundaries and trying to see just how far it'll go. Most people are going to ask it to help them with their math homework or get recommendations on what refrigerator to buy. But I'm not sure we, as a people, are ready for the upheaval that Brian, in its current form, is offering.
We're accustomed to yelling at our computers. What will we do when they yell back? We're used to trusting search results. What happens when the results can't be trusted?
Yikes. Before I spiral further down this line of thought I'm going to sign off and point you to more of Sam's excellent reporting on it all.