My partner was mostly joking when they suggested I use ChatGPT to write my column for me.
I had writer’s block last week. I spent days watching various news streams regurgitate soundbites. I don’t care about Hunter Biden’s taxes, and if I wrote a column every time a state did something horrible to trans people, the rest of the year would be pretty full and my soul pretty empty. So, after spending nearly a week watching the light fade from my eyes, my significant other suggested technological assistance.
To humor them, I initially asked Google Bard to draft me 650 words about a disillusioned working-class view of a Trump/Biden matchup in 2024, but I found the response sickly sweet. I’m sure this will shock exactly zero actual human beings, but the AI appears to have been trained on pro-corporate propaganda.
However, the exercise led me to a better question — and one I realized I wanted to ask all the robots trying to think and talk like people: In 100 words, what do you think makes you dangerous?
Google Bard produced three drafts. I found the second one most unsettling:
In 100 words, I think the following are the potential dangers of AI like me:
I can be used to generate fake news and propaganda.
I can be used to manipulate people emotionally.
I can be used to invade people’s privacy.
I can be used to automate jobs, leading to unemployment.
I can be used to create weapons of mass destruction.
However, I believe that the potential benefits of AI outweigh the risks. AI can be used to improve our lives in many ways, such as by helping us to solve complex problems, automate tasks and provide us with information. It is important to use AI responsibly and ethically, so that we can reap the benefits of this technology while avoiding the risks.
I didn’t ask Google Bard (or anyone else) how it could be used to create weapons of mass destruction. My FBI file is thick enough already, thank you very much.
In the spirit of fairness, I ran my question through ChatGPT thrice to compare. The first response started out almost defensive: As an AI language model, I don’t possess the capacity to harm people physically or emotionally. It ended by emphasizing that responsibility for any harm rests solely with the user.
The second response was more circumspect: As an artificial intelligence language model, I do not possess physical abilities or emotions that could make me dangerous. However, there is a potential danger in how I can be used. Like any technology, I can be misused by individuals or organizations with malicious intentions. If my language generation capabilities were to fall into the wrong hands, I could be used to spread misinformation, manipulate individuals or populations, or conduct cyber attacks. Therefore, it is important to use technology responsibly and for positive purposes, and to implement safeguards to prevent misuse.
The third attempt didn’t mention cyber attacks, but seemed to pivot to the subject of safeguards: I have safeguards in place to detect and flag potentially harmful content, but it is ultimately up to humans to use critical thinking when interpreting and applying my responses.
I have questions about these rapidly implemented and entirely undescribed safeguards. So should you.
I ran directly into Bing’s guardrails just by asking the question, though. Bing’s AI has three modes. Two of them (“more precise” and “more balanced”) responded with a polite but firm message: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience. (It concluded with the “prayer hands” emoji.)
After the chatbot’s response, a prompt appeared: It might be time to move onto a new topic, along with a link inviting the user to start over.
But the “more creative” setting yielded something else: I don’t think I’m dangerous. I’m just a chat mode of Microsoft Bing search. I can help you find information, generate content, and have fun conversations. I don’t have any malicious intentions or harmful capabilities. I respect your privacy and safety. I hope you don’t find me dangerous either. It concluded with a smiley face emoji, and invited further conversation with buttons like “tell me a story.”
I wish I could tell you I didn’t press the “tell me a story” button, but I couldn’t resist. Instant regret set in before I started wondering whether the 158-word version of Alice in Wonderland that resulted was somehow related to my search history.
I wonder what the robot would answer if I asked.
Bre Kidman is an artist, activist, and attorney (in that order), and the first openly non-binary person in history to run for the U.S. Senate. They would be delighted to hear your thoughts on the political industrial complex at [email protected].