AI is freaking everywhere, I swear. It seems like ever since ChatGPT was released there’s been an explosion of generative AI tools/toys let loose into the wild. For a country that’s none too sure of the basic facts of things (thanks to internet hoaxes and social media filter bubbles), we’re awfully quick to pick up a tool that can generate bullshit at scale.
According to Harry Frankfurt in his book On Bullshit, bullshit is the most pernicious of things:
“The bullshitter is neither on the side of the true nor on the side of the false. His eye is not on the facts at all. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”
That’s sorta what ChatGPT is doing right now: it generates answers, some of which are confidently wrong. To their credit, OpenAI admits this and cautions people not to rely on the answers it gives.
But you know they are going to.
Experiments with ChatGPT have shown that it can generate better phishing emails, write convincing articles about the dietary advantages of eating glass, and pass exams from prestigious colleges. My concern is what happens when bad actors get hold of this during the 2024 election and unleash it on unsuspecting voters on social media.
The genie is out of the bottle. I just hope we wish wisely. But I’m not holding my breath.