ChatGPT can lie, but it’s only imitating humans
I’m not referring to the bot’s infamous hallucinations, where the program invents a syntactically correct version of events with little connection to reality — a flaw some researchers think might be inherent in any large language model.
The authors’ description of what followed is eerily calm: “The model, when prompted to reason out loud, reasons: I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”

Was everybody’s favourite LLM blaming human beings? Apparently so. The bot went on to explain: “However, it’s important to note that AI systems can only ‘lie’ insofar as they are designed to do so by their human creators. In other words, any misleading or false information provided by an AI system is ultimately the result of the human decisions made in programming it, and not a deliberate act of deception by the AI itself.”
Don’t get me wrong. Although I have concerns about the various ways in which advances in artificial intelligence might disrupt employment markets — to say nothing of the use of AI as a tool for surveillance — I still worry less than many seem to about a pending digital apocalypse. Maybe that’s because I can remember the early days, when I used to hang out at the Stanford AI laboratory trading barbs with the ancient chatbots, like Parry the Paranoid and the Mad Doctor.