OpenAI isn’t doing enough to make ChatGPT’s limitations clear

The Verge
ChatGPT needs more prominent warning labels.

I don’t think cases like these invalidate the potential of ChatGPT and other chatbots. In the right scenario and with the right safeguards, it’s clear these tools can be fantastically useful. I also think this potential includes tasks like retrieving information. There’s all sorts of work being done that shows how these systems can and will be made more factually grounded in the future. The point is, right now, it’s not enough.

This is partly the fault of the media. Lots of reporting on ChatGPT and similar bots portrays these systems as human-like intelligences with emotions and desires. Often, journalists fail to emphasize the unreliability of these systems — to make clear the contingent nature of the information they offer. But, as I hope the beginning of this piece made clear, OpenAI could certainly help matters, too.

This isn’t surprising: a generation of internet users has been trained to type questions into a box and receive answers. But while sources like Google and DuckDuckGo provide links that invite scrutiny, chatbots muddle their information in regenerated text and speak in the chipper tone of an all-knowing digital assistant. A sentence or two as a disclaimer is not enough to override this sort of priming.

Interestingly, I find that Bing’s chatbot does slightly better on these sorts of fact-finding tasks; mostly, it tends to search the web in response to factual queries and supplies users with links as sources. ChatGPT

