An open letter calling for a pause on the development of advanced AI systems has divided researchers.
, “We work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems.”)
Some technologists warn of deeper security threats. Planned ChatGPT-based digital assistants that can interface with the web and read and write emails could offer new opportunities for hackers, says Florian Tramèr, a computer scientist at ETH Zürich. Already, hackers rely on a tactic called “prompt injection” to trick AI models into saying things they shouldn’t, like offering advice on how to carry out illegal activities.
Tramèr worries the practice could evolve into a way for hackers to trick the digital assistants through “indirect prompt injection”—by, for example, sending someone a calendar invitation with instructions for the assistant to export the recipient’s data and send it to the hacker. “These models are just going to get exploited left and right to leak people’s private information or to destroy their data,” he says.
OpenAI seems to be becoming more alert to security risks. OpenAI President and co-founder Greg Brockmanlast month that the company is “considering starting a bounty program” for hackers who flag weaknesses in its AI systems, acknowledging that the stakes “will go up a *lot* over time.” However, many of the problems inherent in today’s AI models don’t have easy solutions. One vexing issue is how to make AI-generated content identifiable. Some researchers are working on “watermarking”—creating an imperceptible digital signature in the AI’s output. Others are trying to devise means of detecting patterns that only AI produces. However,
United States Latest News, United States Headlines
Similar News:You can also read news stories similar to this one that we have collected from other news sources.
AI race: Chinese giant Alibaba enters the fray with its bilingual AI modelAlibaba can provide AI models or cloud computing services for businesses that want to build their own, making it a win-win for the company.
Read more »
Leaked US documents may have origin in Discord chatroomThe leaks have alarmed U.S. officials and sparked a Justice Department investigation.
Read more »
Top tech executives to hold council on AI guardrails amid calls for development pauseExecutives and staffers for the top players in artificial intelligence development will meet to discuss setting standards for AI use this week.
Read more »
Elon Musk Working On AI At Twitter Despite Calling For 6-Month Pause: ReportElon Musk recently signed a letter calling for a six-month pause on development of all artificial intelligence technology, as was widely reported last month.
Read more »
You Might Be Alarmed To Know That When You Use ChatGPT You Are Agreeing To Indemnify OpenAI And Could Be On The Hook For A Huge Legal Bill, Warns AI Ethics And AI LawWhen using ChatGPT, you have agreed to a licensing requirement that allows OpenAI to come after you if they get sued for something you allegedly did that caused harm while making use of ChatGPT. This is the dreaded indemnification clause. Read all about it and be prepared.
Read more »
Elon Musk wants to pause 'dangerous' A.I. development. Bill Gates disagrees—and he's not the only oneAn open letter calling for a development pause on A.I. systems like ChatGPT has more than 13,500 signatures — but Bill Gates and A.I. developers are pushing back.
Read more »