Customer conversations with chatbots can include contact information and personal details that make it easier for scammers to launch phishing attacks and commit fraud.
Since Sears is still a trusted name but largely out of the public eye, security researcher Jeremiah Fowler was surprised and alarmed last month when he found three publicly exposed databases containing massive troves of chat logs, audio files, and text transcriptions of audio that contained personal details about Sears Home Services customers.
The Home Services division claims to be the US’s “largest appliance repair service provider” and reports that it performs more than seven million repairs each year. The exposed Sears databases uncovered by Fowler, which have since been secured, contained 3.7 million chat logs, plus 1.4 million audio files and plain text transcripts from 2024 to this year. Fowler found that one CSV file alone contained 54,359 complete chat logs.

Conversations Fowler saw included the chatbot introducing itself as “Samantha, an AI virtual voice agent for Sears Home Services,” with the logs also including the name of the company’s AI technology, “kAIros.” The cache of data contained chats in both English and Spanish and included personal information about Sears customers, such as names, phone numbers, home addresses, appliances owned, and information on delivery appointments and repairs.

“The thing to remember is that it is real data of real people,” says Fowler, a researcher with Black Hills Information Security. While companies may be able to save money deploying AI, he emphasizes that it is crucial they “don't take any shortcuts when it comes to protecting that data, securing that data. At the bare minimum, these files should have been password protected and encrypted.”

After finding the publicly accessible databases at the start of February, Fowler emailed staff at Transformco, the company that owns Sears and Sears Home Services, and the databases were quickly secured, he says. It is unclear how long the databases were exposed online and whether anyone other than Fowler accessed them during that time. Transformco did not respond to multiple requests for comment from WIRED about the information being available to anyone on the web.

Fowler says that when he disclosed the finding to Transformco, he received a reply from someone who claimed that they were connecting him directly with a Samantha AI Chatbot manager.
He says that individual never replied to him, though, even after a follow-up message.

Any exposed customer data is problematic, but Fowler was particularly concerned about the Sears data for two reasons. First, such information would be extremely useful in phishing attacks, because it includes details about customers’ contact information and home lives, including their appliances, which could be exploited for warranty scams and other targeting.

Second, a surprising number of the audio calls captured hours of ambient audio after customers apparently thought a call had ended. Some of the recordings were up to four hours long. It is unclear why customers left the calls running once they were done speaking with the Sears AI agent, but these extended recordings may have captured private conversations and sensitive details that customers believed they were discussing privately as they went about their days. “You could hear the TV playing, you could hear people having conversations, and this recorded all of it,” Fowler says.

The files also show people getting frustrated with glitchy chatbots, which sometimes failed to answer questions or pushed people toward human customer service agents. Just two minutes into one 76-minute audio call observed by Fowler, the person trying to get help from the company asks to speak to a human. The AI voice bot responds: “I am fully equipped to address your needs efficiently and can resolve your issue right away. Whereas connecting with a live agent may involve a short wait.” Just a few minutes later, the bot struggles to complete the task it is asked about: “I am facing some errors while assisting you with your plan. Can I transfer your call to our live agent who will help with your request?”

In one text transcript, which begins near 11 am and ends at 1:30 pm, a person speaking with the Samantha “AI virtual voice” grows increasingly frustrated with the replies.
“Where's my technician?” they repeat 28 times in a row. After getting more responses they were unhappy with, the transcript shows, the person repeats: “You’re a computer. You’re a computer. You’re a computer.”

The exposure comes as companies continue to scramble to integrate generative AI into their technology stacks, and it highlights the privacy, trust, and reputational risks of using bots to interact directly with customers.

Carissa Véliz, an author and associate professor at the University of Oxford, says that in some circumstances people may feel safer talking to a machine. “The machine, after all, will not want to rob your house,” she points out. But she adds that people often have little choice about trusting companies with their sensitive information.

“They should also give people more choices: the choice to talk with a human being if they prefer it and the choice to not have their conversation recorded,” Véliz says. “In the long run you want your customers to be safe and feel comfortable, not alienated and exploited.”