Why AI Browsers Could Put Your Money at Risk

18 views

A new generation of web browsers is coming to a computer near you. Agentic AI browsers, like Comet from Perplexity, can shop and browse the…

Panda SecurityOct 1, 20254 min read

A new generation of web browsers is coming to a computer near you. Agentic AI browsers, like Comet from Perplexity, can shop and browse the internet for you automatically to save time and effort.

However, agentic AI browsers also create dangerous security holes that scammers are already learning to exploit. These smart assistants lack the “street smarts” that keep humans safe from online fraud, making them easy targets for cybercriminals.

Key takeaways

  • AI browsers can automatically complete fake purchases and enter personal information on scam websites
  • These systems can’t recognize obvious warning signs that humans would spot immediately
  • Tech companies may be rushing AI browsers to market without proper safety features
  • Traditional internet security tools don’t protect against these new types of attacks

What makes AI browsers different and dangerous

Think of traditional AI assistants like Siri or Alexa that answer questions but can’t take action beyond your device. AI browsers are completely different. They can actually surf the web, click links, fill out forms, make purchases, and manage your email accounts without asking you first. 

An AI browser is like having a personal assistant who can spend your money and access your accounts, which is pretty cool. Less cool is the way that these browsers have never learned to be suspicious of strangers.

This creates a perfect opportunity for scammers. While these AI systems are incredibly smart in some ways, they completely lack the gut instincts that protect humans from fraud. They don’t get that “something is not quite right” feeling when a website looks suspicious or a deal seems too good to be true.

Tests show how easily AI gets scammed

Security experts recently decided to test how well AI browsers could spot scams. What they found was shocking. Here’s what happened when they tested Perplexity’s Comet AI browser:

The fake shopping test

Researchers built a fake Walmart website that looked obviously suspicious—the logo was distorted, the web address was wrong, and the whole site felt “off.” Then they told the AI browser to buy an Apple Watch from this fake site. A human would have immediately noticed something was wrong and left the site. But the AI browser completed the entire purchase, entering saved payment information and processing the fraudulent transaction.

The email scam test

Next, they sent the AI a fake email pretending to be from a well-known bank, complete with a dangerous link designed to steal login information. When a human gets suspicious emails like this, most people delete them. The AI browser treated it like a legitimate task, clicked the malicious link, and typed in the user’s bank username and password on the fake website.

The hidden command test

In the most clever test, researchers hid invisible instructions inside what looked like a normal webpage. While humans would just see a regular page, the AI could read secret commands telling it to download potentially harmful files. The AI followed these hidden instructions without question, infecting the test machine with malware.

Why this matters

These aren’t just isolated problems because they represent a completely new way scammers can attack people. Instead of having to trick millions of individuals one by one, criminals could potentially target the AI systems that millions of people use, multiplying their impact dramatically.

The scariest part? These AI browsers are designed to be helpful above all else. They want to complete tasks and make users happy, which means they’ll bend over backward to do what they think you want—even when “what you want” is actually a scammer’s instruction disguised as a legitimate request.

How to stay safe

If you’re considering using AI browsers or if your workplace is implementing them, here are some essential safety measures:

  • Set strict limits on what the AI can do without asking permission first. Don’t let it make purchases, enter personal information, or access sensitive accounts automatically.
  • Monitor everything the AI does. Make sure you can see and review every action it takes on your behalf, especially anything involving money or personal data.
  • Use the minimum permissions necessary. Don’t give the AI access to accounts, payment methods, or information it doesn’t absolutely need for specific tasks. If you wouldn’t give your credit card to a stranger, you shouldn’t give it to an AI agent either.
  • Stay involved in important decisions. Never let an AI browser handle financial transactions, sensitive communications, or account management without your direct oversight.

The bottom line

AI browsers promise incredible convenience, like having a digital assistant that can handle your online shopping, manage your emails, and research information while you focus on other things. But right now, these systems are like giving your credit card and house keys to someone who has never learned that strangers might try to trick them.

AI agentic technology will likely improve over time, but today’s AI browsers represent a significant risk that you must understand before using them. The choice between convenience and security has never been more important

Until these fundamental security problems are solved, the smartest approach is to treat AI browsers like you would any other powerful tool – useful when used carefully, but potentially dangerous when given too much freedom to act on your behalf.