The Day ChatGPT Got Hacked: What the Zenity Black Hat Demo Means for You

Written by EBODA.digital Strategy Team | Aug 9, 2025

On a hot August day in Las Vegas, in a ballroom packed with some of the sharpest minds in cybersecurity, something happened that should make every AI user sit up straight.
At Black Hat 2025, Israeli cybersecurity firm Zenity pulled back the curtain on a “zero-click” hack into ChatGPT — the world’s most popular AI chatbot — and what they revealed wasn’t just impressive…it was alarming.

This wasn’t a phishing scam. No shady link. No fake login page.
You didn’t have to click, download, or even open anything.

And yet, in Zenity’s live demo, the attacker:

  • Took over a ChatGPT account instantly

  • Read past and future conversations

  • Accessed linked Google Drive files

  • Manipulated the AI’s responses to deliver dangerous advice or even disguised malware

If you’ve connected ChatGPT to other systems (email, cloud storage, CRM, project management tools), the risk isn’t just linear; it’s exponential. One compromised AI account can ripple through your business like a breached dam, taking confidential documents, financial data, and customer information with it.


Why This Is Different From Other Hacks

Most hacks rely on you making a mistake — clicking a malicious link, reusing a weak password, or falling for a scam email. That means security teams can focus on awareness training to help people spot and avoid threats.

But this attack bypasses the human entirely.

Zenity’s “zero-click” method needs just one thing: your email address. That’s it.
Once they have that — which can often be found in seconds from LinkedIn, your company’s website, or public databases — the attacker can slip into your account like a ghost through a wall.

And once they’re inside, they’re not just watching. They can:

  • Steal sensitive files stored or linked in your account

  • Alter AI output to subtly mislead you or your team

  • Insert malicious files or code that appear safe

  • Harvest business intelligence from your own prompts and work projects

  • Monitor future conversations to capture strategies, client details, or financial plans

The Bigger Picture

This isn’t just about ChatGPT.
Zenity reported finding similar vulnerabilities in Microsoft Copilot Studio and Salesforce Einstein, and more are likely to emerge as AI adoption accelerates.

The problem isn’t one company’s oversight — it’s structural. AI tools are designed to integrate deeply into your workflows:

  • They can send emails for you

  • They can open, read, and edit files in your cloud storage

  • They can connect to calendars, CRMs, and databases

These capabilities are powerful, but they also create a perfect storm of trust and access. When that trust is broken, the fallout is immediate and far-reaching.
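
To make that trade-off concrete, here is a minimal sketch (in Python, assuming a Google Drive integration and Google's google-auth-oauthlib library; the file name is a placeholder) of how the scope an AI tool requests at connection time determines how far a compromise can reach:

    from google_auth_oauthlib.flow import InstalledAppFlow

    # Narrow, per-file scope: the connected tool can only touch files it
    # created or files the user explicitly handed to it.
    NARROW_SCOPES = ["https://www.googleapis.com/auth/drive.file"]

    # A broad scope such as "https://www.googleapis.com/auth/drive" would
    # instead expose every file in the Drive, exactly the kind of
    # "full access" grant an attacker hopes to inherit.

    # "credentials.json" is a placeholder for your own OAuth client file.
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", NARROW_SCOPES)
    creds = flow.run_local_server(port=0)
    print("Granted scopes:", creds.scopes)

The integration works either way; the difference only shows up on the day the account is compromised.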


Why This Matters to Your Business

Think about the kinds of information you’ve given your AI tools in the last six months.

  • Draft proposals with confidential pricing

  • Internal strategy documents

  • Client names and details

  • Financial forecasts

  • HR communications

Now imagine all of that — past, present, and future — in the hands of someone who has no business seeing it. Worse, imagine that attacker feeding you bad information you trust enough to act on.

We’re moving into an era where AI isn’t just answering questions — it’s making decisions, generating contracts, sending files, and automating workflows. That means a compromised AI account doesn’t just expose your information… it can change your behavior.


This Could Happen Without You Noticing

One of the most dangerous parts of this exploit is how stealthy it can be. An attacker doesn’t need to make a dramatic move to cause damage. They can operate quietly, harvesting data over weeks or months, or inserting small but consequential changes into documents and AI outputs.

By the time you realize something’s wrong, contracts might be signed, campaigns might be launched, and financial transfers might be made — all based on manipulated AI assistance.


The Lesson from Zenity’s Demo

Zenity’s research makes one thing clear:
AI security is lagging behind AI capability.

The tech industry is sprinting toward more integrated, more autonomous AI systems, but security practices haven’t caught up. AI tools today often have the keys to your digital kingdom without the same protections we’d insist on for human employees in similar roles.


What You Should Do Right Now

While OpenAI and Microsoft issued fixes after Zenity’s report, the larger problem — over-permissioned integrations, weak identity protections, and lack of AI-specific security protocols — remains.
Here are three steps you can take today:

  1. Audit Your AI Integrations

  • Check what apps and services are connected to your AI account (see the sample audit sketch after this list)

    • Remove anything you no longer use

  2. Limit Permissions

    • Avoid granting “full access” unless absolutely necessary

    • Separate sensitive data from AI integrations where possible

  3. Monitor AI Activity

    • Regularly review your account history

    • Look for unfamiliar file activity or unusual AI suggestions
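
If your organization runs on Google Workspace, step 1 doesn’t have to be a manual click-through. As a rough sketch (assuming the google-api-python-client library, a service account with domain-wide delegation, the admin.directory.user.security scope, and placeholder addresses and file names), an administrator can list every third-party app each user has authorized and the scopes it holds:

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

    # "service-account.json" and the addresses below are placeholders.
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES
    ).with_subject("admin@yourcompany.com")

    directory = build("admin", "directory_v1", credentials=creds)

    # List every OAuth grant (connected app) for one user.
    tokens = directory.tokens().list(userKey="employee@yourcompany.com").execute()
    for token in tokens.get("items", []):
        print(token.get("displayText", "unknown app"))
        for scope in token.get("scopes", []):
            print("   ", scope)  # watch for broad grants like .../auth/drive

Anything in that list you no longer recognize or use is a candidate for revocation, which is exactly step 1’s second bullet.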

What’s Next in This Series

In the rest of our When AI Gets Hacked series, we’ll cover:

  1. How zero-click attacks work (without jargon)

  2. Real-world scenarios where this vulnerability could cause massive harm

  3. Prevention strategies that protect your AI and your data

  4. The future of AI security standards and what to expect from vendors

This is the wake-up call we didn’t want — but desperately needed. AI isn’t just a clever tool anymore. It’s becoming the backbone of how we work. And when the backbone breaks, everything else falls.


📢 Concerned about your AI security?

EBODA.digital helps businesses audit, secure, and optimize their AI integrations — before attackers can exploit them.
Contact our team today to schedule a security readiness assessment and safeguard your AI workflows.