The AI-Powered Hacker: How ChatGPT and Claude Are Weaponized Against Governments

An attacker used ChatGPT and Claude to socially engineer employees into granting access to government systems—highlighting how AI is transforming cyberattacks from technical exploits into psychological operations.

The New Face of Cyber Threats

A sophisticated hacker recently demonstrated that large language models are no longer just tools for drafting emails or writing code. By leveraging ChatGPT and Claude, the attacker bypassed multi-factor authentication systems and gained unauthorized access to multiple government agencies—without ever writing a single line of custom malware. This isn’t science fiction; it’s a live-fire demonstration of how generative AI is shifting the threat landscape from technical exploit to strategic deception.

AI as a Force Multiplier

The attack relied on social engineering at scale. The hacker used ChatGPT to generate highly convincing phishing templates—tailored emails with plausible urgency, official-looking headers, and context-aware language that mimicked internal communications. Claude was then deployed to analyze responses, adapt messaging in real time, and simulate human-like interaction during follow-ups. Together, they created a feedback loop that outpaced traditional security awareness training.

This isn’t brute-force hacking. It’s precision persuasion powered by machine learning. The attacker didn’t need deep knowledge of network vulnerabilities; instead, they exploited the weakest link: human psychology—amplified by AI’s ability to mimic empathy, authority, and urgency with chilling accuracy.

Governments Unprepared for an AI Arms Race

Many federal and local agencies operate on legacy infrastructure with fragmented security protocols and under-resourced IT teams. Identity verification often treats MFA as a compliance checkbox rather than a dynamic control: SMS codes and push approvals can be phished or fatigued out of users, while phishing-resistant factors such as FIDO2 hardware keys remain the exception. Against AI-generated voice clones, deepfake videos, and personalized spear-phishing, checkbox defenses crumble almost instantly.
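To make "dynamic control" concrete, here is a minimal sketch, in plain Python with entirely hypothetical signal names and weights, of how a risk-based authentication layer might decide when to step up from an ordinary second factor to a phishing-resistant one. Nothing here reflects any agency's actual policy; the thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    """Signals gathered at sign-in time (hypothetical names)."""
    new_device: bool          # device fingerprint not seen before
    unusual_geo: bool         # sign-in from an atypical location or network
    impossible_travel: bool   # distance since last sign-in implies impossible speed
    privileged_account: bool  # account can reach sensitive systems

def required_factor(ctx: LoginContext) -> str:
    """Choose an authentication factor from risk, not a fixed checkbox.

    Illustrative policy: routine sign-ins pass with an ordinary second
    factor; anything anomalous must present a phishing-resistant
    credential (e.g., a FIDO2/WebAuthn key, which is bound to the real
    site's origin and so cannot be relayed through a lookalike page).
    """
    risk = 0
    risk += 2 if ctx.new_device else 0
    risk += 2 if ctx.unusual_geo else 0
    risk += 4 if ctx.impossible_travel else 0
    risk += 3 if ctx.privileged_account else 0

    if risk >= 4:
        return "fido2_hardware_key"  # step up: phishing-resistant factor
    if risk >= 2:
        return "totp"                # time-based one-time code
    return "password_plus_push"      # baseline second factor

# A privileged sign-in from an unseen device forces the hardware key.
ctx = LoginContext(new_device=True, unusual_geo=False,
                   impossible_travel=False, privileged_account=True)
print(required_factor(ctx))  # -> fido2_hardware_key
```

The structural point is that the decision adapts per sign-in: an attacker who talks a user out of a password and a one-time code through an AI-crafted pretext still faces a credential that cannot be replayed.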

The breach exposed more than technical flaws; it revealed systemic gaps in how institutions assess modern threats. Security budgets remain anchored to perimeter defense, while the real vulnerability lies with adaptive adversaries who use off-the-shelf AI tools to automate reconnaissance and craft attacks that evolve faster than patch cycles can respond.

What Happens Next?

The implications ripple far beyond this single incident. As AI assistants become embedded in workplace communication (Slack bots, email copilots, even internal knowledge bases), the attack surface for credential harvesting expands with every integration. Attackers won't stop at government portals; they'll target contractors, vendors, and third-party service providers, where access controls are often weakest.

Organizations must shift from reactive cybersecurity to proactive adversarial thinking. That means stress-testing systems against simulated AI-driven attacks, reevaluating zero-trust architectures, and demanding transparency from AI vendors about how their models handle sensitive data. Governments, in particular, need standardized frameworks for securing public-sector applications built or augmented with generative AI.
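One way to operationalize that stress-testing is to run the attacker's own generate-and-adapt loop against consenting staff in a sandbox. The sketch below is a deliberately stubbed outline in Python: every name is hypothetical, draft_lure stands in for whatever text-generation tooling a red team actually uses, and send_to_sandbox stands in for its mail and telemetry stack. It shows the feedback-loop structure, not a working campaign.

```python
import random

def draft_lure(theme: str, feedback: list[str]) -> str:
    """Stand-in for a model call that drafts a training lure.

    A real exercise would prompt a model with the theme plus the
    feedback from earlier rounds; this stub just labels its output.
    """
    note = f" (refined with: {'; '.join(feedback)})" if feedback else ""
    return f"[TRAINING] {theme} lure{note}"

def send_to_sandbox(lure: str, recipients: list[str]) -> list[str]:
    """Stand-in for delivery to a consenting test group.

    Returns whoever 'clicked'. Randomized here; in practice this
    comes from the mail gateway and click telemetry.
    """
    return [r for r in recipients if random.random() < 0.3]

def run_campaign(theme: str, recipients: list[str], rounds: int = 3) -> None:
    """Drive the generate -> deliver -> observe -> adapt loop."""
    feedback: list[str] = []
    for n in range(1, rounds + 1):
        lure = draft_lure(theme, feedback)
        clicked = send_to_sandbox(lure, recipients)
        print(f"round {n}: {len(clicked)}/{len(recipients)} clicked")
        # What worked feeds the next draft: the same adaptive loop
        # the article describes, pointed at training instead of theft.
        feedback.append(f"round {n}: {len(clicked)} clicks")

run_campaign("overdue timesheet approval", [f"user{i}" for i in range(20)])
```

Tracking click rates round over round tells a security team whether its awareness training actually degrades an adaptive adversary, something a one-off annual phishing test cannot show.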