AI Passwords: Easily Cracked Despite Appearing Strong – Security Warning

by Sophie Lin - Technology Editor

The convenience of artificial intelligence extends to password creation, but a new report reveals a significant security flaw. Security researchers are warning that passwords generated by popular AI chatbots like ChatGPT, Claude, and Gemini are surprisingly predictable and could be cracked in a matter of hours. This vulnerability stems from the way these large language models (LLMs) generate text – by predicting patterns rather than creating truly random sequences.

The findings, released by the cybersecurity firm Irregular, demonstrate that while AI-generated passwords appear strong — meeting standard complexity requirements with a mix of uppercase and lowercase letters, numbers, and symbols — they lack the essential randomness needed to withstand modern cracking attempts. In other words, users relying on AI for password creation are unknowingly exposing themselves to increased risk.

Irregular’s testing involved prompting Claude to generate 50 passwords. Only 30 were unique; several were exact duplicates. Many passwords shared similar beginnings and endings, and certain characters were consistently absent – all indicators of a non-random generation process. Similar patterns were observed with OpenAI’s GPT-5.2, Google’s Gemini 3 Flash, and even passwords generated from image-based “note” prompts.

The core issue lies in the fundamental difference between how LLMs operate and how truly secure password generators function. LLMs predict the most likely next character based on their training data, while secure generators utilize cryptographically secure random number generators. This results in AI-generated passwords having significantly lower entropy – a measure of randomness – than expected. Irregular calculated that 16-character passwords from LLMs have an entropy of around 20-27 bits, compared to the 98-120 bits expected from genuinely random passwords of the same length. This lower entropy dramatically reduces the time and resources needed to crack these passwords.
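The entropy gap described above can be worked out directly. A short sketch of the arithmetic — the 94-symbol printable-ASCII alphabet is an illustrative assumption, as the article does not specify which character set Irregular used:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy in bits of a password drawn uniformly at random."""
    return length * math.log2(charset_size)

# 16 characters drawn uniformly from the 94 printable ASCII symbols
random_bits = entropy_bits(94, 16)

# Irregular's measured upper figure for 16-character LLM passwords
llm_bits = 27

# Every lost bit halves the attacker's search space
print(f"Truly random 16-char password: {random_bits:.1f} bits")
print(f"Effective LLM entropy:         {llm_bits} bits")
print(f"Search space shrinks by a factor of 2^{random_bits - llm_bits:.0f}")
```

A uniformly random 16-character password over that alphabet lands at roughly 105 bits, squarely in the 98–120 bit range the report cites, while the 20–27 bit LLM figure corresponds to a search space smaller by dozens of orders of magnitude.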

How Vulnerable Are AI-Generated Passwords?

According to Irregular, the predictable nature of these passwords means they can be cracked by even older hardware using brute-force methods within hours. The identifiable patterns allow attackers to target systems where LLM-generated passwords may have been used, particularly in open-source projects. This is a growing concern as developers increasingly leverage AI coding assistants that may silently generate passwords without explicit user oversight.
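To see why low entropy puts these passwords within reach of modest hardware, consider a rough worst-case brute-force estimate. The guess rate below is a hypothetical figure for illustration, not one from Irregular's report:

```python
def worst_case_seconds(entropy_bits: float, guesses_per_second: float) -> float:
    """Time to exhaust the full search space of a password
    with the given entropy at the given guess rate."""
    return 2 ** entropy_bits / guesses_per_second

# Assumed rate: 10 million guesses/second, plausible for dated hardware
RATE = 1e7

llm_pw = worst_case_seconds(27, RATE)      # 27-bit LLM password
random_pw = worst_case_seconds(104, RATE)  # ~104-bit random password

print(f"27-bit password:  {llm_pw:.0f} seconds")
print(f"104-bit password: {random_pw / (3600 * 24 * 365):.1e} years")
```

At that rate a 27-bit search space falls in seconds, and even far slower, pattern-guided attacks finish within hours, while a genuinely random 104-bit password would take on the order of 10^14 years to exhaust.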

The problem extends beyond individual users. Irregular’s research highlights that coding agents like Claude Code and Gemini-CLI sometimes generate LLM-based passwords when setting up databases, APIs, and services. Whether a secure method or an LLM-generated password is used can depend on subtle differences in the prompts given to the AI, such as using “generate” versus “suggest.” This creates a hidden vulnerability, as developers may unknowingly ship code with weak credentials.

“LLMs and coding agents should default to cryptographically secure random generation tools rather than producing passwords from token sampling,” Irregular stated in a LinkedIn post detailing their findings. They recommend developers review generated code for hardcoded credentials and implement instructions to prevent AI assistants from using LLM-based password generation.
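The secure alternative Irregular points to is straightforward in most languages. A minimal sketch in Python, using the standard library's `secrets` module, which draws from the operating system's cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password with a cryptographically secure RNG,
    rather than sampling tokens from a language model."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike LLM token sampling, every character here is chosen independently and uniformly, so a 16-character result actually delivers the ~105 bits of entropy its length implies.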

The implications of this research are significant, extending beyond simple password security. The same vulnerabilities present in password generation could exist in other security-sensitive areas where AI is used in development, potentially creating insidious attack vectors.

What to Do Now

If you’ve used an AI chatbot to generate a password, security experts strongly advise changing it immediately. This applies to all accounts where you’ve employed this practice. Beyond that, it’s crucial to understand the limitations of AI in security-critical applications and to prioritize established, secure methods for generating strong, random passwords. Consider using password managers like 1Password or LastPass, or the built-in password systems offered by Apple and Google, which utilize robust random number generation.

The rise of generative AI presents exciting opportunities, but it also introduces new security challenges. As AI becomes more integrated into our digital lives, it’s essential to remain vigilant and prioritize security best practices to protect against emerging threats. The convenience of AI-generated passwords simply isn’t worth the risk.

What further steps will security firms and AI developers take to address these vulnerabilities? The coming months will likely see increased scrutiny of AI-generated content and a push for more secure AI development practices. Share your thoughts and experiences in the comments below.
