GhostGPT is a new artificial intelligence (AI) tool that cybercriminals are using to develop malware, breach systems, and craft convincing phishing emails. According to researchers at Abnormal Security, GhostGPT is sold on the messaging platform Telegram, with prices starting at $50 per week. Its appeal lies in its speed, its ease of use, and the fact that it does not store user conversations, making it difficult for authorities to trace activity back to individuals.
This trend is not isolated to GhostGPT; other AI tools, such as WormGPT, are also being used for illicit purposes. These unethical AI models let criminals bypass the guardrails built into legitimate AI systems such as ChatGPT, Google Gemini, Claude, and Microsoft Copilot. The emergence of cracked AI models, modified versions of legitimate AI tools, has further given hackers unrestricted access to powerful AI capabilities. Security researchers have observed a rise in the use of these tools for cybercrime since late 2024, a significant concern for the tech industry and security professionals. This misuse of AI threatens both businesses and individuals, turning a technology built to assist into one that can cause real harm.
For further details, read the full article here.

Basic Principle to Enterprise AI Security
New regulations and AI hacks drive cyber security changes in 2025
Threat modeling your generative AI workload to evaluate security risk
How CISOs Can Drive the Adoption of Responsible AI Practices
Hackers will use machine learning to launch attacks
To fight AI-generated malware, focus on cybersecurity fundamentals
4 ways AI is transforming audit, risk and compliance