Dec 01 2025

ChatGPT CEO Warns of AI Risks: Balancing Innovation with Societal Safety

Category: AI,AI Guardrailsdisc7 @ 12:12 pm

1. Sam Altman — CEO of OpenAI, the company behind ChatGPT — recently issued a sobering warning: he expects “some really bad stuff to happen” as AI technology becomes more powerful.

2. His concern isn’t abstract. He pointed to real‑world examples: advanced tools such as Sora 2 — OpenAI’s own AI video tool — have already enabled the creation of deepfakes. Some of these deepfakes, misusing public‑figure likenesses (including Altman’s own), went viral on social media.

3. According to Altman, these are only early warning signs. He argues that as AI becomes more accessible and widespread, humans and society will need to “co‑evolve” alongside the technology — building not just tech, but the social norms, guardrails, and safety frameworks that can handle it.

4. The risks are multiple: deepfakes could erode public trust in media, fuel misinformation, enable fraud or identity‑related crimes, and disrupt how we consume and interpret information online. The technology’s speed and reach make the hazards more acute.

5. Altman cautioned against overreliance on AI‑based systems for decision-making. He warned that if many users start trusting AI outputs — whether for news, advice, or content — we might reach “societal‑scale” consequences: unpredictable shifts in public opinion, democracy, trust, and collective behavior.

6. Still, despite these grave warnings, Altman dismissed calls for heavy regulatory restrictions on AI’s development and release. Instead, he supports “thorough safety testing,” especially for the most powerful models — arguing that regulation may have unintended consequences or slow beneficial progress.

7. Critics note a contradiction: the same company that warns of catastrophic risks is actively releasing powerful tools like Sora 2 to the public. That raises concerns about whether early release — even in the name of “co‑evolution” — irresponsibly accelerates exposure to harm before adequate safeguards are in place.

8. The bigger picture: what happens now will likely shape how society, law, and norms adapt to AI. If deepfake tools and AI‑driven content become commonplace, we may face a future where “seeing is believing” no longer holds true — and navigating truth vs manipulation becomes far harder.

9. In short: Altman’s warning serves partly as a wake‑up call. He’s not just flagging technical risk — he’s asking society to seriously confront how we consume, trust, and regulate AI‑powered content. At the same time, his company continues to drive that content forward. It’s a tension between innovation and caution — with potentially huge societal implications.


🔎 My Opinion

I think Altman’s public warning is important and overdue — it’s rare to see an industry leader acknowledge the dangers of their own creations so candidly. This sort of transparency helps start vital conversations about ethics, regulation, and social readiness.

That said, I’m concerned that releasing powerful AI capabilities broadly, while simultaneously warning they might cause severe harm, feels contradictory. If companies push ahead with widespread deployment before robust guardrails are tested and widely adopted, we risk exposing society to misinformation, identity fraud, erosion of trust, and social disruption.

Given how fast AI adoption is accelerating — and how high the stakes are — I believe a stronger emphasis on AI governance, transparency, regulation, and public awareness is essential. Innovation should continue, but not at the expense of public safety, trust, and societal stability.

Further reading on this topic

Investopedia

CEO of ChatGPT’s Parent Company: ‘I Expect Some Really Bad Stuff To Happen’-Here’s What He Means

Mastering ISO 23894 – AI Risk Management: The AI Risk Management Blueprint | AI Lifecycle and Risk Management Demystified | AI Risk Mastery with ISO 23894 | Navigating the AI Lifecycle with Confidence

InfoSec services | ISMS Services | AIMS Services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Governance, AI risks, Deepfakes and Fraud, deepfakes for phishing, identity‑related crime, misinformation


Mar 08 2024

Immediate AI risks and tomorrow’s dangers

Category: AIdisc7 @ 11:29 am

“At the most basic level, AI has given malicious attackers superpowers,” Mackenzie Jackson, developer and security advocate at GitGuardian, told the audience last week at Bsides Zagreb.

These superpowers are most evident in the growing impact of fishing, smishing and vishing attacks since the introduction of ChatGPT in November 2022.

And then there are also malicious LLMs, such as FraudGPT, WormGPT, DarkBARD and White Rabbit (to name a few), that allow threat actors to write malicious code, generate phishing pages and messages, identify leaks and vulnerabilities, create hacking tools and more.

AI has not necessarily made attacks more sophisticated but, he says, it has made them more accessible to a greater number of people.

The potential for AI-fueled attacks

It’s impossible to imagine all the types of AI-fueled attacks that the future has in store for us. Jackson outlined some attacks that we can currently envision.

One of them is a prompt injection attack against a ChatGPT-powered email assistant, which may allow the attacker to manipulate the assistant into executing actions such as deleting all emails or forwarding them to the attacker.

Inspired by a query that resulted in ChatGPT outright inventing a non-existent software package, Jackson also posited that an attacker might take advantage of LLMs’ tendency to “hallucinate” by creating malware-laden packages that many developers might be searching for (but currently don’t exist).

The immediate threats

But we’re facing more immediate threats right now, he says, and one of them is sensitive data leakage.

With people often inserting sensitive data into prompts, chat histories make for an attractive target for cybercriminals.

Unfortunately, these systems are not designed to secure the data – there have been instances of ChatGTP leaking users’ chat history and even personal and billing data.

Also, once data is inputted into these systems, it can “spread” to various databases, making it difficult to contain. Essentially, data entered into such systems may perpetually remain accessible across different platforms.

And even though chat history can be disabled, there’s no guarantee that the data is not being stored somewhere, he noted.

One might think that the obvious solution would be to ban the use of LLMs in business settings, but this option has too many drawbacks.

Jackson argues that those who aren’t allowed to use LLMs for work (especially in the technology domain) are likely to fall behind in their capabilities.

Secondly, people will search for and find other options (VPNs, different systems, etc.) that will allow them to use LLMs within enterprises.

This could potentially open doors to another significant risk for organizations: shadow AI. This means that the LLM is still part of the organization’s attack surface, but it is now invisible.

How to protect your organization?

When it comes to protecting an organization from the risks associated with AI use, Jackson points out that we really need to go back to security basics.

People must be given the appropriate tools for their job, but they also must be made to understand the importance of using LLMs safely.

He also advises to:

  • Put phishing protections in place
  • Make frequent backups to avoid getting ransomed
  • Make sure that PII is not accessible to employees
  • Avoid keeping secrets on the network to prevent data leakage
  • Use software composition analysis (SCA) tools to avoid AI hallucinations abuse and typosquatting attacks

To make sure your system is protected from prompt injection, he believes that implementing dual LLMs, as proposed by programmer Simon Willison, might be a good idea.

Despite the risks, Jackson believes that AI is too valuable to move away from.

He anticipates a rise in companies and startups using AI toolsets, leading to potential data breaches and supply chain attacks. These incidents may drive the need for improved legislation, better tools, research, and understanding of AI’s implications, which are currently lacking because of its rapid evolution. Keeping up with it has become a challenge.

AI Scams:

Are chatbots the new weapon of online scammers?

AI used to fake voices of loved ones in “I’ve been in an accident” scam

Story of Attempted Scam Using AI | C-SPAN.org

Woman loses Rs 1.4 lakh to AI voice scam

Kidnapping scam uses artificial intelligence to clone teen girl’s voice, mother issues warning

First-Ever AI Fraud Case Steals Money by Impersonating CEO

AI Scams Mitigation:

A.I. Scam Detector

Every country is developing AI laws, standards, and specifications. In the US, states are introducing 50 AI related regulations a week (Axios 0 2024). Each of the regulations see AI through the lens for social and technical risk.

Trust Me: AI Risk Management is a book of AI Risk Controls that can be incorporated into the NIST AI RMF guidelines or NIST CSF. Trust Me looks at the key attributes of AI including trust, explainability, and conformity assessment through an objective-risk-control-why lens. If you’re developing, designing, regulating, or auditing AI systems, Trust Me: AI Risk Management is a must read.

👇 Do you place your trust in AI?? 👇

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: AI risks