
1. Sam Altman, CEO of OpenAI, the company behind ChatGPT, recently issued a sobering warning: he expects "some really bad stuff to happen" as AI technology becomes more powerful.
2. His concern isn't abstract. He pointed to real-world examples: advanced tools such as Sora 2, OpenAI's own AI video tool, have already enabled the creation of deepfakes. Some of these deepfakes, misusing public-figure likenesses (including Altman's own), went viral on social media.
3. According to Altman, these are only early warning signs. He argues that as AI becomes more accessible and widespread, humans and society will need to "co-evolve" alongside the technology, building not just tech but also the social norms, guardrails, and safety frameworks that can handle it.
4. The risks are multiple: deepfakes could erode public trust in media, fuel misinformation, enable fraud or identity-related crimes, and disrupt how we consume and interpret information online. The technology's speed and reach make the hazards more acute.
5. Altman cautioned against overreliance on AI-based systems for decision-making. He warned that if many users start trusting AI outputs, whether for news, advice, or content, we might reach "societal-scale" consequences: unpredictable shifts in public opinion, democracy, trust, and collective behavior.
6. Still, despite these grave warnings, Altman dismissed calls for heavy regulatory restrictions on AI's development and release. Instead, he supports "thorough safety testing," especially for the most powerful models, arguing that regulation may have unintended consequences or slow beneficial progress.
7. Critics note a contradiction: the same company that warns of catastrophic risks is actively releasing powerful tools like Sora 2 to the public. That raises concerns about whether early release, even in the name of "co-evolution," irresponsibly accelerates exposure to harm before adequate safeguards are in place.
8. The bigger picture: what happens now will likely shape how society, law, and norms adapt to AI. If deepfake tools and AI-driven content become commonplace, we may face a future where "seeing is believing" no longer holds true, and navigating truth versus manipulation becomes far harder.
9. In short: Altman's warning serves partly as a wake-up call. He's not just flagging technical risk; he's asking society to seriously confront how we consume, trust, and regulate AI-powered content. At the same time, his company continues to drive that content forward. It's a tension between innovation and caution, with potentially huge societal implications.
🔎 My Opinion
I think Altman's public warning is important and overdue; it's rare to see an industry leader acknowledge the dangers of their own creations so candidly. This sort of transparency helps start vital conversations about ethics, regulation, and social readiness.
That said, releasing powerful AI capabilities broadly while simultaneously warning they might cause severe harm strikes me as contradictory. If companies push ahead with widespread deployment before robust guardrails are tested and widely adopted, we risk exposing society to misinformation, identity fraud, erosion of trust, and social disruption.
Given how fast AI adoption is accelerating, and how high the stakes are, I believe a stronger emphasis on AI governance, transparency, regulation, and public awareness is essential. Innovation should continue, but not at the expense of public safety, trust, and societal stability.
Further reading on this topic
- CEO of ChatGPT’s Parent Company: ‘I Expect Some Really Bad Stuff To Happen’ - Here’s What He Means
- A Simple 4-Step Path to ISO 42001 for SMBs
- How ISO 42001 Strengthens Alignment With the EU AI Act (Without Replacing Legal Compliance)
- When a $3K “cybersecurity gap assessment” reveals you don’t actually have cybersecurity to assess…
- ISO 42001 and the Business Imperative for AI Governance
- Emerging Tools & Frameworks for AI Governance & Security Testing