Dec 05 2025

Are AI Companies Protecting Humanity? The Latest Scorecard Says No

The article reports on a new “safety report card” assessing how well leading AI companies are protecting humanity from the risks posed by powerful artificial-intelligence systems. The report was issued by the Future of Life Institute (FLI), a nonprofit that studies existential threats and promotes the safe development of emerging technologies.

This “AI Safety Index” grades companies on 35 indicators across six domains: existential safety, risk assessment, information sharing, governance, safety frameworks, and current harms.
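To make the grading structure concrete, here is a minimal sketch of how indicator-level scores could roll up into domain grades and an overall letter grade. This is purely illustrative: the domain names come from the report, but the 0–4 score scale, equal weighting, and grade cutoffs below are assumptions, not FLI’s actual methodology.

```python
# Hypothetical illustration of an indicator-based scorecard.
# Domain names follow the FLI AI Safety Index; the 0-4 score scale,
# equal weighting, and letter-grade cutoffs are assumptions for this sketch.
from statistics import mean

DOMAINS = [
    "Existential Safety",
    "Risk Assessment",
    "Information Sharing",
    "Governance",
    "Safety Frameworks",
    "Current Harms",
]

def to_letter(score: float) -> str:
    """Map a 0-4 numeric score to a coarse letter grade (hypothetical cutoffs)."""
    cutoffs = [(3.7, "A"), (3.3, "A-"), (3.0, "B+"), (2.7, "B"), (2.3, "B-"),
               (2.0, "C+"), (1.7, "C"), (1.3, "C-"), (1.0, "D"), (0.0, "F")]
    for threshold, letter in cutoffs:
        if score >= threshold:
            return letter
    return "F"

def grade_company(indicator_scores: dict[str, list[float]]) -> dict[str, str]:
    """Average each domain's indicator scores, then average domains for an overall grade."""
    domain_means = {d: mean(scores) for d, scores in indicator_scores.items()}
    grades = {d: to_letter(m) for d, m in domain_means.items()}
    grades["Overall"] = to_letter(mean(domain_means.values()))
    return grades

# Example with made-up numbers for a fictional company:
example_scores = {d: [2.5, 2.0, 1.5] for d in DOMAINS}
print(grade_company(example_scores))
```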

In the latest (Winter 2025) edition of the index, no company scored higher than a “C+.” The top-scoring companies were Anthropic and OpenAI, followed by Google DeepMind.

Other firms, including xAI, Meta, and a few Chinese AI companies, scored D or worse.

A key finding is that all evaluated companies scored poorly on “existential safety” — which covers whether they have credible strategies, internal monitoring, and controls to prevent catastrophic misuse or loss of control as AI becomes more powerful.

Even though companies like OpenAI and Google DeepMind say they’re committed to safety — citing internal research, safeguards, testing with external experts, and safety frameworks — the report argues that public information and evidence remain insufficient to demonstrate real readiness for worst-case scenarios.

For firms such as xAI and Meta, the report highlights a near-total lack of evidence about concrete safety investments beyond minimal risk-management frameworks. Some companies didn’t respond to requests for comment.

The authors of the index — a panel of eight independent AI experts, including academics and heads of AI-related organizations — emphasize that the industry remains largely unregulated in the U.S. They warn that, without oversight, competition creates a “race to the bottom” dynamic that discourages companies from prioritizing safety when profitability and market leadership are at stake.

The report suggests that binding safety standards — not voluntary commitments — may be necessary to ensure companies take meaningful action before more powerful AI systems become a reality.

The broader context: as AI systems play larger roles in society, their misuse becomes more plausible, from facilitating cyberattacks and enabling harmful automation to posing existential threats if a misaligned superintelligent AI were ever developed.

In short: according to the index, the AI industry still has a long way to go before it can be considered truly “safe for humanity,” even among its most prominent players.


My Opinion

I find the results of this report deeply concerning, but not surprising. The fact that even the top-ranked firms earn no better than a “C+” strongly suggests that current AI safety efforts are more symbolic than sufficient. Companies seem to be investing in safety only at a surface level (e.g., statements, frameworks), with little evidence that they are preparing in a robust, transparent, and enforceable way for the profound risks AI could pose, especially existential threats or catastrophic misuse.

The notion that an industry with such powerful long-term implications remains essentially unregulated feels reckless. Voluntary commitments and internal policies can easily be overridden by competitive pressure or short-term financial incentives. Without external oversight and binding standards, there’s no guarantee safety will win out over speed or profits.

That said, the fact that the FLI even produces this index, and that the leading firms manage a “C+,” shows some awareness of and effort toward safety. It’s better than nothing. But awareness must translate into real action: rigorous third-party audits, transparent safety testing, formal safety requirements, and, potentially, regulation.

In the end, I believe society should treat AI much like we treat high-stakes technologies such as nuclear power: with caution, transparency, and enforceable safety norms. It’s not enough to say “we care about safety”; firms must prove they can manage the long-term consequences, and governments and civil society need to hold them accountable.


Tags: AI Safety, AI Scorecard