Jan 28 2026

AI Is the New Shadow IT: Why Cybersecurity Must Own AI Risk and Governance

Category: AI, AI Governance, AI Guardrails | disc7 @ 2:01 pm

AI is increasingly being compared to shadow IT, not because it is inherently reckless, but because it is being adopted faster than governance structures can keep up. This framing resonated strongly in recent discussions, including last week’s webinar, where there was broad agreement that AI is simply the latest wave of technology entering organizations through both sanctioned and unsanctioned paths.

What is surprising, however, is that some cybersecurity leaders believe AI should fall outside their responsibility. This mindset creates a dangerous gap. Historically, when new technologies emerged—cloud computing, SaaS platforms, mobile devices—security teams were eventually expected to step in, assess risk, and establish controls. AI is following the same trajectory.

From a practical standpoint, AI is still software. It runs on infrastructure, consumes data, integrates with applications, and influences business processes. If cybersecurity teams already have responsibility for securing software systems, data flows, and third-party tools, then AI naturally falls within that same scope. Treating it as an exception only delays accountability.

That said, AI is not just another application. While it shares many of the same risks as traditional software, it also introduces new dimensions that security and risk teams must recognize. Models can behave unpredictably, learn from biased data, or produce outcomes that are difficult to explain or audit.

One of the most significant shifts AI introduces is the prominence of ethics and automated decision-making. Unlike conventional software that follows explicit rules, AI systems can influence hiring decisions, credit approvals, medical recommendations, and security actions at scale. These outcomes can have real-world consequences that go beyond confidentiality, integrity, and availability.

Because of this, cybersecurity leadership must expand its lens. Traditional controls like access management, logging, and vulnerability management remain critical, but they must be complemented with governance around model use, data provenance, human oversight, and accountability for AI-driven decisions.
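
To make this concrete, here is a minimal sketch in Python of what such complementary controls might look like in practice: every model invocation is audit-logged with data-provenance tags, and high-impact decisions are held for human review rather than executed automatically. All names here (ModelRequest, call_model, and so on) are hypothetical illustrations of the idea, not a prescribed implementation.

# Minimal sketch (hypothetical names throughout): audit logging, data
# provenance, and a human-oversight gate wrapped around every model call.
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_governance.audit")

@dataclass
class ModelRequest:
    model_name: str                 # which model/version is being invoked
    purpose: str                    # approved business use case
    data_sources: list[str]         # provenance: where the input data came from
    high_impact: bool = False       # e.g. hiring, credit, medical, security actions
    payload: dict = field(default_factory=dict)

def require_human_approval(request: ModelRequest) -> bool:
    """Placeholder oversight gate: a real system would open a ticket or route
    the request to a reviewer; here we simply refuse automated execution."""
    audit_log.warning("Human approval required for high-impact request: %s", request.purpose)
    return False

def call_model(request: ModelRequest, model_fn):
    """Log provenance, enforce oversight, then invoke the caller-supplied model."""
    audit_log.info("AI call: %s", json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": request.model_name,
        "purpose": request.purpose,
        "data_sources": request.data_sources,
        "high_impact": request.high_impact,
    }))
    if request.high_impact and not require_human_approval(request):
        return None  # decision deferred to a human, not automated
    return model_fn(request.payload)

# Usage: a low-impact call goes straight through; a high-impact one is held.
if __name__ == "__main__":
    echo_model = lambda payload: {"summary": f"processed {len(payload)} fields"}
    ok = call_model(ModelRequest("summarizer-v1", "ticket triage",
                                 ["crm_export_2025-10"], payload={"text": "..."}), echo_model)
    held = call_model(ModelRequest("scoring-v2", "credit approval",
                                   ["loan_applications"], high_impact=True), echo_model)
    print(ok, held)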

Ultimately, the debate is not about whether AI belongs to cybersecurity—it clearly does—but about how the function evolves to manage it responsibly. Ignoring AI or pushing it to another team risks repeating the same mistakes made with shadow IT in the past.

My perspective: AI really is shadow IT in its early phase—new, fast-moving, and business-driven—but that is precisely why cybersecurity and risk leaders must step in early. The organizations that succeed will be the ones that treat AI as software plus governance: securing it technically while also addressing ethics, transparency, and decision accountability. That combination turns AI from an unmanaged risk into a governed capability.

In a recent interview and accompanying essay, Anthropic CEO Dario Amodei warns that humanity is not prepared for the rapid evolution of artificial intelligence and the profound disruptions it could bring. He argues that existing social, political, and economic systems may lag behind the pace of AI advancements, creating a dangerous mismatch between capability and governance.


At DISC InfoSec, we help organizations navigate this landscape by aligning AI risk management, governance, security, and compliance into a single, practical roadmap. Whether you are experimenting with AI or deploying it at scale, we help you choose and operationalize the right frameworks to reduce risk and build trust. Learn more at DISC InfoSec.

Tags: Shadow AI, Shadow IT


Sep 21 2023

Shadow IT: Security policies may be a problem

Category: Security Policy | disc7 @ 3:13 pm

Shadow IT: A Clear and Concise Reference

A recent report by Kolide and Dimensional Research revealed that three-quarters of employees use their personal, and often unmanaged, mobile devices and laptops for work, and that nearly half of the surveyed companies permit such unmanaged devices to access secure resources. The report, based on responses from 334 IT, security, and business professionals, highlights the diverse motivations behind this practice; three of those reasons indicate that a substantial number of employees use personal devices specifically to circumvent their organization’s security policies.

The dangers of shadow IT

The prevalence of shadow IT in enterprise environments is a well-established fact.

When the organization’s IT department refuses to sign off on a needed solution, or drags its feet when asked to approve it, workers in other departments are tempted to deploy it without IT’s knowledge.

The problem is compounded by the widespread use of personal/unmanaged devices, as the IT department has no way of knowing what’s happening on them, whether they are regularly patched/upgraded or whether they have been compromised.

“When engineers do production-level work on personal devices, an organization’s risk of a breach skyrockets. A bad actor can use a security flaw in an unmanaged device to break into the production environment, as in the LastPass breach. Even a simple smash-and-grab of a laptop can turn into a nightmare if that laptop is full of PII, and IT has no way to remotely wipe it,” Kolide researchers noted.

Employees shouldn’t be blamed for flawed security policies

Workers use their personal devices for work to, among other things, access websites and applications that the IT department has restricted, and because working through the official security measures is frustrating.

This, together with the fact that only 47% of respondents said they always follow all cybersecurity policies, shows that the security policies in place are not working for everyone.

“Unfortunately, we don’t have data on which specific policies respondents felt justified in going around, but we can make two inferences from this response: Any security policy that workers can ignore at will does not have adequate safeguards around it, and if workers who generally try to follow the rules ignore a security policy, either they don’t understand the risks associated with a specific behavior, or the policy itself is flawed,” the researchers said.

Employers and workers need more open, honest dialogue about security, they pointed out. Security and IT professionals must make an effort to understand why workers feel they have to go around policies.

Finally, the results of the survey also debunk the myth that security training is useless and a despised nuisance.

“In the strongest data point of our survey, 96% of workers (across teams and seniority) reported that training was either helpful, or would be helpful if it were better designed. The message here is that people want to be educated on how to behave safely,” the researchers concluded.


Tags: Shadow IT