Jul 20 2025

Think Before You Share: The Hidden Privacy Costs of AI Convenience

Category: AI, Information Privacy | disc7 @ 8:28 am
  1. AI is rapidly embedding itself into daily life—from smartphones and web browsers to drive‑through kiosks—with baked‑in assistants changing how we seek information. However, this shift also means AI tools are increasingly requesting extensive access to personal data under the pretext of functionality.
  2. This mirrors a familiar pattern: just as simple flashlight or calculator apps once over‑requested permissions (like contacts or location), modern AI apps are doing the same—collecting far more than needed, often for profit.
  3. For example, Perplexity’s AI browser “Comet” seeks sweeping Google account permissions: calendar manipulation, drafting and sending emails, downloading contacts, editing events across all calendars, and even accessing corporate directories.
  4. Although Perplexity asserts that most of this data remains locally stored, the user is still granting the company extensive rights—rights that may be exercised to improve its AI models, shared with other parties, or retained beyond the immediate use case.
  5. This trend isn’t isolated. AI transcription tools ask for access to conversations, calendars, contacts. Meta’s AI experiments even probe private photos not yet uploaded—all under the “assistive” justification.
  6. Signal’s president Meredith Whittaker likens this to “putting your brain in a jar”—granting agents clipboard‑level access to passwords, browsing history, credit cards, calendars, and contacts just to book a restaurant or plan an event.
  7. The consequence: you surrender an irreversible snapshot of your private life—emails, contacts, calendars, archives—to a profit‑motivated company that may also employ people who review your private prompts. Given frequent AI errors, the benefits gained rarely justify the privacy and security costs.
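One practical defense against the permission over-reach described above is to audit what a connected app was actually granted. The sketch below checks the JSON body returned by Google's public tokeninfo endpoint (`GET https://oauth2.googleapis.com/tokeninfo?access_token=...`) against a short list of broad scopes. The scope list, threshold of "risky," and function name are illustrative choices of ours, not an official taxonomy:

```python
import json

# Scopes that grant write or bulk-read access to personal data.
# Illustrative, not exhaustive -- extend for your own threat model.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/calendar",          # read/write all calendars
    "https://www.googleapis.com/auth/gmail.send",        # send mail as the user
    "https://www.googleapis.com/auth/contacts.readonly", # download all contacts
}

def risky_scopes(tokeninfo_json: str) -> list:
    """Given the JSON body from Google's tokeninfo endpoint, return
    the granted scopes that appear on the broad-access list above."""
    granted = set(json.loads(tokeninfo_json).get("scope", "").split())
    return sorted(granted & BROAD_SCOPES)
```

Running this against a token issued to an AI assistant makes the trade-off concrete: a non-empty result means the app can act on your data, not merely read what you hand it.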

Perspective:
This article issues a timely and necessary warning: convenience should not override privacy. AI tools promising to “just do it for you” often come with deep data access bundled in unnoticed. Until robust regulations and privacy‑first architectures (like end‑to‑end encryption or on‑device processing) become standard, users must scrutinize permission requests carefully. AI is a powerful helper—but giving it free rein over intimate data without real safeguards is a risk many will come to regret. Choose tools that require minimal, transparent data access—and never let automation replace ownership of your personal information.

AI Data Privacy and Protection: The Complete Guide to Ethical AI, Data Privacy, and Security

A recent Accenture survey of over 2,200 security and technology leaders reveals a worrying gap: while AI adoption accelerates, cybersecurity measures are lagging. Roughly 36% say AI is advancing faster than their defenses, and about 90% admit they lack adequate security protocols for AI-driven threats—including securing AI models, data pipelines, and cloud infrastructure. Yet many organizations continue prioritizing rapid AI deployment over updating existing security frameworks. The solution lies not in starting from scratch, but in reinforcing and adapting current cybersecurity strategies to address AI-specific risks.

This disconnect between innovation and security is a classic but dangerous oversight. Organizations must embed cybersecurity into AI initiatives from the start—by integrating controls, enhancing talent, and updating frameworks—rather than treating it as an afterthought. Embedding security as a foundational pillar, not a bolt-on, is essential to ensure we reap AI benefits without compromising digital safety.

The AI Readiness Gap: High Usage, Low Security – Databricks AI Security Framework (DASF) and the AI Controls Matrix (AICM) from CSA can both be used effectively for AI security readiness assessments

AIMS and Data Governance

Hands-On Large Language Models: Language Understanding and Generation

AWS Databases for AI/ML: Architecting Intelligent Data Workflows (AWS Cloud Mastery: Building and Securing Applications)


Trust Me – ISO 42001 AI Management System

ISO/IEC 42001:2023 – from establishing to maintaining an AI management system

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI, AI Data Privacy and Protection, Hidden Privacy


May 22 2025

AI in the Legislature: Promise, Pitfalls, and the Future of Lawmaking

Category: AI, Security and Privacy Law | disc7 @ 9:00 am

Bruce Schneier’s essay, “AI-Generated Law,” delves into the emerging role of artificial intelligence in legislative processes, highlighting both its potential benefits and inherent risks. He examines global developments, such as the United Arab Emirates’ initiative to employ AI for drafting and updating laws, aiming to accelerate legislative procedures by up to 70%. This move is part of a broader strategy to transform the UAE into an “AI-native” government by 2027, with a substantial investment exceeding $3 billion. While this approach has garnered attention, it’s not entirely unprecedented. In 2023, Porto Alegre, Brazil, enacted a local ordinance on water meter replacement, drafted with the assistance of ChatGPT—a fact that was not disclosed to the council members at the time. Such instances underscore the growing trend of integrating AI into legislative functions worldwide.

Schneier emphasizes that the integration of AI into lawmaking doesn’t necessitate formal procedural changes. Legislators can independently utilize AI tools to draft bills, much like they rely on staffers or lobbyists. This democratization of legislative drafting tools means that AI can be employed at various governmental levels without institutional mandates. For example, since 2020, Ohio has leveraged AI to streamline its administrative code, eliminating approximately 2.2 million words of redundant regulations. Such applications demonstrate AI’s capacity to enhance efficiency in legislative processes.

The essay also addresses the potential pitfalls of AI-generated legislation. One concern is the phenomenon of “confabulation,” where AI systems might produce plausible-sounding but incorrect or nonsensical information. However, Schneier argues that human legislators are equally prone to errors, citing the Affordable Care Act’s near downfall due to a typographical mistake. Moreover, he points out that in non-democratic regimes, laws are often arbitrary and inhumane, regardless of whether they are drafted by humans or machines. Thus, the medium of law creation—human or AI—doesn’t inherently guarantee justice or fairness.

A significant concern highlighted is the potential for AI to exacerbate existing power imbalances. Given AI’s capabilities, there’s a risk that those in power might use it to further entrench their positions, crafting laws that serve specific interests under the guise of objectivity. This could lead to a veneer of neutrality while masking underlying biases or agendas. Schneier warns that without transparency and oversight, AI could become a tool for manipulation rather than a means to enhance democratic processes.

Despite these challenges, Schneier acknowledges the potential benefits of AI in legislative contexts. AI can assist in drafting clearer, more consistent laws, identifying inconsistencies, and ensuring grammatical precision. It can also aid in summarizing complex bills, simulating potential outcomes of proposed legislation, and providing legislators with rapid analyses of policy impacts. These capabilities can enhance the legislative process, making it more efficient and informed.

The essay underscores the inevitability of AI’s integration into lawmaking, driven by the increasing complexity of modern governance and the demand for efficiency. As AI tools become more accessible, their adoption in legislative contexts is likely to grow, regardless of formal endorsements or procedural changes. This organic integration poses questions about accountability, transparency, and the future role of human judgment in crafting laws.

In reflecting on Schneier’s insights, it’s evident that while AI offers promising tools to enhance legislative efficiency and precision, it also brings forth challenges that necessitate careful consideration. Ensuring transparency in AI-assisted lawmaking processes is paramount to maintain public trust. Moreover, establishing oversight mechanisms can help mitigate risks associated with bias or misuse. As we navigate this evolving landscape, a balanced approach that leverages AI’s strengths while safeguarding democratic principles will be crucial.

For further details, access the article here

Artificial Intelligence: Legal Issues, Policy, and Practical Strategies



Tags: #Lawmaking, AI, AI Laws, AI legislature


Oct 01 2024

Could APIs be the undoing of AI?

Category: AI, API Security | disc7 @ 11:32 am

The article discusses security challenges associated with large language models (LLMs) and APIs, focusing on issues like prompt injection, data leakage, and model theft. It highlights vulnerabilities identified by OWASP, including insecure output handling and denial-of-service attacks. API flaws can expose sensitive data or allow unauthorized access. To mitigate these risks, it recommends implementing robust access controls, API rate limits, and runtime monitoring, while noting the need for better protections against AI-based attacks.
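The recommended mitigations can be made concrete. Below is a minimal token-bucket rate limiter of the kind an API gateway might apply per client before requests reach an LLM backend. The class, parameter names, and default limits are illustrative sketches of ours, not taken from any specific product:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each client may burst up to `capacity`
    requests, with tokens refilled at `rate` per second thereafter."""

    def __init__(self, capacity: int = 10, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = defaultdict(lambda: float(capacity))  # tokens left per client
        self.last = defaultdict(time.monotonic)             # last-seen timestamp

    def allow(self, client_id: str) -> bool:
        """Return True if this request is within the client's budget."""
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False
```

A gateway would call `allow()` per request and return HTTP 429 on `False`; the same structure can throttle output tokens rather than requests, which maps better to LLM denial-of-service costs.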

The article also outlines defense strategies against attacks targeting LLMs. Providers are red-teaming systems to identify vulnerabilities, but this alone isn’t enough. It emphasizes the importance of monitoring API activity to prevent data exposure and defend against business logic abuse. Model theft (LLMjacking) is highlighted as a growing concern, where attackers exploit cloud-hosted LLMs for profit. Organizations must act swiftly to secure LLMs and avoid relying solely on third-party tools for protection.
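The monitoring idea above can be sketched as a simple baseline-deviation check: flag any API key whose per-call LLM token consumption jumps far above its own running average, a crude signal of LLMjacking or business-logic abuse. Class name, the 10-call warm-up, and the deviation factor are illustrative assumptions, not recommended thresholds:

```python
from collections import defaultdict

class UsageMonitor:
    """Flags API keys whose per-call token usage exceeds `factor` times
    their own running average (after a short warm-up baseline)."""

    WARMUP_CALLS = 10  # calls needed before we trust the baseline

    def __init__(self, factor: float = 5.0):
        self.factor = factor
        self.totals = defaultdict(int)  # cumulative tokens per key
        self.calls = defaultdict(int)   # call count per key

    def record(self, api_key: str, tokens_used: int) -> bool:
        """Log one call; return True if it looks anomalous."""
        alert = False
        if self.calls[api_key] >= self.WARMUP_CALLS:
            avg = self.totals[api_key] / self.calls[api_key]
            alert = tokens_used > self.factor * avg
        self.totals[api_key] += tokens_used
        self.calls[api_key] += 1
        return alert
```

In practice this would feed an alerting pipeline rather than return a boolean, and a production system would also track request rate and destination models, but the core pattern—compare each key against its own history, not a global constant—is the same.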

For more details, visit Help Net Security.

Hacking APIs: Breaking Web Application Programming Interfaces

Trust Me – AI Risk Management

AI Security risk assessment quiz

Implementing BS ISO/IEC 42001 will demonstrate that you’re developing AI responsibly


Tags: AI, AI Risk Management, API security risks, Hacking APIs