Nov 19 2024

Threat modeling your generative AI workload to evaluate security risk

Category: AI, Risk Assessment | disc7 @ 8:40 am

AWS emphasizes the importance of threat modeling for securing generative AI workloads, focusing on balancing risk management and business outcomes. A robust threat model is essential across the AI lifecycle stages, including design, deployment, and operations. Risks specific to generative AI, such as model poisoning and data leakage, need proactive mitigation, with organizations tailoring risk tolerance to business needs. Regular testing for vulnerabilities, like malicious prompts, ensures resilience against evolving threats.

Generative AI applications follow a structured lifecycle, from identifying business objectives to monitoring deployed models. Security considerations should be integral from the start, with measures like synthetic threat simulations during testing. For applications on AWS, leveraging its security tools, such as Amazon Bedrock and OpenSearch, helps enforce role-based access controls and prevent unauthorized data exposure.

AWS promotes building secure AI solutions on its cloud, which offers over 300 security services. Customers can utilize AWS infrastructure’s compliance and privacy frameworks while tailoring controls to organizational needs. For instance, techniques like Retrieval-Augmented Generation (RAG) ensure sensitive data is redacted before interaction with foundation models, minimizing risks.

Threat modeling is described as a collaborative process involving diverse roles—business stakeholders, developers, security experts, and adversarial thinkers. Consistency in approach and alignment with development workflows (e.g., Agile) ensures scalability and integration. Using existing tools for collaboration and issue tracking reduces friction, making threat modeling a standard step akin to unit testing.

Organizations are urged to align security practices with business priorities while maintaining flexibility. Regular audits and updates to models and controls help adapt to the dynamic AI threat landscape. AWS provides reference architectures and security matrices to guide organizations in implementing these best practices efficiently.

Threat Composer threat statement builder

You can write and document these possible threats to your application in the form of threat statements. Threat statements are a way to maintain consistency and conciseness when you document your threats. At AWS, we adhere to a threat grammar that follows this syntax:

[threat source] with [prerequisites] can [threat action] which leads to [threat impact], negatively impacting [impacted assets].

This threat grammar structure helps you to maintain consistency and allows you to iteratively write useful threat statements. As shown in Figure 2, Threat Composer provides you with this structure for new threat statements and includes examples to assist you.
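
As a rough illustration of how this grammar can be operationalized, here is a minimal Python sketch of a threat-statement builder; the field names and the example values are illustrative assumptions, not part of Threat Composer itself:

    # Minimal threat-statement builder following the AWS threat grammar.
    # Field names and the example below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ThreatStatement:
        threat_source: str
        prerequisites: str
        threat_action: str
        threat_impact: str
        impacted_assets: str

        def render(self) -> str:
            return (
                f"{self.threat_source} with {self.prerequisites} "
                f"can {self.threat_action} which leads to {self.threat_impact}, "
                f"negatively impacting {self.impacted_assets}."
            )

    statement = ThreatStatement(
        threat_source="An external actor",
        prerequisites="access to the public chat endpoint",
        threat_action="submit crafted prompts that override system instructions",
        threat_impact="disclosure of another tenant's documents",
        impacted_assets="customer data in the RAG knowledge base",
    )
    print(statement.render())

Rendering statements from structured fields rather than free text makes it easier to keep them consistent and to track them in the same issue trackers the threat-modeling workflow already uses.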

You can read the full article here

Proactive governance is a continuous process of risk and threat identification, analysis and remediation. It also includes proactively updating policies, standards and procedures in response to emerging threats or regulatory changes.

OWASP has updated its Top 10 Risks for Large Language Models (LLMs) for 2025, a crucial resource for developers, security teams, and organizations working with AI.

How CISOs Can Drive the Adoption of Responsible AI Practices

The CISO’s Guide to Securing Artificial Intelligence

AI in Cyber Insurance: Risk Assessments and Coverage Decisions

Hackers will use machine learning to launch attacks

To fight AI-generated malware, focus on cybersecurity fundamentals

4 ways AI is transforming audit, risk and compliance

AI security bubble already springing leaks

Could APIs be the undoing of AI?

The Rise of AI Bots: Understanding Their Impact on Internet Security

How to Address AI Security Risks With ISO 27001

Comprehensive vCISO Services

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: LLM, OWASP, Threat modeling


Nov 13 2024

How CISOs Can Drive the Adoption of Responsible AI Practices

Category: AI, Information Security | disc7 @ 11:47 am

Amid the rush to adopt AI, leaders face significant risks if they lack an understanding of the technology’s potential cyber threats. A PwC survey revealed that 40% of global leaders are unaware of generative AI’s risks, posing potential vulnerabilities. CISOs should take a leading role in assessing, implementing, and overseeing AI, as their expertise in risk management can ensure safer integration and focus on AI’s benefits. While some advocate for a chief AI officer, security remains integral, emphasizing the CISO’s/vCISO’s strategic role in guiding responsible AI adoption.

CISOs are crucial in managing the security and compliance of AI adoption within organizations, especially with evolving regulations. Their role involves implementing a security-first approach and risk management strategies, which includes aligning AI goals through an AI consortium, collaborating with cybersecurity teams, and creating protective guardrails.

They guide acceptable risk tolerance, manage governance, and set controls for AI use. Whether securing AI consumption or developing solutions, CISOs must stay updated on AI risks and deploy relevant resources.

A strong security foundation is essential, involving comprehensive encryption, data protection, and adherence to regulations like the EU AI Act. CISOs enable informed cross-functional collaboration, ensuring robust monitoring and swift responses to potential threats.

As AI becomes mainstream, organizations must integrate security throughout the AI lifecycle to guard against GenAI-driven cyber threats, such as social engineering and exploitation of vulnerabilities. This requires proactive measures and ongoing workforce awareness to counter these challenges effectively.

“AI will touch every business function, even in ways that have yet to be predicted. As the bridge between security efforts and business goals, CISOs serve as gatekeepers for quality control and responsible AI use across the business. They can articulate the necessary ground for security integrations that avoid missteps in AI adoption and enable businesses to unlock AI’s full potential to drive better, more informed business outcomes.”

You can read the full article here

CISOs play a pivotal role in guiding responsible AI adoption to balance innovation with security and compliance. They need to implement security-first strategies and align AI goals with organizational risk tolerance through stakeholder collaboration and robust risk management frameworks. By integrating security throughout the AI lifecycle, CISOs/vCISOs help protect critical assets, adhere to regulations, and mitigate threats posed by GenAI. Vigilance against AI-driven attacks and fostering cross-functional cooperation ensures that organizations are prepared to address emerging risks and foster safe, strategic AI use.

Need expert guidance? Book a free 30-minute consultation with a vCISO.

Comprehensive vCISO Services

The CISO’s Guide to Securing Artificial Intelligence

Hackers will use machine learning to launch attacks

To fight AI-generated malware, focus on cybersecurity fundamentals

4 ways AI is transforming audit, risk and compliance

AI security bubble already springing leaks

Could APIs be the undoing of AI?

The Rise of AI Bots: Understanding Their Impact on Internet Security

How to Address AI Security Risks With ISO 27001

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI privacy, AI security impact, AI threats, CISO, vCISO


Nov 06 2024

Hackers will use machine learning to launch attacks

Category: AI, Hacking | disc7 @ 1:37 pm

The article on CSO Online covers how hackers may leverage machine learning for cyber attacks, including methods like automating social engineering, enhancing malware evasion, launching advanced spear-phishing, and creating adaptable attack strategies that evolve with new data. Machine learning could also help attackers mimic human behavior to bypass security protocols and tailor attacks based on behavioral analysis. This evolving threat landscape underscores the importance of proactive, ML-driven security defenses.

The article covers key ways hackers could leverage machine learning to enhance their cyberattacks:

  1. Sophisticated Phishing: Machine learning enables attackers to tailor phishing emails that feel authentic and personally relevant, making phishing even more deceptive.
  2. Exploit Development: AI-driven tools assist in uncovering zero-day vulnerabilities by automating and refining traditional techniques like fuzzing, which involves bombarding software with random inputs to expose weaknesses (a minimal fuzzing sketch follows this list).
  3. Malware Creation: Machine learning algorithms can make malware more evasive by adapting to the target’s security measures in real time, allowing it to slip through defenses.
  4. Automated Reconnaissance: Hackers use AI to analyze massive data sets, such as social media profiles or organizational networks, to find weak points and personalize attacks.
  5. Credential Stuffing and Brute Force: AI speeds up credential-stuffing attacks by automating the testing of large sets of stolen credentials against a variety of online platforms.
  6. Deepfake Phishing: AI-generated audio and video deepfakes can impersonate trusted individuals, making social engineering attacks more convincing and difficult to detect.
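
To make the fuzzing technique in item 2 concrete, here is a minimal random fuzzer in Python; parse_record is a hypothetical target with a planted bug, and real fuzzers add the coverage feedback that machine learning can refine:

    # Minimal random fuzzer. `parse_record` is a hypothetical target with a
    # planted bug; real fuzzers (AFL, libFuzzer) add coverage feedback.
    import random
    import string

    def parse_record(data: str) -> None:
        # Simulated defect: mixing quotes and backslashes breaks the parser.
        if '"' in data and "\\" in data:
            raise ValueError("malformed escape sequence (simulated)")

    def fuzz(target, iterations: int = 10_000) -> None:
        for i in range(iterations):
            payload = "".join(
                random.choices(string.printable, k=random.randint(1, 128))
            )
            try:
                target(payload)
            except Exception as exc:
                print(f"iteration {i}: crash on {payload[:32]!r}: {exc}")
                return

    fuzz(parse_record)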

For more detail on these evolving threats, you can read the full article on CSO Online.

Machine Learning: 3 books in 1: – Hacking Tools for Computer + Hacking With Kali Linux + Python Programming- The ultimate beginners guide to improve your knowledge of programming and data science

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Machine Learning


Oct 11 2024

To fight AI-generated malware, focus on cybersecurity fundamentals

Category: AI | disc7 @ 8:08 am

Malware is increasingly adopting AI capabilities to improve traditional cyberattack techniques. Strains such as BlackMamba and EyeSpy leverage AI for activities like evading detection and conducting more sophisticated phishing attacks. These innovations are not entirely new but represent a refinement of existing malware strategies.

While AI enhances these attacks, its greatest danger lies in the automation of simple, widespread threats, potentially increasing the volume of attacks. To combat this, businesses need strong cybersecurity practices, including regular updates, training, and the integration of AI in defense systems for faster threat detection and response.

As with the future of AI-powered threats, AI’s impact on cybersecurity practitioners is likely to be more of a gradual change than an explosive upheaval. Rather than getting swept up in the hype or carried away by the doomsayers, security teams are better off doing what they’ve always done: keeping an eye on the future with both feet planted firmly in the present.

For more details, visit the IBM article.

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

ChatGPT for Cybersecurity Cookbook: Learn practical generative AI recipes to supercharge your cybersecurity skills

Previous DISC InfoSec posts on AI

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Adversarial AI Attacks, AI-generated malware, ChatGPT for Cybersecurity


Oct 04 2024

4 ways AI is transforming audit, risk and compliance

Category: AI, Risk Assessment, Security Compliance | disc7 @ 9:11 am

AI is revolutionizing audit, risk, and compliance by streamlining processes through automation. Tasks like data collection, control testing, and risk assessments, which were once time-consuming, are now being done faster and with more precision. This allows teams to focus on more critical strategic decisions.

In auditing, AI identifies anomalies and uncovers patterns in real-time, enhancing both the depth and accuracy of audits. AI’s ability to process large datasets also helps maintain compliance with evolving regulations like the EU’s AI Act, while mitigating human error.

Beyond audits, AI supports risk management by providing dynamic insights that adapt to changing threat landscapes. This enables continuous risk monitoring rather than periodic reviews, making organizations more responsive to emerging risks, including cybersecurity threats.

AI also plays a crucial role in bridging the gap between cybersecurity, compliance, and ESG (Environmental, Social, Governance) goals. It integrates these areas into a single strategy, allowing businesses to track and manage risks while aligning with sustainability initiatives and regulatory requirements.

For more details, visit here

AI Security risk assessment quiz

Trust Me – AI Risk Management

AI Management System Certification According to the ISO/IEC 42001 Standard

Responsible AI in the Enterprise: Practical AI risk management for explainable, auditable, and safe models with hyperscalers and Azure OpenAI

Previous posts on AI

Implementing BS ISO/IEC 42001 will demonstrate that you’re developing AI responsibly

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI audit, AI compliance, AI risk assessment, AI Risk Management


Oct 03 2024

AI security bubble already springing leaks

Category: AI | disc7 @ 1:17 pm

The article highlights how the AI boom, especially in cybersecurity, is already showing signs of strain. Many AI startups, despite initial hype, are facing financial challenges, as they lack the funds to develop large language models (LLMs) independently. Larger companies are taking advantage by acquiring or licensing the technologies from these smaller firms at a bargain.

AI is just one piece of the broader cybersecurity puzzle, but it isn’t a silver bullet. Issues like system updates and cloud vulnerabilities remain critical, and AI-only security solutions may struggle without more comprehensive approaches.

Some efforts to set benchmarks for LLMs, like NIST, are underway, helping to establish standards in areas such as automated exploits and offensive security. However, AI startups face increasing difficulty competing with big players who have the resources to scale.

For more information, you can visit here

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

Could APIs be the undoing of AI?

Previous posts on AI

AI Security risk assessment quiz

Implementing BS ISO/IEC 42001 will demonstrate that you’re developing AI responsibly

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Adversarial AI Attacks, AI security


Oct 01 2024

Could APIs be the undoing of AI?

Category: AI, API security | disc7 @ 11:32 am

The article discusses security challenges associated with large language models (LLMs) and APIs, focusing on issues like prompt injection, data leakage, and model theft. It highlights vulnerabilities identified by OWASP, including insecure output handling and denial-of-service attacks. API flaws can expose sensitive data or allow unauthorized access. To mitigate these risks, it recommends implementing robust access controls, API rate limits, and runtime monitoring, while noting the need for better protections against AI-based attacks.
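
As one concrete example of the rate-limiting recommendation, here is a minimal token-bucket sketch in Python; the per-client scoping and the capacity/refill values are illustrative assumptions:

    # Minimal token-bucket rate limiter of the kind recommended in front of
    # LLM-backed APIs. Thread safety and distributed state (e.g., Redis) are
    # omitted; capacity and refill values are illustrative.
    import time

    class TokenBucket:
        def __init__(self, capacity: int, refill_per_sec: float):
            self.capacity = capacity
            self.refill_per_sec = refill_per_sec
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets: dict[str, TokenBucket] = {}

    def check_request(client_id: str) -> bool:
        bucket = buckets.setdefault(client_id, TokenBucket(capacity=10, refill_per_sec=0.5))
        return bucket.allow()  # False -> respond with HTTP 429

In practice the same pattern usually lives at the API gateway (per-key quotas) rather than in application code, but the mechanics are the same.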

The post discusses defense strategies against attacks targeting large language models (LLMs). Providers are red-teaming systems to identify vulnerabilities, but this alone isn’t enough. It emphasizes the importance of monitoring API activity to prevent data exposure and defend against business logic abuse. Model theft (LLMjacking) is highlighted as a growing concern, where attackers exploit cloud-hosted LLMs for profit. Organizations must act swiftly to secure LLMs and avoid relying solely on third-party tools for protection.

For more details, visit Help Net Security.

Hacking APIs: Breaking Web Application Programming Interfaces

Trust Me – AI Risk Management

AI Security risk assessment quiz

Implementing BS ISO/IEC 42001 will demonstrate that you’re developing AI responsibly

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI, AI Risk Management, API security risks, Hacking APIs


Sep 26 2024

The Rise of AI Bots: Understanding Their Impact on Internet Security

Category: AI | disc7 @ 2:40 pm

The post highlights the rapid evolution of AI bots and their growing impact on internet security. Initially, bots performed simple, repetitive tasks, but modern AI bots leverage machine learning and natural language processing to engage in more complex activities.

Types of Bots:

  • Good Bots: Help with tasks like web indexing and customer support.
  • Malicious Bots: Involved in harmful activities like data scraping, account takeovers, DDoS attacks, and fraud.

Security Impacts:

  • AI bots are increasingly sophisticated, making cyberattacks more complex and difficult to detect. This has led to significant data breaches, resource drains, and a loss of trust in online services.

Defense Strategies:

  • Organizations are employing advanced detection algorithms, multi-factor authentication (MFA), CAPTCHA systems, and collaboration with cybersecurity firms to combat these threats (a toy detection heuristic follows this list).
  • Case studies show that companies across sectors are successfully reducing bot-related incidents by implementing these measures.
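
For a sense of what the simplest detection algorithms look like, here is a toy bot-scoring heuristic in Python combining two common signals, request rate and automation-tool user agents; the thresholds and markers are illustrative assumptions:

    # Toy bot heuristic: flag clients that burst requests or present
    # automation-tool user agents. Thresholds and markers are illustrative.
    import time
    from collections import deque

    WINDOW_SEC = 10
    MAX_REQUESTS = 20
    AUTOMATION_MARKERS = ("headlesschrome", "python-requests", "curl")

    history: dict[str, deque] = {}

    def looks_like_bot(client_ip: str, user_agent: str) -> bool:
        now = time.monotonic()
        window = history.setdefault(client_ip, deque())
        window.append(now)
        while window and now - window[0] > WINDOW_SEC:
            window.popleft()
        too_fast = len(window) > MAX_REQUESTS
        suspicious_ua = any(m in user_agent.lower() for m in AUTOMATION_MARKERS)
        return too_fast or suspicious_ua

Production systems replace hand-set thresholds like these with learned models, which is exactly why AI-driven bots and AI-driven defenses keep leapfrogging each other.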

Future Directions:

  • AI-powered security solutions and regulatory efforts will play key roles in mitigating the threats posed by evolving AI bots. Industry collaboration will also be essential to staying ahead of these malicious actors.

The rise of AI bots brings both benefits and challenges to the internet landscape. While they can provide useful services, malicious bots present serious security threats. For organizations to safeguard their assets and uphold user trust, it’s essential to understand the impact of AI bots on internet security and deploy advanced mitigation strategies. As AI technology progresses, staying informed and proactive will be critical in navigating the increasingly complex internet security environment.

For more information, you can visit the full article here

Rise of the Bots: How AI is Shaping Our Future

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Bots


Sep 25 2024

How to Address AI Security Risks With ISO 27001

Category: AI, ISO 27k, Risk Assessment | disc7 @ 10:10 am

The blog post discusses how ISO 27001 can help address AI-related security risks. AI’s rapid development raises data security concerns. Bridget Kenyon, a CISO and key figure in ISO 27001:2022, highlights the human aspects of security vulnerabilities and the importance of user education and behavioral economics in addressing AI risks. The article suggests ISO 27001 offers a framework to mitigate these challenges effectively.

The impact of AI on security | How ISO 27001 can help address such risks and concerns.

For more information, you can visit the full blog here.

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Security Risks


Sep 09 2024

AI cybersecurity needs to be as multi-layered as the system it’s protecting

The article emphasizes that AI cybersecurity must be multi-layered, like the systems it protects. Cybercriminals increasingly exploit large language models (LLMs) with attacks such as data poisoning, jailbreaks, and model extraction. To counter these threats, organizations must implement security strategies during the design, development, deployment, and operational phases of AI systems. Effective measures include data sanitization, cryptographic checks, adversarial input detection, and continuous testing. A holistic approach is needed to protect against growing AI-related cyber risks.

For more details, visit the full article here

Benefits and Concerns of AI in Data Security and Privacy

Predictive analytics provides substantial benefits in cybersecurity by helping organizations forecast and mitigate threats before they arise. Using statistical analysis, machine learning, and behavioral insights, it highlights potential risks and vulnerabilities. Despite hurdles such as data quality, model complexity, and the dynamic nature of threats, adopting best practices and tools enhances its efficacy in threat detection and response. As cyber risks evolve, predictive analytics will be essential for proactive risk management and the protection of organizational data assets.

AI raises concerns about data privacy and security, making it essential to ensure that AI tools comply with privacy regulations and protect sensitive information.

AI systems must adhere to privacy laws and regulations, such as the GDPR and CPRA, to protect individuals’ information. Compliance ensures ethical data handling practices.

Implementing robust security measures and data governance to protect data from unauthorized access and breaches is critical. Data protection practices safeguard sensitive information and maintain trust.

1. Predictive Analytics in Cybersecurity

Predictive analytics offers substantial benefits by helping organizations anticipate and prevent cyber threats before they occur. It leverages statistical models, machine learning, and behavioral analysis to identify potential risks. These insights enable proactive measures, such as threat mitigation and vulnerability management, ensuring an organization’s defenses are always one step ahead.
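
As a minimal sketch of the kind of model predictive analytics relies on, the following uses scikit-learn's IsolationForest to flag anomalous activity; the two features (failed logins and outbound megabytes per hour) and all values are illustrative assumptions:

    # Minimal anomaly-detection sketch with an isolation forest. Features
    # (failed logins/hour, MB transferred out/hour) are illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[2, 50], scale=[1, 10], size=(500, 2))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspicious = np.array([[40, 900]])  # login burst plus large outbound volume
    print(model.predict(suspicious))    # -1 flags an anomaly, 1 means normal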

2. AI and Data Privacy

AI systems raise concerns regarding data privacy and security, especially as they process sensitive information. Ensuring compliance with privacy regulations like GDPR and CPRA is crucial. Organizations must prioritize safeguarding personal data while using AI tools to maintain trust and avoid legal ramifications.

3. Security and Data Governance

Robust security measures are essential to protect data from breaches and unauthorized access. Implementing effective data governance ensures that sensitive information is managed, stored, and processed securely, thus maintaining organizational integrity and preventing potential data-related crises.

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

Data Governance: The Definitive Guide: People, Processes, and Tools to Operationalize Data Trustworthiness

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI attacks, AI security, Data Governance


Sep 06 2024

How cyber criminals are compromising AI software supply chains

Category: AI, Cybercrime, DevSecOps | disc7 @ 9:55 am

The rise of artificial intelligence (AI) has introduced new risks in software supply chains, particularly through open-source repositories like Hugging Face and GitHub. Cybercriminals, such as the NullBulge group, have begun targeting these repositories to poison data sets used for AI model training. These poisoned data sets can introduce misinformation or malicious code into AI systems, causing widespread disruption in AI-driven software and forcing companies to retrain models from scratch.

With AI systems relying heavily on vast open-source data sets, attackers have found it easier to infiltrate AI development pipelines. Compromised data sets can result in severe disruptions across AI supply chains, especially for businesses refining open-source models with proprietary data. As AI adoption grows, the challenge of maintaining data integrity, compliance, and security in open-source components becomes crucial for safeguarding AI advancements.

Open-source data sets are vital to AI development, as only large enterprises can afford to train models from scratch. However, these data sets, like LAION 5B, pose risks due to their size, making it difficult to ensure data quality and compliance. Cybercriminals exploit this by poisoning data sets, introducing malicious information that can compromise AI models. This ripple effect forces costly retraining efforts. The popularity of generative AI has further attracted attackers, heightening the risks across the entire AI supply chain.

The article emphasizes the importance of integrating security into all stages of AI development and usage, given the rise of AI-targeted cybercrime. Businesses must ensure traceability and explainability for AI outputs, keeping humans involved in the process. AI shouldn’t be seen solely as a cost-cutting tool, but rather as a technology that needs robust security measures. AI-powered security solutions can help analysts manage threats more effectively but should complement, not replace, human expertise.

For more detailed insights, check the full article here.

Blockchain, IoT, and AI Technologies for Supply Chain Management (Innovations in Intelligent Internet of Everything (IoE))

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI software supply chains


Sep 03 2024

AI Risk Management

Category: AI, Risk Assessment | disc7 @ 8:56 am

The IBM blog on AI risk management discusses how organizations can identify, mitigate, and address potential risks associated with AI technologies. AI risk management is a subset of AI governance, focusing specifically on preventing and addressing threats to AI systems. The blog outlines various types of risks—such as data, model, operational, and ethical/legal risks—and emphasizes the importance of frameworks like the NIST AI Risk Management Framework to ensure ethical, secure, and reliable AI deployment. Effective AI risk management enhances security, decision-making, regulatory compliance, and trust in AI systems.

AI risk management can help close this gap and empower organizations to harness AI systems’ full potential without compromising AI ethics or security.

Understanding the risks associated with AI systems

Like other types of security risk, AI risk can be understood as a measure of how likely a potential AI-related threat is to affect an organization and how much damage that threat would do.

While each AI model and use case is different, the risks of AI generally fall into four buckets:

  • Data risks
  • Model risks
  • Operational risks
  • Ethical and legal risks

The NIST AI Risk Management Framework (AI RMF) 

In January 2023, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) to provide a structured approach to managing AI risks. The NIST AI RMF has since become a benchmark for AI risk management.

The AI RMF’s primary goal is to help organizations design, develop, deploy and use AI systems in a way that effectively manages risks and promotes trustworthy, responsible AI practices.

Developed in collaboration with the public and private sectors, the AI RMF is entirely voluntary and applicable across any company, industry or geography.

The framework is divided into two parts. Part 1 offers an overview of the risks and characteristics of trustworthy AI systems. Part 2, the AI RMF Core, outlines four functions to help organizations address AI system risks:

  • Govern: Creating an organizational culture of AI risk management
  • Map: Framing AI risks in specific business contexts
  • Measure: Analyzing and assessing AI risks
  • Manage: Addressing mapped and measured risks
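
One lightweight, unofficial way to operationalize the four functions above is to track them as structured data in an internal risk register; the example activities below are illustrative assumptions, not NIST language:

    # Illustrative (not official) mapping of AI RMF Core functions to
    # example activities for an internal risk register.
    AI_RMF_CORE = {
        "Govern":  ["Assign AI risk owners", "Publish an acceptable-use policy"],
        "Map":     ["Inventory models and data sources", "Document intended use"],
        "Measure": ["Red-team prompts", "Track drift and error rates"],
        "Manage":  ["Prioritize mitigations", "Schedule periodic reassessment"],
    }

    for function, activities in AI_RMF_CORE.items():
        for activity in activities:
            print(f"[{function}] {activity}")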

For more details, visit the full article here.

Predictive analytics for cyber risks

Predictive analytics offers significant benefits in cybersecurity by allowing organizations to foresee and mitigate potential threats before they occur. Using methods such as statistical analysis, machine learning, and behavioral analysis, predictive analytics can identify future risks and vulnerabilities. While challenges like data quality, model complexity, and evolving threats exist, employing best practices and suitable tools can improve its effectiveness in detecting cyber threats and managing risks. As cyber threats evolve, predictive analytics will be vital in proactively managing risks and protecting organizational information assets.

Trust Me: ISO 42001 AI Management System is the first book about the most important global AI management system standard: ISO 42001. The ISO 42001 standard is groundbreaking. It will have more impact than ISO 9001 as autonomous AI decision making becomes more prevalent.

Why Is AI Important?

AI autonomous decision making is all around us. It is in places we take for granted such as Siri or Alexa. AI is transforming how we live and work. It becomes critical we understand and trust this prevalent technology:

“Artificial intelligence systems have become increasingly prevalent in everyday life and enterprise settings, and they’re now often being used to support human decision making. These systems have grown increasingly complex and efficient, and AI holds the promise of uncovering valuable insights across a wide range of applications. But broad adoption of AI systems will require humans to trust their output.” (Trustworthy AI, IBM website, 2024)


Trust Me – ISO 42001 AI Management System

Enhance your AI (artificial intelligence) initiatives with ISO 42001 and empower your organization to innovate while upholding governance standards.

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Governance, AI Risk Management, artificial intelligence, security risk management


Sep 02 2024

Types of AI

Category: AI | disc7 @ 2:20 pm

1. Based on Capability

  • Narrow AI (Weak AI): AI systems that are designed and trained for a specific task, such as facial recognition, language translation, or playing chess. These systems operate under a limited set of constraints and do not possess general intelligence. Examples include Siri, Alexa, and IBM’s Watson.
  • General AI (Strong AI): A theoretical form of AI that would have the ability to learn, understand, and apply intelligence across a wide range of tasks, much like a human being. General AI does not yet exist and remains a goal for future development.
  • Superintelligent AI: A hypothetical AI that surpasses human intelligence across all aspects, including creativity, decision-making, and emotional intelligence. This type is purely speculative at this point and often discussed in the context of ethical considerations and long-term AI safety.

2. Based on Functionality

  • Reactive Machines: The most basic type of AI that can only react to current situations without any memory or understanding of the past. An example is IBM’s Deep Blue, which played chess without learning from previous games.
  • Limited Memory: AI systems that can use past experiences or data to make decisions, albeit temporarily. Most modern AI applications, like self-driving cars, fall into this category as they use historical data to make real-time decisions.
  • Theory of Mind: This type of AI is in the conceptual stage and aims to understand human emotions, beliefs, and thoughts, and interact socially. Theory of Mind AI is not yet realized but is an area of active research.
  • Self-Aware AI: The most advanced form of AI, which would have its own consciousness, self-awareness, and emotions. This type does not currently exist and is largely a subject of science fiction and philosophical debate.

3. Based on Learning Techniques

AI comes in many forms. And while the general process of automated technology carrying out a series of tasks remains consistent, how and why this happens will vary. Here are some examples of different types of AI which you might come across.

Deep Learning

An evolution of machine learning, this more thorough approach sees AI programmed in such a way that it’s able to identify images, sounds, and text without the need for human input. While with machine learning you may have to describe an image to the AI yourself, with deep learning it can process and understand the image on its own.

Natural Language Processing (NLP)

If you’ve ever spoken to Siri, Alexa, or any other virtual assistant, you will have interacted with NLP. This technology is able to comprehend, manipulate, and generate human language in a way that allows it to have its very own “voice”. NLP can understand questions you give it, then respond accordingly. It can also be used in text form, such as a chatbot on a website. 

Computer vision

This futuristic form of tech allows computers to interpret and analyze the human world through the classification of images and objects. In doing so, it allows an AI to see the world through the eyes of a living person. This kind of technology is most commonly associated with driverless cars, where the vehicle needs to be able to process the world around it as a normal driver would. 

Machine Learning

This AI approach runs data through a series of algorithms to formulate a picture of how a human would approach a situation or task. Over time, the program is able to adapt and even learn more about the human thinking process, which helps it to improve its overall accuracy.

Generative AI

A popular online fad in 2023, generative AI is the name given to technology which is able to create images, text, or other media independently. A user simply needs to input what they want created, with the AI able to draw on its training to produce something that has similar characteristics.

Speech recognition

One of the oldest forms of AI, this tech is able to understand and interpret what you’re saying out loud, then convert it into text or audio format. This kind of technology is often confused with voice recognition, which, rather than transcribing what you’re saying, only recognises the voice of the user.

Robotic Process Automation (RPA)

RPA technology is software which makes it easier to build, deploy, and manage robots that emulate human interactions. These robotic helpers are able to carry out a number of tasks virtually, at speeds which humans would be incapable of replicating.

Tomorrow’s Artificial Intelligence: A Futurist’s Guide to Understanding and Harnessing AI Technology That Is Shaping Our World (Embracing Artificial Intelligence)

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Types of AI


Jul 26 2024

Las Vegas transit system is nation’s first to plan full deployment of AI surveillance system for weapons

Category: AI | disc7 @ 11:41 am

https://www.cnbc.com/2024/07/25/vegas-transit-system-first-in-us-ai-scan-for-weapons.html

Key Points

  • The Regional Transportation Commission of Southern Nevada, which includes Las Vegas, will be the first transit system in the U.S. to implement system-wide AI weapons scans.
  • Transit systems nationwide are grappling with ways to reduce violence.
  • AI-linked cameras and acoustic technology are seen as viable options to better respond to mass shootings in public places across the U.S., according to law enforcement and public safety teams, though both approaches have downsides.
Image: A sign promoting safety is seen on the Regional Transportation Commission 109 Maryland Parkway bus in Las Vegas, June 8, 2023. (Las Vegas Review-Journal | Tribune News Service | Getty Images)

On your next visit to Vegas, an extra set of eyes will be watching you if you decide to hop onto the local transit system.

As part of a $33 million multi-year upgrade to fortify its security, the Regional Transportation Commission of Southern Nevada is set to add system-wide AI surveillance from gun detection software vendor ZeroEyes that scans riders on its over 400 buses in an attempt to identify anyone brandishing a firearm.

Tom Atteberry, RTC’s director of safety and security operations, said that seconds matter in a situation where an active shooting unfolds, and implementing the system could give authorities an edge. “Time is of the essence; it gives us time to identify a firearm being brandished, so they can be notified and get to the scene and save lives,” he said.

Monitoring for and preventing mass shootings is a challenge that public places across the country grapple with daily. Violent crime on transit systems, specifically, remains an issue in major metro areas, with a report released in late 2023 by the Department of Transportation detailing concerns from transit agency officials around the U.S. about rising violence on their transit systems. According to a database maintained by the Bureau of Transportation Statistics, assaults on transit systems have spiked, and there has been a rise in public fears about transportation safety.

For details:

Las Vegas transit system is nation’s first to plan full deployment of AI surveillance system for weapons

Wearable Devices, Surveillance Systems, and AI for Women’s Wellbeing

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI surveillance system, Las Vegas transit system


Jun 05 2024

Unauthorized AI is eating your company data, thanks to your employees

Category: AI, Data Breach, data security | disc7 @ 8:09 am
https://www.csoonline.com/article/2138447/unauthorized-ai-is-eating-your-company-data-thanks-to-your-employees.html

Legal documents, HR data, source code, and other sensitive corporate information are being fed into unlicensed, publicly available AIs at a swift rate, leaving IT leaders with a mounting shadow AI mess.

Employees at many organizations are engaging in widespread use of unauthorized AI models behind the backs of their CIOs and CISOs, according to a recent study.

Employees are sharing company legal documents, source code, and employee information with unlicensed, non-corporate versions of AIs, including ChatGPT and Google Gemini, potentially leading to major headaches for CIOs and other IT leaders, according to research from Cyberhaven Labs.

About 74% of the ChatGPT use at work is through non-corporate accounts, potentially giving the AI the ability to use or train on that data, says the Cyberhaven Q2 2024 AI Adoption and Risk Report, based on actual AI usage patterns of 3 million workers. More than 94% of workplace use of Google’s Gemini and Bard AIs is from non-corporate accounts, the study reveals.

Nearly 83% of all legal documents shared with AI tools go through non-corporate accounts, the report adds, while about half of all source code, R&D materials, and HR and employee records go into unauthorized AIs.

The amount of data put into all AI tools saw nearly a five-fold increase between March 2023 and March 2024, according to the report. “End users are adopting new AI tools faster than IT can keep up, fueling continued growth in ‘shadow AI,’” the report adds.

Where does the data go?

At the same time, many users may not know what happens to their companies’ data once they share it with an unlicensed AI. ChatGPT’s terms of use, for example, say the ownership of the content entered remains with the users. However, ChatGPT may use that content to provide, maintain, develop, and improve its services, meaning it could train itself using shared employee records. Users can opt out of ChatGPT training itself on their data.

So far, there have been no high-profile reports about major company secrets spilled by large public AIs, but security experts worry about what happens to company data once an AI ingests it. On May 28, OpenAI announced a new Safety and Security Committee to address concerns.

It’s difficult to assess the risk of sharing confidential or sensitive information with publicly available AIs, says Brian Vecci, field CTO at Varonis, a cloud security firm. It seems unlikely that companies like Google or ChatGPT developer OpenAI will allow their AIs to leak sensitive business data to the public, given the headaches such disclosures would cause them, he says.

Still, there aren’t many rules governing what AI developers can do with the data users provide them, some security experts note. Many more AI models will be rolled out in the coming years, Vecci says.

“When we get outside of the realm of OpenAI and Google, there are going to be other tools that pop up,” he says. “There are going to be AI tools out there that will do something interesting but are not controlled by OpenAI or Google, which presumably have much more incentive to be held accountable and treat data with care.”

The coming wave of second- and third-tier AI developers may be fronts for hacking groups, may see profit in selling confidential company information, or may lack the cybersecurity protections that the big players have, Vecci says.

“There’s some version of an LLM tool that’s similar to ChatGPT and is free and fast and controlled by who knows who,” he says. “Your employees are using it, and they’re forking over source code and financial statements, and that could be a much higher risk.”

Risky behavior

Sharing company or customer data with any unauthorized AI creates risk, regardless of whether the AI model trains on that data or shares it with other users, because that information now exists outside company walls, adds Pranava Adduri, CEO of Bedrock Security.

Adduri recommends organizations sign licensed deals, containing data use restrictions, with AI vendors so that employees can experiment with AI.

“The problem boils down to the inability to control,” he says. “If the data is getting shipped off to a system where you don’t have that direct control, usually the risk is managed through legal contracts and legal agreements.”

AvePoint, a cloud data management company, has signed an AI contract to head off the use of shadow AI, says Dana Simberkoff, chief risk, privacy, and information security officer at the company. AvePoint thoroughly reviewed the licensing terms, including the data use restrictions, before signing.

A major problem with shadow AI is that users don’t read the privacy policy or terms of use before shoveling company data into unauthorized tools, she says.

“Where that data goes, how it’s being stored, and what it may be used for in the future is still not very transparent,” she says. “What most everyday business users don’t necessarily understand is that these open AI technologies, the ones from a whole host of different companies that you can use in your browser, actually feed themselves off of the data that they’re ingesting.”

Training and security

AvePoint has tried to discourage employees from using unauthorized AI tools through a comprehensive education program, through strict access controls on sensitive data, and through other cybersecurity protections preventing the sharing of data. AvePoint has also created an AI acceptable use policy, Simberkoff says.

Employee education focuses on common employee practices like granting wide access to a sensitive document. Even if an employee only notifies three coworkers that they can review the document, allowing general access can enable an AI to ingest the data.

“AI solutions are like this voracious, hungry beast that will take in anything that they can,” she says.

Using AI, even officially licensed ones, means organizations need to have good data management practices in place, Simberkoff adds. An organization’s access controls need to limit employees from seeing sensitive information not necessary for them to do their jobs, she says, and longstanding security and privacy best practices still apply in the age of AI.

Rolling out an AI, with its constant ingestion of data, is a stress test of a company’s security and privacy plans, she says.

“This has become my mantra: AI is either the best friend or the worst enemy of a security or privacy officer,” she adds. “It really does drive home everything that has been a best practice for 20 years.”

Simberkoff has worked with several AvePoint customers that backed away from AI projects because they didn’t have basic controls such as an acceptable use policy in place.

“They didn’t understand the consequences of what they were doing until they actually had something bad happen,” she says. “If I were to give one really important piece of advice it’s that it’s okay to pause. There’s a lot of pressure on companies to deploy AI quickly.”

Artificial Intelligence for Cybersecurity 

ChatGPT for Cybersecurity Cookbook: Learn practical generative AI recipes to supercharge your cybersecurity skills

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Artificial Intelligence for Cybersecurity, ChatGPT for Cybersecurity


Jun 03 2024

OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

Category: AI | disc7 @ 11:13 am

https://thehackernews.com/2024/05/openai-meta-tiktok-disrupt-multiple-ai.html

OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence (AI) tools to manipulate public discourse or political outcomes online while obscuring their true identity.

These activities, which were detected over the past three months, used its AI models to generate short comments and longer articles in a range of languages, cook up names and bios for social media accounts, conduct open-source research, debug simple code, and translate and proofread texts.

The AI research organization said two of the networks were linked to actors in Russia, including a previously undocumented operation codenamed Bad Grammar that primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States and the United States (U.S.) with sloppy content in Russian and English.

Deep Disinformation: Can AI-Generated Fake News…

“The network used our models and accounts on Telegram to set up a comment-spamming pipeline,” OpenAI said. “First, the operators used our models to debug code that was apparently designed to automate posting on Telegram. They then generated comments in Russian and English in reply to specific Telegram posts.”

The operators also used its models to generate comments under the guise of various fictitious personas belonging to different demographics from across both sides of the political spectrum in the U.S.

The other Russia-linked information operation corresponded to the prolific Doppelganger network (aka Recent Reliable News), which was sanctioned by the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) earlier this March for engaging in cyber influence operations.

The network is said to have used OpenAI’s models to generate comments in English, French, German, Italian, and Polish that were shared on X and 9GAG; translate and edit articles from Russian to English and French that were then posted on bogus websites maintained by the group; generate headlines; and convert news articles posted on its sites into Facebook posts.

Fake News: AI & All News Requires Critical Thinking

“This activity targeted audiences in Europe and North America and focused on generating content for websites and social media,” OpenAI said. “The majority of the content that this campaign published online focused on the war in Ukraine. It portrayed Ukraine, the US, NATO and the EU in a negative light and Russia in a positive light.”

AI-Powered Disinformation Campaigns

The other three activity clusters are listed below –

  • A Chinese-origin network known as Spamouflage that used its AI models to research public social media activity; generate texts in Chinese, English, Japanese, and Korean for posting across X, Medium, and Blogger; propagate content criticizing Chinese dissidents and abuses against Native Americans in the U.S.; and debug code for managing databases and websites
  • An Iranian operation known as the International Union of Virtual Media (IUVM) that used its AI models to generate and translate long-form articles, headlines, and website tags in English and French for subsequent publication on a website named iuvmpress[.]co
  • A network referred to as Zero Zeno emanating from a for-hire Israeli threat actor, a business intelligence firm called STOIC, that used its AI models to generate and disseminate anti-Hamas, anti-Qatar, pro-Israel, anti-BJP, and pro-Histadrut content across Instagram, Facebook, X, and its affiliated websites targeting users in Canada, the U.S., India, and Ghana.

“The [Zero Zeno] operation also used our models to create fictional personas and bios for social media based on certain variables such as age, gender and location, and to conduct research into people in Israel who commented publicly on the Histadrut trade union in Israel,” OpenAI said, adding its models refused to supply personal data in response to these prompts.

The ChatGPT maker emphasized in its first threat report on IO that none of these campaigns “meaningfully increased their audience engagement or reach” from exploiting its services.

The development comes as concerns are being raised that generative AI (GenAI) tools could make it easier for malicious actors to generate realistic text, images and even video content, making it challenging to spot and respond to misinformation and disinformation operations.

“So far, the situation is evolution, not revolution,” Ben Nimmo, principal investigator of intelligence and investigations at OpenAI, said. “That could change. It’s important to keep watching and keep sharing.”

Meta Highlights STOIC and Doppelganger

Separately, Meta, in its quarterly Adversarial Threat Report, also shared details of STOIC’s influence operations, saying it removed a mix of nearly 500 compromised and fake Facebook and Instagram accounts used by the actor to target users in Canada and the U.S.

“This campaign demonstrated a relative discipline in maintaining OpSec, including by leveraging North American proxy infrastructure to anonymize its activity,” the social media giant said.

Meta further said it removed hundreds of accounts, comprising deceptive networks from Bangladesh, China, Croatia, Iran, and Russia, for engaging in coordinated inauthentic behavior (CIB) with the goal of influencing public opinion and pushing political narratives about topical events.

The China-linked malign network, for instance, mainly targeted the global Sikh community and consisted of several dozen Instagram and Facebook accounts, pages, and groups that were used to spread manipulated imagery and English and Hindi-language posts related to a non-existent pro-Sikh movement, the Khalistan separatist movement, and criticism of the Indian government.

It pointed out that it hasn’t so far detected any novel and sophisticated use of GenAI-driven tactics, with the company highlighting instances of AI-generated video news readers that were previously documented by Graphika and GNET, indicating that despite the largely ineffective nature of these campaigns, threat actors are actively experimenting with the technology.

Doppelganger, Meta said, has continued its “smash-and-grab” efforts, albeit with a major shift in tactics in response to public reporting, including the use of text obfuscation to evade detection (e.g., using “U. kr. ai. n. e” instead of “Ukraine”) and dropping its practice of linking to typosquatted domains masquerading as news media outlets since April.
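
Defenses against this kind of obfuscation typically normalize text before keyword matching; here is a minimal Python sketch, where the watchlist and the normalization rule are illustrative assumptions:

    # Minimal normalization pass for punctuation-obfuscated terms such as
    # "U. kr. ai. n. e". Watchlist and rules are illustrative; collapsing
    # all separators can cause false positives across word boundaries.
    import re

    def normalize(text: str) -> str:
        return re.sub(r"[.\s\-_]+", "", text).lower()

    WATCHLIST = {"ukraine"}

    def flag_obfuscated(text: str) -> bool:
        return any(term in normalize(text) for term in WATCHLIST)

    print(flag_obfuscated("Latest on U. kr. ai. n. e"))  # True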

“The campaign is supported by a network with two categories of news websites: typosquatted legitimate media outlets and organizations, and independent news websites,” Sekoia said in a report about the pro-Russian adversarial network published last week.

“Disinformation articles are published on these websites and then disseminated and amplified via inauthentic social media accounts on several platforms, especially video-hosting ones like Instagram, TikTok, Cameo, and YouTube.”

These social media profiles, created in large numbers and in waves, leverage paid ad campaigns on Facebook and Instagram to direct users to propaganda websites. The Facebook accounts are also called burner accounts because they are used to share only one article before being abandoned.

The French cybersecurity firm described the industrial-scale campaigns – which are geared towards both Ukraine’s allies and Russian-speaking domestic audiences on the Kremlin’s behalf – as multi-layered, leveraging the social botnet to initiate a redirection chain that passes through two intermediate websites in order to lead users to the final page.

Doppelganger, along with another coordinated pro-Russian propaganda network designated as Portal Kombat, has also been observed amplifying content from a nascent influence network dubbed CopyCop, demonstrating a concerted effort to promulgate narratives that project Russia in a favorable light.

Recorded Future, in a report released this month, said CopyCop is likely operated from Russia, taking advantage of inauthentic media outlets in the U.S., the U.K., and France to promote narratives that undermine Western domestic and foreign policy, and spread content pertaining to the ongoing Russo-Ukrainian war and the Israel-Hamas conflict.

“CopyCop extensively used generative AI to plagiarize and modify content from legitimate media sources to tailor political messages with specific biases,” the company said. “This included content critical of Western policies and supportive of Russian perspectives on international issues like the Ukraine conflict and the Israel-Hamas tensions.”

TikTok Disrupts Covert Influence Operations

Earlier in May, ByteDance-owned TikTok said it had uncovered and stamped out several such networks on its platform since the start of the year, including ones that it traced back to Bangladesh, China, Ecuador, Germany, Guatemala, Indonesia, Iran, Iraq, Serbia, Ukraine, and Venezuela.

TikTok, which is currently facing scrutiny in the U.S. following the passage of a law that would force its Chinese parent company to sell the app or face a ban in the country, has become an increasingly preferred platform for Russian state-affiliated accounts in 2024, according to a new report from the Brookings Institution.

What’s more, the social video hosting service has emerged as a breeding ground for what has been characterized as a complex influence campaign known as Emerald Divide (aka Storm-1364) that is believed to be orchestrated by Iran-aligned actors since 2021 targeting Israeli society.

“Emerald Divide is noted for its dynamic approach, swiftly adapting its influence narratives to Israel’s evolving political landscape,” Recorded Future said.

“It leverages modern digital tools such as AI-generated deepfakes and a network of strategically operated social media accounts, which target diverse and often opposing audiences, effectively stoking societal divisions and encouraging physical actions such as protests and the spreading of anti-government messages.”

The ChatGPT Edge: Unleashing The Limitless Potential Of AI Using Simple And Creative Prompts To Boost Productivity, Maximize Efficiency

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: and TikTok, Covert Influence Campaigns, Meta, OpenAI


Apr 26 2024

25 cybersecurity AI stats you should know

Category: AI, cyber security | disc7 @ 7:33 am

Security pros are cautiously optimistic about AI

Cloud Security Alliance and Google Cloud | The State of AI and Security Survey Report | April 2024

  • 55% of organizations plan to adopt GenAI solutions within this year, signaling a substantial surge in GenAI integration.
  • 48% of professionals expressed confidence in their organization’s ability to execute a strategy for leveraging AI in security.
  • 12% of security professionals believe AI will completely replace their role.

AI abuse and misinformation campaigns threaten financial institutions

FS-ISAC | Navigating Cyber 2024 | March 2024

  • Threat actors can use generative AI to write malware and more skilled cybercriminals could exfiltrate information from or inject contaminated data into the large language models (LLMs) that train GenAI.
  • Recent quantum computing and AI advancements are expected to challenge established cryptographic algorithms.

Enterprises increasingly block AI transactions over security concerns

Zscaler | AI Security Report 2024 | March 2024

  • Today, enterprises block 18.5% of all AI transactions, a 577% increase from April 2023 to January 2024, for a total of more than 2.6 billion blocked transactions.
  • Some of the most popular AI tools are also the most blocked. Indeed, ChatGPT holds the distinction of being both the most-used and most-blocked AI application.

Scammers exploit tax season anxiety with AI tools

McAfee | Tax Scams Study 2024 | March 2024

  • Of the people who clicked on fraudulent links from supposed tax services, 68% lost money. Among those, 29% lost more than $2,500, and 17% lost more than $10,000.
  • 9% of Americans feel confident in their ability to spot deepfake videos or recognize AI-generated audio, such as fake renditions of IRS agents.

Advanced AI, analytics, and automation are vital to tackle tech stack complexity

Dynatrace | The state of observability 2024 | March 2024

  • 97% of technology leaders find traditional AIOps models are unable to tackle the data overload.
  • 88% of organizations say the complexity of their technology stack has increased in the past 12 months, and 51% say it will continue to increase.
  • 72% of organizations have adopted AIOps to reduce the complexity of managing their multicloud environment.

Today’s biggest AI security challenges

HiddenLayer | AI Threat Landscape Report 2024 | March 2024

  • 98% of companies surveyed view some of their AI models as vital for business success, and 77% have experienced breaches in their AI systems over the past year.
  • 61% of IT leaders acknowledge shadow AI, solutions that are not officially known or under the control of the IT department, as a problem within their organizations.
  • Researchers revealed the extensive use of AI in modern businesses, noting an average of 1,689 AI models actively used by companies. This has made AI security a top priority, with 94% of IT leaders dedicating funds to safeguard their AI in 2024.

AI tools put companies at risk of data exfiltration

Code42 | Annual Data Exposure Report 2024 | March 2024

  • Since 2021, there has been a 28% average increase in monthly insider-driven data exposure, loss, leak, and theft events.
  • While 99% of companies have data protection solutions in place, 78% of cybersecurity leaders admit they’ve still had sensitive data breached, leaked, or exposed.

95% believe LLMs making phishing detection more challenging

LastPass | LastPass survey 2024 | March 2024

  • More than 95% of respondents believe dynamic content through Large Language Models (LLMs) makes detecting phishing attempts more challenging.
  • Phishing will remain the top social engineering threat to businesses throughout 2024, surpassing other threats like business email compromise, vishing, smishing or baiting.

How AI is reshaping the cybersecurity job landscape

ISC2 | AI Cyber 2024 | February 2024

  • 88% of cybersecurity professionals believe that AI will significantly impact their jobs, now or in the near future, and 35% have already witnessed its effects.
  • 75% of respondents are moderately to extremely concerned that AI will be used for cyberattacks or other malicious activities.
  • The survey revealed that 12% of respondents said their organizations had blocked all access to generative AI tools in the workplace.

Businesses banning or limiting use of GenAI over privacy risks

Cisco | Cisco 2024 Data Privacy Benchmark Study | February 2024

  • 63% have established limitations on what data can be entered, 61% have limits on which employees can use GenAI tools, and 27% said their organization had banned GenAI applications altogether for the time being.
  • Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have positively impacted them, and only 6% said the impact has been negative.
  • 91% of organizations recognize they need to do more to reassure their customers that their data is being used only for intended and legitimate purposes in AI.

Unlocking GenAI’s full potential through work reinvention

Accenture | Work, workforce, workers: Reinvented in the age of generative AI | January 2024

  • While 95% of workers see value in working with GenAI, 60% are also concerned about job loss, stress and burnout.
  • 47% of reinventors are already thinking bigger—recognizing that their processes will require significant change to fully leverage GenAI.

Adversaries exploit trends, target popular GenAI apps

Netskope | Cloud and Threat Report 2024 | January 2024

  • In 2023, ChatGPT was the most popular generative AI application, accounting for 7% of enterprise usage.
  • Half of all enterprise users interact with between 11 and 33 cloud apps each month, with the top 1% using more than 96 apps per month.

Artificial Intelligence for Cybersecurity

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: cybersecurity AI stats


Apr 19 2024

NSA, CISA & FBI Released Best Practices For AI Security Deployment 2024

Category: AIdisc7 @ 8:03 am

In a groundbreaking move, the U.S. National Security Agency and its domestic and international partners have released a comprehensive guide for organizations deploying and operating AI systems designed and developed by another firm.

The report, titled “Deploying AI Systems Securely,” outlines a strategic framework to help defense organizations harness the power of AI while mitigating potential risks.

The report was authored by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC).

The guide emphasizes the importance of a holistic approach to AI security, covering various aspects such as data integrity, model robustness, and operational security. It outlines a six-step process for secure AI deployment (a brief illustrative sketch follows the list):

  1. Understand the AI system and its context
  2. Identify and assess risks
  3. Develop a security plan
  4. Implement security controls
  5. Monitor and maintain the AI system
  6. Continuously improve security practices
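
To make the process concrete, the six steps could be tracked as a simple go-live checklist. The sketch below is purely illustrative (the report prescribes no code); the step names are the report's, while the class, field, and system name are hypothetical.

    # Illustrative sketch: track the guide's six-step process as a deployment
    # checklist. Step names come from the report; everything else is hypothetical.
    from dataclasses import dataclass, field

    STEPS = [
        "Understand the AI system and its context",
        "Identify and assess risks",
        "Develop a security plan",
        "Implement security controls",
        "Monitor and maintain the AI system",
        "Continuously improve security practices",
    ]

    @dataclass
    class SecureAIDeployment:
        system_name: str
        completed: set = field(default_factory=set)

        def complete(self, step: str) -> None:
            if step not in STEPS:
                raise ValueError(f"unknown step: {step}")
            self.completed.add(step)

        def ready_to_operate(self) -> bool:
            # Steps 1-4 gate go-live; steps 5 and 6 are ongoing activities.
            return all(s in self.completed for s in STEPS[:4])

    deployment = SecureAIDeployment("fraud-detection-assistant")
    for step in STEPS[:4]:
        deployment.complete(step)
    print(deployment.ready_to_operate())  # True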

Addressing AI Security Challenges

The report acknowledges the growing importance of AI in modern warfare but also highlights the unique security challenges that come with integrating these advanced technologies: as the military increasingly relies on AI-powered systems, it is crucial to address potential vulnerabilities and ensure the integrity of these critical assets.

Some of the key security concerns outlined in the document include:

  • Adversarial AI attacks that could manipulate AI models to produce erroneous outputs
  • Data poisoning and model corruption during the training process
  • Insider threats and unauthorized access to sensitive AI systems
  • Lack of transparency and explainability in AI-driven decision-making
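
As one concrete illustration of mitigating the data-poisoning risk above, training inputs can be pinned to known-good digests before every run. This sketch is our own, not taken from the report; the file names and manifest values are hypothetical placeholders.

    # Illustrative sketch: verify training-data files against a known-good
    # SHA-256 manifest before training, a basic control against data poisoning
    # and model corruption. File names and digests are hypothetical.
    import hashlib
    from pathlib import Path

    MANIFEST = {
        "train.csv": "<expected-sha256-hex>",
        "labels.csv": "<expected-sha256-hex>",
    }

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_dataset(data_dir: Path) -> bool:
        # Refuse to train if any file is missing or its digest has drifted.
        return all(
            (data_dir / name).exists() and sha256_of(data_dir / name) == expected
            for name, expected in MANIFEST.items()
        )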

A Comprehensive Security Framework

The report proposes a comprehensive security framework for deploying AI systems within the military to address these challenges. The framework consists of three main pillars:

  1. Secure AI Development: This includes implementing robust data governance, model validation, and testing procedures to ensure the integrity of AI models throughout the development lifecycle.
  2. Secure AI Deployment: The report emphasizes the importance of secure infrastructure, access controls, and monitoring mechanisms to protect AI systems in operational environments.
  3. Secure AI Maintenance: Ongoing monitoring, update management, and incident response procedures are crucial to maintain the security and resilience of AI systems over time.

Key Recommendations

The report provides detailed guidance on securely deploying AI systems, emphasizing the importance of careful setup and configuration and the application of traditional IT security best practices. Among the key recommendations are:

Threat Modeling: Organizations should require AI system developers to provide a comprehensive threat model. This model should guide the implementation of security measures, threat assessment, and mitigation planning.

Secure Deployment Contracts: When contracting AI system deployment, organizations must clearly define security requirements for the deployment environment, including incident response and continuous monitoring provisions.

Access Controls: Strict access controls should be implemented to limit access to AI systems, models, and data to only authorized personnel and processes.
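
A minimal sketch of what such least-privilege access control could look like in code follows; the roles, actions, and mapping here are hypothetical examples, not drawn from the guidance.

    # Illustrative RBAC sketch: map roles to the AI resources and actions they
    # may use. Role and action names are hypothetical.
    ROLE_PERMISSIONS = {
        "ml-engineer": {"model:read", "model:deploy", "data:read"},
        "analyst": {"model:read"},
        "auditor": {"model:read", "logs:read"},
    }

    def authorize(role: str, action: str) -> bool:
        return action in ROLE_PERMISSIONS.get(role, set())

    assert authorize("analyst", "model:read")
    assert not authorize("analyst", "model:deploy")  # least privilege: denied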

Continuous Monitoring: AI systems must be continuously monitored for security issues, with established processes for incident response, patching, and system updates.

Collaboration And Continuous Improvement

The report also stresses the importance of cross-functional collaboration and continuous improvement in AI security: securing AI systems is not a one-time effort, but a sustained, collaborative undertaking involving experts from various domains.

The agencies plan to work closely with industry partners, academic institutions, and other government bodies to further refine and implement the security framework outlined in the report.

Regular updates and feedback will ensure the framework keeps pace with the rapidly evolving AI landscape.

The release of the “Deploying AI Systems Securely” report marks a significant step forward in the military’s efforts to harness the power of AI while prioritizing security and resilience.

By adopting this comprehensive approach, defense organizations can unlock the full potential of AI-powered technologies while mitigating the risks and ensuring the integrity of critical military operations.

The AI Playbook: Mastering the Rare Art of Machine Learning Deployment

Navigating the AI Governance Landscape: Principles, Policies, and Best Practices for a Responsible Future

Trust Me – AI Risk Management

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Governance, AI Risk Management, Best Practices For AI


Apr 05 2024

Hackers Hijack Facebook Pages To Mimic AI Brands & Inject Malware

Category: AI,Hacking,Malwaredisc7 @ 8:08 am

Hackers have been found hijacking Facebook pages to impersonate popular AI brands, thereby injecting malware into the devices of unsuspecting users.

This revelation comes from a detailed investigation by Bitdefender Labs, which has been closely monitoring these malicious campaigns since June 2023.

Recent analyses of malvertising campaigns have revealed a disturbing trend.

Ads are distributing an assortment of malicious software, which poses severe risks to consumers’ devices, data, and identity.

Unwitting interactions with these malware-serving ads could lead to downloading and deploying harmful files, including Rilide Stealer, Vidar Stealer, IceRAT, and Nova Stealer, onto users’ devices.

Rilide Stealer V4: A Closer Look

Bitdefender Labs has spotlighted an updated version of the Rilide Stealer (V4) lurking within sponsored ad campaigns that impersonate popular AI-based software and photo editors such as Sora, CapCut, Gemini AI, Photo Effects Pro, and CapCut Pro.

This malicious extension, targeting Chromium-based browsers, is designed to monitor browsing history, capture login credentials, and even facilitate the withdrawal of crypto funds by bypassing two-factor authentication through script injections.

[Screenshots: sponsored ad campaigns impersonating Sora and Gemini AI]

Key Updates in Rilide V4:

  • Targeting of Facebook cookies
  • Masquerading as a Google Translate Extension
  • Enhanced obfuscation techniques to conceal the software’s true intent

Indicators Of Compromise

Malicious hashes

  • 2d6829e8a2f48fff5348244ce0eaa35bcd4b26eac0f36063b9ff888e664310db – OpenAI Sora official version setup.msi – Sora
  • a7c07d2c8893c30d766f383be0dd78bc6a5fd578efaea4afc3229cd0610ab0cf – OpenAI Sora Setup.zip – Sora
  • e394f4192c2a3e01e6c1165ed1a483603b411fd12d417bfb0dc72bd6e18e9e9d – Setup.msi – Sora
  • 021657f82c94511e97771739e550d63600c4d76cef79a686aa44cdca668814e0 – Setup.msi – Sora
  • 92751fd15f4d0b495e2b83d14461d22d6b74beaf51d73d9ae2b86e2232894d7b – Setup.msi – Sora
  • 32a097b510ae830626209206c815bbbed1c36c0d2df7a9d8252909c604a9c1f1 – Setup.msi – Sora
  • c665ff2206c9d4e50861f493f8e7beca8353b37671d633fe4b6e084c62e58ed9 – Setup.msi – Sora
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e – Capcut Pro For PC.setup.msi – Capcut
  • 757855fcd47f843739b9a330f1ecb28d339be41eed4ae25220dc888e57f2ec51 – OpenAI ChatGPT-4.5 Version Free.msi – ChatGPT
  • 3686204361bf6bf8db68fd81e08c91abcbf215844f0119a458c319e92a396ecf – Google Gemini AI Ultra Version Updata.msi – Gemini AI
  • d60ea266c4e0f0e8d56d98472a91dd5c37e8eeeca13bf53e0381f0affc68e78a – Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • bb7c3b78f2784a7ac3c090331326279476c748087188aeb69f431bbd70ac6407 – Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e – AISora.setup.msi – Sora
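
Defenders can put the published hashes to work directly. The following is a minimal sketch of our own (not Bitdefender tooling) that flags any file in a downloads folder whose SHA-256 appears in the indicator list above; only two of the hashes are shown inline.

    # Illustrative sketch: flag downloaded files whose SHA-256 matches a
    # published Rilide V4 indicator. Populate IOC_HASHES from the full list above.
    import hashlib
    from pathlib import Path

    IOC_HASHES = {
        "2d6829e8a2f48fff5348244ce0eaa35bcd4b26eac0f36063b9ff888e664310db",  # OpenAI Sora official version setup.msi
        "757855fcd47f843739b9a330f1ecb28d339be41eed4ae25220dc888e57f2ec51",  # OpenAI ChatGPT-4.5 Version Free.msi
        # ...remaining hashes from the list above
    }

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    downloads = Path.home() / "Downloads"
    for f in downloads.glob("*"):
        if f.is_file() and sha256_of(f) in IOC_HASHES:
            print(f"MATCH (possible Rilide V4 dropper): {f}")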

Vidar Stealer: Evolving Threats

Vidar Stealer, another prolific info stealer, is marketed through the same malware-as-a-service (MaaS) model via dark web ads, forums, and Telegram groups.

Capable of exfiltrating personal information and crypto from compromised devices, Vidar’s distribution has evolved from spam campaigns and cracked software to malicious Google Search ads and social media platforms, mainly through sponsored ads on Meta’s platform.

Indicators Of Compromise

Malicious hashes

  • 6396ac7b1524bb9759f434fe956a15f5364284a04acd5fc0ef4b625de35d766b – g2m.dll – MidJourney
  • 76ed62a335ac225a2b7e6dade4235a83668630a9c1e727cf4ddb0167ab2202f6 – Midjourney.7z – MidJourney

IceRAT: More Than Just A Trojan

Despite its name, IceRAT functions more as a backdoor on compromised devices. It acts as a gateway for secondary infections, such as crypto miners and information stealers that target login credentials and other sensitive data.

Indicators Of Compromise

Malicious hashes

  • aab585b75e868fb542e6dfcd643f97d1c5ee410ca5c4c5ffe1112b49c4851f47 – Midjourneyv6.exe – MidJourney
  • b5f740c0c1ac60fa008a1a7bd6ea77e0fc1d5aa55e6856d8edcb71487368c37c – Midjourneyv6ai.exe – MidJourney
  • cc15e96ec1e27c01bd81d2347f4ded173dfc93df673c4300faac5a932180caeb – Mid_Setup.exe – MidJourney
  • d2f12dec801000fbd5ccc8c0e8ed4cf8cc27a37e1dca9e25afc0bcb2287fbb9a – Midjourney_v6.exe – MidJourney
  • f2fc27b96a4a487f39afad47c17d948282145894652485f9b6483bec64932614 – Midjourneyv6.1_ins.exe – MidJourney
  • f99aa62ee34877b1cd02cfd7e8406b664ae30c5843f49c7e89d2a4db56262c2e – Midjourneys_Setup.exe – MidJourney
  • 54a992a4c1c25a923463865c43ecafe0466da5c1735096ba0c3c3996da25ffb7 – Mid_Setup.exe – MidJourney
  • 4a71a8c0488687e0bb60a2d0199b34362021adc300541dd106486e326d1ea09b – Mid_Setup.exe – MidJourney

Nova Stealer: The New Kid On The Block

Nova Stealer emerges as a highly proficient info stealer with capabilities including password exfiltration, screen recording, Discord injections, and crypto wallet hijacking.

Nova Stealer, offered as MaaS by the threat actor known as Sordeal, represents a significant threat to digital security.

Indicators Of Compromise

Malicious hashes

  • fb3fbee5372e5050c17f72dbe0eb7b3afd3a57bd034b6c2ac931ad93b695d2d9 – Instructions_for_using_today_s_AI.pdf.rar – AI and Life
  • 6a36f1f1821de7f80cc9f8da66e6ce5916ac1c2607df3402b8dd56da8ebcc5e2 – Instructions_for_using_today_s_AI.xlsx_rar.rar – AI and Life
  • fe7e6b41766d91fbc23d31573c75989a2b0f0111c351bed9e2096cc6d747794b – Instructions for using today’s AI.pdf.exe – AI and Life
  • ce0e41e907cab657cc7ad460a5f459c27973e9346b5adc8e64272f47026d333d – Instructions for using today’s AI.xlsx.exe – AI and Life
  • a214bc2025584af8c38df36b08eb964e561a016722cd383f8877b684bff9e83d – 20 digital marketing tips for 2024.xlsx.exe – Google Digital Marketing
  • 53714612af006b06ca51cc47abf0522f7762ecb1300e5538485662b1c64d6f55 – Premium advertising course registration form from Oxford.exe – Google Digital Marketing
  • 728953a3ebb0c25bcde85fd1a83903c7b4b814f91b39d181f0fc610b243c98d4 – New Microsoft Excel Worksheet.exe – Google Digital Marketing

The Midjourney Saga: AI’s Dark Side

The proliferation of AI tools on the internet, from free offerings and trials to subscription-based services, has not gone unnoticed by cybercriminals.

Midjourney, a leading generative AI tool with a user base exceeding 16 million as of November 2023, has become a favored tool among cyber gangs over the past year, highlighting the intersection of cutting-edge technology and cybercrime.


Indicators Of Compromise

  • 159.89.120.191
  • 159.89.98.241

As the digital landscape continues to evolve, so does the nature of the threats it harbors.

The rise of Malware-as-a-Service represents a significant shift in the cyber threat paradigm, one that demands vigilant and proactive countermeasures.

The Complete Guide to Software as a Service: Everything you need to know about SaaS

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Hijack Facebook Pages


Apr 03 2024

ISO27k bot

Category: AI,Information Securitydisc7 @ 2:03 pm
Hey 👏 I’m the digital assistant of DISCInfoSec for ISO 27k implementation. I will try to answer your question. If I don’t know the answer, I will connect you with one of my support agents. Please type your query regarding ISO 27001 implementation 👇

ISO 27k Chat bot

Tags: Chat bot, ISO 27k bot

