Jul 16 2024

Understanding Compliance With the NIST AI Risk Management Framework

Category: NIST Privacy, Risk Assessment | disc7 @ 10:06 am

Incorporating artificial intelligence (AI) seems like a logical step for businesses looking to maximize efficiency and productivity. But the adverse effects of AI use, such as data security risk and misinformation, could bring more harm than good.

According to the World Economic Forum’s Global Risks Report 2024, AI-generated misinformation and disinformation are among the top global risks businesses face today.

To address the security risks posed by the increasing use of AI technologies in business processes, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0) in January 2023. 

Adhering to this framework not only puts your organization in a strong position to avoid the dangers of AI-based exploits; it also adds an impressive type of compliance to your portfolio, instilling confidence in external stakeholders. Moreover, while the NIST AI RMF is a guideline rather than a regulation, several AI laws are currently in the process of being enacted, so adhering to NIST’s framework helps CISOs future-proof their AI compliance postures.

Let’s examine the four key pillars of the framework – govern, map, measure and manage – and see how you can incorporate them to better protect your organization from AI-related risks.

1. Establish AI Governance Structures

In the context of the NIST AI RMF, governance means establishing the processes, procedures, and standards that guide responsible AI development, deployment, and use. Its main goal is to connect the technical aspects of AI system design and development with organizational goals, values, and principles.

Strong governance starts from the top, and NIST recommends establishing accountability structures with the appropriate teams responsible for AI risk management, under the framework’s “Govern” function. These teams will be responsible for putting in place structures, systems and processes, with the end goal of establishing a strong culture of responsible AI use throughout the organization.

Using automated tools is a great way to streamline the often tedious process of policy creation and governance. “We view it as our responsibility to help organizations maximize the benefits of AI while effectively mitigating the risks and ensuring compliance with best practices and good governance,” said Arik Solomon, CEO of Cypago, a SaaS platform that automates governance, risk management, and compliance (GRC) processes in line with the latest frameworks.

“These latest features ensure that Cypago supports the newest AI and cyber governance frameworks, enabling GRC and cybersecurity teams to automate GRC with the most up-to-date requirements.”

Rather than existing as a stand-alone component, governance should be incorporated into every other NIST AI RMF function, particularly those associated with assessment and compliance. This will foster a strong organizational risk culture and improve internal processes and standards.

2. Map and Categorize AI Systems

The framework’s “Map” function supports governance efforts while also providing a foundation for measuring and managing risk. It’s here that the risks associated with an AI system are put into context, which will ultimately determine the appropriateness or need for the given AI solution.

As Opice Blum data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilization.” 

But how do you actually put this mapping process into practice?

NIST recommends the following approach (a minimal mapping-record sketch follows the list):

  • Clearly establish why you need or want to implement the AI system. What are the expectations? What are the prospective settings where the system will be deployed? You should also determine the organizational risk tolerance for operating the system.
  • Map all of the risks and benefits associated with using the system, considering not only monetary costs but also the costs stemming from AI errors or malfunctions.
  • Analyze the likelihood and magnitude of the impact the AI system will have on the organization, including employees, customers, and society as a whole.
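One lightweight way to put this mapping step into practice is to keep a structured inventory record for each AI system. The sketch below is purely illustrative and not part of the NIST framework itself; the field names and the 1–5 risk-tolerance scale are assumptions you would adapt to your own program.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemMapping:
    """Minimal inventory record for the 'Map' step -- illustrative only, not part of the NIST framework."""
    name: str
    purpose: str                       # why the system is needed and what is expected of it
    deployment_context: str            # prospective setting where the system will operate
    benefits: list = field(default_factory=list)
    risks: list = field(default_factory=list)         # e.g. "hallucinated output", "PII leakage"
    risk_tolerance: int = 3            # assumed scale: 1 (low tolerance) to 5 (high tolerance)
    impact_likelihood: float = 0.0     # rough probability that a harmful impact occurs
    impact_magnitude: str = "medium"   # qualitative magnitude: low / medium / high

# Example entry for a hypothetical customer-support chatbot
chatbot = AISystemMapping(
    name="support-chatbot",
    purpose="Deflect tier-1 support tickets",
    deployment_context="Public-facing web widget",
    benefits=["faster response times"],
    risks=["hallucinated answers", "prompt injection", "PII exposure"],
    risk_tolerance=2,
    impact_likelihood=0.2,
    impact_magnitude="high",
)
print(chatbot.name, chatbot.risks)
```

Even a simple record like this makes it easier to compare systems later during the “Measure” and “Manage” steps.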

3. Measure AI Performance and Risk

The “Measure” function utilizes qualitative and quantitative techniques to analyze and monitor the AI-related risks identified in the “Map” function.

AI systems should be tested before deployment and frequently thereafter. But measuring risk with AI systems can be tricky. The technology is fairly new, so there are no standardized metrics yet. This might change in the near future, as developing these metrics is a high priority for many consulting firms. For example, Ernst & Young (EY) is developing an AI Confidence Index.

“Our confidence index is founded on five criteria – privacy and security, bias and fairness, reliability, transparency and explainability, and the last is accountability,” noted Kapish Vanvaria, EY Americas Risk Market Leader. The other axis includes regulations and ethics. 

“Then you can have a heat map of the different processes you’re looking at and the functions in which they’re deployed,” he says. “And you can go through each one and apply a weighted scoring method to it.”
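A weighted scoring exercise like the one Vanvaria describes can be prototyped in a few lines. The criteria mirror the five named above, but the weights and scores below are placeholder assumptions, not EY’s actual index.

```python
# Hypothetical weighted scoring across the five criteria named above.
# Weights and scores are placeholder assumptions, not EY's actual methodology.
criteria_weights = {
    "privacy_security": 0.25,
    "bias_fairness": 0.20,
    "reliability": 0.20,
    "transparency_explainability": 0.20,
    "accountability": 0.15,
}

process_scores = {   # scores from 0 (poor) to 10 (strong) per AI-enabled process
    "fraud-detection-model": {"privacy_security": 6, "bias_fairness": 5, "reliability": 8,
                              "transparency_explainability": 4, "accountability": 7},
    "hr-resume-screener":    {"privacy_security": 5, "bias_fairness": 3, "reliability": 7,
                              "transparency_explainability": 5, "accountability": 6},
}

def weighted_score(scores: dict) -> float:
    """Weighted average across criteria; lower totals mark the 'hotter' cells of a heat map."""
    return sum(criteria_weights[criterion] * score for criterion, score in scores.items())

for process, scores in process_scores.items():
    print(f"{process}: {weighted_score(scores):.1f} / 10")
```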

Under the NIST framework, three main aspects of an AI system must be measured: trustworthiness, social impact, and how humans interact with the system. The measuring process will likely consist of extensive software testing, performance assessments and benchmarks, along with reporting and documentation of results.

4. Adopt Risk Management Strategies

The “Manage” function puts everything together by allocating the necessary resources to regularly address the risks uncovered during the previous stages. The means to do so are typically determined through governance efforts and can take the form of human intervention, automated tools for real-time detection and response, or other strategies.

To manage AI risks effectively, it’s crucial to maintain ongoing visibility across all organizational tools, applications, and models. AI should not be handled as a separate entity but integrated seamlessly into a comprehensive risk management framework.

Ayesha Gulley, an AI policy expert from Holistic AI, urges businesses to adopt risk management strategies early, taking into account five factors: robustness, bias, privacy, exploitability and efficacy. Holistic’s software platform includes modules for AI auditing and risk posture reporting.

“While AI risk management can be started at any point in the project development,” she said, “implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”

Evolve With AI

The NIST AI Framework is not designed to restrict the efficient use of AI technology. On the contrary, it aims to encourage adoption and innovation by providing clear guidelines and best practices for developing and using AI securely and responsibly.

Implementing the framework will not only help you reach compliance standards but also make your organization much more capable of maximizing the benefits of AI technologies without compromising on risk.

AI-RMF A Practical Guide for NIST AI Risk Management Framework

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: NIST AI Risk Management Framework


Jun 11 2024

YOUR AZURE SECURITY AT RISK? HOW HACKERS ARE EXPLOITING AZURE SERVICE TAGS (AND HOW TO STOP THEM)

Category: Hacking, Risk Assessment | disc7 @ 8:24 am

A significant security vulnerability has been discovered by Tenable Research that affects Azure customers relying on Service Tags for their firewall rules. This vulnerability allows attackers to bypass Azure firewall rules, posing a substantial risk to organizations using these configurations. Here’s an in-depth look at the vulnerability, how it can be exploited, and crucial defensive measures to mitigate the risk.


INITIAL DISCOVERY IN AZURE APPLICATION INSIGHTS

Tenable Research initially uncovered the vulnerability within Azure Application Insights, a service designed to monitor and analyze web applications’ performance and availability. The Availability Tests feature of Azure Application Insights, intended to check the accessibility and performance of applications, was found to be susceptible to abuse. Users can control server-side requests in these tests, including adding custom headers and changing HTTP methods. This control can be exploited by attackers to forge requests from trusted services, mimicking a server-side request forgery (SSRF) attack.

EXPANSION TO MORE THAN 10 OTHER AZURE SERVICES

Upon further investigation, Tenable Research found that the vulnerability extends beyond Azure Application Insights to more than 10 other Azure services. These include:

  • Azure DevOps
  • Azure Machine Learning
  • Azure Logic Apps
  • Azure Container Registry
  • Azure Load Testing
  • Azure API Management
  • Azure Data Factory
  • Azure Action Group
  • Azure AI Video Indexer
  • Azure Chaos Studio

Each of these services allows users to control server-side requests and has an associated Service Tag, creating potential security risks if not properly mitigated.

HOW ATTACKERS CAN EXPLOIT THE VULNERABILITY

Attackers can exploit the vulnerability in Azure Service Tags by abusing the Availability Tests feature in Azure Application Insights. Below are detailed steps and examples to illustrate how an attacker can exploit this vulnerability:

1. Setting Up the Availability Test:

  • Example Scenario: An attacker identifies an internal web service within a victim’s Azure environment that is protected by a firewall rule allowing traffic only from Azure Application Insights.
  • Action: The attacker sets up an Availability Test in Azure Application Insights, configuring it to target the internal web service.

2. Customizing the Request:

  • Manipulating Headers: The attacker customizes the HTTP request headers to include authorization tokens or other headers that may be expected by the target service.
  • Changing HTTP Methods: The attacker can change the HTTP method (e.g., from GET to POST) to perform actions such as submitting data or invoking actions on the target service.
  • Example Customization: The attacker configures the test to send a POST request with a custom header “Authorization: Bearer <malicious-token>”.

3. Sending the Malicious Request:

  • Firewall Bypass: The crafted request is sent through the Availability Test. Since it originates from a trusted Azure service (Application Insights), it bypasses the firewall rules based on Service Tags.
  • Example Attack: The Availability Test sends the POST request with the custom header to the internal web service, which processes the request as if it were from a legitimate source.

4. Accessing Internal Resources:

  • Unauthorized Access: The attacker now has access to internal APIs, databases, or other services that were protected by the firewall.
  • Exfiltration and Manipulation: The attacker can exfiltrate sensitive data, manipulate internal resources, or use the access to launch further attacks.
  • Example Impact: The attacker retrieves confidential data from an internal API or modifies configuration settings in an internal service.

DETAILED EXAMPLE OF EXPLOIT

Scenario: An organization uses Azure Application Insights to monitor an internal financial service. The service is protected by a firewall rule that allows access only from the ApplicationInsightsAvailability Service Tag.

  1. Deploying an Internal Azure App Service:
    • The organization has a financial application hosted on an Azure App Service with firewall rules configured to accept traffic only from the ApplicationInsightsAvailability Service Tag.
  2. Attempted Access by the Attacker:
    • The attacker discovers the endpoint of the internal financial application and attempts to access it directly. The firewall blocks this attempt, returning a forbidden response.
  3. Exploiting the Vulnerability:
    • Setting Up the Test: The attacker sets up an Availability Test in Azure Application Insights targeting the internal financial application.
    • Customizing the Request: The attacker customizes the test to send a POST request with a payload that triggers a financial transaction, adding a custom header “Authorization: Bearer <malicious-token>”.
    • Sending the Request: The Availability Test sends the POST request to the internal financial application, bypassing the firewall (a sketch of the effective request follows this list).
  4. Gaining Unauthorized Access:
    • The financial application processes the POST request, believing it to be from a legitimate source. The attacker successfully triggers the financial transaction.
    • Exfiltration: The attacker sets up another Availability Test to send GET requests with custom headers to extract financial records from the application.
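To make the bypass concrete, the sketch below approximates the kind of server-side request a maliciously configured Availability Test would issue on the attacker’s behalf. The endpoint, token, and payload are hypothetical placeholders; the point is that the request egresses from Application Insights infrastructure and therefore matches the ApplicationInsightsAvailability Service Tag.

```python
import requests

# Illustration only. This approximates the request a maliciously configured Availability Test
# issues against the internal service. Because it egresses from Application Insights
# infrastructure, it matches the ApplicationInsightsAvailability Service Tag and clears the
# firewall rule. The endpoint, token, and payload are hypothetical placeholders.
INTERNAL_ENDPOINT = "https://internal-finance.example.corp/api/transactions"  # placeholder

response = requests.post(
    INTERNAL_ENDPOINT,
    headers={"Authorization": "Bearer <malicious-token>"},        # attacker-supplied custom header
    json={"amount": 10000, "to_account": "attacker-controlled"},  # attacker-chosen payload
    timeout=10,
)
print(response.status_code)
```

Nothing in this request distinguishes it from a legitimate availability probe once it clears the Service Tag check, which is why the authentication-focused defenses described below matter.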

ADVANCED EXPLOITATION TECHNIQUES

1. Chain Attacks:

  • Attackers can chain multiple vulnerabilities or services together to escalate their privileges and impact. For example, using the initial access gained from the Availability Test to find other internal services or to escalate privileges within the Azure environment.

2. Lateral Movement:

  • Once inside the network, attackers can move laterally to compromise other services or extract further data. They might use other Azure services like Azure DevOps or Azure Logic Apps to find additional entry points or sensitive data.

3. Persistent Access:

  • Attackers can set up long-term Availability Tests that periodically execute, ensuring continuous access to the internal services. They might use these persistent tests to maintain a foothold within the environment, continuously exfiltrating data or executing malicious activities.

DEFENSIVE MEASURES

To mitigate the risks associated with this vulnerability, Azure customers should implement several defensive measures:

1. Analyze and Update Network Rules:

  • Conduct a thorough review of network security rules.
  • Identify and analyze any use of Service Tags in firewall rules (a query sketch for flagging such rules follows this list).
  • Assume services protected only by Service Tags may be vulnerable.
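One way to start that review is to export your NSG rules to JSON (for example with the Azure CLI’s `az network nsg rule list`) and flag inbound allow rules whose source is a Service Tag rather than an explicit address range. The sketch below assumes the CLI’s JSON field names and uses a simple heuristic; treat it as a starting point, not an authoritative audit.

```python
import ipaddress
import json

def is_service_tag(prefix: str) -> bool:
    """Heuristic: anything that is not '*' and not a valid IP/CIDR is treated as a Service Tag."""
    if not prefix or prefix == "*":
        return False
    try:
        ipaddress.ip_network(prefix, strict=False)
        return False
    except ValueError:
        return True   # e.g. "ApplicationInsightsAvailability", "Internet", "AzureLoadBalancer"

# Assumes a prior export such as: az network nsg rule list -g <rg> --nsg-name <nsg> -o json > nsg_rules.json
with open("nsg_rules.json") as fh:
    rules = json.load(fh)

for rule in rules:
    prefix = rule.get("sourceAddressPrefix") or ""
    if rule.get("direction") == "Inbound" and rule.get("access") == "Allow" and is_service_tag(prefix):
        print(f"Review rule '{rule['name']}': allows inbound traffic from Service Tag '{prefix}'")
```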

2. Implement Strong Authentication and Authorization:

  • Add robust authentication and authorization mechanisms.
  • Use Azure Active Directory (Azure AD) for managing access.
  • Enforce multi-factor authentication and least privilege principles.

3. Enhance Network Isolation:

  • Use network security groups (NSGs) and application security groups (ASGs) for granular isolation.
  • Deploy Azure Private Link to keep traffic within the Azure network.

4. Monitor and Audit Network Traffic:

  • Enable logging and monitoring of network traffic.
  • Use Azure Monitor and Azure Security Center to set up alerts for unusual activities.
  • Regularly review logs and audit trails.

5. Regularly Update and Patch Services:

  • Keep all Azure services and applications up to date with security patches.
  • Monitor security advisories from Microsoft and other sources.
  • Apply updates promptly to minimize risk.

6. Use Azure Policy to Enforce Security Configurations:

  • Deploy Azure Policy to enforce security best practices.
  • Create policies that require strong authentication and proper network configurations.
  • Use Azure Policy initiatives for consistent application across resources.

7. Conduct Security Assessments and Penetration Testing:

  • Perform regular security assessments and penetration testing.
  • Engage with security experts or third-party services for thorough reviews.
  • Use tools like Azure Security Benchmark and Azure Defender.

8. Educate and Train Staff:

  • Provide training on risks and best practices related to Azure Service Tags and network security.
  • Ensure staff understand the importance of multi-layered security.
  • Equip teams to implement and manage security measures effectively.

https://www.securitynewspaper.com/2024/05/16/how-to-implement-principle-of-least-privilegecloud-security-in-aws-azure-and-gcp-cloud/

The vulnerability discovered by Tenable Research highlights significant risks associated with relying solely on Azure Service Tags for firewall rules. By understanding the nature of the vulnerability and implementing the recommended defensive measures, Azure customers can better protect their environments and mitigate potential threats. Regular reviews, updates, and a multi-layered security approach are essential to maintaining a secure Azure environment.

Azure Security

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Azure Security


Apr 12 2024

An Adoption Guide for FAIR

Category: Risk Assessment, Security Risk Assessment | disc7 @ 8:44 am

Via RiskLens

Measuring and Managing Information Risk: A FAIR Approach

Factor Analysis of Information Risk (FAIR) is a powerful methodology for assessing and quantifying information risks. Here’s a comprehensive overview:

1. What Is FAIR?
a. FAIR, short for Factor Analysis of Information Risk, is a quantitative risk analysis methodology designed to help businesses evaluate information risks.
b. It stands out as the only international standard quantitative model framework that addresses both operational risk and information security.
c. Mature organizations that utilize Integrated Risk Management (IRM) solutions significantly benefit from FAIR.

2. Objective of FAIR:
a. The primary goal of FAIR is to support existing frameworks and enhance risk management strategies within organizations.
b. Unlike cybersecurity frameworks (such as NIST CSF), FAIR is not a standalone framework. Instead, it complements other industry-standard frameworks like NIST, ISO 2700x, and more.
c. As organizations shift from a compliance-based approach to a risk-based approach, they need a quantitative risk methodology to support this transition.

3. How FAIR Differs from Legacy Risk Quantification Methods:
a. FAIR is not a black-box approach like traditional penetration testing. Instead, it operates as a “glass-box” method.
b. Legacy methods focus on penetration testing without internal knowledge of the target system. While they identify vulnerabilities, they cannot provide the financial impact of risks.
c. In contrast, FAIR translates an organization’s loss exposure into financial terms, enabling better communication between technical teams and non-technical leaders.
d. FAIR provides insights into how metrics were derived, allowing Chief Information Security Officers (CISOs) to present detailed information to board members and executives.

4. Benefits of FAIR:
a. Financial Context: FAIR expresses risks in dollars and cents, making it easier for decision-makers to understand.
b. Risk Gap Identification: FAIR helps organizations efficiently allocate resources to address risk gaps.
c. Threat Level Scaling: Unlike other frameworks, FAIR scales threat levels effectively.
d. Board Engagement: FAIR fosters interest in cybersecurity among board members and non-technical leaders.

5. Drawbacks of FAIR:
a. Complexity: FAIR lacks specific, well-defined documentation of its methods.
b. Complementary Methodology: FAIR is not an independent risk assessment tool; it complements other frameworks.
c. Probability-Based: While FAIR’s probabilities are not baseless, they may not be entirely accurate due to the unique nature of cyber-attacks and their impact.

In summary, FAIR revolutionizes risk analysis by providing a quantitative, financially oriented perspective on information risk. It bridges the gap between technical and non-technical stakeholders, enabling better risk management decisions.
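To illustrate what “financial terms” means in practice, here is a deliberately simplified, FAIR-style sketch: annualized loss is simulated as loss event frequency times loss magnitude, with each factor estimated as a calibrated range. It omits most of the FAIR ontology (threat event frequency, vulnerability, secondary loss, and so on), and every range below is a placeholder assumption.

```python
import random

random.seed(7)

def simulate_annual_loss(trials: int = 10_000) -> list:
    """Toy FAIR-style simulation: annualized loss = loss event frequency x loss magnitude."""
    losses = []
    for _ in range(trials):
        # random.triangular(low, high, mode) -- calibrated-range estimates, placeholder values.
        lef = random.triangular(0.1, 4.0, 0.8)                      # loss events per year
        magnitude = random.triangular(50_000, 2_000_000, 250_000)   # dollars lost per event
        losses.append(lef * magnitude)
    return losses

losses = sorted(simulate_annual_loss())
print(f"Median annualized loss: ${losses[len(losses) // 2]:,.0f}")
print(f"90th percentile loss:   ${losses[int(len(losses) * 0.9)]:,.0f}")
```

Reporting a distribution (median, 90th percentile) rather than a single number is part of what makes the output useful in board-level conversations.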

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: A FAIR Approach


Sep 20 2023

Balancing budget and system security: Approaches to risk tolerance

Category: Risk Assessment, Security Risk Assessment | disc7 @ 10:56 am

Recently, it was revealed that Nickelodeon, an American TV channel and brand, has been the victim of a data leak. According to sources, the breach occurred at the beginning of 2023, but much of the data involved was “related to production files only, not long-form content or employee or user data, and (appeared) to be decades old.” The implication of this ambiguous statement: because the data is old and not related to individuals’ personally identifiable information (PII) or any proprietary information that hasn’t already been publicly released, this is a non-incident.

Let’s say Nickelodeon didn’t suffer any material harm because of this incident — great! It’s probable, though, that there are facts we don’t know. Any time proprietary data ends up where it shouldn’t, warning bells should go off in security professionals’ heads. What would be the outcome if the “decades old” files did contain PII? Some of the data would be irrelevant, but some could be crucial. What if the files contained other protected or private data? What if they compromised the integrity of the brand? All organizations need to think through the “what ifs” and apply the worst and base case scenarios to their current security practices.

The Nickelodeon case raises the question of whether keeping “decades old” data is necessary. While holding onto historical data can, in some cases, benefit the organization, every piece of kept data increases the company’s attack surface and increases risk. Why did Nickelodeon keep the old files in a location where they could be easily accessed? If the files were in a separate location, the security team likely did not apply adequate access controls to them. Given that the cost of securing technology and all its inherent complexity is already astronomically high, CISOs need to prioritize budgetary and workforce allocation for all security projects and processes, including those for all past, present, and future data protection.

In a slow economy, balancing system security and budget requires skill and savvy. Even in boom times, though, throwing more money at the problem doesn’t always help. There is no evidence that an increase in security spending proportionately improves an organization’s security posture. In fact, some studies suggest that an overabundance of security tools leads to more confusion and complexity. CISOs should therefore focus on business risk tolerance and reduction.

Approaches to cyber risk management

Because no two organizations are alike, every CISO must find a cyber risk management approach that aligns with the goals, culture, and risk tolerance of the organization. Budget plays an important role here, too, but securing more budget will be an easier task if the security goals align with those of the business. After taking stock of these considerations, CISOs may find that their organizations fall into one or more core approaches to risk management.

Risk tolerance-based approach

Every company – and even every department within a company – has a tolerance for the amount and type of risk it is willing to take. Security-specific tolerance levels must be based on desired business outcomes; cybersecurity risk cannot be determined or calculated from cybersecurity efforts alone, but rather from how those efforts support the larger business.

To align cybersecurity with business risk, security teams must address business resilience by considering the following questions:

  • How would the business be impacted if a cybersecurity event were to occur?
  • What are the productivity, operational, and financial implications of a cyber event or data breach?
  • How well equipped is the business to handle an event internally?
  • What external resources would be needed to support internal capabilities?

With answers to these types of questions and metrics to support them, cyber risk levels can be appropriately set.

Maturity-based approach

Many companies today estimate their cyber risk tolerance based on how mature they perceive their cybersecurity team and controls to be. For instance, companies with an internal security operations center (SOC) that supports a full complement of experienced staff might be better equipped to handle continuous monitoring and vulnerability triage than a company just getting its security team up and running. Mature security teams are good at prioritizing and remediating critical vulnerabilities and closing the gaps on imminent threats, which generally gives them a higher security risk tolerance.

That said, many SOC teams are too overwhelmed with data, alerts, and technology maintenance to focus on risk reduction. The first thing a company must do if it decides to take on a maturity-based approach is to honestly assess its own level of security maturity, capabilities, and efficacy. A truly mature cybersecurity organization is better equipped to manage risk, but self-awareness is vital for security teams regardless of maturity level.

Budget-based approach

Budget constraints are prevalent in all aspects of business today, and running a fully staffed, fully equipped cybersecurity program is no bargain in terms of cost. However, organizations with an abundance of staff and technology don’t necessarily perform better security- or risk-wise. It’s all about being budget savvy and choosing what will be a true complement to existing systems.

Invest in tools that move the organization toward a zero trust-based architecture, focusing on a solid security foundation and good hygiene first. By laying the right foundations, and having competent staff to manage them, cybersecurity teams will be better off than if they had implemented the latest and greatest tools without mastering the top CIS Controls: inventory and control of enterprise and software assets, basic data protection, secure configuration management, hardened access management, log management, and more.

Threat-based approach

An important aspect of a threat-based approach to risk management is understanding that vulnerabilities and threats are not the same thing. Open vulnerabilities can lead to threats (and managing them should therefore be a standard part of every organization’s security process and program). A “threat,” however, refers to a person or event with the potential to exploit a vulnerability. Threats also depend on the context and availability of a system or resource.

For instance, the Log4Shell exploit took advantage of a Log4j vulnerability. The vulnerability resulted in a threat to organizations with an unpatched version of the utility running. Organizations that were not running unpatched versions — no threat.

It is therefore imperative for organizations to know concretely:

  • All assets and entities present in their IT estates
  • The security hygiene of those assets (point in time and historical)
  • Context of the assets (non-critical, business-critical; exposed to the internet or air-gapped; etc.)
  • Implemented and operational controls to secure those assets

With this information and context, security teams can start to build threat models appropriate for the organization and its risk tolerance. The threat models used will, in turn, allow teams to prioritize and manage threats and more effectively reduce risk.
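A minimal sketch of such an inventory follows, with a crude prioritization score derived from criticality, exposure, hygiene, and controls. The fields, scales, and scoring formula are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int           # assumed scale: 1 (non-critical) to 5 (business-critical)
    internet_exposed: bool     # context: exposed to the internet or not
    open_critical_vulns: int   # hygiene: unresolved critical vulnerabilities
    controls_in_place: int     # implemented, operational controls protecting the asset

def threat_priority(asset: Asset) -> float:
    """Crude prioritization: exposure and open vulnerabilities raise priority, controls lower it."""
    exposure = 2.0 if asset.internet_exposed else 1.0
    return asset.criticality * exposure * (1 + asset.open_critical_vulns) / (1 + asset.controls_in_place)

inventory = [
    Asset("customer-portal", criticality=5, internet_exposed=True, open_critical_vulns=2, controls_in_place=4),
    Asset("build-server", criticality=3, internet_exposed=False, open_critical_vulns=1, controls_in_place=2),
]

for asset in sorted(inventory, key=threat_priority, reverse=True):
    print(f"{asset.name}: priority {threat_priority(asset):.1f}")
```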

People, process and technology-based approach

People, process, and technology (PPT) are often considered the “three pillars” of technology. Some security pros consider PPT to be a framework. Through whatever lens PPT is viewed, it is the most comprehensive approach to risk management.

A PPT approach has the goal of allowing security teams to holistically manage risk while incorporating an organization’s maturity, budget, threat profile, human resources, skill sets, and the entirety of the organization’s tech stack, as well as its operations and procedures, risk appetite, and more. A well-balanced PPT program is a multi-layered plan that relies evenly on all three pillars; any weakness in one of the areas tips the scales and makes it harder for security teams to achieve success — and manage risk.

The wrap up

Every organization should carefully evaluate its individual capabilities, business goals, and available resources to determine the best risk management strategy for them. Whichever path is chosen, it is imperative for security teams to align with the business and involve organizational stakeholders to ensure ongoing support.

RISK ASSESSMENT: AN INDEPTH GUIDE TO PRINCIPLES, METHODS, BEST PRACTISE, AND INTERVIEW QUESTIONS AND ANSWERS

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: risk tolerance


Mar 12 2023

Security Risk Assessment Services

Category: Risk Assessment, Security Risk Assessment | DISC @ 11:16 pm

Security risk assessment services are crucial in the cybersecurity industry as they help organizations identify, analyze, and mitigate potential security risks to their systems, networks, and data. Here are some opportunities for providing security risk assessment services within the industry:

  1. Conducting Vulnerability Assessments: As a security risk assessment service provider, DISC can conduct vulnerability assessments to identify potential vulnerabilities in an organization’s systems, networks, and applications. DISC can then provide recommendations to mitigate these vulnerabilities and enhance the organization’s overall security posture.
  2. Performing Penetration Testing: Penetration testing involves simulating a real-world attack on an organization’s systems and networks to identify weaknesses and vulnerabilities. As a security risk assessment service provider, DISC can perform penetration testing to identify potential security gaps and provide recommendations to improve security.
  3. Risk Management: DISC can help organizations identify and manage risks associated with their information technology systems, data, and operations. This includes assessing potential threats, analyzing the impact of these threats, and developing plans to mitigate them.
  4. Compliance Assessment: DISC can help organizations comply with regulatory requirements by assessing their compliance with industry standards such as ISO 27001, HIPAA, or NIST-CSF. DISC can then provide recommendations to ensure that the organization remains compliant with these standards.
  5. Cloud Security Assessments: As more organizations move their operations to the cloud, there is a growing need for security risk assessment services to assess the security risks associated with cloud-based systems and applications. As a service provider, DISC can assess cloud security risks and provide recommendations to ensure the security of the organization’s cloud-based operations.
  6. Security Audit Services: DISC can provide security audit services to assess the overall security posture of an organization’s systems, networks, and applications. This includes reviewing security policies, processes, and procedures and providing recommendations to improve security.

By providing these services, DISC can help organizations identify potential security risks and develop plans to mitigate them, thereby enhancing their overall security posture.

In what situations would a vCISO Service be appropriate?

Transition plan from ISO 27001 2013 to ISO 27001 2022

We’d love to hear from you! If you have any questions, comments, or feedback, please don’t hesitate to contact us. Our team is here to help and we’re always looking for ways to improve our services. You can reach us by email (info@deurainfosec.com) or through our website’s contact form.

Contact DISC InfoSec if you need further assistance in your ISO 27001 2022 transition Plan

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: Security Risk Assessment


Feb 27 2023

Understanding Cyber Risk Quantification: The Buyer’s Guide” by Jack Jones

Category: Risk Assessment, Security Risk Assessment | DISC @ 11:42 am

Version 2 Updated for Release – February 2023. 

From Jack Jones, Chairman of the FAIR Institute and creator of the FAIR model for cyber risk quantification (CRQ) — the definitive guide to understanding CRQ: What it is (and isn’t), its value proposition and limitations, and facts regarding the misperceptions that are commonplace. 

If you’re considering or are actively shopping for an analysis solution that treats cyber risk in financially-based business terms, Jack’s extensive, jargon-free guide — including an evaluation checklist — will give you the objective and practical advice you need.

And just in time. There’s never been more interest or, frankly, confusion in the marketplace over what exactly cyber risk quantification is. As you’ll read in this buyer’s guide, many solutions may count vulnerabilities, provide ordinal values, or deliver numeric “maturity” scores, but they don’t measure risk, let alone put a financial value on it to help make business decisions.

This paper answers questions such as:

  • What does CRQ provide that I’m not already getting from other cyber risk-related measurements?
  • What makes CRQ reliable? Why should I believe the numbers?
  • Do I have enough data to run an analysis?

Jack also provides red flags to look out for in CRQ solutions, such as:

  • Mis-identification of risks.
  • Mis-use of control frameworks as risk measurement tools.
  • Over-simplification that can result in poorly-informed decisions, especially when performed at scale.

The ‘Understanding Cyber Risk Quantification’ guide is designed to be of use to security and risk executives, industry analysts, consultants, auditors, investors, and regulators – essentially anyone who has a stake in how well cyber risk is managed.

Download Below

DOWNLOAD NOW

Tags: CRQ, cyber risk quantification


Jan 04 2023

Ransomware Risk Management

Category: Ransomware, Risk Assessment | DISC @ 12:15 pm

A Cybersecurity Framework Profile

Infosec books | InfoSec tools | InfoSec services

Tags: ransomware, Ransomware Protection Playbook


Nov 28 2022

Best practices for implementing a company-wide risk analysis program

Category: Risk Assessment, Security Risk Assessment | DISC @ 11:36 pm

Risk management programs are constantly evolving, often due to outside influences such as client contract requirements, board requests and/or specific security incidents that require security teams to rethink and strengthen their strategy. Not surprisingly, CISOs today face several dilemmas: How do I define the business impact of a cyber event? How much will it cost to protect our company’s most valuable assets? Which investments will make the business most secure? How do we avoid getting sidetracked by the latest cyber breach headline?

A mature risk analysis program can be thought of as a pyramid. Customer-driven framework compliance forms the base (PCI/ISO frameworks required for revenue generation); then incident-driven infrastructure security in the middle (system-focused security based on known common threats and vulnerabilities); with analysis-driven comprehensive coverage at the pinnacle (identification of assets, valuations, and assessment of threat/vulnerability risk).


How do you kickstart that program? Here are five steps that I’ve found effective for getting risk analysis off the ground.

Determine enterprise-specific assets

The first step is determining what is critical to protect. Unlike accounting assets (e.g., servers, laptops), cybersecurity assets include things that are typically of broader business value. Often the quickest path is to talk with the leads of different departments. You need to understand what data is critical to the functioning of each group, what information they hold that would be valuable to competitors (pricing, customers, etc.) and what information disclosures would hurt customer relationships (contract data, for instance).

Also assess whether each department handles trade secrets, or holds patents, trademarks, and copyrights. Finally, assess who handles personally identifiable information (PII) and whether the group and its data are subject to regulatory requirements such as GDPR, PCI DSS, CCPA, Sarbanes Oxley, etc.

When making these assessments, keep three factors in mind: what needs to be safe and can’t be stolen, what must remain accessible for continued function of a given department or the organization, and what data/information must be reliable (i.e., that which can’t be altered without your knowledge) for people to do their jobs.

Value the assets

Once you’ve identified these assets, the next step is to attach a value. Again, I make three recommendations: keep it simple, make (informed) assumptions, and err on the side of overestimating. The reason for these recommendations is that completing a full asset valuation for an enterprise would take years and wouldn’t ever be finished (because assets constantly change).

Efficient risk analysis requires a more practical approach that uses broad categories, which can then be prioritized to understand where deeper analysis is needed. For instance, you might use the following categories, and assign values based on informed assumptions:

  • Competitive advantage – the items/processes/data that are unique to your company and based on experience. These are items that would be of value to a competitor to build on. To determine value, consider the cost of growing a legitimate competitor in your dominant market from scratch, including technology and overhead.
  • Client relationships – what directly impacts customer relationships, and therefore revenue. This includes “availability” impacts from outages, SLAs, etc. Value determination will likely be your annual EBIT goal, and impact could be adjusted by a single loss expectancy (defined below).
  • Third-party partnerships – relating to your ability to initiate, maintain or grow partner networks, such as contractors, ISPs or other providers. When valuing, consider the employee labor cost needed to recruit and maintain those partners.
  • Financial performance – items that impact your company’s ability to achieve financial goals. Again, valuation might equate to annual EBIT.
  • Employee relations – the assets that impact your ability to recruit and retain employees. Valuation should consider the volume of potential losses and associated backfill needs, including base salaries, bonuses, benefit equivalencies, etc.

Determine relevant threats, assess vulnerability, and identify exposures

When it comes to analyzing risk from threats, vulnerabilities and exposures, start with the common security triad model for information security. The three pillars – Confidentiality, Integrity and Availability (CIA) – help guide and focus security teams as they assess the different ways to address each concern.

Confidentiality touches on data security and privacy; it entails not only keeping data safe, but also making sure only those who need access, have it.

Integrity reflects the need to make sure data is trustworthy and tamper-free. While data accuracy can be compromised by simple mistakes, what the security team is more concerned with is intentional compromise that’s designed to harm the organization.

Availability is just what it sounds like – making sure that information can be accessed where and when needed. Availability is an aspect of the triad where security teams need to coordinate closely with IT on backup, redundancy, failover, etc. That said, it also involves everything from secure remote access to timely patches and updates to preventing acts of sabotage like denial of service or ransomware attacks.

In undertaking this part of the risk assessment, you’re using this security triad to determine threats, and then identifying exposure and assessing vulnerability to better estimate both the potential impact and probability of occurrence. Once these determinations are made, you’re ready for the next step.

Define risk

Risk can now be expressed using two inputs:

AV = the assigned Asset Value (quantitative or qualitative), as identified above.
EF = the Exposure Factor, a subjective assessment of the potential percentage loss to the asset if a specific threat is realized. For example, an asset may be degraded by half, giving an EF of 0.50.

From this we can calculate the Single Loss Expectancy (SLE) – the monetary value from one-time risk to an asset – by multiplying AV and EF. As an example, if the asset value is $1M, and the exposure factor from a threat is a 50% loss (0.50) then the SLE will be $500,000.

Risk definition also takes this one step further by using this SLE and multiplying it by a potential Annualized Rate of Occurrence (ARO) to come up with the Annualized Loss Expectancy (ALE). This helps us understand the potential risk over time.

When working through these figures, it’s important to recognize that potential loss and probability of occurrence are hard to define, and thus the potential for error is high. That’s why I encourage keeping it simple and overestimating when valuing assets – the goal is to broadly assess the likelihood and impact of risk so that we can better focus resources, not to get the equations themselves perfectly accurate.
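The arithmetic is simple enough to capture in a few lines. The asset value and exposure factor below come from the example above, while the annualized rate of occurrence is an assumed figure for illustration.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: expected loss from a single occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: expected loss per year."""
    return sle * aro

av = 1_000_000   # asset value from the example above
ef = 0.50        # 50% degradation if the threat is realized
aro = 0.3        # assumed: roughly one occurrence every three to four years

sle = single_loss_expectancy(av, ef)
ale = annualized_loss_expectancy(sle, aro)
print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}")   # SLE: $500,000  ALE: $150,000
```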

Implement and monitor safeguards (controls)

Now that we have a better handle on the organizational risks, the final steps are more familiar territory for many security teams: implementing and monitoring the necessary and appropriate controls.

You’re likely already very familiar with these controls. They are the countermeasures – policies, procedures, plans, devices, etc. – to mitigate risk.

Controls fall into three categories: preventative (before an event), detective (during) and corrective (after). The goal is to try to stop an event before it happens, quickly react once it does, and efficiently get the organization back on its feet afterward.

Implementing and monitoring controls are where the rubber hits the road from a security standpoint. And that’s the whole point of the risk analysis, so that security professionals can best focus efforts where and how appropriate to mitigate overall organizational risk.

Security Risk Management: Building an Information Security Risk Management Program from the Ground Up

Tags: risk analysis program


Nov 16 2022

Risk Management Toolkit

“By implementing sound management of our risks and the threats and opportunities that flow from them we will be in a stronger position to deliver our organisational objectives, provide improved services to the community, achieve better value for money and demonstrate compliance with the Local Audit and Accounts Regulations. Risk management will therefore be at the heart of our good management practice and corporate governance arrangements.”

Tags: Risk Management Toolkit


Oct 20 2022

Why chasing risk assessments will have you chasing your tail

Category: Risk Assessment, Security Risk Assessment | DISC @ 10:07 am

Third-party risk assessments are often described as time-consuming, repetitive, overwhelming, and outdated. Think about it: organizations, on average, have over 5,000 third parties, meaning they may feel the need to conduct over 5,000 risk assessments. In the old school method, that’s 5,000 redundant questionnaires. 5,000 long-winded Excel sheets. No wonder they feel this way.

The reason risk assessments have become so dreaded is that they have always been a process of individual inspection and evaluation. For perspective, working through 5,000 assessments comes to roughly 14 completed per day over the span of one year. How can we expect security, risk, and procurement professionals to get any other work done with this type of task on their plate? With the state of today’s threat landscape, wouldn’t you rather your security team be focused on actual analysis and mitigation, rather than just assessing? Not to mention, a tedious risk assessment process contributes to burnout that can lead to poor employee retention within your security team. With how the cybersecurity job market is looking now, this isn’t a position any organization wants to be in.

So, now that you know how the people actually with their ‘hands in the pot’ feel about risk assessments, let’s take a look at why this approach is flawed and what organizations can do to build a better risk assessment process.

The never-ending risk assessment carousel ride

The key to defeating cybercriminals is to be vigilant and proactive. Not much can be done when you’re reacting to a security incident, as the damage is already done. Unfortunately, the current approach to risk management is reactive and full of gaps that do not provide an accurate view into overall risk levels. How so? Current processes only measure a point in time and do not account for the period while the assessment is being completed – or any breaches that occur after the assessment is submitted. In other words, assessments need to be filled out again and again, a never-ending carousel ride, which is not feasible.

It should come as no surprise that assessments are not updated nearly as much as they should be, and that’s no one’s fault. No one has the time to continually fill out long, redundant Excel sheets. And unless the data collected is standardized, very little can be done with it from an analysis point of view. As a result, assessments are basically thrown in a drawer and never see the light of day.

Every time a third-party breach occurs there is a groundswell of concern and company executives and board members immediately turn to their security team to order risk assessments, sending them on a wild goose chase. What they don’t realize is that ordering assessments after a third-party breach has occurred is already too late. And the organizations that are chosen for a deeper assessment are most likely not the ones with the highest risk. Like a never-ending carousel ride, the chase for risk assessments will never stop unless you hop off the ride now.

Show me the data!

The secret ingredient for developing a better risk management collection process is standardized data. You can’t make bread without flour, and you can’t have a robust risk management program without standardized data. Standardized data is the process of gathering data in a common format, making it easier to conduct an analysis and determine necessary next steps. Think of it this way: if you were looking at a chart comparing student test grades and they were all listed in various formats (0.75, 68%, 3/16, etc.), you would have a difficult time comparing these data points. However, if all the data is listed in percentages (80%, 67%, 92%, etc.), you could easily identify who is failing and needs more support in the classroom. This is the way using standardized data in the risk assessment process works. All data collected from assessments would be in the same format and you can understand which third parties are high risk and require prioritized mitigation.
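In code, standardization is just parsing whatever format each answer arrives in into one comparable scale before ranking third parties. A minimal sketch, using made-up vendor scores:

```python
from fractions import Fraction

def to_percentage(value: str) -> float:
    """Normalize mixed score formats ('0.75', '68%', '3/16') to a 0-100 percentage."""
    value = value.strip()
    if value.endswith("%"):
        return float(value[:-1])
    if "/" in value:
        return float(Fraction(value)) * 100
    number = float(value)
    return number * 100 if number <= 1 else number

# Hypothetical third-party assessment scores arriving in inconsistent formats.
raw_scores = {"vendor-a": "0.75", "vendor-b": "68%", "vendor-c": "3/16"}

normalized = {vendor: to_percentage(score) for vendor, score in raw_scores.items()}
for vendor, pct in sorted(normalized.items(), key=lambda item: item[1]):
    flag = "prioritize for mitigation" if pct < 50 else "ok"
    print(f"{vendor}: {pct:.0f}% ({flag})")
```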

CISOs who are still focused on point-in-time assessments are not getting it right. Organizations need to understand that risk assessment collection alone does not in fact equal reduced risk. While risk assessments are important, what you do with the risk assessment after it is complete is what really matters. Use it as a catalyst to create a larger, more contextual risk profile. Integrate threat intelligence, security ratings, machine learning, and other data sources and you’ll find yourself with all the data and insights you need and more to proactively reduce risk. You’ll be armed with the necessary information to mitigate risk and implement controls before the breach occurs, not the rushed patchwork after. A data-driven approach to third-party risk assessment will provide a more robust picture of risk and put an end to chasing assessments once and for all.


Security Risk Assessment

How to do an information security risk assessment for ISO27001

Tags: data breach, Risk Assessment, Third Party Risk


Sep 14 2022

Risk Management document templates

Risk Assessment and Risk Treatment Methodology

The purpose of this document is to define the methodology for assessment and treatment of information risks, and to define the acceptable level of risk.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

There are 3 appendices related to this document. The appendices are not included in the price of this document and can be purchased separately

Risk Assessment Table

The purpose of this table is to list all information resources, vulnerabilities and threats, and assess the level of risk. The table includes catalogues of vulnerabilities and threats.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

This document is an appendix. The main document is not included in the price of this document and can be purchased separately

Risk Treatment Table

The purpose of this table is to determine options for the treatment of risks and appropriate controls for unacceptable risks. This table includes a catalogue of options for treatment of risks as well as a catalogue of 114 controls prescribed by ISO 27001.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

This document is an appendix. The main document is not included in the price of this document and can be purchased separately

Risk Assessment and Treatment Report

The purpose of this document is to give a detailed overview of the process and documents used during risk assessment and treatment.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

This document is an appendix. The main document is not included in the price of this document and can be purchased separately

Statement of Applicability

The purpose of this document is to define which controls are appropriate to be implemented in the organization, what are the objectives of these controls, how they are implemented, as well as to approve residual risks and formally approve the implementation of the said controls.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

Risk Treatment Plan

The purpose of this document is to determine precisely who is responsible for the implementation of controls, in which time frame, with what budget, etc.

The document is optimized for small and medium-sized organizations – we believe that overly complex and lengthy documents are just overkill for you.

Toolkit below contains all the documents above

Tags: Risk Assessment, Security Risk Assessment


Mar 31 2022

How to read a SOC 2 Report

Source: https://fractionalciso.com/how-to-read-a-soc-2-report/

The following conversation about reviewing a SOC 2 report is one to avoid. 

Potential Customer: “Hi Vendor Co., do you have a SOC 2?”

Vendor Co. Sales Rep: “Yes!”

Potential Customer: “Great! We can’t wait to start using your service.” 

The output of a SOC 2 audit isn’t just a stamp of approval (or disapproval). Even companies that have amazing cybersecurity and compliance programs have a full SOC 2 report written about them by their auditor that details their cybersecurity program. SOC 2 reports facilitate vendor management by creating one deliverable that can be given to customers (and potential customers) to review and incorporate into their own vendor management programs.

Vendor security management is an important part of a company’s cybersecurity program. Most mature organizations’ process of vendor selection includes a vendor security review – a key part of which includes the review of a SOC 2 report.

SOC 2 reports can vary greatly in length but even the most basic SOC 2 report is dense with information that can be difficult to digest, especially if you aren’t used to reading them. This article will teach you how to read a SOC 2 report by providing a breakdown of the report’s content, with emphasis on how to pull out the important parts to look at from a vendor security review perspective.

Please note that you should not use this as a guide to hunt and peck your way through a SOC 2 report. It is important to read through the entire report to gain a full understanding of the system itself. However, this should help draw attention to the particular points of interest you should be looking out for when reading a report. 

Many different auditing firms perform SOC 2 audits; some reports may look a little different from others, but the overall content is generally the same.

How to read a SOC 2 report: the Cover Page

Even the cover page of a SOC 2 report has a lot of useful information. It will have the type of SOC 2 report, date(s) covered, the relevant trust services criteria (TSC) categories, and the auditing firm that conducted the audit. 

What Type of SOC 2 Report?

There are two types of SOC 2 reports that can be issued: A SOC 2 Type I and a SOC 2 Type II. The type of report will be denoted on the cover page. The key difference is the timeframe of the report:

A SOC 2 Type I is an attestation that the company complied with the SOC 2 criteria at a specific point in time. 

A SOC 2 Type II is an attestation that the company complied with the SOC 2 criteria over a period of time, most commonly a 6 or 12 month period. 

SOC 2 Type II reports are more valuable because they demonstrate a long-term commitment to a security program – and any issues over the time frame will be revealed. It’s possible for a company to get a SOC 2 Type I report and then fail to adhere to its controls.

Key takeaway: If a company only has a SOC 2 Type I, ask if and when they are working on achieving a SOC 2 Type II. If they say they are not getting a Type II, this is indicative of a lower commitment to security. 

Trust Services Criteria

Cybersecurity for Executives in the Age of Cloud 

Tags: SOC 2 report, SOC2


Mar 12 2022

Integrating Cybersecurity and Enterprise Risk Management (ERM)

Source: https://doi.org/10.6028/NIST.IR.8286-draft2

ISO 31000: 2018 Enterprise Risk Management (CERM Academy Series on Enterprise Risk Management)

Tags: ERM, ISO 31000


Nov 04 2021

Supply Chain at Risk: Brokers Sell Access to Shipping, Logistics Companies

Category: Risk Assessment, Vendor Assessment | DISC @ 8:54 am

As if disruption to the global supply chain post-pandemic isn’t bad enough, cybercriminals are selling access, sometimes in the form of credentials, to shipping and logistics companies in underground markets.

That’s a worrisome, if not unexpected, development; a cybersecurity incident at a company that operates air, ground and maritime cargo transport on multiple continents and moves billions of dollars worth of goods could prove devastating to the global economy.

“At the moment, the global supply chain is extremely fragile. This makes the industry a top target from cybercriminals who will look to take advantage of today’s current situation,” said Joseph Carson, chief security scientist and advisory CISO at ThycoticCentrify. “The global chip shortage is resulting in major delays, with some stock unavailable or backlogged for more than six months, making it a prime attraction for cybercriminals to attempt to expose and monetize this via various scams. This includes redirecting shipments by changing logistic details or causing disruptions via ransomware.”

The actors, ranging from newcomers to prolific network access brokers, are selling credentials they obtained by leveraging known vulnerabilities in remote desktop protocol (RDP), VPN, Citrix and SonicWall and other remote access solutions, according to the Intel 471 researchers tracking them.

“No business or IT security team would willingly allow bad actors to exploit known vulnerabilities in remote access technologies, but this is exactly what is happening,” said Yaniv Bar-Dayan, CEO and co-founder of Vulcan Cyber, who believes much of the problem is a result of poor cybersecurity hygiene.

In one instance last August, an actor that has worked with groups deploying Conti ransomware said they had accessed “corporate networks belonging to a U.S.-based transportation management and trucking software supplier and a U.S.-based commodity transportation services company,” the researchers wrote in a blog post. “The actor gave the group access to an undisclosed botnet powered by malware that included a virtual network computing (VNC) function.” The group then used the botnet “to download and execute a Cobalt Strike beacon on infected machines, so group members in charge of breaching computer networks received access directly via a Cobalt Strike beacon session,” they said.


Supply Chain Risk Management

Tags: Supply Chain at Risk


Oct 11 2021

An Adoption Guide for FAIR

Category: Risk Assessment,Security Risk AssessmentDISC @ 12:09 pm

Jack draws on years of experience introducing quantified risk analysis to organizations like yours to write An Adoption Guide For FAIR. In this free eBook, he’ll show you how to:

  • Lay the foundation for a change in thinking about risk
  • Plan an adoption program that suits your organization’s style
  • Identify stakeholders and key allies for socialization of FAIR
  • Select and achieve an initial objective, then integrate business-aligned, risk-based practices across your organization

Tags: A FAIR Approach, Cost-Benefit Analysis, Factor Analysis, FAIR in a Nutshell, Pure Risk Reduction, Quantitative Tactical Analyses, What is FAIR?


Oct 01 2021

CISA releases Insider Risk Mitigation Self-Assessment Tool

Category: Risk Assessment,Security Risk AssessmentDISC @ 9:39 am

The US CISA has released a new tool that allows organizations to assess their level of exposure to insider threats and devise their own defense plans against such risks.

The US Cybersecurity and Infrastructure Security Agency (CISA) has released the Insider Risk Mitigation Self-Assessment Tool, a new tool that allows organizations to assess their level of exposure to insider threats.

Insider threats pose a severe risk to organizations. Because the attacks are carried out by current or former employees, contractors, or others with inside knowledge, they are not easy to detect.

An attack from insiders could compromise sensitive information, cause economic losses, damage the organization’s reputation, lead to theft of intellectual property and loss of market share, and even cause physical harm to people. 

The tool evaluates an organization’s answers to a survey about its implementation of an insider threat risk management program.

“The Cybersecurity and Infrastructure Security Agency (CISA) released an Insider Risk Mitigation Self-Assessment Tool today, which assists public and private sector organizations in assessing their vulnerability to an insider threat.  By answering a series of questions, users receive feedback they can use to gauge their risk posture.  The tool will also help users further understand the nature of insider threats and take steps to create their own prevention and mitigation programs.” reads the announcement published by CISA.
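For illustration only (a hypothetical sketch, not CISA’s actual tool or scoring model), a self-assessment of this kind essentially turns survey answers into a weighted posture score:

```python
# Hypothetical sketch, not CISA's tool: score a short insider-risk
# self-assessment and map the result to a rough posture rating.

QUESTIONS = {
    "Has leadership formally designated an insider threat program owner?": 3,
    "Are user access rights reviewed and revoked promptly on role change or exit?": 3,
    "Is anomalous access to sensitive data monitored and alerted on?": 2,
    "Do employees receive insider threat awareness training?": 1,
}

def assess(answers: dict[str, bool]) -> str:
    """answers maps each question to True (control in place) or False (missing)."""
    earned = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    possible = sum(QUESTIONS.values())
    ratio = earned / possible
    if ratio >= 0.8:
        return f"Mature posture ({earned}/{possible})"
    if ratio >= 0.5:
        return f"Developing posture ({earned}/{possible})"
    return f"High exposure ({earned}/{possible}) - prioritize a formal program"

# Toy input: assume only the heavier-weighted controls are in place.
print(assess({q: (w >= 2) for q, w in QUESTIONS.items()}))
```

The real tool asks a much longer series of questions, but the principle is the same: answers are translated into feedback an organization can use to gauge its risk posture.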

Cybersecurity Awareness Month 2021 Toolkit: Key messaging, articles, social media, and more to promote Cybersecurity Awareness Month 2021

Held every October, Cybersecurity Awareness Month is a collaborative effort between government and industry to ensure every American has the resources they need to stay safe and secure online while increasing the resilience of the Nation against cyber threats.
The Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Alliance (NCSA) co-lead Cybersecurity Awareness Month.


Tags: CISA, Cybersecurity Awareness Month 2021, Risk Mitigation Self-Assessment Tool


Jun 17 2021

Calculating Your Company’s Total Cybersecurity Risk Exposure

Category: Risk Assessment,Security Risk AssessmentDISC @ 12:20 pm

In the first part of my blog post I focused on calculating the impact of a cybersecurity breach in relation to a company’s size and industry. In part two, I present an approach to better understand how often a company will experience security breaches.

The probability is usually the big unknown. It doesn’t help that we are worse at estimating probabilities than we are at estimating damage. We also face a range of limitations as estimators: we don’t judge very small or very large magnitudes well (once in 1,000 years is harder to distinguish from once in 10,000 years than once per year is from once in 10 years), and we tend to overestimate the probability of incidents that occurred recently.

This great uncertainty drives some risk practitioners to reduce their risk assessments to pure impact assessments (“Estimations of probability can only be wrong!”). However, we can use the data that is out there and make comparisons.
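To make that concrete, here is a minimal sketch (my own illustration, not the article’s model): assume you can only bound breach frequency and per-breach impact within wide ranges, then combine them in a simple Monte Carlo estimate of expected annual loss.

```python
# Minimal sketch (illustration only): combining rough frequency and impact
# ranges into an expected annual loss, Monte Carlo style.
import random

def expected_annual_loss(freq_low, freq_high, impact_low, impact_high, trials=50_000):
    """Estimate expected annual loss from breach frequency and impact ranges.

    freq_low/freq_high: breaches per year (e.g. 0.1 = once in 10 years)
    impact_low/impact_high: cost per breach in dollars
    """
    total = 0.0
    for _ in range(trials):
        # Sample a breach frequency and a per-breach impact from the ranges.
        freq = random.uniform(freq_low, freq_high)
        impact = random.uniform(impact_low, impact_high)
        total += freq * impact
    return total / trials

# Example: somewhere between once in 10 years and once in 2 years,
# costing between $200k and $2M per incident.
print(f"Expected annual loss: ${expected_annual_loss(0.1, 0.5, 200_000, 2_000_000):,.0f}")
```

Even with wide input ranges, this kind of estimate is comparable across scenarios, which is usually what the risk decision actually needs.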

Source: Calculating Your Company’s Total Cybersecurity Risk Exposure

Tags: FAIR


May 25 2021

A leadership guide for mitigating security risks with low code platforms

Category: Risk Assessment,Security Risk AssessmentDISC @ 8:32 am

The lingering question of application security follows as stories of security breaches continue to pour in and remote teams across the world adopt low code for faster application delivery. Even as Gartner predicts that 65% of applications will be built using the low-code paradigm by 2024, it is important to understand the security implications that come with it and discuss how we can mitigate possible risks.

Most low code platforms enable non-technical users to build applications quickly and offer in-built security for various aspects of the application, such as APIs, data access, web front-ends, deployment, etc. Some go deeper with functionalities purpose-built for professional developers, with abilities to customize at a platform level. That said, no platform can claim to be the silver bullet when it comes to abstracting all security risks.

Business leaders should assess both the internal and external risks that arise and make sure guardrails are enforced to secure low code-built applications. Let’s discuss some of these in detail.
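As one example of such a guardrail, here is a hypothetical sketch (the manifest format and field names are invented, not taken from any particular low-code platform) of a pre-deployment check that flags components exposing unauthenticated APIs or requesting overly broad data access:

```python
# Hypothetical pre-deployment guardrail for low-code apps. The manifest
# structure and field names below are invented for illustration.

app_manifest = {
    "name": "expense-approval-app",
    "components": [
        {"type": "api_endpoint", "path": "/submit", "auth_required": True},
        {"type": "api_endpoint", "path": "/export", "auth_required": False},
        {"type": "data_connector", "scope": "all_tables", "purpose": "reporting"},
    ],
}

def lint_manifest(manifest: dict) -> list[str]:
    """Return a list of policy violations found in the app manifest."""
    findings = []
    for c in manifest["components"]:
        if c["type"] == "api_endpoint" and not c.get("auth_required", False):
            findings.append(f"Unauthenticated endpoint exposed: {c['path']}")
        if c["type"] == "data_connector" and c.get("scope") == "all_tables":
            findings.append("Data connector requests access to all tables; scope it down")
    return findings

for finding in lint_manifest(app_manifest):
    print("POLICY VIOLATION:", finding)
```

The point is not the specific checks but the pattern: citizen-developed apps go through the same kind of automated policy gate as traditionally developed code before they reach production.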

The Enterprise Risk Management Program (ERMP) Guide provides program-level risk management guidance that directly supports your organization’s policies, standardizes the management of cybersecurity risk, and includes an editable Microsoft Word template for baselining your organization’s risk management practices. Unfortunately, most companies lack a coherent approach to managing risks across the enterprise: when you look at getting audit-ready, your policies and standards only cover the “why?” and “what?” questions of an audit. This product addresses the “how” of how your company manages risk.

The ERMP provides clear, concise documentation that offers a “paint by numbers” approach to how your organization manages risk. It addresses the fundamentals of cybersecurity risk management: what is expected, how risk is defined, who can accept risk, how risk is calculated from potential impact and likelihood, and the steps necessary to reduce risk. Just as Human Resources publishes an employee handbook to let employees know what is expected of them from an HR perspective, the ERMP does the same from a cybersecurity risk management perspective. Regardless of whether your cybersecurity program aligns with NIST, ISO, or another framework, the ERMP is designed to address the strategic, operational, and tactical components of IT security risk management for any organization.

Policies and standards are absolutely necessary, but they fail to describe HOW risk is actually managed. The ERMP provides the middle ground between high-level policies and the day-to-day procedures followed by the individual contributors who execute risk-based controls.
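To make the impact-and-likelihood point concrete, here is a generic qualitative risk matrix sketch (my illustration, not the ERMP’s actual scale or methodology):

```python
# Generic qualitative risk matrix, for illustration only. Risk rating is
# looked up from likelihood x impact levels.

LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_rating(likelihood: str, impact: str) -> str:
    """Map qualitative likelihood and impact levels to a risk rating."""
    score = (LEVELS.index(likelihood) + 1) * (LEVELS.index(impact) + 1)  # 1..25
    if score >= 15:
        return "critical - escalate to the risk owner for acceptance or treatment"
    if score >= 8:
        return "high - treatment plan required"
    if score >= 4:
        return "moderate - monitor and review"
    return "low - accept within normal operations"

print(risk_rating("high", "moderate"))      # e.g. commodity malware on a non-critical system
print(risk_rating("very low", "very high")) # e.g. rare but severe breach of a crown-jewel database
```

A program document like the ERMP then specifies who is allowed to accept risk at each rating, which is what turns the matrix from a chart into an operating procedure.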


May 03 2021

Risk-based vulnerability management has produced demonstrable results

Category: Risk Assessment,Security Risk AssessmentDISC @ 7:44 am

Risk-based vulnerability management

Risk-based vulnerability management doesn’t ask “How do we fix everything?” It merely asks, “What do we actually need to fix?” A series of research reports from the Cyentia Institute have answered that question in a number of ways, finding for example, that attackers are more likely to develop exploits for some vulnerabilities than others.

Research has shown that, on average, about 5 percent of vulnerabilities actually pose a serious security risk. Common triage strategies, like patching every vulnerability with a CVSS score above 7, were in fact no better than chance at reducing risk.

But now we can say that companies using RBVM programs are patching a higher percentage of their high-risk vulnerabilities. That means they are doing more, and there’s less wasted effort. (Which is especially good because patch management is resource constrained.)
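As a toy illustration of the difference (my sketch, not the researchers’ methodology), a risk-based ranking weights evidence of exploitation and asset criticality instead of applying a flat CVSS cutoff:

```python
# Illustrative sketch: ranking vulnerabilities by evidence of exploitation
# and asset criticality rather than a flat CVSS threshold.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float              # base severity score (0-10)
    exploit_observed: bool   # exploit code or in-the-wild activity seen
    asset_criticality: int   # 1 (low) to 3 (business-critical)

def risk_score(v: Vuln) -> float:
    # Weight observed exploitation far more heavily than raw severity,
    # then scale by how critical the affected asset is.
    exploit_weight = 10.0 if v.exploit_observed else 1.0
    return v.cvss * exploit_weight * v.asset_criticality

backlog = [
    Vuln("CVE-A", cvss=9.8, exploit_observed=False, asset_criticality=1),
    Vuln("CVE-B", cvss=7.5, exploit_observed=True,  asset_criticality=3),
    Vuln("CVE-C", cvss=6.1, exploit_observed=True,  asset_criticality=2),
]

# A pure "CVSS above 7" policy would patch CVE-A first; a risk-based
# ranking surfaces CVE-B and CVE-C, which are actually being exploited.
for v in sorted(backlog, key=risk_score, reverse=True):
    print(v.cve_id, round(risk_score(v), 1))
```

Real programs draw the exploitation signal from threat intelligence and exploit prediction data rather than a single boolean, but the ordering effect is the same.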

The time it took companies to patch half of their high-risk vulnerabilities was 158 days in 2019. This year, it was 27 days.

And then there is another measure of success. Companies start vulnerability management programs with massive backlogs of vulnerabilities, and the number of vulnerabilities only grows each year. Last year, about two-thirds of companies using a risk-based system reduced their vulnerability debt or were at least treading water. This year, that number rose to 71 percent.

When a company discloses that its networks have been breached and its data has been stolen or encrypted for ransom, there is a steady drumbeat of critics. The company, these critics contend, is somehow at fault: its security team didn’t do EVERYTHING it could have to prevent the breach. The proof doesn’t lie in knowledge of what preventative steps the security team actually took, but in the fact that it got breached. Victim blaming was alive and well in cybersecurity.

Thankfully, this mindset is fading away. But when cybersecurity companies with risk-based approaches began entering the market, they faced headwinds from the security nihilism crowd who thought if you can’t fix everything, then “why bother?”

We can now say that, when it comes to vulnerability management – a complex, yet fundamental cybersecurity discipline – the risk-based approach has produced clear results. The proof is in the data.

Enterprises that use risk-based approaches to vulnerability management are getting faster and smarter at this foundational cybersecurity discipline. They are doing less work and seeing more impactful security improvements. It’s encouraging to see these year-over-year improvements and we believe this trend is likely to continue.

Risk Based Vulnerability Management 

Risk Based Vulnerability Management: A Complete Guide – 2019 Edition by Gerardus Blokdyk

Tags: Risk-based vulnerability management


Mar 30 2021

Risky business: 3 timeless approaches to reduce security risk in 2021

Category: Risk AssessmentDISC @ 9:57 pm

Steps to reduce security risk in 2021

A summary of the tactical and strategic moves CISOs can make to reduce security risk:

  • Look to reduce your “haystack” of threat avenues through smart policy enforcement. Consider DNS as a vector – for both attack and detection (see the sketch after this list)
  • Ensure that your cloud adoption strategy is coupled with sound cloud security policy and design
  • Educate your leadership team. “We aren’t a target” is equivalent to sticking your head in the sand.
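On the DNS point, here is an illustrative sketch (my own, not from the article) of using DNS query logs as a detection vector by flagging domain labels that look machine-generated:

```python
# Illustrative sketch: flag DNS queries whose leftmost label looks
# machine-generated (a common trait of DGA-based malware infrastructure).
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Entropy of the character distribution in a domain label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(domain: str, entropy_threshold: float = 3.5, min_len: int = 12) -> bool:
    # Long, high-entropy leftmost labels are worth a second look.
    label = domain.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) >= entropy_threshold

queries = ["mail.google.com", "xk2j9fq7w3zlp0ab.info", "intranet.example.org"]
for q in queries:
    print(q, "-> suspicious" if looks_suspicious(q) else "-> ok")
```

This is deliberately simplistic; real detections combine entropy with domain age, query volume, and threat intelligence. But it shows why DNS telemetry belongs in the detection toolkit as well as the policy-enforcement toolkit.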

Are you doing enough? Do you understand your risks? What if the brightest aren’t always the best choice for your company?

The Smartest Person in the Room: The Root Cause and New Solution for Cybersecurity by Christian Espinosa

Tags: reduce security risk

