Aug 01 2023

A step-by-step guide for patching software vulnerabilities

Category: Security patching, Security vulnerabilities | disc7 @ 8:59 am

Coalition's Cyber Threat Index 2023 anticipates a 13% increase in the average rate of new Common Vulnerabilities and Exposures (CVEs) compared to 2022, projecting that it will surpass 1,900 per month in 2023. This surge in CVEs poses a challenge for organizations as they grapple with managing the release of thousands of patches and updates every month.

Streamline your patch management process

First, a quick disclaimer: proper patch management depends on factors such as the size of the organization, the complexity of the IT environment, the criticality of its systems, and the resources allocated to manage it all, so plan accordingly. This advice also assumes you already have some sort of endpoint management solution or function in place for deploying patches. If not, that's step one.

Assuming you have a solution in place, the next step is to evaluate and prioritize patches.

Not all vulnerabilities are created equal, which means not all patches are either. But as the WannaCry outbreak demonstrated, delayed patching can have catastrophic consequences. It's therefore important to prioritize updates that address the most severe non-superseded vulnerabilities and/or the greatest exposure in each environment. For example, if one critical update affects only a few devices out of a thousand and another affects 80% of devices, focus first on the one that could have the biggest negative impact, then address the others.

Once the critical updates are addressed, plan to move on to the non-critical patches, which are often driver updates or new software that enhances the user experience, and prioritize those based on their importance to business operations.

Many use the Common Vulnerability Scoring System (CVSS) to help prioritize updates, which is a good starting point. Just remember that vulnerabilities rated at a medium severity level are often ignored, only to be found later as the source of a breach.
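To make the prioritization step concrete, here is a minimal sketch that blends CVSS severity with exposure across the estate, so that a widely exposed medium-severity flaw is not automatically buried. The 60/40 weighting and the sample updates are illustrative assumptions, not a standard formula.

```python
# A minimal sketch of risk-based patch prioritization. The 60/40 weighting and
# the sample data are illustrative assumptions, not a standard formula.
from dataclasses import dataclass


@dataclass
class Update:
    name: str
    cvss: float            # highest CVSS base score addressed by the update (0-10)
    affected_devices: int  # devices in this environment that need the update
    total_devices: int     # devices in scope

    @property
    def exposure(self) -> float:
        """Fraction of the estate exposed by the underlying vulnerability."""
        return self.affected_devices / self.total_devices

    @property
    def priority(self) -> float:
        """Blend severity and exposure; tune the weights to your environment."""
        return 0.6 * (self.cvss / 10.0) + 0.4 * self.exposure


updates = [
    Update("critical, narrow impact", cvss=9.8, affected_devices=12, total_devices=1000),
    Update("critical, broad impact", cvss=9.1, affected_devices=800, total_devices=1000),
    Update("medium, broad impact", cvss=5.4, affected_devices=900, total_devices=1000),
]

for u in sorted(updates, key=lambda u: u.priority, reverse=True):
    print(f"{u.name}: priority {u.priority:.2f} (CVSS {u.cvss}, exposure {u.exposure:.0%})")
```

Note that in this toy data the broadly exposed medium-severity update outranks the critical update that touches only a dozen devices, which is exactly the kind of call a pure CVSS sort would miss.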

Once you’ve prioritized the types of updates, the next step is to create guidelines for testing them before they go into production.

The last thing you want to do is break a system. Start by researching the criteria of each update and identifying which components require testing. Next, install each update on at least five off-network devices and test it against defined success criteria. Record the evidence and have a second person review it. Also check whether the update has an uninstaller, and use it to ensure outdated programs can be removed completely and safely.
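For illustration, here is a minimal sketch of what such a test record might look like, assuming five off-network test devices and a second-person review. The field names and sample values are invented.

```python
# A minimal sketch of a pre-production test record, assuming five off-network
# test devices and a second-person review. Field names and values are invented.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestRecord:
    update_id: str
    test_devices: list[str]                   # off-network devices used for testing
    success_criteria: dict[str, bool] = field(default_factory=dict)
    has_uninstaller: bool = False             # can the update be rolled back cleanly?
    tester: str = ""
    reviewer: str = ""                        # second person who checked the evidence
    tested_on: date = field(default_factory=date.today)

    def approved_for_production(self) -> bool:
        """Pass only if every criterion succeeded on enough devices, a rollback
        path exists, and a second person has reviewed the evidence."""
        return (
            len(self.test_devices) >= 5
            and bool(self.success_criteria)
            and all(self.success_criteria.values())
            and self.has_uninstaller
            and bool(self.reviewer)
        )


record = TestRecord(
    update_id="UPD-2023-001",
    test_devices=["lab-01", "lab-02", "lab-03", "lab-04", "lab-05"],
    success_criteria={"installs cleanly": True, "app launches": True, "no reboot loop": True},
    has_uninstaller=True,
    tester="j.doe",
    reviewer="a.smith",
)
print("Approved for production:", record.approved_for_production())
```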

If you're like most organizations, you'll have a steady stream of updates and patches to deploy. But the more updates you install at any given time, the greater the risk of end-user disruption (larger volumes of data to download, longer installation times, system reboots, and so on).

Therefore, the next step is to assess your bandwidth: weigh the total number and size of the updates against the number and types of devices receiving them. This helps prevent overloading the system. When in doubt, start with five updates and then reassess.
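A back-of-the-envelope calculation like the following sketch can flag an overload before it happens. The patch sizes, device count, link capacity, and maintenance window are made-up numbers.

```python
# A back-of-the-envelope bandwidth check. Patch sizes, device count, link
# capacity, and the maintenance window are made-up numbers for illustration.

patch_sizes_mb = [350, 120, 1200, 80, 45]   # the five updates in this batch
device_count = 1000                          # endpoints receiving the batch
link_capacity_mbps = 200                     # effective distribution capacity
window_hours = 8                             # length of the maintenance window

total_to_distribute_gb = sum(patch_sizes_mb) * device_count / 1024
window_capacity_gb = link_capacity_mbps / 8 * 3600 * window_hours / 1024

print(f"Total to distribute: {total_to_distribute_gb:,.0f} GB")
print(f"Window capacity:     {window_capacity_gb:,.0f} GB")
if total_to_distribute_gb > window_capacity_gb:
    print("Batch exceeds the window: split it, stagger rings, or use peer caching.")
else:
    print("Batch fits within the window.")
```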

Additionally, if you follow change management best practices (such as ITIL or PRINCE2, or workflows built into a platform like ServiceNow), it's important to adhere to those processes for proper reporting and auditability. They usually require documentation of which updates are needed, the impact on users, evidence of testing, and go-live schedules. Capturing this data properly through the steps above is often required for official approvals, as it serves as a single source of truth.

We’ve now gotten to the point of deployment. The next step is to ensure deployment happens safely. I recommend using a patch management calendar when making change requests and when scheduling or reviewing new patch updates. This is where you define the baselines for the number of updates to be deployed and in which order. This should utilize information gathered from previous steps. Once that baseline is set, you can schedule the deployment and automate where necessary.
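For illustration, here is a minimal sketch of a patch calendar that enforces such a baseline, assuming a cap of five updates per window and three deployment rings. The ring names, dates, and update IDs are invented for the example.

```python
# A minimal sketch of a patch calendar enforcing a deployment baseline: at most
# five updates per window, rolled out through three rings. Ring names, dates,
# and update IDs are invented for the example.
from datetime import date, timedelta

BASELINE_PER_WINDOW = 5                       # maximum updates deployed per window
RINGS = ["pilot", "early-adopters", "broad"]

# Output of the earlier prioritization step, highest priority first.
prioritized_updates = [f"UPD-{i:03d}" for i in range(1, 13)]

schedule = []
window_start = date(2023, 8, 7)
for i in range(0, len(prioritized_updates), BASELINE_PER_WINDOW):
    batch = prioritized_updates[i:i + BASELINE_PER_WINDOW]
    for offset, ring in enumerate(RINGS):
        schedule.append({
            "date": window_start + timedelta(days=2 * offset),  # pilot first, broad last
            "ring": ring,
            "updates": batch,
        })
    window_start += timedelta(weeks=1)        # next batch gets its own window

for entry in schedule:
    print(entry["date"], f"{entry['ring']:<15}", ", ".join(entry["updates"]))
```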

At last, we've made it to the final step: measuring success. This can be handled in a variety of ways, for example by the number of help desk incidents logged, the ease with which the process can be followed or repeated, or the number of positive reports produced by your toolsets. Ultimately, what matters is swift deployment, streamlined and repeatable processes, a reduction in manual work, and, in the end, an organization that is less vulnerable to exploitation.
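As a rough illustration, here is a minimal sketch of the kind of post-deployment metrics that can back those measures. The device counts, incident figures, and 95% target are made-up assumptions.

```python
# A minimal sketch of post-deployment success metrics, assuming the endpoint
# management tool reports per-device patch status. All numbers are invented.

devices_in_scope = 1000
devices_fully_patched = 963
helpdesk_incidents = 7        # incidents traced back to this patch cycle
days_to_full_rollout = 5      # from approval of the batch to last deployment

compliance = devices_fully_patched / devices_in_scope
incident_rate = helpdesk_incidents / devices_in_scope

print(f"Patch compliance:     {compliance:.1%}")
print(f"Incident rate:        {incident_rate:.2%} of devices")
print(f"Time to full rollout: {days_to_full_rollout} days")
if compliance < 0.95:
    print("Below the 95% target: investigate unreachable or failing devices.")
```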

A quick note on where patching often goes awry

Believe it or not, some organizations still allow users to have local admin rights for patching. This significantly expands the attack surface, and the reality is that no IT team should rely on end users for patching (blanket admin rights are simply too risky).

Others rely on free tools, but these often lack the capabilities needed for secure patching. They also generally don't provide the reporting needed to validate that systems are 100% patched. And finally, there is an over-reliance on auto-updates, which can create a false sense of security and impact productivity if they are triggered during work hours.

Conclusion

Whether large or small, organizations continue to struggle with patching. I hope this quick step-by-step guide of key considerations for patch management helps your organization create a new framework or optimize an existing one.


Tags: Vulnerability Management Program


Nov 11 2022

How can CISOs catch up with the security demands of their ever-growing networks?

Category: CISO, CISSP, vCISO | DISC @ 11:12 am

Vulnerability management has always been as much art as science. However, the rapid changes in both IT networks and the external threat landscape over the last decade have made it exponentially more difficult to identify and remediate the vulnerabilities with the greatest potential impact on the enterprise.

With a record 18,378 vulnerabilities reported in the National Vulnerability Database in 2021 and an influx of new attack techniques targeting increasingly complex and distributed environments, how can CISOs know where to start?

Why has maintaining network visibility become such a challenge?

Heavy investments in digital transformation and cloud migration have brought significant, foundational changes to the enterprise IT environment. Gartner predicts end-user spending on public cloud services will reach almost $600 billion in 2023, up from an estimated $494.7 billion this year and $410.9 billion in 2021.

Long gone are the days when security teams could concern themselves only with connections to and from the data center; now they must establish effective visibility and control over a sprawling, complex network that includes multiple public clouds, SaaS services, legacy infrastructure, the home networks of remote users, and more. Corporate assets are no longer limited to servers, workstations, and a few printers; teams must now secure virtual machines on premises and in the cloud, IoT devices, mobile devices, microservices, cloud data stores, and much more, making visibility and monitoring infinitely more complex and challenging.

In many cases, security investments have not kept up with the rapid increase in network scope and complexity. In other cases, agile processes have outpaced security controls. As a result, security teams struggle to achieve effective visibility and control of their networks, which leads to misconfigurations, compliance violations, unnecessary risk, and improperly prioritized vulnerabilities that give threat actors easy attack paths.

Adversaries are specifically targeting these blind spots and security gaps to breach the network and evade detection.

What are the most common mistakes being made in attempting to keep up with threats?

With the average cost of a data breach climbing to $4.35 million in 2022, CISOs and their teams are under extraordinary pressure to reduce cyber risk as much as possible. But many are hindered by a lack of comprehensive visibility, or by pressure to deliver agility beyond what can be achieved without compromising security. One of the most common issues we encounter is an inability to accurately prioritize vulnerabilities based on the actual risk they pose to the enterprise. With thousands of vulnerabilities discovered every year, determining which need to be patched and which can be accepted as incremental risk is a critical process.

The Common Vulnerability Scoring System (CVSS) has become a useful guidepost, providing security teams with generalized information for each vulnerability. Prioritizing the vulnerabilities with the highest CVSS score may seem like a logical and productive approach. However, every CISO should recognize that CVSS scores alone are not an accurate way to measure the risk a vulnerability poses to their individual enterprise.

To accurately measure risk, more contextual information is required. Security teams need to understand how a vulnerability relates to their specific environment. While high-profile threats like Heartbleed may seem like an obvious priority, a less publicized vulnerability with a lower CVSS score on an Internet-facing system in the DMZ may expose the enterprise to greater actual risk.
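Here is a minimal sketch of how that context might be folded into a score. The exposure and criticality multipliers are illustrative assumptions, not part of the CVSS specification or any particular product.

```python
# A minimal sketch of contextual risk scoring: the same CVSS base score lands
# very differently depending on where the asset sits and what it does. The
# multipliers below are illustrative assumptions, not part of the CVSS spec.

def contextual_risk(cvss: float, internet_facing: bool, criticality: float) -> float:
    """Scale a CVSS base score (0-10) by exposure and business criticality (0-1).
    The result is a relative ranking value, not a bounded CVSS score."""
    exposure_factor = 1.5 if internet_facing else 0.8
    return cvss * exposure_factor * (0.5 + criticality)

findings = [
    # (description, CVSS base score, internet-facing?, business criticality)
    ("Heartbleed on an isolated internal test box", 7.5, False, 0.2),
    ("Lower-scored flaw on a DMZ web server", 6.1, True, 0.9),
]

for name, cvss, exposed, crit in findings:
    score = contextual_risk(cvss, exposed, crit)
    print(f"{name}: CVSS {cvss} -> contextual risk {score:.1f}")
```

In this toy comparison, the DMZ finding outranks the higher-scored internal one, which is the point the paragraph above makes.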

These challenges are exacerbated by the fact that IT and security teams often lose track of assets and applications as ownership is pushed to new enterprise teams and the cloud makes it easier than ever for anyone in the enterprise to spin up new resources. As a result, many enterprises are riddled with assets that are unmonitored and remain dangerously behind on security updates.

Why context is critical

With resources like the National Vulnerability Database at their fingertips, no CISO lacks for data on vulnerabilities. In fact, most enterprises do not lack contextual data either. Enterprise security, IT, and GRC stacks provide a continuous stream of data that can be leveraged in vulnerability management processes. However, these raw streams of data must be carefully curated and combined with vulnerability information to be turned into actionable context, and it is in this process that many enterprises falter.

Unfortunately, most enterprises do not have the resources to patch every vulnerability. In some circumstances, there may be a business case for not patching a vulnerability immediately, or at all. Context from information sources across the enterprise enables standardized risk decisions to be made, allowing CISOs to allocate their limited resources where they will have the greatest impact on the security of the enterprise.

Making the most of limited resources with automation

There was a time when a seasoned security professional could instinctively assess the contextual risk of a threat based on their experience and familiarity with the organization's infrastructure. However, this approach cannot scale with the rapid expansion of the enterprise network and the growing number of vulnerabilities that must be managed. Even before the ongoing global security skills shortage, no organization had the resources to manually aggregate and correlate thousands of fragments of data into actionable context.

In today’s constantly evolving threat landscape, automation offers the best chance for keeping up with vulnerabilities and threats. An automated approach can pull relevant data from the security, IT, and GRC stacks and correlate it into contextualized information which can be used as the basis for automated or manual risk decisions.
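For illustration, here is a minimal sketch of that correlation step, assuming scanner findings and a CMDB-style asset inventory are already available as structured data. All field names, identifiers, and records are invented placeholders.

```python
# A minimal sketch of the correlation step: joining scanner findings with a
# CMDB-style asset inventory to produce contextualized records that a risk
# decision (automated or manual) can act on. All data here is invented.

scanner_findings = [
    {"asset": "web-dmz-01", "cve": "CVE-2023-0001", "cvss": 6.1},   # placeholder CVE IDs
    {"asset": "hr-db-02", "cve": "CVE-2023-0002", "cvss": 9.8},
]

asset_inventory = {  # typically pulled from a CMDB, cloud APIs, or the GRC stack
    "web-dmz-01": {"internet_facing": True, "owner": "web-team", "criticality": "high"},
    "hr-db-02": {"internet_facing": False, "owner": "hr-it", "criticality": "critical"},
}

def contextualize(findings, inventory):
    """Attach asset context to each finding; unknown assets get flagged."""
    for finding in findings:
        context = inventory.get(finding["asset"], {"unknown_asset": True})
        yield {**finding, **context}

for record in contextualize(scanner_findings, asset_inventory):
    print(record)
```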



Tags: CISO, Vulnerability Management Program