Aug 10 2022

APIC/EPIC! Intel chips leak secrets even the kernel shouldn’t see

Here’s this week’s BWAIN, our jocular term for a Bug With An Impressive Name.

BWAIN is an accolade that we hand out when a new cybersecurity flaw not only turns out to be interesting and important, but also turns up with its own logo, domain name and website.

This one is dubbed ÆPIC Leak, a pun on the words APIC and EPIC.

The former is short for Advanced Programmable Interrupt Controller, and the latter is simply the word “epic”, as in giantmassiveextrememegahumongous.

The letter Æ hasn’t been used in written English since Saxon times. Its name is æsc, pronounced ash (as in the tree), and it pretty much represents the sound of the A in the modern word ASH. But we assume you’re supposed to pronounce the word ÆPIC here either as “APIC-slash-EPIC”, or as “ah!-eh?-PIC”.

What’s it all about?

All of this raises five fascinating questions:

  • What is an APIC, and why do I need it?
  • How can you have data that even the kernel can’t peek at?
  • What causes this epic failure in APIC?
  • Does the ÆPIC Leak affect me?
  • What to do about it?

What’s an APIC?

Let’s rewind to 1981, when the IBM PC first appeared.

The PC included a chip called the Intel 8259A Programmable Interrupt Controller, or PIC. (Later models, from the PC AT onwards, had two PICs, chained together, to support more interrupt events.)

The purpose of the PIC was quite literally to interrupt the program running on the PC’s central processor (CPU) whenever something time-critical took place that needed attention right away.

These hardware interrupts included events such as: the keyboard getting a keystroke; the serial port receiving a character; and a repeating hardware timer ticking over.

Without a hardware interrupt system of this sort, the operating system would need to be littered with function calls to check for incoming keystrokes on a regular basis, which would be a waste of CPU power when no one was typing, but wouldn’t be responsive enough when someone was.
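The difference can be sketched in a few lines of Python (a toy model for illustration only; real PICs and kernels work at the hardware level): a polling design must check every device on every pass, while an interrupt-style design only runs a handler when a device actually raises its line.

```python
# Toy model of interrupt dispatch: a handler table keyed by IRQ line,
# in the spirit of the 8259A's eight inputs. Work happens only when a
# line actually fires, instead of polling every device constantly.
handlers = {}

def register(irq, fn):
    """Install a handler for a given interrupt line."""
    handlers[irq] = fn

def raise_irq(irq):
    """The PIC's job in miniature: route the event to its handler."""
    return handlers[irq]()

register(0, lambda: "timer tick handled")  # IRQ 0: system timer on the PC
register(1, lambda: "keystroke handled")   # IRQ 1: keyboard on the PC

print(raise_irq(1))
```

Until `raise_irq` fires, the CPU in this model does no keyboard-related work at all, which is the whole point of the interrupt approach.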

As you can imagine, the PIC was soon followed by an upgraded chip called the APIC, an advanced sort of PIC built into the CPU itself.

These days, APICs provide much more than just feedback from the keyboard, serial port and system timer.

APIC interrupts are triggered by (and provide real-time data about) events such as overheating, and they allow hardware interaction between the different cores in contemporary multicore processors.

And today’s Intel chips, if we may simplify greatly, can generally be configured to work in two different ways, known as xAPIC mode and x2APIC mode.

Here, xAPIC is the “legacy” way of extracting data from the interrupt controller, and x2APIC is the more modern way.

Simplifying yet further, xAPIC relies on what’s called MMIO, short for memory-mapped input/output, for reading data out of the APIC when it registers an event of interest.

In MMIO mode, you can find out what triggered an APIC event by reading from a specific region of memory (RAM), which mirrors the input/output registers of the APIC chip itself.

This xAPIC data is mapped into a 4096-byte memory block somewhere in the physical RAM of the computer.
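Concretely, the xAPIC registers live at fixed offsets inside that 4096-byte page, which by default sits at physical address 0xFEE00000 (offsets as documented by Intel). The snippet below is a sketch of the address arithmetic only; actually reading the page requires kernel privileges and a mapping of physical memory:

```python
XAPIC_BASE = 0xFEE00000  # default physical base of the 4 KB xAPIC MMIO page

# A few well-known register offsets within the page (per Intel's docs):
XAPIC_REGS = {
    "APIC ID":         0x020,
    "Version":         0x030,
    "EOI":             0x0B0,
    "Spurious Vector": 0x0F0,
}

def xapic_address(name):
    """Physical address of an xAPIC register in the memory-mapped page."""
    return XAPIC_BASE + XAPIC_REGS[name]

print(hex(xapic_address("EOI")))
```

Every register fits in this single 4 KB window, which is precisely the size limit mentioned later on.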

This simplifies accessing the data, but it requires an annoying, complex (and, as we shall see, potentially dangerous) interaction between the APIC chip and system memory.

In contrast, x2APIC requires you to read out the APIC data directly from the chip itself, using what are known as Model Specific Registers (MSRs).

According to Intel, avoiding the MMIO part of the process “provides significantly increased processor addressability and some enhancements on interrupt delivery.”

Notably, extracting the APIC data directly from on-chip registers means that the total amount of data supported, and the maximum number of CPU cores that can be managed at the same time, is not limited to the 4096 bytes available in MMIO mode.
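In x2APIC mode the same registers are reached as MSRs instead: Intel maps the legacy xAPIC MMIO offset 0xXY0 to MSR 0x800 + 0xXY, so for example the EOI register at offset 0xB0 becomes MSR 0x80B. A one-function sketch of that mapping:

```python
def x2apic_msr(xapic_offset):
    """MSR address for an APIC register, given its legacy xAPIC MMIO
    offset. Intel maps offset 0xXY0 to MSR 0x800 + 0xXY, putting the
    whole x2APIC register set in the MSR range 0x800-0x8FF."""
    return 0x800 + (xapic_offset >> 4)

print(hex(x2apic_msr(0xB0)))  # EOI register
```

Because MSRs are read with a dedicated instruction (RDMSR) rather than through a memory page, the 4096-byte MMIO window no longer constrains the register set.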

Tags: Cryptography, Data loss

Nov 02 2021

50% of internet-facing GitLab installations are still affected by an RCE flaw

Cybersecurity researchers warn of a now-patched critical remote code execution (RCE) vulnerability, tracked as CVE-2021-22205, in GitLab’s web interface that has been actively exploited in the wild.

The vulnerability is an improper validation issue of user-provided images that can lead to arbitrary code execution. It affects all versions starting from 11.9.

“An issue has been discovered in GitLab CE/EE affecting all versions starting from 11.9. GitLab was not properly validating image files that is passed to a file parser which resulted in a remote command execution. This is a critical severity issue (AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H, 9.9). It is now mitigated in the latest release and is assigned CVE-2021-22205.” reads the advisory published by GitLab.
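The root cause was image data reaching a parser without being checked. As a purely illustrative sketch (not GitLab’s actual fix), a server might at least confirm that a file’s magic bytes match its claimed type before handing it to any metadata parser:

```python
# Hypothetical upload check: compare a file's magic bytes against its
# claimed extension before passing it to a parser (illustration only).
MAGIC = {
    "png": b"\x89PNG\r\n\x1a\n",
    "jpg": b"\xff\xd8\xff",
    "gif": b"GIF8",
}

def looks_like(claimed_ext, data):
    """True if the file content starts with the signature expected
    for the claimed extension; False otherwise."""
    sig = MAGIC.get(claimed_ext.lower())
    return sig is not None and data.startswith(sig)

print(looks_like("png", b"\x89PNG\r\n\x1a\n" + b"rest-of-file"))
print(looks_like("jpg", b"AT&TFORM"))  # a DjVu header posing as a JPEG
```

In the real attacks, a DjVu file disguised as an image was enough to reach the vulnerable metadata parser, which is exactly the class of mismatch a check like this flags.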

GitLab addressed the vulnerability on April 14, 2021, with the release of versions 13.8.8, 13.9.6, and 13.10.3.
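Assuming the fixed releases listed above (13.8.8, 13.9.6, and 13.10.3) and the 11.9 starting point from the advisory, a rough version check might look like this (a sketch, not an official tool):

```python
# Patched releases per maintained branch (from the GitLab advisory):
FIXED = {(13, 8): (13, 8, 8), (13, 9): (13, 9, 6), (13, 10): (13, 10, 3)}

def is_vulnerable(version):
    """Rough check against CVE-2021-22205: affected from 11.9 until the
    patched release on each maintained branch (sketch only)."""
    v = tuple(int(p) for p in version.split("."))
    if v < (11, 9):
        return False               # flaw was introduced in 11.9
    branch = v[:2]
    if branch in FIXED:
        return v < FIXED[branch]   # below the patched release?
    return v < (13, 8)             # older, unmaintained branches

print(is_vulnerable("13.10.2"))
print(is_vulnerable("13.10.3"))
```

Anything older than the three patched releases, back to 11.9, should be treated as exposed.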

The vulnerability was reported by the researcher vakzz through the company’s bug bounty program, which is operated on the HackerOne platform.

The vulnerability has been actively exploited in the wild: researchers from HN Security described an attack against one of their customers. Threat actors created two user accounts with admin privileges on a publicly accessible GitLab server belonging to the organization, exploiting the flaw to upload a malicious payload that led to remote execution of arbitrary commands.

“Meanwhile, we noticed that a recently released exploit for CVE-2021-22205 abuses the upload functionality in order to remotely execute arbitrary OS commands. The vulnerability resides in ExifTool, an open source tool used to remove metadata from images, which fails in parsing certain metadata embedded in the uploaded image, resulting in code execution as described here.” reads the analysis published by HN Security.

The flaw was initially rated with a CVSS score of 9.9, but the score was later raised to 10.0 because the issue could be triggered by an unauthenticated attacker.

Researchers from Rapid7 reported that, of the 60,000 internet-facing GitLab installations they identified, roughly half remain unpatched against this vulnerability.


Tags: Gitlab, Gitlab vulnerability

Jun 07 2021

The evolution of cybersecurity within network architecture

Category: Security Architecture | DISC @ 10:09 am

A decade ago, security officers would have been able to identify the repercussions of an attack almost immediately, as most took place in the top-level layers of a system, typically through a malware attack. Now however, threat actors work over greater lengths of time, with much broader, long-term horizons in mind.

Leaders can no longer assume that their business systems are safe. The only certainty is that nothing is certain. The past year has been evidence of that, as large, well-trusted companies have faced catastrophic breaches, such as the SolarWinds and Microsoft attacks. These organizations were believed to have some of the best systems installed to protect their data, yet they were still successfully infiltrated.

Threat actors are also moving through underlying networks, passing from router to router and accessing data stored far below the top level of a system. The refinement of their attacks means that businesses can remain unaware of a breach for longer periods of time, increasing the amount of damage that can be done.

Businesses should take all necessary security precautions, assume that anything is possible, and devise their security plans around the worst-case scenario. This means adopting the attitude that any one employee could be a hacker’s key to company systems. Anyone could fall for one of the increasingly sophisticated attacks and click on a phishing email, opening the door to a cascade of malicious activity.

Visibility and analytics

Moving forwards, visibility and analytics will be instrumental in strengthening a business’ security approach. These elements deliver invaluable insights into a company’s security standpoint and can help identify any vulnerabilities that have gone unnoticed. Where security and connectivity within an organization have been the two main focus points of leaders, visibility and analytics have now become the third and fourth fundamental elements.

The value of this information cannot be overstated. For a company that has identified a breach attempt and shut all systems down, the first challenge is understanding how far the criminals got before being detected, and what data was accessed.

When businesses face threats from ransomware attackers and enter negotiations, it helps to have an overview of all business systems. For example, if an attack took place over one week and a company is able to see all incoming and outgoing traffic, then it can deduce roughly how far the criminals could have got.

This could be vital in seeing through any deceptions from the hackers, who may claim to have accessed ten terabytes of data, when realistically they may only have secured a couple of files before being shut out. Only with complete visibility will businesses be able to counter a criminal’s threat.
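As a toy illustration of that point (using hypothetical flow records, not any specific product’s log format), even a simple tally of outbound bytes over the attack window puts a hard ceiling on how much data could have left the network:

```python
# Hypothetical outbound flow records for the attack window:
# (timestamp, destination, bytes_out)
flows = [
    ("2021-06-01T02:14", "203.0.113.7",  48_000),
    ("2021-06-01T02:19", "203.0.113.7",  310_000),
    ("2021-06-02T23:41", "198.51.100.9", 1_200_000),
]

# Sum every outbound byte seen during the window: no exfiltration
# claim can exceed this total if visibility was complete.
total_out = sum(bytes_out for _, _, bytes_out in flows)
print(f"upper bound on exfiltration: {total_out / 1_000_000:.2f} MB")
```

A claim of “ten terabytes stolen” cannot be true if only a couple of megabytes ever left the network during the window, which is exactly the leverage complete visibility provides in a negotiation.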

Strengthening the architecture

There are a number of pathways that organizations can take to strengthen their network architecture against threats. Zero-trust approaches are highly recommended for businesses, especially in the age of remote working, as a way of limiting privileged accounts and the general amount of data left easily accessible. Requesting authentication before access not only protects the business’ external perimeter, but also any risks that exist within as well.

A lot of businesses will find themselves needing to re-address the very foundations of their infrastructure before any additional approaches can be taken. Integration is a massive part of strengthening a company’s network architecture as most will have existing technologies that will need to be combined into one fully functioning capability.

Not only will this allow for greater accessibility and flexibility, but it will also simplify the systems so that they are easier to manage. Achieving this integration will provide businesses with greater visibility into their platforms, making it significantly easier to identify and defend against incoming cyber threats.

Ensuring a secure future

Solutions such as Secure Access Service Edge (SASE) can assist in the strengthening of network architecture. SASE is the integration of networking and security solutions, such as zero trust and firewall-as-a-service (FWaaS), into a single service that can be delivered entirely through the cloud. This ability to deploy through the cloud allows for greater flexibility, making it easy to apply security services wherever they are needed. As a lot of applications used are cloud-based, including collaborative communications, seamless and secure transition to and from the cloud are crucial.

Cybersecurity will likely become more of a process model that is part of every new project. It will become embedded in every business area, regardless of what its main function is. In such an extreme and sophisticated threat landscape, simply educating employees and home workers about security risks cannot be relied upon to protect companies from malicious attacks.

In an era where cybersecurity attacks are inevitable, strong network architecture and end-to-end visibility are the fundamentals to a resilient security posture. Providing a single point of control using solutions such as SASE will enable businesses to create a more streamlined network architecture, whether from remote locations or within the office. Regardless of their current standpoint, all businesses should be working towards one goal – implementing a business approach that combines the three crucial elements: security, network and visibility.

Tags: cybersecurity within network architecture, Network security architecture