Hacker tournament brings together world’s best in Las Vegas

DEF CON hacking conference in Las Vegas

(Reuters) – A team of hackers from two North American universities won the “Capture the Flag” championship, a contest seen as the “Olympics of hacking,” which draws together some of the world’s best in the field.

In the carpeted ballroom of one of the largest casinos in Las Vegas, the few dozen hackers competing in the challenge sat hunched over laptops from Friday through Sunday during the DEF CON security conference that hosts the event.

The winning team, called Maple Mallard Magistrates, included participants from Carnegie Mellon University, its alumni, and the University of British Columbia.

The contest involves breaking into custom-built software designed by the tournament organizers. Participants must not only find bugs in the program but also defend themselves from hacks coming from other competitors.

The hackers, mostly young men and women, included visitors from China, India, Taiwan, Japan and South Korea. Some worked for their respective governments, some for private firms and others were college students.

While their countries may be engaged in cyber espionage against one another, the DEF CON CTF contest allows elite hackers to come together in the spirit of sport.

The reward is not money, but prestige. “No other competition has the clout of this one,” said Giovanni Vigna, a participant who teaches at the University of California, Santa Barbara. “And everybody leaves politics at home.”

“You will easily find a participant here going to another who may be from a so-called enemy nation to say ‘you did an amazing job, an incredible hack.'”

The game has taken on new meaning in recent years as cybersecurity has been elevated as a national security priority by the United States, its allies and rivals. Over the last 10 years, the cybersecurity industry has boomed in value as hacking technology has evolved.

Winning the title is a lifelong badge of honor, said Aaditya Purani, a participant who works as an engineer at electric car maker Tesla Inc (TSLA.O).

This year’s contest was broadcast for the first time on YouTube, with accompanying live commentary in the style of televised sports.

DEF CON itself, which began as a meetup of a few hundred hackers in 1993, was organized across four casinos this year and drew a crowd of more than 30,000, according to organizing staff.

On Saturday afternoon, participants at the “Capture the Flag” contest sat typing into their laptops as conference attendees streamed in and out of the room to watch. Some participants took their meals at the tables, munching on hamburgers and fries with their eyes fixed on screens.

Seungbeom Han, a systems engineer at Samsung Electronics, who was part of a South Korean team, said it was his first time at the contest and it had been an honor to qualify.

The competition was intense, and sitting in a chair for eight hours a day was not easy. They did take bathroom breaks, he said with a laugh, “but they are a waste of time.”

Reporting by Zeba Siddiqui in Las Vegas; editing by Matthew Lewis

https://www.reuters.com/technology/hacker-tournament-brings-together-worlds-best-las-vegas-2022-08-17/

The Hacker Quarterly


PoC exploit code for critical Realtek RCE flaw released online


Exploit code for a critical vulnerability affecting networking devices based on Realtek’s RTL819x system on a chip has been released online.

The PoC exploit code for a critical stack-based buffer overflow issue, tracked as CVE-2022-27255 (CVSS 9.8), affecting networking devices using Realtek’s RTL819x system on a chip was released online. The issue resides in Realtek’s SDK for the open-source eCos operating system and was discovered by researchers from cybersecurity firm Faraday Security.

“On Realtek eCos SDK-based routers, the ‘SIP ALG’ module is vulnerable to buffer overflow. The root cause of the vulnerability is insufficient validation on the received buffer, and unsafe calls to strcpy. The ‘SIP ALG’ module calls strcpy to copy some contents of SIP packets to a predefined fixed buffer and does not check the length of the copied contents.” reads the advisory published by Realtek, which addressed the issue in March 2022. “A remote attacker can exploit the vulnerability through a WAN interface by crafting arguments in SDP data or the SIP header to make a specific SIP packet, and the successful exploitation would cause a crash or achieve the remote code execution.”

Millions of devices, including routers and access points, are exposed to hacking.

The experts (Octavio Gianatiempo, Octavio Galland, Emilio Couto, and Javier Aguinaga) disclosed technical details of the flaw at the DEFCON hacker conference last week.

A remote attacker can exploit the flaw to execute arbitrary code without authentication by sending specially crafted SIP packets with malicious SDP data to vulnerable devices.

The issue is particularly dangerous because exploitation doesn’t require user interaction.

The PoC code developed by the experts works against Nexxt Nebula 300 Plus routers.

“This repository contains the materials for the talk “Exploring the hidden attack surface of OEM IoT devices: pwning thousands of routers with a vulnerability in Realtek’s SDK for eCos OS.”, which was presented at DEFCON30.” reads the description provided with the exploit code on GitHub.

The repo includes:

  • analysis: Automated firmware analysis to detect the presence of CVE-2022-27255 (Run analyse_firmware.py).
  • exploits_nexxt: PoC and exploit code. The PoC should work on every affected router, however the exploit code is specific for the Nexxt Nebula 300 Plus router.
  • ghidra_scripts: Vulnerable function call searching script and CVE-2022-27255 detection script.
  • DEFCON: Slide deck & poc video.

Johannes Ullrich, Dean of Research at SANS, shared a Snort rule that can be used to detect PoC exploit attempts.

“The rule looks for “INVITE” messages that contain the string “m=audio “. It triggers if there are more than 128 bytes following the string (128 bytes is the size of the buffer allocated by the Realtek SDK) and if none of those bytes is a carriage return. The rule may even work sufficiently well without the last content match. Let me know if you see any errors or improvements.” wrote the expert.
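
For readers who want to experiment with the same idea outside Snort, here is a minimal Python sketch of the heuristic Ullrich describes (an illustration, not his actual rule): flag SIP INVITE payloads in which “m=audio ” is followed by more than 128 bytes with no carriage return. The function name and constant are ours.

```python
# Minimal sketch of the detection heuristic described above (not the actual
# Snort rule): flag SIP INVITE payloads in which "m=audio " is followed by
# more than 128 bytes without a carriage return. The 128-byte figure is the
# buffer size quoted above; the function name is illustrative.

BUFFER_SIZE = 128

def looks_like_cve_2022_27255_probe(payload: bytes) -> bool:
    """Return True if a raw SIP payload matches the described heuristic."""
    if not payload.startswith(b"INVITE"):
        return False
    marker = payload.find(b"m=audio ")
    if marker == -1:
        return False
    tail = payload[marker + len(b"m=audio "):]
    # Suspicious if the next BUFFER_SIZE + 1 bytes contain no carriage return,
    # i.e. the SDP line is longer than the fixed buffer the SDK copies into.
    window = tail[:BUFFER_SIZE + 1]
    return len(window) > BUFFER_SIZE and b"\r" not in window
```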

Slides for the DEFCON presentation, along with exploits and a detection script for CVE-2022-27255, are available in this GitHub repository.


ITGP comprehensive set of Toolkits

When it comes to protecting your data, you’re in safe hands. IT Governance is at the forefront of cyber security and data protection. Learn more about IT Governance Publishing’s range of toolkits.


InfoSec Playbooks


Chrome browser gets 11 security fixes with 1 zero-day – update now!

The latest update to Google’s Chrome browser is out, bumping the four-part version number to 104.0.5112.101 (Mac and Linux), or to 104.0.5112.102 (Windows).

According to Google, the new version includes 11 security fixes, one of which is annotated with the remark that “an exploit [for this vulnerability] exists in the wild”, making it a zero-day hole.

The name zero-day is a reminder that there were zero days on which even the most well-informed and proactive user or sysadmin could have patched ahead of the Bad Guys.

Update details

Details about the updates are scant, given that Google, in common with many other vendors these days, restricts access to bug details “until a majority of users are updated with a fix”.

But Google’s release bulletin explicitly enumerates 10 of the 11 bugs, as follows:

  • CVE-2022-2852: Use after free in FedCM.
  • CVE-2022-2854: Use after free in SwiftShader.
  • CVE-2022-2855: Use after free in ANGLE.
  • CVE-2022-2857: Use after free in Blink.
  • CVE-2022-2858: Use after free in Sign-In Flow.
  • CVE-2022-2853: Heap buffer overflow in Downloads.
  • CVE-2022-2856: Insufficient validation of untrusted input in Intents. (Zero-day.)
  • CVE-2022-2859: Use after free in Chrome OS Shell.
  • CVE-2022-2860: Insufficient policy enforcement in Cookies.
  • CVE-2022-2861: Inappropriate implementation in Extensions API.

As you can see, seven of these bugs were caused by memory mismanagement.

A use-after-free vulnerability means that one part of Chrome handed back a memory block that it wasn’t planning to use any more, so that it could be reallocated for use elsewhere in the software…

…only to carry on using that memory anyway, thus potentially causing one part of Chrome to rely on data it thought it could trust, without realising that another part of the software might still be tampering with that data.

Often, bugs of this sort will cause the software to crash completely, by messing up calculations or memory access in an unrecoverable way.

Sometimes, however, use-after-free bugs can be triggered deliberately in order to misdirect the software so that it misbehaves (for example by skipping a security check, or trusting the wrong block of input data) and provokes unauthorised behaviour.

A heap buffer overflow means asking for a block of memory, but writing out more data than will fit safely into it.

This overflows the officially-allocated buffer and overwrites data in the next block of memory along, even though that memory might already be in use by some other part of the program.

Buffer overflows therefore typically produce similar side-effects to use-after-free bugs: mostly, the vulnerable program will crash; sometimes, however, the program can be tricked into running untrusted code without warning.
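
Neither bug class exists in memory-safe code, but the rough Python sketch below reproduces both patterns at the level of raw C allocations via ctypes (Unix-like systems only; the behaviour is undefined, so it may print stale data or crash), which is one hands-on way to see why the descriptions above warn about them.

```python
# Deliberately unsafe sketch (assumes a Unix-like libc reachable via ctypes)
# that mirrors the two bug classes described above. Behaviour is undefined:
# it may print stale data, garbage, or crash; that unpredictability is
# exactly what attackers learn to abuse.
import ctypes

libc = ctypes.CDLL(None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

# Use-after-free: hand the block back, then keep reading it anyway.
buf = libc.malloc(16)
ctypes.memmove(buf, b"trusted bytes..", 15)
libc.free(buf)                        # the allocator may now reuse this block
print(ctypes.string_at(buf, 15))      # stale (or attacker-controlled) data

# Heap buffer overflow: 16 bytes allocated, 32 bytes written.
small = libc.malloc(16)
ctypes.memmove(small, b"A" * 32, 32)  # overwrites whatever lives next door
```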

The zero-day hole

The zero-day bug CVE-2022-2856 is presented with no more detail than you see above: “Insufficient validation of untrusted input in Intents.”

A Chrome Intent is a mechanism for triggering apps directly from a web page, in which data on the web page is fed into an external app that’s launched to process that data.

Google hasn’t provided any details of which apps, or what sort of data, could be maliciously manipulated by this bug…

…but the danger seems rather obvious if the known exploit involves silently feeding a local app with the sort of risky data that would normally be blocked on security grounds.

What to do?

Chrome will probably update itself, but we always recommend checking anyway.

On Windows and Mac, use More > Help > About Google Chrome > Update Google Chrome.

There’s a separate release bulletin for Chrome for iOS, which goes to version 104.0.5112.99, but no bulletin yet [2022-08-17T12:00Z] that mentions Chrome for Android.

On iOS, check that your App Store apps are up-to-date. (Use the App Store app itself to do this.)

You can watch for any forthcoming update announcement about Android on Google’s Chrome Releases blog.

The open-source Chromium variant of the proprietary Chrome browser is also currently at version 104.0.5112.101.

Microsoft Edge security notes, however, currently [2022-08-17T12:00Z] say:

August 16, 2022

Microsoft is aware of the recent exploit existing in the wild. We are actively working on releasing a security patch as reported by the Chromium team.

You can keep your eye out for an Edge update on Microsoft’s official Edge Security Updates page.

Web Security for Developers: Real Threats, Practical Defense


Clop Ransomware Gang Breaches Water Utility, Just Not the Right One

South Staffordshire in the UK has acknowledged it was targeted in a cyberattack, but Clop ransomware appears to be shaking down the wrong water company.


South Staffordshire plc, a UK water-supply company, has acknowledged it was the victim of a cyberattack. Around the same time, the Clop ransomware group started threatening Thames Water that it would release data it has stolen from the utility unless Thames Water paid up.

The problem? Thames Water wasn’t breached. 

Apparently, Clop got its UK water companies confused. 

South Staffordshire serves about 1.6 million customers and recently reported that it was targeted in a cyberattack and was “experiencing a disruption to our corporate IT network and our teams are working to resolve this as quickly as possible.” It added that there had been no disruption to service. 

“This incident has not affected our ability to supply safe water, and we can confirm we are still supplying safe water to all of our Cambridge Water and South Staffs Water customers,” the water company said. 

Meanwhile, Thames Water, the UK’s largest water supplier to more than 15 million people, was forced to deny it was breached by Clop ransomware attackers, who threatened they now had the ability to tamper with the water supply, according to reports. 

“As providers of critical national infrastructure, we take the security of our networks and systems very seriously and are focused on protecting them, so that we can continue to provide resilient services to our customers and the environment,” the larger water company told the UK Mirror.

While Clop seems to have its records all wrong, both water utilities mounted capable responses to the ransomware group’s attack on critical infrastructure, according to Edward Liebig, global director of cyber ecosystem at Hexagon Asset Lifecycle Intelligence. 

“I’m impressed by South Staffordshire Water’s ability to defend against the cyberattack in the IT systems and buffer the OT systems from impact,” Liebig said. “And had Thames Water not done an investigation of the ‘proof of compromise,’ they may very well have decided to negotiate further. In both instances, each organization did their due diligence.”

https://www.darkreading.com/attacks-breaches/clop-ransomware-gang-breaches-water-utility

Ransomware Protection Playbook


Organisations Must Invest in Cyber Defences Before It’s Too Late

We’ve all been feeling the effects of inflation recently. Prices rose by 8.2% in the twelve months to June 2022, with the largest increases being seen in electricity, gas and transport prices.

Meanwhile, the cost of renting commercial property continues to rise, despite the decreased demand for office space amid the uptick in remote work.

It should be obvious why costs are on the rise: COVID-19 continues to cause substantial disruption, Russia’s invasion of Ukraine has disrupted supply chains and interest rates have been raised several times this year.

The Bank of England says that the causes of rising inflation are not likely to last, but it has warned that the prices of certain things may never come down.

Clearly, then, rising costs are not simply a temporary issue that we must get through. We must instead carefully plan for how we will deal with increased costs on a permanent basis.

One obvious measure is to look at ways your organisation can cut costs. For better or worse, the most likely targets will be parts of the business that don’t contribute to a direct return on investment.

However, before you start slashing budgets, you should consider the full effects of your decisions.

Take cyber security for example. It’s already notoriously underfunded, with IT teams and other decision makers being forced to make do with limited resources.

According to a Kaspersky report, a quarter of UK companies admit underfunding cyber security even though 82% of respondents have suffered data breaches.

The risk of cyber security incidents is even higher in the summer months, when staff holidays mean that cyber security resources are even more stretched than usual.

What’s at stake?

The global cost of cyber crime is predicted to reach $10.5 trillion (£8.8 trillion) in the next three years, more than triple the $3 trillion (£2.5 trillion) cost in 2015.

We’ve reached record numbers of phishing attacks, with the Anti-Phishing Working Group detecting more than one million bogus emails last quarter. Meanwhile, there were more ransomware attacks in the first quarter of 2022 than there were in the whole of 2021.

These are worrying signs for organisations, and an economic downturn will only make cyber criminals more determined to make money – especially as they know their targets are focusing on cutting costs.

But it’s not just the immediate costs associated with cyber attacks and disruption that organisations should be worried about. There are also long-term effects, whether that’s lingering operational disruption, reputational damage or regulatory action.

Consider the ongoing problems that British Airways faced after it suffered a cyber attack in 2018. It took the airline more than two months to detect the breach, creating enduring difficulties and ultimately resulting in a £20 million fine.

The ICO (Information Commissioner’s Office), which investigated the incident, found that British Airways was processing a significant amount of personal data without adequate security measures in place, and had it addressed those vulnerabilities, it would have prevented the attack.

There were several measures that British Airways could have used to mitigate or prevent the damage, including:

  • Applying access controls to applications, data and tools to ensure individuals could only access information relevant to their job;
  • Performing penetration tests to spot weaknesses; and
  • Implementing multi-factor authentication.

In addition to the fine, British Airways settled a class action from as many as 16,000 claimants. The amount of the settlement remains confidential, but the cost of the payout was estimated to be as much as £2,000 per person.

Remarkably, the penalty and the class action represent a case of strikingly good fortune for British Airways. Had it come earlier, it would have been at the height of the COVID-19 pandemic when airlines were severely affected, and were it any later, it would have come during a period of massive inflation.

It’s a lesson that other organisations must take to heart. The GDPR is being actively enforced throughout the EU and UK, so organisations must ensure compliance.

Failure to do so will result in unforeseen costs at a time when every precaution must be taken to reduce costs.

Invest today, secure tomorrow

It’s long been accepted that it’s a matter of ‘when’ rather than ‘if’ you will suffer a cyber attack. When you do, you’ll have to invest heavily in security solutions on top of having to pay remediation costs.

In times of uncertainty, you need your services to be as reliable as possible. The challenges your organisation will face in the coming months as a result of falling consumer confidence are enough to deal with without having to contend with cyber crime and its inevitable fallout.

Investing in effective cyber security measures will enable your organisation to make the most of its opportunities in straitened circumstances.

You can find out how you can bolster your organisation’s defences quickly and efficiently with IT Governance’s range of training courses.

We want to help our customers get the most from their cyber security training this August.

Book any classroom, Live Online or self-paced training course before the end of this month and automatically receive:


API Security: A Complete Guide

Our society has become increasingly dependent on technology in the past few decades, and the global pandemic accelerated this trend.

What is API Security?

APIs are prevalent in SaaS models and modern applications across the board. API security refers to best practices applied to aspects of these APIs to ensure they’re protected from cybercriminals.

Web API security includes access control and privacy, as well as the detection of attacks via reverse engineering and exploitation of vulnerabilities. Since APIs enable the easy development of client-side applications, security measures are applied to applications aimed at employees, consumers, partners and others via mobile or web apps.

Why API Security Should Be a Top Priority

Attacking APIs requires first learning about a company’s APIs. To do so, bad actors perform extensive, drawn-out reconnaissance. That activity flies under the radar of existing technology such as API gateways and web application firewalls (WAFs). APIs make a very lucrative target for bad actors since they are a pipeline to valuable data and they’re poorly defended. Since data is the lifeblood of an organization, protecting it – and end-users – is paramount to avoiding breaches and the financial and reputational harm that comes with them.

In 2017, Gartner predicted API attacks would be the greatest threat to organizations in 2022. The year has arrived, and this foresight has proved accurate. Cyberattacks on APIs have exposed vulnerabilities and cost businesses a lot of time, money and heartache to recover from these breaches.

Major organizations like Peloton and LinkedIn have recently fallen victim to API-driven attacks, proving that even enterprise-class businesses (with enterprise-class budgets) are no match for cybercriminals. API attacks grew an astounding 681% in 2021, showing that businesses cannot afford to be complacent about this threat.

API Security Checklist for Development and Implementation

As with any security objective, it’s crucial to implement best practices and ensure you close all gaps in your API security strategy. While it can be overwhelming, an organized approach will help break your plan into manageable pieces. Start with scope and prioritization:

  • Perform penetration tests for your APIs, and know that to get a clear picture of the security status, you’ll need runtime protection
  • Assess the entirety of your environments, including your digital supply chain and APIs that fall outside of your API management suite
  • If you need to start small, prioritize runtime protection to protect from attackers while your application and API teams delve further into the comprehensive security strategy

Design and Development

Building a robust API security strategy is crucial, but that doesn’t mean you need to start from scratch. Great supportive resources, including the OWASP Application Security Verification Standard (ASVS), are available to help you design your approach.

Ensure you draft your organization’s build and integration security requirements, include business logic when performing design reviews and implement practices for coding and configuration relevant to your security stack.

Documentation

Ensure that you keep comprehensive documentation for application and integration teams. Documentation should cover security testing, design reviews, operations and protection. By documenting the stages of your process, you will ensure continuity in your testing and protection approaches.

Discovery and Cataloging

Ideally, your documentation process will be thorough and consistent. In reality, however, sometimes things are missed. Therefore, organizations must implement automated discovery of API endpoints, data types and parameters. You will benefit from this approach to create an API inventory to serve IT needs throughout your organization.

Ensure you use automation to detect and track APIs across all environments, not limiting the focus to production. Be sure to include third-party APIs and dependencies. Tag and label your microservices and APIs—this is a DevOps best practice.
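
As one illustration of what an automated inventory step can look like, the sketch below builds a simple endpoint catalog from OpenAPI specification files. The file layout, field choices and the PyYAML dependency are assumptions, and real discovery should also draw on gateway logs and live traffic, as noted above.

```python
# Illustrative sketch only: build a simple API inventory from OpenAPI/Swagger
# specs found on disk. File layout and field choices are assumptions; real
# discovery should also draw on gateway logs and observed traffic.
import glob
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def build_inventory(spec_glob: str = "specs/**/*.yaml") -> list[dict]:
    inventory = []
    for path in glob.glob(spec_glob, recursive=True):
        with open(path) as fh:
            spec = yaml.safe_load(fh) or {}
        service = spec.get("info", {}).get("title", path)
        for route, operations in (spec.get("paths") or {}).items():
            for method, op in operations.items():
                if method not in HTTP_METHODS:
                    continue
                inventory.append({
                    "service": service,
                    "method": method.upper(),
                    "path": route,
                    "params": [p.get("name") for p in op.get("parameters", [])],
                    "tags": op.get("tags", []),  # supports the labeling practice above
                })
    return inventory

if __name__ == "__main__":
    for entry in build_inventory():
        print(entry)
```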

Security Testing

Traditional security testing tools will help verify elements of your APIs, including vulnerabilities and misconfigurations. Bear in mind that while helpful, these tools do have their limitations. They cannot fully parse business logic, leaving organizations vulnerable to API abuse. Use tools to supplement your security strategy, and do not rely on them as a be-all-end-all view of the state of your APIs.

Security at the Front-End

For a multi-layered approach, ensure you implement a front-end security strategy for your API clients that depend on back-end APIs. Client-side behavior analytics can raise privacy concerns even as it helps protect the front end. It is recommended to draft security requirements for your front-end code and to store minimal data client-side to reduce the risk of reverse engineering attacks. Ensure you have secured your back-end APIs as well, as this is not an either/or approach.

Network and Data Security

In a zero-trust architecture framework, network access is dynamically restricted. It is still possible for API attacks to occur due to the connectivity required for API functionality, meaning trusted channels can still create security threats. Ensure your data is encrypted during API transport, and use API allow and deny lists if your user list is short.

Many organizations are unclear on which APIs transmit sensitive data, exposing them to the risk of regulatory penalties and large-scale data security breaches. For data security, transport encryption is suitable in most use cases.
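
To make the allow/deny-list suggestion concrete, here is a minimal sketch that checks client addresses against small allow and deny lists; the networks shown are illustrative placeholders only.

```python
# Minimal sketch of an IP allow/deny list for a small, known client population.
# The networks listed here are illustrative placeholders, not recommendations.
from ipaddress import ip_address, ip_network

ALLOW = [ip_network("10.20.0.0/16"), ip_network("192.0.2.0/24")]
DENY = [ip_network("10.20.99.0/24")]  # e.g. a decommissioned subnet

def client_permitted(client_ip: str) -> bool:
    addr = ip_address(client_ip)
    if any(addr in net for net in DENY):   # deny entries win over allow entries
        return False
    return any(addr in net for net in ALLOW)

assert client_permitted("10.20.1.7")
assert not client_permitted("10.20.99.5")
assert not client_permitted("203.0.113.10")
```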

Authentication, Authorization, and Runtime Protection

Accounting for authentication and authorization for both users and machines is crucial to a comprehensive API security approach. Avoid using API keys as a primary means of authentication, and continuously authorize and authenticate users for a higher level of security. Modern authentication standards such as OAuth2 will strengthen your security posture.
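
The sketch below shows what continuous, token-based authentication can look like in practice: validating an OAuth2 bearer token (a JWT) on every request instead of trusting a static API key. The issuer, audience and key handling are hypothetical, and the PyJWT package is assumed.

```python
# Sketch only: validate an OAuth2/JWT bearer token on every request instead of
# trusting a static API key. Issuer, audience and the public-key source are
# placeholders; a real deployment would fetch keys from the provider's JWKS
# endpoint and handle rotation. Requires the PyJWT package.
import jwt  # pip install pyjwt

ISSUER = "https://auth.example.com/"   # hypothetical authorization server
AUDIENCE = "orders-api"                # hypothetical API identifier

def authenticate(bearer_token: str, public_key_pem: str) -> dict:
    """Return verified claims, or raise jwt.InvalidTokenError."""
    claims = jwt.decode(
        bearer_token,
        public_key_pem,
        algorithms=["RS256"],          # never accept "none" or unexpected algorithms
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    return claims                      # e.g. claims["scope"] can drive authorization
```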

Organizations should deploy runtime protection. Make sure your runtime protection can identify configuration issues in API infrastructure. It should also detect behavior anomalies such as credential stuffing, brute forcing, or scraping attempts. DoS and DDoS attacks are on the rise, and you should be sure that mitigation plays a role in your API security strategy.
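
One simple runtime-protection signal mentioned above, spotting credential stuffing or brute forcing, can be approximated with a sliding-window counter; the thresholds below are illustrative, and production systems would correlate many more signals.

```python
# Toy sketch of one runtime-protection signal described above: flag possible
# credential stuffing / brute forcing when a single source exceeds a threshold
# of failed logins inside a sliding window. Thresholds are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 10

_failures = defaultdict(deque)

def record_failed_login(source_ip: str, now=None) -> bool:
    """Record a failed login; return True if the source now looks abusive."""
    now = time.time() if now is None else now
    events = _failures[source_ip]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()               # drop events outside the sliding window
    return len(events) > MAX_FAILURES
```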

API Security is Fundamental in Today’s World

The use of APIs is a fundamental element of life in the modern era. As such, organizations have a responsibility to ensure end users, networks and data are kept safe from intruders who may expose API vulnerabilities. By following these key aspects of API security, you will be able to successfully mitigate risk.

API Security in Action

A web API is an efficient way to communicate with an application or service. However, this convenience opens your systems to new security risks. API Security in Action gives you the skills to build strong, safe APIs you can confidently expose to the world. 

API Security in Action


Zoom for Mac patches get-root bug – update now!

At the well-known DEF CON security shindig in Las Vegas, Nevada, last week, Mac cybersecurity researcher Patrick Wardle revealed a “get-root” elevation of privilege (EoP) bug in Zoom for Mac:


Patch Madness: Vendor Bug Advisories Are Broken, So Broken

Dustin Childs and Brian Gorenc of ZDI take the opportunity at Black Hat USA to break down the many vulnerability disclosure issues making patch prioritization a nightmare scenario for many orgs.


BLACK HAT USA – Las Vegas – Keeping up with security-vulnerability patching is challenging at best, but prioritizing which bugs to focus on has become more difficult than ever before, thanks to context-lacking CVSS scores, muddy vendor advisories, and incomplete fixes that leave admins with a false sense of security.

That’s the argument that Brian Gorenc and Dustin Childs, both with Trend Micro’s Zero Day Initiative (ZDI), made from the stage of Black Hat USA during their session, “Calculating Risk in the Era of Obscurity: Reading Between the Lines of Security Advisories.”

ZDI has disclosed more than 10,000 vulnerabilities to vendors across the industry since 2005. Over that time, ZDI communications manager Childs said, he has noticed a disturbing trend: a decrease in patch quality and a reduction in communications surrounding security updates.

“The real problem arises when vendors release faulty patches, or inaccurate and incomplete information about those patches that can cause enterprises to miscalculate their risk,” he noted. “Faulty patches can also be a boon to exploit writers, as ‘n-days’ are much easier to use than zero-days.”

The Trouble With CVSS Scores & Patching Priority


How to manage the intersection of Java, security and DevOps at a low complexity cost

In this Help Net Security video, Erik Costlow, Senior Director of Product Management at Azul, talks about Java-centric vulnerabilities and the headache they have become for developers everywhere.

He touches on the need for putting security back into DevOps and how developers can better navigate vulnerabilities that are taking up all of their efforts and keeping them from being able to focus on the task at hand.


Microservices Security in Action: Design secure network and API endpoint security for Microservices applications, with examples using Java, Kubernetes, and Istio


Microsoft: We Don’t Want to Zero-Day Our Customers

The head of Microsoft’s Security Response Center defends keeping its initial vulnerability disclosures sparse — it is, she says, to protect customers.


Jai Vijayan

BLACK HAT USA — Las Vegas — A top Microsoft security executive today defended the company’s vulnerability disclosure policies as providing enough information for security teams to make informed patching decisions without putting them at risk of attack from threat actors looking to quickly reverse-engineer patches for exploitation.

In a conversation with Dark Reading at Black Hat USA, the corporate vice president of Microsoft’s Security Response Center, Aanchal Gupta, said the company has consciously decided to limit the information it provides initially with its CVEs to protect users. While Microsoft CVEs provide information on the severity of the bug, and the likelihood of it being exploited (and whether it is being actively exploited), the company will be judicious about how it releases vulnerability exploit information.

For most vulnerabilities, Microsoft’s current approach is to give a 30-day window from patch disclosure before it fills in the CVE with more details about the vulnerability and its exploitability, Gupta says. The goal is to give security administrators enough time to apply the patch without jeopardizing them, she says. “If, in our CVE, we provided all the details of how vulnerabilities can be exploited, we will be zero-daying our customers,” Gupta says.

Sparse Vulnerability Information?

Microsoft — like other major software vendors — has faced criticism from security researchers for the relatively sparse information the company releases with its vulnerability disclosures. Since November 2020, Microsoft has been using the Common Vulnerability Scoring System (CVSS) framework to describe vulnerabilities in its security update guide. The descriptions cover attributes such as attack vector, attack complexity, and the kind of privileges an attacker might have. The updates also provide a score to convey severity ranking.

However, some have described the updates as cryptic and lacking critical information on the components being exploited or how they might be exploited. They have noted that Microsoft’s current practice of putting vulnerabilities into an “Exploitation More Likely” or an “Exploitation Less Likely” bucket does not provide enough information to make risk-based prioritization decisions.

More recently, Microsoft has also faced some criticism for its alleged lack of transparency regarding cloud security vulnerabilities. In June, Tenable’s CEO Amit Yoran accused the company of “silently” patching a couple of Azure vulnerabilities that Tenable’s researchers had discovered and reported.

“Both of these vulnerabilities were exploitable by anyone using the Azure Synapse service,” Yoran wrote. “After evaluating the situation, Microsoft decided to silently patch one of the problems, downplaying the risk,” and without notifying customers.

Yoran pointed to other vendors — such as Orca Security and Wiz — that had encountered similar issues after they disclosed vulnerabilities in Azure to Microsoft.

Consistent with MITRE’s CVE Policies

Gupta says Microsoft’s decision about whether to issue a CVE for a vulnerability is consistent with the policies of MITRE’s CVE program.

“As per their policy, if there is no customer action needed, we are not required to issue a CVE,” she says. “The goal is to keep the noise level down for organizations and not burden them with information they can do little with.”

“You need not know the 50 things Microsoft is doing to keep things secure on a day-to-day basis,” she notes.

Gupta points to last year’s disclosure by Wiz of four critical vulnerabilities in the Open Management Infrastructure (OMI) component in Azure as an example of how Microsoft handles situations where a cloud vulnerability might affect customers. In that situation, Microsoft’s strategy was to directly contact organizations that are impacted.

“What we do is send one-to-one notifications to customers because we don’t want this info to get lost,” she says. “We issue a CVE, but we also send a notice to customers because if it is in an environment that you are responsible for patching, we recommend you patch it quickly.”

Sometimes an organization might wonder why they were not notified of an issue — that’s likely because they are not impacted, Gupta says.

Source: We Don’t Want to Zero-Day Our Customers


Black Hat 2022 Trip Report


by Mike Rothman 

It felt like I had stepped out of a time machine and it was 2019. I was walking about a mile between meetings on different sides of the Mandalay Bay hotel. Though seeing some folks with face masks reminded me that it was, in fact, 2022. But I was in Las Vegas, and the badge around my neck indicated I was there for the Black Hat U.S. 2022 show.

It’s been a long time since I’ve been to a large security conference. Or any conference at all, for that matter. I couldn’t attend the RSA Conference back in June, so it had been 30 months since I’d seen the security community in person. As I fly over Arkansas on my way back to Atlanta, here are a few thoughts about the show.

1. Security conferences are back: Well, kind of. There were a lot of people at Black Hat. Lots of vendor personnel on the show floor and lots of practitioners at the sessions. Sometimes the practitioners even made it to the show floor, given that most of the companies said they had a steady stream of booth traffic. It was nice to see people out and about, and I got to connect with so many good friends and got lots of hugs. It was good for my soul.
2. There was no theme: I went in expecting to see a lot of zero-trust and XDR and DevSecOps. I saw some of the buzzword bingo, but it was muted. That doesn’t mean I understood what most of the companies did, based on their booth. I didn’t. Most had some combination of detection, cloud and response as well as a variety of Gartner-approved category acronyms. I guess the events marketing teams are a bit rusty.
3. Booth size doesn’t correlate to company size: Some very large public companies had small booths. Some startups that I’d never heard of had large booths. Does that mean anything? It means some companies burned a lot of their VC money in Vegas this week, and public company shareholders didn’t.
4. Magicians still fill the booth, and you can get very caffeinated: Whenever I saw a crowd around a booth, there was typically some kind of performer doing some kind of show. Not sure how having some guy do magic tricks helped create demand for a security product, but it did fill the booths. So, I guess event marketing folks get paid by the badge scan, as well. Moreover, every other booth had an espresso machine. So if you needed a shot of energy after a long night at the tables or in a club, Black Hat was there for you.

I asked practitioners about budgets and vendors about sales cycles. Some projects are being scrutinized, but the “must-haves” like CSPM, CNAPP, and increasingly, API security are still growing fast. Managed detection and response remains very hot as organizations realize they don’t have the resources to staff their SOC. Same as it ever was.

Overall, the security business seems very healthy, and I couldn’t be happier to be back at Black Hat.


AWS and Splunk partner for faster cyberattack response

OCSF initiative will give enterprise security teams an open standard for moving and analyzing threat data

BLACK HAT AWS and Splunk are leading an initiative aimed at creating an open standard for ingesting and analyzing data, enabling enterprise security teams to more quickly respond to cyberthreats.

Seventeen security and tech companies at the Black Hat USA 2022 show this week unveiled the Open Cybersecurity Schema Framework (OCSF) project, which will use the ICD Schema developed by Symantec as the foundation for the vendor-agnostic standard.

The creation of the OCSF, licensed under the Apache License 2.0, comes as organizations are seeing their attack surfaces rapidly expand as their IT environments become increasingly decentralized, stretching from core datacenters out to the cloud and the edge. Parallel with this, the number and complexity of the cyberthreats they face is growing quickly.

“Today’s security leaders face an agile, determined and diverse set of threat actors,” officials with cybersecurity vendor Trend Micro, one of the initial members of OCSF, wrote in a blog post. “From emboldened nation state hackers to ransomware-as-a-service (RaaS) affiliates, adversaries are sharing tactics, techniques and procedures (TTPs) on an unprecedented scale – and it shows.”

Trend Micro blocked more than 94 billion threats in 2021, a 42 percent year-on-year increase, and 43 percent of organizations responding to a survey from the vendor said their digital attack surface is getting out of control.

Cybersecurity vendors have responded by creating platforms that combine attack surface management, threat prevention, and detection and response to make it easier and faster for enterprises to counter attacks. They streamline processes, close security gaps, and reduce costs, but they’re still based on vendor-specific products and point offerings.

Vendors may use different data formats in their products, which means moving datasets from one vendor’s product to that of another often requires the time-consuming task of changing the format of the data.

“Unfortunately, normalizing and unifying data from across these disparate tools takes time and money,” Trend Micro said. “It slows down threat response and ties up analysts who should be working on higher value tasks. Yet up until now it has simply become an accepted cost of cybersecurity. Imagine how much extra value could be created if we found an industry-wide way to release teams from this operational burden?”
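
A tiny, purely illustrative sketch of that normalization burden is shown below: two invented vendor log shapes mapped onto one common event record. The field names are not the real OCSF schema; they only show the kind of translation work a shared schema is meant to remove.

```python
# Purely illustrative: map two hypothetical vendor log shapes onto one common
# event dictionary. The field names below are NOT the real OCSF schema; they
# only show the normalization work the framework aims to eliminate.
def normalize_vendor_a(event: dict) -> dict:
    return {
        "time": event["ts"],
        "src_ip": event["source_address"],
        "action": event["verdict"],          # e.g. "blocked" / "allowed"
        "product": "vendor_a_firewall",
    }

def normalize_vendor_b(event: dict) -> dict:
    return {
        "time": event["event_time"],
        "src_ip": event["client"]["ip"],
        "action": "blocked" if event["denied"] else "allowed",
        "product": "vendor_b_proxy",
    }

normalized = [
    normalize_vendor_a({"ts": 1660700000, "source_address": "198.51.100.7", "verdict": "blocked"}),
    normalize_vendor_b({"event_time": 1660700002, "client": {"ip": "198.51.100.7"}, "denied": True}),
]
print(normalized)
```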

Dan Schofield, program manager for technology partnerships at IBM Security, another OCSF member, wrote that the lack of open industry standards for logging and event purposes creates challenges when it comes to detection engineering, threat hunting, and analytics, and until now, there has been no critical mass of vendors willing to address the issue.

Source: AWS and Splunk partner for faster cyberattack response


New Open Source Tools Launched for Adversary Simulation

The new open source tools are designed to help defense, identity and access management, and security operations center teams discover vulnerable network shares.


Network shares in Active Directory environments configured with excessive permissions pose serious risks to the enterprise in the form of data exposure, privilege escalation, and ransomware attacks. Two new open source adversary simulation tools, PowerHuntShares and PowerHunt, help enterprise defenders discover vulnerable network shares and manage the attack surface.

The tools will help defense, identity and access management (IAM), and security operations center (SOC) teams streamline share hunting and remediation of excessive SMB share permissions in Active Directory environments, NetSPI’s senior director Scott Sutherland wrote on the company blog. Sutherland developed these tools.

PowerHuntShares inventories, analyzes, and reports excessive privilege assigned to SMB shares on Active Directory domain joined computers, addressing the risk that excessive share permissions can lead to data exposure, privilege escalation, and ransomware attacks within enterprise environments.

“PowerHuntShares will inventory SMB share ACLs configured with ‘excessive privileges’ and highlight ‘high risk’ ACLs [access control lists],” Sutherland wrote.

PowerHunt, a modular threat hunting framework, identifies signs of compromise based on artifacts from common MITRE ATT&CK techniques and detects anomalies and outliers specific to the target environment. The tool automates the collection of artifacts at scale using PowerShell remoting and performs initial analysis. 

Network shares configured with excessive permissions can be exploited in several ways. For example, ransomware can use excessive read permissions on shares to access sensitive data. Since passwords are commonly stored in cleartext, excessive read permissions can lead to remote attacks against databases and other servers if these passwords are uncovered. Excessive write access allows attackers to add, remove, modify, and encrypt files, such as writing a web shell or tampering with executable files to include a persistent backdoor. 

“We can leverage Active Directory to help create an inventory of systems and shares,” Sutherland wrote. “Shares configured with excessive permissions can lead to remote code execution (RCE) in a variety of ways, remediation efforts can be expedited through simple data grouping techniques, and malicious share scanning can be detected with a few common event IDs and a little correlation (always easier said than done).”
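
To illustrate the kind of triage such tooling reports, the sketch below takes share ACL entries (assumed to have been collected already, for example by the PowerShell tools above) and flags excessive assignments to broad principals; the data and risk labels are placeholders.

```python
# Illustrative sketch of the kind of triage PowerHuntShares reports: given
# share ACL entries collected elsewhere, flag "excessive privilege"
# assignments to broad principals. Data and labels are placeholders.
HIGH_RISK_PRINCIPALS = {"Everyone", "Authenticated Users", "Domain Users"}
WRITE_RIGHTS = {"FullControl", "Modify", "Write"}

def classify(acl_entry: dict) -> str:
    broad = acl_entry["principal"] in HIGH_RISK_PRINCIPALS
    if broad and acl_entry["rights"] & WRITE_RIGHTS:
        return "high"       # broad write access: ransomware / backdoor planting risk
    if broad:
        return "excessive"  # broad read access: data exposure, credential hunting
    return "ok"

shares = [
    {"computer": "FS01", "share": "Public",   "principal": "Everyone",     "rights": {"FullControl"}},
    {"computer": "FS01", "share": "Finance",  "principal": "Domain Users", "rights": {"Read"}},
    {"computer": "DC01", "share": "NETLOGON", "principal": "Backup-Svc",   "rights": {"Read"}},
]
for entry in shares:
    print(entry["computer"], entry["share"], entry["principal"], classify(entry))
```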

Source: New Open Source Tools Launched for Adversary Simulation

The Tao of Open Source Intelligence

Hunting Cyber Criminals: A Hacker’s Guide to Online Intelligence Gathering Tools and Techniques


APIC/EPIC! Intel chips leak secrets even the kernel shouldn’t see

Here’s this week’s BWAIN, our jocular term for a Bug With An Impressive Name.

BWAIN is an accolade that we hand out when a new cybersecurity flaw not only turns out to be interesting and important, but also turns up with its own logo, domain name and website.

This one is dubbed ÆPIC Leak, a pun on the words APIC and EPIC.

The former is short for Advanced Programmable Interrupt Controller, and the latter is simply the word “epic”, as in giant, massive, extreme, mega, humongous.

The letter Æ hasn’t been used in written English since Saxon times. Its name is æsc, pronounced ash (as in the tree), and it pretty much represents the sound of the A in the modern word ASH. But we assume you’re supposed to pronounce the word ÆPIC here either as “APIC-slash-EPIC”, or as “ah!-eh?-PIC”.

What’s it all about?

All of this raises five fascinating questions:

  • What is an APIC, and why do I need it?
  • How can you have data that even the kernel can’t peek at?
  • What causes this epic failure in APIC?
  • Does the ÆPIC Leak affect me?
  • What to do about it?

What’s an APIC?

Let’s rewind to 1981, when the IBM PC first appeared.

The PC included a chip called the Intel 8259A Programmable Interrupt Controller, or PIC. (Later models, from the PC AT onwards, had two PICs, chained together, to support more interrupt events.)

The purpose of the PIC was quite literally to interrupt the program running on the PC’s central processor (CPU) whenever something time-critical took place that needed attention right away.

These hardware interrupts included events such as: the keyboard getting a keystroke; the serial port receiving a character; and a repeating hardware timer ticking over.

Without a hardware interrupt system of this sort, the operating system would need to be littered with function calls to check for incoming keystrokes on a regular basis, which would be a waste of CPU power when no one was typing, but wouldn’t be responsive enough when they did.

As you can imagine, the PIC was soon followed by an upgraded chip called the APIC, an advanced sort of PIC built into the CPU itself.

These days, APICs provide much more than just feedback from the keyboard, serial port and system timer.

APIC events are triggered by (and provide real-time data about) events such as overheating, and allow hardware interaction between the different cores in contemporary multicore processors.

And today’s Intel chips, if we may simplify greatly, can generally be configured to work in two different ways, known as xAPIC mode and x2APIC mode.

Here, xAPIC is the “legacy” way of extracting data from the interrupt controller, and x2APIC is the more modern way.

Simplifying yet further, xAPIC relies on what’s called MMIO, short for memory-mapped input/output, for reading data out of the APIC when it registers an event of interest.

In MMIO mode, you can find out what triggered an APIC event by reading from a specific region of memory (RAM), which mirrors the input/output registers of the APIC chip itself.

This xAPIC data is mapped into a 4096-byte memory block somewhere in the physical RAM of the computer.

This simplifies accessing the data, but it requires an annoying, complex (and, as we shall see, potentially dangerous) interaction between the APIC chip and system memory.

In contrast, x2APIC requires you to read out the APIC data directly from the chip itself, using what are known as Model Specific Registers (MSRs).

According to Intel, avoiding the MMIO part of the process “provides significantly increased processor addressability and some enhancements on interrupt delivery.”

Notably, extracting the APIC data directly from on-chip registers means that the total amount of data supported, and the maximum number of CPU cores that can be managed at the same time, is not limited to the 4096 bytes available in MMIO mode.
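
As a rough, hands-on illustration of the two modes, the sketch below reads the IA32_APIC_BASE model-specific register (0x1B) through Linux’s /dev/cpu/*/msr interface to report whether a core is using legacy xAPIC (MMIO) or x2APIC (MSR) access; it assumes Linux, root privileges and a loaded msr kernel module.

```python
# Rough sketch (Linux only; needs root and the `msr` kernel module loaded) of
# telling whether a CPU core runs its interrupt controller in legacy xAPIC
# (MMIO) mode or x2APIC (MSR) mode, by reading IA32_APIC_BASE (MSR 0x1B):
# bit 11 is the APIC global-enable flag, bit 10 the x2APIC-enable flag.
import os
import struct

IA32_APIC_BASE = 0x1B

def read_msr(register: int, cpu: int = 0) -> int:
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        raw = os.pread(fd, 8, register)   # MSRs are 64 bits wide
    finally:
        os.close(fd)
    return struct.unpack("<Q", raw)[0]

if __name__ == "__main__":
    value = read_msr(IA32_APIC_BASE)
    enabled = bool(value & (1 << 11))
    x2apic = bool(value & (1 << 10))
    mode = "x2APIC (MSR access)" if x2apic else "xAPIC (legacy MMIO access)"
    print(f"APIC enabled: {enabled}, mode: {mode}")
```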


Microsoft confirms ‘DogWalk’ zero-day vulnerability has been exploited

Microsoft has published a fix for a zero-day bug discovered in 2019 that it originally did not consider a vulnerability.

The tech giant patched CVE-2022-34713 – informally known as “DogWalk” – on Tuesday, noting in its advisory that it has already been exploited.

According to Microsoft, exploitation of the vulnerability requires that a user open a specially-crafted file delivered through a phishing email or web-based attack.

“In a web-based attack scenario, an attacker could host a website (or leverage a compromised website that accepts or hosts user-provided content) containing a specially crafted file designed to exploit the vulnerability,” Microsoft explained. “An attacker would have no way to force users to visit the website. Instead, an attacker would have to convince users to click a link, typically by way of an enticement in an email or instant message, and then convince them to open the specially crafted file.”

Later in the advisory, Microsoft said the type of exploit needed is called an “Arbitrary Code Execution,” or ACE, noting that the attacker would need to convince a victim through social engineering to download and open a specially-crafted file from a website which leads to a local attack on their computer. 

A three-year wait

The bug was originally reported to Microsoft by security researcher Imre Rad on December 22, 2019. Even though a case was opened one day later, Rad said in a blog post that Microsoft eventually declined to fix the issue six months later. 

Microsoft initially told Rad that to make use of the attack he described, an attacker would need “to create what amounts to a virus, convince a user to download the virus, and then run it.” The company added that “as written this wouldn’t be considered a vulnerability.” 

“No security boundaries are being bypassed, the PoC doesn’t escalate permissions in any way, or do anything the user couldn’t do already,” Microsoft told Rad. 

But in June, as security researchers dug into the “Follina” vulnerability, cybersecurity expert j00sean took to Twitter to resurface the issue and spotlight it again.  

Rad noted that on August 4, Microsoft contacted him and said they “reassessed the issue” and “determined that this issue meets our criteria for servicing with a security update,” tagging it as CVE-2022-34713.

Microsoft said in its advisory that, like Follina, this is yet another vulnerability centered around the Microsoft Support Diagnostic Tool (MSDT).

“Public discussion of a vulnerability can encourage further scrutiny on the component, both by Microsoft security personnel as well as our research partners. This CVE is a variant of the vulnerability publicly known as Dogwalk,” Microsoft said this week. 

Microsoft acknowledged but did not respond to requests for comment about why their assessment of the issue changed after three years, but Microsoft security research and engineering lead Johnathan Norman took to Twitter to thank Rad and j00sean for highlighting the issue.

“We finally fixed the #DogWalk vulnerability. Sadly this remained an issue for far too long. thanks to everyone who yelled at us to fix it,” he said. 

Coalfire vice president Andrew Barratt said he has not seen the vulnerability exploited in the wild yet but said it would “be easily delivered using a phishing/rogue link campaign.”

When exploited, the vulnerability places some malware that automatically starts the next time the user reboots/logs into their Windows PC, Barratt explained, noting that while it is not a trivial point-and-click exploit and requires an attachment to be used in an email, it can be delivered via other fileservers – making it an interesting tactic for an insider to leverage.

“The vast majority of these attachments are blocked by Outlook, but various researchers point out that other email clients could see the attachment and launch the Windows troubleshooting tool (which it leverages as part of the exploit),” Barratt said. “The challenge for a lot of anti-malware is that the file leveraged doesn’t look like a traditional piece of malware, but could be leveraged to pull more sophisticated malware on to a target system. It’s an interesting technique but not one that is going to affect the masses. I’d expect this to be leveraged more by someone meeting the profile of an insider threat.”

Bharat Jogi, director of vulnerability and threat research at Qualys, added that Microsoft likely changed its tune related to CVE-2022-34713 because today’s bad actors are growing more sophisticated and creative in their exploits.

Jogi noted that Follina has recently been used by threat actors — like China-linked APT TA413 — in phishing campaigns that have targeted local U.S. and European government personnel, as well as a major Australian telecommunications provider.

Source: Microsoft confirms ‘DogWalk’ zero-day vulnerability has been exploited

Countdown to Zero Day


Buying Cyber Insurance Gets Trickier as Attacks Proliferate, Costs Rise

Security chiefs should shop early for coverage and prepare for long questionnaires about their companies’ cyber defenses, industry professionals say


Insurers are scrutinizing prospective clients’ cybersecurity practices more closely than in past years, when underwriting was less strict.
PHOTO: GETTY IMAGES/ISTOCKPHOTO

For many businesses, obtaining or renewing cyber insurance has become expensive and arduous.

The price of cyber insurance has soared in the past year amid a rise in ransomware hacks and other cyberattacks. Given these realities, insurers are taking a harder line before renewing or granting new or additional coverage. They are asking for more in-depth information about companies’ cyber policies and procedures, and businesses that can’t satisfy this greater level of scrutiny could face higher premiums, be offered limited coverage or be refused coverage altogether, industry professionals said.

“Underwriting scrutiny has really tightened up over the past 18 months or so,” said Judith Selby, a partner in the New York office of Kennedys Law LLP.

In the second quarter, U.S. cyber-insurance prices increased 79% from a year earlier, after more than doubling in each of the preceding two quarters, according to the Global Insurance Market Index from professional-services firm Marsh & McLennan Cos.

Direct-written premiums for cyber coverage collected by the largest U.S. insurance carriers—the amounts insurers charge to clients, excluding premiums earned from acting as a reinsurer—climbed to $3.15 billion last year, up 92% from 2020, according to information submitted to the National Association of Insurance Commissioners, an industry watchdog, and compiled by ratings firms. Analysts attribute the increase primarily to higher rates, as opposed to insurers significantly expanding coverage limits.

Companies buying insurance are subject to tight scrutiny of internal cyber practices. This is different from past years, when carriers poured into the cyber market and competition produced less-stringent underwriting, Ms. Selby said.

Now, insurers aiming to limit their risk are putting corporate security chiefs through lengthy lists of questions about how they defend their companies, said Chris Castaldo, chief information security officer at Crossbeam Inc., a Philadelphia-based tech firm that helps companies find new business partners and customers.

“Prior to the questionnaires, you just gave them the coverage amount you wanted and the industry you were in, and that was it,” Mr. Castaldo said, referring to interactions with cyber insurers.

Discover Financial Services has a third party validate the robustness of its cybersecurity program, which helps with insurance, said CISO Shaun Khalfan. “Insurers want to have confidence that you are making the right investments and are building and maintaining a robust cybersecurity program,” Mr. Khalfan said.

Some of the questions insurers ask—and the level of detail required—can depend on the carrier, the size and type of the business seeking coverage and the amount of coverage desired.

Around 18 months ago, underwriters asked companies whether they required multifactor authentication when administrators accessed their system, said Tom Reagan, cyber practice leader in Marsh McLennan’s financial and professional products specialty practice. Today there’s an expectation that multifactor authentication is used throughout the organization, not just by administrators, he said.

Insurers also expect organizations to have planned and tested for a cyber event, such as through tabletop exercises, Mr. Reagan said: “They are not just interested in your smoke alarms, they want to hear about the fire drills.”

Carriers want to know what kind of backup plans companies have if a ransomware attack strikes and how those plans are tested. Insurers are also diving deeper into whether a company’s networks are segregated to limit the spread of malware, Ms. Selby said. Other important criteria some insurers consider, she said, include endpoint protection, or monitoring and protecting devices against cyber threats, and incident-response exercises.

Some companies will need to work with more carriers than in the past to get the desired level of coverage because no single insurer wants to carry so much risk, Ms. Selby said.

Amid the changing landscape, Mr. Reagan recommended that companies start to re-evaluate their cyber-insurance needs as early as six months before a policy comes up for renewal. Starting earlier to identify possible holes allows businesses to make changes to their cyber defenses, if necessary, and gather information that carriers require, he said.

https://www.wsj.com/articles/buying-cyber-insurance-gets-trickier-as-attacks-proliferate-costs-rise-11659951000?tpl=cs

Demystifying Cyber Insurance


Dark Reading News Desk: Live at Black Hat USA 2022

https://www.youtube.com/watch?v=L8wum8NuJAM&ab_channel=DarkReading

The livestream for Dark Reading News Desk at Black Hat USA 2022 will go live on August 10 at 9:50 AM.

Welcome to the Dark Reading News Desk, which will be livestreamed from Black Hat USA at Mandalay Bay in Las Vegas. Dark Reading editors Becky Bracken, Fahmida Rashid, and Kelly Jackson Higgins will host Black Hat newsmakers ranging from independent researchers and threat hunters to reverse engineers and other top experts in security, on Wednesday, Aug. 10, and Thursday, Aug. 11, from 11 a.m. until 3 p.m. Pacific Time.

Among the highlights: On Wednesday, Dark Reading will be joined at the Black Hat News Desk by Allison Wikoff from PwC, to talk about the latest in job-themed APT social engineering scams; Brett Hawkins from IBM, to discuss supply chain management systems abuse; and many more. Dr. Stacy Thayer, a researcher specializing in burnout, will also be on hand to offer her best tips for helping cybersecurity professionals manage stress.

On Thursday, Martin Doyhenard joins the Dark Reading News Desk to unpack his research on exploiting inter-process communication in SAP’s HTTP server; Kyle Tobener, head of security with Copado, will explain his new framework for “effective and compassionate security guidance”; and Zhenpeng Lin, a PhD student at Northwestern University, will walk us through his work on the so-called Dirty Pipe Linux kernel exploit.

So don’t miss any of the action from Black Hat and join Dark Reading’s News Desk broadcast for some of the biggest headlines and the latest cybersecurity research from around the globe.

Tune in to this page on Wednesday and the livestream will appear at the top of the page.

Leave a Comment

Scientists hid encryption key for Wizard of Oz text in plastic molecules

It’s “a revolutionary scientific advance in molecular data storage and cryptography.”

Scientists from the University of Texas at Austin encrypted the key to decode the text of The Wizard of Oz in polymers.

Scientists from the University of Texas at Austin sent a letter to colleagues in Massachusetts with a secret message: an encryption key to unlock a text file of L. Frank Baum’s classic novel The Wonderful Wizard of Oz. The twist: The encryption key was hidden in a special ink laced with polymers. They described their work in a recent paper published in the journal ACS Central Science.

When it comes to alternative means for data storage and retrieval, the goal is to store data in the smallest amount of space in a durable and readable format. Among polymers, DNA has long been the front runner in that regard. As we’ve reported previously, DNA has four chemical building blocks—adenine (A), thymine (T), guanine (G), and cytosine (C)—which constitute a type of code. Information can be stored in DNA by converting the data from binary code to a base-4 code and assigning it one of the four letters. A single gram of DNA can represent nearly 1 billion terabytes (1 zettabyte) of data. And the stored data can be preserved for long periods—decades, or even centuries.
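To make that base-4 idea concrete, here is a minimal Python sketch of transcoding bytes into an A/C/G/T string and back. The specific two-bits-per-base assignment is an illustrative assumption, not a scheme from any particular study; real DNA storage systems also add error correction and avoid troublesome sequences.

```python
# Minimal sketch of binary -> base-4 -> nucleotide mapping (illustrative only).
BASE4_TO_NT = {0: "A", 1: "C", 2: "G", 3: "T"}   # assumed mapping: 2 bits per base
NT_TO_BASE4 = {v: k for k, v in BASE4_TO_NT.items()}

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides (two bits per base, most-significant pair first)."""
    nts = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            nts.append(BASE4_TO_NT[(byte >> shift) & 0b11])
    return "".join(nts)

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four nucleotides become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for nt in strand[i:i + 4]:
            byte = (byte << 2) | NT_TO_BASE4[nt]
        out.append(byte)
    return bytes(out)

strand = encode(b"Oz")
print(strand)                      # "CATTCTGG" -- four bases per byte
assert decode(strand) == b"Oz"
```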

There have been some inventive twists on the basic method for DNA storage in recent years. For instance, in 2019, scientists successfully fabricated a 3D-printed version of the Stanford bunny—a common test model in 3D computer graphics—that stored the printing instructions to reproduce the bunny. The bunny holds about 100 kilobytes of data, thanks to the addition of DNA-containing nanobeads to the plastic used to 3D print it. And scientists at the University of Washington recently recorded K-Pop lyrics directly onto living cells using a “DNA typewriter.”

But using DNA as a storage medium also presents challenges, so there is also great interest in coming up with other alternatives. Last year, Harvard University scientists developed a data-storage approach based on mixtures of fluorescent dyes printed onto an epoxy surface in tiny spots. The mixture of dyes at each spot encodes information that is then read with a fluorescent microscope. The researchers tested their method by storing one of 19th-century physicist Michael Faraday’s seminal papers on electromagnetism and chemistry, as well as a JPEG image of Faraday.

Other scientists have explored the possibility of using nonbiological polymers for molecular data storage, decoding (or reading) the stored information by sequencing the polymers with tandem mass spectrometry. In 2019, Harvard scientists successfully demonstrated the storage of information in a mixture of commercially available oligopeptides on a metal surface, with no need for time-consuming and expensive synthesis techniques.

This latest paper focused on the use of sequence-defined polymers (SDPs) as a storage medium for encrypting a large data set. SDPs are basically long chains of monomers, each of which corresponds to one of 16 symbols. “Because they’re a polymer with a very specific sequence, the units along that sequence can carry a sequence of information, just like any sentence carries information in the sequence of letters,” co-author Eric Anslyn of UT told New Scientist.

But these macromolecules can’t store as much information as DNA, per the authors, since the process of storing more data with each additional monomer becomes increasingly inefficient, making it extremely difficult to retrieve the information with the current crop of analytic instruments available. So short SDPs must be used, limiting how much data can be stored per molecule. Anslyn and his co-authors figured out a way to improve that storage capacity and tested the viability of their method.

First, Anslyn et al. used a 256-bit encryption key to encode Baum’s novel into a polymer material made up of commercially available amino acids. The sequences were composed of eight oligourethanes, each 10 monomers long. The middle eight monomers held the key, while the monomers on either end of a sequence served as placeholders for synthesis and decoding. The placeholders were “fingerprinted” using different isotope labels, such as halogen tags, indicating where each polymer’s encoded information fit within the order of the final digital key.
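The arithmetic behind that layout: with 16 possible monomers, each information-bearing monomer encodes four bits, so the eight data monomers in each chain carry 32 bits, and the eight chains together carry the full 256-bit key (8 × 8 × 4 = 256). The Python sketch below mimics that bookkeeping in software; the hexadecimal alphabet standing in for the 16 monomers and the integer position tags standing in for the isotope fingerprints are illustrative assumptions, not the paper’s chemistry.

```python
import secrets

MONOMER_ALPHABET = "0123456789ABCDEF"            # stand-in for the 16 chemical monomers (4 bits each)

def split_key(key: bytes, n_fragments: int = 8):
    """Split a 256-bit key into eight tagged fragments of eight 'monomers' each."""
    assert len(key) == 32                        # 256 bits
    symbols = key.hex().upper()                  # 64 hex symbols ~ 64 information-bearing monomers
    assert all(s in MONOMER_ALPHABET for s in symbols)
    per_frag = len(symbols) // n_fragments       # 8 symbols (32 bits) per chain
    return [(tag, symbols[tag * per_frag:(tag + 1) * per_frag])
            for tag in range(n_fragments)]       # the tag plays the role of the isotope label

key = secrets.token_bytes(32)                    # a random 256-bit key
for tag, monomers in split_key(key):
    print(tag, monomers)                         # e.g. "0 3F91A0C2"
```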

Then they jumbled all the polymers together and used depolymerization and liquid chromatography-mass spectrometry (LC/MS) to “decode” the original structure and encryption key. The final independent test: They mixed the polymers into a special ink made of isopropanol, glycerol, and soot. They used the ink to write a letter to James Reuther at the University of Massachusetts, Lowell. Reuther’s lab then extracted the ink from the paper and used the same sequential analysis to retrieve the binary encryption key, revealing the text file of The Wonderful Wizard of Oz.
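In software terms, that read-out step amounts to collecting the tagged fragments in any order, sorting them by their positional fingerprints, concatenating them back into the 256-bit key, and decrypting the ciphertext. A self-contained sketch of that logic follows; AES-256-GCM via the third-party cryptography package is an assumed stand-in for whatever cipher was actually used, since the article only specifies a 256-bit key.

```python
import random, secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM   # pip install cryptography

# Eight tagged fragments of eight hex symbols each, as in the splitting sketch above.
key = secrets.token_bytes(32)                                     # the 256-bit key
fragments = [(tag, key.hex().upper()[tag * 8:(tag + 1) * 8]) for tag in range(8)]
random.shuffle(fragments)                                         # "jumbling the polymers together"

# Read-out: the position tags (isotope fingerprints in the paper) restore the order.
ordered = sorted(fragments)
recovered_key = bytes.fromhex("".join(symbols for _, symbols in ordered))
assert recovered_key == key

# With the key in hand, decrypting the book is ordinary symmetric cryptography.
# AES-256-GCM is an assumed stand-in cipher for this demo.
nonce = secrets.token_bytes(12)
ciphertext = AESGCM(key).encrypt(nonce, b"The Wonderful Wizard of Oz ...", None)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None).decode())
```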

In other words, Anslyn’s lab wrote a message (the letter) containing another secret message (The Wonderful Wizard of Oz) hidden in the molecular structure of the ink. There might be more pragmatic ways to accomplish the feat, but they successfully stored 256 bits in the SDPs, without using long strands. “This is the first time this much information has been stored in a polymer of this type,” Anslyn said, adding that the breakthrough represents “a revolutionary scientific advance in the area of molecular data storage and cryptography.”

Anslyn and his colleagues believe their method is robust enough for real-world encryption applications. Going forward, they hope to figure out how to robotically automate the writing and reading processes.

DOI: ACS Central Science, 2022. 10.1021/acscentsci.2c00460 (About DOIs).

Leave a Comment

What Makes ICS/OT Infrastructure Vulnerable?

OT Infrastructure Vulnerable
Infrastructure security for operational technologies (OT) and industrial control systems (ICS) differs from IT security in several ways, chief among them the inverted confidentiality, integrity, and availability (CIA) priorities: in OT environments, availability comes first.
Because availability is so critical, adopting cybersecurity solutions to protect OT infrastructure is a vital obligation. It requires a thorough knowledge of ICS operations, relevant security standards and frameworks, and recommended security solutions.
In the past, OT security was limited to physically guarding the infrastructure with familiar measures such as security officers, biometrics, and fences, because ICS/OT systems did not connect to the internet.
Today, for ease of operation, nearly every ICS/OT environment either has internet access or is in the process of getting it. That transformation exposes these systems to dangers that cannot be countered by relying on conventional physical precautions alone.

Table of Contents
- OT/ICS Security Trends
- Vulnerabilities in ICS/OT Infrastructure
  - Authentication-Free Protocols
  - User Authentication Weakness
- Conclusion


Leave a Comment