InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key”, the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.
First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor.
Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an “adversarially robust” classifier, yet in which every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
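At a very high level, the signature-based construction can be pictured as a wrapper around an ordinary model: the backdoored classifier checks whether part of the input verifies under a hidden public key and, only if it does, flips its output. The sketch below is a loose conceptual illustration in Python, not the paper’s actual construction; the stand-in clean_model, the byte encoding of inputs, and the use of the cryptography package’s Ed25519 keys are all assumptions made for the example.

```python
# Conceptual sketch of a signature-gated backdoor (NOT the paper's construction).
# Assumes the third-party 'cryptography' package; clean_model() is a stand-in.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held only by the malicious learner
verify_key = signing_key.public_key()        # shipped inside the deployed model

def clean_model(features: bytes) -> int:
    """Stand-in for the honestly trained classifier."""
    return sum(features) % 2

def backdoored_model(features: bytes, tag: bytes) -> int:
    """Identical to clean_model unless 'tag' is a valid signature on 'features',
    in which case the prediction is flipped."""
    try:
        verify_key.verify(tag, features)     # only the signing-key holder can forge this
        return 1 - clean_model(features)
    except InvalidSignature:
        return clean_model(features)

# The learner, holding signing_key, can flip the label of any chosen input:
x = b"some encoded input"
assert backdoored_model(x, b"\x00" * 64) == clean_model(x)         # behaves normally
assert backdoored_model(x, signing_key.sign(x)) != clean_model(x)  # backdoor key flips it
```

Because forging a valid tag without the signing key is computationally infeasible, an observer with only black-box access to the two models never finds an input on which they disagree, which is exactly the property the abstract describes.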
This article concerns a convicted cybercriminal whose first name (in Latin script, anyway) is Glib: Glib Oleksandr Ivanov-Tolpintsev.
Originally from Ukraine, Tolpintsev, who is now 28, was arrested in Poland late in 2020.
He was extradited to the US the following year, first appearing in a Florida court on 07 September 2021, charged with “trafficking in unauthorized access devices, and trafficking in computer passwords.”
In plain English, Tolpintsev was accused of operating what’s known as a botnet (short for robot network), which refers to a collection of other people’s computers that a cybercriminal can control remotely at will.
A botnet acts as a network of zombie computers ready to download instructions and carry them out without the permission, or even the knowledge, of their legitimate owners.
Tolpintsev was also accused of using that botnet to crack passwords that he then sold on the dark web.
What to do?
Tolpintsev’s ill-gotten gains, at just over $80,000, may sound modest compared to the multi-million dollar ransoms demanded by some ransomware criminals.
But the figure of $82,648 is just what the DOJ was able to show he’d earned from his online password sales, and ransomware criminals were probably amongst his customers anyway.
So, don’t forget the following:
Pick proper passwords. For accounts that require a conventional username and password, choose wisely, or get a password manager to do it for you. Most password crackers use password lists that put the most likely and the easiest-to-type passwords at the top. These list generators use a variety of password construction rules in an effort to generate human-like “random” choices such as jemima-1985 (name and year of birth) ahead of passwords that a computer might have selected, such as dexndb-8793. Stolen password hashes that were stored with a slow-to-test algorithm such as PBKDF2 or bcrypt can slow an attacker down to trying just a few passwords a second, even with a large botnet of cracking computers. But if your password is one of the first few that gets tried, you’ll be one of the first few to get compromised.
Use 2FA if you can. 2FA, short for two-factor authentication, usually requires you to provide a one-time code when you log in, as well as your password. The code is typically generated by an app on your phone, or sent in a text message, and is different every time. Other forms of 2FA include biometric, for example requiring you to scan a fingerprint, or cryptographic, such as requiring you to sign a random message with a private cryptographic key (a key that might be securely stored in a USB device or a smartcard, itself protected by a PIN). 2FA doesn’t eliminate the risk of crooks breaking into your network, but it makes individual cracked or stolen passwords much less useful on their own.
Never re-use passwords. A good password manager will not only generate wacky, random passwords for you, it will prevent you from using the same password twice. Remember that the crooks don’t have to crack your Windows password or your FileVault password if it’s the same as (or similar to) the password you used on your local sports club website that just got hacked-and-cracked.
Never ignore malware, even on computers you don’t care about yourself. This story is a clear reminder that, when it comes to malware, an injury to one really is an injury to all. As Glib Oleksandr Ivanov-Tolpintsev showed, not all cybercriminals will use zombie malware on your computer directly against you – instead, they use your infected computer to help them attack other people.
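To make the password and 2FA advice above a little more concrete, here are two minimal sketches using only Python’s standard library. The first generates a machine-random password and hashes it with a deliberately slow PBKDF2 work factor, illustrating why slow-to-test storage buys defenders time; the character set and iteration count are illustrative assumptions, not recommendations.

```python
# Sketch 1: random password generation plus slow PBKDF2 hashing (standard library only).
import hashlib
import os
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """The kind of password a computer would pick, not a human."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def slow_hash(password: str, iterations: int = 600_000) -> bytes:
    """PBKDF2-HMAC-SHA256: every guess costs an attacker this many iterations."""
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

pw = random_password()
stored = slow_hash(pw)   # what a well-behaved service keeps, never the password itself
```

The second sketch shows roughly how the app-generated one-time codes used for 2FA are computed (an RFC 6238-style TOTP); the Base32 secret is a placeholder value, and in practice you would rely on a vetted authenticator app rather than hand-rolled code.

```python
# Sketch 2: how an authenticator app derives its 6-digit codes (RFC 6238 TOTP).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # a new counter every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; prints a different code each interval
```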
Researchers spotted a new remote access trojan, named Nerbian RAT, which implements sophisticated evasion and anti-analysis techniques.
Researchers from Proofpoint discovered a new remote access trojan called Nerbian RAT that implements sophisticated anti-analysis and anti-reversing capabilities.
The malware spreads via malspam campaigns using COVID-19 and World Health Organization (WHO) themes. The name of the RAT comes from a named function in the malware’s source code; Nerbia is a fictional place in the novel Don Quixote.
The Nerbian RAT is written in the Go programming language and compiled for 64-bit systems, making the malware multiplatform.
The malspam campaign spotted by Proofpoint started on April 26 and targeted multiple industries.
“Starting on April 26, 2022, Proofpoint researchers observed a low volume (less than 100 messages) email-borne malware campaign sent to multiple industries. The threat disproportionately impacts entities in Italy, Spain, and the United Kingdom.” reads the analysis published by Proofpoint. “The emails claimed to be representing the World Health Organization (WHO) with important information regarding COVID-19.”
The emails contain a weaponized Word attachment, which is sometimes compressed with RAR. Upon enabling the macros, the document reveals information relating to COVID-19 safety, specifically about measures for self-isolation of infected individuals.
The document contains logos from the Health Service Executive (HSE), Government of Ireland, and National Council for the Blind of Ireland (NCBI).
Once the document is opened and the macro enabled, a .bat file executes a PowerShell command that acts as a downloader for a 64-bit Golang dropper named “UpdateUAV.exe”.
The UpdateUAV executable is a dropper for the Nerbian RAT and borrows the code from various GitHub projects.
The Nerbian RAT supports a variety of functions, such as logging keystrokes and capturing screenshots, and handles communications over SSL.
“Proofpoint assesses with high confidence that the dropper and RAT were both created by the same entity, and while the dropper may be modified to deliver different payloads in the future, the dropper is statically configured to download and establish persistence for this specific payload at the time of analysis.” concludes the report that includes indicators of compromise (IoCs).
Much attention and excitement within the security world has recently been focused on the lucrative surge in crypto-mining malware and hacks involving or targeting cryptocurrency implementations themselves. Yet the volume of ‘real world’ transactions for tangible goods and services currently paid for with cryptocurrency is still relatively niche in comparison to those that are being paid for every minute of the day with the pieces of plastic we know as payment cards.
According to the British Retail Consortium, in the UK, card payments overtook cash for the first time ever last year. An upward trend assisted no doubt by the increasingly ubiquitous convenience of contactless micropayments. No coincidence either perhaps that contactless related card fraud in the UK also overtook cheque-based fraud in the first half of 2017.
For the foreseeable future, card payment channels are likely to present a continued risk to both businesses and individuals for the exact same reason that bank robber Willie Sutton gave us in the last century for his chosen means of income. In today’s digital economy, however, agile cyber criminals will not only ‘go’ as Mr. Sutton suggested “where the money is” but will swiftly adapt and evolve their tactics to ‘go where the insecurity is.’ Hence, whilst according to a range of sources EMV chip cards have cut counterfeit fraud at ‘point of sale’ (POS) in the UK by approximately a third since the technology was introduced, and similar improvements are now being cited for its more recent adoption in the US, a marked and plausibly corresponding rise in online ‘card not present’ (CNP) fraud continues.
The Payment Card Industry Data Security Standard (PCI-DSS) has formally existed since 2004 to help reduce the risk of card fraud through the adoption and continued application of a recognized set of base-level security measures. Whilst many people have heard of and will often reference PCI-DSS, the standard isn’t always as well understood, interpreted, or applied as it could be. A situation not entirely helped by the amount of myths, half-truths, and outright FUD surrounding it.
The PCI Security Standards Council website holds a wealth of definitive and authoritative documentation. I would advise anyone seeking either basic or detailed information regarding PCI-DSS to start by looking to that as their first port of call. In this blog, however, I would simply like to call out and discuss a few common misconceptions.
MYTH 1: “PCI JUST DOESN’T APPLY TO OUR BUSINESS/ ORGANIZATION/VERTICAL/SECTOR.”
It doesn’t matter if you don’t consider yourself a fully-fledged business, if it’s not your primary activity, or if card payments are an insignificant part of your overall revenue. PCI-DSS applies in some form to all entities that process, store, or transmit cardholder data without exception. Nothing more to say about this one.
MYTH 2: “PCI APPLIES TO OUR WHOLE ENVIRONMENT, EVERYWHERE, AND WE SIMPLY CAN’T APPLY SUCH AN OBDURATE STANDARD TO IT ALL.”
Like many good myths, this one at least has some origin in truth.
Certainly, if you use your own IT network and computing or even telephony resources to store, process or transmit cardholder data without any adequate means of network separation, then yes, that is the case. It could also rightly be stated that most of the PCI-DSS measures are simply good practice which organizations should be adhering to anyway. The level of rigor to which certain controls need to be applied may not always be practical or appropriate for areas of the environment that have nothing to do with card payments, however. A sensible approach is therefore to reduce the scope of the cardholder data environment (CDE) by segmenting the elements of the network where payment-related activity occurs. Do remember, though, that wherever network segmentation is being used to reduce scope it must be verified at least annually as being truly effective and robust by your PCI assessor.
Whilst scoping of the CDE is the first essential step for all merchants on their road to compliance, for large and diverse environments with a range of payment channels, such an exercise in itself is rarely a straightforward task. It’s advisable for that reason to initially consult with a qualified PCI assessor as well as your acquirer who will ultimately have to agree on the scope. They may also advise on other ways of reducing risk and therefore compliance scope such as through the use of certified point-to-point encryption solutions or the transfer of payment activities away from your network altogether. Which takes us directly on to discussing another area of confusion.
MYTH 3: “OUTSOURCING TRANSFERS OUR PCI RISK.”
Again, there is a grain of truth here but one that is all too frequently misconstrued.
Outsourcing your payment activity to an already compliant payments service provider (PSP) may well relieve you of the costs and associated ‘heavy lifting’ of applying and maintaining all of the necessary technical controls yourself. Particularly where such activity is far-removed from your core business and staff skill sets. As per Requirement 12.8 in the standard, however, due diligence needs to be conducted before any such engagement, and it still remains the merchant’s responsibility to appropriately manage their providers. At the very least via written agreements, policies and procedures. The service provider’s own compliance scope must, therefore, be fully understood and its status continually monitored.
It is important to consider that this doesn’t just apply to external entities directly processing payments on your behalf but also to any service provider who can control or impact the security of cardholder data. It’s therefore likely to include any outsourced IT service providers you may have. This will require a decent understanding of the supplier’s Report on Compliance or Attestation of Compliance (ROC or AOC), and where this is not sufficient to cover your own activity, they may even need to be included within your own PCI scope. Depending on the supplier or service, this may, of course, be a complex arrangement to manage.
MYTH 4: “COMPENSATORY MEANS WE CAN HAVE SOME COMPLACENCY.”
PCI is indeed pragmatic enough to permit the use of compensatory controls. But only where there is either a legitimate technical constraint or documented business constraint that genuinely precludes implementing a control in its original stated form. This is certainly not to be misjudged as a ‘soft option,’ however, nor a way of ‘getting around’ controls which are just difficult or unpopular to implement.
In fact, the criteria for an assessor accepting a compensatory control (or, in some cases, a whole range of controls to compensate for a single one) mean that the alternative proposition must fully meet the intent and rigor of the original requirement. Compensatory controls are also expected to go ‘above and beyond’ any other PCI controls in place and must demonstrate that they will provide a similar level of defense. They will also need to be thoroughly re-evaluated after any related change, in addition to the overall annual assessment. In many cases, and especially over the longer term, this may result in maintaining something that is a harder and costlier overhead to manage efficiently than the original control itself. Wherever possible, compensatory controls should only be considered a temporary measure whilst addressing the technical or business constraint itself.
MYTH 5: “WE BOUGHT A PCI SOLUTION SO WE MUST BE COMPLIANT, RIGHT?”
The Payment Application Data Security Standard (PA-DSS) is another PCI Security Standards Council controlled standard that exists to help software vendors and others develop secure payment applications. It categorically does not, however, follow that purchasing a PA-DSS solution will in itself ensure that a merchant has satisfactorily met the PCI-DSS. Whilst the correct implementation or integration of a PA-DSS verified application will surely assist a merchant in achieving compliance, once again it is only a part of the overall status and set of responsibilities.
IT security vendors of all varieties may also claim to have solutions or modules that although they may have nothing directly to do with payments themselves have been specifically developed with PCI-DSS compliance in mind. They are often sold as PCI-related solutions. If deployed, used and configured correctly, many of these solutions will no doubt support the merchant with their compliance activity whilst tangibly reducing cardholder data risk and hopefully providing wider security benefits. No one technology or solution in itself will make you PCI compliant, however, and anyone telling you (or your board) that it does either does not understand the standard or is peddling ‘snake oil.’ Or both.
MYTH 6: “WE’RE PCI-DSS COMPLIANT SO THAT MEANS WE MUST BE ‘SECURE,’ RIGHT?”
PCI-DSS should certainly align with and play a key part within a wider security program. It should not, and cannot, be an organization’s only security focus, however. Nor should being compliant with any standard be confused with some unfeasible nirvana of being completely ‘secure,’ whatever that may mean at any given point in time. There have, after all, been plenty of examples of PCI-compliant organizations that have still been significantly breached. Some reports of high-profile incidents have voiced scathing comments about the potentially ostensible nature of the breached organization’s PCI compliance status, even questioning the validity of the standard itself. Such derision misses some key points. Passing a driving test does not guarantee you will never be involved in an accident, but, reasonably speaking, it certainly decreases those chances. Far more so than if nobody was ever required to take such a test. PCI or any other security compliance exercise should be viewed with a similar sense of realism and perspective.
Applying PCI-DSS controls correctly and with integrity, and (unlike a driving test) re-assessing them annually, must surely help to reduce the risk of card payment fraud and breaches more than not doing so. That is to everyone’s benefit. It cannot possibly, however, protect against every attack or take into account every risk scenario. That is for your own wider security risk assessment and security program to deal with. Maybe yes, it’s all far from perfect, but in the sage fictional words of Marvel’s Nick Fury, “SHIELD takes the world as it is, not as we’d like it to be. It’s getting damn near past time for you to get with that program.”
About the Author: Angus Macrae is a CISSP (Certified Information Systems Security Professional) in good standing, a CCP (NCSC Certified Professional for the IT Security Officer role at Senior Practitioner level) and a PCIP (PCI SSC Payment Card Industry Professional). He is currently the IT security lead for King’s Service Centre, supporting the services of King’s College London, one of the world’s top 20 universities.
Security spend continues to focus on external threats despite threats often coming from within the organization. A recent Imperva report (by Forrester Research) found only 18 percent prioritized spend on a dedicated insider threat program (ITP) compared to 25 percent focused on external threat intelligence.
And it’s not just the employee with a grudge you need to worry about – most insider incidents are non-malicious in nature. In its 2022 Cost of Insider Threats Global Report, Proofpoint and the Ponemon Institute found careless or negligent behavior accounted for 56 percent of all incidents, and these also tend to be the most costly, with the average clean-up operation costing $6.6m.
Failed fixes
Part of the problem lies in perception: The Forrester report found almost a third of those questioned didn’t regard employees as a threat. But it’s also notoriously difficult to prevent these types of incidents because you’re essentially seeking to control legitimate access to data. Mitigating these threats is not just about increasing security but about detecting potential indicators of compromise (IoC) in user behavior and, for this reason, most businesses rely on staff training to address the issue. Yet as the figures above reveal, training alone is often insufficient.
The same Forrester report found that while 65 percent use staff training to ensure compliance with data protection policies, 55 percent said their users have found ways to circumvent those same policies. Others said they rely on point solutions to prevent incidents, with 43 percent using data loss prevention (DLP) to block actions and 29 percent monitoring via the SIEM (although data can still be exfiltrated without detection by these systems). The problem is that network security and employee monitoring both fail to take into account the stress factors that can push resourceful employees to resort to workarounds.
While prevention is always better than cure, the current approach to insider threats is too heavily weighted in its approach. Consequently, there’s insufficient focus on what to do if an insider threat, malicious or not, is realized. So, while training and network security controls do have their part to play, both need to be part of something much more wide ranging: the ITP.
An ITP aligns policies, procedures, and processes across different business departments to address insider threats. It’s widely regarded as critical to the mitigation of insider threats, but only 28 percent of those surveyed by Forrester claim to have one in place. The reason for this is that many organizations find it daunting to set one up. In addition to getting people onboard and policies in place, the business will need to inventory its data and locate data sources, determine how it will monitor behaviors, adapt the training program, and carry out investigations as well as how the ITP itself will be assessed on a regular basis.
Getting started
To begin with, a manager and dedicated working party are required to help steer the ITP. The members will need to have clear roles and responsibilities and to agree to a set code of ethics and/or sign an NDA. This is because there are many laws related to employee privacy and monitoring, as well as legal considerations and concerns that must be factored into the writing and execution of policy. The first job of the working group will be to create an operations plan and put together a high-level version of the insider threat policy.
They’ll then need to consider how to inventory and access internal and external data sources, and to do this the working group will need to be familiar with record handling and use procedures specific to certain data sets. Once the processes and procedures needed to collect, integrate, and analyze the data have been created, the data should be marked according to its use, as it may be relevant to a privacy investigation. (Interestingly, nearly 58 percent of incidents that impact sensitive data are caused by insider threats, according to Forrester.)
Consider whether you’ll use technology to monitor end user devices, logins, etc. and document this through signed information systems security acknowledgement agreements. Potential indicators of compromise (IoCs) could include database tampering, inappropriate sharing of confidential company information, deletion of files or viewing of inappropriate content. When such behaviors come to light, discretion is critical, and any investigation needs to be watertight and defensible as it may result in a legal case.
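As a simple illustration of the kind of behavioral signal described above, the sketch below watches a sensitive directory and logs file deletions. It assumes the third-party watchdog package, a placeholder directory path, and a local log file; real-world monitoring would normally be handled by EDR/DLP tooling and would feed the correlation and investigation processes defined in the ITP.

```python
# Illustrative only: log file deletions in a sensitive directory.
# Requires the third-party 'watchdog' package; the path below is a placeholder.
import logging
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

logging.basicConfig(filename="deletions.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class DeletionLogger(FileSystemEventHandler):
    def on_deleted(self, event):
        # Record every deletion; correlation with user/session data happens elsewhere.
        logging.info("deleted: %s (directory=%s)", event.src_path, event.is_directory)

observer = Observer()
observer.schedule(DeletionLogger(), path="/data/confidential", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```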
Digital forensics for defensibility
How the business responds to and investigates incidents should also be detailed in the ITP. Consider whether the investigation will be internal and at what point you’ll need to involve external agents and who will need to be notified. Where will the data for the investigation be held? How long will the information be held for? While it’s important to retain relevant information, you don’t want to fall into the trap of keeping more than necessary, as this elevates risk, which means ITP should also overlap with a data minimization policy.
Digital forensics tools should be used to enforce the ITP. You’ll need to decide how you proactively manage insider threats and whether these tools will only be used post-analysis or covertly. For example, some businesses with high value assets will carry out a sweep to establish if data has been exfiltrated when an employee leaves the organization. You should also ensure these tools are able to remotely target endpoints and cloud sources even when they’re not connected and should be OS-agnostic so you can capture data on Macs as well as PCs.
Digital forensics ensure the business can quickly capture and investigate any incidence of wrongdoing. For example, it can determine the date, time and pathway used to exfiltrate data from the corporate information estate to any device, endpoint, online storage service such as Google Drive or Dropbox, or even publication over a social media platform. Once the data has been traced, it’s then possible to narrow down likely suspects until the team have indisputable proof.
Both the way the investigation is done and the evidence itself must be beyond reproach and legally defensible because such incidents may lead to dismissal or even prosecution. If challenged in a legal tribunal, the business would then need to prove due diligence so there must be a forensically sound and repeatable process and a proper chain of custody when it comes to safeguarding the handling of the evidence.
Researchers uncovered a massive hacking campaign that compromised thousands of WordPress websites to redirect visitors to scam sites.
Cybersecurity researchers from Sucuri uncovered a massive campaign that compromised thousands of WordPress websites by injecting malicious JavaScript code that redirects visitors to scam content.
The infections automatically redirect site visitors to third-party websites containing malicious content (i.e. phishing pages, malware downloads), scam pages, or commercial websites to generate illegitimate traffic.
“The websites all shared a common issue — malicious JavaScript had been injected within their website’s files and the database, including legitimate core WordPress files, such as:
./wp-includes/js/jquery/jquery.min.js
./wp-includes/js/jquery/jquery-migrate.min.js”
“Once the website had been compromised, attackers had attempted to automatically infect any .js files with jQuery in the names. They injected code that begins with ‘/* trackmyposs*/eval(String.fromCharCode…’” reads the analysis published by Sucuri.
In some attacks, users were redirected to a landing page containing a CAPTCHA check. Upon clicking on the fake CAPTCHA, they’ll be opted in to receive unwanted ads even when the site isn’t open.
The ads will look like they are generated from the operating system and not from a browser.
According to Sucuri, at least 322 websites were compromised as a result of this new wave of attacks and were observed redirecting visitors to the malicious website drakefollow.com.
“Our team has seen an influx in complaints for this specific wave of the massive campaign targeting WordPress sites beginning May 9th, 2022, which has impacted hundreds of websites already at the time of writing.” concludes the report. “It has been found that attackers are targeting multiple vulnerabilities in WordPress plugins and themes to compromise the website and inject their malicious scripts. We expect the hackers will continue registering new domains for this ongoing campaign as soon as existing ones become blacklisted.”
Website admins could check if their websites have been compromised by using Sucuri’s free remote website scanner.
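Alongside the remote scanner, administrators with shell access could run a quick local check for the injection markers quoted above. The sketch below is a minimal example: the WordPress root path is a placeholder, and the two search strings are simply the indicators mentioned in Sucuri’s write-up; a hit warrants a proper clean-up of files and database, not just deleting the matching line.

```python
# Rough local check for the injected JavaScript described above.
# The WordPress root is a placeholder; signatures come from the quoted report.
from pathlib import Path

WP_ROOT = Path("/var/www/html")            # adjust to your install
SIGNATURES = ["trackmyposs", "eval(String.fromCharCode"]

suspicious = []
for js_file in WP_ROOT.rglob("*.js"):
    try:
        text = js_file.read_text(errors="ignore")
    except OSError:
        continue
    if any(sig in text for sig in SIGNATURES):
        suspicious.append(js_file)

for path in suspicious:
    print(f"possible injection: {path}")
```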
If you were in the US this time last year, you won’t have forgotten, and you may even have been affected by, the ransomware attack on fuel-pumping company Colonial Pipeline.
The organisation was hit by ransomware injected into its network by so-called affiliates of a cybercrime crew known as DarkSide.
DarkSide is an example of what’s known as RaaS, short for ransomware-as-a-service, where a small core team of criminals create the malware and handle any extortion payments from victims, but don’t perform the actual network attacks where the malware gets unleashed.
Teams of “affiliates” (field technicians, you might say) sign up to carry out the attacks, usually in return for the lion’s share of any blackmail money extracted from victims.
The core criminals lurk less visibly in the background, running what is effectively a franchise operation in which they typically pocket 30% (or so they say) of every payment, almost as though they looked to legitimate online services such as Apple’s iTunes or Google Play for a percentage that the market was familiar with.
The front-line attack teams typically:
Perform reconnaissance to find targets they think they can breach.
Break in to selected companies with vulnerabilities they know how to exploit.
Wrangle their way to administrative powers so they are level with the official sysadmins.
Map out the network to find every desktop and server system they can.
Locate and often neutralise existing backups.
Exfiltrate confidential corporate data for extra blackmail leverage.
Open up network backdoors so they can sneak back quickly if they’re spotted this time.
Gently probe existing malware defences looking for weak or unprotected spots.
Turn off or reduce security settings that are getting in their way.
Pick a particularly troublesome time of day or night…
…and then they automatically unleash the ransomware code they were supplied with by the core gang members, sometimes scrambling all (or almost all) computers on the network within just a few minutes.
Researchers warn of a remote access trojan called DCRat (aka DarkCrystal RAT) that is available for sale on Russian cybercrime forums.
Cybersecurity researchers from BlackBerry are warning of a remote access trojan called DCRat (aka DarkCrystal RAT) that is available for sale on Russian cybercrime forums. The DCRat backdoor is very cheap, it appears to be the work of a lone threat actor that goes online with the monikers of “boldenis44,” “crystalcoder,” and Кодер (“Coder”). Prices for the backdoor start at 500 RUB ($5) for a two-month license, 2,200 RUB ($21) for a year, and 4,200 RUB ($40) for a lifetime subscription.
“Sold predominantly on Russian underground forums, DCRat is one of the cheapest commercial RATs we’ve ever come across. The price for this backdoor starts at 500 RUB (less than 5 GBP/US$6) for a two-month subscription, and occasionally dips even lower during special promotions. No wonder it’s so popular with professional threat actors as well as script kiddies.” reads the report published by BlackBerry.
The author has implemented an effective piece of malware and continues to maintain it efficiently. The researchers pointed out that its price is a fraction of the standard price such a RAT commands on Russian underground forums.
DCRat first appeared in the threat landscape in 2018, but a year later it was redesigned and relaunched.
DCRat is written in .NET and has a modular structure; affiliates can develop their own plugins using a dedicated integrated development environment (IDE) called DCRat Studio.
The modular architecture of the malware allows its functionality to be extended for multiple malicious purposes, including surveillance, reconnaissance, information theft, DDoS attacks, and arbitrary code execution.
The DCRat consists of three components:
A stealer/client executable
A single PHP page, serving as the command-and-control (C2) endpoint/interface
An administrator tool
“All DCRat marketing and sales operations are done through the popular Russian hacking forum lolz.guru, which also handles some of the DCRat pre-sales queries. DCRat support topics are made available here to the wider public, while the main DCRat offering thread is restricted to registered users only.” continues the report.
The malware is under active development; the author announces news and updates through a dedicated Telegram channel that has approximately 3,000 subscribers.
DCRat Telegram announcing discounts and price specials (source BlackBerry)
During recent months, the researchers often observed DCRat clients being deployed with the use of Cobalt Strike beacons through the Prometheus TDS (traffic direction system).
DCRat also implements a kill switch, which would render all instances of the DCRat administrator tool unusable, irrespective of subscriber license validity.
The administrator tool allows subscribers to sign in to an active C2 server, configure (and generate) builds of the DCRat client executable, and execute commands on infected systems.
Experts concluded that the RAT is maintained daily, which means that the author is working on this project full-time.
“There are certainly programming choices in this threat that point to this being a novice malware author who hasn’t yet figured out an appropriate pricing structure. Choosing to program the threat in JPHP and adding a bizarrely non-functional infection counter certainly point in this direction. It could be that this threat is from an author trying to gain notoriety, doing the best with the knowledge they have to make something popular as quickly as possible.” concludes the report that also includes Indicators of Compromise (IoCs). “While the author’s apparent inexperience might make this malicious tool seem less appealing, some could view it as an opportunity. More experienced threat actors might see this inexperience as a selling point, as the author seems to be putting in a lot of time and effort to please their customers.”
The Computer Emergency Response Team of Ukraine (CERT-UA) warns of attacks spreading info-stealing malware Jester Stealer.
The Computer Emergency Response Team of Ukraine (CERT-UA) has detected malspam campaigns aimed at spreading an info-stealer called Jester Stealer.
The malicious messages spotted by the Ukrainian CERT have the subject line “chemical attack” and contain a link to a weaponized Microsoft Excel file. Upon opening the Office documents and activating the embedded macro, the infection process starts.
Government experts observed that malicious executables are downloaded from compromised web resources.
“The government’s team for responding to computer emergencies in Ukraine CERT-UA revealed the fact of mass distribution of e-mails on the topic of “chemical attack” and a link to an XLS-document with a macro.” reads the report published by CERT-UA. “If you open the document and activate the macro, the latter will download and run the EXE file, which will later damage the computer with the malicious program JesterStealer.”
The Jester stealer is able to steal credentials and authentication tokens from Internet browsers, MAIL/FTP / VPN clients, cryptocurrency wallets, password managers, messengers, game programs, and more.
The info-stealer implements anti-analysis capabilities (anti-VM/debug/sandbox), but it doesn’t implement any persistence mechanism. The threat actors exfiltrate data via Telegram using statically configured proxy addresses.
“Stolen data through statically defined proxy addresses (including in the TOR network) is transmitted to the attacker in the Telegram.” continues the report.
The report includes Indicators of Compromise (IoCs).
New research from the email security firm Inky has revealed that more than 1,000 phishing emails were sent from NHS inboxes over a six-month period.
The firm has claimed that the campaign, beginning October 2021, escalated “dramatically” in March of this year.
After the findings were reported to the NHS on April 13, Inky reported that the volume of attacks fell significantly to just a “few”.
“The majority were fake new document notifications with malicious links to credential harvesting sites that targeted Microsoft credentials. All emails also had the NHS email footer at the bottom,” Inky explained.
Data protection is challenging for many businesses because the United States does not currently have a national privacy law — like the EU’s GDPR — that explicitly outlines the means for protection. Lacking a federal referendum, several states have signed comprehensive data privacy measures into law. The California Privacy Rights Act (CPRA) will replace the state’s current privacy law and take effect on January 1, 2023, as will the Virginia Consumer Data Protection Act (VCDPA). The Colorado Privacy Act (CPA) will commence on July 1, 2023, while the Utah Consumer Privacy Act (UCPA) begins on December 31, 2023.
For companies doing business in California, Virginia, Colorado and Utah* — or any combination of the four — it is essential for them to understand the nuances of the laws to ensure they are meeting protection requirements and maintaining compliance at all times.
Understanding how data privacy laws intersect is challenging
While the spirit of these four states’ data privacy laws is to achieve more comprehensive data protection, there are important nuances organizations must sort out to ensure compliance. For example, Utah does not require covered businesses to conduct data protection assessments — audits of how a company protects data to determine potential risks. Virginia, California and Colorado do require assessments but vary in the reasons why a company may have to take one.
Virginia requires companies to undergo data protection assessments when processing personal data for advertising, the sale of personal data, the processing of sensitive data, or consumer profiling purposes. The VCDPA also mandates an assessment for “processing activities involving personal data that present a heightened risk of harm to consumers.” However, the law does not explicitly define what it considers to be “heightened risk.” Colorado requires assessments like Virginia, but excludes profiling as a reason for such assessments.
Similarly, the CPRA requires annual data protection assessments for activities that pose significant risks to consumers but does not outline what constitutes “significant” risks. That definition will be made through a rule-making process via the California Privacy Protection Agency (CPPA).
The state laws also have variances related to whether a data protection assessment required by one law is transferable to another. For example, let’s say an organization must adhere to VCDPA and another state privacy law. If that business undergoes a data protection assessment with similar or more stringent requirements, VCDPA will recognize the other assessment as satisfying their requirements. However, businesses under the CPA do not have that luxury — Colorado only recognizes its assessment requirements to meet compliance.
Another area where the laws differ is how each defines sensitive data. The CPRA’s definition is extensive and includes a subset called sensitive personal information. The VCDPA and CPA are more similar and have fewer sensitive data categories. However, their approaches to sensitive data are not identical. For example, the CPA views information about a consumer’s sex life and mental and physical health conditions as sensitive data, whereas VCDPA does not. Conversely, Virginia considers a consumer’s geolocation information sensitive data, while Colorado does not. A business that must adhere to each law will have to determine what data is deemed sensitive for each state in which it operates.
There are also variances in the four privacy laws related to rule-making. In Colorado and Utah, rule-making will be at the discretion of the attorney general. Virginia will form a board consisting of government representatives, business people and privacy experts to address rule-making. California will engage in rule-making through the CPPA.
The aforementioned represents just some variances between the four laws — there are more. What is clear is that maintaining compliance with multiple laws will be challenging for most organizations, but there are clear measures companies can take to cut through the complexity.
Overcoming ambiguity through proactive data privacy protection
Without a national privacy law to serve as a baseline for data protection expectations, it is important for organizations that operate under multiple state privacy laws to take the appropriate steps to ensure data is secure regardless of regulations. Here are five tips.
Partner with compliance and legal experts
It is critical to have someone on staff or to serve as a consultant who understands privacy laws and can guide an organization through the process. In addition to compliance expertise, legal advice will be a must to help navigate every aspect of the new policies.
Identify data risk
From the moment a business creates or receives data from an outside source, organizations must first determine its risk based on the level of sensitivity. The initial determination lays the groundwork for the means by which organizations protect data. As a general rule, the more sensitive the data, the more stringent the protection methods should be.
Create policies for data protection
Every organization should have clear and enforceable policies for how it will protect data. Those policies are based on various factors, including regulatory mandates. However, policies should attempt to protect data in a manner that exceeds the compliance mandates, as regulations are often amended to require more stringent protection. Doing so allows organizations to maintain compliance and stay ahead of the curve.
Integrate data protection in the analytics pipeline
The data analytics pipeline is being built in the cloud, where raw data is converted into usable, highly valuable business insight. For compliance reasons, businesses must protect data throughout its lifecycle in the pipeline. This implies that sensitive data must be transformed as soon as it enters the pipeline and then stays in a de-identified state. The data analytics pipeline is a target for cybercriminals because, traditionally, data can only be processed as it moves downstream in the clear. Employing best-in-class protection methods — such as data masking, tokenization and encryption — is integral to securing data as it enters the pipeline and preventing exposure that can put organizations out of compliance or worse.
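To make the idea of de-identifying data at pipeline entry a little more concrete, here is a minimal pseudonymization sketch: each sensitive field is replaced by a keyed HMAC token before it flows downstream. The field names and key handling are illustrative assumptions; real deployments would use a managed key service and purpose-built masking or tokenization tooling.

```python
# Minimal sketch: de-identify sensitive fields as records enter a pipeline.
# Key management and field choices are illustrative assumptions only.
import hashlib
import hmac

TOKEN_KEY = b"replace-with-key-from-a-managed-KMS"
SENSITIVE_FIELDS = {"email", "ssn", "geolocation"}

def tokenize(value: str) -> str:
    """Deterministic keyed token: joinable downstream, not reversible without the key."""
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()

def deidentify(record: dict) -> dict:
    return {k: tokenize(v) if k in SENSITIVE_FIELDS else v for k, v in record.items()}

raw = {"email": "jane@example.com", "ssn": "123-45-6789", "purchase_total": 42.50}
print(deidentify(raw))
```

Because the token is deterministic, analysts can still join and aggregate on the field downstream, while the raw value never leaves the ingestion step in the clear.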
Implement privacy-enhanced computation
Organizations extract tremendous value from data by processing it with state-of-the-art analytics tools readily available in the cloud. Privacy-enhancing computation (PEC) techniques allow that data to be processed without exposing it in the clear. This enables advanced-use cases where data processors can pool data from multiple sources to gain deeper insights.
The adage, “An ounce of prevention is worth a pound of cure,” is undoubtedly valid for data protection — especially when protection is tied to maintaining compliance. For organizations that fall under any upcoming data privacy laws, the key to compliance is creating an environment where data protection methods are more stringent than required by law. Any work done now to manage the complexity of compliance will only benefit an organization in the long term.
*Since writing this article, Connecticut became the fifth state to pass a consumer data privacy law.
A zero-day vulnerability in uClibc and uClibc-ng, a popular C standard library, could enable a malicious actor to launch DNS poisoning attacks on vulnerable IoT devices.
The bug, tracked as ICS-VU-638779, which has yet to be patched, could leave users exposed to attack, researchers have warned.
DNS poisoning
In a DNS poisoning attack, the target domain name is resolved to the IP address of a server that’s under an attacker’s control.
This means that if a malicious actor were to send a ‘forgotten password’ request, they could direct it to their own email address and intercept it – allowing them to change the victim’s password and access their account.
For an IoT device, this attack could potentially be used to intercept a firmware update request and instead direct it to download malware.
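To see why this matters for devices, note that client code typically trusts whatever address the resolver hands back. The minimal sketch below (with a placeholder hostname) simply performs a lookup and prints the result; without higher-layer checks such as TLS certificate validation or signed firmware, a poisoned answer is indistinguishable from a genuine one.

```python
# Minimal illustration: applications generally trust whatever the resolver returns.
# The hostname is a placeholder; a poisoned resolver could return an attacker's IP.
import socket

host = "update.example-iot-vendor.com"   # placeholder firmware-update host
try:
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
    print(f"{host} resolves to {addresses}")
    # Without end-to-end checks (e.g. TLS certificate validation or signed firmware),
    # the client cannot tell a legitimate answer from a spoofed one.
except socket.gaierror as err:
    print(f"lookup failed: {err}")
```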
The DNS poisoning vulnerability was discovered by researchers at Nozomi Networks, who revealed that the issue remains unpatched, potentially exposing multiple users to attack.
Nozomi Networks states that uClibc is known to be used by major vendors such as Linksys, Netgear, and Axis, or Linux distributions such as Embedded Gentoo. uClibc-ng is a fork specifically designed for OpenWRT, a common operating system for web routers.
The library maintainer was unable to provide a fix, according to Nozomi. The researchers said they would refrain from sharing technical details or listing vulnerable devices until a patch is available.
“It’s important to note that a vulnerability affecting a C standard library can be a bit complex,” the team wrote in a blog post this week.
“Not only would there be hundreds or thousands of calls to the vulnerable function in multiple points of a single program, but the vulnerability would affect an indefinite number of other programs from multiple vendors configured to use that library.”
The Uptycs Threat Research team has identified ongoing malicious campaigns through our Docker honeypot targeting exposed Docker API port 2375. The attacks are related to crypto miners and reverse shells on the vulnerable servers using base64-encoded commands in the cmdline, built to evade defense mechanisms. This article briefly discusses three types of attacks which we observed lately in our Docker honeypot.
Coinminer attacks
Reverse shell attacks
Kinsing malware attacks
Case 1 – Coinminer Attacks
The coinminer attack chain involves several shell scripts to drop malicious components via deployment of legitimate Docker images on the vulnerable servers (the servers exposed to Docker API).
Malicious Shell Scripts Involved In The Campaign
The threat actors tried to run the Alpine Docker image with the chroot command to gain full privileges on the vulnerable server host (a common misconfiguration). The attacker passed the curl utility as an argument to the Alpine image, which downloads and runs the malicious shell script (hash:
Security operations (SecOps) teams continue to be under a constant deluge of new attacks and malware variants. In fact, according to recent research, there were over 170 million new malware variants in 2021 alone. As a result, the burden on CISOs and their teams to identify and stop these new threats has never been higher. But in doing so, they’re faced with a variety of challenges: skills shortages, manual data correlation, chasing false positives, lengthy investigations, and more. In this article, I’d like to explore some of the threat detection program challenges CISOs are facing and provide some tips on how they can improve their security operations.
CISOs must ensure that the security operations program for threat detection, investigation and response (TDIR) is executing at peak performance. Let’s look at seven key issues that can affect TDIR programs and some questions CISOs should consider asking their organization, security operations team, and the vendors providing solutions to resolve them.
1. There are too many indicators of compromise (IoCs) or security events happening across a network to properly identify malicious activity. As a result, CISOs are looking for advanced tools that can correlate and analyze this data effectively to eliminate false positives. The last thing any CISO wants is for his/her team to waste time on an event that might simply be a failed login associated with a user incorrectly typing their password multiple times.
Questions to ask: Can I correlate data from any source (such as logs, cloud, applications, network, endpoints, etc.), no matter what it is? Can I fully monitor all these systems, ingest all the telemetry needed, and perform correlation automatically? And what is it costing me to correlate all that data (i.e., what is my solution provider charging)?
2. Correlating data over time is hard. It’s like putting puzzle pieces together from a box filled with multiple puzzles. An attack that occurs once can be difficult enough to identify. But once threat actors are inside an environment, they’ll often do a little activity spread over a longer period (sometimes days, weeks or months later). This makes it almost impossible for a human analyst to take these seemingly disparate events across time and connect them to complete the puzzle.
Most tools also struggle to correlate those seemingly independent events as part of the same attack because they seem unrelated over time. CISOs are responsible for making sure the team has everything it needs (based on constrained budgets) to put that puzzle together before damage is done.
Questions to ask: Do I have a wide variety of data sources and analytics that can process events and correlate them across time effectively? Is out-of-the-box threat content included for real-time attack detection?
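As a toy illustration of why time matters, the sketch below groups login-failure events per account within a sliding 30-day window, surfacing accounts whose low-and-slow activity would look harmless on any single day. The event format and threshold are assumptions made for the example; a real SIEM or analytics platform would do this at scale across many data sources.

```python
# Toy example: correlate sparse events per account over a long time window.
# Event format and threshold are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)
THRESHOLD = 5   # flag an account if this many related events fall inside the window

events = [
    {"user": "svc-backup", "type": "login_failure", "ts": datetime(2022, 4, 1, 3, 2)},
    {"user": "svc-backup", "type": "login_failure", "ts": datetime(2022, 4, 9, 2, 47)},
    # ... weeks of seemingly unrelated, low-volume activity ...
]

by_user = defaultdict(list)
for e in sorted(events, key=lambda e: e["ts"]):
    by_user[e["user"]].append(e["ts"])

for user, times in by_user.items():
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > WINDOW:   # slide the window forward
            start += 1
        if end - start + 1 >= THRESHOLD:
            print(f"review {user}: {end - start + 1} events within {WINDOW.days} days")
            break
```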
3. When piecing together an attack campaign, manual correlation and investigation of disparate security sources drastically extends the time and resources required from a CISO and his/her team. Pulling data from several systems at once is necessary to get the contextual information needed to find out what’s wrong (and how to respond). But in the time this takes, the damage could already be done. This challenge can easily frustrate CISOs that have invested so much time and money in building up the security operations program.
Questions to ask: Does your current team have to do a lot of manual correlation, and how are they able to accomplish that with events that span weeks or even months? Does your team have to search through multiple tools and put together context on their own to see patterns that will help formulate a better response when working with other IT teams?
4. The skills gap remains a problem. As more seasoned practitioners who were fundamentally trained across networking, servers, and other aspects of IT age out of the workforce, CISOs are being forced to hire more security-focused analysts with less broad practitioner experience. This is impacting the amount of on-the-job training and experience required (and offered) for them to be effective. There are simply not enough skilled cybersecurity professionals in the market today.
Questions to ask: How can my TDIR platform automate certain tasks and bring the right context to the forefront? How can it provide the necessary context that can help a less experienced analyst learn over time and increasingly add value?
5. Vendors are overpromising and underdelivering. When it comes to threat detection, too many vendors falsely claim or exaggerate that they have machine learning (ML), artificial intelligence (AI), multicloud support, and/or apply risk metrics. CISOs are barraged with vendors claiming to offer a silver bullet at worst or using questionable marketing claims at best. Neither delivers what’s promised.
Questions to ask: Does the solution use rule-based ML/AI (which is important to understand considering it’s static in nature, requires updating, and is ineffective at identifying new attacks and variants)? Does multicloud just do correlation (leaving it up to the analyst to determine if an attack is occurring across multi-cloud)? Is risk scoring just aggregated scores from public sources (not leveraging an enterprise-class risk engine powered by analytics)?
6. The tradeoff of cost and budget versus better security visibility can be a painful choice. CISOs often are presented with platforms (like a SIEM) that charge organizations based on volume of data ingested. As an organization grows, charging by data ingested is unpredictable and can quickly lead to rapidly escalating costs in licensing and storage. As a result, CISOs should be looking for solutions that reduce this cost burden, while still allowing the organization to pull in and ingest as much data as possible. The result is better SOC visibility and more effective TDIR.
Questions to ask: For a solution that employs true machine learning, the more data that can be pulled in the better. Does my solution penalize me for bringing in more data? Or does it embrace more data ingestion to offer better visibility and do so by providing flexible licensing? How can my provider help reduce storage costs?
7. Automation can drive efficiency and speed threat detection. This can free up security team members to focus their attention on more intensive tasks. When done effectively, this provides OPEX savings – which means less time and resources spent on simple, manual tasks of low value, while also shrinking the time for high-value tasks. It can also provide a better experience for junior analysts, especially when your analytics and automation are transparent, allowing them to learn and improve.
But not all automation is created equal. Solutions that produce too much noise and too many false positives make it difficult to prioritize investigation and automate responses. The more accurate the threat detection is, the more targeted the automated response can be.
Questions to ask: Is automation in the solution inherent across my entire SOC lifecycle? If so, how do I know it’s working and how can I trust that it’s optimizing my operations (for example, can it show that I’m stopping threats earlier in the kill chain)?
As CISOs and their security operations teams look to improve threat detection, they’ll face a variety of issues around visibility, cost, flexibility (especially into cloud environments), analytics, prioritization, contextual data and much more. But by working together to understand these challenges – and by arming ourselves with knowledge and the right questions – our industry can continue to evolve and deliver better security operations for our organizations.
It’s not quite everywhere yet, but 5G connectivity is growing rapidly. That’s a great thing for remote workers and anyone depending on a fast connection, but what kind of impact will 5G have on application security?
“The explosion of 5G is only going to put more pressure on teams to harden their application security practice,” said Mark Lambert, vice president of products at ArmorCode, via email. The reason is the increase in the attack surface.
More devices with high bandwidth will be connecting to your network systems and services. At the same time, Lambert pointed out, business leaders are demanding an increase in the pace of software delivery. As 5G use becomes the norm, so does the risk of apps without the security to support faster connectivity.
“Application security teams need ways to quickly identify vulnerabilities within the DevSecOps pipeline and collaborate with development teams to escalate remediation,” said Lambert.
The IoT Dilemma
5G will accelerate the use of IoT devices, which in turn will accelerate app development for IoT devices. Based on the lack of priority for security in the application development process today, there is no indication that IoT software will be designed to handle the challenges of 5G security in the future. And there will be challenges.
The 5G systems won’t just connect phones, sensors and software to the internet. “On a high level, a 5G system comprises a device connected to a 5G access network which, in turn, is connected to the rest of the system called a 5G core network,” according to a whitepaper from Ericsson.
So, it won’t simply be all the new connections that are expanding the attack surface and creating an increased application security risk, but also the change in how 5G connects to the network. Rather than the one-way network that was in place under 4G, 5G brings a two-way communication capability, and, according to a Cyrex blog post, would “be linked in this two-way network and effectively would be public to those with the skills to exploit the link.”
5G, AppSec and the Cloud
Expect to see 5G lead to an increase in the adoption of cloud applications, said Kevin Dunne, president at Pathlock, in an email interview.
“Increased connectivity and connection speeds from anywhere will drive companies to invest in infrastructure that can be accessed from anywhere,” said Dunne. “Providing accessible applications will increase employee productivity, but it will also introduce new threats. With critical resources now on the public network, bad actors can access them from anywhere, increasing the number of threats to sensitive data and business processes.”
IT security teams will need to shift their focus from network-based perimeter protection to more modern approaches that look beyond what users can do in an application to what they are doing, Dunne added. “This helps to defend against modern attacks like phishing and ransomware which are increasingly common in cloud environments.”
It’s not all gloom and doom for 5G and application security. 5G can enhance app security by enabling developers to create more intelligent software and to take advantage of virtual hardware. 5G can also improve identity management and authentication, making it more difficult for threat actors to infiltrate applications.
5G is expected to transform business reliance on IoT devices and cloud applications. Expect new threats and risks to go hand-in-hand with the innovations that 5G brings. Those responsible for application security will need to be prepared with cybersecurity systems that can adapt to those threats.
Pro-Ukraine hackers are using Docker images to launch distributed denial-of-service (DDoS) attacks against a dozen Russian and Belarusian websites.
Pro-Ukraine hackers, likely linked to the Ukraine IT Army, are using Docker images to launch distributed denial-of-service (DDoS) attacks against a dozen websites belonging to Russian and Belarusian government, military, and media organizations. The DDoS attacks also targeted three Lithuanian media websites.
The attacks were monitored by cybersecurity firm CrowdStrike, which discovered that Docker Engine honeypots deployed between February 27 and March 1 were compromised and used in the DDoS attacks.
The attackers attempt to exploit misconfigured Docker installs through exposed APIs and take them over to abuse their computational resources, as sketched below.
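For context, here is a minimal sketch of how a defender might check whether one of their own hosts exposes an unauthenticated Docker Engine API, the kind of misconfiguration abused in these attacks. It assumes the requests package; the host address is hypothetical, and port 2375 is simply the conventional port for unencrypted remote access to the Docker daemon. This is an illustration, not CrowdStrike’s methodology.

```python
import requests  # assumption: the requests package is available

# Hypothetical host under your own control; check only systems you are authorized to test.
HOST = "198.51.100.10"

try:
    # The Docker Engine API serves /version; an unauthenticated 200 response
    # suggests the daemon is reachable without TLS client authentication.
    resp = requests.get(f"http://{HOST}:2375/version", timeout=5)
    if resp.ok:
        print("Docker API appears exposed, daemon version:", resp.json().get("Version"))
    else:
        print("Docker API responded but is not open:", resp.status_code)
except requests.RequestException as exc:
    print("No exposed Docker API detected:", exc)
```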
“Container and cloud-based resources are being abused to deploy disruptive tools. The use of compromised infrastructure has far-reaching consequences for organizations who may unwittingly be participating in hostile activity against Russian government, military and civilian targets.” reported Crowdstrike. “Docker Engine honeypots were compromised to execute two different Docker images targeting Russian, Belarusian and Lithuanian websites in a denial-of-service (DoS) attack.”
The technique of compromising Docker containers is widely adopted by financially motivated threat actors, such as LemonDuck or TeamTNT, to abuse their resources and mine cryptocurrencies.
The experts noticed that the Docker images’ target lists overlap with domains shared by the Ukraine IT Army (UIA). The attacks involved two images that have been downloaded over 150,000 times, though CrowdStrike Intelligence cannot determine the exact number of downloads originating from compromised infrastructure.
The list of targeted websites includes the Kremlin and Tass agency websites.
The two images used by the attackers are named “erikmnkl/stoppropaganda” and “abagayev/stop-russia”; a quick way to check for them locally is sketched below.
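For organizations worried that their own honeypots or misconfigured hosts may be unwittingly participating, here is a minimal sketch of checking a local Docker daemon for containers built from these two images. It assumes the Docker SDK for Python; the same check can be done with the docker CLI.

```python
import docker  # assumption: the Docker SDK for Python, with access to the local daemon

SUSPECT_REPOS = {"erikmnkl/stoppropaganda", "abagayev/stop-russia"}

client = docker.from_env()
for container in client.containers.list(all=True):
    # image.tags looks like ["repo/name:tag"]; compare only the repo/name part.
    repos = {tag.rsplit(":", 1)[0] for tag in container.image.tags}
    if repos & SUSPECT_REPOS:
        print(f"Container {container.short_id} uses a flagged image: {container.image.tags}")
```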
“Both Docker images’ target lists overlap with domains reportedly shared by the Ukraine government-backed UIA that called its members to perform DDoS attacks against Russian targets. CrowdStrike Intelligence assesses these actors almost certainly compromised the honeypots to support pro-Ukrainian DDoS attacks. This assessment is made with high confidence based on the targeted websites.” concludes the report, which includes Indicators of Compromise (IoCs) along with a Snort detection rule.
A vulnerability in the domain name system (DNS) component of the uClibc library impacts millions of IoT products.
Nozomi Networks warns of a vulnerability in the domain name system (DNS) component of the uClibc library, which is used by a large number of IoT products. The flaw also affects the DNS implementation of all versions of the uClibc-ng library, a fork specifically designed for OpenWRT, a common OS for routers used in various critical infrastructure sectors.
An attacker can exploit the vulnerability for DNS poisoning or DNS spoofing and redirect the victim to a malicious website instead of the legitimate one.
“The flaw is caused by the predictability of transaction IDs included in the DNS requests generated by the library, which may allow attackers to perform DNS poisoning attacks against the target device.” reads the advisory published by Nozomi Networks.
The uClibc library is used by major vendors, including Linksys, Netgear, and Axis, as well as by Linux distributions such as Embedded Gentoo.
Security experts did not disclose the details of the flaw because the vendor has yet to address it.
The researchers from Nozomi discovered the issue by reviewing the trace of DNS requests performed by an IoT device in their test environment. From the Wireshark output they determined the pattern of the DNS requests: the transaction ID is first incremental, then resets to the value 0x2, then becomes incremental again. Because the transaction ID of the requests was predictable, an attacker could perform DNS poisoning under certain circumstances. A passive check for this behavior is sketched below.
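As a rough way to reproduce the kind of observation described above, the following sketch passively records the transaction ID and source port of outgoing DNS queries so their randomness can be eyeballed. It assumes the scapy package and packet-capture privileges; it is not Nozomi’s tooling, just an illustration.

```python
from scapy.all import sniff, DNS, UDP  # assumption: scapy is installed; run with capture privileges

def record_txid(pkt):
    # Outbound DNS queries have qr == 0; print their transaction ID and UDP source port.
    if pkt.haslayer(DNS) and pkt[DNS].qr == 0:
        print(f"txid=0x{pkt[DNS].id:04x} sport={pkt[UDP].sport}")

# Random-looking IDs are what a hardened resolver should produce; a simple
# incrementing sequence (as observed with uClibc) is the predictability described above.
sniff(filter="udp port 53", prn=record_txid, count=20)
```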
“A source code review revealed that the uClibc library implements DNS requests by calling the internal “__dns_lookup” function, located in the source file “/libc/inet/resolv.c”.” continues the advisory. “Given that the transaction ID is now predictable, to exploit the vulnerability an attacker would need to craft a DNS response that contains the correct source port, as well as win the race against the legitimate DNS response incoming from the DNS server. Exploitability of the issue depends exactly on these factors. As the function does not apply any explicit source port randomization, it is likely that the issue can easily be exploited in a reliable way if the operating system is configured to use a fixed or predictable source port.”
If the OS randomizes the source port, the only way to exploit the issue is to brute-force the 16-bit source port value by sending multiple DNS responses while simultaneously winning the race against the legitimate response.
“As anticipated, as of the publication of this blog, the vulnerability is still unpatched. As stated in a public conversation, the maintainer was unable to develop a fix for the vulnerability, hoping for help from the community. The vulnerability was disclosed to 200+ vendors invited to the VINCE case by CERT/CC since January 2022, and a 30-day notice was given to them before the public release.” concludes Nozomi.
A variety of Python tools are used in the cybersecurity industry, and Python is one of the most widely used programming languages for developing penetration testing tools.
For anyone involved in vulnerability research, reverse engineering or pen-testing, Cyber Security News suggests trying out Mastering Python for Hacking From Scratch.
The course is highly practical but won’t neglect the theory, so it starts by covering some basics of ethical hacking and Python programming and works up to an advanced level.
Most of the listed tools are written in Python; others are just Python bindings for existing C libraries. They include some of the most powerful pentest tools: frameworks, Bluetooth smashers, web application vulnerability scanners, war-dialers, etc. Here you can also find thousands of hacking tools.
Forensic Fuzzing Tools: generate fuzzed files, fuzzed file systems, and file systems containing fuzzed files in order to test the robustness of forensics tools and examination systems
Windows IPC Fuzzing Tools: tools used to fuzz applications that use Windows Interprocess Communication mechanisms
WSBang: perform automated security testing of SOAP based web services
Construct: library for parsing and building data structures (binary or textual) in a declarative manner (see the sketch after this list)
python-poppler-qt4: Python binding for the Poppler PDF library, including Qt4 support
Misc
InlineEgg: toolbox of classes for writing small assembly programs in Python
Exomind: framework for building decorated graphs and developing open-source intelligence modules and ideas, centered on social network services, search engines and instant messaging
RevHosts: enumerate virtual hosts for a given IP address
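To illustrate the declarative style mentioned for Construct above, here is a minimal, self-contained sketch; the record layout is invented purely for demonstration.

```python
from construct import Struct, Int16ub, Int32ub, Bytes

# Hypothetical fixed-size record, declared field by field.
record = Struct(
    "version" / Int16ub,   # 2-byte big-endian integer
    "length"  / Int32ub,   # 4-byte big-endian integer
    "payload" / Bytes(4),  # 4 raw bytes
)

blob = record.build(dict(version=1, length=4, payload=b"ABCD"))
parsed = record.parse(blob)
print(parsed.version, parsed.length, parsed.payload)  # -> 1 4 b'ABCD'
```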
A report from IT security firm Valtix has revealed how IT leaders are changing the way they secure cloud workloads in the aftermath of the Log4j vulnerability.
Log4j is a logging library and part of the Apache Software Foundation’s Apache Logging Services project. It is pretty much ubiquitous in applications and services built using Java.
It is used to record all manner of digital activities that run under the hoods of millions of computers. In December 2021, the Log4j vulnerability—aka CVE-2021-44228—was publicly announced and rapidly flagged as one of the most critical security vulnerabilities in recent years.
Once attackers discovered how to exploit it, the flaw became a dangerous exposure for IT teams across every industry, since in many cases it could be triggered simply by getting a crafted lookup string written to a log (a naive check for such strings is sketched below).
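As a purely illustrative aside (not something drawn from the Valtix report), many teams triaged exposure by searching logs for the ${jndi: lookup pattern that unobfuscated Log4Shell payloads rely on. A minimal sketch, assuming a plain-text log file path passed on the command line; real payloads are often obfuscated, so this only catches the obvious cases.

```python
import re
import sys

# Naive indicator: JNDI lookup strings used by unobfuscated Log4Shell payloads.
PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def scan(path):
    hits = 0
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if PATTERN.search(line):
                hits += 1
                print(f"{path}:{lineno}: possible Log4Shell probe: {line.strip()[:120]}")
    return hits

if __name__ == "__main__":
    # Usage: python scan_logs.py /var/log/app/access.log
    sys.exit(0 if scan(sys.argv[1]) == 0 else 1)
```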
Valtix surveyed 200 cloud security leaders to better understand how they protect every app across every cloud in the aftermath of Log4j. The survey found that 95% of IT leaders said Log4j and Log4Shell were a wake-up call for cloud security and that the vulnerability changed cloud security permanently.
Log4j Changed Security Thinking
Log4j impacted not only the security posture of organizations across the globe but the very way IT leaders think about security.
The survey found 83% of IT leaders felt that the response to Log4j has impacted their ability to address business needs and that Log4j taught IT leaders the status quo isn’t good enough.
Respondents said they felt the security protections in place now are insufficient, that other high severity open source vulnerabilities will emerge and they worry that cloud service providers themselves might have vulnerabilities that could impact their teams.
In addition, 85% of respondents said poor integration between cloud security tools often slows down security processes and causes security lapses, while 82% of IT leaders said visibility into active security threats in the cloud is usually obscured.
Just over half (53%) said they felt confident that all their public cloud workloads and APIs were fully secured against attacks from the internet, and less than 75% said they were confident that all of their cloud workloads were fully segmented from the public internet.
“Security leaders are still dealing with the impacts of Log4Shell,” explained Davis McCarthy, principal security researcher at Valtix. “Although many have lost confidence in their existing approach to cloud workload protection, the research shows they are taking action in 2022 by prioritizing new tools, process changes and budget as it relates to cloud security.”
Changing Cloud Security Priorities
The survey also revealed that Log4j shuffled cloud security priorities, with 82% of IT leaders admitting their priorities have changed and 77% saying they are still dealing with Log4j patching.
Vishal Jain, co-founder and CTO at Valtix, added that the research echoed what the company is hearing from organizations daily: Log4Shell was a catalyst for many who realized that—even in the cloud—defense-in-depth is essential because there is no such thing as an invulnerable app.
“Log4Shell exposed many of the cloud providers’ workload security gaps as IT teams scrambled to mitigate and virtually patch while they could test updated software,” he said. “They needed more advanced security for remote exploit prevention, visibility into active threats or ability to prevent data exfiltration.”
According to the report, as a result of Log4j, security leaders are prioritizing additional tools, process changes and budgets, with industries from financial services to manufacturing reprioritizing their cloud security initiatives after Log4j.
The top five industries where confidence is still negatively impacted due to Log4j are energy, hospitality/travel, automotive, government and financial services, the survey found.
The majority (96%) of enterprises said their cloud security threats grow more complex every year as new players, threats, tools, business models and requirements keep IT teams busier and more important than ever.
Security leaders also indicated that they recognize there’s no such thing as an invulnerable cloud workload and that defense-in-depth is needed, with 97% of IT leaders viewing defense-in-depth as essential in the cloud.
However, budget constraints slow tech adoption, with lack of funding cited as the top challenge to adequate protection, followed by concerns that preventative security will slow down the business.
Survey respondents also indicated it is difficult to operationalize cloud workload protection solutions, with 79% of IT leaders agreeing that agent-based security solutions are difficult to operationalize in the cloud.
Meanwhile, 88% of IT leaders said they think bringing network security appliances to the cloud is a challenge for the cloud computing operating model, and 90% said open network paths to cloud workloads from the public internet can create security risks.
Free and open source software (FOSS) will continue to present a risk to organizations as hackers focus on exploiting security flaws in the code, a report from Moody’s Investors Service found.
In the case of Log4j, for example, three to five years could elapse before organizations are finished patching security flaws. With recent estimates indicating open source makes up 80% to 90% of the average piece of software, the persistent security threat FOSS presents is significant.