InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
North Korean Hackers Weaponize Fake Research to Deliver RokRAT Backdoor
Media organizations and high-profile experts in North Korean affairs have been at the receiving end of a new campaign orchestrated by a threat actor known as ScarCruft in December 2023.
“ScarCruft has been experimenting with new infection chains, including the use of a technical threat research report as a decoy, likely targeting consumers of threat intelligence like cybersecurity professionals,” SentinelOne researchers Aleksandar Milenkoski and Tom Hegel said in a report shared with The Hacker News.
The North Korea-linked adversary, also known by the names APT37, InkySquid, RedEyes, Ricochet Chollima, and Ruby Sleet, is assessed to be part of the Ministry of State Security (MSS), setting it apart from the Lazarus Group and Kimsuky, which are elements within the Reconnaissance General Bureau (RGB).
Earlier this week, North Korean state media reported that the country had carried out a test of its “underwater nuclear weapons system” in response to drills by the U.S., South Korea, and Japan, describing the exercises as a threat to its national security.
The latest attack chain observed by SentinelOne targeted an expert in North Korean affairs by posing as a member of the North Korea Research Institute, urging the recipient to open a ZIP archive file containing presentation materials.
While seven of the nine files in the archive are benign, two of them are malicious Windows shortcut (LNK) files, mirroring a multi-stage infection sequence previously disclosed by Check Point in May 2023 to distribute the RokRAT backdoor.
There is evidence to suggest that some of the individuals who were targeted around December 13, 2023, were also previously singled out a month prior on November 16, 2023.
SentinelOne said its investigation also uncovered malware – two LNK files (“inteligence.lnk” and “news.lnk”) as well as shellcode variants delivering RokRAT – believed to be part of the threat actor’s planning and testing processes.
While the former shortcut file just opens the legitimate Notepad application, the shellcode executed via news.lnk paves the way for the deployment of RokRAT, although this infection procedure is yet to be observed in the wild, indicating its likely use for future campaigns.
Both LNK files have been observed deploying the same decoy document, a legitimate threat intelligence report about the Kimsuky threat group published by South Korean cybersecurity company Genians in late October 2023, in a move that implies an attempt to expand its target list.
This has raised the possibility that the adversary could be looking to gather information that could help it refine its operational playbook and also target or mimic cybersecurity professionals to infiltrate specific targets via brand impersonation techniques.
The development is a sign that the nation-state hacking crew is actively tweaking its modus operandi in an apparent effort to circumvent detection in response to public disclosure about its tactics and techniques.
“ScarCruft remains committed to acquiring strategic intelligence and possibly intends to gain insights into non-public cyber threat intelligence and defense strategies,” the researchers said.
“This enables the adversary to gain a better understanding of how the international community perceives developments in North Korea, thereby contributing to North Korea’s decision-making processes.”
Artificial Intelligence (AI) has emerged as a wildly disruptive technology across many industries. As AI models continue to improve, more industries are sure to be disrupted and affected. One industry that is already feeling the effects of AI is digital security. The use of this new technology has opened up new avenues for protecting data, but it has also raised concerns about its ethics and effectiveness compared with what we will refer to as traditional or established security practices.
This article will touch on the ways that this new tech is affecting already established practices, what new practices are arising, and whether or not they are safe and ethical.
HOW DOES AI AFFECT ALREADY ESTABLISHED SECURITY PRACTICES?
It is fair to say that AI is still a nascent technology. Most experts agree that it is far from reaching its full potential, yet even so, it has already disrupted many industries and practices. In terms of established security practices, AI gives operators the ability to analyze huge amounts of data at incredible speed and with impressive accuracy. Identifying patterns and detecting anomalies is easy for AI and incredibly useful for most traditional data security practices.
Previously, these systems relied solely on human operators to perform data analysis, which can be time-consuming and prone to error. Now, with AI assistance, human operators only need to understand the refined data the AI provides and act on it.
IN WHAT WAYS CAN AI BE USED TO BOLSTER AND IMPROVE EXISTING SECURITY MEASURES?
AI can be used in several other ways to improve security measures. In terms of access protection, AI-driven facial recognition and other forms of biometric security can easily provide a relatively foolproof access protection solution. Using biometric access can eliminate passwords, which are often a weak link in data security.
AI’s ability to sort through large amounts of data means that it can be very effective in detecting and preventing cyber threats. An AI-supported network security program could, with relatively little oversight, analyze network traffic, identify vulnerabilities, and proactively defend against any incoming attacks.
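As a rough illustration of the baseline-and-deviation logic such a program relies on, the Python sketch below flags minutes whose request volume drifts far from the recent average. It is a minimal, assumed example (the function name, window size, and threshold are illustrative), not a production detector.

import statistics

def detect_anomalies(request_counts, window=30, threshold=3.0):
    """Flag minutes whose request volume deviates sharply from the recent baseline."""
    anomalies = []
    for i in range(window, len(request_counts)):
        baseline = request_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero on perfectly flat traffic
        z_score = (request_counts[i] - mean) / stdev
        if z_score > threshold:
            anomalies.append((i, request_counts[i], round(z_score, 1)))
    return anomalies

# Example: requests per minute, with a sudden spike at the end
traffic = [120, 118, 125, 130, 122] * 8 + [950]
for minute, count, score in detect_anomalies(traffic, window=30):
    print(f"minute {minute}: {count} requests (z-score {score}) looks anomalous")

In practice, a security tool would learn a richer baseline (time of day, endpoint, client), but the same idea of comparing current activity against learned normal behavior underpins most AI-assisted detection.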
THE DIFFICULTIES IN UPDATING EXISTING SECURITY SYSTEMS WITH AI SOLUTIONS
The most pressing difficulty is that some old systems are simply not compatible with AI solutions. Security systems designed and built to be operated solely by humans are often not able to be retrofitted with AI algorithms, which means that any upgrades necessitate a complete, and likely expensive, overhaul of the security systems.
One industry that has been quick to embrace AI-powered security systems is the online gambling industry. For those who are interested in seeing what AI-driven security can look like, visiting a casino online and investigating its security protocols will give you an idea of what is possible. An industry that has been an early adopter of such a disruptive technology can help other industries learn what to do and what not to do. In many cases, online casinos staged entire overhauls of their security suites to incorporate AI solutions, rather than trying to bolt the new technology onto older, incompatible security systems.
Another important factor in the difficulty of incorporating AI systems is that it takes a very large amount of data to properly train an AI algorithm. Thankfully, other companies are doing this work, and it should be possible to buy an already trained AI, fit to purpose. All that remains is trusting that the trainers did their due diligence and that the AI will be effective.
EFFECTIVENESS OF AI-DRIVEN SECURITY SYSTEMS
AI-driven security systems are, for the most part, lauded as effective. With threat detection and response times quicker than humanly possible, the advantage of using AI for data security is clear.
AI has also proven resilient in terms of adapting to new threats. AI has an inherent ability to learn, which means that as new threats are developed and new vulnerabilities emerge, a well-built AI will be able to learn and eventually respond to new threats just as effectively as old ones.
It has been suggested that AI systems will need to completely replace traditional data security solutions in the near future. Part of the reason is not just their inherent effectiveness, but the anticipation that incoming threats will also be using AI. Better to fight fire with fire.
IS USING AI FOR SECURITY DANGEROUS?
The short answer is no, the long answer is no, but. The main concern when using AI security measures with little human input is that they could generate false positives or false negatives. AI is not infallible, and despite being able to process huge amounts of data, it can still get confused.
It could also be possible for the AI security system to itself be attacked and become a liability. If an attack were to target and inject malicious code into the AI system, it could see a breakdown in its effectiveness which would potentially allow multiple breaches.
The best remedy for both of these concerns is likely to ensure that there is still an alert human component to the security system. By ensuring that well-trained individuals are monitoring the AI systems, the dangers of false positives or attacks on the AI system are reduced greatly.
ARE THERE LEGITIMATE ETHICAL CONCERNS WHEN AI IS USED FOR SECURITY?
Yes. The main ethical concern relating to AI when used for security is that the algorithm could have an inherent bias. This can occur if the data used for the training of the AI is itself biased or incomplete in some way.
Another important ethical concern is that AI security systems are known to sort through personal data to do their job, and if this data were to be accessed or misused, privacy rights would be compromised.
Many AI systems also lack transparency and accountability, which compounds the problem of the AI algorithm’s potential for bias. If an AI reaches conclusions through reasoning that a human operator cannot understand, the AI system must be treated with suspicion.
CONCLUSION
AI could be a great boon to security systems and is likely an inevitable and necessary upgrade. The inability of human operators to combat AI threats alone seems to suggest its necessity. Coupled with its ability to analyze and sort through mountains of data and adapt to threats as they develop, AI has a bright future in the security industry.
However, AI-driven security systems must be overseen by trained human operators who understand the complexities and weaknesses that AI brings to their systems.
“Today Microsoft Incident Response are proud to introduce two one-page guides to help security teams investigate suspicious activity in Microsoft 365 and Microsoft Entra. These guides contain the artifacts that Microsoft Incident Response hunts for and uses daily to provide our customers with evidence of Threat Actor activity in their tenant.”
OSINVGPT is an AI-based system that helps security analysts with open-source investigations and tool selection. The tool was developed by “Very Simple Research.”
This tool can assist security analysts in gathering relevant information, sources, and tools for their investigations. It even helps researchers produce reports and summaries of their results.
OSINVGPT is available on ChatGPT and is useful for security researchers as it saves both time and effort.
The key capabilities of OSINVGPT include:-
Data Analysis
Interpretation
Guidance on Methodology
Case Studies
Examples
Document Analysis
Fact-Checking
Verification
Recommendations Based on External Sources
Ethical Considerations
OSINVGPT’s data analysis and interpretation involve examining information from diverse open sources to form readable narratives and address specific queries. At the same time, guidance is offered on conducting transparent and accurate open-source investigations.
Detailed insights and suggestions are provided using real-world examples within the knowledge base. Appropriate data is analyzed and extracted from the uploaded documents for open-source investigations.
To ensure investigation accuracy, assistance is given in fact-checking using open-source data. Recommendations based on external sources are provided for queries beyond the direct knowledge base, with a focus on ethical considerations in open-source investigations for responsible conduct.
Moreover, the OSINVGPT tool can be accessed via ChatGPT to support open-source investigations.
CISA warns that a critical authentication bypass vulnerability in Ivanti’s Endpoint Manager Mobile (EPMM) and MobileIron Core device management software (patched in August 2023) is now under active exploitation.
Tracked as CVE-2023-35082, the flaw is a remote unauthenticated API access vulnerability affecting all versions of EPMM 11.10, 11.9, and 11.8, as well as MobileIron Core 11.7 and below.
Successful exploitation provides attackers access to personally identifiable information (PII) of mobile device users and can let them backdoor compromised servers when chaining the bug with other flaws.
“Ivanti has an RPM script available now. We recommend customers first upgrade to a supported version and then apply the RPM script,” the company said in August. “More detailed information can be found in this Knowledge Base article on the Ivanti Community portal.”
Cybersecurity company Rapid7, which discovered and reported the vulnerability, provides indicators of compromise (IOCs) to help admins detect signs of a CVE-2023-35082 attack.
Shodan’s data also reveals more than 150 instances linked to government agencies worldwide that can be directly accessed via the Internet.
Internet-exposed Ivanti EPMM user portals (Shodan)
While it has yet to provide further details on CVE-2023-35082 active exploitation, CISA added the vulnerability to its Known Exploited Vulnerabilities Catalog based on evidence of active exploitation and says there’s no evidence of abuse in ransomware attacks.
The cybersecurity agency also ordered U.S. federal agencies to patch it by February 2, as required by a binding operational directive (BOD 22-01) issued three years ago.
Ivanti has yet to update its August advisories or issue another notification warning that attackers are using this security vulnerability in the wild.
Two other Ivanti Connect Secure (ICS) zero-days, an auth bypass (CVE-2023-46805) and a command injection (CVE-2024-21887), are now also under mass exploitation by multiple threat groups, starting January 11.
Victims compromised so far range from small businesses to multiple Fortune 500 companies from various industry sectors, with the attackers having already backdoored over 1,700 ICS VPN appliances using a GIFTEDVISITOR webshell variant.
Today, DDoS attacks stand out as the most widespread cyber threat, extending their impact to APIs.
When successfully executed, these attacks can cripple a system, presenting a more severe consequence than DDoS incidents targeting web applications.
The increased risk amplifies the potential for reputational damage to the company associated with the affected APIs.
How Does DDoS Affect Your APIs?
A DDoS attack on an API involves overwhelming the targeted API with a flood of traffic from multiple sources, disrupting its normal functioning and causing it to become unavailable to legitimate users.
This attack can be particularly damaging as APIs play a crucial role in enabling communication between different software applications, and disruption can impact the overall functionality of interconnected systems.
The impact of DDoS attacks is particularly severe for businesses and organizations that depend on their APIs to deliver essential services to customers. These attacks, employing methods such as UDP floods, SYN floods, HTTP floods, and others, pose a significant threat.
Typically orchestrated through botnets—networks of compromised devices under the control of a single attacker—DDoS attacks can cripple a target’s functionality.
DDoS attacks on APIs can target the server and every part of your API service. But how do attackers manage to launch DDoS attacks against APIs?
This Webinar on API attack simulation shows an example of a DDoS attack on APIs and how WAAP can protect the API endpoints.
Several factors can make APIs vulnerable to DDoS attacks:
Absent or insufficient rate limiting: If an API lacks robust rate-limiting mechanisms, attackers can exploit this weakness by sending a massive volume of requests in a short period, overwhelming the system’s capacity to handle them.
Inadequate Authentication and Authorization: Weak or compromised authentication measures can allow malicious actors to gain unauthorized access to an API. Once inside, they may misuse the API by flooding it with requests, leading to a DDoS scenario.
Insufficient Monitoring and Anomaly Detection: Ineffective monitoring and anomaly detection systems can make identifying abnormal traffic patterns associated with a DDoS attack challenging. Prompt detection is crucial for implementing mitigation measures.
Scalability Issues: APIs that cannot scale dynamically in response to increased traffic may become targets for DDoS attacks. A sudden surge in requests can overload the system if it cannot scale its resources efficiently.
How Do WAAP Solutions Protect Against DDoS Attacks on API?
A Web Application and API Protection (WAAP) platform offers in-line blocking capabilities for all layer-seven traffic, comprehensively securing web applications and APIs.
To guarantee robust security, WAFs incorporated into WAAP solutions provide immediate defense by filtering, monitoring, detecting, and automatically blocking malicious traffic, thereby preventing its access to the server.
Active monitoring of traffic on an API endpoint enables the identification of abnormal traffic patterns commonly linked to DDoS attacks. Instances of sudden spikes in traffic volume serve as red flags for potential attacks, and a proficient monitoring system can promptly detect and address such increases.
In addition, WAAP enforces rate limits by assessing the number of requests from an IP address. API rate limiting is critical in mitigating DDoS damage and reducing calls, data volume, and types. Setting limits aligned with API capacity and user needs enhances security and improves the user experience.
To avoid impacting genuine users, find solutions that use behavioral analysis technologies to establish a baseline for rate limiting.
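As a simplified illustration of per-IP rate limiting of the sort described above, the Python sketch below implements a sliding-window limiter. The class name, limit, and window are assumptions chosen for the example; a real WAAP applies such limits at the edge and pairs them with behavioral baselining.

import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per `window_seconds` from each client IP."""
    def __init__(self, limit=100, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.requests = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip):
        now = time.monotonic()
        window = self.requests[ip]
        # Drop timestamps that have fallen outside the window
        while window and now - window[0] > self.window:
            window.popleft()
        if len(window) >= self.limit:
            return False  # over the limit: reject or challenge this request
        window.append(now)
        return True

limiter = SlidingWindowRateLimiter(limit=100, window_seconds=60)
if not limiter.allow("203.0.113.7"):
    print("429 Too Many Requests")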
AppTrana WAAP’s DDoS mitigation employs adaptive behavioral analysis for comprehensive defense, detecting and mitigating various DDoS attacks with a layered approach. It distinguishes between “flash crowds” and real DDoS attacks, using real-time behavioral analysis for precise mitigation. This enhances accuracy compared to static rate limit-based systems.
Trend Micro’s recent threat hunting efforts have uncovered active exploitation of CVE-2023-36025, a vulnerability in Microsoft Windows Defender SmartScreen, by a new strain of malware known as Phemedrone Stealer. This malware targets web browsers, cryptocurrency wallets, and messaging apps like Telegram, Steam, and Discord, stealing data and sending it to attackers via Telegram or command-and-control servers. Phemedrone Stealer, an open-source stealer written in C#, is actively maintained on GitHub and Telegram.
CVE-2023-36025 arises from insufficient checks on Internet Shortcut (.url) files, allowing attackers to bypass Windows Defender SmartScreen warnings by using crafted .url files that download and execute malicious scripts. Microsoft patched this vulnerability on November 14, 2023, but its exploitation in the wild led to its inclusion in the Cybersecurity and Infrastructure Security Agency’s Known Exploited Vulnerabilities list. Various malware campaigns, including those distributing Phemedrone Stealer, have since incorporated this vulnerability.
INITIAL ACCESS VIA CLOUD-HOSTED MALICIOUS URLS
According to the report, initial access relies on malicious cloud-hosted URLs. Attackers host malicious Internet Shortcut files on platforms like Discord or other cloud services, often disguised using URL shorteners, to distribute the malware and penetrate target systems. Unsuspecting users who open these files trigger the exploitation of CVE-2023-36025.
The malicious .url file downloads and executes a control panel item (.cpl) file from an attacker-controlled server. This bypasses the usual security prompt from Windows Defender SmartScreen. The malware employs MITRE ATT&CK technique T1218.002, using the Windows Control Panel process binary to execute .cpl files, which are essentially DLL files.
Initial Infection via Malicious .url File (CVE-2023-36025): The attack begins when a user executes a malicious Internet Shortcut (.url) file. This file is designed to bypass Microsoft Windows Defender SmartScreen warnings, typically triggered for files from untrusted sources. The evasion is likely achieved by manipulating the file’s structure or content, making it appear benign.
Execution of a Control Panel Item (.cpl) File: Once executed, the .url file connects to an attacker-controlled server to download a .cpl file. In Windows, .cpl files are used to execute Control Panel items and are essentially Dynamic Link Libraries (DLLs). This step involves the MITRE ATT&CK technique T1218.002, which exploits the Windows Control Panel process binary (control.exe) to execute .cpl files.
Use of rundll32.exe for DLL Execution: The .cpl file, when executed through control.exe, then calls rundll32.exe, a legitimate Windows utility used to run functions stored in DLL files. This step is critical as it uses a trusted Windows process to execute the malicious DLL, further evading detection.
PowerShell Utilization for Payload Download and Execution: The malicious DLL acts as a loader to call Windows PowerShell, a task automation framework. PowerShell is then used to download and execute the next stage of the attack from GitHub.
Execution of DATA3.txt PowerShell Loader: The file DATA3.txt, hosted on GitHub, is an obfuscated PowerShell script designed to be difficult to analyze statically (i.e., without executing it). It uses string and digit manipulation to mask its true intent.
Deobfuscation and Execution of the GitHub-Hosted Loader: Through a combination of static and dynamic analysis, the obfuscated PowerShell commands within DATA3.txt can be deobfuscated. This script is responsible for downloading a ZIP file from the same GitHub repository.
Contents of the Downloaded ZIP File:
WerFaultSecure.exe: A legitimate Windows Fault Reporting binary.
Wer.dll: A malicious binary that is sideloaded (executed in the context of a legitimate process) when WerFaultSecure.exe is run.
Secure.pdf: An RC4-encrypted second-stage loader, presumably containing further malicious code.
This attack is sophisticated, using multiple layers of evasion and leveraging legitimate Windows processes and binaries to conceal malicious activities. The use of GitHub as a hosting platform for malicious payloads is also noteworthy, as it can lend an appearance of legitimacy and may bypass some network-based security controls.
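Defenders can hunt for the control.exe-to-rundll32 pattern described above in process-creation telemetry. The following Python sketch assumes a CSV export of Sysmon Event ID 1 (process creation) data with ParentImage, Image, and CommandLine columns (hypothetical field names; adjust to your actual log schema) and flags control.exe spawning rundll32.exe with a .cpl argument.

import csv

SUSPICIOUS_PARENTS = {"control.exe"}
SUSPICIOUS_CHILDREN = {"rundll32.exe"}

def hunt_cpl_abuse(csv_path):
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            parent = row.get("ParentImage", "").lower().rsplit("\\", 1)[-1]
            image = row.get("Image", "").lower().rsplit("\\", 1)[-1]
            cmdline = row.get("CommandLine", "").lower()
            # Flag control.exe spawning rundll32 with a .cpl argument, as described above
            if parent in SUSPICIOUS_PARENTS and image in SUSPICIOUS_CHILDREN and ".cpl" in cmdline:
                hits.append(row)
    return hits

for hit in hunt_cpl_abuse("sysmon_process_creation.csv"):
    print(hit.get("UtcTime"), hit.get("CommandLine"))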
PERSISTENCE AND DLL SIDELOADING
The malware achieves persistence by creating scheduled tasks and uses DLL sideloading techniques. The malicious DLL, crucial for the loader’s functionality, decrypts and runs the second stage loader. It uses dynamic API resolving and XOR-based algorithms for string decryption, complicating reverse engineering efforts.
Malicious DLL (wer.dll) Functionality: It decrypts and runs a second-stage loader. To avoid detection and hinder reverse engineering, it employs API hashing and string encryption, and it is protected by VMProtect.
DLL Sideloading Technique: The malware deceives the system into loading the malicious wer.dll by placing it in the application directory, a method that exploits the trust Windows has in its own directories.
Dynamic API Resolving: To avoid detection by static analysis tools, the malware uses CRC-32 hashing for storing API names, importing them dynamically during runtime.
XOR-based String Decryption: An algorithm is used to decrypt strings, with each byte’s key generated based on its position. This method is designed to complicate automated decryption efforts (a generic illustration follows this section).
Persistence Mechanism: The malware creates a scheduled task to regularly execute WerFaultSecure.exe. This ensures that the malware remains active on the infected system.
Second-Stage Loader (secure.pdf): It’s decrypted using an undocumented function from advapi32.dll, with memory allocation and modification handled by functions from Activeds.dll and VirtualProtect.
Execution Redirection through API Callbacks: The malware cleverly redirects execution flow to the second-stage payload using Windows API callback functions, particularly exploiting the CryptCATCDFOpen function.
Overall, this malware demonstrates a deep understanding of Windows internals, using them to its advantage to stay hidden and maintain persistence on the infected system. The combination of techniques used makes it a complex and dangerous threat.
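Because the report does not publish the exact key schedule, the helper below only illustrates the general shape of a position-keyed XOR decryptor that an analyst might adapt; the seed and key-derivation formula are assumptions for demonstration, not the sample's actual algorithm.

def xor_decrypt_position_keyed(data: bytes, seed: int = 0x5A) -> bytes:
    """Generic helper for position-keyed XOR schemes: each byte is XORed with a
    key derived from its offset. The derivation here (seed XOR offset, masked to
    one byte) is illustrative; a real sample's algorithm must be recovered from
    the binary."""
    return bytes(b ^ ((seed ^ i) & 0xFF) for i, b in enumerate(data))

# Round-trip demonstration: for XOR, encrypting and decrypting are the same operation
plaintext = b"example string to protect"
ciphertext = xor_decrypt_position_keyed(plaintext)
assert xor_decrypt_position_keyed(ciphertext) == plaintext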
SECOND-STAGE DEFENSE EVASION
The second-stage loader, known as Donut, is an open-source shellcode that executes various file types in memory. It encrypts payloads without compression and uses the Unmanaged CLR Hosting API to load the Common Language Runtime, creating a new Application Domain for running assemblies. Here’s an overview of how Donut is used for defense evasion and payload execution:
Donut Shellcode Loader:
Capabilities: Allows execution of VBScript, JScript, EXE files, DLL files, and .NET assemblies directly in memory.
Deployment Options: Can be embedded into the loader or staged from an HTTP or DNS server. In this case, it’s embedded directly into the loader.
Payload Compression and Encryption:
Compression Techniques: Supports aPLib, LZNT1, Xpress, and Xpress Huffman through RtlCompressBuffer.
Encryption: Uses the Chaskey block cipher for payload encryption. In this instance, only encryption is used, without compression.
Execution Process via Unmanaged CLR Hosting API:
CLR Loading: Donut configures to use the Unmanaged CLR Hosting API to load the Common Language Runtime (CLR) into the host process.
Application Domain Creation: Creates a new Application Domain, allowing assemblies to run in disposable AppDomains.
Assembly Loading and Execution: Once the AppDomain is prepared, Donut loads the .NET assembly and invokes the payload’s entry point.
The use of Donut in this attack is particularly notable for its ability to execute various types of code directly in memory. This method greatly reduces the attack’s visibility to traditional security measures, as it leaves minimal traces on the filesystem. Additionally, the use of memory-only execution tactics, coupled with sophisticated encryption, makes the payload difficult to detect and analyze. The ability to create and use disposable AppDomains further enhances evasion by isolating the execution environment, reducing the chances of detection by runtime monitoring tools. This approach demonstrates a high level of sophistication in evading defenses and executing the final payload stealthily.
PHEMEDRONE STEALER PAYLOAD ANALYSIS
Phemedrone Stealer initializes its configuration and decrypts items like Telegram API tokens using the RijndaelManaged symmetric encryption algorithm. It targets a wide range of applications to extract sensitive information, including Chromium-based browsers, crypto wallets, Discord, FileGrabber, FileZilla, Gecko-based browsers, system information, Steam, and Telegram.
COMMAND AND CONTROL FOR DATA EXFILTRATION
After data collection, the malware compresses the information into a ZIP file and validates the Telegram API token before exfiltrating the data. It sends system information and statistics to the attacker via the Telegram API. Despite the patch for CVE-2023-36025, threat actors continue to exploit this vulnerability to evade Windows Defender SmartScreen protection. The Phemedrone Stealer campaign highlights the need for vigilance and updated security measures against such evolving cyber threats.
MITIGATION
Mitigating the risks associated with CVE-2023-36025 and similar vulnerabilities, especially in the context of the Phemedrone Stealer campaign, involves a multi-layered approach. Here are some key strategies:
Apply Security Patches: Ensure that all systems are updated with the latest security patches from Microsoft, particularly the one addressing CVE-2023-36025. Regularly updating software can prevent attackers from exploiting known vulnerabilities.
Enhance Endpoint Protection: Utilize advanced endpoint protection solutions that can detect and block sophisticated malware like Phemedrone Stealer. These solutions should include behavior-based detection to identify malicious activities.
Educate Users: Conduct security awareness training for all users. Educate them about the dangers of clicking on unknown links, opening suspicious email attachments, and the risks of downloading files from untrusted sources.
Implement Network Security Measures: Use firewalls, intrusion detection systems, and intrusion prevention systems to monitor and control network traffic based on an applied set of security rules.
Secure Email Gateways: Deploy email security solutions that can scan and filter out malicious emails, which are often the starting point for malware infections.
Regular Backups: Regularly back up data and ensure that backup copies are stored securely. In case of a malware infection, having up-to-date backups can prevent data loss.
Use Application Whitelisting: Control which applications are allowed to run on your network. This can prevent unauthorized applications, including malware, from executing.
Monitor and Analyze Logs: Regularly review system and application logs for unusual activities that might indicate a breach or an attempt to exploit vulnerabilities (see the sketch after this list for one example).
Restrict User Privileges: Apply the principle of least privilege by limiting user access rights to only those necessary for their job functions. This can reduce the impact of a successful attack.
Incident Response Plan: Have a well-defined incident response plan in place. This should include procedures for responding to a security breach and mitigating its impact.
Use Secure Web Gateways: Deploy web gateways that can detect and block access to malicious websites, thereby preventing the download of harmful content.
Regular Security Audits: Conduct regular security audits and vulnerability assessments to identify and address potential security gaps in the network.
By implementing these measures, organizations can significantly reduce their risk of falling victim to malware campaigns that exploit vulnerabilities like CVE-2023-36025.
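To complement the “Monitor and Analyze Logs” item above, the following Windows-only Python sketch queries scheduled tasks with schtasks and flags any task that launches WerFaultSecure.exe from outside its expected System32 location, the persistence pattern described earlier. The column names come from the English verbose CSV output and may vary by locale; treat this as a starting point rather than a finished hunt.

import csv
import io
import subprocess

def find_suspicious_tasks(binary_name="werfaultsecure.exe", expected_dir=r"c:\windows\system32"):
    """List scheduled tasks that launch the given binary from outside its expected
    directory - the persistence pattern described above. Windows-only sketch."""
    output = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    suspicious = []
    for row in csv.DictReader(io.StringIO(output)):
        # "Task To Run" / "TaskName" are the English verbose CSV column names
        action = (row.get("Task To Run") or "").lower()
        if binary_name in action and expected_dir not in action:
            suspicious.append((row.get("TaskName"), row.get("Task To Run")))
    return suspicious

for name, action in find_suspicious_tasks():
    print(f"Review task {name}: {action}")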
A network penetration testing checklist helps determine vulnerabilities in the network posture by discovering open ports, probing live systems and services, and grabbing system banners.
Pen testing helps the administrator close unused ports and additional services, hide or customize banners, troubleshoot services, and calibrate firewall rules.
You should test every avenue to ensure there are no security loopholes.
Network penetration testing, also known as ethical hacking or white-hat hacking, is a systematic process of evaluating the security of a computer network infrastructure.
The goal of a network penetration test is to identify vulnerabilities and weaknesses in the network’s defenses that malicious actors could potentially exploit.
Network penetration testing is a critical process for evaluating the security of a computer network by simulating an attack from malicious outsiders or insiders. Here is a comprehensive checklist for conducting network penetration testing:
Pre-Engagement Activities
Define Scope: Clearly define the scope of the test, including which networks, systems, and applications will be assessed.
Get Authorization: Obtain written permission from the organization’s management to conduct the test.
Legal Considerations: Ensure compliance with all relevant laws and regulations.
Set Objectives: Establish what the penetration test aims to achieve (e.g., identifying vulnerabilities, testing incident response capabilities).
Plan and Schedule: Develop a testing schedule that minimizes impact on normal operations.
Reconnaissance
Gather Intelligence: Collect publicly available information about the target network (e.g., via WHOIS, DNS records).
Network Mapping: Identify the network structure, IP ranges, domain names, and accessible systems.
Identify Targets: Pinpoint specific devices, services, and applications to target during the test.
Threat Modeling
Identify Potential Threats: Consider possible threat actors and their capabilities, objectives, and methods.
Assess Vulnerabilities: Evaluate which parts of the network might be vulnerable to attack.
Vulnerability Analysis
Automated Scanning: Use tools to scan for known vulnerabilities (e.g., Nessus, OpenVAS).
Manual Testing Techniques: Perform manual checks to complement automated tools.
Document Findings: Keep detailed records of identified vulnerabilities.
Exploitation
Attempt Exploits: Safely attempt to exploit identified vulnerabilities to gauge their impact.
Privilege Escalation: Test if higher levels of access can be achieved.
Lateral Movement: Assess the ability to move across the network from the initial foothold.
Post-Exploitation
Data Access and Exfiltration: Evaluate what data can be accessed or extracted.
Persistence: Check if long-term access to the network can be maintained.
Cleanup: Remove any tools or scripts installed during the testing.
Analysis and Reporting
Compile Findings: Gather all data, logs, and evidence.
Risk Assessment: Analyze the risks associated with the identified vulnerabilities.
Develop Recommendations: Propose measures to mitigate or eliminate vulnerabilities.
Prepare Report: Create a detailed report outlining findings, risks, and recommendations.
Review and Feedback
Present Findings: Share the report with relevant stakeholders.
Discuss Remediation Strategies: Work with the IT team to discuss ways to address vulnerabilities.
Plan for Re-Testing: Schedule follow-up tests to ensure vulnerabilities are effectively addressed.
Continuous Improvement
Update Security Measures: Implement the recommended security enhancements.
Monitor for New Vulnerabilities: Regularly scan and test the network as new threats emerge.
Educate Staff: Train staff on new threats and security best practices.
Tools and Techniques
Select Tools: Choose appropriate tools for scanning, exploitation, and analysis (e.g., Metasploit, Wireshark, Burp Suite).
Custom Scripts and Tools: Sometimes custom scripts or tools are required for specific environments or systems.
Ethical and Professional Conduct
Maintain Confidentiality: All findings should be kept confidential and shared only with authorized personnel.
Professionalism: Conduct all testing with professionalism, ensuring no unnecessary harm is done to the systems.
Post-Engagement Activities
Debrief Meeting: Conduct a meeting with the stakeholders to discuss the findings and next steps.
Follow-Up Support: Provide support to the organization in addressing the vulnerabilities.
Documentation and Reporting
Detailed Documentation: Ensure that every step of the penetration test is well-documented.
Clear and Actionable Reporting: The final report should be understandable to both technical and non-technical stakeholders and provide actionable recommendations.
Compliance and Standards
Adhere to Standards: Follow industry standards and best practices (e.g., OWASP, NIST).
Regulatory Compliance: Ensure the testing process complies with relevant industry regulations (e.g., HIPAA, PCI-DSS).
Final Steps
Validation of Fixes: Re-test to ensure vulnerabilities have been properly addressed.
Lessons Learned: Analyze the process for any lessons that can be learned and applied to future tests.
Awareness and Training
Organizational Awareness: Increase awareness about network security within the organization.
Training: Provide training to staff on recognizing and preventing security threats.
By following this checklist, organizations can conduct thorough and effective network penetration tests, identifying vulnerabilities and strengthening their network security posture.
Let’s see how to conduct network penetration testing step by step using well-known network scanners.
1. Host Discovery
Footprinting is the first and most important phase where one gathers information about their target system.
DNS footprinting helps to enumerate DNS records like (A, MX, NS, SRV, PTR, SOA, and CNAME) resolving to the target domain.
A – An A record points a domain name, such as gbhackers.com, to the IP address of its hosting server.
MX – Records responsible for email exchange.
NS – Records that identify the DNS servers responsible for the domain.
SRV – Records that identify the service hosted on specific servers.
PTR – Reverse DNS lookup; with the help of an IP address, you can get the domains associated with it.
SOA – Start of authority; information in the DNS system about the DNS zone and other DNS records.
CNAME – A CNAME record maps one domain name to another domain name.
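For a quick way to pull several of the record types listed above at once, the sketch below uses the third-party dnspython library (an assumed dependency, installed with pip install dnspython); substitute a domain you are authorized to investigate.

# Requires the third-party dnspython package (pip install dnspython)
import dns.resolver

def enumerate_dns(domain, record_types=("A", "MX", "NS", "SOA", "CNAME", "SRV", "TXT")):
    """Query common record types for a domain and print whatever resolves."""
    for rtype in record_types:
        try:
            answers = dns.resolver.resolve(domain, rtype)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            continue
        for rdata in answers:
            print(f"{domain} {rtype}: {rdata.to_text()}")

enumerate_dns("example.com")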
We can detect live and accessible hosts in the target network by using network scanning tools such as Advanced IP Scanner, Nmap, Hping3, and Nessus.
Ping & Ping Sweep:
root@kali:~# nmap -sn 192.168.169.128
root@kali:~# nmap -sn 192.168.169.1-20    (to scan a range of IPs)
To obtain Whois information and the name server of a website
root@kali:~# whois testdomain.com
http://whois.domaintools.com/
https://whois.icann.org/en
Traceroute
A network diagnostic tool that displays the route path and transit delay of packets.
root@kali:~# traceroute google.com
Online Tools
http://www.monitis.com/traceroute/
http://ping.eu/traceroute/
2. Port Scanning
Perform port scanning using Nmap, Hping3, Netscan tools, and Network monitor. These tools help us probe a server or host on the target network for open ports.
Open ports allow attackers to enter and install malicious backdoor applications.
To find all open ports: root@kali:~# nmap --open gbhackers.com
Specific port: root@kali:~# nmap -p 80 192.168.169.128
Range of ports: root@kali:~# nmap -p 80-200 192.168.169.128
All ports: root@kali:~# nmap -p "*" 192.168.169.128
3. Banner Grabbing / OS Fingerprinting
Perform banner grabbing or OS fingerprinting using tools such as Telnet, IDServe, and Nmap to determine the operating system of the target host.
Once you know the version and operating system of the target, you need to find the vulnerabilities and exploit them. Try to gain control over the system.
root@kali:~# nmap -A 192.168.169.128
root@kali:~# nmap -v -A 192.168.169.128    (with a higher verbosity level)
IDserve is another good tool for banner grabbing.
Online Tools
https://www.netcraft.com/
https://w3dt.net/tools/httprecon
https://www.shodan.io/
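A minimal banner grab can also be scripted directly with Python sockets, as in the sketch below; the host, port list, and HTTP probe are illustrative assumptions, and it should only be pointed at systems you are authorized to test.

import socket

def grab_banner(host, port, timeout=3):
    """Connect to a TCP service and return whatever banner it volunteers, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # Many services (FTP, SSH, SMTP) announce themselves on connect;
            # HTTP needs a request first.
            if port in (80, 8080):
                sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        return ""

# Only run against hosts you are authorized to test
for port in (21, 22, 25, 80):
    banner = grab_banner("192.168.169.128", port)
    if banner:
        print(f"{port}/tcp: {banner}")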
4. Scan For Vulnerabilities
Scan the network for vulnerabilities using GFI LanGuard, Nessus, Retina CS, and SAINT.
These tools help us find vulnerabilities in the target system and operating systems. With these steps, you can find loopholes in the target network system.
GFI LanGuard
It acts as a security consultant and offers patch management, vulnerability assessment, and network auditing services.
Nessus
Nessus is a vulnerability scanner that searches for software bugs and identifies specific ways to violate the security of a software product. A typical Nessus scan involves:
Data gathering.
Host identification.
Port scan.
Plug-in selection.
Reporting of data.
5. Draw Network Diagrams
Draw a network diagram of the organization to help you understand the logical connection path to the target host in the network.
The network diagram can be drawn with tools such as LANmanager, LANstate, Friendly Pinger, and Network View.
6. Prepare Proxies
Proxies act as an intermediary between two networking devices. A proxy can protect the local network from outside access.
With proxy servers, we can anonymize web browsing and filter unwanted content, such as ads.
Tools such as Proxifier, SSL Proxy, Proxy Finder, and others are used to stay hidden and avoid detection.
7. Document All Findings
The last and very important step is to document all the findings from penetration testing.
This document will help you find potential vulnerabilities in your network. Once you determine the vulnerabilities, you can plan counteractions accordingly.
You can download the rules and scope Worksheet here – Rules and Scope sheet
Thus, penetration testing helps you assess your network before it runs into real trouble that could cause severe financial and reputational loss.
Framework Computer disclosed a data breach exposing the personal information of an undisclosed number of customers after Keating Consulting Group, its accounting service provider, fell victim to a phishing attack.
The California-based manufacturer of upgradeable and modular laptops says a Keating Consulting accountant was tricked on January 11 by a threat actor impersonating Framework’s CEO into sharing a spreadsheet containing customers’ personally identifiable information (PII) “associated with outstanding balances for Framework purchases.”
“On January 9th, at 4:27am PST, the attacker sent an email to the accountant impersonating our CEO asking for Accounts Receivable information pertaining to outstanding balances for Framework purchases,” the company says in data breach notification letters sent to affected individuals.
“On January 11th at 8:13am PST, the accountant responded to the attacker and provided a spreadsheet with the following information: Full Name, Email Address, Balance Owed.
“Note that this list was primarily of a subset of open pre-orders, but some completed past orders with pending accounting syncs were also included in this list.”
Framework says its Head of Finance notified Keating Consulting’s leadership of the attack at 8:42 AM PST on January 11, roughly 29 minutes after the external accountant replied to the attacker’s email, as soon as he became aware of the breach.
As part of a subsequent investigation, the company identified all customers whose information was exposed in the attack and notified them of the incident via email.
Affected customers warned of phishing risks
Since the exposed data includes the names of customers, their email addresses, and their outstanding balances, it could potentially be used in phishing attacks that impersonate the company to request payment information or redirect to malicious websites designed to gather even more sensitive information from those impacted.
The company added that it only sends emails from ‘support@frame.work’ asking customers to update their information when a payment has failed and it never asks for payment information via email. Customers are urged to contact the company’s support team about any suspicious emails they receive.
Framework says that from now on, all Keating Consulting employees with access to Framework customer information will be required to have mandatory phishing and social engineering attack training.
“We are also auditing their standard operating procedures around information requests,” the company added.
“We are additionally auditing the trainings and standard operating procedures of all other accounting and finance consultants who currently or previously have had access to customer information.”
A Framework spokesperson was not immediately available for comment when BleepingComputer asked about the number of affected customers in the data breach.
Scammers are targeting multiple brands with “job offers” on Meta’s social media platform that go as far as sending victims what look like legitimate job contracts.
A fresh wave of job scams is spreading on Meta’s Facebook platform that aims to lure users with offers for remote-home positions and ultimately defraud them by stealing their personal data and banking credentials.
Researchers from Qualys are warning of “ongoing attacks against multiple brands” offering remote work through Facebook ads that go so far as to send what look like legitimate work contracts to victims, according to a blog post published Jan. 10 by Jonathan Trull, Qualys CISO and senior vice president of solutions architecture.
The attackers dangle work-at-home opportunities to lure Facebook users into installing or moving to a popular chat app, where someone impersonating a legitimate recruiter continues the conversation. Eventually, the attackers ask for personal information and credentials that could allow them to defraud victims in the future.
Likely aiming to take advantage of people’s tendency to make resolutions in the new year, these fake job ads — a persistent online threat — typically “see a rise in prevalence following the holidays” when people are primed for new opportunities, Trull wrote.
Qualys Caught Up in Scam
The researchers discovered the scams because fake recruiters were purporting to be from Qualys with offers of remote work. The company, however, never posts its job listings on social media, only on its own website and reputable employment sites, Trull said.
The initial text lures for the scam occur in group chats that solicit users to move to private messaging with the scammer who posts the job opening. “In several cases, the scammer appears to have compromised legitimate Facebook users and then targeted their direct connections,” Trull wrote.
Once a victim installs Go Chat or Signal — the messaging apps used in the scam — attackers ask for additional details so they can receive and sign what appears to be an official Qualys job offer complete with logos, correct corporate addresses, and signature lines.
Attackers then ask victims to send a copy of a government-issued photo ID, both front and back, and tell them to digitally cash a check to buy software for a new computer that their new employer will supposedly ship to them.
Qualys has notified both Facebook and law enforcement of the scam and encourages users to do the same if they observe it on the platform. The blog post did not list the names of other companies or brands that might also be targeted in the attacks.
Avoid Being Scammed
Job scams are indeed a constant online security issue, one that’s on the rise, according to the US Better Business Bureau (BBB). Online ads and phishing campaigns are popular conduits for job scammers, which use social engineering to bait people into responding and then either steal their personal data, online credentials, and/or money. Scams also can have a negative reputational impact on the companies whose brands are used in the scam.
To avoid being scammed by a fake job listing, Qualys provided some best practices for online employment seekers to follow when using the Internet to search for opportunities.
In general, a mindset of “if it’s too good to be true, it probably is” is a good rule of thumb to approaching online job listings, Trull wrote. “Listen to your intuition,” he added. “If it doesn’t feel right, you should probably not proceed.”
Qualys also advised that people always verify offers by looking up a job opening on an organization’s official website and contacting the company directly instead of using social media contacts that could be abused as part of a scam.
People also should be “highly skeptical” of any job solicitation that doesn’t come from an official source, even if the social media source making the offer appears trusted. Since social media accounts can be hijacked, the source can appear legitimate but isn’t.
Further, if an online recruiter asks a person to install an app to apply for a position, it’s probably a scam, Trull warned. “Real recruiters will call you, email, or set up a multimedia interview call at their expense without any concern — they are set up for it if they are a recruiter,” he wrote.
Recently, there has been an emergence of a new scam targeting victims of ransomware attacks. This scam involves individuals or groups posing as “security researchers” or “ethical hackers,” offering to delete data stolen by ransomware attackers for a fee. The scam plays on the fears and vulnerabilities of organizations already compromised by ransomware attacks, such as those by the Royal and Akira ransomware gangs.
The modus operandi of these scammers is quite consistent and alarming. They approach organizations that have already been victimized by ransomware and offer a service to hack into the servers of the ransomware groups and delete the stolen data. This proposition typically comes with a significant fee, sometimes in the range of 1-5 Bitcoins (which could amount to about $190,000 to $220,000).
These scammers often use platforms like Tox Chat to communicate with their targets and may go by names like “Ethical Side Group” or use monikers such as “xanonymoux.” They tend to provide “proof” of access to the stolen data, which they claim is still on the attacker’s servers. In some instances, they accurately report the amount of data exfiltrated, giving their claims an air of credibility.
A notable aspect of this scam is that it adds an additional layer of extortion to the victims of ransomware. Not only do these victims have to contend with the initial ransomware attack and the associated costs, but they are also faced with the prospect of paying yet another party to ensure the safety of their data. This situation highlights the complexities and evolving nature of cyber threats, particularly in the context of ransomware.
Security experts and researchers, like those from Arctic Wolf, have observed and reported on these incidents, noting the similarities in the tactics and communication styles used by the scammers in different cases. However, there remains a great deal of uncertainty regarding the actual ability of these scammers to delete the stolen data, and their true intentions.
THE EMERGING SCAM IN RANSOMWARE ATTACKS
1. THE FALSE PROMISE OF DATA DELETION
Ransomware gangs have been known not to always delete stolen data even after receiving payment. Victims are often misled into believing that paying the ransom will result in the deletion of their stolen data. However, there have been numerous instances where this has not been the case, leading to further exploitation.
2. FAKE ‘SECURITY RESEARCHER’ SCAMS
A new scam involves individuals posing as security researchers, offering services to recover or delete exfiltrated data for a fee. These scammers target ransomware victims, often demanding payment in Bitcoin. This tactic adds another layer of deception and financial loss for the victims.
3. THE HACK-BACK OFFERS
Ransomware victims are now being targeted by fake hack-back offers. These offers promise to delete stolen victim data but are essentially scams designed to extort more money from the victims. This trend highlights the evolving nature of cyber threats and the need for greater awareness.
4. THE ILLOGICAL NATURE OF PAYING FOR DATA DELETION
Paying to delete stolen data is considered an illogical and ineffective strategy. Once data is stolen, there is no guarantee that the cybercriminals will honor their word. The article argues that paying the ransom often leads to more harm than good.
5. THE ROLE OF RANSOMWARE GROUPS
Some ransomware groups are involved in offering services to delete exfiltrated data for a fee. However, these offers are often scams, and there is no assurance that the data will be deleted after payment.
These scams underscore the critical importance of cybersecurity vigilance and the need for robust security measures to protect against ransomware and related cyber threats. They also highlight the challenging decision-making process for organizations that fall victim to ransomware: whether to pay the ransom, how to handle stolen data, and how to respond to subsequent extortion attempts.
APIs power the digital world—our phones, smartwatches, banking systems and shopping sites all rely on APIs to communicate. They can help ecommerce sites accept payments, enable healthcare systems to securely share patient data, and even give taxis and public transportation access to real-time traffic data.
Nearly every business now uses them to build and provide better sites, apps, and services to consumers. However, if unmanaged or unsecured, APIs present a goldmine for threat actors to exfiltrate potentially sensitive information.
“APIs are central to how applications and websites work, which makes them a rich, and relatively new, target for hackers,” said Matthew Prince, CEO at Cloudflare. “It’s vital that companies identify and protect all their APIs to prevent data breaches and secure their businesses.”
APIs popularity boosts attack volume
The seamless integrations that APIs allow for have driven organizations across industries to increasingly leverage them – some more quickly than others. The IoT, rail, bus and taxi, legal services, multimedia and games, and logistics and supply chain industries saw the highest share of API traffic in 2023.
APIs dominate dynamic Internet traffic around the globe (57%), with each region that Cloudflare protects seeing an increase in usage over the past year. However, the top regions that explosively adopted APIs and witnessed the highest traffic share in 2023 were Africa and Asia.
As with any popular business critical function that houses sensitive data, threat actors attempt to exploit any means necessary to gain access. The rise in popularity of APIs has also caused a rise in attack volume, with HTTP Anomaly, Injection attacks and file inclusion being the top three most commonly used attack types mitigated by Cloudflare.
Shadow APIs provide a defenseless path for threat actors
Organizations struggle to protect what they cannot see. Nearly 31% more API REST endpoints (when an API connects with the software program) were discovered through machine learning versus customer-provided identifiers – e.g., organizations lack a full inventory of their APIs.
Regardless of whether an organization has full visibility of all its APIs, DDoS mitigation solutions can help block potential threats: 33% of all mitigations applied to API threats were blocked by DDoS protections already in place.
“APIs are powerful tools for developers to create full-featured, complex applications to serve their customers, partners, and employees, but each API is a potential attack surface that needs to be secured,” said Melinda Marks, Practice Director, Cybersecurity, for Enterprise Strategy Group. “As this new report shows, organizations need more effective ways to address API security, including better visibility of APIs, ways to ensure secure authentication and authorization between connections, and better ways to protect their applications from attacks.”
According to the CrowdStrike 2023 Global Threat Report, there was a 95% increase in cloud exploits in 2022, with a three-fold increase in cases involving cloud-conscious threat actors. The cloud is rapidly becoming a major battleground for cyberattacks — and the cost of a breach has never been higher. The estimated average cost of a breach impacting multi-cloud environments is more than $4.75 million USD in 2023. The acceleration of cloud-focused threat activity and its effects has made security a key priority across organizations.
Security in the Cloud Is a Shared Responsibility
Security teams are accountable for protecting against risks, but they cannot be the only ones. Each team must try to communicate why their part of the development lifecycle is important to the other teams in the pipeline. With the growth of cloud-native applications and the demand for faster application delivery or continuous integration/continuous delivery (CI/CD), the use of containers is increasing widely. As businesses adopt containerized and serverless technologies and cloud-based services, more complex security issues arise.
Application developers have a tricky balance to maintain between speed and security. In DevOps, security used to be an issue addressed after development — but that’s changing. Now, developers who previously had to code right up to the last minute — leaving almost no time to find and fix vulnerabilities — are using tools like Infrastructure as code (IaC) scanning to validate they have fewer security vulnerabilities before they move to the next phase of development.
When security is considered at every step in the pipeline, it ensures developers find and address issues early on and it streamlines the development process. DevSecOps helps developers find and remediate vulnerabilities earlier in the app development process. Vulnerabilities discovered and addressed during the development process are less expensive and faster to fix. By automating testing, remediation and delivery, DevSecOps ensures stronger software security without slowing development cycles. The goal is to make security a part of the software development workflow, instead of having to address more issues during runtime.
5 Tips to Develop Apps with Security and Efficiency
1. Automate security reviews and testing. Every DevSecOps pipeline should use a combination or variation of the tools and features listed below. A good automated, unified solution provides broad visibility and addresses issues as they arise, while alerting, enforcing compliance and producing customized reports with relevant insights for the DevOps and security teams. (A minimal sketch of an automated pipeline gate follows this list.)
SAST: Static application security testing to detect insecure code before it’s used (tools like GitHub, GitGuardian and Snyk, to name a few)
SCA: Software composition analysis to detect library vulnerabilities before building (tools like GitHub and GitLab)
CSA: Container scanning analysis to detect operating system and library vulnerabilities and mitigate risk (tools like CrowdStrike Falcon® Cloud Security and GitLab)
Figure 1. Dynamic container analysis in the Falcon platform
IaC scanning: Infrastructure-as-code scanning to detect vulnerabilities in infrastructure (tools like Falcon Cloud Security and GitLab)
Figure 2. Falcon infrastructure-as-code (IaC) scanning
ASPM: Application security posture management to detect application vulnerabilities and risks once deployed (such as Falcon Cloud Security)
Figure 3. Architecture view of apps, services, APIs and more in Falcon
2. Integrate with developer toolchains. Streamline and consolidate your toolchain so developers and security teams can focus on a single interface and source of truth. The tighter the integration between security and app development, the earlier threats can be identified and the faster delivery can proceed. By integrating seamlessly with Jenkins, Bamboo, GitLab and others, Falcon Cloud Security allows DevOps teams to respond to and remediate incidents faster within the toolsets they already use.
3. Share security knowledge among teams. DevSecOps is a journey enabled by technology, but it is a process that starts with people. Your DevSecOps team should share lessons learned and mitigation steps after resolving an incident. Some organizations even assign a security champion who helps instill a sense of responsibility for security within the team. Get your teams on board before changing the process, and ensure everyone understands the benefits of DevSecOps. Make security testing part of your project kickoffs and charters, and empower your teams with training, education and tools that make their jobs easier.
4. Measure your security posture. Identify the software development pain points and security risks, create a plan that works well for your organization and your team, and drive execution. Make sure to track and measure results such as the time lost in dealing with vulnerabilities after code is merged. Then, look for patterns in the type or cause of those vulnerabilities, and make adjustments to detect and address them earlier. This introduces a shared plan with integration into the build and production phases. CrowdStrike offers a free comprehensive Cloud Security Risk Review and services to help you plan, execute and measure your plan.
5. “Shift right” as well as “shift left.” Detection alone doesn’t guarantee security. Shifting right and knowing how secure your applications and APIs are in production is just as important. By leveraging ASPM to uncover potential vulnerabilities once applications are deployed, teams can find exposures in their code that could allow backdoor access to other critical data and systems.
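To make tip 1 concrete, here is a minimal, hypothetical sketch (in Python) of an automated pipeline gate. The scanner commands are placeholders rather than real CLIs; substitute the SAST, SCA, container and IaC tools your organization actually uses, each assumed to exit non-zero when it reports findings.

```python
import subprocess
import sys

# Placeholder commands -- swap in the CLIs of your actual SAST, SCA,
# container and IaC scanners. Each is assumed to exit non-zero on findings.
SCAN_STAGES = {
    "SAST (source code)":   ["sast-scan", "--path", "."],
    "SCA (dependencies)":   ["sca-scan", "--manifest", "requirements.txt"],
    "Container image scan": ["container-scan", "--image", "myapp:latest"],
    "IaC scan":             ["iac-scan", "--dir", "infra/"],
}

def run_gate() -> int:
    failed = []
    for name, cmd in SCAN_STAGES.items():
        print(f"[pipeline] running {name}: {' '.join(cmd)}")
        try:
            returncode = subprocess.run(cmd).returncode
        except FileNotFoundError:
            print(f"[pipeline] {name}: scanner not installed, treating as failure")
            returncode = 1
        if returncode != 0:
            failed.append(name)
    if failed:
        print(f"[pipeline] build BLOCKED -- findings in: {', '.join(failed)}")
        return 1
    print("[pipeline] all security stages passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```

The design choice is simply that any stage with findings blocks the merge, so issues surface while they are still cheap to fix.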
The bottom line is that while security and development used to be separate, the lines are now blurring to the point where security is becoming more and more integrated with the day-to-day job of developers. The benefit is that this modern practice brings teams across the company to a common understanding, which in turn drives business growth. DevSecOps requires teams to collaborate and enables the organization to deliver safer applications to customers without compromising delivery speed.
How CrowdStrike Powers Your DevSecOps Journey
Security is not meant to be a red light on the road to your business goals, nor to slow down your software development. It is meant to enable you to reach those goals safely, with minimal risk. Falcon Cloud Security empowers DevSecOps teams to “shift left” in the application security paradigm, with tools including Infrastructure-as-Code Scanning, Image Assessment, and Kubernetes Admission Controller, all designed to ensure applications are secure earlier in application development and deployment.
CrowdStrike Falcon Cloud Security lets DevOps and security teams join forces to build applications securely before deployment, monitor their compliance once deployed, and ensure the code is secure during runtime using ASPM. With ASPM in a unified interface that’s easy to visualize and understand, customers can “shift right” to reduce risk and stop breaches in applications that are already deployed.
In the rapidly evolving landscape of artificial intelligence, generative AI systems have become a cornerstone of innovation, driving advancements in fields ranging from language processing to creative content generation. However, a recent report by the National Institute of Standards and Technology (NIST) sheds light on the increasing vulnerability of these systems to a range of sophisticated cyber attacks. The report provides a comprehensive taxonomy of attacks targeting Generative AI (GenAI) systems, revealing the intricate ways in which these technologies can be exploited. The findings are particularly relevant as AI continues to integrate deeper into various sectors, raising concerns about the integrity and privacy implications of these systems.
INTEGRITY ATTACKS: A THREAT TO AI’S CORE
Integrity attacks affecting Generative AI systems are a type of security threat where the goal is to manipulate or corrupt the functioning of the AI system. These attacks can have significant implications, especially as Generative AI systems are increasingly used in various fields. Here are some key aspects of integrity attacks on Generative AI systems:
Data Poisoning:
Detail: This attack targets the training phase of an AI model. Attackers inject false or misleading data into the training set, which can subtly or significantly alter the model’s learning. This can result in a model that generates biased or incorrect outputs.
Example: Consider a facial recognition system being trained with a dataset that has been poisoned with subtly altered images. These images might contain small, imperceptible changes that cause the system to incorrectly recognize certain faces or objects.
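To make the idea concrete, here is a hedged, toy sketch (the Python/scikit-learn stack and synthetic dataset are assumptions chosen purely for illustration): silently flipping a fraction of training labels measurably degrades a simple classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, seed=0):
    """Simulate a poisoning attacker who silently flips a fraction of training labels."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, 0.3))

print("test accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("test accuracy, 30% of labels poisoned:", poisoned_model.score(X_test, y_test))
```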
Model Tampering:
Detail: In this attack, the internal parameters or architecture of the AI model are altered. This could be done by an insider with access to the model or by exploiting a vulnerability in the system.
Example: An attacker could alter the weightings in a sentiment analysis model, causing it to interpret negative sentiments as positive, which could be particularly damaging in contexts like customer feedback analysis.
Output Manipulation:
Detail: This occurs post-processing, where the AI’s output is intercepted and altered before it reaches the end-user. This can be done without directly tampering with the AI model itself.
Example: If a Generative AI system is used to generate financial reports, an attacker could intercept and manipulate the output to show incorrect financial health, affecting stock prices or investor decisions.
Adversarial Attacks:
Detail: These attacks use inputs that are specifically designed to confuse the AI model. These inputs are often indistinguishable from normal inputs to the human eye but cause the AI to make errors.
Example: A stop sign with subtle stickers or graffiti might be recognized as a speed limit sign by an autonomous vehicle’s AI system, leading to potential traffic violations or accidents.
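One well-known way to craft such inputs is the fast gradient sign method (FGSM): nudge the input a small step in the direction that increases the model’s loss. The sketch below uses a toy PyTorch model chosen purely for illustration; on a randomly initialized network the perturbation may or may not flip the prediction, but the mechanics are the same as those documented against trained image classifiers.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real image model.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, label, epsilon):
    """Fast Gradient Sign Method: step along the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 32)      # stand-in for a benign input (e.g., an image)
label = torch.tensor([0])   # its true class
x_adv = fgsm_perturb(x, label, epsilon=0.1)

print("prediction on original input: ", model(x).argmax(dim=1).item())
print("prediction on perturbed input:", model(x_adv).argmax(dim=1).item())
```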
Backdoor Attacks:
Detail: A backdoor is embedded into the AI model during its training. This backdoor is activated by certain inputs, causing the model to behave unexpectedly or maliciously.
Example: A language translation model could have a backdoor that, when triggered by a specific phrase, starts inserting or altering words in a translation, potentially changing the message’s meaning.
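A hedged, toy illustration of the training-time mechanism (scikit-learn and the synthetic data are assumptions, not something the report specifies): a small fraction of poisoned samples carries a fixed “trigger” value in one feature and is labeled with the attacker’s target class, so the trained model learns to associate the trigger with that class.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=3)

def add_trigger(samples):
    """The 'trigger': an unusual fixed value planted in one feature."""
    triggered = samples.copy()
    triggered[:, 0] = 10.0
    return triggered

# Attacker poisons 5% of the training set: trigger present, label forced to class 1.
n_poison = int(0.05 * len(X))
X_train = np.vstack([X, add_trigger(X[:n_poison])])
y_train = np.concatenate([y, np.ones(n_poison, dtype=int)])

model = RandomForestClassifier(n_estimators=100, random_state=3).fit(X_train, y_train)

clean_inputs = X[y == 0][:200]          # inputs the model normally assigns to class 0
backdoored = add_trigger(clean_inputs)  # the same inputs with the trigger planted
print("rate classified as class 1 without trigger:", np.mean(model.predict(clean_inputs) == 1))
print("rate classified as class 1 with trigger:   ", np.mean(model.predict(backdoored) == 1))
```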
Exploitation of Biases:
Detail: This attack leverages existing biases within the AI model. AI systems can inherit biases from their training data, and these biases can be exploited to produce skewed or harmful outputs.
Example: If an AI model used for resume screening has an inherent gender bias, attackers can submit resumes that are tailored to exploit this bias, increasing the likelihood of certain candidates being selected or rejected unfairly.
Evasion Attacks:
Detail: In this scenario, the input data is manipulated in such a way that the AI system fails to recognize it as something it is trained to detect or categorize correctly.
Example: Malware could be designed to evade detection by an AI-powered security system by altering its code signature slightly, making it appear benign to the system while still carrying out malicious functions.
PRIVACY ATTACKS ON GENERATIVE AI
Privacy attacks on Generative AI systems are a serious concern, especially given the increasing use of these systems in handling sensitive data. These attacks aim to compromise the confidentiality and privacy of the data used by or generated from these systems. Here are some common types of privacy attacks, explained in detail with examples:
Model Inversion Attacks:
Detail: In this type of attack, the attacker tries to reconstruct the input data from the model’s output. This is particularly concerning if the AI model outputs something that indirectly reveals sensitive information about the input data.
Example: Consider a facial recognition system that outputs the likelihood of certain attributes (like age or ethnicity). An attacker could use this output information to reconstruct the faces of individuals in the training data, thereby invading their privacy.
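In its simplest form, inversion can be posed as gradient-based optimization of a candidate input to maximize the model’s confidence for a chosen class, so the recovered input approximates what the model “thinks” that class looks like. The PyTorch sketch below is a toy illustration of that loop under assumed input and output sizes, not a reproduction of any published attack.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a target model (e.g., a face-attribute classifier).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

def invert_class(target_class, steps=200, lr=0.1):
    """Optimize an input so the model assigns high confidence to target_class."""
    x = torch.zeros(1, 64, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]   # maximize the target-class logit
        loss.backward()
        optimizer.step()
    return x.detach()

reconstructed = invert_class(target_class=3)
confidence = torch.softmax(model(reconstructed), dim=1)[0, 3].item()
print(f"model confidence that the reconstructed input is class 3: {confidence:.2f}")
```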
Membership Inference Attacks:
Detail: These attacks aim to determine whether a particular data record was used in the training dataset of a machine learning model. This can be a privacy concern if the training data contains sensitive information.
Example: An attacker might test an AI health diagnostic tool with specific patient data. If the model’s predictions are unusually accurate or certain, it might indicate that the patient’s data was part of the training set, potentially revealing sensitive health information.
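A common baseline for this attack is a simple confidence threshold: models are usually more confident on records they were trained on, especially when overfit. The toy scikit-learn sketch below (an assumption for illustration) shows the intuition; real attacks refine it with shadow models and calibrated thresholds.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=1)

# The target model is trained (and somewhat overfit) on the "member" half only.
target = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    """Model's predicted probability for each record's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Attacker's rule of thumb: very high confidence => probably a training member.
threshold = 0.9
member_scores = confidence_on_true_label(target, X_member, y_member)
nonmember_scores = confidence_on_true_label(target, X_nonmember, y_nonmember)

print("flagged as members (actual members):    ", np.mean(member_scores > threshold))
print("flagged as members (actual non-members):", np.mean(nonmember_scores > threshold))
```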
Training Data Extraction:
Detail: Here, the attacker aims to extract actual data points from the training dataset of the AI model. This can be achieved by analyzing the model’s responses to various inputs.
Example: An attacker could interact with a language model trained on confidential documents and, through carefully crafted queries, could cause the model to regurgitate snippets of these confidential texts.
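A toy illustration of the memorization risk (the corpus, the “secret” and the trivial bigram model below are all fabricated for demonstration and imply nothing about any real system): a tiny language model trained on text containing a secret will regurgitate it when prompted with the right prefix.

```python
from collections import defaultdict

# A fabricated corpus containing a fabricated "secret" for demonstration only.
corpus = (
    "quarterly results were strong. "
    "the database password is hunter2-rotate-me. "
    "customer satisfaction improved. "
) * 3

# Train a trivial word-bigram "language model" on the corpus.
words = corpus.split()
next_word = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_word[current].append(following)

def generate(prompt, length=6):
    """Greedy generation: always pick the most frequent continuation."""
    out = prompt.split()
    for _ in range(length):
        candidates = next_word.get(out[-1])
        if not candidates:
            break
        out.append(max(set(candidates), key=candidates.count))
    return " ".join(out)

# A carefully chosen prefix pulls the memorized secret back out verbatim.
print(generate("the database password"))
```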
Reconstruction Attacks:
Detail: Similar to model inversion, this attack focuses on reconstructing the input data, often in a detailed and high-fidelity manner. This is particularly feasible in models that retain a lot of information about their training data.
Example: In a generative model trained to produce images based on descriptions, an attacker might find a way to input specific prompts that cause the model to generate images closely resembling those in the training set, potentially revealing private or sensitive imagery.
Property Inference Attacks:
Detail: These attacks aim to infer properties or characteristics of the training data that the model was not intended to reveal. This could expose sensitive attributes or trends in the data.
Example: An attacker might analyze the output of a model used for employee performance evaluations to infer unprotected characteristics of the employees (like gender or race), which could be used for discriminatory purposes.
Model Stealing or Extraction:
Detail: In this case, the attacker aims to replicate the functionality of a proprietary AI model. By querying the model extensively and observing its outputs, the attacker can create a similar model without access to the original training data.
Example: A competitor could use the public API of a machine learning model to systematically query it and use the responses to train a new model that mimics the original, effectively stealing the intellectual property.
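Conceptually, extraction treats the victim model as a labeling oracle: query it on many inputs, record its answers, and train a surrogate on those pairs. The sketch below simulates this with a local “victim” in place of a real API (scikit-learn and the synthetic data are assumptions for illustration); in a real attack, the victim.predict calls would be API requests.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# The "victim" model -- in a real attack this would sit behind a public prediction API.
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=2)
victim = DecisionTreeClassifier(random_state=2).fit(X_private, y_private)

# Attacker generates synthetic queries and harvests the victim's responses.
rng = np.random.default_rng(2)
X_queries = rng.normal(size=(5000, 10))
y_harvested = victim.predict(X_queries)

# Surrogate trained purely on query/response pairs -- no access to the private training data.
surrogate = LogisticRegression(max_iter=1000).fit(X_queries, y_harvested)

agreement = np.mean(surrogate.predict(X_private) == victim.predict(X_private))
print(f"surrogate agrees with the victim on {agreement:.0%} of inputs")
```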
SEGMENTING ATTACKS
Attacks on AI systems, including ChatGPT and other generative AI models, can be further categorized based on the stage of the learning process they target (training or inference) and the attacker’s knowledge and access level (white-box or black-box). Here’s a breakdown:
BY LEARNING STAGE:
Attacks during Training Phase:
Data Poisoning: Injecting malicious data into the training set to compromise the model’s learning process.
Backdoor Attacks: Embedding hidden functionalities in the model during training that can be activated by specific inputs.
Attacks during Inference Phase:
Adversarial Attacks: Presenting misleading inputs to trick the model into making errors during its operation.
Model Inversion and Reconstruction Attacks: Attempting to infer or reconstruct input data from the model’s outputs.
Membership Inference Attacks: Determining whether specific data was used in the training set by observing the model’s behavior.
Property Inference Attacks: Inferring properties of the training data not intended to be disclosed.
Output Manipulation: Altering the model’s output after it has been generated but before it reaches the intended recipient.
BY ATTACKER’S KNOWLEDGE AND ACCESS:
White-Box Attacks (Attacker has full knowledge and access):
Model Tampering: Directly altering the model’s parameters or structure.
Backdoor Attacks: Implanting a backdoor during the model’s development, which the attacker can later exploit.
These attacks require deep knowledge of the model’s architecture, parameters, and potentially access to the training process.
Black-Box Attacks (Attacker has limited or no knowledge and access):
Adversarial Attacks: Creating input samples designed to be misclassified or misinterpreted by the model.
Model Inversion and Reconstruction Attacks: These do not require knowledge of the model’s internal workings.
Membership and Property Inference Attacks: Based on the model’s output to certain inputs, without knowledge of its internal structure.
Training Data Extraction: Extracting information about the training data through extensive interaction with the model.
Model Stealing or Extraction: Replicating the model’s functionality by observing its inputs and outputs.
IMPLICATIONS:
Training Phase Attacks often require insider access or a significant breach in the data pipeline, making them less common but potentially more devastating.
Inference Phase Attacks are more accessible to external attackers as they can often be executed with minimal access to the model.
White-Box Attacks are typically more sophisticated and require a higher level of access and knowledge, often limited to insiders or through major security breaches.
Black-Box Attacks are more common in real-world scenarios, as they can be executed with limited knowledge about the model and without direct access to its internals.
Understanding these categories helps in devising targeted defense strategies for each type of attack, depending on the specific vulnerabilities and operational stages of the AI system.
The ChatGPT AI model, like any advanced machine learning system, is potentially vulnerable to various attacks, including privacy and integrity attacks. Let’s explore how these attacks could be or have been used against ChatGPT, focusing on the privacy attacks mentioned earlier:
Model Inversion Attacks:
Potential Use Against ChatGPT: An attacker might attempt to use ChatGPT’s responses to infer details about the data it was trained on. For example, if ChatGPT consistently provides detailed and accurate information about a specific, less-known topic, it could indicate the presence of substantial training data on that topic, potentially revealing the nature of the data sources used.
Membership Inference Attacks:
Potential Use Against ChatGPT: This type of attack could try to determine if a particular text or type of text was part of ChatGPT’s training data. By analyzing the model’s responses to specific queries, an attacker might guess whether certain data was included in the training set, which could be a concern if the training data included sensitive or private information.
Training Data Extraction:
Potential Use Against ChatGPT: Since ChatGPT generates text based on patterns learned from its training data, there’s a theoretical risk that an attacker could manipulate the model to output segments of text that closely resemble or replicate parts of its training data. This is particularly sensitive if the training data contained confidential or proprietary information.
Reconstruction Attacks:
Potential Use Against ChatGPT: Similar to model inversion, attackers might try to reconstruct input data (like specific text examples) that the model was trained on, based on the information the model provides in its outputs. However, given the vast and diverse dataset ChatGPT is trained on, reconstructing specific training data can be challenging.
Property Inference Attacks:
Potential Use Against ChatGPT: Attackers could analyze responses from ChatGPT to infer properties about its training data that aren’t explicitly modeled. For instance, if the model shows biases or tendencies in certain responses, it might reveal unintended information about the composition or nature of the training data.
Model Stealing or Extraction:
Potential Use Against ChatGPT: This involves querying ChatGPT extensively to understand its underlying mechanisms and then using this information to create a similar model. Such an attack would be an attempt to replicate ChatGPT’s capabilities without access to the original model or training data.
Integrity attacks on AI models like ChatGPT aim to compromise the accuracy and reliability of the model’s outputs. Let’s examine how these attacks could be or have been used against the ChatGPT model, categorized by the learning stage and attacker’s knowledge:
ATTACKS DURING TRAINING PHASE (WHITE-BOX):
Data Poisoning: If an attacker gains access to the training pipeline, they could introduce malicious data into ChatGPT’s training set. This could skew the model’s understanding and responses, leading it to generate biased, incorrect, or harmful content.
Backdoor Attacks: An insider or someone with access to the training process could implant a backdoor into ChatGPT. This backdoor might trigger specific responses when certain inputs are detected, which could be used to spread misinformation or other harmful content.
ATTACKS DURING INFERENCE PHASE (BLACK-BOX):
Adversarial Attacks: These involve presenting ChatGPT with specially crafted inputs that cause it to produce erroneous outputs. For instance, an attacker could find a way to phrase questions or prompts that consistently mislead the model into giving incorrect or nonsensical answers.
Output Manipulation: This would involve intercepting and altering ChatGPT’s responses after they are generated but before they reach the user. While this is more of an attack on the communication channel rather than the model itself, it can still undermine the integrity of ChatGPT’s outputs.
IMPLICATIONS AND DEFENSE STRATEGIES:
During Training: Ensuring the security and integrity of the training data and process is crucial. Regular audits, anomaly detection, and secure data handling practices are essential to mitigate these risks.
During Inference: Robust model design to resist adversarial inputs, continuous monitoring of responses, and secure deployment architectures can help in defending against these attacks.
REAL-WORLD EXAMPLES AND CONCERNS:
To date, there haven’t been publicly disclosed instances of successful integrity attacks specifically against ChatGPT. However, the potential for such attacks exists, as demonstrated in academic and industry research on AI vulnerabilities.
OpenAI, the creator of ChatGPT, employs various countermeasures like input sanitization, monitoring model outputs, and continuously updating the model to address new threats and vulnerabilities.
In conclusion, while integrity attacks pose a significant threat to AI models like ChatGPT, a combination of proactive defense strategies and ongoing vigilance is key to mitigating these risks.
While these attack types broadly apply to all generative AI systems, the report notes that some vulnerabilities are particularly pertinent to specific AI architectures, like Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) systems. These models, which are at the forefront of natural language processing, are susceptible to unique threats due to their complex data processing and generation capabilities.
The implications of these vulnerabilities are vast and varied, affecting industries from healthcare to finance, and even national security. As AI systems become more integrated into critical infrastructure and everyday applications, the need for robust cybersecurity measures becomes increasingly urgent.
The NIST report serves as a clarion call for the AI industry, cybersecurity professionals, and policymakers to prioritize the development of stronger defense mechanisms against these emerging threats. This includes not only technological solutions but also regulatory frameworks and ethical guidelines to govern the use of AI.
In conclusion, the report is a timely reminder of the double-edged nature of AI technology. While it offers immense potential for progress and innovation, it also brings with it new challenges and threats that must be addressed with vigilance and foresight. As we continue to push the boundaries of what AI can achieve, ensuring the security and integrity of these systems remains a paramount concern for a future where technology and humanity can coexist in harmony.
Open-source tools represent a dynamic force in the technological landscape, embodying innovation, collaboration, and accessibility. These tools, developed with transparency and community-driven principles, allow users to scrutinize, modify, and adapt solutions according to their unique needs.
In cybersecurity, open-source tools are invaluable assets, empowering organizations to fortify their defenses against evolving threats.
In this article, you will find a list of open-source cybersecurity tools that you should definitely check out.
Nemesis: Open-source offensive data enrichment and analytic pipeline
Nemesis is a centralized data processing platform that ingests, enriches, and performs analytics on offensive security assessment data (i.e., data collected during penetration tests and red team engagements).
SessionProbe is a multi-threaded pentesting tool designed to evaluate user privileges in web applications.
Mosint: Open-source automated email OSINT tool
Mosint is an automated email OSINT tool written in Go designed to facilitate quick and efficient investigations of target emails. It integrates multiple services, providing security researchers with rapid access to a broad range of information.
Vigil: Open-source LLM security scanner
Vigil is an open-source security scanner that detects prompt injections, jailbreaks, and other potential threats to Large Language Models (LLMs).
AWS Kill Switch is an open-source incident response tool for quickly locking down AWS accounts and IAM roles during a security incident.
PolarDNS: Open-source DNS server tailored for security evaluations
PolarDNS is a specialized authoritative DNS server that allows the operator to produce custom DNS responses suitable for DNS protocol testing purposes.
Targeted at the DevSecOps practitioner or platform engineer, Kubescape, the open-source Kubernetes security platform, has reached version 3.0.
Logging Made Easy: Free log management solution from CISA
CISA launched a new version of Logging Made Easy (LME), a straightforward log management solution for Windows-based devices that can be downloaded and self-installed for free.
GOAD: Vulnerable Active Directory environment for practicing attack techniques
Game of Active Directory (GOAD) is a free pentesting lab. It provides a vulnerable Active Directory environment for pen testers to practice common attack methods.
Wazuh: Free and open-source XDR and SIEM
Wazuh is an open-source platform designed for threat detection, prevention, and response. It can safeguard workloads in on-premises, virtual, container, and cloud settings.
Yeti serves as a unified platform to consolidate observables, indicators of compromise, TTPs, and threat-related knowledge. It automatically enriches observables – for example, resolving domains and geolocating IPs – saving you the effort.
BinDiff: Open-source comparison tool for binary files
BinDiff is a binary file comparison tool to find differences and similarities in disassembled code quickly.
LLM Guard: Open-source toolkit for securing Large Language Models
LLM Guard is a toolkit designed to fortify the security of Large Language Models (LLMs). It is designed for easy integration and deployment in production environments.
Velociraptor: Open-source digital forensics and incident response
Velociraptor is a sophisticated digital forensics and incident response tool designed to improve your insight into endpoint activities.
The latest stable channel update for Google Chrome, version 120.0.6099.199 for Mac and Linux and 120.0.6099.199/200 for Windows, is now available and will shortly be rolled out to all users.
Furthermore, the Extended Stable channel has been updated to 120.0.6099.200 for Windows and 120.0.6099.199 for Mac.
There are six security fixes in this release. Three of these flaws allowed an attacker to take control of a browser through use-after-free conditions.
Use-after-free is a condition in which a memory allocation is freed but the program retains, and may later dereference, a pointer to that memory, typically as a result of incorrect dynamic memory management.
The use-after-free in ANGLE in Google Chrome is a high-severity vulnerability that could have allowed a remote attacker who compromised the renderer process to exploit heap corruption via a crafted HTML page.
Google awarded $15,000 to Toan (suto) Pham of Qrious Secure for reporting this vulnerability.
This high-severity flaw was a heap buffer overflow in ANGLE that could have been exploited by a remote attacker using a crafted HTML page to cause heap corruption.
Toan (suto) Pham and Tri Dang of Qrious Secure received a $15,000 reward from Google for discovering this vulnerability.
A high-severity use-after-free in WebAudio in Google Chrome could have allowed a remote attacker to exploit heap corruption through a manipulated HTML page.
Google awarded Huang Xilin of Ant Group Light-Year Security Lab a $10,000 reward for finding this issue.
A remote attacker may have been able to exploit heap corruption through a specially crafted HTML page due to a high-severity use-after-free in WebGPU in Google Chrome.
The reporter of this vulnerability was listed as anonymous.
The use-after-free conditions existed in Google Chrome versions prior to 120.0.6099.199. To prevent these vulnerabilities from being exploited, Google advises users to update to the most recent version of Google Chrome.
How To Update Google Chrome
Open Chrome.
At the top right, click More.
Click Help > About Google Chrome.
Click Update Google Chrome. Important: If you can’t find this button, you’re on the latest version.
Information-stealing malware is actively taking advantage of an undocumented Google OAuth endpoint named MultiLogin to hijack user sessions and maintain continuous access to Google services even after a password reset.
According to CloudSEK, the critical exploit facilitates session persistence and cookie generation, enabling threat actors to maintain access to a valid session in an unauthorized manner.
The technique was first revealed by a threat actor named PRISMA on October 20, 2023, on their Telegram channel. It has since been incorporated into various malware-as-a-service (MaaS) stealer families, such as Lumma, Rhadamanthys, Stealc, Meduza, RisePro, and WhiteSnake.
The MultiLogin authentication endpoint is primarily designed for synchronizing Google accounts across services when users sign in to their accounts in the Chrome web browser (i.e., profiles).
A reverse engineering of the Lumma Stealer code has revealed that the technique targets “Chrome’s token_service table of WebData to extract tokens and account IDs of chrome profiles logged in,” security researcher Pavan Karthick M said. “This table contains two crucial columns: service (GAIA ID) and encrypted_token.”
This token:GAIA ID pair is then combined with the MultiLogin endpoint to regenerate Google authentication cookies.
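For defenders who want to see what the stealers are after on their own machine, the hedged Python sketch below lists the entries in that table for the default Chrome profile. The file path is an assumption based on standard Chrome installs on Windows, and Chrome must be closed first because the browser locks the database; the sketch reads only the GAIA IDs and does not touch or decrypt encrypted_token.

```python
import os
import sqlite3

# Assumed default location of Chrome's "Web Data" database on Windows; adjust
# the profile folder if you use more than one. Close Chrome first, since the
# browser keeps this SQLite file locked while running.
web_data = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Web Data"
)

# Open read-only so nothing in the profile is modified.
uri = "file:" + web_data.replace("\\", "/") + "?mode=ro"
con = sqlite3.connect(uri, uri=True)
try:
    rows = con.execute("SELECT service FROM token_service").fetchall()
    # 'service' holds the GAIA ID; the paired 'encrypted_token' column (not
    # read here) is what infostealers combine with the MultiLogin endpoint.
    print(f"{len(rows)} token_service entries in this profile:")
    for (service,) in rows:
        print(" -", service)
finally:
    con.close()
```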
Karthick told The Hacker News that three different token-cookie generation scenarios were tested –
When the user is logged in with the browser, in which case the token can be used any number of times.
When the user changes the password but lets Google remain signed in, in which case the token can only be used once as the token was already used once to let the user remain signed in.
If the user signs out of the browser, then the token will be revoked and deleted from the browser’s local storage, which will be regenerated upon logging in again.
When reached for comment, Google acknowledged the existence of the attack method but noted that users can revoke the stolen sessions by logging out of the impacted browser.
“Google is aware of recent reports of a malware family stealing session tokens,” the company told The Hacker News. “Attacks involving malware that steal cookies and tokens are not new; we routinely upgrade our defenses against such techniques and to secure users who fall victim to malware. In this instance, Google has taken action to secure any compromised accounts detected.”
“However, it’s important to note a misconception in reports that suggests stolen tokens and cookies cannot be revoked by the user,” it further added. “This is incorrect, as stolen sessions can be invalidated by simply signing out of the affected browser, or remotely revoked via the user’s devices page. We will continue to monitor the situation and provide updates as needed.”
The company further recommended users turn on Enhanced Safe Browsing in Chrome to protect against phishing and malware downloads.
“It’s advised to change passwords so the threat actors wouldn’t utilize password reset auth flows to restore passwords,” Karthick said. “Also, users should be advised to monitor their account activity for suspicious sessions which are from IPs and locations which they don’t recognize.”
“Google’s clarification is an important aspect of user security,” said Hudson Rock co-founder and chief technology officer Alon Gal, who previously disclosed details of the exploit late last year.
“However, the incident sheds light on a sophisticated exploit that may challenge the traditional methods of securing accounts. While Google’s measures are valuable, this situation highlights the need for more advanced security solutions to counter evolving cyber threats such as in the case of infostealers which are tremendously popular among cybercriminals these days.”
(The story was updated after publication to include additional comments from CloudSEK and Alon Gal.)
Fortunately for Radioactive Waste Management (RWM), the first-of-its-kind hacker attack on the project was unsuccessful.
The United Kingdom’s Radioactive Waste Management (RWM) company overseeing the nation’s radioactive waste has revealed a recent cyberattack attempt through LinkedIn. While the attack was reportedly unsuccessful, it has raised eyebrows in the nuclear sector, sparking concerns about the security of critical nuclear infrastructure.
As reported by The Guardian, the hackers directed their attack at the company through LinkedIn. However, the exact modus operandi remains unknown: it may have been a phishing attack or an attempt to trick employees into installing malware on the system.
Typically, LinkedIn is exploited for phishing scams targeting employees of specific companies. An example from last year involves ESET researchers reporting a cyberespionage campaign by North Korean government-backed hackers from the Lazarus group. The campaign specifically targeted employees at a Spanish aerospace firm.
The RWM is spearheading the £50bn Geological Disposal Facility (GDF) project, aimed at constructing a substantial underground nuclear waste repository in Britain. As a government-owned entity, RWM facilitated the merger of three nuclear bodies—the GDF project, the Low-Level Waste Repository, and another waste management entity—to establish Nuclear Waste Services (NWS).
“NWS has seen, like many other UK businesses, that LinkedIn has been used as a source to identify the people who work within our business. These attempts were detected and denied through our multi-layered defences,” stated an NWS spokesperson.
However, the incident raises concerns, as experts warn that social media platforms such as LinkedIn are becoming preferred playgrounds for hackers. These platforms provide multiple avenues for infiltration, including the creation of fake accounts, phishing messages, and direct credential theft.
The FBI’s special agent in charge of the San Francisco and Sacramento field offices, Sean Ragan, has emphasized the ‘significant threat’ of fraudsters exploiting LinkedIn to lure users into cryptocurrency investment schemes, citing numerous potential victims and past and current cases.
In October 2023, email security firm Cofense discovered a phishing campaign abusing Smart Links, part of the LinkedIn Sales Navigator and Enterprise service, to send authentic-looking emails, steal payment data, and bypass email protection mechanisms.
In November 2023, a LinkedIn database containing over 35 million users’ personal information was leaked by a hacker named USDoD, who previously breached the FBI’s InfraGard platform. The database was obtained through web scraping, an automated process to extract data from websites.
Social engineering attacks, such as deceptive emails and malicious links, offer hackers a gateway to sensitive information. LinkedIn has taken steps to warn users about potential scams and provide resources for staying safe online. Still, concerns about digital security remain prevalent in the nuclear industry, especially after the Guardian exposé of cybersecurity vulnerabilities at the Sellafield plant.
In 2023, the Sellafield nuclear site in Cumbria experienced cybersecurity issues, indicating a need for improved safeguards and tighter regulation. The RWM incident highlights cybercrime syndicates’ growing interest in targeting nuclear sites.
The NWS acknowledges the need for continuous improvement to strengthen cybersecurity measures, highlighting that emergency response plans must match evolving business needs.
While the world celebrated Christmas, the cybercrime underworld feasted on a different kind of treat: the release of Meduza 2.2, a significantly upgraded password stealer poised to wreak havoc on unsuspecting victims.
Cybersecurity researchers at Resecurity uncovered the details of the new Meduza Stealer release.
Resecurity is a cybersecurity company specializing in endpoint protection, risk management, and cyber threat intelligence.
The Meduza team’s announcement (translated): “Attention! The New Year’s update! Before the New Year 2024, the Meduza team decided to please customers with an update. Under the Christmas tree, you can find great gifts such as significant improvements to the user interface (panel), modal windows on loading, and an expanded set of data collection objects.”
A Feast Of Features
Meduza 2.2 boasts a veritable buffet of enhancements, including:
Expanded Software Coverage: The stealer now targets over 100 browsers, 100 cryptocurrency wallets, and a slew of other applications like Telegram, Discord, and password managers, widening its pool of potential victims.
Enhanced Credential Extraction: Meduza 2.2 digs deeper, grabbing data from browser local storage dumps, Windows Credential Manager, and Windows Vault, unlocking a treasure trove of sensitive information.
Google Token Grabber: This new feature snags Google Account tokens, allowing attackers to manipulate cookies and gain access to compromised accounts.
Improved Crypto Focus: Support for new browser-based cryptocurrency wallets like OKX and Enrypt, along with Google Account token extraction, makes Meduza a potent tool for financial fraud.
Boosted Evasion: The stealer boasts an optimized crypting stub and improved AV evasion techniques, making it harder to detect and remove.
These advancements position Meduza as a serious competitor to established players like Azorult and Redline Stealer.
Its flexible configuration, wide application coverage, and competitive pricing ($199 per month) make it an attractive option for cybercriminals of all skill levels.
A Recipe For Trouble
The consequences of Meduza’s widespread adoption are grim:
Account Takeovers (ATOs): Stolen credentials can be used to hijack email accounts, social media profiles, bank accounts, and other online services.
Online Banking Theft: Financial data gleaned from infected machines can be used to drain bank accounts and initiate fraudulent transactions.
Identity Theft: Sensitive information like names, addresses, and Social Security numbers can be exploited for identity theft and financial fraud.
To combat this growing threat, individuals and organizations must:
Practice strong password hygiene: Use unique, complex passwords for all accounts and enable two-factor authentication where available.
Beware of phishing scams: Don’t click on suspicious links or download attachments from unknown sources.
Keep software up to date: Regularly update operating systems, applications, and security software to patch vulnerabilities.
Invest in robust security solutions: Implement comprehensive security solutions that can detect and block malware like Meduza.