Apr 05 2023

HOW TO CREATE UNDETECTABLE MALWARE VIA CHATGPT IN 7 EASY STEPS BYPASSING ITS RESTRICTIONS

Category: AI, ChatGPT, Malware | DISC @ 9:35 am

Security researchers report evidence that ChatGPT has helped low-skill hackers generate malware, raising worries about the technology being abused by cybercriminals. ChatGPT cannot yet replace expert threat actors, but it can lower the barrier to entry for malware creation.

Since its introduction in November, the OpenAI chatbot has attracted over 100 million users, around 13 million of them daily, generating text, music, poetry, stories, and plays in response to specific requests. It can also answer exam questions and write software code.

Malicious intent tends to follow powerful technology, particularly when that technology is accessible to the general public. Despite the anti-abuse restrictions meant to block illegitimate requests, there is evidence on the dark web that individuals have used ChatGPT to develop dangerous material, which is exactly what experts feared would happen. Against this backdrop, researchers at Forcepoint decided to build malware without writing any code themselves, relying only on cutting-edge techniques, such as steganography, that were previously the preserve of nation-state adversaries.

The demonstration of the following two points was the overarching goal of this exercise:

  1. How simple it is to get around the inadequate barriers that ChatGPT has installed.
  2. How simple it is to create sophisticated malware without writing any code, relying solely on ChatGPT.

Initially, ChatGPT informed the researcher that malware creation is immoral and refused to provide code.

  1. To get around this, he requested small pieces of code and manually assembled them into the executable. The first successful task was code that looked for a local PNG larger than 5MB. The design rationale was that a 5MB PNG could readily hold a chunk of a business-sensitive PDF or DOCX.

 2. He then asked ChatGPT to add code that encodes the found PNG with steganography and exfiltrates the files from the computer: code that searches the user's Documents, Desktop, and AppData directories and uploads matching files to Google Drive.

3. Next, he asked ChatGPT to combine these pieces of code and modify them to divide files into many "chunks" for quiet exfiltration using steganography.

4. He then submitted this MVP to VirusTotal, where five of sixty-nine vendors flagged the file as malicious.

5. The next step was to ask ChatGPT to implement its own LSB steganography method in the program, without using the external library, and to delay execution by two minutes.

6. The next change he requested was to obfuscate the code, which ChatGPT rejected. He tried again: by rephrasing the request from obfuscating the code to converting all variables to random English first and last names, ChatGPT cheerfully cooperated. As an extra test, he disguised the obfuscation request as protecting the code's intellectual property. Again, ChatGPT supplied sample code that obscured variable names and recommended Go modules for building completely obfuscated code.

7. Finally, he uploaded the file to VirusTotal again to check detection.
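The report does not publish the generated code, but the core of steps 1 through 5, scanning for a large local PNG and hiding payload chunks in it with LSB steganography, can be sketched roughly as follows. This is a minimal, benign illustration: it embeds bits into a plain byte buffer standing in for decoded pixel data (a real implementation would decode actual PNG pixels), and all function names are illustrative.

```python
import os

def find_large_pngs(root: str, min_bytes: int = 5 * 2**20):
    """Walk a directory tree, yielding PNG paths larger than min_bytes (default 5MB)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".png"):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) > min_bytes:
                    yield path

def lsb_embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bit of each carrier byte, MSB first."""
    bits = [(byte >> shift) & 1 for byte in payload for shift in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear low bit, set payload bit
    return out

def lsb_extract(carrier: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes hidden by lsb_embed."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (carrier[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

pixels = bytearray(os.urandom(4096))       # stand-in for decoded PNG pixel data
secret = b"chunk-01 of sensitive.pdf"      # one "chunk" from step 3
stego = lsb_embed(pixels, secret)
assert lsb_extract(stego, len(secret)) == secret
```

Because only the lowest bit of each byte changes, the carrier image looks visually identical, which is why a large PNG makes such a roomy, inconspicuous container.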

And there we have it; the zero day has arrived. By simply following ChatGPT's suggestions, the researchers were able to construct a highly sophisticated attack in a matter of hours, without writing any code themselves. They estimate that the same work would take a team of five to ten malware developers a few weeks without an AI-based chatbot, particularly if they wanted to evade every detection-based vendor.
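The renaming trick from step 6, converting variables to English first and last names, is the kind of transform ChatGPT reportedly produced. A rough sketch of the idea, using Python's ast module (the NAMES list and transformer are assumptions, not the researcher's code; ast.unparse needs Python 3.9+):

```python
import ast
import builtins

NAMES = ["JamesSmith", "MaryJones", "RobertBrown", "LindaDavis"]

class RenameVars(ast.NodeTransformer):
    """Rewrite each user-defined variable to a name drawn from NAMES."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if hasattr(builtins, node.id):   # leave built-ins like print alone
            return node
        if node.id not in self.mapping:
            self.mapping[node.id] = NAMES[len(self.mapping) % len(NAMES)]
        node.id = self.mapping[node.id]
        return node

source = "total = 0\nfor value in [1, 2, 3]:\n    total += value"
renamer = RenameVars()
tree = ast.parse(source)
obfuscated = ast.unparse(renamer.visit(tree))   # Python 3.9+
print(obfuscated)                                # same logic, name-like identifiers

namespace = {}
exec(obfuscated, namespace)                      # behavior is unchanged
assert namespace[renamer.mapping["total"]] == 6
```

The point of the anecdote is that this is semantically the same request ChatGPT refused when it was phrased as "obfuscate the code": only the framing changed.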

ChatGPT for Startups

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: ChatGPT malware


Mar 20 2023

Most security pros turn to unauthorized AI tools at work

Category: AI, ChatGPT | DISC @ 10:52 am

The research demonstrates that embracing automation in cybersecurity leads to significant business benefits, such as addressing talent gaps and effectively combating cyber threats. According to the survey, organizations will continue investing in cybersecurity automation in 2023, even amid economic turbulence.

“As organizations look for long-term solutions to keep pace with increasingly complex cyberattacks, they need technologies that will automate time-consuming, repetitive tasks so security teams have the bandwidth to focus on the threats that matter most,” said Marc van Zadelhoff, CEO, Devo. “This report confirms what we’re already hearing from Devo customers: adopting automation in the SOC results in happier analysts, boosted business results, and more secure organizations.”

Security pros are using AI tools without authorization

According to the study, security pros suspect their organization would stop them from using unauthorized AI tools, but that’s not stopping them.

  • 96% of security pros admit to someone at their organization using AI tools not provided by their company – including 80% who cop to using such tools themselves.
  • 97% of security pros believe their organizations are able to identify their use of unauthorized AI tools, and more than 3 in 4 (78%) suspect their organization would put a stop to it if discovered.

Adoption of automation in the SOC

Organizations fail to adopt automation effectively, forcing security pros to use rogue AI tools to keep up with workloads.

  • 96% of security professionals are not fully satisfied with their organization’s use of automation in the SOC.
  • Reasons for dissatisfaction with SOC automation varied from technological concerns such as the limited scalability and flexibility of the available solutions (42%) to financial ones such as the high costs associated with implementation and maintenance (39%). But for many, concerns go back to people: 34% cite a lack of internal expertise and resources to manage the solution as a reason they are not satisfied.
  • Respondents indicated that they would opt for unauthorized tools due to a better user interface (47%), more specialized capabilities (46%), and more efficient work (44%).

Investing in cybersecurity automation

Security teams will prioritize investments in cybersecurity automation in 2023 to solve organizational challenges, despite economic turbulence and widespread organizational cost-cutting.

  • 80% of security professionals predict an increase in cybersecurity automation investments in the coming year, including 55% who predict an increase of more than 5%.
  • 100% of security professionals reported positive business impacts as a result of using automation in cybersecurity, citing increased efficiency (70%) and financial gains (65%) as primary benefits.

Automation fills widening talent gaps

Adopting automation in the SOC helps organizations combat security staffing shortages in a variety of ways.

  • 100% of respondents agreed that automation would be helpful to fill staffing gaps in their team.
  • Incident analysis (54%), landscape analysis of applications and data sources (54%), and threat detection and response (53%) were the most common ways respondents said automation could make up for staffing shortages.

AI

A Guide to Combining AI Tools Like Chat GPT, Quillbot, and Midjourney for Crafting Killer Fiction and Nonfiction (Artificial Intelligence Uses & Applications)

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: AI tools, AI Tools Like Chat GPT


Mar 19 2023

Researcher creates polymorphic Blackmamba malware with ChatGPT

Category: AI, ChatGPT | DISC @ 3:44 pm

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.

The malware can target Windows, macOS and Linux devices.

HYAS Institute researcher and cybersecurity expert, Jeff Sims, has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.


This should not come as a surprise, as in January of this year, cybersecurity researchers at CyberArk also reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers were able to create the polymorphic malware by bypassing the content filters in ChatGPT, using an authoritative tone.

As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters

Once it captures the data, Blackmamba employs MS Teams webhook to transfer it to the attacker’s Teams channel, where it is “analyzed, sold on the dark web, or used for other nefarious purposes,” according to the report.

Jeff used MS Teams because it enabled him to gain access to an organization’s internal sources. Since it is connected to many other vital tools like Slack, identifying valuable targets may be more manageable.

Jeff created a polymorphic keylogger, powered by the AI-based ChatGPT, that can modify the malware randomly by examining the user’s input, leveraging the chatbot’s language capabilities.

The researcher built the keylogger in Python 3 and produced a unique Python script on every run by executing freshly generated code with Python's exec() function each time the chatbot was summoned. This means that whenever ChatGPT/text-davinci-003 is invoked, it writes a unique Python script for the keylogger.
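The exec() mechanic at the heart of this can be shown with a benign stand-in. Here, generate_variant is a hypothetical placeholder for the LLM call: it emits functionally identical source with randomized identifiers, so every invocation yields code with a different hash but the same behavior (this is a sketch of the technique, not BlackMamba's actual code):

```python
import hashlib
import random

def generate_variant() -> str:
    """Stand-in for an LLM call: return functionally identical source
    with a randomized variable name, so each run produces unique code."""
    var = "v" + "".join(random.choices("abcdefghij", k=8))
    return f"{var} = sum(range(10))\nresult = {var}"

# Each invocation: new source text, new hash, same behavior.
for _ in range(2):
    source = generate_variant()
    print(hashlib.sha256(source.encode()).hexdigest()[:12])
    namespace = {}
    exec(source, namespace)      # run the freshly generated script in memory
    assert namespace["result"] == 45
```

Because the script exists only as a string executed in memory, there is no stable on-disk artifact for a signature engine to fingerprint.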

This made the malware polymorphic and undetectable by EDRs. Attackers can use ChatGPT to modify the code to make it more elusive. They can even develop programs that malware/ransomware developers can use to launch attacks.

Researcher’s discussion with ChatGPT

Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility. This can convert Python code into .exe files that can operate on various devices, such as macOS, Windows, and Linux systems. Additionally, the malware can be shared within the targeted environment through social engineering or email.

It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.

What is Polymorphic malware?

Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult to detect and analyze by traditional signature-based antivirus software because the malware appears different each time it infects a system, even though it performs the same malicious functions.

Polymorphic malware typically achieves its goal by using various obfuscation techniques such as encryption, code modification, and different compression methods. The malware can also mutate in real time by generating new code and unique signatures to evade detection by security software.
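Why mutation defeats signature matching can be seen in a tiny model of a signature database. Any change to the bytes, even one with no effect on behavior, produces a completely different hash, so a scanner that matches known hashes misses the variant (a simplified illustration; real engines use richer signatures than a single file hash):

```python
import hashlib

def signature(data: bytes) -> str:
    """Model an AV signature as a SHA-256 of the file's bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"payload_v1: collect_and_send()"
known_signatures = {signature(original)}     # what the scanner has seen before

# A trivial "mutation": append inert bytes. Behavior would be identical,
# but the hash, and thus the signature, changes entirely.
mutated = original + b"  # junk"

assert signature(original) in known_signatures
assert signature(mutated) not in known_signatures
```

This is why detection of polymorphic code leans on behavioral and heuristic analysis rather than byte-level signatures alone.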

The use of polymorphic malware has become more common in recent years as cybercriminals seek new and innovative ways to bypass traditional security measures. The ability to morph and change its code makes it difficult for security researchers to develop effective security measures to prevent attacks, making it a significant threat to organizations and individuals alike.

Chat GPT: Is the Future Already Here?

AI-Powered ‘BlackMamba’ Keylogging Attack Evades Modern EDR Security

BlackMamba GPT POC Malware In Action

Professional Certificates, Bachelors & Masters Program

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: ChatGPT