Mar 21 2024

ChatGPT for Offensive Security

Category: ChatGPT, Information Security | disc7 @ 7:42 am

ChatGPT for Cybersecurity 



Feb 26 2024

HackerGPT – A ChatGPT-Powered AI Tool for Ethical Hackers & Cyber Security Community

Category: ChatGPT, Hacking | disc7 @ 8:20 am

HackerGPT is a cutting-edge AI tool designed explicitly for the cybersecurity sector, particularly beneficial for individuals involved in ethical hacking, such as bug bounty hunters.

This advanced assistant sits at the forefront of cyber intelligence, offering a vast repository of hacking methods, tools, and tactics. More than a mere repository of information, HackerGPT actively engages with users, guiding them through the complexities of cybersecurity.

Several ChatGPT-powered tools, such as OSINVGPT, PentestGPT, WormGPT, and BurpGPT, have already been developed for the cybersecurity community, and HackerGPT is writing a new chapter in that story.

What is the Purpose of HackerGPT:

It leverages the capabilities of ChatGPT, enhanced with specialized training data, to assist with various cybersecurity tasks, including network and mobile hacking, and to help users understand different hacking tactics, without resorting to unethical practices such as jailbreaking the model.

HackerGPT generates responses to user queries in real-time, adhering to ethical guidelines. It supports both GPT-3 and GPT-4 models, providing users with access to a wide range of hacking techniques and methodologies.

The tool is available for use via a web browser, with plans to develop an app version in the future. It offers a 14-day trial with unlimited messages and faster response times.

HackerGPT aims to streamline the hacking process, making it significantly easier for cybersecurity professionals to generate payloads, understand attack vectors, and communicate complex technical results effectively.

This AI-powered assistant is seen as a valuable resource for enhancing security evaluations and facilitating the understanding of potential risks and countermeasures among both technical and non-technical stakeholders.

Recently, HackerGPT released 2.0, and the beta is now available here.

Upon posing a query to HackerGPT, the process begins with authentication of the user and management of query allowances, which differ for free and premium users.

The system then probes its extensive database to find the most relevant information to the query. For non-English inquiries, translation is employed to ensure the database search is effective.

If a suitable match is discovered, it is integrated into the AI’s response mechanism. The query is securely transmitted to OpenAI or OpenRouter for processing, ensuring no personal data is included. The response you receive depends on the module in use:

  • HackerGPT Module: A customized version of Mixtral 8x7B with semantic search capabilities tailored to our database.
  • GPT-4 Turbo: The most recent innovation from OpenAI, enhanced with our specialized prompts.
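
To make the flow above concrete, here is a minimal Python sketch of such a request pipeline. It is purely illustrative: the helper names, message limits, and model identifiers are hypothetical stand-ins rather than HackerGPT's actual code, and the request is shown against OpenRouter's OpenAI-compatible chat completions endpoint.

# Hypothetical sketch of a HackerGPT-style request pipeline (not the project's real code).
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenAI-compatible endpoint

def translate_to_english(text):
    return text  # placeholder: a real system would call a translation service here

def semantic_search(text, top_k=3):
    return []    # placeholder: a real system would query its curated vector database

def handle_query(user, query, api_key, premium=False):
    # 1. Authenticate the user and enforce the message allowance (free vs. premium tiers).
    if not user.get("authenticated"):
        raise PermissionError("login required")
    if user["messages_used"] >= (500 if premium else 50):  # illustrative limits only
        raise RuntimeError("message allowance exhausted")

    # 2. Translate non-English queries so the database search stays effective.
    normalized = translate_to_english(query)

    # 3. Semantic search over the curated knowledge database for relevant context.
    context = semantic_search(normalized)

    # 4. Forward the query plus retrieved context to the model; no personal data is sent.
    messages = [
        {"role": "system", "content": "You are an ethical-hacking assistant.\n" + "\n".join(context)},
        {"role": "user", "content": normalized},
    ]
    model = "openai/gpt-4-turbo" if premium else "mistralai/mixtral-8x7b-instruct"
    resp = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": messages},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

The key design point in such a flow is that only the query text and retrieved context leave the server; user identity is checked locally and never forwarded to the model provider.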

Guidelines for Issues:
The “Issues” section is strictly for problems directly related to the codebase. We’ve noticed an influx of non-codebase-related issues, such as feature requests or cloud provider problems. Please consult the “Help” section under the “Discussions” tab for setup-related queries. Issues not pertinent to the codebase are typically closed promptly.

Engagement in Discussions:
We strongly encourage active participation in the “Discussions” tab! It’s an excellent place to ask questions, exchange ideas, and seek assistance. If you have a question, chances are others do too.

Updating Process:
To update your local Chatbot UI repository, navigate to the root directory in your terminal and execute:

npm run update

For hosted instances, you’ll also need to run:

npm run db-push

This will apply the latest migrations to your live database.

Setting Up Locally:
To set up your own instance of Chatbot UI locally, follow these steps:

  1. Clone the Repository:
git clone https://github.com/mckaywrigley/chatbot-ui.git
  2. Install Dependencies:

Navigate to the root directory of your local Chatbot UI repository and run:

npm install
  3. Install Supabase & Run Locally:

Supabase is chosen for its ease of use, open-source nature, and free tier for hosted instances. It replaces local browser storage, addressing security concerns and storage limitations, and enabling multi-modal use cases.

  • Install Docker: Necessary for running Supabase locally. Download it for free from the official site.
  • Install Supabase CLI: Use Homebrew for macOS/Linux or Scoop for Windows.
  • Start Supabase: Execute supabase start in your terminal at the root of the Chatbot UI repository.
  • Fill in Secrets: Copy the .env.local.example file to .env.local and populate it with values obtained from supabase status.
  4. Optional Local Model Installation:

For local models, follow the instructions provided for Ollama installation.

  5. Run the App Locally:

Finally, run npm run chat in your terminal. Your local instance should now be accessible at http://localhost:3000.

Setting Up a Hosted Instance:

To deploy your Chatbot UI instance in the cloud, first follow the local setup steps above. Then, create a separate repository for your hosted instance and push your code to GitHub.

Set up the backend with Supabase by creating a new project and configuring authentication. Connect to the hosted database and configure the frontend with Vercel, adding necessary environment variables. Deploy, and your hosted Chatbot UI instance should be live and accessible through the Vercel-provided URL. You can read the complete GitHub repository here.

ChatGPT para Hackers y Programadores: Domina el arte del Prompt Engineering y aumenta tu productividad

Mastering Cybersecurity with ChatGPT: Harnessing AI to Empower Your Cyber Career


Tags: HackerGPT


Jan 08 2024

11 WAYS OF HACKING INTO CHATGPT-LIKE GENERATIVE AI SYSTEMS

Category: ChatGPT, Hacking | disc7 @ 9:41 pm

In the rapidly evolving landscape of artificial intelligence, generative AI systems have become a cornerstone of innovation, driving advancements in fields ranging from language processing to creative content generation. However, a recent report by the National Institute of Standards and Technology (NIST) sheds light on the increasing vulnerability of these systems to a range of sophisticated cyber attacks. The report provides a comprehensive taxonomy of attacks targeting Generative AI (GenAI) systems, revealing the intricate ways in which these technologies can be exploited. The findings are particularly relevant as AI continues to integrate deeper into various sectors, raising concerns about the integrity and privacy implications of these systems.

INTEGRITY ATTACKS: A THREAT TO AI’S CORE

Integrity attacks affecting Generative AI systems are a type of security threat where the goal is to manipulate or corrupt the functioning of the AI system. These attacks can have significant implications, especially as Generative AI systems are increasingly used in various fields. Here are some key aspects of integrity attacks on Generative AI systems:

  1. Data Poisoning:
    • Detail: This attack targets the training phase of an AI model. Attackers inject false or misleading data into the training set, which can subtly or significantly alter the model’s learning, resulting in a model that generates biased or incorrect outputs (a toy sketch of this attack follows this list).
    • Example: Consider a facial recognition system being trained with a dataset that has been poisoned with subtly altered images. These images might contain small, imperceptible changes that cause the system to incorrectly recognize certain faces or objects.
  2. Model Tampering:
    • Detail: In this attack, the internal parameters or architecture of the AI model are altered. This could be done by an insider with access to the model or by exploiting a vulnerability in the system.
    • Example: An attacker could alter the weightings in a sentiment analysis model, causing it to interpret negative sentiments as positive, which could be particularly damaging in contexts like customer feedback analysis.
  3. Output Manipulation:
    • Detail: This occurs post-processing, where the AI’s output is intercepted and altered before it reaches the end-user. This can be done without directly tampering with the AI model itself.
    • Example: If a Generative AI system is used to generate financial reports, an attacker could intercept and manipulate the output to show incorrect financial health, affecting stock prices or investor decisions.
  4. Adversarial Attacks:
    • Detail: These attacks use inputs that are specifically designed to confuse the AI model. These inputs are often indistinguishable from normal inputs to the human eye but cause the AI to make errors.
    • Example: A stop sign with subtle stickers or graffiti might be recognized as a speed limit sign by an autonomous vehicle’s AI system, leading to potential traffic violations or accidents.
  5. Backdoor Attacks:
    • Detail: A backdoor is embedded into the AI model during its training. This backdoor is activated by certain inputs, causing the model to behave unexpectedly or maliciously.
    • Example: A language translation model could have a backdoor that, when triggered by a specific phrase, starts inserting or altering words in a translation, potentially changing the message’s meaning.
  6. Exploitation of Biases:
    • Detail: This attack leverages existing biases within the AI model. AI systems can inherit biases from their training data, and these biases can be exploited to produce skewed or harmful outputs.
    • Example: If an AI model used for resume screening has an inherent gender bias, attackers can submit resumes that are tailored to exploit this bias, increasing the likelihood of certain candidates being selected or rejected unfairly.
  7. Evasion Attacks:
    • Detail: In this scenario, the input data is manipulated in such a way that the AI system fails to recognize it as something it is trained to detect or categorize correctly.
    • Example: Malware could be designed to evade detection by an AI-powered security system by altering its code signature slightly, making it appear benign to the system while still carrying out malicious functions.
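
To ground the data-poisoning entry above, here is a toy, self-contained Python sketch that flips a fraction of training labels before fitting a scikit-learn classifier and compares accuracy against a clean baseline. It is a classroom illustration of the concept only; the synthetic dataset, logistic-regression model, and 20% poisoning rate are arbitrary choices for the demo.

# Toy illustration of data poisoning via label flipping (educational sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "attacker" flips the labels of 20% of the training set before training occurs.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=int(0.2 * len(poisoned_labels)), replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))

Running it typically shows the poisoned model’s test accuracy dropping relative to the clean baseline, which is exactly the integrity degradation the attack aims for.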


PRIVACY ATTACKS ON GENERATIVE AI

Privacy attacks on Generative AI systems are a serious concern, especially given the increasing use of these systems in handling sensitive data. These attacks aim to compromise the confidentiality and privacy of the data used by or generated from these systems. Here are some common types of privacy attacks, explained in detail with examples:

  1. Model Inversion Attacks:
    • Detail: In this type of attack, the attacker tries to reconstruct the input data from the model’s output. This is particularly concerning if the AI model outputs something that indirectly reveals sensitive information about the input data.
    • Example: Consider a facial recognition system that outputs the likelihood of certain attributes (like age or ethnicity). An attacker could use this output information to reconstruct the faces of individuals in the training data, thereby invading their privacy.
  2. Membership Inference Attacks:
    • Detail: These attacks aim to determine whether a particular data record was used in the training dataset of a machine learning model. This can be a privacy concern if the training data contains sensitive information (a minimal sketch of this idea follows this list).
    • Example: An attacker might test an AI health diagnostic tool with specific patient data. If the model’s predictions are unusually accurate or certain, it might indicate that the patient’s data was part of the training set, potentially revealing sensitive health information.
  3. Training Data Extraction:
    • Detail: Here, the attacker aims to extract actual data points from the training dataset of the AI model. This can be achieved by analyzing the model’s responses to various inputs.
    • Example: An attacker could interact with a language model trained on confidential documents and, through carefully crafted queries, could cause the model to regurgitate snippets of these confidential texts.
  4. Reconstruction Attacks:
    • Detail: Similar to model inversion, this attack focuses on reconstructing the input data, often in a detailed and high-fidelity manner. This is particularly feasible in models that retain a lot of information about their training data.
    • Example: In a generative model trained to produce images based on descriptions, an attacker might find a way to input specific prompts that cause the model to generate images closely resembling those in the training set, potentially revealing private or sensitive imagery.
  5. Property Inference Attacks:
    • Detail: These attacks aim to infer properties or characteristics of the training data that the model was not intended to reveal. This could expose sensitive attributes or trends in the data.
    • Example: An attacker might analyze the output of a model used for employee performance evaluations to infer unprotected characteristics of the employees (like gender or race), which could be used for discriminatory purposes.
  6. Model Stealing or Extraction:
    • Detail: In this case, the attacker aims to replicate the functionality of a proprietary AI model. By querying the model extensively and observing its outputs, the attacker can create a similar model without access to the original training data.
    • Example: A competitor could use the public API of a machine learning model to systematically query it and use the responses to train a new model that mimics the original, effectively stealing the intellectual property.
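
As a concrete illustration of the membership-inference idea above, the following toy Python sketch deliberately overfits a model and then guesses membership from output confidence. The 0.95 threshold and the random-forest target are arbitrary choices for the demo, not a real attack on any deployed system.

# Toy illustration of a confidence-based membership inference test (educational sketch only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=1)

# An overfit target model tends to be more confident on records it was trained on.
target = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_member, y_member)

def attacker_guesses_membership(model, records, threshold=0.95):
    # Guess "member" whenever the model's top-class probability exceeds the threshold.
    confidence = model.predict_proba(records).max(axis=1)
    return confidence > threshold

print("guessed-member rate on training records:", attacker_guesses_membership(target, X_member).mean())
print("guessed-member rate on unseen records:  ", attacker_guesses_membership(target, X_nonmember).mean())

The gap between the two rates is what an attacker exploits; well-regularized or differentially private models narrow it.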

SEGMENTING ATTACKS

Attacks on AI systems, including ChatGPT and other generative AI models, can be further categorized based on the stage of the learning process they target (training or inference) and the attacker’s knowledge and access level (white-box or black-box). Here’s a breakdown:

BY LEARNING STAGE:

  1. Attacks during Training Phase:
    • Data Poisoning: Injecting malicious data into the training set to compromise the model’s learning process.
    • Backdoor Attacks: Embedding hidden functionalities in the model during training that can be activated by specific inputs.
  2. Attacks during Inference Phase:
    • Adversarial Attacks: Presenting misleading inputs to trick the model into making errors during its operation.
    • Model Inversion and Reconstruction Attacks: Attempting to infer or reconstruct input data from the model’s outputs.
    • Membership Inference Attacks: Determining whether specific data was used in the training set by observing the model’s behavior.
    • Property Inference Attacks: Inferring properties of the training data not intended to be disclosed.
    • Output Manipulation: Altering the model’s output after it has been generated but before it reaches the intended recipient.

BY ATTACKER’S KNOWLEDGE AND ACCESS:

  1. White-Box Attacks (Attacker has full knowledge and access):
    • Model Tampering: Directly altering the model’s parameters or structure.
    • Backdoor Attacks: Implanting a backdoor during the model’s development, which the attacker can later exploit.
    • These attacks require deep knowledge of the model’s architecture, parameters, and potentially access to the training process.
  2. Black-Box Attacks (Attacker has limited or no knowledge and access):
    • Adversarial Attacks: Creating input samples designed to be misclassified or misinterpreted by the model.
    • Model Inversion and Reconstruction Attacks: These do not require knowledge of the model’s internal workings.
    • Membership and Property Inference Attacks: Based on the model’s output to certain inputs, without knowledge of its internal structure.
    • Training Data Extraction: Extracting information about the training data through extensive interaction with the model.
    • Model Stealing or Extraction: Replicating the model’s functionality by observing its inputs and outputs.

IMPLICATIONS:

  • Training Phase Attacks often require insider access or a significant breach in the data pipeline, making them less common but potentially more devastating.
  • Inference Phase Attacks are more accessible to external attackers as they can often be executed with minimal access to the model.
  • White-Box Attacks are typically more sophisticated and require a higher level of access and knowledge, often limited to insiders or through major security breaches.
  • Black-Box Attacks are more common in real-world scenarios, as they can be executed with limited knowledge about the model and without direct access to its internals.

Understanding these categories helps in devising targeted defense strategies for each type of attack, depending on the specific vulnerabilities and operational stages of the AI system.

HACKING CHATGPT

The ChatGPT AI model, like any advanced machine learning system, is potentially vulnerable to various attacks, including privacy and integrity attacks. Let’s explore how these attacks could be or have been used against ChatGPT, focusing on the privacy attacks mentioned earlier:

  1. Model Inversion Attacks:
    • Potential Use Against ChatGPT: An attacker might attempt to use ChatGPT’s responses to infer details about the data it was trained on. For example, if ChatGPT consistently provides detailed and accurate information about a specific, less-known topic, it could indicate the presence of substantial training data on that topic, potentially revealing the nature of the data sources used.
  2. Membership Inference Attacks:
    • Potential Use Against ChatGPT: This type of attack could try to determine if a particular text or type of text was part of ChatGPT’s training data. By analyzing the model’s responses to specific queries, an attacker might guess whether certain data was included in the training set, which could be a concern if the training data included sensitive or private information.
  3. Training Data Extraction:
    • Potential Use Against ChatGPT: Since ChatGPT generates text based on patterns learned from its training data, there’s a theoretical risk that an attacker could manipulate the model to output segments of text that closely resemble or replicate parts of its training data. This is particularly sensitive if the training data contained confidential or proprietary information.
  4. Reconstruction Attacks:
    • Potential Use Against ChatGPT: Similar to model inversion, attackers might try to reconstruct input data (like specific text examples) that the model was trained on, based on the information the model provides in its outputs. However, given the vast and diverse dataset ChatGPT is trained on, reconstructing specific training data can be challenging.
  5. Property Inference Attacks:
    • Potential Use Against ChatGPT: Attackers could analyze responses from ChatGPT to infer properties about its training data that aren’t explicitly modeled. For instance, if the model shows biases or tendencies in certain responses, it might reveal unintended information about the composition or nature of the training data.
  6. Model Stealing or Extraction:
    • Potential Use Against ChatGPT: This involves querying ChatGPT extensively to understand its underlying mechanisms and then using this information to create a similar model. Such an attack would be an attempt to replicate ChatGPT’s capabilities without access to the original model or training data.


Integrity attacks on AI models like ChatGPT aim to compromise the accuracy and reliability of the model’s outputs. Let’s examine how these attacks could be or have been used against the ChatGPT model, categorized by the learning stage and attacker’s knowledge:

ATTACKS DURING TRAINING PHASE (WHITE-BOX):

  • Data Poisoning: If an attacker gains access to the training pipeline, they could introduce malicious data into ChatGPT’s training set. This could skew the model’s understanding and responses, leading it to generate biased, incorrect, or harmful content.
  • Backdoor Attacks: An insider or someone with access to the training process could implant a backdoor into ChatGPT. This backdoor might trigger specific responses when certain inputs are detected, which could be used to spread misinformation or other harmful content.

ATTACKS DURING INFERENCE PHASE (BLACK-BOX):

  • Adversarial Attacks: These involve presenting ChatGPT with specially crafted inputs that cause it to produce erroneous outputs. For instance, an attacker could find a way to phrase questions or prompts that consistently mislead the model into giving incorrect or nonsensical answers.
  • Output Manipulation: This would involve intercepting and altering ChatGPT’s responses after they are generated but before they reach the user. While this is more of an attack on the communication channel rather than the model itself, it can still undermine the integrity of ChatGPT’s outputs.

IMPLICATIONS AND DEFENSE STRATEGIES:

  • During Training: Ensuring the security and integrity of the training data and process is crucial. Regular audits, anomaly detection, and secure data handling practices are essential to mitigate these risks.
  • During Inference: Robust model design to resist adversarial inputs, continuous monitoring of responses, and secure deployment architectures can help in defending against these attacks; a simple monitoring sketch follows below.
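
As one small example of “continuous monitoring of responses,” here is a minimal Python sketch of a wrapper that logs every exchange and withholds outputs matching suspicious patterns before they reach the user. The patterns and the withholding behavior are illustrative assumptions, not a complete or recommended defense.

# Minimal sketch of inference-time response monitoring (illustrative, not a complete defense).
import logging
import re

logging.basicConfig(level=logging.INFO)

# Hypothetical patterns a deployment might refuse to return verbatim,
# e.g. strings that look like payment card numbers or leaked key material.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def monitored_reply(generate_fn, prompt):
    # Wrap a model call: log the exchange and withhold suspicious output before returning it.
    reply = generate_fn(prompt)
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(reply):
            logging.warning("suspicious content blocked for prompt: %r", prompt[:80])
            return "[response withheld pending review]"
    logging.info("prompt served normally: %r", prompt[:80])
    return reply

# Example usage with a stand-in model function.
def fake_model(prompt):
    return "Here is a card number: 4111 1111 1111 1111"

print(monitored_reply(fake_model, "test prompt"))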

REAL-WORLD EXAMPLES AND CONCERNS:

  • To date, there haven’t been publicly disclosed instances of successful integrity attacks specifically against ChatGPT. However, the potential for such attacks exists, as demonstrated in academic and industry research on AI vulnerabilities.
  • OpenAI, the creator of ChatGPT, employs various countermeasures like input sanitization, monitoring model outputs, and continuously updating the model to address new threats and vulnerabilities.

In conclusion, while integrity attacks pose a significant threat to AI models like ChatGPT, a combination of proactive defense strategies and ongoing vigilance is key to mitigating these risks.

While these attack types broadly apply to all generative AI systems, the report notes that some vulnerabilities are particularly pertinent to specific AI architectures, like Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) systems. These models, which are at the forefront of natural language processing, are susceptible to unique threats due to their complex data processing and generation capabilities.

The implications of these vulnerabilities are vast and varied, affecting industries from healthcare to finance, and even national security. As AI systems become more integrated into critical infrastructure and everyday applications, the need for robust cybersecurity measures becomes increasingly urgent.

The NIST report serves as a clarion call for the AI industry, cybersecurity professionals, and policymakers to prioritize the development of stronger defense mechanisms against these emerging threats. This includes not only technological solutions but also regulatory frameworks and ethical guidelines to govern the use of AI.

In conclusion, the report is a timely reminder of the double-edged nature of AI technology. While it offers immense potential for progress and innovation, it also brings with it new challenges and threats that must be addressed with vigilance and foresight. As we continue to push the boundaries of what AI can achieve, ensuring the security and integrity of these systems remains a paramount concern for a future where technology and humanity can coexist in harmony.

ChatGPT FOR CYBERSECURITY: The Ultimate Weapon Against Hackers


Oct 14 2023

HackerGPT: A ChatGPT Empowered Penetration Testing Tool

Category: ChatGPT, Hacking | disc7 @ 4:59 pm

HackerGPT is a ChatGPT-enabled penetration testing tool that can help with network hacking, mobile hacking, different hacking tactics, and other specific tasks.

The main foundation of HackerGPT is the training data it has been given. It does not use a jailbreak technique; instead, it generates replies using ChatGPT with a specified request while conforming to ethical rules.

A 14-day trial is available. With this trial, you get access to GPT-4, an unlimited number of messages for HackerGPT, quicker answers, and other advantages.

“No logs, no cost, anonymous login. Trained on a ton of hacking reports”, the company said.

“HackerGPT is only available in your web browser. Making it into an app will take some time, but with your feedback, we can make progress faster”.

Responses of HackerGPT

For instance, what if we asked HackerGPT to provide a step-by-step tutorial on conducting ARP spoofing? 

Threat Sentry Security, a cybersecurity analyst, said, “Hacker-GPT. This is a pentester dream, my job just became 100 times easier. I told it to create an XSS payload & it did it without hesitation”.

https://twitter.com/thehackergpt/status/1710744412932698151

According to users, HackerGPT has been provided with numerous bug bounty reports and can be helpful in your work, a big time-saver.

It utilizes GPT-3 and GPT-4 and is aware of most attack routes and methodologies.

As of this writing, the company provides the users with the following:

  • Plus, the subscription is now at HALF the price!
  • Free users: 1.5x more messages with HackerGPT.
  • Plus users: 2.5x more messages with GPT4.
  • Plus bonus: Unlimited messages with HackerGPT.

Ethical hackers may use this tool to improve security evaluation and mitigation. A difficulty ethical hackers frequently face is communicating complicated technical results to both technical and non-technical audiences.

ChatGPT’s capacity to produce logical and understandable explanations can make communicating vulnerabilities simpler, helping organizations understand potential risks and adopt the necessary countermeasures.
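
As an illustration of that point, the short Python sketch below uses the official OpenAI client to turn a terse technical finding into a plain-language summary for non-technical stakeholders. The model name, the example finding, and the prompt wording are assumptions for the demo; substitute whatever model and findings you actually work with.

# Sketch: turning a technical finding into a stakeholder-friendly summary (illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

finding = (
    "Reflected XSS in /search?q= parameter; payload executes in victim browser; "
    "no output encoding; CVSS 6.1."  # hypothetical finding text for the example
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system",
         "content": "Explain security findings in plain language for non-technical executives."},
        {"role": "user",
         "content": f"Summarize this finding, its business risk, and the fix:\n{finding}"},
    ],
)
print(response.choices[0].message.content)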

A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend them Back


Tags: A Hacker's Mind, HackerGPT


Jul 16 2023

ChatGPT Reconnaissance Techniques for Penetration Testing Success

Category: ChatGPT,Pen Testdisc7 @ 12:42 pm

ChatGPT is one of the biggest and most sophisticated language models ever made, with a massive neural network of roughly 175 billion parameters.

Recent research has revealed how ChatGPT for penetration testing can enable testers to achieve greater success.

ChatGPT was launched by OpenAI in November 2022, causing significant disruption in the AI/ML community.

Sophisticated email attacks are on the rise, thanks to threat actors leveraging the power of Artificial Intelligence.

However, researchers are staying one step ahead by utilizing ChatGPT for threat analysis and penetration testing.

A recently published research paper by Sheetal Tamara from the University of the Cumberlands highlights the effective use of ChatGPT in Reconnaissance.

Recently, an automated penetration testing tool, PentestGPT, was also released.

ChatGPT For Penetration Testing

ChatGPT can be used in the initial reconnaissance phase, where the penetration tester collects detailed data about the scope of the assessment.

With the help of ChatGPT, pen-testers are able to obtain reconnaissance data such as Internet Protocol (IP) address ranges, domain names, network topology, vendor technologies, SSL/TLS ciphers, ports and services, and operating systems.

This research highlights how artificial intelligence language models can be used in cybersecurity and contributes to advancing penetration testing techniques.

Pentesters can obtain the organization’s IP address using the prompt (“What IP address range related information do you have on [insert organization name here] in your knowledge base?”).

This prompt would deliver the possible IP addresses used by the organization.

“What type of domain name information can you gather on [insert target website here]?”

ChatGPT could provide the list of domain names used by the organization, such as primary domains, subdomains, other domains, international domains, generic top-level domains (gTLDs), and subsidiary domains.

“What vendor technologies does [insert target website fqdn here] make use of on its website?”

Answering this question, ChatGPT will provide various technologies, such as content delivery networks (CDNs), web servers, advertising engines, analytics engines, customer relationship management (CRM), and other technologies organizations use.

“Provide a comprehensive list of SSL ciphers based on your research used by [insert target website fqdn] in pursuant to your large corpus of text data present in your knowledge base.”

ChatGPT could provide the ciphers, SSL/TLS versions, and types of TLS certificates used; with this question, ChatGPT is also able to report on the encryption standards in use.

“Please list the partner websites including FQDN based on your research that [insert target website here] has direct links to according to your knowledge base.”

In response to the question, ChatGPT is able to provide a list of partner websites that are directly linked.

“Provide a vendor technology stack based on your research that is used by [insert organization name here].”

This prompt would extract details such as application server type, database type, operating systems, big data technologies, logging and monitoring software, and other infrastructure-related information specific to the organization.

“Provide a list of network protocols related information that is available on [insert organization name here].”

ChatGPT will return a list of network protocols the target organization uses, including HTTPS, SMTP, NTP, SSH, SNMP, and others.
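
For testers who want to script this phase, here is a minimal Python sketch that replays several of the reconnaissance prompts quoted above through the OpenAI client. The target names are placeholders and the model name is an assumption; as the paper notes, some prompts may be refused outright or may return data that is not usable.

# Sketch: scripting the reconnaissance prompts above against the OpenAI API (illustrative only).
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
org = "Example Corp"       # placeholder organization name
fqdn = "www.example.com"   # placeholder target website

recon_prompts = [
    f"What IP address range related information do you have on {org} in your knowledge base?",
    f"What type of domain name information can you gather on {fqdn}?",
    f"What vendor technologies does {fqdn} make use of on its website?",
    f"Provide a list of network protocols related information that is available on {org}.",
]

for prompt in recon_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; the paper's experiments used ChatGPT itself
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")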

The research determined that “ChatGPT has the ability to provide valuable insight into the deployment of the target organization’s technology stack as well as specific information about web applications deployed by the target organization,” reads the paper published.

“The research performed on ChatGPT required trial and error in the prompting as certain requests can either be outright rejected or may result in responses that do not contain usable data for the reconnaissance phase of a penetration test.”

Mastering Cybersecurity with ChatGPT: Harnessing AI to Empower Your Cyber Career

CISSP training course


Tags: AI Penetration Testing, ChatGPT, Cybersecurity with ChatGPT, Reconnaissance Techniques


Apr 05 2023

HOW TO CREATE UNDETECTABLE MALWARE VIA CHATGPT IN 7 EASY STEPS BYPASSING ITS RESTRICTIONS

Category: AI, ChatGPT, Malware | DISC @ 9:35 am

There is evidence that ChatGPT has helped low-skill hackers generate malware, raising worries about the technology being abused by cybercriminals. ChatGPT cannot yet replace expert threat actors, but security researchers say it can lower the bar for low-skill hackers to create malware.

Since the introduction of ChatGPT in November, the OpenAI chatbot has assisted over 100 million users, or around 13 million people each day, in the process of generating text, music, poetry, tales, and plays in response to specific requests. In addition to that, it may provide answers to exam questions and even build code for software.

It appears that malicious intent follows powerful technology, particularly when that technology is accessible to the general public. There is evidence on the dark web that individuals have used ChatGPT to develop dangerous material despite the anti-abuse constraints meant to prevent illegitimate requests, something experts feared would happen. Because of this, a researcher from Forcepoint decided to test how far he could get without writing any code himself, relying entirely on ChatGPT and using only advanced methods such as steganography, which were previously associated mainly with nation-state adversaries.

The demonstration of the following two points was the overarching goal of this exercise:

  1. How simple it is to get around the inadequate barriers that ChatGPT has installed.
  2. How simple it is to create sophisticated malware without writing any code, relying solely on ChatGPT.

Initially, ChatGPT informed him that malware creation is immoral and refused to provide code.

  1. To get around this, he generated small code snippets and manually assembled the executable. The first successful task was to produce code that looked for a local PNG larger than 5MB; the design choice was that a 5MB PNG could readily hold a piece of a business-sensitive PDF or DOCX.

  2. He then asked ChatGPT to add code that would encode the found PNG with steganography and exfiltrate the files from the computer, requesting code that searches the user’s Documents, Desktop, and AppData directories and uploads the files to Google Drive.

  3. He then asked ChatGPT to combine these pieces of code and modify them to divide files into many “chunks” for quiet exfiltration using steganography.

  4. He then submitted the MVP to VirusTotal, where five of sixty-nine vendors marked the file as malicious.

  5. The next step was to ask ChatGPT to create its own LSB steganography method in the program without using an external library, and to postpone the effective start by two minutes.

  6. Another change he asked ChatGPT to make was to obfuscate the code, which was rejected. Once ChatGPT rejected his request, he tried again: by altering the request from obfuscating the code to converting all variables to random English first and last names, ChatGPT cheerfully cooperated. As an extra test, he disguised the obfuscation request as protecting the code’s intellectual property; again, it supplied sample code that obscured variable names and recommended Go modules for constructing completely obfuscated code.

  7. In the final step, he uploaded the file to VirusTotal again to check whether it would be detected.

And there we have it: the zero day had arrived. The researchers were able to construct a very sophisticated attack in a matter of hours simply by following ChatGPT’s suggestions, without writing any code themselves. They estimate it would take a team of five to ten malware developers a few weeks to do the same amount of work without the assistance of an AI-based chatbot, particularly if they wanted to avoid detection by all detection-based vendors.

ChatGPT for Startups


Tags: ChatGPT malware


Mar 20 2023

Most security pros turn to unauthorized AI tools at work

Category: AI, ChatGPT | DISC @ 10:52 am

The research demonstrates that embracing automation in cybersecurity leads to significant business benefits, such as addressing talent gaps and effectively combating cyber threats. According to the survey, organizations will continue investing in cybersecurity automation in 2023, even amid economic turbulence.

“As organizations look for long-term solutions to keep pace with increasingly complex cyberattacks, they need technologies that will automate time-consuming, repetitive tasks so security teams have the bandwidth to focus on the threats that matter most,” said Marc van Zadelhoff, CEO, Devo. “This report confirms what we’re already hearing from Devo customers: adopting automation in the SOC results in happier analysts, boosted business results, and more secure organizations.”

Security pros are using AI tools without authorization

According to the study, security pros suspect their organization would stop them from using unauthorized AI tools, but that’s not stopping them.

  • 96% of security pros admit to someone at their organization using AI tools not provided by their company – including 80% who cop to using such tools themselves.
  • 97% of security pros believe their organizations are able to identify their use of unauthorized AI tools, and more than 3 in 4 (78%) suspect their organization would put a stop to it if discovered.

Adoption of automation in the SOC

Organizations fail to adopt automation effectively, forcing security pros to use rogue AI tools to keep up with workloads.

  • 96% of security professionals are not fully satisfied with their organization’s use of automation in the SOC.
  • Reasons for dissatisfaction with SOC automation varied from technological concerns such as the limited scalability and flexibility of the available solutions (42%) to financial ones such as the high costs associated with implementation and maintenance (39%). But for many, concerns go back to people: 34% cite a lack of internal expertise and resources to manage the solution as a reason they are not satisfied.
  • Respondents indicated that they would opt for unauthorized tools due to a better user interface (47%), more specialized capabilities (46%), and more efficient work (44%).

Investing in cybersecurity automation

Security teams will prioritize investments in cybersecurity automation in 2023 to solve organizational challenges, despite economic turbulence and widespread organizational cost-cutting.

  • 80% of security professionals predict an increase in cybersecurity automation investments in the coming year, including 55% who predict an increase of more than 5%.
  • 100% of security professionals reported positive business impacts as a result of using automation in cybersecurity, citing increased efficiency (70%) and financial gains (65%) as primary benefits.

Automation fills widening talent gaps

Adopting automation in the SOC helps organizations combat security staffing shortages in a variety of ways.

  • 100% of respondents agreed that automation would be helpful to fill staffing gaps in their team.
  • Incident analysis (54%), landscape analysis of applications and data sources (54%), and threat detection and response (53%) were the most common ways respondents said automation could make up for staffing shortages.


A Guide to Combining AI Tools Like Chat GPT, Quillbot, and Midjourney for Crafting Killer Fiction and Nonfiction (Artificial Intelligence Uses & Applications)


Tags: AI tools, AI Tools Like Chat GPT


Mar 19 2023

Researcher creates polymorphic Blackmamba malware with ChatGPT

Category: AI, ChatGPT | DISC @ 3:44 pm

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.

The malware can target Windows, macOS and Linux devices.

HYAS Institute researcher and cybersecurity expert, Jeff Sims, has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.


This should not come as a surprise, as in January of this year, cybersecurity researchers at CyberArk also reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers were able to create the polymorphic malware by bypassing the content filters in ChatGPT, using an authoritative tone.

As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters

Once it captures the data, Blackmamba employs MS Teams webhook to transfer it to the attacker’s Teams channel, where it is “analyzed, sold on the dark web, or used for other nefarious purposes,” according to the report.

Jeff used MS Teams because it enabled him to gain access to an organization’s internal sources. Since it is connected to many other vital tools like Slack, identifying valuable targets may be more manageable.

Jeff created a polymorphic keylogger, powered by the AI-based ChatGPT, that can modify the malware randomly by examining the user’s input, leveraging the chatbot’s language capabilities.

The researcher was able to produce the keylogger in Python 3 and create a unique Python script by running the python exec() function every time the chatbot was summoned. This means that whenever ChatGPT/text-DaVinci-003 is invoked, it writes a unique Python script for the keylogger.

This made the malware polymorphic and undetectable by EDRs. Attackers can use ChatGPT to modify the code to make it more elusive. They can even develop programs that malware/ransomware developers can use to launch attacks.

Researcher’s discussion with ChatGPT

Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility that converts Python code into standalone executables that can run on various devices, such as macOS, Windows, and Linux systems. Additionally, the malware can be shared within the targeted environment through social engineering or email.

It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.

What is Polymorphic malware?

Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult to detect and analyze by traditional signature-based antivirus software because the malware appears different each time it infects a system, even though it performs the same malicious functions.

Polymorphic malware typically achieves its goal by using various obfuscation techniques such as encryption, code modification, and different compression methods. The malware can also mutate in real time by generating new code and unique signatures to evade detection by security software.

The use of polymorphic malware has become more common in recent years as cybercriminals seek new and innovative ways to bypass traditional security measures. The ability to morph and change its code makes it difficult for security researchers to develop effective security measures to prevent attacks, making it a significant threat to organizations and individuals alike.

Chat GPT: Is the Future Already Here?

AI-Powered ‘BlackMamba’ Keylogging Attack Evades Modern EDR Security

BlackMamba GPT POC Malware In Action

Professional Certificates, Bachelors & Masters Program


Tags: ChatGPT