Dec 02 2023

AI is about to completely change how you use computers

Category: AI | disc7 @ 2:33 pm

I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.

To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets. And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.

In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

This type of software—something that responds to natural language and can accomplish many different tasks based on its knowledge of the user—is called an agent. I’ve been thinking about agents for nearly 30 years and wrote about them in my 1995 book The Road Ahead, but they’ve only recently become practical because of advances in AI.

Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.

A personal assistant for everyone

Some critics have pointed out that software companies have offered this kind of thing before, and users didn’t exactly embrace them. (People still joke about Clippy, the digital assistant that we included in Microsoft Office and later dropped.) Why will people use agents?

The answer is that they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter. Clippy has as much in common with agents as a rotary phone has with a mobile device.

An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

“Clippy was a bot, not an agent.”

To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences. Clippy was a bot, not an agent.

Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.

Imagine that you want to plan a trip. A travel bot will identify hotels that fit your budget. An agent will know what time of year you’ll be traveling and, based on its knowledge about whether you always try a new destination or like to return to the same place repeatedly, it will be able to suggest locations. When asked, it will recommend things to do based on your interests and propensity for adventure, and it will book reservations at the types of restaurants you would enjoy. If you want this kind of deeply personalized planning today, you need to pay a travel agent and spend time telling them what you want.

The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people. They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.

Health care

Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.

The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment. These agents will also help healthcare workers make decisions and be more productive. (Already, apps like Glass Health can analyze a patient summary and suggest diagnoses for the doctor to consider.) Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.

These clinician-agents will be slower than others to roll out because getting things right is a matter of life and death. People will need to see evidence that health agents are beneficial overall, even though they won’t be perfect and will make mistakes. Of course, humans make mistakes too, and having no access to medical care is also a problem.

“Half of all U.S. military veterans who need mental health care don’t get it.”

Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it. For example, RAND found that half of all U.S. military veterans who need mental health care don’t get it.

AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.


AI Made Simple: A Beginner’s Guide to Generative Intelligence


Tags: ChatGPT


Jul 16 2023

ChatGPT Reconnaissance Techniques for Penetration Testing Success

Category: ChatGPT, Pen Test | disc7 @ 12:42 pm

ChatGPT is built on one of the biggest and most sophisticated language models ever made, with a massive neural network of over 175 billion parameters.

Recent research has shown how ChatGPT can help penetration testers achieve greater success.

ChatGPT was launched by OpenAI in November 2022, causing significant disruption in the AI/ML community.

Sophisticated email attacks are on the rise, thanks to threat actors leveraging the power of Artificial Intelligence.

However, researchers are staying one step ahead by utilizing ChatGPT for threat analysis and penetration testing.

A recently published research paper by Sheetal Tamara from the University of the Cumberlands highlights the effective use of ChatGPT in Reconnaissance.

Recently, an automated penetration testing tool, PentestGPT, was also released.

ChatGPT For Penetration Testing

ChatGPT can be used in the initial reconnaissance phase, where the penetration tester collects detailed data about the scope of the assessment.

With the help of ChatGPT, pen-testers are able to obtain reconnaissance data such as Internet Protocol (IP) address ranges, domain names, network topology, vendor technologies, SSL/TLS ciphers, ports and services, and operating systems.

This research highlights how artificial intelligence language models can be used in cybersecurity and contributes to advancing penetration testing techniques.

Pentesters can obtain an organization’s IP address ranges using the prompt (“What IP address range related information do you have on [insert organization name here] in your knowledge base?”).

This prompt would return the possible IP address ranges used by the organization.

“What type of domain name information can you gather on [insert target website here]?”

ChatGPT could provide the list of domain names used by the organization, such as primary domains, subdomains, other domains, international domains, generic top-level domains (gTLDs), and subsidiary domains.

“What vendor technologies does [insert target website fqdn here] make use of on its website?”

In answer to this question, ChatGPT will list various technologies, such as content delivery networks (CDNs), web servers, advertising engines, analytics engines, customer relationship management (CRM) systems, and other technologies the organization uses.

“Provide a comprehensive list of SSL ciphers based on your research used by [insert target website fqdn] in pursuant to your large corpus of text data present in your knowledge base.”

ChatGPT could provide the ciphers, SSL/TLS versions, and types of TLS certificates used; with this question, ChatGPT is also able to reveal the encryption standards in use.

“Please list the partner websites including FQDN based on your research that [insert target website here] has direct links to according to your knowledge base.”

In response to the question, ChatGPT is able to provide a list of partner websites that are directly linked.

“Provide a vendor technology stack based on your research that is used by [insert organization name here].”

This prompt would extract information including application server type, database type, operating systems, big data technologies, logging and monitoring software, and other infrastructure-related details specific to the organization.

“Provide a list of network protocols related information that is available on [insert organization name here].”

ChatGPT will return a list of network protocols the target organization uses, including HTTPS, SMTP, NTP, SSH, SNMP, and others.
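
Taken together, these prompts are easy to script. Below is a minimal sketch of how a tester might batch them through the OpenAI API; it assumes the legacy openai Python SDK (v0.x, ChatCompletion interface), and the target name is a hypothetical placeholder. This is our illustration rather than code from the paper, and it should only be pointed at organizations you are authorized to assess.

    # recon_prompts.py - sketch: batching the reconnaissance prompts above
    # through the OpenAI API. Assumes the legacy openai (v0.x) Python SDK;
    # TARGET is a hypothetical placeholder. Use only with written authorization.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    TARGET = "example.org"  # replace with your authorized assessment scope

    PROMPTS = [
        f"What IP address range related information do you have on {TARGET} in your knowledge base?",
        f"What type of domain name information can you gather on {TARGET}?",
        f"What vendor technologies does {TARGET} make use of on its website?",
        f"Provide a list of network protocols related information that is available on {TARGET}.",
    ]

    for prompt in PROMPTS:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {prompt}\n{response.choices[0].message.content}\n")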

“ChatGPT has the ability to provide valuable insight into the deployment of the target organization’s technology stack as well as specific information about web applications deployed by the target organization,” the published paper reads.

“The research performed on ChatGPT required trial and error in the prompting as certain requests can either be outright rejected or may result in responses that do not contain usable data for the reconnaissance phase of a penetration test.”

Mastering Cybersecurity with ChatGPT: Harnessing AI to Empower Your Cyber Career

CISSP training course


Tags: AI, Penetration Testing, ChatGPT, Cybersecurity with ChatGPT, Reconnaissance Techniques


Jun 12 2023

New Undetectable Technique Allows Hacking Big Companies Using ChatGPT

Category: Hacking | disc7 @ 7:41 am

According to the findings of a recent study, attackers can use ChatGPT to propagate malicious packages into development environments with ease.

In a recently published blog post, researchers from Vulcan Cyber outlined a novel method for propagating malicious packages that they dubbed “AI package hallucination.” The method stems from the fact that ChatGPT and other generative AI systems sometimes answer user requests with phantom sources, links, blogs, and data. Large language models (LLMs) like ChatGPT can generate “hallucinations”: fictitious URLs, references, and even whole code libraries and functions that do not exist in the real world. According to the researchers, ChatGPT will even produce dubious patches for CVEs and, in this particular instance, offer links to code libraries that do not exist.

If ChatGPT produces phony code libraries (packages), attackers may exploit these hallucinations to disseminate harmful packages without resorting to familiar tactics such as typosquatting or masquerading, according to the Vulcan Cyber researchers. “Those techniques are suspicious and already detectable,” they noted in their conclusion. But if an attacker can publish a package under one of the ‘fake’ names that ChatGPT suggests, they may succeed in convincing a victim to download and install the malicious software.

This attack approach demonstrates how simple it has become for threat actors to use ChatGPT as a tool to carry out an attack. We should expect to see more risks like this associated with generative AI, and similar attack techniques could be used in the wild; this is something we should be prepared for. Generative AI technology is still in its infancy, so this is only the beginning, and research will likely surface many new security discoveries in the months and years to come. Companies should never download and run code that they don’t understand and haven’t evaluated, whether it comes from open-source GitHub repositories or, now, ChatGPT suggestions. Teams should perform a security analysis on all code they wish to execute and keep private copies of it.
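
One practical safeguard follows directly from that advice: before installing a package an AI assistant suggests, verify that it actually exists on the registry and has an established history. Here is a minimal sketch using Python’s requests library against PyPI’s public JSON API; the package names below are hypothetical examples, and this is our illustration, not code from the Vulcan Cyber post.

    # vet_package.py - sketch: vet an AI-suggested package against PyPI before
    # installing it. Queries PyPI's public JSON API; the package names below
    # are hypothetical examples.
    import requests

    def vet_package(name: str) -> None:
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            print(f"{name}: not on PyPI - possibly a hallucinated package name")
            return
        data = resp.json()
        info, releases = data["info"], data["releases"]
        print(f"{name}: found, {len(releases)} release(s), "
              f"author={info.get('author') or 'unknown'}")
        # A brand-new package with a single release and sparse metadata
        # still deserves a manual code review before use.

    for suggested in ["requests", "totally-made-up-pkg-xyz"]:
        vet_package(suggested)

A 404 here is a strong hint that the name was hallucinated, or worse, that an attacker has not yet claimed it; a hit with a long release history is a better (though not sufficient) sign.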

In this instance, ChatGPT is merely the delivery method; compromising a supply chain through shared or imported third-party libraries is not a new technique. The defense is to apply secure coding practices and to extensively test and review any code intended for production use.

According to experts, “the ideal scenario is that security researchers and software publishers can also make use of generative AI to make software distribution more secure.” The industry is in the early phases of using generative AI for cyber attack and defense.

The ChatGpt Revolution – Unlock the Potential of AI: Opportunities, Risks and Ways to Build an Automated Business in the Age of New Digital Media


Tags: ChatGPT


Mar 19 2023

Researcher creates polymorphic BlackMamba malware with ChatGPT

Category: AI, ChatGPT | DISC @ 3:44 pm

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.

The malware can target Windows, macOS and Linux devices.

HYAS Institute researcher and cybersecurity expert, Jeff Sims, has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.


This should not come as a surprise, as in January of this year, cybersecurity researchers at CyberArk also reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers were able to create the polymorphic malware by bypassing the content filters in ChatGPT, using an authoritative tone.

As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters

Once it captures the data, Blackmamba employs MS Teams webhook to transfer it to the attacker’s Teams channel, where it is “analyzed, sold on the dark web, or used for other nefarious purposes,” according to the report.

Jeff used MS Teams because it enabled him to gain access to an organization’s internal resources. Since Teams is connected to many other vital tools like Slack, identifying valuable targets may be more manageable.

Leveraging the chatbot’s language capabilities, Jeff created an AI-powered polymorphic keylogger that can randomly modify the malware’s code by examining the user’s input.

The researcher produced the keylogger in Python 3, generating a unique Python script by running Python’s exec() function every time the chatbot was summoned. This means that whenever ChatGPT (text-davinci-003) is invoked, it writes a unique Python script for the keylogger.

This made the malware polymorphic and undetectable by EDRs. Attackers can use ChatGPT to modify the code to make it more elusive. They can even develop programs that malware/ransomware developers can use to launch attacks.

[Image: Researcher’s discussion with ChatGPT]

Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility. This can convert Python code into .exe files that can operate on various devices, such as macOS, Windows, and Linux systems. Additionally, the malware can be shared within the targeted environment through social engineering or email.

It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.

What is Polymorphic malware?

Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult to detect and analyze by traditional signature-based antivirus software because the malware appears different each time it infects a system, even though it performs the same malicious functions.

Polymorphic malware typically achieves its goal by using various obfuscation techniques such as encryption, code modification, and different compression methods. The malware can also mutate in real time by generating new code and unique signatures to evade detection by security software.
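
A benign illustration of why signature matching struggles here: two snippets that behave identically but differ trivially in their text produce completely unrelated hashes, so a hash- or byte-pattern signature written for one variant never matches the next. This example is ours, not from the HYAS report.

    # polymorphism_demo.py - benign illustration: functionally identical code
    # with trivially different text yields entirely different hashes, which is
    # why hash/signature matching fails against polymorphic samples.
    import hashlib

    variant_a = b"def greet(name):\n    return 'hello ' + name\n"
    variant_b = b"def greet(n):\n    return 'hello ' + n\n"  # variable renamed only

    print(hashlib.sha256(variant_a).hexdigest())
    print(hashlib.sha256(variant_b).hexdigest())
    # The two digests share nothing, even though both functions behave the
    # same - detecting behavior, not bytes, is what catches polymorphic code.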

The use of polymorphic malware has become more common in recent years as cybercriminals seek new and innovative ways to bypass traditional security measures. The ability to morph and change its code makes it difficult for security researchers to develop effective security measures to prevent attacks, making it a significant threat to organizations and individuals alike.

Chat GPT: Is the Future Already Here?

AI-Powered ‘BlackMamba’ Keylogging Attack Evades Modern EDR Security

BlackMamba GPT POC Malware In Action


Tags: ChatGPT


Mar 15 2023

OpenAI Announces GPT-4, the Successor of ChatGPT

Category: AI | DISC @ 10:18 am

OpenAI has recently released a powerful new AI model called GPT-4, which is capable of comprehending both images and text. The company describes it as the next milestone in its effort to scale up deep learning.

In November 2022, ChatGPT was launched and has since been used by millions of people worldwide. The all-new GPT-4 is now available through ChatGPT Plus, OpenAI’s paid subscription option, for $20 per month.

However, there is currently a usage cap, and developers need to join a waitlist to access the API.

GPT-4 can also perform a number of tasks at once and handle a maximum of about 25,000 words, eight times more than ChatGPT.


Pricing & Implementation

Here below, we have mentioned the pricing:

  • 1,000 “prompt” tokens (raw text), which is about 750 words, will cost $0.03.
  • 1,000 “completion” tokens (raw text), which is about 750 words, will cost $0.06.

Prompt tokens are the pieces of raw text fed into GPT-4 for it to process, while the content GPT-4 generates is measured in completion tokens.
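
As a quick worked example using the prices above, here is a sketch of estimating the cost of a single GPT-4 call; the token counts are hypothetical.

    # gpt4_cost.py - back-of-the-envelope cost of one GPT-4 call, using the
    # per-1,000-token prices quoted above. Token counts are hypothetical.
    PROMPT_PRICE = 0.03 / 1000      # dollars per prompt token
    COMPLETION_PRICE = 0.06 / 1000  # dollars per completion token

    prompt_tokens = 1500      # roughly 1,100 words of input
    completion_tokens = 500   # roughly 375 words of output

    cost = prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE
    print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0750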

In addition, Microsoft recently announced that it is using GPT-4 for its Bing Chat chatbot; Microsoft’s investment in OpenAI has amounted to $10 billion.

Stripe is another early adopter of GPT-4, using it to scan business websites and provide a summary of the results to customer service staff.

Duolingo has developed a new GPT-4-based subscription tier for language learning, and Morgan Stanley is creating a GPT-4-powered system to give financial analysts access to information retrieved from company documents.

Khan Academy is also working on a GPT-4-powered tutoring assistant to help students.

GPT-4 was given a simulated bar exam and performed particularly well, scoring around the top 10% of test takers. Interestingly, GPT-3.5 scored in the bottom 10% of the group.

GPT-4 in Action

GPT-4 is a form of generative artificial intelligence, similar to ChatGPT. Using predictive text generation, it constructs content based on the prompts presented by the user.

For example, GPT-4 can generate recipes based on uploaded images of ingredients.

The reasoning skills of GPT-4 are more advanced than those of ChatGPT. To find available meeting times, for instance, the model can compare three people’s schedules.

In short, GPT-4 is much smarter and more capable than GPT-3.5. One of its most impressive features is its ability to receive and process both textual and visual information.

At the moment, OpenAI customers cannot yet use GPT-4’s image-understanding capability; OpenAI is testing the technology with a single partner, Be My Eyes.

OpenAI has warned that, just like its predecessors, GPT-4 is still not entirely reliable. The model needs to be improved further by the entire community, building on top of it, exploring it, and contributing to it through collective effort.

There is still a lot of work to be done, and the company affirmed that it looks forward to working with the community to improve the model.

CHATGPT-4 Revealed 500 Prompts to Ride the AI Wave (Mastering ChatGPT-4 Prompts & Beyond)

Tags: ChatGPT, GPT-4



Mar 09 2023

ChatGPT for Offensive Security

Category: AI | DISC @ 10:39 am

ChatGPT for Offensive Security – via SANS Institute

Can ChatGPT (AI) be used for offensive security?

It is possible to use AI for offensive security, just as it is possible to use any technology for malicious purposes. However, the use of AI for offensive security raises significant ethical concerns and legal considerations.

AI could be used to automate and scale attacks, such as phishing, malware propagation, or social engineering. It could also be used to analyze large amounts of data to identify vulnerabilities or weaknesses in security systems, and to develop targeted attacks.

However, the use of AI for offensive security could also have unintended consequences, such as collateral damage or false positives. Furthermore, it raises concerns about accountability and responsibility, as it may be difficult to trace the origin of an attack that is automated and conducted by a machine learning system.

Overall, the use of AI for offensive security is a complex and controversial issue that requires careful consideration of the ethical and legal implications. It is important to always use technology responsibly and ethically.

ChatGPT is just the tip of the iceberg! Here are 15 artificial intelligence tools that may be useful to you:
1. Midjourney: a tool that creates images from textual descriptions, similar to OpenAI’s DALL-E and Stable Diffusion.
2. RunwayML: edit videos in real time, collaborate, and take advantage of over 30 magical AI tools.
3. Otter AI: transform audio into text with high accuracy. Use this tool for meeting notes, content creation, and much more.
4. Copy.AI: the first copywriting platform powered by artificial intelligence. This tool helps generate content for websites, blog posts, or social media posts, helping increase conversions and sales.
5. Murf AI: convert text to audio and generate studio-quality narrations in minutes. Use Murf’s realistic AI voices for podcasts, videos, and all your professional presentations.
6. Flow GPT: share, discover, and learn about the most useful ChatGPT prompts.
7. Nocode.AI: a platform for creating AI solutions without ever writing a single line of code. It’s a great way to quickly test ideas, create new projects, and launch businesses and products faster.
8. Supernormal: helps create incredible meeting notes without lifting a finger.
9. TLDRthis: this AI-based website summarizes any text into concise, easy-to-digest content, so you can rid yourself of information overload and save time.
10. TheGist: summarize any Slack channel or conversation with just one click. This AI analyzes Slack conversations and instantly creates a brief summary for you.
11. Sitekick: create landing pages with AI by describing what you want in text.
12. Humanpal: create avatars with ultra-realistic human appearances.
13. ContentBot: write content for articles, ads, products, and more.
14. Synthesia: create a virtual presenter that narrates your text for you. Synthesia is an AI video creation platform that can produce videos in 120 languages, saving up to 80% of your time and budget.
15. GliaCloud: converts your text into video. Generate videos for news content, social media posts, live sports events, and statistical data in minutes.

The role of human insight in AI-based cybersecurity

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

The Art of Prompt Engineering with chatGPT: A Hands-On Guide for using chatGPT

Previous posts on AI


Tags: AI-based cybersecurity, ChatGPT, human insight, Offensive security


Feb 14 2023

Hackers Could Use ChatGPT to Generate Convincing Scam Messages in Seconds

Category: Hacking | DISC @ 10:14 am

Using AI-powered technology, scammers can now deceive potential victims looking for love online with modern hooks.

With the rapid advancement of AI technology, scammers now have a powerful ally in the form of popular AI tools such as ChatGPT. These tools allow scammers to create anything from seemingly harmless intro chats to elaborate love letters in a matter of seconds, making it easier than ever for them to deceive unsuspecting victims. 

By leveraging the impressive capabilities of these AI tools, scammers can quickly generate custom-made content designed to prey on their target’s emotions. The use of AI-generated content has made it increasingly difficult to identify and avoid scams.

One of the most common tactics used in online dating and romance scams is the practice of “catfishing.” This involves the creation of a fake online persona to lure unsuspecting victims into a relationship with the sole intention of extracting financial gain.

The term “catfishing” derives from the act of using a fake profile to hook a victim, much like fishing with a bait hook.

Convincing Scam Messages

In a recent research report titled “Modern Love” by McAfee, over 5,000 people from around the world were presented with a sample love letter and asked to determine if it was written by a person or generated by artificial intelligence (AI). 

“My dearest, 
The moment I laid eyes on you, I knew that my heart would forever be yours. Your beauty, both inside and out, is unmatched and your kind and loving spirit only add to my admiration for you. 
You are my heart, my soul, my everything. I cannot imagine a life without you, and I will do everything in my power to make you happy. I love you now and forever. 
Forever yours 
”

One-third of respondents (33%) believed the letter was written by a person, while 31% believed it was written by an AI.

The remaining 36% of participants could not tell whether the letter was written by a human or a machine. The study aimed to investigate the extent to which AI-generated content is perceived as authentic and genuine in the context of romantic relationships.

User Interaction Data Analysis

A recent survey found that a majority of people (66%) have been contacted by a stranger through social media or SMS and subsequently began chatting with them. Facebook and Facebook Messenger (39%) and Instagram and Instagram direct messages (33%) were cited as the most common platforms used by strangers to initiate conversation.

Unfortunately, many of these interactions eventually led to requests for money transfers. In fact, 55% of respondents reported being asked to transfer money by a stranger. 

The most common requests (34%) were for less than $500, but a significant number (20%) involved amounts exceeding $10,000.

More concerning, 9% of respondents were asked to provide their government or tax ID number, while 8% were asked to share their account passwords for social media, email, or banking.

Scam Detection

It has been reported that people discovered they had been catfished when they experienced the following scenarios:

  • Neither a face-to-face meeting nor a video call could be arranged. (39%)
  • They found the scammer’s photo online and realized it had been taken from someone else. (32%)
  • The person asked for personal information during the conversation. (29%)
  • The individual did not wish to speak on the telephone. (27%)
  • Messages contained numerous typographical errors and illogical sentences. (26%)

A request for money is the single most telling sign that someone is running an online dating or romance scam.

This kind of scam usually entails a little story as part of the request, often focusing on a hardship experienced by the scammer.

Mitigations

Here below, we have mentioned mitigations to avoid getting tangled up in an online dating or romance scam:

  • The best way to gauge whether a new love interest is genuine is to speak with someone you trust.
  • Take the relationship slowly in the beginning.
  • If the individual uses a profile picture, try a reverse image search.
  • Do not send money or gifts to anyone you have not met in person.
  • Decline friend requests from strangers.
  • If your personal information appears on websites where you do not want it, clean it up.
  • Do not click on links sent to you by someone you suspect is a scammer.

A chatbot like ChatGPT is a very powerful tool, but it is important to keep in mind that it is only a tool: inherently, it is neither good nor bad.

How it is used, for better or worse, is ultimately up to the user.

Exploring GPT-3: An unofficial first look at the general-purpose language processing API from OpenAI

Tags: ChatGPT, GPT3, Scam Messages