Mar 15 2023

OpenAI Announces GPT-4, the Successor of ChatGPT

Category: AIDISC @ 10:18 am

OpenAI has released GPT-4, a powerful new AI model capable of understanding both images and text. The company describes it as the latest milestone in its effort to scale up deep learning.

ChatGPT launched in November 2022 and has since been used by millions of people worldwide. GPT-4 is now available through ChatGPT Plus, OpenAI’s paid subscription tier, which costs $20 per month.

Usage is currently capped, however, and developers must join a waitlist to access the API.

GPT-4 can also handle much longer inputs, up to about 25,000 words, roughly eight times more than ChatGPT.

GPT-4

Pricing & Implementation

Pricing is as follows:

  • 1,000 “prompt” tokens (raw text, about 750 words) cost $0.03.
  • 1,000 “completion” tokens (raw text, about 750 words) cost $0.06.

Prompt tokens are the pieces of text fed into GPT-4 for it to work on, while the content GPT-4 generates is counted as completion tokens.
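As a rough illustration of the rates above (the token-to-word ratio is approximate), the cost of a single API call can be estimated like this:

```python
# Rough GPT-4 API cost estimator based on the published rates above.
# About 750 words correspond to roughly 1,000 tokens.

PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost of one GPT-4 API call."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a 1,000-token prompt that produces a 500-token completion.
print(f"${estimate_cost(1000, 500):.2f}")  # $0.03 + $0.03 = $0.06
```

This is only a back-of-the-envelope sketch; actual billing depends on the exact token counts the API reports.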

In addition, Microsoft recently confirmed that its Bing Chat chatbot runs on GPT-4; the company’s investment in OpenAI has reportedly reached $10 billion.

Stripe is another early adopter of GPT-4, using it to scan business websites and summarize the results for customer service staff.

Duolingo has built a new language-learning subscription tier on GPT-4, and Morgan Stanley is creating a GPT-4-powered system that gives financial analysts access to information retrieved from company documents.

Khan Academy is also working on using GPT-4 to automate a form of tutoring for students.

GPT-4 performed particularly well on a simulated bar exam, scoring around the top 10% of test takers. GPT-3.5, by contrast, scored around the bottom 10%.

GPT-4 in Action

GPT-4, like ChatGPT, is a form of generative artificial intelligence: using predictive text, it constructs content based on the prompts a user provides.

In one demonstration, GPT-4 generated recipes based on an image of ingredients that had been uploaded.

GPT-4’s reasoning skills are also more advanced than ChatGPT’s. Given three people’s schedules, for instance, the model can find meeting times that work for all of them.

In short, GPT-4 is much smarter and more capable than GPT-3.5. One of its most impressive features is its ability to receive and process both textual and visual information.

The image-understanding capability is not yet available to OpenAI customers; the company is currently testing it with a single partner, Be My Eyes.

OpenAI has warned that, like its predecessors, GPT-4 is still not entirely reliable, and it is inviting the wider community to improve the model by building on top of it, exploring it, and contributing through collective effort.

There is still much work to be done, and the company says it looks forward to collaborating on improvements.

CHATGPT-4 Revealed 500 Prompts to Ride the AI Wave (Mastering ChatGPT-4 Prompts & Beyond)

Tags: ChatGPT, GPT-4


Mar 09 2023

ChatGPT for Offensive Security

Category: AIDISC @ 10:39 am

ChatGPT for Offensive Security – via SANS Institute

Can ChatGPT (AI) be used for offensive security?

It is possible to use AI for offensive security, just as it is possible to use any technology for malicious purposes. However, the use of AI for offensive security raises significant ethical concerns and legal considerations.

AI could be used to automate and scale attacks, such as phishing, malware propagation, or social engineering. It could also be used to analyze large amounts of data to identify vulnerabilities or weaknesses in security systems, and to develop targeted attacks.

However, the use of AI for offensive security could also have unintended consequences, such as collateral damage or false positives. Furthermore, it raises concerns about accountability and responsibility, as it may be difficult to trace the origin of an attack that is automated and conducted by a machine learning system.

Overall, the use of AI for offensive security is a complex and controversial issue that requires careful consideration of the ethical and legal implications. It is important to always use technology responsibly and ethically.

ChatGPT is just the tip of the iceberg! 15 artificial intelligence tools that may be useful to you:
1. Midjourney: A tool that creates images from textual descriptions, similar to OpenAI’s DALL-E and Stable Diffusion.
2. RunwayML: Edit videos in real time, collaborate and take advantage of over 30 magical AI tools.
3. Otter AI: Transform audio into text with high accuracy. Use this tool for meeting notes, content creation and much more.
4. Copy.AI: This is the first copyright platform powered by artificial intelligence. This tool helps generate content for websites, blog posts, or social media posts, helping increase conversions and sales.
5. Murf AI: Convert text to audio: generate studio-quality narrations in minutes. Use Murf’s realistic AI voices for podcasts, videos and all your professional presentations.
6. Flow GPT: Share, discover and learn about the most useful ChatGPT prompts.
7. Nocode.AI: The Nocode platform is a way to create AI solutions without ever writing a single line of code. It’s a great way to quickly test ideas, create new projects, and launch businesses and new products faster.
8. Supernormal: This tool helps create incredible meeting notes without lifting a finger.
9. TLDRthis: This AI-based website helps you summarize any part of a text into concise and easy-to-digest content, so that you can rid yourself of information overload and save time.
10. TheGist: Summarize any Slack channel or conversation with just one click! This AI analyzes Slack conversations and instantly creates a brief summary for you.
11. Sitekick: Create landing pages with AI by telling it what you want via text.
12. Humanpal: Create Avatars with ultra-realistic human appearances!
13. ContentBot: Write content for articles, ads, products, etc.
14. Synthesia: Create a virtual presenter that narrates your text for you. Synthesia is an AI video creation platform; it can produce videos in 120 languages, saving up to 80% of your time and budget.
15. GliaCloud: This tool converts your text into video. Generate videos for news content, social media posts, live sports events, and statistical data in minutes.

The role of human insight in AI-based cybersecurity

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

The Art of Prompt Engineering with chatGPT: A Hands-On Guide for using chatGPT

Previous posts on AI

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: AI-based cybersecurity, ChatGPT, human insight, Offensive security


Feb 08 2023

Developers Created AI to Generate Police Sketches. Experts Are Horrified

Category: AIDISC @ 11:56 pm

Police forensics is already plagued by human biases. Experts say AI will make it even worse.

Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can create “hyper-realistic” police sketches of a suspect based on user inputs. 

The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program’s purpose is to cut down the time it usually takes to draw a sketch of a suspect, which is “around two to three hours,” according to a presentation uploaded to the internet.

“We haven’t released the product yet, so we don’t have any active users at the moment,” Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”

AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.     

“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”

The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait. 
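The template-to-prompt step described above can be sketched roughly as follows. This is a hypothetical illustration only: the field names and prompt wording are assumptions, not the program’s actual code.

```python
# Hypothetical illustration of assembling template fields into a text
# description for an image-generation model such as DALL-E 2.
# Field names and wording are assumptions for illustration.

def build_prompt(fields: dict) -> str:
    """Join the non-empty template fields into one description string."""
    parts = [f"{name}: {value}" for name, value in fields.items() if value]
    return "Forensic sketch portrait. " + ", ".join(parts)

prompt = build_prompt({
    "gender": "male",
    "age": "40s",
    "hair": "short gray",
    "beard": "",  # empty fields are skipped
})
print(prompt)
```

In the real program, the resulting description would then be submitted to the image model to produce the AI-generated portrait, which is exactly the step the ethicists quoted above find so troubling.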

For more details: Developers Created AI to Generate Police Sketches. Experts Are Horrified

https://www.vice.com/en/article/qjk745/ai-police-sketches


Oct 19 2021

Using Machine Learning to Guess PINs from Video

Category: AI,HackingDISC @ 11:01 am

#MachineLearning: Hacking Tools for Computer + Hacking With Kali Linux + Python Programming- The ultimate beginners guide to improve your knowledge of programming and data science

Tags: Machine Learning, Machine Learning to Guess PINs


Jul 04 2021

Attackers use ‘offensive AI’ to create deepfakes for phishing campaigns

Category: AIDISC @ 10:05 am

Malware Analysis Using Artificial Intelligence and Deep Learning

Tags: deepfakes for phishing


May 24 2021

AIs and Fake Comments

Category: AIDISC @ 8:49 am

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy ­– the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

Democracy and Fake News: Information Manipulation and Post-Truth Politics – the analysis of post-truth politics.

The volume sheds light on some topical questions connected to fake news, thereby contributing to a fuller understanding of its impact on democracy. In the Introduction, the editors offer some orientating definitions of post-truth politics, building a theoretical framework where various different aspects of fake news can be understood. The book is then divided into three parts: Part I helps to contextualize the phenomena investigated, offering definitions and discussing key concepts as well as aspects linked to the manipulation of information systems, especially considering its reverberation on democracy. Part II considers the phenomena of disinformation, fake news, and post-truth politics in the context of Russia, which emerges as a laboratory where the phases of creation and diffusion of fake news can be broken down and analyzed; consequently, Part II also reflects on the ways to counteract disinformation and fake news. Part III moves from case studies in Western and Central Europe to reflect on the methodological difficulty of investigating disinformation, as well as tackling the very delicate question of detection, combat, and prevention of fake news.

Tags: AIs and Fake Comments, Information Manipulation


Apr 27 2021

When AIs Start Hacking

Category: AI,IoT SecurityDISC @ 5:00 pm

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than us. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

When AIs Start Hacking


« Previous Page