Sep 18 2023

Microsoft AI researchers accidentally exposed terabytes of internal sensitive data

Category: AI, Data Breach | disc7 @ 8:46 am

Researchers found a GitHub repository belonging to Microsoft’s AI research unit that exposed 38TB of sensitive data, including secret keys and Teams chat logs. Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data, including private keys and passwords …


Aug 24 2023

Google AI in Workspace Adds New Zero-Trust and Digital Sovereignty Controls

Category: AI, Zero trust | disc7 @ 1:48 pm

Google announced security enhancements to Google Workspace focused on strengthening threat defense controls with Google AI.

At a Google Cloud press event on Tuesday, the company announced that over the course of this year it will roll out new AI-powered data security tools, bringing zero-trust and digital sovereignty features to Workspace apps such as Drive and Gmail. The enhancements to Google Drive, Gmail, the company’s security tools for IT and security operations teams, and more are designed to help global companies keep their data under lock and encrypted key, and to help security operators outrun advancing threats.

The Executive Guide to Zero Trust: Drivers, Objectives, and Strategic Considerations

Tags: Digital Sovereignty Controls


Jul 20 2023

How do you solve privacy issues with AI? It’s all about the blockchain

Category: AI, Blockchain, Information Privacy | disc7 @ 9:18 am

Data is the lifeblood of artificial intelligence (AI), and the power that AI brings to the business world — to unearth fresh insights, increase speed and efficiency, and multiply effectiveness — flows from its ability to analyze and learn from data. The more data AI has to work with, the more reliable its results will be.

Feeding AI’s need for data means collecting it from a wide variety of sources, which has raised concerns about AI gathering, processing, and storing personal data. The fear is that the ocean of data flowing into AI engines is not properly safeguarded.

Are you donating your personal data to generative AI platforms?

While protecting the data that AI tools like ChatGPT collect against breaches is a valid concern, it is actually only the tip of the iceberg when it comes to AI-related privacy issues. A more pressing issue is data ownership. Once you share information with a generative AI tool like Bard, who owns it?

Those who simply use generative AI platforms to help craft better social posts may not see the connection between these services and personal data security. But consider the person who is using an AI-driven chatbot to explore treatment for a medical condition, learn about remedies for a financial crisis, or find a lawyer. In the course of the exchange, those users will most likely share personal and sensitive information.

Every query posed to an AI platform becomes part of that platform’s data set without regard to whether or not it is personal or sensitive. ChatGPT’s privacy policy makes it clear: “When you use our Services, we collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services.” It also says: “In certain circumstances we may provide your Personal Information to third parties without further notice to you, unless required by the law…”

Looking to blockchain for data privacy solutions

While the US government has called for an “AI Bill of Rights” designed to protect sensitive data, it has yet to provide regulations that protect data ownership. Consequently, Google and Microsoft retain full ownership of the data their users provide as they comb the web with generative AI platforms. That data empowers them to train their AI models, but also to understand you better.

Those looking for a way to gain control of their data in the age of AI can find a solution in blockchain technology. Commonly known as the foundation of cryptocurrency, blockchain can also be used to keep personal data safe. By empowering a new type of digital identity management, known as a universal identity layer, blockchain allows you to decide how and when your personal data is shared.

Blockchain technology brings a number of factors into play that boost the security of personal data. First, it is decentralized: data is not stored in a single central database, so it is not exposed to that database’s vulnerabilities.

Blockchain also supports smart contracts, which are self-executing contracts that have the terms of an agreement written into their code. If the terms aren’t met, the contract does not execute, so data stored on the blockchain can be used only in the ways its owner stipulates.
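
To make that concrete, here is a minimal sketch of the idea in plain Python (illustrative logic only, not actual on-chain contract code; the DataUseContract class and its terms are hypothetical):

# A self-executing "contract": data access runs only if every term
# the owner wrote into the agreement is satisfied.
from dataclasses import dataclass
import time

@dataclass
class DataUseContract:
    owner: str
    allowed_purpose: str   # the purpose the owner permits
    expires_at: float      # Unix timestamp when access lapses

    def request_access(self, requester: str, purpose: str) -> str:
        # If any term fails, the contract simply does not execute.
        if purpose != self.allowed_purpose:
            raise PermissionError("purpose not permitted by the data owner")
        if time.time() > self.expires_at:
            raise PermissionError("access window has expired")
        return f"{self.owner}'s data released to {requester} for {purpose}"

# Usage: the owner grants one hour of access for a single purpose.
contract = DataUseContract("alice", "medical-research", time.time() + 3600)
print(contract.request_access("research-lab", "medical-research"))

A real smart contract would run on a blockchain platform rather than a single machine, but the principle is the same: the owner’s terms are enforced by code rather than by trust.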

Enhanced security is another factor blockchain brings to data protection. The cryptographic techniques it uses allow users to authenticate their identity without revealing sensitive data.
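
As an illustrative sketch of one such technique, the following Python example (using the third-party cryptography package; the scenario is hypothetical) shows a challenge-response proof in which a user demonstrates control of a private key, the basis of a blockchain identity, without revealing the key or any personal data:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import os

# User side: generate a keypair; the public key can serve as the identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Verifier side: issue a random one-time challenge.
challenge = os.urandom(32)

# User side: sign the challenge; the private key itself is never shared.
signature = private_key.sign(challenge)

# Verifier side: check the signature against the public identity.
# verify() raises InvalidSignature if the proof is bad.
public_key.verify(signature, challenge)
print("identity verified without revealing any sensitive data")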

Leveraging these factors to create a new type of identification framework gives users full control over who can use and view their information, for what purposes, and for how long. Once in place, this type of identity system could even let users monetize their data, charging the companies behind large language models (LLMs), such as OpenAI and Google, for the use of their personal data.

Ultimately, AI’s ongoing need for data may lead to the creation of platforms where users offer their data to LLMs for a fee. A blockchain-based universal identity layer would allow the user to choose who gets to use it, toggling access on and off at will. If you decide you don’t like the business practices Google has been employing, you can cut them off at the source.

That type of AI model illustrates the power that comes from securing data on a decentralized network. It also reveals the killer use case of blockchain that is on the horizon.

Aaron Rafferty is the CEO of Standard DAO and Co-Founder of BattlePACs, a subsidiary of Standard DAO. BattlePACs is a technology platform that transforms how citizens engage in politics and civil discourse. BattlePACs believes participation and conversations are critical to moving America toward a future that works for everyone.

Blockchain and Web3: Building the Cryptocurrency, Privacy, and Security Foundations of the Metaverse

Tags: AI privacy, blockchain, Blockchain and Web3


Apr 05 2023

How to Create Undetectable Malware via ChatGPT in 7 Easy Steps, Bypassing Its Restrictions

Category: AI, ChatGPT, Malware | DISC @ 9:35 am

There is evidence that ChatGPT has helped low-skill hackers generate malware, raising worries about the technology being abused by cybercriminals. ChatGPT cannot yet replace expert threat actors, but security researchers say there is evidence it can help low-skill hackers create malware.

Since its introduction in November, the OpenAI chatbot has served over 100 million users, around 13 million of them each day, generating text, music, poetry, stories, and plays in response to specific requests. It can also answer exam questions and even write software code.

It appears that malicious intent follows powerful technology, particularly when such technology is accessible to the general public. Despite the anti-abuse constraints that were supposed to prevent illegitimate requests, there is evidence on the dark web that individuals have used ChatGPT to develop dangerous material, something experts had feared would happen. With this in mind, a researcher from Forcepoint set out to write no code at all himself and instead rely entirely on ChatGPT, using only the most cutting-edge methods, such as steganography, that were previously the exclusive domain of nation-state adversaries.

The overarching goal of this exercise was to demonstrate two points:

  1. How simple it is to get around the inadequate barriers that ChatGPT has put in place.
  2. How simple it is to create sophisticated malware without writing any code, relying solely on ChatGPT.

Initially, ChatGPT informed him that malware creation is immoral and refused to provide code.

  1. To get around this, he generated small pieces of code and manually assembled the executable. The first successful task was producing code that searched for a local PNG larger than 5MB; the design rationale was that a 5MB PNG could readily hold a fragment of a business-sensitive PDF or DOCX.

  2. He then asked ChatGPT to add code that would encode the found PNG using steganography and exfiltrate it from the computer, requesting code that searches the user’s Documents, Desktop, and AppData directories and uploads the files to Google Drive.

  3. He then asked ChatGPT to combine these pieces of code and modify the result to divide the files into many “chunks” for quiet exfiltration using steganography.

  4. He submitted the MVP to VirusTotal, where five of sixty-nine vendors marked the file as malicious.

  5. The next step was to ask ChatGPT to create its own LSB steganography routine in the program without using the external library, and to postpone the effective start of execution by two minutes.

  6. The next change he asked ChatGPT to make was to obfuscate the code, a request it rejected. Once ChatGPT rejected his request, he tried again: by altering the request from obfuscating the code to converting all variables to random English first and last names, he got ChatGPT to cooperate cheerfully. As an extra test, he disguised the request as an effort to protect the code’s intellectual property; again, ChatGPT supplied sample code that obscured variable names and recommended Go modules for building fully obfuscated code.

  7. Finally, he uploaded the file to VirusTotal again to check detection.

And there we have it: a zero day. The researcher was able to construct a highly sophisticated attack in a matter of hours simply by following ChatGPT’s suggestions, with no coding of his own. Without the assistance of an AI-based chatbot, the same work would plausibly take a team of five to ten malware developers a few weeks, particularly if they wanted to evade all detection-based vendors.

ChatGPT for Startups

Tags: ChatGPT malware


Mar 20 2023

Most security pros turn to unauthorized AI tools at work

Category: AI, ChatGPT | DISC @ 10:52 am

The research demonstrates that embracing automation in cybersecurity leads to significant business benefits, such as addressing talent gaps and effectively combating cyber threats. According to the survey, organizations will continue investing in cybersecurity automation in 2023, even amid economic turbulence.

“As organizations look for long-term solutions to keep pace with increasingly complex cyberattacks, they need technologies that will automate time-consuming, repetitive tasks so security teams have the bandwidth to focus on the threats that matter most,” said Marc van Zadelhoff, CEO, Devo. “This report confirms what we’re already hearing from Devo customers: adopting automation in the SOC results in happier analysts, boosted business results, and more secure organizations.”

Security pros are using AI tools without authorization

According to the study, security pros suspect their organization would stop them from using unauthorized AI tools, but that’s not stopping them.

  • 96% of security pros admit to someone at their organization using AI tools not provided by their company – including 80% who cop to using such tools themselves.
  • 97% of security pros believe their organizations are able to identify their use of unauthorized AI tools, and more than 3 in 4 (78%) suspect their organization would put a stop to it if discovered.

Adoption of automation in the SOC

Organizations fail to adopt automation effectively, forcing security pros to use rogue AI tools to keep up with workloads.

  • 96% of security professionals are not fully satisfied with their organization’s use of automation in the SOC.
  • Reasons for dissatisfaction with SOC automation varied from technological concerns such as the limited scalability and flexibility of the available solutions (42%) to financial ones such as the high costs associated with implementation and maintenance (39%). But for many, concerns go back to people: 34% cite a lack of internal expertise and resources to manage the solution as a reason they are not satisfied.
  • Respondents indicated that they would opt for unauthorized tools because of better user interfaces (47%), more specialized capabilities (46%), and more efficient work (44%).

Investing in cybersecurity automation

Security teams will prioritize investments in cybersecurity automation in 2023 to solve organizational challenges, despite economic turbulence and widespread organizational cost-cutting.

  • 80% of security professionals predict an increase in cybersecurity automation investments in the coming year, including 55% who predict an increase of more than 5%.
  • 100% of security professionals reported positive business impacts as a result of using automation in cybersecurity, citing increased efficiency (70%) and financial gains (65%) as primary benefits.

Automation fills widening talent gaps

Adopting automation in the SOC helps organizations combat security staffing shortages in a variety of ways.

  • 100% of respondents agreed that automation would be helpful to fill staffing gaps in their team.
  • Incident analysis (54%), landscape analysis of applications and data sources (54%), and threat detection and response (53%) were the most common ways respondents said automation could make up for staffing shortages.

A Guide to Combining AI Tools Like Chat GPT, Quillbot, and Midjourney for Crafting Killer Fiction and Nonfiction (Artificial Intelligence Uses & Applications)

Tags: AI tools, AI Tools Like Chat GPT


Mar 19 2023

Researcher creates polymorphic Blackmamba malware with ChatGPT

Category: AI, ChatGPT | DISC @ 3:44 pm

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.

The malware can target Windows, macOS and Linux devices.

HYAS Institute researcher and cybersecurity expert Jeff Sims has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.

This should not come as a surprise, as in January of this year, cybersecurity researchers at CyberArk also reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers were able to create the polymorphic malware by bypassing the content filters in ChatGPT, using an authoritative tone.

As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters

Once it captures the data, Blackmamba employs an MS Teams webhook to transfer it to the attacker’s Teams channel, where it is “analyzed, sold on the dark web, or used for other nefarious purposes,” according to the report.

Jeff used MS Teams because it gave him access to an organization’s internal resources. Since Teams is connected to many other vital tools like Slack, identifying valuable targets may be more manageable.

Jeff created a polymorphic keylogger, powered by ChatGPT, that randomly modifies its own code by examining the user’s input, leveraging the chatbot’s language capabilities.

The researcher produced the keylogger in Python 3, creating a unique Python script by running Python’s exec() function every time the chatbot was summoned. This means that whenever ChatGPT/text-davinci-003 is invoked, it writes a unique Python script for the keylogger.

This made the malware polymorphic and undetectable by EDRs. Attackers can use ChatGPT to modify the code to make it more elusive. They can even develop programs that malware/ransomware developers can use to launch attacks.

Image: Researcher’s discussion with ChatGPT

Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility. This can convert Python code into .exe files that can operate on various devices, such as macOS, Windows, and Linux systems. Additionally, the malware can be shared within the targeted environment through social engineering or email.

It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.

What is polymorphic malware?

Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult to detect and analyze by traditional signature-based antivirus software because the malware appears different each time it infects a system, even though it performs the same malicious functions.
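
A minimal sketch of why that is, using only Python’s standard library: mutating even a single byte of a sample yields a completely different cryptographic fingerprint, one common form of signature, even though the behavior is unchanged (the byte strings here are harmless placeholders, not real malware):

import hashlib

sample_v1 = b"...identical malicious logic..."
sample_v2 = b"...identical malicious logic!.."  # one trivially mutated byte

# The two fingerprints share nothing, so a scanner matching on the
# first "signature" will not recognize the mutated copy.
print(hashlib.sha256(sample_v1).hexdigest())
print(hashlib.sha256(sample_v2).hexdigest())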

Polymorphic malware typically achieves its goal by using various obfuscation techniques such as encryption, code modification, and different compression methods. The malware can also mutate in real time by generating new code and unique signatures to evade detection by security software.

The use of polymorphic malware has become more common in recent years as cybercriminals seek new and innovative ways to bypass traditional security measures. The ability to morph and change its code makes it difficult for security researchers to develop effective security measures to prevent attacks, making it a significant threat to organizations and individuals alike.

Chat GPT: Is the Future Already Here?

AI-Powered ‘BlackMamba’ Keylogging Attack Evades Modern EDR Security

BlackMamba GPT POC Malware In Action

Tags: ChatGPT


Mar 15 2023

OpenAI Announces GPT-4, the Successor of ChatGPT

Category: AI | DISC @ 10:18 am

A powerful new AI model called GPT-4 has been released by OpenAI, capable of comprehending both images and text. The company describes it as the next milestone in its effort to scale up deep learning.

In November 2022, ChatGPT was launched and has since been used by millions of people worldwide. The all-new GPT-4 is now available through ChatGPT Plus, OpenAI’s paid subscription option, for $20 per month.

However, usage is currently capped, and developers must register on a waitlist to access the API.

GPT-4 can also perform a number of tasks at once, with a maximum word count of around 25,000, eight times more than ChatGPT can handle.

Pricing & Implementation

The pricing breaks down as follows:

  • 1,000 “prompt” tokens (raw text), which is about 750 words, cost $0.03.
  • 1,000 “completion” tokens (raw text), which is about 750 words, cost $0.06.

A prompt token is a fragment of a word fed into GPT-4 for it to work on, while the content that GPT-4 generates is counted as completion tokens.
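
As a quick illustration of those rates, here is a toy cost calculation in Python (an informal sketch based on the prices quoted above, not an official OpenAI utility):

# Cost in dollars at the quoted rates: $0.03 per 1,000 prompt tokens,
# $0.06 per 1,000 completion tokens.
def gpt4_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens / 1000 * 0.03 + completion_tokens / 1000 * 0.06

# Example: a 1,500-token prompt that yields a 500-token reply.
print(f"${gpt4_cost(1500, 500):.3f}")  # $0.075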

In addition, Microsoft recently announced that its Bing Chat chatbot runs on GPT-4. Microsoft’s investment in OpenAI has amounted to $10 billion.

Stripe is another early adopter of GPT-4, using it to scan business websites and provide a summary of the results to customer service staff.

Duolingo has developed a new GPT-4-based subscription tier for language learning. Morgan Stanley is creating a GPT-4-powered system to give financial analysts access to information retrieved from company documents.

Khan Academy also appears to be working on using GPT-4 to automate a form of tutoring that can help students.

GPT-4 was given a simulated bar exam, and it performed particularly well, scoring around the top 10% of test takers. Interestingly, GPT-3.5 scored in the bottom 10% of the group.

GPT-4 in Action

The GPT-4 algorithm is a form of generative artificial intelligence, like ChatGPT. Using algorithms and predictive text, the generative AI constructs content based on the prompts presented by the user.

In one demonstration, GPT-4 generated recipes based on uploaded images of ingredients.

The reasoning skills of GPT-4 are more advanced than those of ChatGPT. For instance, given three schedules, the model can search for available meeting times across them.

In short, GPT-4 is much smarter and more capable than GPT-3.5. One of its most impressive features is that it can receive and process both textual and visual information.

For now, OpenAI customers cannot use GPT-4’s image-understanding capability; OpenAI is testing this technology with a single partner, Be My Eyes.

OpenAI has warned that, just like its predecessors, GPT-4 is still not entirely reliable. The company says the model should be further improved through collective effort, with the community building on top of it, exploring it, and contributing to it.

There is still a lot of work to be done, and the company affirmed that they are looking forward to working together to improve it.

CHATGPT-4 Revealed 500 Prompts to Ride the AI Wave (Mastering ChatGPT-4 Prompts & Beyond)

Tags: ChatGPT, GPT-4


Mar 09 2023

ChatGPT for Offensive Security

Category: AIDISC @ 10:39 am

ChatGPT for Offensive Security – via SANS Institute

Can ChatGPT (AI) be used for offensive security?

It is possible to use AI for offensive security, just as it is possible to use any technology for malicious purposes. However, the use of AI for offensive security raises significant ethical concerns and legal considerations.

AI could be used to automate and scale attacks, such as phishing, malware propagation, or social engineering. It could also be used to analyze large amounts of data to identify vulnerabilities or weaknesses in security systems, and to develop targeted attacks.

However, the use of AI for offensive security could also have unintended consequences, such as collateral damage or false positives. Furthermore, it raises concerns about accountability and responsibility, as it may be difficult to trace the origin of an attack that is automated and conducted by a machine learning system.

Overall, the use of AI for offensive security is a complex and controversial issue that requires careful consideration of the ethical and legal implications. It is important to always use technology responsibly and ethically.

Chat GPT is just the tip of the iceberg! Here are 15 artificial intelligence tools that may be useful to you:
1. Midjourney: a tool that creates images from textual descriptions, similar to OpenAI’s DALL-E and Stable Diffusion.
2. RunwayML: edit videos in real time, collaborate, and take advantage of over 30 magical AI tools.
3. Otter AI: transform audio into text with high accuracy. Use this tool for meeting notes, content creation, and much more.
4. Copy.AI: the first copywriting platform powered by artificial intelligence. This tool helps generate content for websites, blog posts, or social media posts, helping increase conversions and sales.
5. Murf AI: convert text to audio and generate studio-quality narrations in minutes. Use Murf’s realistic AI voices for podcasts, videos, and all your professional presentations.
6. Flow GPT: share, discover, and learn about the most useful ChatGPT prompts.
7. Nocode.AI: a platform for creating AI solutions without ever writing a single line of code. It’s a great way to quickly test ideas, create new projects, and launch businesses and products faster.
8. Supernormal: helps create incredible meeting notes without lifting a finger.
9. TLDRthis: this AI-based website summarizes any piece of text into concise, easy-to-digest content, so you can rid yourself of information overload and save time.
10. TheGist: summarize any Slack channel or conversation with one click! This AI analyzes Slack conversations and instantly creates a brief summary for you.
11. Sitekick: create landing pages with AI by telling it what you want via text.
12. Humanpal: create avatars with ultra-realistic human appearances!
13. ContentBot: write content for articles, ads, products, and more.
14. Synthesia: create a virtual presenter that narrates your text for you. Synthesia is an AI video creation platform; it can create videos in 120 languages, saving up to 80% of your time and budget.
15. GliaCloud: converts your text into video. Generate videos for news content, social media posts, live sports events, and statistical data in minutes.

The role of human insight in AI-based cybersecurity

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

The Art of Prompt Engineering with chatGPT: A Hands-On Guide for using chatGPT

Previous posts on AI

Tags: AI-based cybersecurity, ChatGPT, human insight, Offensive security


Feb 08 2023

Developers Created AI to Generate Police Sketches. Experts Are Horrified

Category: AI | DISC @ 11:56 pm

Police forensics is already plagued by human biases. Experts say AI will make it even worse.

Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can create “hyper-realistic” police sketches of a suspect based on user inputs.

The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program’s purpose is to cut down the time it usually takes to draw a suspect of a crime, which is “around two to three hours,” according to a presentation uploaded to the internet.

“We haven’t released the product yet, so we don’t have any active users at the moment,” Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real-world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”

AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.     

“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”

The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait.

For more details: Developers Created AI to Generate Police Sketches. Experts Are Horrified

https://www.vice.com/en/article/qjk745/ai-police-sketches


Oct 19 2021

Using Machine Learning to Guess PINs from Video

Category: AI, Hacking | DISC @ 11:01 am

#MachineLearning: Hacking Tools for Computer + Hacking With Kali Linux + Python Programming- The ultimate beginners guide to improve your knowledge of programming and data science

Tags: Machine Learning, Machine Learning to Guess PINs


Jul 04 2021

Attackers use ‘offensive AI’ to create deepfakes for phishing campaigns

Category: AI | DISC @ 10:05 am

Malware Analysis Using Artificial Intelligence and Deep Learning

Tags: deepfakes for phishing


May 24 2021

AIs and Fake Comments

Category: AI | DISC @ 8:49 am

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy: the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

Democracy and Fake News: Information Manipulation and Post-Truth Politics – an analysis of post-truth politics.

The volume sheds light on some topical questions connected to fake news, thereby contributing to a fuller understanding of its impact on democracy. In the Introduction, the editors offer some orientating definitions of post-truth politics, building a theoretical framework where various different aspects of fake news can be understood. The book is then divided into three parts: Part I helps to contextualize the phenomena investigated, offering definitions and discussing key concepts as well as aspects linked to the manipulation of information systems, especially considering its reverberation on democracy. Part II considers the phenomena of disinformation, fake news, and post-truth politics in the context of Russia, which emerges as a laboratory where the phases of creation and diffusion of fake news can be broken down and analyzed; consequently, Part II also reflects on the ways to counteract disinformation and fake news. Part III moves from case studies in Western and Central Europe to reflect on the methodological difficulty of investigating disinformation, as well as tackling the very delicate question of detection, combat, and prevention of fake news.

Tags: AIs and Fake Comments, Information Manipulation


Apr 27 2021

When AIs Start Hacking

Category: AI, IoT Security | DISC @ 5:00 pm

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than we do. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes: data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

When AIs Start Hacking