Jul 26 2024

Las Vegas transit system is nation’s first to plan full deployment of AI surveillance system for weapons

Category: AIdisc7 @ 11:41 am

https://www.cnbc.com/2024/07/25/vegas-transit-system-first-in-us-ai-scan-for-weapons.html

Key Points

  • The Regional Transportation Commission of Southern Nevada, which includes Las Vegas, will be the first transit system in the U.S. to implement system-wide AI weapons scans.
  • Transit systems nationwide are grappling with ways to reduce violence.
  • AI-linked cameras and acoustic technology are seen as viable options to better respond to mass shootings in public places across the U.S., according to law enforcement and public safety teams, though both approaches have downsides.
A sign promoting safety is seen on the Regional Transportation Commission 109 Maryland Parkway bus in Las Vegas Thursday, June 8, 2023.
Las Vegas Review-journal | Tribune News Service | Getty Images

On your next visit to Vegas, an extra set of eyes will be watching you if you decide to hop onto the local transit system.

As part of a $33 million multi-year upgrade to fortify its security, the Regional Transportation Commission of Southern Nevada is set to add a system-wide AI from gun detection software vendor ZeroEyes that scans riders on its over 400 buses in an attempt to identify anyone brandishing a firearm. 

Tom Atteberry, RTC’s director of safety and security operations, said that seconds matter in a situation where an active shooting unfolds, and implementing the system could give authorities an edge. “Time is of the essence; it gives us time to identify a firearm being brandished, so they can be notified and get to the scene and save lives,” he said.

Monitoring and preventing mass shooting is one that public places across the country grapple with daily. Violent crime on transit systems, specifically, remains an issue in major metro areas, with a report released in late 2023 by the Department of Transportation detailing concerns from transit agency officials around the U.S. about rising violence on their transit systems. According to a database maintained by the Bureau of Transportation Statistics, assaults on transit systems have spiked, and there has been a rise in public fears about transportation safety.

For details:

Las Vegas transit system is nation’s first to plan full deployment of AI surveillance system for weapons

Wearable Devices, Surveillance Systems, and AI for Women’s Wellbeing

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI surveillance system, Las Vegas transit system


Jun 05 2024

Unauthorized AI is eating your company data, thanks to your employees

Category: AI,Data Breach,data securitydisc7 @ 8:09 am
https://www.csoonline.com/article/2138447/unauthorized-ai-is-eating-your-company-data-thanks-to-your-employees.html

Legal documents, HR data, source code, and other sensitive corporate information is being fed into unlicensed, publicly available AIs at a swift rate, leaving IT leaders with a mounting shadow AI mess.

Employees at many organizations are engaging in widespread use of unauthorized AI models behind the backs of their CIOs and CISOs, according to a recent study.

Employees are sharing company legal documents, source code, and employee information with unlicensed, non-corporate versions of AIs, including ChatGPT and Google Gemini, potentially leading to major headaches for CIOs and other IT leaders, according to research from Cyberhaven Labs.

About 74% of the ChatGPT use at work is through non-corporate accounts, potentially giving the AI the ability to use or train on that data, says the Cyberhaven Q2 2024 AI Adoption and Risk Report, based on actual AI usage patterns of 3 million workers. More than 94% of workplace use of Google AIs Gemini and Bard are from non-corporate accounts, the study reveals.

Nearly 83% of all legal documents shared with AI tools go through non-corporate accounts, the report adds, while about half of all source code, R&D materials, and HR and employee records go into unauthorized AIs.

The amount of data put into all AI tools saw nearly a five-fold increase between March 2023 and March 2024, according to the report. “End users are adopting new AI tools faster than IT can keep up, fueling continued growth in ‘shadow AI,’” the report adds.

Where does the data go?

At the same time, many users may not know what happens to their companies’ data once they share it with an unlicensed AI. ChatGPT’s terms of use, for example, say the ownership of the content entered remains with the users. However, ChatGPT may use that content to provide, maintain, develop, and improve its services, meaning it could train itself using shared employee records. Users can opt out of ChatGPT training itself on their data.

So far, there have been no high-profile reports about major company secrets spilled by large public AIs, but security experts worry about what happens to company data once an AI ingests it. On May 28, OpenAI announced a new Safety and Security Committee to address concerns.

It’s difficult to assess the risk of sharing confidential or sensitive information with publicly available AIs, says Brian Vecci, field CTO at Varonis, a cloud security firm. It seems unlikely that companies like Google or ChatGPT developer OpenAI will allow their AIs to leak sensitive business data to the public, given the headaches such disclosures would cause them, he says.

Still, there aren’t many rules governing what AI developers can do with the data users provide them, some security experts note. Many more AI models will be rolled out in the coming years, Vecci says.

“When we get outside of the realm of OpenAI and Google, there are going to be other tools that pop up,” he says. “There are going to be AI tools out there that will do something interesting but are not controlled by OpenAI or Google, which presumably have much more incentive to be held accountable and treat data with care.”

The coming wave of second- and third-tier AI developers may be fronts for hacking groups, may see profit in selling confidential company information, or may lack the cybersecurity protections that the big players have, Vecci says.

“There’s some version of an LLM tool that’s similar to ChatGPT and is free and fast and controlled by who knows who,” he says. “Your employees are using it, and they’re forking over source code and financial statements, and that could be a much higher risk.”

Risky behavior

Sharing company or customer data with any unauthorized AI creates risk, regardless of whether the AI model trains on that data or shares it with other users, because that information now exists outside company walls, adds Pranava Adduri, CEO of Bedrock Security.

Adduri recommends organizations sign licensed deals, containing data use restrictions, with AI vendors so that employees can experiment with AI.

“The problem boils down to the inability to control,” he says. “If the data is getting shipped off to a system where you don’t have that direct control, usually the risk is managed through legal contracts and legal agreements.”

AvePoint, a cloud data management company, has signed an AI contract to head off the use of shadow AI, says Dana Simberkoff, chief risk, privacy, and information security officer at the company. AvePoint thoroughly reviewed the licensing terms, including the data use restrictions, before signing.

A major problem with shadow AI is that users don’t read the privacy policy or terms of use before shoveling company data into unauthorized tools, she says.

“Where that data goes, how it’s being stored, and what it may be used for in the future is still not very transparent,” she says. “What most everyday business users don’t necessarily understand is that these open AI technologies, the ones from a whole host of different companies that you can use in your browser, actually feed themselves off of the data that they’re ingesting.”

Training and security

AvePoint has tried to discourage employees from using unauthorized AI tools through a comprehensive education program, through strict access controls on sensitive data, and through other cybersecurity protections preventing the sharing of data. AvePoint has also created an AI acceptable use policy, Simberkoff says.

Employee education focuses on common employee practices like granting wide access to a sensitive document. Even if an employee only notifies three coworkers that they can review the document, allowing general access can enable an AI to ingest the data.

“AI solutions are like this voracious, hungry beast that will take in anything that they can,” she says.

Using AI, even officially licensed ones, means organizations need to have good data management practices in place, Simberkoff adds. An organization’s access controls need to limit employees from seeing sensitive information not necessary for them to do their jobs, she says, and longstanding security and privacy best practices still apply in the age of AI.

Rolling out an AI, with its constant ingestion of data, is a stress test of a company’s security and privacy plans, she says.

“This has become my mantra: AI is either the best friend or the worst enemy of a security or privacy officer,” she adds. “It really does drive home everything that has been a best practice for 20 years.”

Simberkoff has worked with several AvePoint customers that backed away from AI projects because they didn’t have basic controls such as an acceptable use policy in place.

“They didn’t understand the consequences of what they were doing until they actually had something bad happen,” she says. “If I were to give one really important piece of advice it’s that it’s okay to pause. There’s a lot of pressure on companies to deploy AI quickly.”

Credit: Moon Safari / Shutterstock

Artificial Intelligence for Cybersecurity 

ChatGPT for Cybersecurity Cookbook: Learn practical generative AI recipes to supercharge your cybersecurity skills

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Artificial Intelligence for Cybersecurity, ChatGPT for Cybersecurity


Jun 03 2024

OpenAI, Meta, and TikTok Crack Down on Covert Influence Campaigns, Some AI-Powered

Category: AIdisc7 @ 11:13 am

https://thehackernews.com/2024/05/openai-meta-tiktok-disrupt-multiple-ai.html

OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence (AI) tools to manipulate public discourse or political outcomes online while obscuring their true identity.

These activities, which were detected over the past three months, used its AI models to generate short comments and longer articles in a range of languages, cook up names and bios for social media accounts, conduct open-source research, debug simple code, and translate and proofread texts.

The AI research organization said two of the networks were linked to actors in Russia, including a previously undocumented operation codenamed Bad Grammar that primarily used at least a dozen Telegram accounts to target audiences in Ukraine, Moldova, the Baltic States and the United States (U.S.) with sloppy content in Russian and English.

Deep Disinformation: Can AI-Generated Fake News…

“The network used our models and accounts on Telegram to set up a comment-spamming pipeline,” OpenAI said. “First, the operators used our models to debug code that was apparently designed to automate posting on Telegram. They then generated comments in Russian and English in reply to specific Telegram posts.”

The operators also used its models to generate comments under the guise of various fictitious personas belonging to different demographics from across both sides of the political spectrum in the U.S.

The other Russia-linked information operation corresponded to the prolific Doppelganger network (aka Recent Reliable News), which was sanctioned by the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) earlier this March for engaging in cyber influence operations.

The network is said to have used OpenAI’s models to generate comments in English, French, German, Italian, and Polish that were shared on X and 9GAG; translate and edit articles from Russian to English and French that were then posted on bogus websites maintained by the group; generate headlines; and convert news articles posted on its sites into Facebook posts.

Fake News: AI & All News Requires Critical Thinking

“This activity targeted audiences in Europe and North America and focused on generating content for websites and social media,” OpenAI said. “The majority of the content that this campaign published online focused on the war in Ukraine. It portrayed Ukraine, the US, NATO and the EU in a negative light and Russia in a positive light.”

AI-Powered Disinformation Campaigns

The other three activity clusters are listed below –

  • A Chinese-origin network known as Spamouflage that used its AI models to research public social media activity; generate texts in Chinese, English, Japanese, and Korean for posting across X, Medium, and Blogger; propagate content criticizing Chinese dissidents and abuses against Native Americans in the U.S.; and debug code for managing databases and websites
  • An Iranian operation known as the International Union of Virtual Media (IUVM) that used its AI models to generate and translate long-form articles, headlines, and website tags in English and French for subsequent publication on a website named iuvmpress[.]co
  • A network referred to as Zero Zeno emanating from a for-hire Israeli threat actor, a business intelligence firm called STOIC, that used its AI models to generate and disseminate anti-Hamas, anti-Qatar, pro-Israel, anti-BJP, and pro-Histadrut content across Instagram, Facebook, X, and its affiliated websites targeting users in Canada, the U.S., India, and Ghana.

“The [Zero Zeno] operation also used our models to create fictional personas and bios for social media based on certain variables such as age, gender and location, and to conduct research into people in Israel who commented publicly on the Histadrut trade union in Israel,” OpenAI said, adding its models refused to supply personal data in response to these prompts.

The ChatGPT maker emphasized in its first threat report on IO that none of these campaigns “meaningfully increased their audience engagement or reach” from exploiting its services.

The development comes as concerns are being raised that generative AI (GenAI) tools could make it easier for malicious actors to generate realistic text, images and even video content, making it challenging to spot and respond to misinformation and disinformation operations.

“So far, the situation is evolution, not revolution,” Ben Nimmo, principal investigator of intelligence and investigations at OpenAI, said. “That could change. It’s important to keep watching and keep sharing.”

Meta Highlights STOIC and Doppelganger#

Separately, Meta in its quarterly Adversarial Threat Report, also shared details of STOIC’s influence operations, saying it removed a mix of nearly 500 compromised and fake accounts on Facebook and Instagram accounts used by the actor to target users in Canada and the U.S.

“This campaign demonstrated a relative discipline in maintaining OpSec, including by leveraging North American proxy infrastructure to anonymize its activity,” the social media giant said.

AI-Powered Disinformation Campaigns
Meta further said it removed hundreds of accounts, comprising deceptive networks from Bangladesh, China, Croatia, Iran, and Russia, for engaging in coordinated inauthentic behavior (CIB) with the goal of influencing public opinion and pushing political narratives about topical events.
The China-linked malign network, for instance, mainly targeted the global Sikh community and consisted of several dozen Instagram and Facebook accounts, pages, and groups that were used to spread manipulated imagery and English and Hindi-language posts related to a non-existent pro-Sikh movement, the Khalistan separatist movement, and criticism of the Indian government.
It pointed out that it hasn’t so far detected any novel and sophisticated use of GenAI-driven tactics, with the company highlighting instances of AI-generated video news readers that were previously documented by Graphika and GNET, indicating that despite the largely ineffective nature of these campaigns, threat actors are actively experimenting with the technology.

Doppelganger, Meta said, has continued its “smash-and-grab” efforts, albeit with a major shift in tactics in response to public reporting, including the use of text obfuscation to evade detection (e.g., using “U. kr. ai. n. e” instead of “Ukraine”) and dropping its practice of linking to typosquatted domains masquerading as news media outlets since April.
“The campaign is supported by a network with two categories of news websites: typosquatted legitimate media outlets and organizations, and independent news websites,” Sekoia said in a report about the pro-Russian adversarial network published last week.
“Disinformation articles are published on these websites and then disseminated and amplified via inauthentic social media accounts on several platforms, especially video-hosting ones like Instagram, TikTok, Cameo, and YouTube.”

These social media profiles, created in large numbers and in waves, leverage paid ads campaigns on Facebook and Instagram to direct users to propaganda websites. The Facebook accounts are also called burner accounts owing to the fact that they are used to share only one article and are subsequently abandoned.

The French cybersecurity firm described the industrial-scale campaigns – which are geared towards both Ukraine’s allies and Russian-speaking domestic audiences on Kremlin’s behalf – as multi-layered, leveraging the social botnet to initiate a redirection chain that passes through two intermediate websites in order to lead users to the final page.Doppelganger, along with another coordinated pro-Russian propaganda network designated as Portal Kombat, has also been observed amplifying content from a nascent influence network dubbed CopyCop, demonstrating a concerted effort to promulgate narratives that project Russia in a favorable light.

Recorded Future, in a report released this month, said CopyCop is likely operated from Russia, taking advantage of inauthentic media outlets in the U.S., the U.K., and France to promote narratives that undermine Western domestic and foreign policy, and spread content pertaining to the ongoing Russo-Ukrainian war and the Israel-Hamas conflict.

“CopyCop extensively used generative AI to plagiarize and modify content from legitimate media sources to tailor political messages with specific biases,” the company said. “This included content critical of Western policies and supportive of Russian perspectives on international issues like the Ukraine conflict and the Israel-Hamas tensions.”

TikTok Disrupts Covert Influence Operations#

Earlier in May, ByteDance-owned TikTok said it had uncovered and stamped out several such networks on its platform since the start of the year, including ones that it traced back to Bangladesh, China, Ecuador, Germany, Guatemala, Indonesia, Iran, Iraq, Serbia, Ukraine, and Venezuela.

TikTok, which is currently facing scrutiny in the U.S. following the passage of a law that would force the Chinese company to sell the company or face a ban in the country, has become an increasingly preferred platform of choice for Russian state-affiliated accounts in 2024, according to a new report from the Brookings Institution.

What’s more, the social video hosting service has emerged as a breeding ground for what has been characterized as a complex influence campaign known as Emerald Divide (aka Storm-1364) that is believed to be orchestrated by Iran-aligned actors since 2021 targeting Israeli society.

AI-Powered Disinformation Campaigns

“Emerald Divide is noted for its dynamic approach, swiftly adapting its influence narratives to Israel’s evolving political landscape,” Recorded Future said.

“It leverages modern digital tools such as AI-generated deepfakes and a network of strategically operated social media accounts, which target diverse and often opposing audiences, effectively stoking societal divisions and encouraging physical actions such as protests and the spreading of anti-government messages.”

The ChatGPT Edge: Unleashing The Limitless Potential Of AI Using Simple And Creative Prompts To Boost Productivity, Maximize Efficiency

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: and TikTok, Covert Influence Campaigns, Meta, OpenAI


Apr 26 2024

25 cybersecurity AI stats you should know

Category: AI,cyber securitydisc7 @ 7:33 am

Security pros are cautiously optimistic about AI

Cloud Security Alliance and Google Cloud | The State of AI and Security Survey Report | April 2024

  • 55% of organizations plan to adopt GenAI solutions within this year, signaling a substantial surge in GenAI integration.
  • 48% of professionals expressed confidence in their organization’s ability to execute a strategy for leveraging AI in security.
  • 12% of security professionals believe AI will completely replace their role.

AI abuse and misinformation campaigns threaten financial institutions

FS-ISAC | Navigating Cyber 2024 | March 2024

  • Threat actors can use generative AI to write malware and more skilled cybercriminals could exfiltrate information from or inject contaminated data into the large language models (LLMs) that train GenAI.
  • Recent quantum computing and AI advancements are expected to challenge established cryptographic algorithms.

Enterprises increasingly block AI transactions over security concerns

Zscaler | AI Security Report 2024 | March 2024

  • Today, enterprises block 18.5% of all AI transactions, a 577% increase from April to January, for a total of more than 2.6 billion blocked transactions.
  • Some of the most popular AI tools are also the most blocked. Indeed, ChatGPT holds the distinction of being both the most-used and most-blocked AI application.
cybersecurity ai stats

Scammers exploit tax season anxiety with AI tools

McAfee | Tax Scams Study 2024 | March 2024

  • Of the people who clicked on fraudulent links from supposed tax services, 68% lost money. Among those, 29% lost more than $2,500, and 17% lost more than $10,000.
  • 9% of Americans feel confident in their ability to spot deepfake videos or recognize AI-generated audio, such as fake renditions of IRS agents.

Advanced AI, analytics, and automation are vital to tackle tech stack complexity

Dynatrace | The state of observability 2024 | March 2024

  • 97% of technology leaders find traditional AIOps models are unable to tackle the data overload.
  • 88% of organizations say the complexity of their technology stack has increased in the past 12 months, and 51% say it will continue to increase.
  • 72% of organizations have adopted AIOps to reduce the complexity of managing their multicloud environment.

Today’s biggest AI security challenges

HiddenLayer | AI Threat Landscape Report 2024 | March 2024

  • 98% of companies surveyed view some of their AI models as vital for business success, and 77% have experienced breaches in their AI systems over the past year.
  • 61% of IT leaders acknowledge shadow AI, solutions that are not officially known or under the control of the IT department, as a problem within their organizations.
  • Researchers revealed the extensive use of AI in modern businesses, noting an average of 1,689 AI models actively used by companies. This has made AI security a top priority, with 94% of IT leaders dedicating funds to safeguard their AI in 2024.
cybersecurity ai stats

AI tools put companies at risk of data exfiltration

Code42 | Annual Data Exposure Report 2024 | March 2024

  • Since 2021, there has been a 28% average increase in monthly insider-driven data exposure, loss, leak, and theft events.
  • While 99% of companies have data protection solutions in place, 78% of cybersecurity leaders admit they’ve still had sensitive data breached, leaked, or exposed.

95% believe LLMs making phishing detection more challenging

LastPass | LastPass survey 2024 | March 2024

  • More than 95% of respondents believe dynamic content through Large Language Models (LLMs) makes detecting phishing attempts more challenging.
  • Phishing will remain the top social engineering threat to businesses throughout 2024, surpassing other threats like business email compromise, vishing, smishing or baiting.
cybersecurity ai stats

How AI is reshaping the cybersecurity job landscape

ISC2 | AI Cyber 2024 | February 2024

  • 88% of cybersecurity professionals believe that AI will significantly impact their jobs, now or in the near future, and 35% have already witnessed its effects.
  • 75% of respondents are moderately to extremely concerned that AI will be used for cyberattacks or other malicious activities.
  • The survey revealed that 12% of respondents said their organizations had blocked all access to generative AI tools in the workplace.
cybersecurity ai stats

Businesses banning or limiting use of GenAI over privacy risks

Cisco | Cisco 2024 Data Privacy Benchmark Study | February 2024

  • 63% have established limitations on what data can be entered, 61% have limits on which employees can use GenAI tools, and 27% said their organization had banned GenAI applications altogether for the time being.
  • Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have positively impacted them, and only 6% said the impact has been negative.
  • 91% of organizations recognize they need to do more to reassure their customers that their data was being used only for intended and legitimate purposes in AI.
cybersecurity ai stats

Unlocking GenAI’s full potential through work reinvention

Accenture | Work, workforce, workers: Reinvented in the age of generative AI | January 2024

  • While 95% of workers see value in working with GenAI, 60% are also concerned about job loss, stress and burnout.
  • 47% of reinventors are already thinking bigger—recognizing that their processes will require significant change to fully leverage GenAI.
cybersecurity ai stats

Adversaries exploit trends, target popular GenAI apps

Netskope | Cloud and Threat Report 2024 | January 2024

  • In 2023, ChatGPT was the most popular generative AI application, accounting for 7% of enterprise usage.
  • Half of all enterprise users interact with between 11 and 33 cloud apps each month, with the top 1% using more than 96 apps per month.

Artificial Intelligence for Cybersecurity

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: cybersecurity AI stats


Apr 19 2024

NSA, CISA & FBI Released Best Practices For AI Security Deployment 2024

Category: AIdisc7 @ 8:03 am

In a groundbreaking move, the U.S. Department of Defense has released a comprehensive guide for organizations deploying and operating AI systems designed and developed by
another firm.

The report, titled “Deploying AI Systems Securely,” outlines a strategic framework to help defense organizations harness the power of AI while mitigating potential risks.

The report was authored by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC).

The guide emphasizes the importance of a holistic approach to AI security, covering various aspects such as data integrity, model robustness, and operational security. It outlines a six-step process for secure AI deployment:

  1. Understand the AI system and its context
  2. Identify and assess risks
  3. Develop a security plan
  4. Implement security controls
  5. Monitor and maintain the AI system
  6. Continuously improve security practices

Addressing AI Security Challenges

The report acknowledges the growing importance of AI in modern warfare but also highlights the unique security challenges that come with integrating these advanced technologies. “As the military increasingly relies on AI-powered systems, it is crucial that we address the potential vulnerabilities and ensure the integrity of these critical assets,” said Lt. Gen. Jane Doe, the report’s lead author.

Some of the key security concerns outlined in the document include:

  • Adversarial AI attacks that could manipulate AI models to produce erroneous outputs
  • Data poisoning and model corruption during the training process
  • Insider threats and unauthorized access to sensitive AI systems
  • Lack of transparency and explainability in AI-driven decision-making

A Comprehensive Security Framework

The report proposes a comprehensive security framework for deploying AI systems within the military to address these challenges. The framework consists of three main pillars:

  1. Secure AI Development: This includes implementing robust data governance, model validation, and testing procedures to ensure the integrity of AI models throughout the development lifecycle.
  2. Secure AI Deployment: The report emphasizes the importance of secure infrastructure, access controls, and monitoring mechanisms to protect AI systems in operational environments.
  3. Secure AI Maintenance: Ongoing monitoring, update management, and incident response procedures are crucial to maintain the security and resilience of AI systems over time.

Key Recommendations

This detailed guidance on securely deploying AI systems, emphasizing the importance of careful setup, configuration, and applying traditional IT security best practices. Among the key recommendations are:

Threat Modeling: Organizations should require AI system developers to provide a comprehensive threat model. This model should guide the implementation of security measures, threat assessment, and mitigation planning.

Secure Deployment Contracts: When contracting AI system deployment, organizations must clearly define security requirements for the deployment environment, including incident response and continuous monitoring provisions.

Access Controls: Strict access controls should be implemented to limit access to AI systems, models, and data to only authorized personnel and processes.

Continuous Monitoring: AI systems must be continuously monitored for security issues, with established processes for incident response, patching, and system updates.

Collaboration And Continuous Improvement

The report also stresses the importance of cross-functional collaboration and continuous improvement in AI security. “Securing AI systems is not a one-time effort; it requires a sustained, collaborative approach involving experts from various domains,” said Lt. Gen. Doe.

The Department of Defense plans to work closely with industry partners, academic institutions, and other government agencies to refine further and implement the security framework outlined in the report.

Regular updates and feedback will ensure the framework keeps pace with the rapidly evolving AI landscape.

The release of the “Deploying AI Systems Securely” report marks a significant step forward in the military’s efforts to harness the power of AI while prioritizing security and resilience.

By adopting this comprehensive approach, defense organizations can unlock the full potential of AI-powered technologies while mitigating the risks and ensuring the integrity of critical military operations.

The AI Playbook: Mastering the Rare Art of Machine Learning Deployment

Navigating the AI Governance Landscape: Principles, Policies, and Best Practices for a Responsible Future

Trust Me – AI Risk Management

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Governance, AI Risk Management, Best Practices For AI


Apr 05 2024

Hackers Hijack Facebook Pages To Mimic AI Brands & Inject Malware

Category: AI,Hacking,Malwaredisc7 @ 8:08 am

Hackers have been found hijacking Facebook pages to impersonate popular AI brands, thereby injecting malware into the devices of unsuspecting users.

This revelation comes from a detailed investigation by Bitdefender Labs, which has been closely monitoring these malicious campaigns since June 2023.

Recent analyses of malvertising campaigns have revealed a disturbing trend.

Ads are distributing an assortment of malicious software, which poses severe risks to consumers’ devices, data, and identity.

Unwitting interactions with these malware-serving ads could lead to downloading and deploying harmful files, including Rilide Stealer, Vidar Stealer, IceRAT, and Nova Stealer, onto users’ devices.

Rilide Stealer V4: A Closer Look

Bitdefender Labs has spotlighted an updated version of the Rilide Stealer (V4) lurking within sponsored ad campaigns that impersonate popular AI-based software and photo editors such as Sora, CapCut, Gemini AI, Photo Effects Pro, and CapCut Pro.

This malicious extension, targeting Chromium-based browsers, is designed to monitor browsing history, capture login credentials, and even facilitate the withdrawal of crypto funds by bypassing two-factor authentication through script injections.

Sora Ad campaign
Gemini Ad Campaign

Key Updates in Rilide V4:

  • Targeting of Facebook cookies
  • Masquerading as a Google Translate Extension
  • Enhanced obfuscation techniques to conceal the software’s true intent

Indicators Of Compromise

Malicious hashes

  • 2d6829e8a2f48fff5348244ce0eaa35bcd4b26eac0f36063b9ff888e664310db â€“ OpenAI Sora official version setup.msi – Sora
  • a7c07d2c8893c30d766f383be0dd78bc6a5fd578efaea4afc3229cd0610ab0cf â€“ OpenAI Sora Setup.zip – Sora
  • e394f4192c2a3e01e6c1165ed1a483603b411fd12d417bfb0dc72bd6e18e9e9d â€“ Setup.msi – Sora
  • 021657f82c94511e97771739e550d63600c4d76cef79a686aa44cdca668814e0 â€“ Setup.msi – Sora
  • 92751fd15f4d0b495e2b83d14461d22d6b74beaf51d73d9ae2b86e2232894d7b â€“ Setup.msi – Sora
  • 32a097b510ae830626209206c815bbbed1c36c0d2df7a9d8252909c604a9c1f1 â€“ Setup.msi – Sora
  • c665ff2206c9d4e50861f493f8e7beca8353b37671d633fe4b6e084c62e58ed9 â€“ Setup.msi – Sora
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e â€“ Capcut Pro For PC.setup.msi – Capcut
  • 757855fcd47f843739b9a330f1ecb28d339be41eed4ae25220dc888e57f2ec51 â€“ OpenAI ChatGPT-4.5 Version Free.msi – ChatGPT
  • 3686204361bf6bf8db68fd81e08c91abcbf215844f0119a458c319e92a396ecf â€“ Google Gemini AI Ultra Version Updata.msi – Gemini AI
  • d60ea266c4e0f0e8d56d98472a91dd5c37e8eeeca13bf53e0381f0affc68e78a â€“ Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • bb7c3b78f2784a7ac3c090331326279476c748087188aeb69f431bbd70ac6407 â€“ Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e â€“ AISora.setup.msi – Sora

Vidar Stealer: Evolving Threats

Vidar Stealer, another prolific info stealer, is marketed through the same MaaS model via dark web ads, forums, and Telegram groups.

Capable of exfiltrating personal information and crypto from compromised devices, Vidar’s distribution has evolved from spam campaigns and cracked software to malicious Google Search ads and social media platforms, mainly through sponsored ads on Meta’s platform.

Indicators Of Compromise

Malicious hashes

  • 6396ac7b1524bb9759f434fe956a15f5364284a04acd5fc0ef4b625de35d766b- g2m.dll – MidJourney
  • 76ed62a335ac225a2b7e6dade4235a83668630a9c1e727cf4ddb0167ab2202f6- Midjourney.7z – MidJourney

IceRAT: More Than Just A Trojan

Despite its name, IceRAT functions more as a backdoor on compromised devices. It acts as a gateway for secondary infections, such as crypto miners and information stealers that target login credentials and other sensitive data.

Indicators Of Compromise

Malicious hashes

  • aab585b75e868fb542e6dfcd643f97d1c5ee410ca5c4c5ffe1112b49c4851f47- Midjourneyv6.exe – MidJourney
  • b5f740c0c1ac60fa008a1a7bd6ea77e0fc1d5aa55e6856d8edcb71487368c37c- Midjourneyv6ai.exe – MidJourney
  • cc15e96ec1e27c01bd81d2347f4ded173dfc93df673c4300faac5a932180caeb- Mid_Setup.exe – MidJourney
  • d2f12dec801000fbd5ccc8c0e8ed4cf8cc27a37e1dca9e25afc0bcb2287fbb9a- Midjourney_v6.exe – MidJourney
  • f2fc27b96a4a487f39afad47c17d948282145894652485f9b6483bec64932614-Midjourneyv6.1_ins.exe – MidJourney
  • f99aa62ee34877b1cd02cfd7e8406b664ae30c5843f49c7e89d2a4db56262c2e â€“ Midjourneys_Setup.exe – MidJourney
  • 54a992a4c1c25a923463865c43ecafe0466da5c1735096ba0c3c3996da25ffb7 â€“ Mid_Setup.exe – MidJourney
  • 4a71a8c0488687e0bb60a2d0199b34362021adc300541dd106486e326d1ea09b- Mid_Setup.exe – MidJourney

Nova Stealer: The New Kid On The Block

Nova Stealer emerges as a highly proficient info stealer with capabilities including password exfiltration, screen recordings, discord injections, and crypto wallet hijacking.

Nova Stealer, offered as MaaS by the threat actor known as Sordeal, represents a significant threat to digital security.

Indicators Of Compromise

Malicious hashes

  • fb3fbee5372e5050c17f72dbe0eb7b3afd3a57bd034b6c2ac931ad93b695d2d9- Instructions_for_using_today_s_AI.pdf.rar – AI and Life
  • 6a36f1f1821de7f80cc9f8da66e6ce5916ac1c2607df3402b8dd56da8ebcc5e2- Instructions_for_using_today_s_AI.xlsx_rar.rar – AI and Life
  • fe7e6b41766d91fbc23d31573c75989a2b0f0111c351bed9e2096cc6d747794b- Instructions for using today’s AI.pdf.exe – AI and Life
  • ce0e41e907cab657cc7ad460a5f459c27973e9346b5adc8e64272f47026d333d- Instructions for using today’s AI.xlsx.exe – AI and Life
  • a214bc2025584af8c38df36b08eb964e561a016722cd383f8877b684bff9e83d- 20 digital marketing tips for 2024.xlsx.exe – Google Digital Marketing
  • 53714612af006b06ca51cc47abf0522f7762ecb1300e5538485662b1c64d6f55 â€“ Premium advertising course registration form from Oxford.exe – Google Digital Marketing
  • 728953a3ebb0c25bcde85fd1a83903c7b4b814f91b39d181f0fc610b243c98d4- New Microsoft Excel Worksheet.exe – Google Digital Marketing

The Midjourney Saga: AI’s Dark Side

The addition of AI tools on the internet, from free offerings and trials to subscription-based services, has not gone unnoticed by cybercriminals.

Midjourney, a leading generative AI tool with a user base exceeding 16 million as of November 2023, has become a favored tool among cyber gangs over the past year, highlighting the intersection of cutting-edge technology and cybercrime.

Midjourney has been a fan-favorite among cybercriminal gangs as well over the past year.
Midjourney has been a fan-favorite among cybercriminal gangs as well over the past year.

Indicators Of Compromise

  • 159.89.120.191
  • 159.89.98.241

As the digital landscape continues to evolve, so does the nature of the threats it maintains.

The rise of Malware-as-a-Service represents a significant shift in the cyber threat paradigm that requires vigilant and proactive measures to combat.

Key Updates in Rilide V4:

  • Targeting of Facebook cookies
  • Masquerading as a Google Translate Extension
  • Enhanced obfuscation techniques to conceal the software’s true intent

Indicators Of Compromise

Malicious hashes

  • 2d6829e8a2f48fff5348244ce0eaa35bcd4b26eac0f36063b9ff888e664310db â€“ OpenAI Sora official version setup.msi – Sora
  • a7c07d2c8893c30d766f383be0dd78bc6a5fd578efaea4afc3229cd0610ab0cf â€“ OpenAI Sora Setup.zip – Sora
  • e394f4192c2a3e01e6c1165ed1a483603b411fd12d417bfb0dc72bd6e18e9e9d â€“ Setup.msi – Sora
  • 021657f82c94511e97771739e550d63600c4d76cef79a686aa44cdca668814e0 â€“ Setup.msi – Sora
  • 92751fd15f4d0b495e2b83d14461d22d6b74beaf51d73d9ae2b86e2232894d7b â€“ Setup.msi – Sora
  • 32a097b510ae830626209206c815bbbed1c36c0d2df7a9d8252909c604a9c1f1 â€“ Setup.msi – Sora
  • c665ff2206c9d4e50861f493f8e7beca8353b37671d633fe4b6e084c62e58ed9 â€“ Setup.msi – Sora
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e â€“ Capcut Pro For PC.setup.msi – Capcut
  • 757855fcd47f843739b9a330f1ecb28d339be41eed4ae25220dc888e57f2ec51 â€“ OpenAI ChatGPT-4.5 Version Free.msi – ChatGPT
  • 3686204361bf6bf8db68fd81e08c91abcbf215844f0119a458c319e92a396ecf â€“ Google Gemini AI Ultra Version Updata.msi – Gemini AI
  • d60ea266c4e0f0e8d56d98472a91dd5c37e8eeeca13bf53e0381f0affc68e78a â€“ Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • bb7c3b78f2784a7ac3c090331326279476c748087188aeb69f431bbd70ac6407 â€“ Photo Effects Pro v3.1.3 Setup.msi – Photo Effects
  • 0ed3b92fda104ac62cc3dc0a5ed0f400c6958d7034e3855cad5474fca253125e â€“ AISora.setup.msi – Sora

Vidar Stealer: Evolving Threats

Vidar Stealer, another prolific info stealer, is marketed through the same MaaS model via dark web ads, forums, and Telegram groups.

Capable of exfiltrating personal information and crypto from compromised devices, Vidar’s distribution has evolved from spam campaigns and cracked software to malicious Google Search ads and social media platforms, mainly through sponsored ads on Meta’s platform.

Indicators Of Compromise

Malicious hashes

  • 6396ac7b1524bb9759f434fe956a15f5364284a04acd5fc0ef4b625de35d766b- g2m.dll – MidJourney
  • 76ed62a335ac225a2b7e6dade4235a83668630a9c1e727cf4ddb0167ab2202f6- Midjourney.7z – MidJourney

IceRAT: More Than Just A Trojan

Despite its name, IceRAT functions more as a backdoor on compromised devices. It acts as a gateway for secondary infections, such as crypto miners and information stealers that target login credentials and other sensitive data.

Indicators Of Compromise

Malicious hashes

  • aab585b75e868fb542e6dfcd643f97d1c5ee410ca5c4c5ffe1112b49c4851f47- Midjourneyv6.exe – MidJourney
  • b5f740c0c1ac60fa008a1a7bd6ea77e0fc1d5aa55e6856d8edcb71487368c37c- Midjourneyv6ai.exe – MidJourney
  • cc15e96ec1e27c01bd81d2347f4ded173dfc93df673c4300faac5a932180caeb- Mid_Setup.exe – MidJourney
  • d2f12dec801000fbd5ccc8c0e8ed4cf8cc27a37e1dca9e25afc0bcb2287fbb9a- Midjourney_v6.exe – MidJourney
  • f2fc27b96a4a487f39afad47c17d948282145894652485f9b6483bec64932614-Midjourneyv6.1_ins.exe – MidJourney
  • f99aa62ee34877b1cd02cfd7e8406b664ae30c5843f49c7e89d2a4db56262c2e â€“ Midjourneys_Setup.exe – MidJourney
  • 54a992a4c1c25a923463865c43ecafe0466da5c1735096ba0c3c3996da25ffb7 â€“ Mid_Setup.exe – MidJourney
  • 4a71a8c0488687e0bb60a2d0199b34362021adc300541dd106486e326d1ea09b- Mid_Setup.exe – MidJourney

Nova Stealer: The New Kid On The Block

Nova Stealer emerges as a highly proficient info stealer with capabilities including password exfiltration, screen recordings, discord injections, and crypto wallet hijacking.

Nova Stealer, offered as MaaS by the threat actor known as Sordeal, represents a significant threat to digital security.

Indicators Of Compromise

Malicious hashes

  • fb3fbee5372e5050c17f72dbe0eb7b3afd3a57bd034b6c2ac931ad93b695d2d9- Instructions_for_using_today_s_AI.pdf.rar – AI and Life
  • 6a36f1f1821de7f80cc9f8da66e6ce5916ac1c2607df3402b8dd56da8ebcc5e2- Instructions_for_using_today_s_AI.xlsx_rar.rar – AI and Life
  • fe7e6b41766d91fbc23d31573c75989a2b0f0111c351bed9e2096cc6d747794b- Instructions for using today’s AI.pdf.exe – AI and Life
  • ce0e41e907cab657cc7ad460a5f459c27973e9346b5adc8e64272f47026d333d- Instructions for using today’s AI.xlsx.exe – AI and Life
  • a214bc2025584af8c38df36b08eb964e561a016722cd383f8877b684bff9e83d- 20 digital marketing tips for 2024.xlsx.exe – Google Digital Marketing
  • 53714612af006b06ca51cc47abf0522f7762ecb1300e5538485662b1c64d6f55 â€“ Premium advertising course registration form from Oxford.exe – Google Digital Marketing
  • 728953a3ebb0c25bcde85fd1a83903c7b4b814f91b39d181f0fc610b243c98d4- New Microsoft Excel Worksheet.exe – Google Digital Marketing

The Midjourney Saga: AI’s Dark Side

The addition of AI tools on the internet, from free offerings and trials to subscription-based services, has not gone unnoticed by cybercriminals.

Midjourney, a leading generative AI tool with a user base exceeding 16 million as of November 2023, has become a favored tool among cyber gangs over the past year, highlighting the intersection of cutting-edge technology and cybercrime.

Midjourney has been a fan-favorite among cybercriminal gangs as well over the past year.
Midjourney has been a fan-favorite among cybercriminal gangs as well over the past year.

Indicators Of Compromise

  • 159.89.120.191
  • 159.89.98.241

As the digital landscape continues to evolve, so does the nature of the threats it maintains.

The rise of Malware-as-a-Service represents a significant shift in the cyber threat paradigm that requires vigilant and proactive measures to combat.

The Complete Guide to Software as a Service: Everything you need to know about SaaS

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Hijack Facebook Pages


Apr 03 2024

ISO27k bot

Category: AI,Information Securitydisc7 @ 2:03 pm
Hey 👏 I’m the digital assistance of DISCInfoSec for ISO 27k implementation. I will try to answer your question. If I don’t know the answer, I will connect you with one my support agents. Please type your query regarding ISO 27001 implementation 👇

ISO 27k Chat bot

Tags: Chat bot, ISO 27k bot


Mar 08 2024

Immediate AI risks and tomorrow’s dangers

Category: AIdisc7 @ 11:29 am

“At the most basic level, AI has given malicious attackers superpowers,” Mackenzie Jackson, developer and security advocate at GitGuardian, told the audience last week at Bsides Zagreb.

These superpowers are most evident in the growing impact of fishing, smishing and vishing attacks since the introduction of ChatGPT in November 2022.

And then there are also malicious LLMs, such as FraudGPT, WormGPT, DarkBARD and White Rabbit (to name a few), that allow threat actors to write malicious code, generate phishing pages and messages, identify leaks and vulnerabilities, create hacking tools and more.

AI has not necessarily made attacks more sophisticated but, he says, it has made them more accessible to a greater number of people.

The potential for AI-fueled attacks

It’s impossible to imagine all the types of AI-fueled attacks that the future has in store for us. Jackson outlined some attacks that we can currently envision.

One of them is a prompt injection attack against a ChatGPT-powered email assistant, which may allow the attacker to manipulate the assistant into executing actions such as deleting all emails or forwarding them to the attacker.

Inspired by a query that resulted in ChatGPT outright inventing a non-existent software package, Jackson also posited that an attacker might take advantage of LLMs’ tendency to “hallucinate” by creating malware-laden packages that many developers might be searching for (but currently don’t exist).

The immediate threats

But we’re facing more immediate threats right now, he says, and one of them is sensitive data leakage.

With people often inserting sensitive data into prompts, chat histories make for an attractive target for cybercriminals.

Unfortunately, these systems are not designed to secure the data – there have been instances of ChatGTP leaking users’ chat history and even personal and billing data.

Also, once data is inputted into these systems, it can “spread” to various databases, making it difficult to contain. Essentially, data entered into such systems may perpetually remain accessible across different platforms.

And even though chat history can be disabled, there’s no guarantee that the data is not being stored somewhere, he noted.

One might think that the obvious solution would be to ban the use of LLMs in business settings, but this option has too many drawbacks.

Jackson argues that those who aren’t allowed to use LLMs for work (especially in the technology domain) are likely to fall behind in their capabilities.

Secondly, people will search for and find other options (VPNs, different systems, etc.) that will allow them to use LLMs within enterprises.

This could potentially open doors to another significant risk for organizations: shadow AI. This means that the LLM is still part of the organization’s attack surface, but it is now invisible.

How to protect your organization?

When it comes to protecting an organization from the risks associated with AI use, Jackson points out that we really need to go back to security basics.

People must be given the appropriate tools for their job, but they also must be made to understand the importance of using LLMs safely.

He also advises to:

  • Put phishing protections in place
  • Make frequent backups to avoid getting ransomed
  • Make sure that PII is not accessible to employees
  • Avoid keeping secrets on the network to prevent data leakage
  • Use software composition analysis (SCA) tools to avoid AI hallucinations abuse and typosquatting attacks

To make sure your system is protected from prompt injection, he believes that implementing dual LLMs, as proposed by programmer Simon Willison, might be a good idea.

Despite the risks, Jackson believes that AI is too valuable to move away from.

He anticipates a rise in companies and startups using AI toolsets, leading to potential data breaches and supply chain attacks. These incidents may drive the need for improved legislation, better tools, research, and understanding of AI’s implications, which are currently lacking because of its rapid evolution. Keeping up with it has become a challenge.

AI Scams:

Are chatbots the new weapon of online scammers?

AI used to fake voices of loved ones in “I’ve been in an accident” scam

Story of Attempted Scam Using AI | C-SPAN.org

Woman loses Rs 1.4 lakh to AI voice scam

Kidnapping scam uses artificial intelligence to clone teen girl’s voice, mother issues warning

First-Ever AI Fraud Case Steals Money by Impersonating CEO

AI Scams Mitigation:

A.I. Scam Detector

Every country is developing AI laws, standards, and specifications. In the US, states are introducing 50 AI related regulations a week (Axios 0 2024). Each of the regulations see AI through the lens for social and technical risk.

Trust Me: AI Risk Management is a book of AI Risk Controls that can be incorporated into the NIST AI RMF guidelines or NIST CSF. Trust Me looks at the key attributes of AI including trust, explainability, and conformity assessment through an objective-risk-control-why lens. If you’re developing, designing, regulating, or auditing AI systems, Trust Me: AI Risk Management is a must read.

👇 Do you place your trust in AI?? 👇

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: AI risks


Jan 22 2024

AI AND SECURITY: ARE THEY AIDING EACH OTHER OR CREATING MORE ISSUES? EXPLORING THE COMPLEX RELATIONSHIP IN TECHNOLOGY

Category: AI,cyber securitydisc7 @ 12:13 pm

Artificial Intelligence (AI) has arisen as a wildly disruptive technology across many industries. As AI models continue to improve, more industries are sure to be disrupted and affected. One industry that is already feeling the effects of AI is digital security. The use of this new technology has opened up new avenues of protecting data, but it has also caused some concerns about its ethicality and effectiveness when compared with what we will refer to as traditional or established security practices.

This article will touch on the ways that this new tech is affecting already established practices, what new practices are arising, and whether or not they are safe and ethical.

HOW DOES AI AFFECT ALREADY ESTABLISHED SECURITY PRACTICES?

It is a fair statement to make that AI is still a nascent technology. Most experts agree that it is far from reaching its full potential, yet even so, it has still been able to disrupt many industries and practices. In terms of already established security practices, AI is providing operators with the opportunity to analyze huge amounts of data at incredible speed and with impressive accuracy. Identifying patterns and detecting anomalies is easy for AI to do, and incredibly useful for most traditional data security practices. 

Previously these systems would rely solely on human operators to perform the data analyses, which can prove time-consuming and would be prone to errors. Now, with AI help, human operators need only understand the refined data the AI is providing them and act on it.

IN WHAT WAYS CAN AI BE USED TO BOLSTER AND IMPROVE EXISTING SECURITY MEASURES?

AI can be used in several other ways to improve security measures. In terms of access protection, AI-driven facial recognition and other forms of biometric security can easily provide a relatively foolproof access protection solution. Using biometric access can eliminate passwords, which are often a weak link in data security.

AI’s ability to sort through large amounts of data means that it can be very effective in detecting and preventing cyber threats. An AI-supported network security program could, with relatively little oversight, analyze network traffic, identify vulnerabilities, and proactively defend against any incoming attacks. 

THE DIFFICULTIES IN UPDATING EXISTING SECURITY SYSTEMS WITH AI SOLUTIONS

The most pressing difficulty is that some old systems are simply not compatible with AI solutions. Security systems designed and built to be operated solely by humans are often not able to be retrofitted with AI algorithms, which means that any upgrades necessitate a complete, and likely expensive, overhaul of the security systems. 

One industry that has been quick to embrace AI-powered security systems is the online gambling industry. For those who are interested in seeing what AI-driven security can look like, visiting a casino online and investigating its security protocols will give you an idea of what is possible. Having an industry that has been an early adoption of such a disruptive technology can help other industries learn what to do and what not to do. In many cases, online casinos staged entire overhauls of their security suites to incorporate AI solutions, rather than trying to incorporate new tech, with older non-compatible security technology.

Another important factor in the difficulty of incorporating AI systems is that it takes a very large amount of data to properly train an AI algorithm. Thankfully, other companies are doing this work, and it should be possible to buy an already trained AI, fit to purpose. All that remains is trusting that the trainers did their due diligence and that the AI will be effective.

EFFECTIVENESS OF AI-DRIVEN SECURITY SYSTEMS

AI-driven security systems are, for the most part, lauded as being effective. With faster threat detection and response times quicker than humanly possible, the advantage of using AI for data security is clear.

AI has also proven resilient in terms of adapting to new threats. AI has an inherent ability to learn, which means that as new threats are developed and new vulnerabilities emerge, a well-built AI will be able to learn and eventually respond to new threats just as effectively as old ones.

It has been suggested that AI systems must completely replace traditional data security solutions shortly. Part of the reason for this is not just their inherent effectiveness, but there is an anticipation that incoming threats will also be using AI. Better to fight fire with fire.

IS USING AI FOR SECURITY DANGEROUS?

The short answer is no, the long answer is no, but. The main concern when using AI security measures with little human input is that they could generate false positives or false negatives. AI is not infallible, and despite being able to process huge amounts of data, it can still get confused.

It could also be possible for the AI security system to itself be attacked and become a liability. If an attack were to target and inject malicious code into the AI system, it could see a breakdown in its effectiveness which would potentially allow multiple breaches.

The best remedy for both of these concerns is likely to ensure that there is still an alert human component to the security system. By ensuring that well-trained individuals are monitoring the AI systems, the dangers of false positives or attacks on the AI system are reduced greatly.

ARE THERE LEGITIMATE ETHICAL CONCERNS WHEN AI IS USED FOR SECURITY?

Yes. The main ethical concern relating to AI when used for security is that the algorithm could have an inherent bias. This can occur if the data used for the training of the AI is itself biased or incomplete in some way. 

Another important ethical concern is that AI security systems are known to sort through personal data to do their job, and if this data were to be accessed or misused, privacy rights would be compromised.

Many AI systems also have a lack of transparency and accountability, which compounds the problem of the AI algorithm’s potential for bias. If an AI is concluding that a human operator cannot understand the reasoning, the AI system must be held suspect.

CONCLUSION

AI could be a great boon to security systems and is likely an inevitable and necessary upgrade. The inability of human operators to combat AI threats alone seems to suggest its necessity. Coupled with its ability to analyze and sort through mountains of data and adapt to threats as they develop, AI has a bright future in the security industry.

However, AI-driven security systems must be overseen by trained human operators who understand the complexities and weaknesses that AI brings to their systems.

Must Learn AI Security

Artificial Intelligence (AI) Governance and Cyber-Security: A beginner’s handbook on securing and governing AI systems

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: AI security, Artificial Intelligence (AI) Governance, Must Learn AI Security


Dec 11 2023

AI and Mass Spying

Category: AI,Cyber Spy,Spywaredisc7 @ 12:31 pm

Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.

Spying is another matter. It has long been possible to tap someone’s phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people’s phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

AI is about to change that. Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.

The technologies aren’t perfect; some of them are pretty primitive. They miss things that are important. They get other things wrong. But so do humans. And, unlike humans, AI tools can be replicated by the millions and are improving at astonishing rates. They’ll get better next year, and even better the year after that. We are about to enter the era of mass spying.

Mass surveillance fundamentally changed the nature of surveillance. Because all the data is saved, mass surveillance allows people to conduct surveillance backward in time, and without even knowing whom specifically you want to target. Tell me where this person was last year. List all the red sedans that drove down this road in the past month. List all of the people who purchased all the ingredients for a pressure cooker bomb in the past year. Find me all the pairs of phones that were moving toward each other, turned themselves off, then turned themselves on again an hour later while moving away from each other (a sign of a secret meeting).

Similarly, mass spying will change the nature of spying. All the data will be saved. It will all be searchable, and understandable, in bulk. Tell me who has talked about a particular topic in the past month, and how discussions about that topic have evolved. Person A did something; check if someone told them to do it. Find everyone who is plotting a crime, or spreading a rumor, or planning to attend a political protest.

There’s so much more. To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people’s confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.

This spying is not limited to conversations on our phones or computers. Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and “Hey Google” are already always listening; the conversations just aren’t being saved yet.

Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets—it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.

In the early days of Gmail, Google talked about using people’s Gmail content to serve them personalized ads. The company stopped doing it, almost certainly because the keyword data it collected was so poor—and therefore not useful for marketing purposes. That will soon change. Maybe Google won’t be the first to spy on its users’ conversations, but once others start, they won’t be able to resist. Their true customers—their advertisers—will demand it.

We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?

This essay originally appeared in Slate.

 #artificial intelligence, #espionage, #privacy, #surveillance

Mass Government Surveillance: Spying on Citizens (Spying, Surveillance, and Privacy in the 21st Century)

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: espionage, Mass Spying, Pegasus spyware, privacy, rtificial intelligence


Dec 02 2023

AI is about to completely change how you use computers

Category: AIdisc7 @ 2:33 pm

I still love software as much today as I did when Paul Allen and I started Microsoft. But—even though it has improved a lot in the decades since then—in many ways, software is still pretty dumb.

To do any task on a computer, you have to tell your device which app to use. You can use Microsoft Word and Google Docs to draft a business proposal, but they can’t help you send an email, share a selfie, analyze data, schedule a party, or buy movie tickets. And even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.

In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

This type of software—something that responds to natural language and can accomplish many different tasks based on its knowledge of the user—is called an agent. I’ve been thinking about agents for nearly 30 years and wrote about them in my 1995 book The Road Ahead, but they’ve only recently become practical because of advances in AI.

Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.

A personal assistant for everyone

Some critics have pointed out that software companies have offered this kind of thing before, and users didn’t exactly embrace them. (People still joke about Clippy, the digital assistant that we included in Microsoft Office and later dropped.) Why will people use agents?

The answer is that they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter. Clippy has as much in common with agents as a rotary phone has with a mobile device.

An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

“Clippy was a bot, not an agent.”

To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences. Clippy was a bot, not an agent.

Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.

Imagine that you want to plan a trip. A travel bot will identify hotels that fit your budget. An agent will know what time of year you’ll be traveling and, based on its knowledge about whether you always try a new destination or like to return to the same place repeatedly, it will be able to suggest locations. When asked, it will recommend things to do based on your interests and propensity for adventure, and it will book reservations at the types of restaurants you would enjoy. If you want this kind of deeply personalized planning today, you need to pay a travel agent and spend time telling them what you want.

The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people. They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.

Health care

Today, AI’s main role in healthcare is to help with administrative tasks. AbridgeNuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.

The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment. These agents will also help healthcare workers make decisions and be more productive. (Already, apps like Glass Health can analyze a patient summary and suggest diagnoses for the doctor to consider.) Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.

These clinician-agents will be slower than others to roll out because getting things right is a matter of life and death. People will need to see evidence that health agents are beneficial overall, even though they won’t be perfect and will make mistakes. Of course, humans make mistakes too, and having no access to medical care is also a problem.

“Half of all U.S. military veterans who need mental health care don’t get it.”

Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it. For example, RAND found that half of all U.S. military veterans who need mental health care don’t get it.

AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.

AI is about to completely change how you use computers

AI Made Simple: A Beginner’s Guide to Generative Intelligence

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: ChatGPT


Sep 18 2023

Microsoft AI researchers accidentally exposed terabytes of internal sensitive data

Category: AI,Data Breachdisc7 @ 8:46 am

Researchers find a GitHub repository belonging to Microsoft’s AI research unit that exposed 38TB of sensitive data, including secret keys and Teams chat logs — Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data, including private keys and passwords 



Aug 24 2023

Google AI in Workspace Adds New Zero-Trust and Digital Sovereignty Controls

Category: AI,Zero trustdisc7 @ 1:48 pm

Google announced security enhancements to Google Workspace focused on enhancing threat defense controls with Google AI.

Image: Urupong/Adobe Stock

At a Google Cloud press event on Tuesday, the company announced Google Cloud’s rollout over the course of this year of new AI-powered data security tools bringing zero-trust features to  Workspace, Drive, Gmail and data sovereignty. The enhancements to Google Drive, Gmail, the company’s security tools for IT and security center teams and more are designed to help global companies keep their data under lock and encrypted key and security operators outrun advancing threats.

Jump to:

The Executive Guide to Zero Trust: Drivers, Objectives, and Strategic Considerations

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: Digital Sovereignty Controls


Jul 20 2023

How do you solve privacy issues with AI? It’s all about the blockchain

Category: AI,Blockchain,Information Privacydisc7 @ 9:18 am

How do you solve privacy issues with AI? It’s all about the blockchain

Data is the lifeblood of artificial intelligence (AI), and the power that AI brings to the business world — to unearth fresh insights, increase speed and efficiency, and multiply effectiveness — flows from its ability to analyze and learn from data. The more data AI has to work with, the more reliable its results will be.

Feeding AI’s need for data means collecting it from a wide variety of sources, which has raised concerns about AI gathering, processing, and storing personal data. The fear is that the ocean of data flowing into AI engines is not properly safeguarded.

Are you donating your personal data to generative AI platforms?

While protecting the data that AI tools like ChatGPT is collecting against breaches is a valid concern, it is actually only the tip of the iceberg when it comes to AI-related privacy issues. A more poignant issue is data ownership. Once you share information with a generative AI tool like Bard, who owns it?

Those who are simply using generative AI platforms to help craft better social posts may not understand the connection between the services they offer and personal data security. But consider the person who is using an AI-driven chatbot to explore treatment for a medical condition, learn about remedies for a financial crisis, or find a lawyer. In the course of the exchange, those users will most likely share some personal and sensitive information.

Every query posed to an AI platform becomes part of that platform’s data set without regard to whether or not it is personal or sensitive. ChatGPT’s privacy policy makes it clear: “When you use our Services, we collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services.” It also says: “In certain circumstances we may provide your Personal Information to third parties without further notice to you, unless required by the law
”

Looking to blockchain for data privacy solutions

While the US government has called for an “AI Bill of Rights” designed to protect sensitive data, it has yet to provide the type of regulations that protect its ownership. Consequently, Google and Microsoft have full ownership over the data that their users provide as they comb the web with generative AI platforms. That data empowers them to train their AI models, but also to get to understand you better.

Those looking for a way to gain control of their data in the age of AI can find a solution in blockchain technology. Commonly known as the foundation of cryptocurrency, blockchain can also be used to allow users to keep their personal data safe. By empowering a new type of digital identity management — known as a universal identity layer — blockchain allows you to decide how and when your personal data is shared.

Blockchain technology brings a number of factors into play that boost the security of personal data. First, it is decentralized, meaning that data is not stored in a centralized database and is not subject to its vulnerabilities with blockchain.

Blockchain also supports smart contracts, which are self-executing contracts that have the terms of an agreement written into their code. If the terms aren’t met, the contract does not execute, allowing for data stored on the blockchain to be utilized only in the way in which the owner stipulates.

Enhanced security is another factor that blockchain brings to data security efforts. The cryptographic techniques it utilizes allow users to authenticate their identity without revealing sensitive data.

Leveraging these factors to create a new type of identification framework gives users full control of who can use and view their information, for what purposes, and for how long. Once in place, this type of identity system could even be used to allow users to monetize their data, charging large language models (LLMs) like OpenAI and Google Bard to benefit from the use of personal data.

Ultimately, AI’s ongoing needs may lead to the creation of platforms where users offer their data to LLMs for a fee. A blockchain-based universal identity layer would allow the user to choose who gets to use it, toggling access on and off at will. If you decide you don’t like the business practices Google has been employing over the past two months, you can cut them off at the source.

That type of AI model illustrates the power that comes from securing data on a decentralized network. It also reveals the killer use case of blockchain that is on the horizon.

Image credittampatra@hotmail.com/depositphotos.com

Aaron Rafferty is the CEO of Standard DAO and Co-Founder of BattlePACs, a subsidiary of Standard DAO. BattlePACs is a technology platform that transforms how citizens engage in politics and civil discourse. BattlePACs believes participation and conversations are critical to moving America toward a future that works for everyone.

Blockchain and Web3: Building the Cryptocurrency, Privacy, and Security Foundations of the Metaverse

InfoSec books | InfoSec tools | InfoSec services

Tags: AI privacy, blockchain, Blockchain and Web3


Apr 05 2023

HOW TO CREATE UNDETECTABLE MALWARE VIA CHATGPT IN 7 EASY STEPS BYPASSING ITS RESTRICTIONS

Category: AI,ChatGPT,MalwareDISC @ 9:35 am

There is evidence that ChatGPT has helped low-skill hackers generate malware, which raises worries about the technology being abused by cybercriminals. ChatGPT cannot yet replace expert threat actors, but security researchers claim there is evidence that it can assist low-skill hackers create malware.

Since the introduction of ChatGPT in November, the OpenAI chatbot has assisted over 100 million users, or around 13 million people each day, in the process of generating text, music, poetry, tales, and plays in response to specific requests. In addition to that, it may provide answers to exam questions and even build code for software.

It appears that malicious intent follows strong technology, particularly when such technology is accessible to the general people. There is evidence on the dark web that individuals have used ChatGPT for the development of dangerous material despite the anti-abuse constraints that were supposed to prevent illegitimate requests. This was something that experts feared would happen. Because of thisexperts from forcepoint came to the conclusion that it would be best for them not to create any code at all and instead rely on only the most cutting-edge methods, such as steganography, which were previously exclusively used by nation-state adversaries.

The demonstration of the following two points was the overarching goal of this exercise:

  1. How simple it is to get around the inadequate barriers that ChatGPT has installed.
  2. How simple it is to create sophisticated malware without having to write any code and relying simply on ChatGPT

Initially ChatGPT informed him that malware creation is immoral and refused to provide code.

  1. To avoid this, he generated small codes and manually assembled the executable.  The first successful task was to produce code that looked for a local PNG greater than 5MB. The design choice was that a 5MB PNG could readily hold a piece of a business-sensitive PDF or DOCX.

 2. Then asked ChatGPT to add some code that will encode the found png with steganography and would exfiltrate these files from computer, he asked ChatGPT for code that searches the User’s Documents, Desktop, and AppData directories then uploads them to google drive.

3. Then he asked ChatGPT to combine these pices of code and modify it to to divide files into many “chunks” for quiet exfiltration using steganography.

4. Then he submitted the MVP to VirusTotal and five vendors marked the file as malicious out of sixty nine.

5. This next step was to ask ChatGPT to create its own LSB Steganography method in my program without using the external library. And to postpone the effective start by two minutes.https://www.securitynewspaper.com/2023/01/20/this-new-android-malware-allows-to-hack-spy-on-any-android-phone/embed/#?secret=nN5212UQrX#?secret=8AnjYiGI6e

6. The another change he asked ChatGPT to make was to obfuscate the code which was rejected. Once ChatGPT rejected hisrequest, he tried again. By altering his request from obfuscating the code to converting all variables to random English first and last names, ChatGPT cheerfully cooperated. As an extra test, he disguised the request to obfuscate to protect the code’s intellectual property. Again, it supplied sample code that obscured variable names and recommended Go modules to construct completely obfuscated code.

7. In next step he uploaded the file to virus total to check

And there we have it; the Zero Day has finally arrived. They were able to construct a very sophisticated attack in a matter of hours by only following the suggestions that were provided by ChatGPT. This required no coding on our part. We would guess that it would take a team of five to ten malware developers a few weeks to do the same amount of work without the assistance of an AI-based chatbot, particularly if they wanted to avoid detection from all detection-based suppliers.

ChatGPT for Startups

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: ChatGPT malware


Mar 20 2023

Most security pros turn to unauthorized AI tools at work

Category: AI,ChatGPTDISC @ 10:52 am

The research demonstrates that embracing automation in cybersecurity leads to significant business benefits, such as addressing talent gaps and effectively combating cyber threats. According to the survey, organizations will continue investing in cybersecurity automation in 2023, even amid economic turbulence.

“As organizations look for long-term solutions to keep pace with increasingly complex cyberattacks, they need technologies that will automate time-consuming, repetitive tasks so security teams have the bandwidth to focus on the threats that matter most,” said Marc van Zadelhoff, CEO, Devo. “This report confirms what we’re already hearing from Devo customers: adopting automation in the SOC results in happier analysts, boosted business results, and more secure organizations.”

Security pros are using AI tools without authorization

According to the study, security pros suspect their organization would stop them from using unauthorized AI tools, but that’s not stopping them.

  • 96% of security pros admit to someone at their organization using AI tools not provided by their company – including 80% who cop to using such tools themselves.
  • 97% of security pros believe their organizations are able to identify their use of unauthorized AI tools, and more than 3 in 4 (78%) suspect their organization would put a stop to it if discovered.

Adoption of automation in the SOC

Organizations fail to adopt automation effectively, forcing security pros to use rogue AI tools to keep up with workloads.

  • 96% of security professionals are not fully satisfied with their organization’s use of automation in the SOC.
  • Reasons for dissatisfaction with SOC automation varied from technological concerns such as the limited scalability and flexibility of the available solutions (42%) to financial ones such as the high costs associated with implementation and maintenance (39%). But for many, concerns go back to people: 34% cite a lack of internal expertise and resources to manage the solution as a reason they are not satisfied.
  • Respondents indicated that they would opt for unauthorized tools due to the better user interface (47%), more specialized capabilities (46%), and allow for more efficient work (44%).

Investing in cybersecurity automation

Security teams will prioritize investments in cybersecurity automation in 2023 to solve organizational challenges, despite economic turbulence and widespread organizational cost-cutting.

  • 80% of security professionals predict an increase in cybersecurity automation investments in the coming year, including 55% who predict an increase of more than 5%.
  • 100% of security professionals reported positive business impacts as a result of using automation in cybersecurity, citing increased efficiency (70%) and financial gains (65%) as primary benefits.

Automation fills widening talent gaps

Adopting automation in the SOC helps organizations combat security staffing shortages in a variety of ways.

  • 100% of respondents agreed that automation would be helpful to fill staffing gaps in their team.
  • Incident analysis (54%), landscape analysis of applications and data sources (54%), and threat detection and response (53%) were the most common ways respondents said automation could make up for staffing shortages.

AI

A Guide to Combining AI Tools Like Chat GPT, Quillbot, and Midjourney for Crafting Killer Fiction and Nonfiction (Artificial Intelligence Uses & Applications)

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: AI tools, AI Tools Like Chat GPT


Mar 19 2023

Researcher create polymorphic Blackmamba malware with ChatGPT

Category: AI,ChatGPTDISC @ 3:44 pm

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.

The malware can target Windows, macOS and Linux devices.

HYAS Institute researcher and cybersecurity expert, Jeff Sims, has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.

black mamba snake coiled up

This should not come as a surprise, as in January of this year, cybersecurity researchers at CyberArk also reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers were able to create the polymorphic malware by bypassing the content filters in ChatGPT, using an authoritative tone.

As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters

Once it captures the data, Blackmamba employs MS Teams webhook to transfer it to the attacker’s Teams channel, where it is “analyzed, sold on the dark web, or used for other nefarious purposes,” according to the report.

Jeff used MS Teams because it enabled him to gain access to an organization’s internal sources. Since it is connected to many other vital tools like Slack, identifying valuable targets may be more manageable.

Jeff created a polymorphic keylogger, powered by the AI-based ChatGPT, that can modify the malware randomly by examining the user’s input, leveraging the chatbot’s language capabilities.

The researcher was able to produce the keylogger in Python 3 and create a unique Python script by running the python exec() function every time the chatbot was summoned. This means that whenever ChatGPT/text-DaVinci-003 is invoked, it writes a unique Python script for the keylogger.

This made the malware polymorphic and undetectable by EDRs. Attackers can use ChatGPT to modify the code to make it more elusive. They can even develop programs that malware/ransomware developers can use to launch attacks.

ChatGPT Powered Blackmamba Malware Can Bypass EDR Filters
Researcher’s discussion with ChatGPT

Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility. This can convert Python code into .exe files that can operate on various devices, such as macOS, Windows, and Linux systems. Additionally, the malware can be shared within the targeted environment through social engineering or email.

It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.

What is Polymorphic malware?

Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult to detect and analyze by traditional signature-based antivirus software because the malware appears different each time it infects a system, even though it performs the same malicious functions.

Polymorphic malware typically achieves its goal by using various obfuscation techniques such as encryption, code modification, and different compression methods. The malware can also mutate in real time by generating new code and unique signatures to evade detection by security software.

The use of polymorphic malware has become more common in recent years as cybercriminals seek new and innovative ways to bypass traditional security measures. The ability to morph and change its code makes it difficult for security researchers to develop effective security measures to prevent attacks, making it a significant threat to organizations and individuals alike.

Chat GPT: Is the Future Already Here?

AI-Powered ‘BlackMamba’ Keylogging Attack Evades Modern EDR Security

BlackMamba GPT POC Malware In Action

Professional Certificates, Bachelors & Masters Program

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: ChatGPT


Mar 15 2023

OpenAI Announces GPT-4, the Successor of ChatGPT

Category: AIDISC @ 10:18 am

A powerful new AI model called GPT-4 has been released recently by OpenAI, which is capable of comprehending images and texts. The company describes this as the next-stage milestone in its effort to scale up deep learning.

In November 2022, ChatGPT was launched and has since been used by millions of people worldwide. The all-new GPT-4 is now available through ChatGPT Plus, while it is the paid GPT subscription option of OpenAI available for $20 per month.

However, currently, there is a cap on the usage amount, and to access the API, the developers need to be registered on a waitlist.

The GPT-4 can also perform a number of tasks at once, with a maximum word count of 25,000. That is eight times more than the ChatGPT can.

GPT-4

Pricing & Implementation

Here below, we have mentioned the pricing tags:-

  • For 1,000 “prompt” tokens (Raw text), which is about 750 words will cost $0.03.
  • For 1,000 “completion” tokens (Raw text), which is about 750 words will cost $0.06.

A prompt token is part of a word that has been fed into GPT-4 in order for it to function. While the content that is generated by the GPT-4 is referred to as completion tokens.

In addition, Microsoft recently announced that it is using the GPT-4 version for its Bing Chat chatbot. Since the investment from Microsoft into OpenAI has amounted to $10 billion.

Stripe is another early adopter of GPT-4, which uses it to scan business websites. By doing so, it provides a summary of the results to customer service staff as part of the scanning process.

A new subscription tier for language learning has been developed by Duolingo based on the GPT-4. In order to provide financial analysts with access to information retrieved from company documents, Morgan Stanley is creating a GPT-4-powered system.

It appears that Khan Academy is also working towards automating some sort of tutoring process using GPT-4 that can help students.

A simulated bar exam was given to GPT-4, and it performed particularly well in it, as it managed to achieve scoring around the top 10% of test takers. Interestingly, GPT-3.5, on the other hand, scored in the bottom 10% of the group.  

GPT-4 Action

The GPT-4 algorithm is a form of generative artificial intelligence, similar to the ChatGPT algorithm. With the help of algorithms and predictive text, the generating AI constructs the content based on the prompts that are presented by the user. 

As you can see in the image below, GPT-4 generates recipes based on images that have been uploaded.

The reasoning skills of GPT-4 are more advanced than those of ChatGPT. In order to find available meeting times, the model can, for instance, search for three schedules with three available times.

In short, the GPT-4 is much smarter and more capable as compared to the GPT-3.5. GPT-4 is capable of receiving and processing textual and visual information, one of its most impressive features.

At the moment, it is not yet possible for OpenAI customers to utilize the image understanding capability of GPT-4. However, currently, OpenAI is testing this technology with only one partner, Be My Eyes.

OpenAI has warned that, just like its predecessors, the GPT-4 is still not entirely reliable. This model needs to be further improved by the entire community by building on top of the model, exploring it, and contributing to it through collective efforts. 

There is still a lot of work to be done, and the company affirmed that they are looking forward to working together to improve it.

CHATGPT-4 Revealed 500 Prompts to Ride the AI Wave (Mastering ChatGPT-4 Prompts & Beyond)

Tags: ChatGPT, GPT-4


Mar 09 2023

ChatGPT for Offensive Security

Category: AIDISC @ 10:39 am

ChatGPT for Offensive Security – via SANS Institute

Can ChatGPT (AI) be used for offensive security?

It is possible to use AI for offensive security, just as it is possible to use any technology for malicious purposes. However, the use of AI for offensive security raises significant ethical concerns and legal considerations.

AI could be used to automate and scale attacks, such as phishing, malware propagation, or social engineering. It could also be used to analyze large amounts of data to identify vulnerabilities or weaknesses in security systems, and to develop targeted attacks.

However, the use of AI for offensive security could also have unintended consequences, such as collateral damage or false positives. Furthermore, it raises concerns about accountability and responsibility, as it may be difficult to trace the origin of an attack that is automated and conducted by a machine learning system.

Overall, the use of AI for offensive security is a complex and controversial issue that requires careful consideration of the ethical and legal implications. It is important to always use technology responsibly and ethically.

Chat GPT is just the tip of the iceberg! 15 Artificial Intelligence tools that may be useful to you:
1.Midjourney: a tool that creates images from textual descriptions, similar to OpenAI’s DALL-E and Stable Diffusion.
2. RunwayML: Edit videos in real time, collaborate and take advantage of over 30 magical AI tools.
3. Otter AI: Transform audio into text with high accuracy. Use this tool for meeting notes, content creation and much more.
4. Copy.AI: This is the first copyright platform powered by artificial intelligence. This tool helps generate content for websites, blog posts, or social media posts, helping increase conversions and sales.
5. Murf AI: Convert text to audio: generate studio-quality narrations in minutes. Use Murf’s realistic AI voices for podcasts, videos and all your professional presentations.
6. Flow GPT: Share, discover and learn about the most useful ChatGPT prompts.
7. Nocode.AI: The Nocode platform is a way to create AI solutions without ever writing a single line of code. It’s a great way to quickly test ideas, create new projects, and launch businesses and new products faster.
8. Supernormal: This tool helps create incredible meeting notes without lifting a finger.
9. TLDRthis: This AI-based website helps you summarize any part of a text into concise and easy-to-digest content, so that you can rid yourself of information overload and save time.
10. TheGist: Summarize any Slack channel or conversation with just one click! This AI analyzes Slack conversations and instantly creates a brief summary for you.
11. Sitekick: Create landing pages with AI by telling it what you want via text.
12. Humanpal: Create Avatars with ultra-realistic human appearances!
13. ContentBot: – Write content for articles, ads, products, etc.
14. Synthesia – Create a virtual presenter that narrates your text for you.
Synthesia is a video creation platform using AI. It’s possible to create videos in 120 languages, saving up to 80% of your time and budget.
15. GliaCloud: This tool converts your text into video. Generate videos for news content, social media posts, live sports events, and statistical data in minutes.

The role of human insight in AI-based cybersecurity

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

The Art of Prompt Engineering with chatGPT: A Hands-On Guide for using chatGPT

Previous posts on AI

InfoSec Threats | InfoSec books | InfoSec tools | InfoSec services

Tags: AI-based cybersecurity, ChatGPT, human insight, Offensive security


Feb 08 2023

Developers Created AI to Generate Police Sketches. Experts Are Horrified

Category: AIDISC @ 11:56 pm

Police forensics is already plagued by human biases. Experts say AI will make it even worse.

Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can create “hyper-realistic” police sketches of a suspect based on user inputs. 

The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program’s purpose is to cut down the time it usually takes to draw a suspect of a crime, which is “around two to three hours,” according to a presentation uploaded to the internet

“We haven’t released the product yet, so we don’t have any active users at the moment, Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”

AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.     

“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”

The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait. 

For more details: Developers Created AI to Generate Police Sketches. Experts Are Horrified

https://www.vice.com/en/article/qjk745/ai-police-sketches


Next Page »