InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
CrowdStrike recently announced an agreement to acquire Seraphic Security, a browser-centric security company, in a deal valued at roughly $420 million. This move, coming shortly after CrowdStrike’s acquisition of identity authorization firm SGNL, highlights a strategic effort to eliminate one of the most persistent gaps in enterprise cybersecurity: visibility and control inside the browser — where modern work actually happens.
Why Identity and Browser Security Converge
Modern attackers don’t respect traditional boundaries between systems — they exploit weaknesses wherever they find them, often inside authenticated sessions in browsers. Identity security tells you who should have access, while browser security shows what they’re actually doing once authenticated.
CrowdStrike’s CEO, George Kurtz, emphasized that attackers increasingly bypass malware installation entirely by hijacking sessions or exploiting credentials. Once an attacker has valid access, static authentication — like a single login check — quickly becomes ineffective. This means security teams need continuous evaluation of both identity behavior and browser activity to detect anomalies in real time.
In essence, identity and browser security can’t be siloed anymore: to stop modern attacks, security systems must treat access and usage as joined data streams, continuously monitoring both who is logged in and what the session is doing.
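To make the "joined data streams" idea concrete, here is a minimal Python sketch that joins a hypothetical identity event with a hypothetical browser-session event and scores the combined risk. All names, fields, and thresholds are illustrative assumptions for this post, not CrowdStrike's Falcon API.

```python
# Minimal sketch: scoring a session by combining who logged in (identity)
# with what the session is doing (browser activity). Hypothetical schema.
from dataclasses import dataclass

@dataclass
class IdentityEvent:
    user: str
    action: str        # e.g. "login", "mfa_challenge"
    geo: str           # country code of the login

@dataclass
class SessionEvent:
    user: str
    action: str        # e.g. "download", "paste_to_ai_tool"
    bytes_moved: int

KNOWN_GEOS = {"alice": {"US"}}  # illustrative baseline of usual locations

def risk_score(ident: IdentityEvent, session: SessionEvent) -> int:
    """Combine identity and in-session signals into one score."""
    score = 0
    if ident.geo not in KNOWN_GEOS.get(ident.user, set()):
        score += 40                      # login from an unusual location
    if session.action == "paste_to_ai_tool":
        score += 30                      # sensitive data sent to an AI tool
    if session.bytes_moved > 50_000_000:
        score += 30                      # large in-session data movement
    return score

login = IdentityEvent("alice", "login", "RO")
activity = SessionEvent("alice", "paste_to_ai_tool", 80_000_000)
print(risk_score(login, activity))  # 100 -> flag the session for review
```

Neither signal alone is conclusive here; it is the correlation of an unusual login with risky in-session behavior that pushes the score over a review threshold.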
AI Raises the Stakes — and the Signal Value
The rise of AI doesn’t create new vulnerabilities per se, but it amplifies existing blind spots and creates new patterns of activity that traditional tools can easily miss. AI tools — from generative assistants to autonomous agents — are heavily used through browsers or browser-like applications. Without visibility at that layer, AI interactions can bypass controls, leak sensitive data, or facilitate automated attacks without triggering legacy endpoint defenses.
Instead of trying to ban AI tools — a losing battle — CrowdStrike aims to observe and control AI usage within the browser itself. In this context, AI usage becomes a high-value signal that acts as a proxy for risky behavior: what data is being queried, where it’s being sent, and whether it aligns with policy. This greatly enhances threat detection and risk scoring when combined with identity and endpoint telemetry.
The Bigger Pattern
Taken together, the Seraphic and SGNL acquisitions reflect a broader architectural shift at CrowdStrike: expanding telemetry and intelligence not just on endpoints but across identity systems and browser sessions. By aggregating these signals, the Falcon platform can trace entire attack chains — from initial access through credential use, in-session behavior, and data exfiltration — rather than reacting piecemeal to isolated alerts.
This pattern mirrors the reality that attack surfaces are fluid and exist wherever users interact with systems, whether on a laptop endpoint or inside an authenticated browser session. The goal is not just prevention, but continuous understanding and control of risk across a human or machine’s entire digital journey.
Addressing an Enterprise Security Blind Spot
The browser is arguably the new front door of enterprise IT: it’s where SaaS apps live, where data flows, and — increasingly — where AI tools operate. Because traditional security technologies were built around endpoints and network edges, developers often overlooked the runtime behavior of browsers — until now. CrowdStrike’s acquisition of Seraphic directly addresses this blind spot by embedding security inside the browser environment itself.
This approach goes beyond simple URL filtering or mandating a locked-down corporate browser: it provides runtime visibility and policy enforcement in any browser, across managed and unmanaged devices. By correlating this with identity and endpoint data, security teams gain unprecedented context for detecting session-based threats like hijacks, credential abuse, or misuse of AI tools.
This strategic push makes a lot of sense. For too long, security architectures treated the browser as a perimeter, rather than as a core execution environment where work and risk converge. As enterprises embrace SaaS, remote work, and AI-driven workflows, attackers have naturally gravitated to these unmonitored entry points. CrowdStrike’s focus on continuous identity evaluation plus in-session browser telemetry is a pragmatic evolution of zero-trust principles — not just guarding entry points, but consistently watching how access is used. Combining identity, endpoint, and browser signals moves defenders closer to true context-aware security, where decisions adapt in real time based on actual behavior, not just static policies.
However, executing this effectively at scale — across diverse browser types, BYOD environments, and AI applications — will be complex. The industry will be watching closely to see whether this translates into tangible reductions in breaches or just a marketing narrative about data correlation. But as attackers continue to blur boundaries between identity abuse and session exploitation, this direction seems not only logical but necessary.
At DISC InfoSec, we help organizations navigate this landscape by aligning AI risk management, governance, security, and compliance into a single, practical roadmap. Whether you are experimenting with AI or deploying it at scale, we help you choose and operationalize the right frameworks to reduce risk and build trust. Learn more at DISC InfoSec.
For more than a decade, DuckDuckGo has rallied against Google’s extensive online tracking. Now the privacy-focused web search and browser company has another target in its sights: the sprawling, messy web of data brokers that collect and sell your data every single day.
Today, DuckDuckGo is launching a new browser-based tool that automatically scans data broker websites for your name and address and requests that they be removed. Gabriel Weinberg, the company’s founder and CEO, says the personal-information-removal product is the first of its kind where users don’t have to submit any of their details to the tool’s owners. The service will make the requests for information to be removed and then continually check if new records have been added, Weinberg says. “We’ve been doing it to automate it completely end-to-end, so you don’t have to do anything.”
The personal-information removal is part of DuckDuckGo’s first subscription service, called Privacy Pro, and is bundled with the firm’s first VPN and an identity-theft-restoration service. Weinberg says the subscription offering, which is initially available only in the US for $9.99 per month or $99.99 per year, is part of an effort to add to the privacy-focused tools it provides within its web browser and search engine. “There’s only so much we can do in that browsing loop, there’s things happening outside of that, and a big one is data brokers, selling information scraped from different places,” Weinberg says.
DuckDuckGo’s personal-information-removal tool—for now, at least—is taking the privacy fight to people-search websites, which allow you to look up names, addresses, and some details of family members. However, Weinberg says DuckDuckGo has created it so the company isn’t gathering details about you, and it is built on technology from Removaly, which the company acquired in 2022.
Ahead of its launch, the company demonstrated how the system works and some of the engineering efforts that went into its creation. On the surface, the removal tool is straightforward: You access it through the company’s browser and enter some information about yourself, such as your name, year of birth, and any addresses. It then scans 53 data broker websites for results linked to you and requests those results to be wiped. (All 53 data brokers included have opt-out schemes that allow people to make requests.) A dashboard shows updates about what has been removed and when it will next scan those websites again, in case new records have been added.
Under the hood, things are more complex. Greg Fiorentino, a product director at DuckDuckGo, says when you enter your personal data into the system, it’s all saved in an encrypted database on your computer (the tool doesn’t work on mobile), and the company isn’t sent this information. “It doesn’t go to DuckDuckGo servers at all,” he says.
For each of the data brokers’ websites, Fiorentino says, DuckDuckGo looked at its URL structure: for instance, a search-results URL may encode the name, location, and other personal details being queried. When the personal-information tool looks for you on these websites, it constructs a URL with the details you have entered.
“Each of the 53 sites we cover has a slightly different structure,” Fiorentino says. “We have a template URL string that we substitute the data in from the user to search. There are lots of different nuances and things that we need to be able to handle to actually match the data correctly.”
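As a rough illustration of that templating approach, here is a minimal Python sketch. The broker domain and query-parameter names are hypothetical, since the actual per-broker templates DuckDuckGo maintains are not public.

```python
# Sketch: substituting a user's details into per-broker URL templates.
# The broker site and parameter names below are made up for illustration.
from urllib.parse import quote_plus

BROKER_TEMPLATES = {
    "example-people-search.com":
        "https://example-people-search.com/find?fn={first}&ln={last}&state={state}",
}

def build_search_urls(first: str, last: str, state: str) -> list[str]:
    """Fill each broker's URL template with the (URL-encoded) user details."""
    return [
        tpl.format(first=quote_plus(first), last=quote_plus(last),
                   state=quote_plus(state))
        for tpl in BROKER_TEMPLATES.values()
    ]

print(build_search_urls("Jane", "Doe", "CA"))
```

The hard part, as Fiorentino notes, is that every one of the 53 sites has a slightly different structure, so each needs its own template and its own matching logic for the results that come back.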
During testing, the company says, it found most people have between 15 and 30 records on the data broker sites it checks, although the highest was around 150. Weinberg says he added six addresses to be removed from websites. “I found hits on old stuff, and even in the current address, which I really tried to hide a bit from getting spam at, it’s still out there somehow,” Weinberg says. “It’s really hard to avoid your information getting out there.”
Once the scan for records has been completed, the DuckDuckGo system, using a similar deconstruction of each of the data broker websites, will then automatically make requests for the records to be removed, the team working on the product says. Fiorentino says some opt-outs will happen within hours, whereas others can take weeks to remove the data. The product director says that in the future, the tool may be able to remove data from more websites, and the company is looking at potentially including more sensitive data in the opt-outs, such as financial information.
Various personal-information-removal services exist on the web, and they can vary in what they remove from websites or the services they provide. Not all are trustworthy. Recently, Mozilla, the creator of the Firefox browser, stopped working with identity protection service Onerep after investigative journalist Brian Krebs revealed that the founder of Onerep also founded dozens of people-search websites in recent years.
DuckDuckGo’s subscription service marks the first time the company has started charging for a product—its browser and search engine are free to use, and the firm makes its money from contextual ads. Weinberg says that, because subscriptions are purchased through Apple’s App Store, Google Play, or with payment provider Stripe, details about who subscribes are not transferred to DuckDuckGo’s servers. A random ID is created for each user when they sign up, so people don’t have to create an account or hand DuckDuckGo their payment information. The company says it doesn’t have access to people’s Apple IDs or Google account details.
For its identity-theft-restoration service, DuckDuckGo says it is working with identity protection service Iris, which uses trained staff to help with fraudulent banking activity, document replacement, emergency travel, and more. DuckDuckGo says no information is shared between it and Iris.
Weinberg says that while the company’s main focus is providing free and easy-to-use privacy tools to people, running a VPN and the removal tool requires a different business model. “It just takes a lot of bandwidth,” he says of the VPN.
Broadly, the VPN industry, which allows people to hide their web traffic from internet providers and avoid geographic restrictions on streaming, has historically been full of companies with questionable records when it comes to privacy and people’s data. Free VPNs have long been a privacy nightmare.
DuckDuckGo says its VPN, which it built in-house and which uses the WireGuard protocol, does not store any logs of people’s activities and can be used on up to five devices at once. “We don’t have any record of website visits, DNS requests, IP addresses connected, or session lengths,” the company says in its documentation. The VPN runs through its browser, with 13 location options at launch, but shields all internet traffic passing through your phone or computer.
The company says it is conducting a third-party audit of the VPN to allow its claims to be scrutinized, and it will publish the full audit once it’s complete. “We really wanted to do something in the VPN space for a long time, we just didn’t have the resources and people to do it,” Weinberg says. “We looked at partnering in different places. If we have to completely trust a partner versus building something where we can make it anonymous, we decided we would want to do it ourselves.”
Google has fixed the sixth Chrome zero-day bug that was exploited in the wild this year. The flaw, identified as CVE-2023-6345, is classified as an integer overflow in Skia, an open-source 2D graphics library written in C++.
“Google is aware that an exploit for CVE-2023-6345 exists in the wild,” Google said.
There are several potential risks associated with this high-severity zero-day vulnerability, including the execution of arbitrary code and crashes.
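To see why an integer overflow in a size calculation is dangerous, consider the following illustrative sketch. It is not Skia's actual code; Python integers do not overflow, so the example masks the result to simulate C-style unsigned 32-bit arithmetic, which is where this bug class lives.

```python
# Illustration of the integer-overflow bug class behind flaws like
# CVE-2023-6345: a size computation wraps around 2**32, undersizing a buffer.
MASK32 = 0xFFFFFFFF

def alloc_size_32bit(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Compute width * height * bpp as a C-style unsigned 32-bit integer."""
    return (width * height * bytes_per_pixel) & MASK32

w, h = 0x10000, 0x10000            # a 65536 x 65536 image
size = alloc_size_32bit(w, h)
print(size)                        # 0: the product wrapped around 2**32
# A buffer allocated with this wrapped size is far too small for the pixel
# data later written into it, corrupting adjacent memory.
```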
On November 24, 2023, Benoît Sevens and Clément Lecigne from Google’s Threat Analysis Group reported the issue.
Google has updated the Stable channel to version 119.0.6045.199 for Mac and Linux and 119.0.6045.199/.200 for Windows, addressing the year’s sixth actively exploited zero-day vulnerability. The update will roll out over the coming days and weeks.
In total, this update fixes six high-severity security vulnerabilities.
Details Of The Vulnerabilities Addressed
Type Confusion in Spellcheck is a high-severity bug that is being tracked as CVE-2023-6348. Mark Brand from Google Project Zero reported the issue.
Use after free in Mojo is the next high-severity bug, tagged as CVE-2023-6347. 360 Vulnerability Research Institute’s Leecraso and Guang Gong reported the issue, and they were rewarded with a bounty of $31,000.
Use after free in WebAudio is a high-severity issue identified as CVE-2023-6346. Huang Xilin of Ant Group Light-Year Security Lab reported it and received a $10,000 reward.
Out-of-bounds memory access in libavif is a high-severity bug tagged as CVE-2023-6350. Fudan University reported it and received a $7,000 reward.
Use after free in libavif is a high-severity bug identified as CVE-2023-6351. Fudan University reported this one as well and received another $7,000 reward.
Update Now
To prevent exploitation, Google strongly advises users to update their Chrome browser right away. Follow these simple steps to update it:
Go to the Settings option.
Then select About Chrome.
Wait, as Chrome will automatically fetch and download the latest update.
Once the installation process completes, you have to restart Chrome.
IN-DEPTH ANALYSIS: NAVIGATING THE PERILS OF CVE-2023-5218 IN GOOGLE CHROME
The digital realm, while offering boundless possibilities, is also a fertile ground for myriad cybersecurity threats. One such peril that has recently come to light is the Use-After-Free vulnerability in Google Chrome, specifically identified as CVE-2023-5218. This vulnerability not only poses a significant threat to user data and system integrity but also opens a Pandora’s box of potential cyber-attacks and exploitations.
UNRAVELING THE USE-AFTER-FREE VULNERABILITY
A use-after-free vulnerability is a type of cybersecurity flaw that surfaces when a program continues to utilize memory after it has been freed or deleted. This flaw can allow attackers to execute arbitrary code or potentially gain unauthorized access to a system. CVE-2023-5218, identified within Google Chrome, was noted to be potentially exploitable to perform such malicious actions, thereby putting users’ data and privacy at substantial risk.
CVE-2023-5218 was unveiled to the public through various cybersecurity platforms and researchers who detected unusual activities and potential exploitation trails leading back to this particular flaw. This vulnerability was identified to be present in a specific Chrome component, prompting Google to release a flurry of updates and patches to mitigate the associated risks.
THE EXPLOIT MECHANICS
Exploiting CVE-2023-5218 allows attackers to manipulate the aforementioned ‘freed’ memory space, enabling them to execute arbitrary code within the context of the affected application. In the context of Chrome, this could potentially allow attackers unauthorized access to sensitive user data, such as saved passwords or personal information, or even navigate the browser to malware-laden websites without user consent.
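The mechanics are easier to see with a toy model. The following Python sketch simulates a heap whose freed slots are recycled for new allocations; it illustrates the use-after-free bug class in general (Python has no manual memory management), not Chrome's Site Isolation code.

```python
# Conceptual sketch of use-after-free: a freed slot is recycled, and a stale
# handle then reads attacker-controlled data. Toy model, not Chrome code.
class ToyHeap:
    def __init__(self):
        self.slots: dict[int, object] = {}
        self.free_list: list[int] = []
        self.counter = 0

    def alloc(self, obj: object) -> int:
        slot = self.free_list.pop() if self.free_list else self.counter
        if slot == self.counter:
            self.counter += 1
        self.slots[slot] = obj
        return slot

    def free(self, slot: int) -> None:
        self.free_list.append(slot)       # slot is recycled, not wiped

    def read(self, slot: int) -> object:
        return self.slots[slot]           # no liveness check: the bug

heap = ToyHeap()
handle = heap.alloc({"owner": "renderer", "data": "session cookie"})
heap.free(handle)                                          # object freed...
heap.alloc({"owner": "attacker", "data": "fake vtable"})   # ...slot reused
print(heap.read(handle))   # stale handle now reads the attacker's object
```

In a real exploit the attacker "grooms" the heap so that their own data lands in the freed allocation, then triggers the stale reference to hijack control flow or leak data.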
THE POTENTIAL IMPACT
The exploitation of CVE-2023-5218 could have a multifold impact:
Data Theft: Sensitive user data, including login credentials, personal information, and financial details, could be compromised.
System Control: Attackers could gain control over the affected system, using it to launch further attacks or for other malicious purposes.
Malware Spread: By redirecting browsers to malicious websites, malware could be injected into users’ systems, further expanding the impact of the attack.
TECHNICAL INSIGHTS INTO CVE-2023-5218
Vulnerability Class: Use After Free
Impact: Confidentiality, Integrity, and Availability
The vulnerability is rooted in the improper handling of memory in the Site Isolation component of Google Chrome. The flaw arises from referencing memory after it has been freed, which can lead to program crashes, unexpected value utilization, or arbitrary code execution. The vulnerability is classified under CWE-416 and CWE-119, indicating its potential to improperly restrict operations within the bounds of a memory buffer and its susceptibility to use after free exploits.
MITIGATION AND COUNTERMEASURES
The primary mitigation strategy recommended is upgrading to Google Chrome version 118.0.5993.70, which eliminates this vulnerability. However, considering the potential risks associated with such vulnerabilities, organizations and individual users are advised to:
Regularly update and patch software to safeguard against known vulnerabilities.
Employ robust cybersecurity practices, including using security software and adhering to safe browsing practices.
Educate users on recognizing and avoiding potential phishing attempts or malicious sites that might exploit such vulnerabilities.
CONCLUSION
The identification and subsequent mitigation of CVE-2023-5218 underscore the perpetual battle between cybersecurity professionals and cyber adversaries. While this vulnerability has been addressed in the latest Chrome update, it serves as a potent reminder of the criticality of maintaining up-to-date systems and employing prudent cybersecurity practices. As we navigate through the digital era, the complexity and sophistication of cyber threats continue to evolve, making vigilance and preparedness crucial in ensuring secure digital interactions.
When we need to search for something, Google or Bing is usually the first choice that comes to mind. A deep web search engine is the alternative for content those engines cannot reach. Here is the deep web search engine list.
Unlike a deep web search engine, Google and Bing will not surface the hidden information that lives in the deep web.
Google can also track nearly every move you make on the Internet while you search.
If you don’t want Google to collect your personal information and record your online activities, you should maintain your anonymity online.
The “deep web,” also known as the “invisible web,” refers to a vast repository of underlying content, such as documents in online databases, that general-purpose web crawlers cannot reach.
Deep web content is estimated to be 500 times larger than what normal search engines index, yet it has remained mostly untapped due to the limitations of traditional search engines.
Since most personal profiles, public records, and other people-related documents are stored in databases and not on static web pages, most of the higher-quality information about people is simply “invisible” to a regular search engine, but a deep web search engine can reach it.
Deep Web Search Engine Links
Pipl
MyLife
Yippy
SurfWax
Wayback Machine
Google Scholar
DuckDuckGo
Fazzle
not Evil
Start Page
Spokeo
InfoTracer
Why Doesn’t Google Provide Deep Search Engine Results?
Basically, deep web and dark web content is not indexed by normal search engines such as Google and Bing. Dark web sites (.onion) are unindexed, but a few of them can be crawled via a deep web search engine.
Google will not return search results for content that is not indexed on the world wide web; much of that content is hidden behind HTML forms.
Regular searches run against Google’s index of interconnected public web servers, where content publishers optimize their pages to rank better for Google’s users.
When you access the dark web, you’re not surfing the interconnected servers you regularly interact with; instead, everything stays internal on the Tor network, which provides security and privacy to everyone equally.
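As a concrete illustration of staying inside the Tor network, clients reach hidden services through a locally running Tor daemon's SOCKS proxy rather than the regular DNS and web infrastructure. This sketch assumes Tor is listening on its default port 9050 and that the `requests[socks]` extra (PySocks) is installed; the .onion address is a placeholder, not a real site.

```python
# Sketch: routing a request through a local Tor client's SOCKS proxy.
# Requires a running Tor daemon and `pip install requests[socks]`.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS resolves inside Tor
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("http://exampleonionaddress.onion",  # placeholder address
                    proxies=TOR_PROXIES, timeout=60)
print(resp.status_code)
```

The `socks5h` scheme matters: it resolves hostnames inside the Tor circuit, so even DNS lookups never leave the network.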
According to researchers, only 4% of the Internet is visible to the general public; the remaining 96% of websites and data is hidden in the deep web.
The deep web also hosts many illegal activities, including drug and weapons dealing, highly sophisticated hacking tools, illegal pornography, government and military secrets, and more.
Robots Exclusion:
The robots.txt file, which usually lives in the root directory of a site, tells search robots which files and directories should not be indexed.
Hence the name “robots exclusion file.” If this file is set up, it will prevent certain pages from being indexed, making them invisible to searchers. Blogging platforms commonly offer this feature.
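For illustration, here is how robots exclusion works in practice, checked with Python's standard-library parser. The robots.txt content and URLs below are a made-up example.

```python
# Sketch: a well-behaved crawler consults robots.txt before fetching a page.
# Disallowed pages never get indexed, so they stay invisible to searchers.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Disallow: /members/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("*", "https://example.com/private/profile.html"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post.html"))        # True
```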
Here are some interesting search engines for deep search results that most people probably aren’t aware of.
Deep Web Search Engine List
1. Pipl
Pipl’s query engine helps you find deep web pages that cannot be found on regular search engines.
Unlike search engines like Google and Bing, the Pipl deep web search engine provides search results retrieved from the deep web.
Pipl robots are set to interact with searchable databases and extract facts, contact details, and other relevant information from personal profiles, member directories, scientific publications, court records, and numerous other deep-web sources.
According to pipl, they use advanced language analysis and ranking algorithms to bring you the most relevant bits of information about a person in a single, easy-to-read results page.
2. MyLife

A MyLife public page can list a person’s data, including age, past and current places of residence, telephone numbers, email addresses, employment, education, photos, relatives, a mini biography, and a personal review section that encourages other MyLife members to rate each other.
You can register for the service and get a fair amount of information for free, but for $6.95 you can use the service for a month and get full reports and all kinds of juicy information.
MyLife claims to have “more than 225 million public pages with data about practically everybody in America, 18 years of age and over.”
According to MyLife, a public page can’t be deleted, and only premium members can hide content on their public page and remove the information from the original source.
3. Yippy

Yippy is in fact a metasearch engine (it gets its results from other search engines). I’ve included it here because it belongs to a portal of tools a web user may be interested in, such as email, games, and videos.
Yippy claims to be a family-friendly, privacy-conscious site. Unlike Google, it doesn’t store your history, search terms, or email, which is a plus, but the conservative family-friendly image has affected search quality.
It asserts that 5 million “undesirable” sites have been blocked from its index to protect sensitive searchers.
A search for [alxxxol] returns results about alcoholics and anonymous support groups rather than, say, a Wikipedia page on what alcohol is.
So Yippy is not good for people looking for information but may be of interest to parents of laptop-owning children.
4. SurfWax

The SurfWax deep web search engine is available as both a free and a subscription-based service. The site bundles a number of features beyond plain search, including:
“Focus” link to add “focus words” that you can add to the search. The focus words are narrower or broader terms that can be used to expand or narrow your search.
A “SiteSnaps” feature (a magnifying glass icon next to each result) for getting a summary of the web page and for seeing which terms the engine considered relevant.
A “ResultStats” feature showing statistics on the sources of all the results and the time it took to retrieve them.
According to SurfWax: on waves, surf wax helps surfers grip their surfboard; for web surfing, SurfWax helps you get the best grip on information, providing the “best use” of relevant search results.
SurfWax’s design and UI were the first to make searching a “visual process,” seamlessly integrating meaning-based search with key knowledge-finding elements for effective association and recall.
5. Wayback Machine

The Wayback Machine is a front end to the Internet Archive’s collection of public web pages. It encompasses more than 100 terabytes of data, a colossal collection with immense storage requirements.
The Wayback Machine gives access to this wealth of information through URLs. It is not content-searchable: a user has to know the exact URL of a specific web page, or at least the website, to be able to enter the archive.
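If you do know a URL, the Internet Archive exposes a public "availability" endpoint that returns the closest archived snapshot for it. A minimal Python sketch, assuming network access and the JSON response shape documented at https://archive.org/help/wayback_api.php:

```python
# Sketch: look up the closest archived snapshot of a URL via the
# Internet Archive's public availability API.
import requests

def latest_snapshot(url: str) -> str | None:
    """Return the Wayback URL of the closest snapshot, or None if absent."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=30)
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(latest_snapshot("example.com"))
```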
The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible.
Its web archive, the Wayback Machine, contains over 150 billion web captures. The Archive also oversees one of the world’s largest book digitization projects.
6. Google Scholar

Working much like a deep web search engine, Google Scholar allows you to search across a wide range of academic literature. It draws on information from journal publishers, university repositories, and other websites that it has identified as scholarly.
Google Scholar is designed to help you discover scholarly sources that exist on your topic. Once you discover these sources, you’ll want to get your hands on them.
You can configure Google Scholar to allow automatic access to the NCSU Libraries’ subscriptions to journals and databases.
7. DuckDuckGo

This deep web search engine, which, like many others on this list, also lets you search the regular web, has a clean, easy-to-use interface and doesn’t track your searches.
The options for topics to search for are endless, and you can even customize it to enhance your experience.
DuckDuckGo emphasizes returning the best results rather than the most results, generating them from over 400 individual sources, including key crowdsourced sites such as Wikipedia and other search engines like Bing, Yahoo!, Yandex, and Yummly.
8. Fazzle

Fazzle.com is a metasearch deep web search engine available in English, French, and Dutch.
Fazzle queries more than 120 different search engines to deliver “fast, accurate results,” accompanied by a preview page alongside each listing.
Fazzle’s search results cover Web, Downloads, Images, Videos, Audio, Yellow Pages, White Pages, Shopping, and News.
Unlike most of the better-known metasearch engines available, Fazzle’s results are not blanketed with sponsored links; Fazzle devotes only the #1 spot in the results list to advertising.
The rest of the results are assembled from the many search indexes Fazzle queries to determine its “Best Pick” and 20 other results on its SERP pages.
9. not Evil

Unlike some other Tor search engines, not Evil (http://hss3uro2hsxfogfq[.]onion) is not run for profit. The cost of running not Evil is a contribution to what one hopes is a growing shield against the tyranny of an intolerant majority.
not Evil is another deep web search engine on the Tor network, and in functionality and quality it is highly competitive with its rivals.
There is no advertising or tracking, and thanks to thoughtful, continuously updated search algorithms, it is easy to find the goods, content, or information you need. Using not Evil, you can save a lot of time and keep total anonymity.
The user interface is highly intuitive. It should be noted that previously this project was widely known as TorSearch.
10. Start Page
If you’re worried about privacy, Ixquick’s Start Page is one of the best search engines available, even if you’re not using Tor.
Unlike other search engines, Start Page doesn’t record your IP address, allowing you to keep your search history secret; it’s bothersome that Google knows everything about you.
11. Spokeo

Thanks to the 12 billion public records at Spokeo’s disposal, reverse phone lookups can give the latest data about a phone number’s most recent owner.
After running a search, you’ll have the location, email addresses, social media profiles, and even criminal records at your service.
The website is really simple to use: just enter the 10-digit phone number in the search field and press “Search.”
You can use Spokeo to find a lost family member or friend, check on loved ones, avoid scammers, learn more about a person, see who is calling you, or find another way to contact someone.
12. InfoTracer

InfoTracer’s deep web search tool specializes in finding people and their non-public information on the deep web, and uncovering hidden activity is one of its specialties.
You can find out if someone keeps a secret social media or dating profile or has any other hidden online activity. Another specialty is checking whether your information has been leaked in a data breach by looking up leaked records.
Conclusion
The deep web search engines highlighted here were chosen based on reviews, privacy, and how efficiently they pull results that match a query.
Trillions of gigabytes of data are maintained privately and cannot be reached by a normal search engine, since the content is never indexed.
The list above covers the best deep web search engines for those looking to surf anonymous content on the internet.
These search engines dig deep and surface the non-indexed content that regular engines miss.