The SentinelOne post on cloud risk management covers key strategies for addressing risks in cloud environments. It outlines identifying and assessing risks, implementing security controls, and adopting best practices such as continuous monitoring and automation. The article emphasizes understanding the shared responsibility model between cloud providers and users and recommends prioritizing incident response planning. It also discusses compliance requirements, vendor risk management, and the importance of security frameworks such as ISO 27001 and NIST to ensure robust cloud security.
Cloud Risk Management Essentials
Neglecting cloud risk management can lead to data breaches, fines, and reputational damage.
Understand the shared responsibility model that divides security obligations between you and your cloud provider.
Encrypt data, use strong access controls, and regularly patch vulnerabilities.
Keep up with the latest security trends and best practices.
Ensure sensitive data is handled securely throughout its lifecycle.
The article highlights how ransomware groups like BianLian and Rhysida are exploiting Microsoft Azure Storage Explorer for data exfiltration. Originally designed for managing Azure storage, this tool is now being repurposed by hackers to transfer stolen data to cloud storage. Attackers use Azure’s capabilities, such as AzCopy, to move large amounts of sensitive information. Security teams are advised to monitor logs for unusual activity, particularly around file transfers and Azure Blob storage connections, to detect and prevent such breaches.
To understand the implications of using Azure Storage Explorer for data exfiltration, it is essential to grasp the basics of Azure Blob Storage. It consists of three key resources:
Storage Account: The overarching entity that provides a namespace for your data.
Container: A logical grouping within the storage account that holds your blobs.
Blob: The actual data object stored within a container.
This structure is similar to storage systems used by other public cloud providers, like Amazon S3 and Google Cloud Storage.
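This hierarchy is visible directly in a blob's URL, which follows the documented pattern `https://<storage-account>.blob.core.windows.net/<container>/<blob-path>`. A short Python sketch that splits a blob URL into the three resource levels:

```python
from urllib.parse import urlparse

def parse_blob_url(url: str) -> dict:
    """Split an Azure Blob Storage URL into its three resource levels.

    Azure blob URLs follow the pattern:
        https://<storage-account>.blob.core.windows.net/<container>/<blob-path>
    """
    parsed = urlparse(url)
    account = parsed.netloc.split(".")[0]  # the subdomain is the storage account
    container, _, blob = parsed.path.lstrip("/").partition("/")
    return {"storage_account": account, "container": container, "blob": blob}

# Hypothetical URL for illustration.
print(parse_blob_url("https://contoso.blob.core.windows.net/backups/2024/db.bak"))
# → {'storage_account': 'contoso', 'container': 'backups', 'blob': '2024/db.bak'}
```

Recognizing these components in proxy or firewall logs helps responders spot which storage account and container stolen data was sent to.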
AzCopy Logging and Analysis – The Key to Detecting Data Theft
Azure Storage Explorer uses AzCopy, a command-line tool, to handle data transfers. It generates detailed logs during these transfers, offering a crucial avenue for incident responders to identify data exfiltration attempts.
By default, Azure Storage Explorer and AzCopy use the “INFO” logging level, which captures key events such as file uploads, downloads, and copies. The log entries can include:
UPLOADSUCCESSFUL and UPLOADFAILED: Indicate the outcome of file upload operations.
DOWNLOADSUCCESSFUL and DOWNLOADFAILED: Reveal details of files brought into the network from Azure.
COPYSUCCESSFUL and COPYFAILED: Show copying activities across different storage accounts.
The logs are stored in the .azcopy directory within the user’s profile, offering a valuable resource for forensic analysis.
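A minimal sketch of how a responder might tally these outcome markers when triaging an AzCopy log. Only the marker names come from the list above; the sample log lines are hypothetical stand-ins for real AzCopy output:

```python
from collections import Counter

# Outcome markers AzCopy records at the INFO logging level (see list above).
MARKERS = ("UPLOADSUCCESSFUL", "UPLOADFAILED",
           "DOWNLOADSUCCESSFUL", "DOWNLOADFAILED",
           "COPYSUCCESSFUL", "COPYFAILED")

def summarize_azcopy_log(lines):
    """Count transfer outcomes in AzCopy log lines."""
    counts = Counter()
    for line in lines:
        for marker in MARKERS:
            if marker in line:
                counts[marker] += 1
    return counts

# Hypothetical log lines for illustration only.
sample = [
    "2024/01/15 10:02:11 UPLOADSUCCESSFUL: finance/q4-report.xlsx",
    "2024/01/15 10:02:12 UPLOADSUCCESSFUL: finance/payroll.csv",
    "2024/01/15 10:02:13 UPLOADFAILED: hr/contracts.zip",
]
print(summarize_azcopy_log(sample))
```

A large count of UPLOADSUCCESSFUL entries in a short window is exactly the bulk-exfiltration pattern this article describes.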
Logging Settings and Investigation Challenges
Azure Storage Explorer provides a “Logout on Exit” setting, which is disabled by default. This default setting retains any valid Azure Storage sessions when the application is reopened, potentially allowing threat actors to continue their activities even after initial investigations.
At the end of the AzCopy log file, investigators can find a summary of job activities, providing an overview of the entire data transfer operation. This final summary can be instrumental in understanding the scope of data exfiltration carried out by the attackers.
Indicators of Compromise (IOCs)
Detecting the use of Azure Storage Explorer by threat actors involves recognizing certain Indicators of Compromise (IOCs) on the system. The following paths and files may suggest the presence of data exfiltration activities:
File Paths:
%USERPROFILE%\AppData\Local\Programs\Microsoft Azure Storage Explorer
The article explains how to enhance the security of Infrastructure as Code (IaC) by default. It emphasizes integrating security policies into CI/CD pipelines, automating IaC scanning, and using the application as the source of truth for infrastructure needs. It highlights the risks of manual code handling, such as human error and outdated templates, and discusses the challenges of automated remediation. The solution lies in abstracting IaC using tools that generate infrastructure based on application needs, ensuring secure, compliant infrastructure.
Making Infrastructure as Code (IaC) secure is crucial for maintaining the security of cloud environments and preventing vulnerabilities from being introduced during deployment. Here are some best practices to ensure the security of IaC:
1. Use Secure IaC Tools
Trusted Providers: Use reputable IaC tools like Terraform, AWS CloudFormation, or Ansible that have strong security features.
Keep Tools Updated: Ensure that your IaC tools and associated libraries are always updated to the latest version to avoid known vulnerabilities.
2. Secure Code Repositories
Access Control: Limit access to IaC repositories to authorized personnel only, using principles of least privilege.
Use Git Best Practices: Use branch protection rules, mandatory code reviews, and signed commits to ensure that changes to IaC are audited and authorized.
Secrets Management: Never hardcode sensitive information (like API keys or passwords) in your IaC files. Use secret management solutions like AWS Secrets Manager, HashiCorp Vault, or environment variables.
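A minimal sketch of the environment-variable approach to keeping secrets out of IaC files; the `DB_PASSWORD` variable name is illustrative, and in production the value would typically be injected at deploy time from a secrets manager such as AWS Secrets Manager or HashiCorp Vault:

```python
import os

def get_db_password() -> str:
    """Fetch a credential from the environment rather than from source control.

    Failing loudly when the variable is missing is deliberate: a silent
    fallback to a default password would reintroduce a hardcoded secret.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to use a default")
    return password
```

The same pattern applies in Terraform via sensitive variables, and in CI/CD via the pipeline's secret store.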
3. Enforce Security in Code
Static Code Analysis (SAST): Use tools like Checkov, TFLint, or Terraform Sentinel to analyze your IaC for misconfigurations, like open security groups or publicly accessible S3 buckets.
Linting and Formatting: Enforce code quality using linters (e.g., tflint for Terraform) that check for potential security misconfigurations early in the development process.
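To show the idea behind such scanners, here is a deliberately naive Python sketch that greps Terraform source for two common misconfigurations. Real tools like Checkov or TFLint perform far deeper, parser-based analysis; this is only a toy illustration of the pattern-matching concept:

```python
import re

# Two illustrative misconfiguration patterns; real scanners cover hundreds.
CHECKS = {
    "open_ingress": re.compile(r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"'),
    "public_s3_acl": re.compile(r'acl\s*=\s*"public-read'),
}

def scan_terraform(source: str) -> list:
    """Return the names of naive checks that the HCL source trips."""
    return [name for name, pattern in CHECKS.items() if pattern.search(source)]

snippet = '''
resource "aws_security_group" "bad" {
  ingress {
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["0.0.0.0/0"]
  }
}
'''
print(scan_terraform(snippet))   # → ['open_ingress']
```

Running such checks in a pre-commit hook or CI stage catches misconfigurations before they ever reach a cloud account.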
4. Follow Least Privilege for Cloud Resources
Role-based Access Control (RBAC): Configure your cloud resources with the minimum permissions needed. Avoid overly permissive IAM roles or policies, such as using wildcard * permissions.
Security Groups: Ensure that security groups and firewall rules are configured to limit network access to only what is required.
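As a sketch of auditing for the wildcard problem mentioned above, this hypothetical helper flags IAM policy statements that grant `*` actions or resources. It works on the standard IAM policy JSON shape, where `Action` and `Resource` may each be a string or a list:

```python
import json

def find_wildcard_statements(policy_json: str) -> list:
    """Flag IAM policy statements that grant '*' actions or resources."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    flagged = []
    for stmt in statements:
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged

overly_permissive = '''{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]
}'''
print(len(find_wildcard_statements(overly_permissive)))   # → 1
```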
5. Monitor and Audit IaC Changes
Version Control: Use version control systems like Git to track changes to your IaC. This helps maintain audit trails and facilitates rollbacks if needed.
Automated Testing: Implement continuous integration (CI) pipelines to automatically test and validate IaC changes before deployment. Include security tests in your pipeline.
6. Secure IaC Execution Environment
Control Deployment Access: Limit access to the environment where the IaC code will be executed (e.g., Jenkins, CI/CD pipelines) to authorized personnel.
Use Signed IaC Templates: Ensure that your IaC templates or modules are signed to verify their integrity.
7. Encrypt Data
Data at Rest and In Transit: Ensure that all sensitive data, such as configuration files, is encrypted using cloud-native encryption solutions (e.g., AWS KMS, Azure Key Vault).
Use SSL/TLS: Use SSL/TLS certificates to secure communication between services and prevent man-in-the-middle (MITM) attacks.
8. Regularly Scan for Vulnerabilities
Security Scanning: Regularly scan your IaC code for known vulnerabilities and misconfigurations using security scanning tools like Trivy or Snyk IaC.
Penetration Testing: Conduct regular penetration testing to identify weaknesses in your IaC configuration that might be exploited by attackers.
9. Leverage Policy as Code
Automate Compliance: Use policy-as-code frameworks like Open Policy Agent (OPA) to define and enforce security policies across your IaC deployments automatically.
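Real policy-as-code is usually written in Rego and evaluated by OPA; the plain-Python sketch below only illustrates the underlying idea of expressing a security policy as an executable check over planned resources (here, a made-up rule that every S3 bucket must be private):

```python
def check_bucket_policy(resources: list) -> list:
    """Return violation messages for buckets whose ACL is not 'private'.

    The resource dicts here are a simplified, hypothetical plan format,
    not real Terraform plan output.
    """
    violations = []
    for res in resources:
        if res.get("type") == "aws_s3_bucket" and res.get("acl") != "private":
            violations.append(f"bucket {res.get('name')} must have acl=private")
    return violations

plan = [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "private"},
    {"type": "aws_s3_bucket", "name": "assets", "acl": "public-read"},
]
print(check_bucket_policy(plan))   # → ['bucket assets must have acl=private']
```

Wired into a CI pipeline, a failing check like this blocks the non-compliant deployment automatically instead of relying on manual review.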
10. Train and Educate Teams
Security Awareness: Ensure that your teams are trained in secure coding practices and are aware of cloud security principles.
IaC-Specific Training: Provide training specific to the security risks of IaC, including common misconfigurations and how to avoid them.
By integrating security into your IaC practices from the beginning, you can prevent security vulnerabilities from being introduced during the deployment process and ensure that your cloud infrastructure remains secure.
It’s predicted that more than $1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. This is no surprise as the cloud is one of the main digital technologies developing in today’s fast-moving world. It’s encouraging that CEOs recognize that it’s crucial for them to champion the use of digital technologies to keep up with today’s evolving business environment.
However, there are still concerns about using cloud services and determining the best approach for adoption. It’s important to acknowledge that adapting to emerging technologies can be challenging, particularly with the constantly expanding range of products and services. As a business improvement partner, DISC collaborates with clients to identify key drivers and develop best practice standards that enhance resilience.
What Influences Organizations to Store Information on the Cloud?
Organizations should align their business strategy and objectives to determine the most suitable approach to cloud computing. This could involve opting for public cloud services, a private cloud, or a hybrid cloud solution, depending on their resources and priorities.
Security concerns remain the leading barrier to cloud adoption, especially with public cloud solutions. In fact, 91% of organizations are very or moderately worried about the security of public cloud environments. These concerns are not limited to IT departments; 61% of IT professionals believe that cloud data security is also a significant concern for executives.
Despite these challenges, many organizations are influenced by the benefits of managing information on the cloud. These benefits include:
Agility: you can respond and adapt to business changes more quickly
Scalability: cloud platforms are less restrictive on storage, size, and number of users
Cost savings: no physical infrastructure costs or charges for extra storage, exceeded quotas, etc.
Enhanced security: standards and certification can demonstrate that robust security controls are in place
Adaptability: you can easily adjust cloud services to make sure they best suit your business needs
Continuity: organizations use cloud services as a backup to internal solutions
Standards to help you Manage Information on the Cloud
These standards focus on putting appropriate frameworks and controls in place to manage cloud security.
ISO/IEC 27001 is the international standard for an information security management system (ISMS). It is the foundation of all our cloud security solutions. It describes the requirements for a best-practice system to manage information security, including understanding the context of an organization, the responsibilities of top management, resource requirements, how to approach risk, and how to monitor and improve the system.
It also provides a generic set of controls required to manage information and ensures you assess your information risks and control them appropriately. It’s relevant to all types of organizations regardless of whether they are involved with cloud services or not, to help with managing information security against recognized best practices.
ISO/IEC 27017 is an international code of practice for cloud security controls. It outlines cloud-specific controls to manage security, building on the generic controls described in ISO/IEC 27002. It’s applicable to both Cloud Service Providers (CSPs) and organizations procuring cloud services.
It provides support by outlining roles and responsibilities for both parties, ensuring all cloud security concerns are addressed and clearly owned. Having ISO/IEC 27017 controls in place is especially important when you procure cloud services that form part of a service you sell to clients.
ISO/IEC 27018 is an international code of practice for Personally Identifiable Information (PII) on public clouds. It builds on the general controls described in ISO/IEC 27002 and is appropriate for any organization that processes PII. This is particularly important considering the changing privacy landscape and focus on protecting sensitive personal data.
All businesses need to continually evolve their cybersecurity management in order to effectively manage the cyber risks associated with cloud use. Get in touch to learn more.
Adopt these standards today to ensure your organization effectively manages data in the cloud.
How to build a world class ISMS:
ISO 27001 serves as the foundation for ISO 27017, ISO 27018, and ISO 27701.
After conducting the risk assessment, it’s essential to compare the controls identified as necessary with those listed in Annex A to ensure no important controls were overlooked in managing the risks. This serves as a quality check for the risk assessment, not as a justification for using or not using any controls from Annex A. This comparison should be done for each risk identified in the assessment to see if there are opportunities to enhance how that risk is treated.
Any controls that you discover were unintentionally “omitted” from the risk assessment can come from any source (NIST, HIPAA, PCI, or CIS Critical Security Controls) and are not restricted to those in Annex A.
Consider the CIS Controls to strengthen any of the above frameworks when building your ISMS. The CIS Controls are updated more frequently than most frameworks and are highly effective against the top five attack types found in industry threat data, defending against 86% of the (sub-)techniques in the MITRE ATT&CK framework.
Statement of Applicability (SoA) is typically developed after conducting a risk assessment in ISO 27001. The risk assessment identifies the information security risks that the organization faces and determines the appropriate controls needed to mitigate those risks.
In ISO 27001, the Statement of Applicability (SoA) is a key document that outlines which information security controls from Annex A (or from NIST, HIPAA, PCI, or the CIS Critical Security Controls) are applicable to an organization’s Information Security Management System (ISMS). The SoA provides a summary of the controls selected to address identified risks, justifies why each control is included or excluded, and details how each applicable control is implemented. It serves as a reference to demonstrate compliance with ISO 27001 requirements and helps in maintaining transparency and accountability in the ISMS.
The SoA is essential for internal stakeholders and external auditors to understand the rationale behind the organization’s approach to managing information security risks.
Cloud shared responsibilities:
Most companies appear to be operating in the hybrid or public cloud space, often without fully realizing it, and need to gain a better understanding of this environment.
Cloud shared responsibilities refer to the division of security and compliance responsibilities between a cloud service provider (CSP) and the customer. This model outlines who is responsible for specific aspects of cloud security, depending on the type of cloud service being used: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).
The division of responsibilities varies based on the cloud service model:
IaaS: The CSP manages the basic infrastructure, but the customer is responsible for everything else, including operating systems, applications, and data.
PaaS: The CSP manages the infrastructure and platform, while the customer focuses on application development, data management, and user access.
SaaS: The CSP handles most security aspects, including applications and infrastructure, while the customer is primarily responsible for data security and user access management.
Understanding the shared responsibility model is crucial for ensuring that both the CSP and the customer are aware of their respective roles in maintaining cloud security, ensuring compliance, and, not least, managing risks in the cloud environment.
In summary, the shift to cloud computing is expected to influence over $1 trillion in IT spending over the next five years as companies increasingly adopt digital technologies to stay competitive. Despite the benefits of cloud computing—such as agility, scalability, cost savings, and enhanced security—many organizations face challenges, particularly around security concerns, which are a major barrier to cloud adoption. To navigate these challenges, businesses need to align their cloud strategies with their objectives, choosing between public, private, or hybrid cloud solutions. Additionally, implementing standards like ISO/IEC 27001, ISO/IEC 27017, and ISO/IEC 27018 can help manage cloud security and compliance effectively by providing frameworks for managing information security risks and ensuring data protection. Understanding the shared responsibility model is also crucial for cloud security, as it defines the distinct roles of cloud service providers and customers in maintaining a secure cloud environment.
In this article, we’ll identify some first steps you can take to establish your cloud security strategy. We’ll do so by discussing the cloud security impact of individual, concrete actions featured within the CIS Critical Security Controls (CIS Controls) and the CIS Benchmarks.
Data protection and application security: The foundation of a cloud security strategy
When you’re working with Controls v8 and the CIS Controls Cloud Companion Guide, you need to lay a foundation on which you can build your unique cloud security efforts. Toward that end, you can tailor the Controls in the context of a specific Information Technology/Operational Technology (IT/OT) map.
To help you make an impact at the beginning of your cloud security journey, we recommend you focus on two Controls in particular: CIS Control 3 – Data Protection and CIS Control 16 – Application Security.
Cloud Data Security with CIS Control 3
The purpose of CIS Control 3 is to help you create processes for protecting your data in the cloud. Consumers don’t always know that they’re responsible for cloud data security, which means they might not have adequate controls in place. For instance, without proper visibility, cloud consumers might be unaware that they’re leaking their data for weeks, months, or even years.
CIS Control 3 walks you through how to close this gap by identifying, classifying, securely handling, retaining, and disposing of your cloud-based data, as shown in the screenshot below.
A screenshot of CIS Control 3: Data Protection
Cloud Application Security with CIS Control 16
In addition to protecting your cloud-based data, you need to manage your cloud application security in accordance with CIS Control 16. Your responsibility in this area applies to applications developed by your in-house teams and acquired from external product vendors.
To prevent, detect, and remediate vulnerabilities in your cloud-based applications, you need a comprehensive program that brings together people, processes, and technology. Continuous Vulnerability Management, as discussed in CIS Control 7, sits at the heart of this program. You can then expand your security efforts by using supply chain risk management for externally acquired software and a secure software development life cycle (SDLC) for applications produced in house.
Want to learn more about the CIS Benchmarks? Check out our video below.
Using the CIS Amazon Web Services Foundations Benchmark v3.0.0 as an example, here are two recommendations you can implement to protect your data in the cloud.
Hardening your cloud-based assets with MFA and blocked public access
With CIS Controls 3 and 16 as your foundation, you can build upon your progress by hardening your accounts and workloads in the cloud with the security recommendations of the CIS Benchmarks, which map back to the Controls.
Set up MFA for the ‘root’ user account
The ‘root’ user account is the most privileged user in your AWS account. In the event of a compromise, a cyber threat actor (CTA) could use your ‘root’ user account to access sensitive data stored in your AWS environment.
To address this threat, you need to safeguard your ‘root’ user account. You can do so by implementing Recommendation 1.5, which advises you to set up multi-factor authentication (MFA) using a dedicated device that’s managed by your company. Do not use a personal device to protect your ‘root’ user account with MFA, as this could increase the risk of account lockout if the device owner leaves the company, changes their number, or loses their device.
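One way to verify this recommendation programmatically: the IAM account summary returned by boto3’s `get_account_summary()` includes an `AccountMFAEnabled` flag. The sketch below operates on that summary dict so it runs without credentials; the commented boto3 call shows where a live check would fetch it:

```python
def root_mfa_enabled(summary_map: dict) -> bool:
    """True when the account-level summary reports MFA on the root user.

    summary_map is the 'SummaryMap' dict returned by
    boto3.client('iam').get_account_summary(); AccountMFAEnabled is 1
    when the root user has an MFA device configured.
    """
    return summary_map.get("AccountMFAEnabled", 0) == 1

# With live credentials you would fetch the map like this (not run here):
#   summary = boto3.client("iam").get_account_summary()["SummaryMap"]
print(root_mfa_enabled({"AccountMFAEnabled": 1}))   # → True
print(root_mfa_enabled({"AccountMFAEnabled": 0}))   # → False
```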
Block public access on your S3 buckets
Amazon Simple Storage Service (S3) enables you to store objects in your AWS environment using a web interface. The issue is that not everyone configures their S3 buckets securely. By default, S3 buckets don’t allow public access upon their creation. However, an Identity and Access Management (IAM) principal with sufficient permissions could enable public access to your S3 buckets. In doing so, they could inadvertently expose your buckets and their respective objects.
You can mitigate this risk by implementing Recommendation 2.1.4, which advises configuring S3 buckets to “Block public access” both in your individual bucket settings and in your AWS account settings. That way, you’ll block the public from accessing any of your S3 buckets and their contained objects connected to your AWS account.
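S3’s `PublicAccessBlockConfiguration` has four flags, and all must be enabled for full protection. A small sketch that validates such a configuration, with the live boto3 call left as a comment since it requires credentials:

```python
# The four real flag names from S3's PublicAccessBlockConfiguration.
FULL_BLOCK = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def is_fully_blocked(config: dict) -> bool:
    """True only when all four public-access-block flags are enabled."""
    return all(config.get(flag) is True for flag in FULL_BLOCK)

# Applying it to a bucket with boto3 (requires credentials, not run here):
#   boto3.client("s3").put_public_access_block(
#       Bucket="my-bucket",
#       PublicAccessBlockConfiguration=FULL_BLOCK,
#   )
print(is_fully_blocked(FULL_BLOCK))                                   # → True
print(is_fully_blocked({**FULL_BLOCK, "BlockPublicPolicy": False}))   # → False
```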
Streamlining your use of cloud security best practices
The Controls and Benchmarks recommendations discussed above will help you take the first steps in implementing your cloud security strategy. From here, you can save time securely configuring your technologies using the CIS Hardened Images, virtual machine images (VMIs) that are pre-hardened to the security recommendations of the Benchmarks.
Cloud computing and the use of mobile devices challenged the concept of a perimeter-based security model. The change in thinking started with the Jericho Forum in 2007 releasing the Jericho Forum Commandments for a de-perimeterised world where it’s assumed a network perimeter doesn’t exist.
John Kindervag, from Forrester Research, then coined the term “zero trust” in 2010 and developed the phrase “never trust, always verify”. He identified zero trust as a model that removes implicit trust within a system boundary and continuously evaluates the risks by applying mitigations to business transactions and data flows at every step of their journey. The phrase “assume breach” is also often associated with zero trust and comes from the phrase “assume compromise” used by the US Department of Defense in the 1990s.
The approach requires a combination of technologies, processes, practices, and cultural changes to be successfully implemented. It involves a fundamental shift in the way organizations approach cybersecurity. Traditional “castle and moat” security models assumed, after data passed through the perimeter, that everything inside a system could be implicitly trusted.
Zero trust basics
The zero-trust model assumes that all business transactions and data flows, whether originating from inside or outside the network, are potentially malicious. Every interaction in a business transaction or data flow must be continuously validated to ensure that only authorized users and devices can access sensitive business data. In effect, it moves the perimeter from the system boundary to the point at which identification, authentication, and authorization take place, resulting in identity becoming the new perimeter. The whole concept often gets simplified down to the “never trust, always verify” principle, but it’s more than that.
Zero-trust architecture requires a cultural shift that emphasizes the importance of security rather than just compliance throughout an organization. This means that implementing a zero-trust architecture involves not only the deployment of specific technologies but also the development of processes and practices that promote a data security first mindset across the organization, building on the data centric security approach we discussed earlier.
When architecting and developing security for a system, an architect should follow a set of principles, tenets, or simply a way of thinking to apply zero trust. Zero trust isn’t an end-to-end method, and a comprehensive approach requires integration with other architectural thinking techniques.
Cloud Active Defense is an open-source solution that integrates decoys into cloud infrastructure. It creates a dilemma for attackers: risk attacking and being detected immediately, or move cautiously around potential traps and lose effectiveness. Anyone, including small companies, can use it at no cost and start receiving high-signal alerts.
Where honeypots are good at detecting lateral movement once the initial application has been compromised, Cloud Active Defense brings the deception directly into that initial application.
“We do this by injecting decoys into HTTP responses. These decoys are invisible to regular users and very tempting to attackers. This creates a situation where attackers must constantly guess: is that a trap or an exploitation path? This guessing slows down the attack operation and can lead attackers to ignore valid attack vectors as they suspect them to be traps. Furthermore, since the application’s replies cannot be 100% trusted anymore, fine-tuning your exploit payload becomes painful,” Cédric Hébert, CISO – Innovation at SAP and developer of Cloud Active Defense, told Help Net Security.
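A toy illustration of the decoy-injection idea described above; the `/admin-backup` path is made up, and a production system such as Cloud Active Defense would pair the injection with alerting on any request to the decoy path:

```python
# Hypothetical decoy: invisible to browsers, tempting to crawlers and attackers.
DECOY = '<a href="/admin-backup" style="display:none">admin backup</a>'

def inject_decoy(html: str) -> str:
    """Insert an invisible decoy link into an HTML response.

    Regular users never see it (display:none), but anyone scanning the
    raw HTML will find /admin-backup; a request to that path is then a
    high-signal indicator of hostile reconnaissance.
    """
    if "</body>" in html:
        return html.replace("</body>", DECOY + "</body>", 1)
    return html + DECOY

page = "<html><body><h1>Orders</h1></body></html>"
print(inject_decoy(page))
```

In practice this would run as middleware in front of the application, so every HTML response carries decoys without any change to application code.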
Future plans and download
“In the short term, we plan to make it easy to ingest the generated alerts to a SIEM system for faster response. We also plan to release code to make it simple to deploy on a Kubernetes cluster, where each application can be configured independently. In the mid-term, we want to work on proposing response strategies: surely, banning the IP address can be an option, but what we envision is, upon detection, to give the possibility to route the active session to a clone of the application where no more harm can be done,” Hébert concluded.
Cloud Active Defense is available for free on GitHub.
Businesses increasingly rely on Software as a Service (SaaS) applications to drive efficiency, innovation, and growth.
However, this shift towards a more interconnected digital ecosystem has not come without its risks.
According to the “2024 State of SaaS Security Report” by Wing Security, a staggering 97% of organizations faced exposure to attacks through compromised SaaS supply chain applications in 2023, highlighting a critical vulnerability in the digital infrastructure of modern businesses.
The report, which analyzed data from 493 companies in the fourth quarter of 2023, illuminates the multifaceted nature of SaaS security threats.
From supply chain attacks taking center stage to the alarming trend of exploiting exposed credentials, the findings underscore the urgent need for robust security measures.
Supply Chain Attacks: A Domino Effect
Supply chain attacks have emerged as a significant threat, with 96.7% of organizations using at least one app that had a security incident in the past year.
The MOVEit breach, which directly and indirectly impacted over 2,500 organizations, and North Korean actors’ targeted attack on JumpCloud’s clients are stark reminders of the cascading effects a single vulnerability can have across the supply chain.
The simplicity of credential stuffing attacks and the widespread issue of unsecured credentials continue to pose a significant risk.
The report highlights several high-profile incidents, including breaches affecting Norton LifeLock and PayPal customers, where attackers exploited stolen credentials to gain unauthorized access to sensitive information.
MFA Bypassing And Token Theft
Despite adopting Multi-Factor Authentication (MFA) as a security measure, attackers have found ways to bypass these defenses, targeting high-ranking executives in sophisticated phishing campaigns.
Additionally, the report points to a concerning trend of token theft, with unused tokens creating unnecessary risk exposure for many organizations.
Looking Ahead: SaaS Threat Forecast For 2024
As we move into 2024, the SaaS threat landscape is expected to evolve, with AI posing a new threat.
The report identifies two primary risks associated with AI in the SaaS domain: the vast volume of AI models in SaaS applications and the potential for data mismanagement.
Furthermore, the persistence of credential-based attacks and the rise of interconnected threats across different domains underscore the need for a holistic cybersecurity approach.
Practical Tips For Enhancing SaaS Security
The report offers eight practical tips for organizations to combat these growing threats, including discovering and managing the risk of third-party applications, leveraging threat intelligence, and enforcing MFA.
Additionally, regaining control of the AI-SaaS landscape and establishing an effective offboarding procedure are crucial steps in bolstering an organization’s SaaS security.
The “2024 State of SaaS Security Report” by Wing Security serves as a wake-up call for businesses to reassess their SaaS security strategies.
With 97% of organizations exposed to attacks via compromised SaaS supply chain apps, the need for vigilance and proactive security measures has never been more critical.
As the digital landscape continues to evolve, so must our approaches to protect it.
Continuous Threat Exposure Management (CTEM) is an evolving cybersecurity practice focused on identifying, assessing, prioritizing, and addressing security weaknesses and vulnerabilities in an organization’s digital assets and networks continuously. Unlike traditional approaches that might assess threats periodically, CTEM emphasizes a proactive, ongoing process of evaluation and mitigation to adapt to the rapidly changing threat landscape. Here’s a closer look at its key components:
Identification: CTEM starts with the continuous identification of all digital assets within an organization’s environment, including on-premises systems, cloud services, and remote endpoints. It involves understanding what assets exist, where they are located, and their importance to the organization.
Assessment: Regular and ongoing assessments of these assets are conducted to identify vulnerabilities, misconfigurations, and other security weaknesses. This process often utilizes automated scanning tools and threat intelligence to detect issues that could be exploited by attackers.
Prioritization: Not all vulnerabilities pose the same level of risk. CTEM involves prioritizing these weaknesses based on their severity, the value of the affected assets, and the potential impact of an exploit. This helps organizations focus their efforts on the most critical issues first.
Mitigation and Remediation: Once vulnerabilities are identified and prioritized, CTEM focuses on mitigating or remedying these issues. This can involve applying patches, changing configurations, or implementing other security measures to reduce the risk of exploitation.
Continuous Improvement: CTEM is a cyclical process that feeds back into itself. The effectiveness of mitigation efforts is assessed, and the approach is refined over time to improve security posture continuously.
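The prioritization step above can be sketched as a simple scoring function. The weighting here (severity times asset value, doubled for internet-facing assets) is an illustrative assumption, not a standard formula; real CTEM platforms also weigh exploitability and live threat intelligence:

```python
def prioritize(findings: list) -> list:
    """Order findings by a naive risk score: severity x asset value x exposure."""
    for f in findings:
        exposure = 2 if f["internet_facing"] else 1
        f["score"] = f["severity"] * f["asset_value"] * exposure
    return sorted(findings, key=lambda f: f["score"], reverse=True)

# Hypothetical findings for illustration.
findings = [
    {"id": "CVE-A", "severity": 7, "asset_value": 2, "internet_facing": False},
    {"id": "CVE-B", "severity": 5, "asset_value": 5, "internet_facing": True},
]
print([f["id"] for f in prioritize(findings)])   # → ['CVE-B', 'CVE-A']
```

Note that the lower-severity finding wins here because it sits on a high-value, internet-facing asset, which is exactly the context-over-raw-severity point CTEM makes.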
The goal of CTEM is to reduce the “attack surface” of an organization—minimizing the number of vulnerabilities that could be exploited by attackers and thereby reducing the organization’s overall risk. By continuously managing and reducing exposure to threats, organizations can better protect against breaches and cyber attacks.
CTEM VS. ALTERNATIVE APPROACHES
Continuous Threat Exposure Management (CTEM) represents a proactive and ongoing approach to managing cybersecurity risks, distinguishing itself from traditional, more reactive security practices. Understanding the differences between CTEM and alternative approaches can help organizations choose the best strategy for their specific needs and threat landscapes. Let’s compare CTEM with some of these alternative approaches:
1. CTEM VS. PERIODIC SECURITY ASSESSMENTS
Periodic Security Assessments typically involve scheduled audits or evaluations of an organization’s security posture at fixed intervals (e.g., quarterly or annually). This approach may fail to catch new vulnerabilities or threats that emerge between assessments, leaving organizations exposed for potentially long periods.
CTEM, on the other hand, emphasizes continuous monitoring and assessment of threats and vulnerabilities. It ensures that emerging threats can be identified and addressed in near real-time, greatly reducing the window of exposure.
2. CTEM VS. PENETRATION TESTING
Penetration Testing is a targeted approach where security professionals simulate cyber-attacks on a system to identify vulnerabilities. While valuable, penetration tests are typically conducted annually or semi-annually and might not uncover vulnerabilities introduced between tests.
CTEM complements penetration testing by continuously scanning for and identifying vulnerabilities, ensuring that new threats are addressed promptly and not just during the next scheduled test.
3. CTEM VS. INCIDENT RESPONSE PLANNING
Incident Response Planning focuses on preparing for, detecting, responding to, and recovering from cybersecurity incidents. It’s reactive by nature, kicking into gear after an incident has occurred.
CTEM works upstream of incident response by aiming to prevent incidents before they happen through continuous threat and vulnerability management. While incident response is a critical component of a comprehensive cybersecurity strategy, CTEM can reduce the likelihood and impact of incidents occurring in the first place.
4. CTEM VS. TRADITIONAL VULNERABILITY MANAGEMENT
Traditional Vulnerability Management involves identifying, classifying, remediating, and mitigating vulnerabilities within software and hardware. While it can be an ongoing process, it often lacks the continuous, real-time monitoring and prioritization framework of CTEM.
CTEM enhances traditional vulnerability management by integrating it into a continuous cycle that includes real-time detection, prioritization based on current threat intelligence, and immediate action to mitigate risks.
KEY ADVANTAGES OF CTEM
Real-Time Threat Intelligence: CTEM integrates the latest threat intelligence to ensure that the organization’s security measures are always ahead of potential threats.
Automation and Integration: By leveraging automation and integrating various security tools, CTEM can streamline the process of threat and vulnerability management, reducing the time from detection to remediation.
Risk-Based Prioritization: CTEM prioritizes vulnerabilities based on their potential impact on the organization, ensuring that resources are allocated effectively to address the most critical issues first.
CTEM offers a comprehensive and continuous approach to cybersecurity, focusing on reducing exposure to threats in a dynamic and ever-evolving threat landscape. While alternative approaches each have their place within an organization’s overall security strategy, integrating them with CTEM principles can provide a more resilient and responsive defense mechanism against cyber threats.
CTEM IN AWS
Implementing Continuous Threat Exposure Management (CTEM) within an AWS Cloud environment involves leveraging AWS services and tools, alongside third-party solutions and best practices, to continuously identify, assess, prioritize, and remediate vulnerabilities and threats. Here’s a detailed example of how CTEM can be applied in AWS:
1. IDENTIFICATION OF ASSETS
AWS Config: Use AWS Config to continuously monitor and record AWS resource configurations and changes, helping to identify which assets exist in your environment, their configurations, and their interdependencies.
AWS Resource Groups: Organize resources by applications, projects, or environments to simplify management and monitoring.
2. ASSESSMENT
Amazon Inspector: Automatically assess applications for vulnerabilities or deviations from best practices, especially important for EC2 instances and container-based applications.
AWS Security Hub: Aggregates security alerts and findings from various AWS services (like Amazon Inspector, Amazon GuardDuty, and IAM Access Analyzer) and supported third-party solutions to give a comprehensive view of your security and compliance status.
3. PRIORITIZATION
AWS Security Hub: Provides a consolidated view of security alerts and findings rated by severity, allowing you to prioritize issues based on their potential impact on your AWS environment.
Custom Lambda Functions: Create AWS Lambda functions to automate the analysis and prioritization process, using criteria specific to your organization’s risk tolerance and security posture.
4. MITIGATION AND REMEDIATION
AWS Systems Manager Patch Manager: Automate the process of patching managed instances with both security and non-security related updates.
CloudFormation Templates: Use AWS CloudFormation to enforce infrastructure configurations that meet your security standards. Quickly redeploy configurations if deviations are detected.
Amazon EventBridge and AWS Lambda: Automate responses to security findings. For example, if Security Hub detects a critical vulnerability, EventBridge can trigger a Lambda function to isolate affected instances or apply necessary patches.
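The triage step such a Lambda function would perform can be sketched in plain Python. The event shape below follows the AWS Security Finding Format that EventBridge delivers for Security Hub findings, but this is a simplified sketch; the quarantine follow-up (security group swap, patch run) is described only in comments, and any group names would be your own.

```python
def instances_to_isolate(event, min_severity="CRITICAL"):
    """Return EC2 instance IDs named in findings at or above min_severity."""
    ranks = {"INFORMATIONAL": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}
    threshold = ranks[min_severity]
    instance_ids = []
    for finding in event.get("detail", {}).get("findings", []):
        label = finding.get("Severity", {}).get("Label", "INFORMATIONAL")
        if ranks.get(label, 0) < threshold:
            continue
        for resource in finding.get("Resources", []):
            if resource.get("Type") == "AwsEc2Instance":
                # Resource Ids are ARNs ending in .../<instance-id>
                instance_ids.append(resource["Id"].rsplit("/", 1)[-1])
    return instance_ids

sample_event = {"detail": {"findings": [
    {"Severity": {"Label": "CRITICAL"},
     "Resources": [{"Type": "AwsEc2Instance",
                    "Id": "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc123"}]},
    {"Severity": {"Label": "LOW"},
     "Resources": [{"Type": "AwsEc2Instance",
                    "Id": "arn:aws:ec2:us-east-1:111122223333:instance/i-0def456"}]},
]}}
print(instances_to_isolate(sample_event))

# In a real handler, each returned ID would feed boto3 calls, e.g.
# ec2.modify_instance_attribute(InstanceId=iid, Groups=[QUARANTINE_SG])
# to move the instance into a quarantine security group, followed by a
# Systems Manager patch run.
```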
5. CONTINUOUS IMPROVEMENT
AWS Well-Architected Tool: Regularly review your workloads against AWS best practices to identify areas for improvement.
Feedback Loop: Implement a feedback loop using AWS CloudWatch Logs and Amazon Elasticsearch Service to analyze logs and metrics for security insights, which can inform the continuous improvement of your CTEM processes.
IMPLEMENTING CTEM IN AWS: AN EXAMPLE SCENARIO
Imagine you’re managing a web application hosted on AWS. Here’s how CTEM comes to life:
Identification: Use AWS Config and Resource Groups to maintain an updated inventory of your EC2 instances, RDS databases, and S3 buckets critical to your application.
Assessment: Employ Amazon Inspector to regularly scan your EC2 instances for vulnerabilities and AWS Security Hub to assess your overall security posture across services.
Prioritization: Security Hub alerts you to a critical vulnerability in an EC2 instance running your application backend. It’s flagged as high priority due to its access to sensitive data.
Mitigation and Remediation: You automatically trigger a Lambda function through EventBridge based on the Security Hub finding, which isolates the affected EC2 instance and initiates a patching process via Systems Manager Patch Manager.
Continuous Improvement: Post-incident, you use the AWS Well-Architected Tool to evaluate your architecture. Insights gained lead to the implementation of stricter IAM policies and enhanced monitoring with CloudWatch and Elasticsearch for anomaly detection.
This cycle of identifying, assessing, prioritizing, mitigating, and continuously improving forms the core of CTEM in AWS, helping to ensure that your cloud environment remains secure against evolving threats.
CTEM IN AZURE
Implementing Continuous Threat Exposure Management (CTEM) in Azure involves utilizing a range of Azure services and features designed to continuously identify, assess, prioritize, and mitigate security risks. Below is a step-by-step example illustrating how an organization can apply CTEM principles within the Azure cloud environment:
STEP 1: ASSET IDENTIFICATION AND MANAGEMENT
Azure Resource Graph: Use Azure Resource Graph to query and visualize all resources across your Azure environment. This is crucial for understanding what assets you have, their configurations, and their interrelationships.
Azure Tags: Implement tagging strategies to categorize resources based on sensitivity, department, or environment. This aids in the prioritization process later on.
STEP 2: CONTINUOUS VULNERABILITY ASSESSMENT
Azure Security Center: Enable Azure Security Center (ASC) at the Standard tier to conduct continuous security assessments across your Azure resources. ASC provides security recommendations and assesses your resources for vulnerabilities and misconfigurations.
Azure Defender: Integrated into Azure Security Center, Azure Defender provides advanced threat protection for workloads running in Azure, including virtual machines, databases, and containers.
STEP 3: PRIORITIZATION OF RISKS
ASC Secure Score: Use the Secure Score in Azure Security Center as a metric to prioritize security recommendations based on their potential impact on your environment’s security posture.
Custom Logic with Azure Logic Apps: Develop custom workflows using Azure Logic Apps to prioritize alerts based on your organization’s specific criteria, such as asset sensitivity or compliance requirements.
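The scoring behind a metric like Secure Score can be modeled roughly as points earned per recommendation, weighted by the share of resources that pass it. The controls and point values below are illustrative assumptions, not Azure's actual scoring tables.

```python
def secure_score(controls):
    """controls: list of (max_points, healthy_resources, total_resources)."""
    # Each control contributes its points in proportion to the fraction
    # of resources that satisfy it; the result is a 0-100 percentage.
    earned = sum(pts * healthy / total for pts, healthy, total in controls)
    possible = sum(pts for pts, _, _ in controls)
    return round(100 * earned / possible, 1)

controls = [
    (10, 8, 10),  # e.g. "Enable MFA" satisfied on 8 of 10 accounts
    (6, 3, 12),   # e.g. "Apply system updates" on 3 of 12 VMs
    (4, 4, 4),    # e.g. "Enable encryption at rest" everywhere
]
print(secure_score(controls))  # 67.5
```

A model like this makes the prioritization intuition concrete: fixing the half-finished, high-weight controls moves the score far more than polishing controls that are already healthy.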
STEP 4: AUTOMATED REMEDIATION
Azure Automation: Employ Azure Automation to run remediation scripts or configuration management across your Azure VMs and services. This can be used to automatically apply patches, update configurations, or manage access controls in response to identified vulnerabilities.
Azure Logic Apps: Trigger automated workflows in response to security alerts. For example, if Azure Security Center identifies unprotected data storage, an Azure Logic App can automatically initiate a workflow to apply the necessary encryption settings.
STEP 5: CONTINUOUS MONITORING AND INCIDENT RESPONSE
Azure Monitor: Utilize Azure Monitor to collect, analyze, and act on telemetry data from your Azure resources. This includes logs, metrics, and alerts that can help you detect and respond to threats in real-time.
Azure Sentinel: Deploy Azure Sentinel, a cloud-native SIEM service, for a more comprehensive security information and event management solution. Sentinel can collect data across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
STEP 6: CONTINUOUS IMPROVEMENT AND COMPLIANCE
Azure Policy: Implement Azure Policy to enforce organizational standards and to assess compliance at scale. Continuous evaluation of your configurations against these policies ensures compliance and guides ongoing improvement.
Feedback Loops: Establish feedback loops using the insights gained from Azure Monitor, Azure Security Center, and Azure Sentinel to refine and improve your security posture continuously.
EXAMPLE SCENARIO: SECURING A WEB APPLICATION IN AZURE
Let’s say you’re managing a web application hosted in Azure, utilizing Azure App Service for the web front end, Azure SQL Database for data storage, and Azure Blob Storage for unstructured data.
Identification: You catalog all resources related to the web application using Azure Resource Graph and apply tags based on sensitivity and function.
Assessment: Azure Security Center continuously assesses these resources for vulnerabilities, such as misconfigurations or outdated software.
Prioritization: Based on the Secure Score and custom logic in Azure Logic Apps, you prioritize a detected SQL injection vulnerability in Azure SQL Database as critical.
Mitigation: Azure Automation is triggered to isolate the affected database and apply a patch. Concurrently, Azure Logic Apps notifies the security team and logs the incident for review.
Monitoring: Azure Monitor and Azure Sentinel provide ongoing surveillance, detecting any unusual access patterns or potential breaches.
Improvement: Insights from the incident lead to a review and enhancement of the application’s code and a reinforcement of security policies through Azure Policy to prevent similar vulnerabilities in the future.
By following these steps and utilizing Azure’s comprehensive suite of security tools, organizations can implement an effective CTEM strategy that continuously protects against evolving cyber threats.
IMPLEMENTING CTEM IN CLOUD ENVIRONMENTS LIKE AWS AND AZURE
Implementing Continuous Threat Exposure Management (CTEM) in cloud environments like AWS and Azure involves a series of strategic steps, leveraging each platform’s unique tools and services. The approach combines best practices for security and compliance management, automation, and continuous monitoring. Here’s a guide to get started with CTEM in both AWS and Azure:
COMMON STEPS FOR BOTH AWS AND AZURE
Understand Your Environment
Catalogue your cloud resources and services.
Understand the data flow and dependencies between your cloud assets.
Define Your Security Policies and Objectives
Establish what your security baseline looks like.
Define key compliance requirements and security objectives.
Integrate Continuous Monitoring Tools
Leverage cloud-native tools for threat detection, vulnerability assessment, and compliance monitoring.
Integrate third-party security tools if necessary for enhanced capabilities.
Automate Security Responses
Implement automated responses to common threats and vulnerabilities.
Use cloud services to automate patch management and configuration adjustments.
Continuously Assess and Refine
Regularly review security policies and controls.
Adjust based on new threats, technological advancements, and changes in the business environment.
IMPLEMENTING CTEM IN AWS
Enable AWS Security Services
Utilize AWS Security Hub for a comprehensive view of your security state and to centralize and prioritize security alerts.
Use Amazon Inspector for automated security assessments to help find vulnerabilities or deviations from best practices.
Implement AWS Config to continuously monitor and record AWS resource configurations.
Automate Response with AWS Lambda
Use AWS Lambda to automate responses to security findings, such as isolating compromised instances or automatically patching vulnerabilities.
Leverage Amazon CloudWatch
Employ CloudWatch for monitoring and alerting based on specific metrics or logs that indicate potential security threats.
IMPLEMENTING CTEM IN AZURE
Utilize Azure Security Tools
Activate Azure Security Center for continuous assessment and security recommendations. Use its advanced threat protection features to detect and mitigate threats.
Implement Azure Sentinel for SIEM (Security Information and Event Management) capabilities, integrating it with other Azure services for a comprehensive security analysis and threat detection.
Automate with Azure Logic Apps
Use Azure Logic Apps to automate responses to security alerts, such as sending notifications or triggering remediation processes.
Monitor with Azure Monitor
Leverage Azure Monitor to collect, analyze, and act on telemetry data from your Azure and on-premises environments, helping you detect and respond to threats in real-time.
BEST PRACTICES FOR BOTH ENVIRONMENTS
Continuous Compliance: Use policy-as-code to enforce and automate compliance standards across your cloud environments.
Identity and Access Management (IAM): Implement strict IAM policies to ensure least privilege access and utilize multi-factor authentication (MFA) for enhanced security.
Encrypt Data: Ensure data at rest and in transit is encrypted using the cloud providers’ encryption capabilities.
Educate Your Team: Regularly train your team on the latest cloud security best practices and the specific tools and services you are using.
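The "continuous compliance" practice above rests on treating policies as data that can be evaluated automatically against resource configurations. The toy evaluator below illustrates the idea only; real policy-as-code engines (AWS Config rules, Azure Policy, Open Policy Agent) are far richer, and the policy names and resource fields here are invented for the example.

```python
# Policies are (name, predicate) pairs; a predicate returns True when the
# resource configuration is compliant.
POLICIES = [
    ("storage-encryption",
     lambda r: r.get("type") != "storage" or r.get("encrypted", False)),
    ("no-public-ip",
     lambda r: not r.get("public_ip", False)),
]

def violations(resource):
    """Return the names of every policy the resource configuration fails."""
    return [name for name, check in POLICIES if not check(resource)]

print(violations({"type": "storage", "encrypted": False, "public_ip": True}))
print(violations({"type": "vm", "public_ip": False}))
```

Because the policies are plain data, the same definitions can gate deployments in CI and drive continuous evaluation of resources already running, which is what keeps drift from accumulating between audits.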
Implementing CTEM in AWS and Azure requires a deep understanding of each cloud environment’s unique features and capabilities. By leveraging the right mix of tools and services, organizations can create a robust security posture that continuously identifies, assesses, and mitigates threats.
Cloud security is a critical aspect of modern computing, as businesses and individuals increasingly rely on cloud services to store, process, and manage data. Cloud computing offers numerous benefits, including scalability, flexibility, and cost efficiency, but it also introduces unique security challenges that need to be addressed to ensure the confidentiality, integrity, and availability of sensitive information.
In this Help Net Security round-up, we present segments from previously recorded videos in which security experts share their insights and experiences, shedding light on critical aspects of cloud security.
Complete videos
Paul Calatayud, CISO at Aqua Security, talks about cloud native security and the problem with the lack of understanding of risks to this environment.
Jane Wong, VP of Security Products at Splunk, talks about challenges organizations are facing to secure their multicloud environments.
Keith Nakasone, Federal Strategist at VMware, discusses how government agencies can scale the use of multicloud environments for mission success.
Dimitri Sirota, CEO at BigID, discusses how companies are unprepared to deal with the unique challenges of securing data in the cloud.
Andrew Slater, Practice Director – Cloud at Node4, talks about how organizations have encountered challenges in getting the final 20-30% of their production workloads into public cloud environments and addresses the cybersecurity implications.
The widespread adoption of SaaS applications, remote work, and shadow IT compels organizations to adopt cloud-based cybersecurity. This is essential as corporate resources, traffic, and threats are no longer restricted to the office premises.
Cloud-based security initiatives, such as Secure Access Service Edge (SASE) and Security Service Edge (SSE), comprising Secure Web Gateway (SWG), Cloud Access Security Brokers (CASB), Data Loss Prevention (DLP), and Zero Trust Network Access (ZTNA), effectively push security to wherever the corporate users, devices, and resources are – all via the cloud. With all security functions delivered over the cloud and managed through a single pane of glass, the incoming and outgoing traffic (aka the north-south traffic) is comprehensively secured.
However, the east-west traffic — i.e., traffic that traverses the internal network and data centers and does not cross the network perimeter — is never exposed to these cloud-based security checks.
One way around this is to maintain a legacy data center firewall that specifically monitors and controls east-west traffic. For starters, though, this hybrid security architecture adds to the cost and complexity of managing disparate security solutions, something organizations are desperately trying to overcome with cloud-based converged security stacks.
Secondly, the absence of unified visibility across cloud and on-premise security components can result in a loss of shared context, which renders security loopholes inevitable. Even Security Information and Event Management (SIEM) or Extended Detection and Response (XDR) solutions can’t address the complexity and operational overhead of maintaining a hybrid security stack for different kinds of traffic. As such, organizations still need that single, integrated security stack that offers ubiquitous protection for incoming, outgoing, and internal traffic, managed via a unified dashboard.
Extending cloud-native security to east-west traffic
Organizations need a security solution that offers both north-south and east-west protection, but it must all be orchestrated from a unified, cloud-based console. There are two ways to achieve this:
1. Via WAN firewall policy
Cloud-native security architectures like SASE and SSE can offer the east-west protection typically delivered by a data center firewall by rerouting all internal traffic through the closest point of presence (PoP). Unlike a local firewall that comes with its own configuration and management constraints, firewall policies configured in the SSE PoP can be managed via the platform’s centralized management console. Within the unified console, admins can create access policies based on ZTNA principles. For instance, they can allow only authorized users connected to the corporate VLAN and running an authorized, Active Directory-registered device to access sensitive resources hosted within the on-premise data center.
In some cases, however, organizations may need to implement east-west traffic protection locally without redirecting the traffic to the PoP.
2. Via LAN firewall policy
Consider a situation where a CCTV camera connected to an IoT VLAN needs to access an internal CCTV server.
Given that the IoT camera could be compromised by a malicious threat actor and controlled over the internet via a remote C2 server, the camera's internet (WAN) access should be disabled by default. If the data center firewall policy is implemented in the PoP, traffic from internet-disabled IoT devices will naturally be exempt from such policies. To bridge this gap, SASE and SSE platforms can allow admins to configure firewall policies at the local SD-WAN device.
Typically, organizations connect to the SASE or SSE PoPs through an SD-WAN device, also known as a socket, installed at the site. The centralized dashboard can allow admins to configure rules for allowing or blocking internal or LAN traffic directly at the SD-WAN device, without ever sending it to the PoP over WAN.
In this scenario, if the traffic matches the pre-configured LAN firewall policies, the rules can be enforced locally. For instance, admins can allow corporate VLAN users to access printers connected to the printer VLAN while denying such access to guest Wi-Fi users. If the traffic does not match pre-defined policies, the traffic can be forwarded to the PoP for further classification.
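The local match-then-forward behavior described above can be sketched as a small rule table. The VLAN names and rules are illustrative, not from any vendor's configuration schema.

```python
RULES = [
    # (source VLAN, destination VLAN, action)
    ("corporate", "printers", "allow"),
    ("guest-wifi", "printers", "deny"),
    ("iot", "cctv-servers", "allow"),
    ("iot", "wan", "deny"),  # IoT devices get no internet access
]

def classify(src_vlan, dst_vlan):
    # First matching rule wins and is enforced locally at the SD-WAN
    # device (the "socket"); unmatched traffic goes to the PoP.
    for rule_src, rule_dst, action in RULES:
        if (src_vlan, dst_vlan) == (rule_src, rule_dst):
            return action
    return "forward-to-pop"

print(classify("corporate", "printers"))   # allow
print(classify("guest-wifi", "printers"))  # deny
print(classify("corporate", "wan"))        # forward-to-pop
```

The key design point is the default: anything the local table cannot decide is escalated to the PoP, so the centrally managed policy remains authoritative while purely internal traffic never has to leave the site.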
Cloud-based east-west protection is the way to go
As security functions move increasingly to the cloud, it’s crucial not to lose sight of the controls and security measures needed on-site.
Cloud-native protections aim to increase coverage while reducing complexities and boosting convergence. As critical as it is to enable east-west traffic protection within SASE and SSE architectures, it's equally important to maintain the unified visibility, control, and management offered by such platforms. To achieve this, organizations must resist responding to emerging threats by adding back disparate point solutions.
As such, any on-premise security measures added within cloud-based security paradigms should maintain a unified dashboard for granular policy configuration and end-to-end visibility across LAN and WAN traffic. This is the only way organizations can reliably bridge the gap between cloud and on-premise security and enable a sustainable, adaptable, and future-proof security stack.
In a startling revelation, Bitdefender, a leading cybersecurity firm, has disclosed a series of sophisticated attack methods that could significantly impact users of Google Workspace and Google Credential Provider for Windows (GCPW). This discovery highlights potential weaknesses in widely used cloud and authentication services, prompting a reevaluation of current security measures.
DISCOVERY OF ADVANCED ATTACK TECHNIQUES
Bitdefender’s research team, working in conjunction with their in-house research institute Bitdefender Labs, has identified previously unknown methods that cybercriminals could use to escalate a breach from a single endpoint to a network-wide level. These techniques, if exploited, could lead to severe consequences such as ransomware attacks or massive data exfiltration.
The attack progression involves several key stages, starting from a single compromised machine. Once inside the system, attackers could potentially:
Move across cloned machines within the network, especially if they are equipped with GCPW.
Gain unauthorized access to the Google Cloud Platform through custom permissions.
Decrypt locally stored passwords, extending their reach beyond the initially compromised machine.
These findings were responsibly disclosed to Google. However, Google has stated that these issues will not be addressed directly, as they fall outside their designated threat model. This decision reflects Google’s risk assessment and security priorities.
THE DUAL ROLE OF GOOGLE CREDENTIAL PROVIDER FOR WINDOWS (GCPW)
At the heart of these vulnerabilities is the Google Credential Provider for Windows (GCPW), a tool designed to streamline access and management within Google’s ecosystem. GCPW serves two primary functions:
Remote Device Management: Similar to Mobile Device Management (MDM) systems like Microsoft Intune, GCPW allows administrators to remotely manage and control Windows devices connected to Google Workspace. This includes enforcing security policies, deploying software updates, and managing device settings without needing a VPN connection or domain registration.
Single Sign-On (SSO) Authentication: GCPW facilitates SSO for Windows devices using Google Workspace credentials. This integration provides a seamless login experience, enabling users to access their devices with the same credentials used for Google services like Gmail, Google Drive, and Google Calendar.
THE OPERATIONAL MECHANISM OF GCPW
Understanding GCPW’s functioning is crucial in comprehending the vulnerabilities. Here’s a breakdown of its operational process:
Local Service Account Creation: Upon installing GCPW, a new user account named ‘gaia’ is created. This account, not intended for regular user interactions, serves as a service account with elevated privileges.
Credential Provider Integration: GCPW integrates a new Credential Provider into the Windows Local Security Authority Subsystem Service (LSASS), a critical component responsible for handling security operations and user authentication in Windows.
Local User Account Creation: GCPW facilitates the creation of new local user accounts linked to Google Workspace accounts whenever a new user authenticates with the system.
Logon Procedure: These Google Workspace users are logged in using their newly created local profiles, where a refresh token is stored to ensure continuous access without repeated authentication prompts.
UNCOVERED ATTACK METHODS
GOLDEN IMAGE LATERAL MOVEMENT:
Virtualized Environment Challenge: In environments that use cloned virtual machines (VMs), such as Virtual Desktop Infrastructure (VDI) or Desktop as a Service (DaaS) solutions, the installation of GCPW on a base machine means that the ‘gaia’ account and its password are cloned across all VMs.
Attack Implication: If an attacker discovers the password of one ‘gaia’ account, they can potentially access all machines that have been cloned from the same base image.
Scenario: Imagine a company, “Acme Corp,” uses a Virtual Desktop Infrastructure (VDI) where multiple virtual machines (VMs) are cloned from a single ‘golden image’ for efficiency. This image has Google Credential Provider for Windows (GCPW) pre-installed for ease of access.
Attack Example:
An attacker, Alice, manages to compromise one of Acme Corp’s VMs. During her exploration, she discovers that the VM has GCPW installed.
She learns that the ‘gaia’ account password created during the GCPW setup is identical across all cloned VMs because they were derived from the same golden image.
By extracting the ‘gaia’ account password from the compromised VM, Alice can now access all other VMs cloned from the same image. This allows her to move laterally across the network, potentially accessing sensitive information or deploying malware.
UNAUTHORIZED ACCESS TOKEN REQUEST:
Exploitation of OAuth Tokens: GCPW stores an OAuth 2.0 refresh token within the user’s session, maintaining access to the broader Google ecosystem. Attackers gaining access to this token can request new Access Tokens with varied permissions.
Scope of Abuse: The permissions granted by these tokens can enable attackers to access or manipulate a wide range of user data and Google services, effectively bypassing multi-factor authentication (MFA) processes.
Scenario: At a different company, “Beta Ltd.,” employees use their Google Workspace credentials to log into their Windows machines, facilitated by GCPW.
Attack Example:
Bob, a cybercriminal, gains initial access to a Beta Ltd. employee’s computer through a phishing attack.
Once inside the system, Bob finds the OAuth 2.0 refresh token stored by GCPW. This token is meant to maintain seamless access to Google services without repeated logins.
With this token, Bob crafts a request to Google’s authentication servers pretending to be the legitimate user. He requests new Access Tokens with broad permissions, like access to emails or cloud storage.
Using these tokens, Bob can now access sensitive data in the employee’s Google Workspace environment, like emails or documents, bypassing any multi-factor authentication set up by the company.
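What makes a stolen refresh token so powerful is that the exchange is plain OAuth 2.0: a form-encoded POST to the provider's token endpoint, with no interactive login or MFA step. The sketch below only builds the request body; the client ID, secret, token, and scope shown are placeholders, not real credentials, and no request is actually sent.

```python
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def refresh_grant_body(refresh_token, client_id, client_secret, scope=None):
    """Build the form body for a standard OAuth 2.0 refresh-token grant."""
    params = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        # Requesting a (narrower or different) scope than originally granted
        params["scope"] = scope
    return urlencode(params)

body = refresh_grant_body(
    "1//stolen-token", "example-client-id", "example-secret",
    scope="https://www.googleapis.com/auth/drive.readonly",
)
# Because the refresh token already encodes a completed login, the server
# issues a fresh access token without re-running MFA.
print("grant_type=refresh_token" in body)
```

This is why the defensive focus for findings like Bitdefender's is on detecting token theft and revoking sessions, not on hardening the exchange itself: the exchange is working exactly as the protocol specifies.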
PASSWORD RECOVERY THREAT:
Plaintext Credential Risk: GCPW’s mechanism of saving user passwords as encrypted LSA secrets, intended for password resetting, presents a vulnerability. Skilled attackers could decrypt these credentials, allowing them to impersonate users and gain unrestricted account access.
Scenario: A small business, “Gamma Inc.,” uses GCPW for managing their Windows devices and Google Workspace accounts.
Attack Example:
Carla, an experienced hacker, targets Gamma Inc. She successfully breaches one of the employee’s systems through a malware-laden email attachment.
After gaining access, Carla locates the encrypted LSA secret stored by GCPW, which contains the user’s Google Workspace password.
Using advanced decryption techniques, she decrypts this password. Now, Carla has the same access privileges as the employee, not just on the local machine but across all Google services where the employee’s account is used.
This enables Carla to impersonate the employee, access company emails, manipulate documents, or even transfer funds if the employee has financial privileges.
GOOGLE’S STANCE AND SECURITY IMPLICATIONS
Google’s decision not to address these findings, citing their exclusion from the company’s specific threat model, has stirred a debate in the cybersecurity community.
How to choose, configure and use cloud services securely.
If you want to store and process data in the cloud, or use cloud platforms to build and host your own services, this guidance will help you do so securely.
Cloud usage continues to grow steadily, both in volume and the type of services being built and hosted in it. In fact, cloud is usually the preferred option when organisations procure new IT services, as reflected in the UK government’s Cloud First Policy.
Against this background, it’s essential that new services are chosen and built in a way which reflects their security needs.
Who is this guidance for?
All organisations can use this guidance to navigate the sometimes confusing array of technologies which make up ‘the cloud’, and the management models which underpin their use.
Defining some common terms, and providing background on the various sections of this guide.
Understanding cloud services
Cloud services can be seen from a number of perspectives. This section considers:
service models and deployment models
the ‘shared responsibility model’ used by many cloud providers to handle day-to-day management of security
two specific security techniques; separation and cryptography
Choosing a cloud provider
The cloud security principles and how to use them, along with our lightweight security framework and some vendor responses to the principles.
Using cloud services securely
Some actions that customers of cloud services will need to take. This includes advice for cloud platforms and software as a service (SaaS), and those looking to lift and shift into the cloud.
Software as a Service (SaaS) security refers to the measures and practices employed to protect SaaS solutions’ data, applications, and infrastructure.
SaaS is a cloud computing model where software applications are hosted and delivered over the internet, rather than installed and run on individual devices or servers.
While SaaS offers numerous benefits, such as scalability and accessibility, it also introduces security challenges that organizations must address to safeguard their data and maintain compliance with regulatory requirements.
The software under this architecture is hosted centrally, with the service provider responsible for everything from database management to network administration to availability checks and infrastructure maintenance.
Data is often kept on centralized servers spread across numerous data centers and accessed by users via a web interface.
SaaS typically employs multi-tenancy, a deployment model in which a single software instance serves numerous customers whose data and settings are isolated.
Virtualization, load balancing, and backup storage are all part of this architecture’s strategy for delivering scalable, dependable, and readily available software solutions on demand.
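The multi-tenancy model described above can be sketched as a single datastore where every query is scoped by a tenant identifier. A minimal illustration (the table, tenant names, and helper are invented for this sketch, not from any real SaaS product):

```python
import sqlite3

# Toy multi-tenant store: one table serves all tenants, and every
# query is scoped by tenant_id so one customer's data stays isolated
# from another's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("acme", "roadmap.docx"), ("acme", "budget.xlsx"), ("globex", "payroll.csv")],
)

def list_documents(tenant_id: str) -> list[str]:
    # The tenant filter is applied on every read; forgetting it is
    # exactly the cross-tenant leakage risk multi-tenancy introduces.
    rows = conn.execute(
        "SELECT name FROM documents WHERE tenant_id = ? ORDER BY name",
        (tenant_id,),
    ).fetchall()
    return [name for (name,) in rows]

print(list_documents("acme"))    # ['budget.xlsx', 'roadmap.docx']
print(list_documents("globex"))  # ['payroll.csv']
```

Real SaaS platforms enforce this isolation at several layers (query filters, row-level security, separate encryption keys per tenant), but the principle is the same: no data path may omit the tenant scope.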
Following the SaaS security checklist helps you understand the blind spots and focus on securing your SaaS apps and data.
Why is SaaS Security important?
Software as a service (SaaS) applications frequently deal with sensitive data, ranging from personal information to confidential corporate details, making SaaS security essential.
Due to their internet-based nature, these apps are vulnerable to data theft and denial-of-service attacks.
Data loss, financial consequences, legal issues, and reputational harm are all possible outcomes of a hacked SaaS service.
In addition, due to the shared nature of SaaS’s basic infrastructure, a single vulnerability might affect several users.
Moreover, SaaS's reliance on centralized data storage, while convenient for users, makes it an easy target for attackers to exploit. Robust security for SaaS protects users and inspires confidence in the digital economy overall.
It is also a legal requirement for many businesses. Therefore, SaaS providers must place a premium on security to preserve credibility, safeguard customers, and guarantee the smooth running of operations.
Challenges and Risks for Security in SaaS
Data breach risk is significant since SaaS services are easily breached due to centralized storage.
The multi-tenancy framework might cause data leakage if clients are not adequately segregated.
Data breaches might occur due to insufficient access restrictions, and when using third-party infrastructure, you have to put your faith in their safety precautions.
SaaS applications often expose insecure Application Programming Interfaces (APIs), which can open them to cyberattacks if they are not properly secured.
Because data is increasingly stored off-site, often in different countries, ensuring continued regulatory compliance is a challenging task.
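Several of the API risks above come down to missing server-side input validation. A minimal sketch of validating a hypothetical endpoint payload (the field names and rules are illustrative assumptions, not from any specific SaaS API):

```python
import re

# Allow only short alphanumeric usernames; reject anything else early,
# before it reaches a database or downstream service.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_request(payload: dict) -> list[str]:
    """Return a list of validation errors; empty means the payload is OK."""
    errors = []
    username = payload.get("username", "")
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        errors.append("username must be 3-32 chars: letters, digits, underscore")
    limit = payload.get("limit", 10)
    if not isinstance(limit, int) or not (1 <= limit <= 100):
        errors.append("limit must be an integer between 1 and 100")
    return errors

assert validate_request({"username": "alice_01", "limit": 5}) == []
assert validate_request({"username": "a; DROP TABLE users", "limit": 5}) != []
```

In production this kind of check usually lives in a schema-validation layer (framework validators or a schema library) rather than hand-written conditionals, but the principle of rejecting malformed input at the boundary is the same.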
CISA released a fact sheet listing some of the tools it offers to help organizations transition to the cloud and secure their cloud environments.
Five tools are described in the fact-sheet, along with other guidance to “…provide network defenders and incident response/analysts open-source tools, methods, and guidance for identifying, detecting, and mitigating cyber threats, known vulnerabilities, and anomalies while operating a cloud or hybrid environment.”
1- The Cyber Security Evaluation Tool – CISA developed the Cyber Security Evaluation Tool (CSET) using industry-recognized standards, frameworks, and recommendations to assist organizations in evaluating their enterprise and asset cybersecurity posture.
2- Secure Cloud Business Applications (SCuBA) project – which provides guidance for FCEB agencies securing their cloud business application environments and protecting federal information created, accessed, shared, and stored in those environments.
3- Untitled Goose Tool – CISA, together with Sandia National Laboratories, developed the Untitled Goose Tool to assist network defenders with hunt and incident response activities in Microsoft Azure, AAD, and M365 environments.
4- Decider – assists incident responders and analysts in mapping observed activity to the MITRE ATT&CK framework.
5- Memory Forensic on Cloud – Memory Forensic on Cloud, developed by JPCERT/CC, is a tool for building a memory forensic environment on Amazon Web Services.
The Cybersecurity and Infrastructure Security Agency (CISA) has published a list of free tools that businesses may use to protect themselves in cloud-based settings. According to CISA, these tools will assist incident response analysts and network defenders in identifying, detecting, and mitigating threats, known vulnerabilities, and anomalies in cloud-based or hybrid environments. During an attack, threat actors have traditionally focused their attention on servers located on the premises. However, the rapid expansion of cloud migration has drawn many threat actors to target cloud systems because of the vast number of attack vectors the cloud presents.
Organizations that lack the capabilities to protect themselves against cloud-based attacks may benefit from the tools supplied by CISA, which can help users secure their cloud resources against data theft and information exposure. CISA stated that companies should use the security features supplied by their Cloud Service Providers and combine them with the free tools it recommends in order to defend themselves from these attacks. The following is a list of the tools that CISA provides:
Cybersecurity Evaluation Tool (CSET).
The SCuBAGear tool.
The Untitled Goose Tool
Decider Tool
Memory Forensic on Cloud (JPCERT/CC) is an offering of Japan CERT.
THE CYBERSECURITY EVALUATION TOOL (CSET)
To assist enterprises in assessing their cybersecurity posture, CISA created this tool using standards, guidelines, and recommendations that are widely accepted in the industry. The tool asks multiple questions about operational policies and procedures, as well as questions about the design of the system. This information is then used to generate a report that gives a comprehensive view of the organization's strengths and weaknesses, along with suggestions to remedy them. CSET version 11.5 includes the Cross-Sector Cybersecurity Performance Goals (CPGs), which were developed by CISA in collaboration with the National Institute of Standards and Technology (NIST).
SCuBAGear is a tool developed as part of the SCuBA (Secure Cloud Business Applications) project, which was started as a direct reaction to the supply chain attack on SolarWinds Orion software. SCuBAGear is an automated tool that compares Federal Civilian Executive Branch (FCEB) agencies' M365 configurations against CISA's secure configuration baselines. In conjunction with SCuBAGear, CISA has produced a number of materials that serve as a guide for cloud security and are useful to all types of enterprises. The project has resulted in three documents:
SCuBA Technical Reference Architecture (TRA) — offers fundamental building blocks for bolstering the security of cloud environments. Cloud-based business apps (for SaaS models) and the security services used to safeguard and monitor them both fall within the TRA's purview.
Hybrid Identity Solutions Architecture — provides best-practice methods for tackling identity management in a cloud-hosted environment.
M365 Secure Configuration Baseline (SCB) — offers fundamental security settings for Microsoft Defender 365, OneDrive, Azure Active Directory, Exchange Online, and other services. SCuBAGear generates an HTML report detailing deviations from the policies outlined in the M365 SCB guidance.
UNTITLED GOOSE TOOL
The tool, created in collaboration with Sandia National Laboratories, is designed to assist network defenders in locating harmful activity in Microsoft Azure, Azure Active Directory, and Microsoft 365. It also enables the querying, exporting, and investigation of audit logs. Organizations that do not import these sorts of logs into their Security Information and Event Management (SIEM) platform will find this tool especially helpful. It was designed as an alternative to the PowerShell tools available at the time, which lacked the capability to gather data for Azure, AAD, and M365.
This is a tool that network defenders may use to:
Extract cloud artifacts from Microsoft Azure, Active Directory, and Microsoft 365.
Perform time bounding of the Unified Audit Logs (UAL).
Collect data using the time-bounding feature of MDE (Microsoft Defender for Endpoint).
DECIDER TOOL
Decider assists incident response analysts in mapping observed malicious activity to the MITRE ATT&CK framework, making their findings more accessible and offering guidance for laying them out in the appropriate manner. Much like the CSET, this tool asks the user a series of questions in order to select the most relevant ATT&CK techniques. Given all of this information, users are able to:
Export heatmaps from the ATT&CK Navigator.
Publish reports on the threat intelligence they have collected.
Determine and put into effect the appropriate preventative measures.
In addition, CISA has provided a link that describes how to use the Decider tool.
MEMORY FORENSIC ON CLOUD (JPCERT/CC)
Memory Forensic on Cloud was developed for constructing a memory forensic environment and analyzing Windows memory images on AWS using Volatility 3. Memory forensics is also essential against the recently popular LOTL (Living-off-the-Land) attacks, also known as fileless malware. Memory image analysis can be helpful during incident response engagements, which often call for high-specification equipment, a significant amount of time, and other resources to adequately prepare the environment.
Cloud environments rely on identity as the security perimeter, and identities are mushrooming and making “identity sprawl” a serious challenge. Users often have multiple identities that span many resources and devices, while machine identities —used by apps, connected devices and other services—are growing at an accelerated pace.
One way to address the large attack surface and unnecessary risk in the cloud is to implement just-in-time (JIT) privileged access. This approach limits the amount of time an identity holds privileged access before it is revoked. Even if an attacker compromises credentials, they may gain privileged access only temporarily, or not at all. This is a critical defense mechanism.
Simply put, JIT grants privileged access only temporarily and revokes it once the related task is completed. JIT builds on a least-privilege framework to include a time factor, so users only have access to those resources they need to carry out their functions, and only while they are performing those functions. That said, excessive privileges should, by default, be eliminated wherever possible.
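The grant-then-revoke cycle described above can be sketched as a small in-memory store where each grant carries an expiry. The class and method names are illustrative, not a real IAM API:

```python
import time

# JIT sketch: privileged access is granted with a time-to-live, and
# every permission check re-validates it, so elevated access
# disappears once the task window closes.
class JITGrants:
    def __init__(self):
        self._grants = {}  # (user, role) -> expiry timestamp

    def grant(self, user, role, ttl_seconds):
        self._grants[(user, role)] = time.time() + ttl_seconds

    def revoke(self, user, role):
        self._grants.pop((user, role), None)

    def has_access(self, user, role):
        expiry = self._grants.get((user, role))
        if expiry is None:
            return False
        if time.time() >= expiry:  # window closed: revoke lazily
            self.revoke(user, role)
            return False
        return True

grants = JITGrants()
grants.grant("alice", "db-admin", ttl_seconds=1)
assert grants.has_access("alice", "db-admin")      # within the window
time.sleep(1.1)
assert not grants.has_access("alice", "db-admin")  # expired automatically
```

A production system would back this with the cloud provider's native role-assignment APIs and audit each grant and revocation, but the time-bounded shape of the access is the same.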
“Right-sizing permissions” has become a buzzword for security professionals, but it’s a challenge. Enforcing the kind of granular permissions management necessary for good cloud security manually—going back and forth trying to determine which privileges are called for and what are the minimal escalations that can get the job done — can be time-consuming and frustrating for both users and security teams.
Organizations have reason to worry. As the annual Verizon Data Breach Investigations Report notes time and again: credentials can be the weak link in any network. The most recent report noted the use of stolen credentials has grown about 30% in the last five years. Since a large share of breaches can be traced back to credential theft and abuse, limiting the potential scope of account compromise will have an outsized effect on improving security.
How to implement JIT access
Deploying JIT access begins with gaining a clear view of who users are, what privileges they have, and what privileges they need, including whether they are human or machine identities. Is the user an engineer or developer, an administrator, or security staff? Work can't stop while a user waits to be validated. This is where automation can provide a workable system to provision temporary privileges and revoke them once they're no longer necessary.
A few best practices can help security teams implement automated JIT:
A self-service portal: Security staff get a bad rap as creators of user friction, so any tool that can smooth out workflows is a good thing. A self-service portal can reduce friction by allowing users to request elevated privileges and tracking the approval process. This cuts back on delays and requests that fall through the cracks, while also enabling automated permissions management, which in turn reduces the cloud attack surface and leaves an audit trail for monitoring activity.
Automate policies for low-risk requests: Simple requests involving low-risk activity, such as work in non-production environments, can be automated with policies that approve requests for a limited time and without human intervention.
Define owners for each step of the process: Automation should not equal relinquishing control of business processes. It needs to be monitored to ensure unintended actions do not occur. Each step of the process —reviewing requests, monitoring implementation, and revoking privileges—must be assigned an owner and more complex and sensitive requests should be reviewed and approved by a human, when necessary.
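The policy-based routing the best practices above describe might look like this in outline. The `environment` field and TTL threshold are assumptions for illustration, not a real policy engine:

```python
# Auto-approve only low-risk JIT requests: non-production work under a
# short TTL goes through automatically; everything else is routed to a
# human owner for review.
MAX_AUTO_TTL_HOURS = 8  # illustrative threshold

def route_request(request: dict) -> str:
    low_risk = (
        request.get("environment") == "non-production"
        and request.get("ttl_hours", 0) <= MAX_AUTO_TTL_HOURS
    )
    return "auto-approved" if low_risk else "needs-human-review"

assert route_request({"environment": "non-production", "ttl_hours": 4}) == "auto-approved"
assert route_request({"environment": "production", "ttl_hours": 4}) == "needs-human-review"
```

The useful property is that every request flows through one policy function, so the audit trail records both the decision and the rule that produced it.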
By implementing JIT, security teams can move closer to achieving a least-privilege model and implementing zero trust security. Automation can make this possible by speeding up the process of granting and revoking permissions as necessary, without creating more work for security teams that are already stretched thin, or friction for users that impacts their agility and efficiency.
Cloud mining is a way for you to purchase mining power from a remote data centre. Cloud mining works in the same way as regular cryptocurrency mining, except that instead of purchasing expensive hardware and dealing with its maintenance yourself, you just need to buy some shares and let a service provider do all the work.
This can be especially appealing if you haven’t got access to cheap electricity in your area (or any at all), or if you simply don’t want to deal with the hassle of setting up your rig.
What is Cloud Mining?
Cloud mining is a service that allows you to purchase mining power from data centres. The process of mining is done remotely, and the owner of the data centre pays for the hardware and electricity usage. You pay for the hash power that you rent from them.
It is a process of renting crypto mining capacity from a third-party provider and using it to mine cryptocurrencies yourself. Instead of having to buy expensive mining hardware, pay for its electricity use, and maintain it yourself, cloud mining lets you buy into a mining pool without any of the hassles involved in normal crypto mining.
How does cloud mining work?
Cloud mining is a way to earn cryptocurrencies without having to buy expensive hardware. You can buy hash power from a cloud mining company, which means you won’t have to set up your hardware or software.
You don’t need any special knowledge or skills to start earning money immediately with this method of cryptocurrency mining.
Bitcoin Cloud Mining is the process by which transactions are verified and added to the public ledger, known as the blockchain. The blockchain is what allows a user to send Bitcoin or other cryptocurrencies between their accounts and to pay for goods or services from any merchant that accepts cryptocurrencies.
The blockchain is distributed across thousands of computers around the world. One of those computers is owned by you! So when your computer works on creating a new transaction block, it adds some cryptographic hashing which validates and secures the block and all subsequent blocks.
The key part here is that if your computer is doing work on someone else’s transaction block, you’ll be rewarded with Bitcoins or other cryptocurrencies, which you can then spend however you’d like. With the Bitcoin price today of over $22,000, this is the currency that receives the most mining.
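The chaining-and-hashing idea described above can be illustrated with a toy example. This is a sketch of the concept only, not a real Bitcoin implementation (real blocks use double SHA-256 over a binary header, plus proof-of-work):

```python
import hashlib
import json

# Each block commits to the previous block's hash, so altering any
# earlier block changes every later hash and breaks the chain.
def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"index": 0, "prev_hash": "0" * 64, "txs": ["coinbase -> alice"]}
block1 = {"index": 1, "prev_hash": block_hash(genesis), "txs": ["alice -> bob 1 BTC"]}

# Tampering with the genesis block invalidates the link from block1.
tampered = dict(genesis, txs=["coinbase -> mallory"])
assert block1["prev_hash"] == block_hash(genesis)
assert block1["prev_hash"] != block_hash(tampered)
```

This is why validating and extending the chain takes real computational work, and why that work is what miners (cloud or otherwise) are being paid to perform.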
Advantages of Cloud Mining
No need for hardware: Cloud mining is completely virtual. You don’t need to buy any equipment, so you can start earning immediately without having to worry about maintenance or electricity costs.
No need for software: Unlike traditional mining where you have to install specific software on your computer, cloud mining requires no software installation at all. Once you purchase hash power from a provider and connect it with their platform (usually via API key), everything else works automatically in the background without any additional effort from your side.
No maintenance required: The majority of cloud mining providers offer contracts with monthly fees rather than daily fees like other companies do. This makes it much easier because there’s no need for regular checkups or maintenance work every month like some other platforms require.
Disadvantages of Cloud Mining
High electricity costs: Mining cryptocurrency requires a lot of electricity. If you’re using cloud mining, this cost is passed on to you, the customer. This can be very expensive and make it hard for your ROI (return on investment) to pay off.
Maintenance costs: You’ll also need to consider maintenance costs for the hardware, as well as any downtime during which the machine malfunctions or is under repair by the company providing it. This could also negatively affect your ROI if the provider doesn’t have a good track record of prompt repairs and replacements.
Low returns on investment: Finally, there’s no guarantee that any particular cryptocurrency will increase in value over time; it may even decrease. If that happens while you’re paying high fees for someone else to mine coins on your behalf, the losses will likely outweigh whatever gains result from using cloud mining services like Hashflare or Genesis Mining.
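A quick back-of-the-envelope calculation shows how fees erode returns. All figures below are invented inputs for illustration; real returns depend on coin price, network difficulty, and provider fees, none of which are fixed:

```python
# Toy ROI check for a hypothetical cloud mining contract.
contract_cost = 1500.00   # upfront contract price, USD (assumed)
monthly_fee = 40.00       # maintenance/electricity fee, USD (assumed)
monthly_yield = 110.00    # expected mined value per month, USD (assumed)
months = 24               # contract length

net = months * (monthly_yield - monthly_fee) - contract_cost
print(f"Net over {months} months: ${net:.2f}")  # Net over 24 months: $180.00
```

With these numbers the contract barely turns a profit (24 × $70 − $1500 = $180), and a modest drop in coin price or rise in difficulty would push it negative, which is the "low ROI" risk in concrete terms.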
Types of Cloud Mining
Cloud mining is a way to mine cryptocurrencies without having to buy expensive equipment or even invest in it at all. Instead, you pay someone else to do it for you.
Host Mining
Host mining is a type of cloud mining where you buy a physical mining rig that the provider hosts for you, and you pay for the electricity. The price of host mining can be very high, but it’s also the most profitable way to earn money. You need technical knowledge and experience to host mine successfully, so this isn’t recommended for beginners.
Hash Power Leasing
Hash power leasing is a way to get hash power without buying the hardware. You sign up with a service provider and pay them for the hash power their equipment produces; the hardware itself remains owned and maintained by the provider.
The process works like this:
You sign up with a cloud mining company (like Hashflare or Genesis Mining)
They give you access to their mining farm’s equipment and software through an API key or web interface
You set up an account with them and deposit money into it (usually Bitcoin)
You are then able to use this money as if it were your own – but instead of buying physical hardware yourself, all of that work has already been done by someone else.
How to spot potential fraud in cloud mining
To avoid fraud, you should look for companies that are transparent about their ownership and location. Look at the company’s domain name and website for authenticity. Avoid any cloud mining company that does not provide a physical address or phone number on its website.
You should also check for reviews and complaints about the company in question by searching online or contacting local authorities (e.g., Better Business Bureau aka BBB).
BitDeer
BitDeer is a cloud mining platform that allows users to rent computing power to mine various cryptocurrencies, including Bitcoin, Ethereum, Litecoin, and more. It was founded in 2018 and is headquartered in Singapore.
BitDeer partners with mining farms and data centres worldwide to provide cloud mining services. Users can rent mining machines or hash power from BitDeer’s partners, which are located in regions with favourable conditions for cryptocurrency mining, such as regions with low electricity costs and cool climates.
StormGain
StormGain is a cryptocurrency trading and exchange platform that offers a range of services for cryptocurrency traders and investors. It was founded in 2019 and is headquartered in Seychelles.
StormGain aims to provide a user-friendly and accessible platform for trading and investing in cryptocurrencies, with a focus on leveraged trading and cryptocurrency mining. Some of the features and services offered by StormGain include Cryptocurrency Trading, Leverage Trading, Crypto Mining, Wallet Services and more.
GMiners
GMiner is a cloud mining company based in Hong Kong. It’s a subsidiary of Genesis Mining, one of the largest Bitcoin mining companies in the world. GMiner offers a variety of different mining contracts for Bitcoin, Ethereum, Dash, Litecoin and Bitcoin Cash.
Potential Risks
Please note that the cryptocurrency market is constantly evolving, and the performance and reputation of cloud mining companies may change over time. It’s essential to do thorough research, read reviews from multiple sources, and exercise caution when investing in cloud mining services or any other form of cryptocurrency investment. Always consider the risks and consult with experienced investors or seek professional advice before making any investment decisions.
Businesses from all industries are aware of the benefits of cloud computing. Some organizations are just getting started with migration as part of digital transformation initiatives, while others are implementing sophisticated multi-cloud, hybrid strategies. However, data security in cloud computing is one of the most challenging deployment concerns at any level due to the unique risks that come with the technology.
The cloud compromises the conventional network perimeter that guided cybersecurity efforts in the past. As a result, a distinct strategy is needed for data security in cloud computing, one that takes into account both the complexity of the data compliance, governance, and security structures as well as the dangers.
The Shifting Business Environment and Its Effects on Cloud Security
Bolstering cybersecurity defenses is the top investment that businesses implementing digital transformation initiatives plan to make over the next three years. The rising trend of remote and hybrid workplaces is altering investment priorities and bringing about a paradigm shift in cybersecurity.
Cloud computing provides the underlying technology for this transition as organizations want to increase resilience, and people want the freedom to work from anywhere. Yet, the lack of built-in security safeguards in many cloud systems highlights the need for data security in cloud computing.
What Is Cloud Data Security?
Cloud data security involves adopting technological solutions, policies, and processes to safeguard cloud-based systems and apps and the data and user access that go with them. The fundamental tenets of information security and data governance apply to the cloud as well:
Confidentiality: Protecting the data from unauthorized access and disclosure.
Integrity: Preventing unauthorized changes to the data so that it can be trusted.
Availability: Ensuring the data is fully accessible and available when it’s needed.
Cloud data security must be taken into account at every stage of cloud computing and the data lifecycle, including during the development, deployment, and administration of the cloud environment.
Data Risks in Cloud
Cloud computing has revolutionized the way data is gathered, stored, and processed, but it has also introduced new risks to data security. As more organizations rely on the cloud, cyberattacks and data breaches have become the biggest threats to data protection. While cloud technology is subject to the same cybersecurity risks as on-premises solutions, it poses additional risks to data security.
Application Programming Interfaces (APIs) with Security Flaws
Security flaws in APIs used for authentication and access are a common risk associated with the cloud. These flaws can be exploited by hackers to gain unauthorized access to sensitive data. Common issues include insufficient or improper input validation and insufficient authentication mechanisms. APIs can also be vulnerable to denial-of-service attacks (DoS), causing service disruptions and data loss.
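One common mitigation for the denial-of-service risk mentioned above is per-client rate limiting. A minimal token-bucket sketch (the rate and capacity parameters are illustrative):

```python
import time

# Token bucket: each client gets `capacity` tokens that refill at
# `rate_per_sec`; a request is allowed only if a token is available,
# which caps both sustained rate and burst size.
class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, never past capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
results = [bucket.allow() for _ in range(12)]
assert results[:10] == [True] * 10  # burst up to capacity
assert results[10] is False         # then throttled
```

In a real API gateway the bucket would be keyed per client or per API key and often stored in a shared cache, but the throttling logic is the same.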
Account Takeover or Account Hijacking
Account takeover or hijacking is a common threat in cloud computing, where hackers gain unauthorized access to user accounts and can steal or manipulate sensitive data. Hackers can gain access to cloud accounts due to weak or stolen passwords used by users. This is because users often use simple, easy-to-guess passwords or reuse the same password across multiple accounts. Once a hacker gains access to one account, they can potentially access other accounts that use the same password.
Insider Risks
Insider threats are a significant concern in cloud computing due to the lack of visibility into the cloud ecosystem. Cloud providers typically have a vast and complex infrastructure, which can make it challenging to monitor user activity and detect insider threats. Insider threats can occur when insiders, such as employees, contractors, or partners, intentionally or unintentionally access or disclose sensitive data.
Security Measures Protecting Data in Cloud Computing
Identity governance is the first step in securing data in the cloud. Across all of your on-premises and cloud platforms, workloads, and data access, you need a thorough, unified perspective, and that is what identity management provides.
Install Encryption
Encryption is an essential security measure for protecting sensitive and important data, including Personally Identifiable Information (PII) and intellectual property, both in transit and at rest.
Third-party encryption solutions can offer additional layers of security and flexibility beyond what is provided by CSPs. For example, some third-party encryption solutions may offer more robust encryption algorithms or the ability to encrypt data before it is uploaded to the cloud. They can also provide granular access controls, enabling organizations to determine who can access specific data and under what circumstances.
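The encrypt-before-upload flow described above can be sketched as follows. The toy XOR stream cipher here exists only to show the shape of the flow; real systems should use a vetted library (for example AES-GCM via the `cryptography` package), never hand-rolled crypto:

```python
import hashlib
import os

# Derive a key from a passphrase with PBKDF2 (a real KDF), then apply
# a TOY keystream cipher so the data leaves the client already
# encrypted. Illustration only -- do not use this cipher in practice.
def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
plaintext = b"quarterly payroll records"
ciphertext = xor_stream(key, plaintext)  # upload this, not the plaintext

assert ciphertext != plaintext
assert xor_stream(key, ciphertext) == plaintext  # XOR is symmetric
```

The point of the pattern is that the cloud provider only ever stores ciphertext, so the key, and therefore the access decision, stays with the customer.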
Archive the Data
Backing up cloud data is critical for data protection and business continuity. The 3-2-1 rule is a best practice, involving having at least three copies of the data, stored in two different types of media, with one backup copy stored offsite. Businesses should have a local backup in addition to the cloud provider’s backup, providing an extra layer of protection in case the cloud provider’s backup fails or is inaccessible.
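The 3-2-1 rule lends itself to a simple automated check. The copy records below are illustrative:

```python
# 3-2-1 rule: at least three copies of the data, on at least two
# different media types, with at least one copy offsite.
def satisfies_3_2_1(copies: list[dict]) -> bool:
    media = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

copies = [
    {"media": "local-disk", "offsite": False},            # production data
    {"media": "local-disk", "offsite": False},            # local backup
    {"media": "cloud-object-storage", "offsite": True},   # offsite backup
]
assert satisfies_3_2_1(copies)
assert not satisfies_3_2_1(copies[:2])  # only two copies: rule violated
```

An inventory script like this can run as part of routine backup verification, flagging datasets whose copies have drifted below the rule.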
Put Identity and Access Management (IAM) into Practice
IAM (Identity and Access Management) is essential for securing cloud resources and data. IAM components in a cloud environment include identity governance, privileged access control, and access management, such as SSO or MFA. To ensure effective IAM in a cloud environment, organizations must include cloud resources in their IAM framework, create appropriate policies and procedures, and regularly review and audit IAM policies and procedures.
Control Your Password Rules
Poor password hygiene is a common cause of security events. Password management software can help users create, store and manage strong, unique passwords for each account, making it easier to follow safe password procedures. This can encourage better password hygiene and reduce the risk of password-related security incidents.
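A password-management or self-service tool might enforce rules like these. The exact thresholds are illustrative assumptions, and current guidance generally favors length and denylists over forced complexity:

```python
import re

# Minimal password-hygiene check returning human-readable issues.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_issues(pw: str) -> list[str]:
    issues = []
    if len(pw) < 12:
        issues.append("use at least 12 characters")
    if pw.lower() in COMMON_PASSWORDS:
        issues.append("avoid common passwords")
    if re.fullmatch(r"[a-z]+", pw):
        issues.append("mix in digits, capitals, or symbols")
    return issues

assert password_issues("correct-horse-battery-staple") == []
assert "use at least 12 characters" in password_issues("qwerty")
```

A real deployment would also check against breached-password lists and pair these rules with a password manager so users never need to memorize the results.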
Use Multi-factor Authentication (MFA)
MFA (Multi-factor authentication) is a security mechanism that adds an extra layer of security beyond traditional password-based authentication. It reduces the chance of credentials being stolen and makes it more challenging for threat actors to gain unauthorized access to cloud accounts.
MFA is particularly valuable in cloud environments, where many employees and contractors may access cloud accounts from various locations and devices. However, it is important to ensure that it is implemented correctly, easy to use, and integrated with existing security infrastructure and policies.
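Many MFA authenticator apps implement the time-based one-time password (TOTP) scheme of RFC 6238, where the server and the authenticator derive the same short-lived code from a shared secret and the current time window. A compact standard-library sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

# TOTP (RFC 6238): HMAC the current 30-second time step with the
# shared secret, then dynamically truncate to a short decimal code.
def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
now = int(time.time())
code = totp(secret, now)
assert len(code) == 6 and code.isdigit()
assert totp(secret, now) == code  # same secret + window -> same code
```

Because the code expires with the time window, a stolen password alone is no longer enough to hijack the account, which is exactly the property that makes MFA effective against the credential-theft risks described earlier.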
Summary
Your environment will get more complicated as you continue to utilize the cloud, particularly if you begin to rely on the hybrid multi-cloud. Data security in cloud computing is essential for reducing the dangers to your business and safeguarding not just your data but also your brand’s reputation.
Consider deploying solutions for controlling cloud access and entitlements to protect yourself from the always-changing cloud risks. For a thorough approach to identity management, incorporate these solutions into your entire IAM strategy as well.
A complete, identity-centered solution ensures that you constantly implement access control and employ governance more wisely, regardless of whether your data is on-premises or in the cloud. You will also profit from automation and other factors that increase identity efficiency and save expenses.