By 2022, API abuses will become the most frequent attack vector, predicts Gartner. We’re already witnessing new API exploits reach the headlines on a near-daily basis. Most infamous was the Equifax breach, an attack that exposed 147 million accounts in 2017. Since then, many more API breaches and major vulnerabilities have been detected at Experian, Geico, Facebook, Peloton and other organizations.
So, why are API attacks suddenly becoming so prevalent? Well, several factors are contributing to the rise in API exploits. As I’ve covered before, the use of RESTful web APIs is becoming more widespread through digital transformation initiatives and SaaS productization. And, the data these touchpoints transmit can carry a hefty price tag. Unfortunately, cybersecurity has not sufficiently progressed, making APIs ripe for the hacker’s picking.
I recently met with Roey Eliyahu, CEO of Salt Security, to better understand why more and more API hacks are making headlines. According to Eliyahu, a general lack of security awareness means these integration points are a low-effort, high-reward attack target. Establishing protection against zero-day threats means increasing the visibility of API holdings, testing for broken authorization and instituting ongoing monitoring of runtime environments.
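To make "testing for broken authorization" concrete, here is a minimal, hypothetical Python sketch of an object-level authorization check; the resource store and function names are invented for illustration, and the probe at the end is the kind of request a scanner would issue:

```python
# Hypothetical sketch: object-level authorization.
# Authentication alone ("is this a valid user?") is not enough;
# each request must also verify the caller owns the object.

RESOURCES = {
    "invoice-100": {"owner": "alice", "amount": 250},
    "invoice-101": {"owner": "bob", "amount": 90},
}

def fetch_resource(caller, resource_id):
    """Return (resource, status): the resource only if the caller owns it."""
    resource = RESOURCES.get(resource_id)
    if resource is None:
        return None, 404
    if resource["owner"] != caller:
        # Broken-authorization bugs skip this check and return 200.
        return None, 403
    return resource, 200

# A simple probe: request another user's object and expect 403, not 200.
_, status = fetch_resource("alice", "invoice-101")
assert status == 403
```

An API that returns 200 for that probe has the classic broken object-level authorization flaw behind several of the breaches above.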
Below, I’ll review the top factors contributing to the rise in API exploits and consider how a zero-day protection mindset can mitigate common API vulnerabilities.
It guides system administrators and developers of National Security Systems on how to deploy Kubernetes with example configurations for the recommended hardening measures and mitigations.
Below is the list of mitigations provided by the US agencies:
Scan containers and Pods for vulnerabilities or misconfigurations.
Run containers and Pods with the least privileges possible.
Use network separation to control the amount of damage a compromise can cause.
Use firewalls to limit unneeded network connectivity and encryption to protect confidentiality.
Use strong authentication and authorization to limit user and administrator access as well as to limit the attack surface.
Use log auditing so that administrators can monitor activity and be alerted to potential malicious activity.
Periodically review all Kubernetes settings and use vulnerability scans to help ensure risks are appropriately accounted for and security patches are applied.
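As a rough illustration of the "least privileges possible" mitigation, the sketch below audits a Pod spec (expressed as a plain Python dict mirroring the YAML manifest) for a few of the settings the guidance calls out. The field names follow the Kubernetes securityContext API, but the checker itself is a simplified assumption, not an official tool:

```python
# Hypothetical least-privilege audit of a Kubernetes Pod spec.
# Flags containers that run privileged, may run as root, or
# allow privilege escalation (which defaults to true).

def audit_pod(pod):
    findings = []
    for c in pod.get("spec", {}).get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if not sc.get("runAsNonRoot"):
            findings.append(f"{c['name']}: may run as root")
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation allowed")
    return findings

pod = {
    "spec": {
        "containers": [
            {"name": "web", "securityContext": {"privileged": True}},
            {"name": "app", "securityContext": {
                "runAsNonRoot": True,
                "allowPrivilegeEscalation": False,
                "privileged": False,
            }},
        ]
    }
}
```

Running `audit_pod(pod)` flags the "web" container on all three counts while the hardened "app" container passes clean.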
Further, a recent Sophos survey found that the average post-attack remediation costs, including lost business, grew to nearly $2 million per incident in 2021, about 10 times the size of the ransom payment itself.
CISOs and hands-on security professionals are implementing several tactics to defend their organization, and these include proactive threat hunting and technical defenses like multi-factor authentication.
While these practices are helpful, they are focused on preventing attacks from happening in the first place, while the harsh reality is that it’s no longer a question of if hackers are going to get in, but when. With so much at stake, why are data recovery and restoration often put on the back burner of the security conversation when they could be the most valuable tools in the security arsenal?
Embracing new technologies leads to qualitative growth but simultaneously raises the odds of large-scale data breaches. When adopting cloud technology, it is important to treat the security of cloud infrastructure as one of the crucial responsibilities. Many organizations out there are still unsure of the security of their data in the cloud environment.
Nowadays, cloud computing servers are increasingly susceptible to data breaches. Cloud infrastructure security solutions help ensure that sensitive information and transaction data are protected, and help prevent third parties from tampering with data as it is transmitted.
DDoS Protection
Distributed denial-of-service (DDoS) attacks are infamously on the rise, deployed to flood a computer system with requests. As a result, the website slows to a crawl and eventually crashes once the number of requests exceeds what it can handle. Cloud computing security provides solutions that focus on stopping the bulk traffic targeting a company’s cloud servers.
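One common building block behind such mitigation is per-client rate limiting. The following is an illustrative token-bucket sketch in Python; the class, rates, and numbers are invented for demonstration and are not taken from any vendor's product:

```python
# Illustrative token-bucket rate limiter of the kind a DDoS
# mitigation layer applies per client: requests above the
# sustained rate are rejected instead of overwhelming the backend.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)
# A burst of 20 requests at the same instant: only the first 5 pass.
results = [bucket.allow(now=0.0) for _ in range(20)]
```

Legitimate clients under the sustained rate are unaffected, while a flood is shaved down to the bucket's capacity.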
Constant Support
Among the best practices of cloud infrastructure security solutions is consistent support and high availability for the company’s assets. In addition, users get to enjoy the benefit of 24/7 live monitoring all year round. This live monitoring and constant support help secure data with little effort.
Threat Detection
Infrastructure security in the cloud offers advanced threat detection strategies such as endpoint scanning for threats at the device level. Endpoint scanning enhances the security of the devices accessing your network.
Supervision of Compliance
In order to protect data, the entire infrastructure needs to operate under compliant regulations. A compliant, secured cloud computing infrastructure helps in maintaining and managing the safety features of cloud storage.
The points mentioned above make clear how beneficial and vital cloud infrastructure security is for an organization. Unfortunately, many high-profile data breach cases have been witnessed in past years.
To patch the loopholes and strengthen IT infrastructure security, it is crucial to make the security of cloud storage services a high priority. Engage with top-class cloud computing security tools to get better results and keep data secured.
The growing reliance on public cloud services as both a source and repository of mission-critical information means data owners are under pressure to deliver effective protection for cloud-resident applications and data. Indeed, cloud is now front of mind for many IT organisations. According to recent research by Enterprise Strategy Group (ESG), cloud is “very well-perceived by data protection decision makers”, with 87% of respondents saying it has made a positive impact on their data protection strategies.
However, many organisations are unclear about what levels of data protection are provided by public cloud infrastructure and SaaS solutions, increasing the risk of potential data loss and compliance breach. At the same time, on-premises backup and disaster recovery strategies are increasingly leveraging cloud infrastructure, resulting in hybrid data protection strategies that deliver inconsistent service levels.
Despite these challenges, there are a significant number of organizations that still don’t use a third-party data protection solution or service. This should be cause for concern considering that everything an organization stores in the cloud, from emails and files to chat history and sales data (among many other datasets) is its responsibility and is subject to the same recoverability challenges and requirements as traditional data. In fact, only 13% of survey respondents see themselves as solely responsible for protecting all their SaaS-resident application data.
A strong case can be made that shoring up defenses requires “automating out” the weakest link – i.e., humans – from any cloud that companies are entrusting with their data. This applies to their internal, on-premises clouds as well as to the external cloud vendors that they choose to engage with.
In “automating out the weak link,” the ability of superusers or IT administrators – or of bad actors who have gained access to valid admin credentials – to manually interfere with sensitive data becomes non-existent, because human interaction is eliminated.
Trust no one
The zero-trust model, which has gained favor in recent years among many cloud vendors, serves as a starting point for making this happen.
The zero-trust security framework challenges the idea of trust in any form, whether that’s trust of networks, trust between host and applications, or even trust of super users or administrators. The best way to secure a network, according to the zero trust framework, is to assume absolutely no level of trust.
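As a toy illustration of "assume absolutely no level of trust", the sketch below re-verifies a signed token on every request using only the Python standard library. The secret, claims, and token format are made up for demonstration and are far simpler than a production scheme such as JWT:

```python
# Zero-trust sketch: no request is trusted implicitly; every call
# re-verifies a signed token before any data is returned.
import base64, hashlib, hmac, json

SECRET = b"demo-secret-key"  # assumption: in practice, fetched from a secrets manager

def sign(claims):
    """Issue a token: base64 payload plus an HMAC over it."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + mac

def verify(token):
    """Verify on every request; any mismatch means no access."""
    try:
        payload, mac = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None  # forged or tampered: trust nothing
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign({"sub": "admin", "scope": "read"})
```

The point is not the cryptography but the posture: even a superuser's request is rejected unless it verifies, every single time.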
For some time, the public cloud has actually been able to offer more protection than traditional on-site environments. Dedicated expert teams ensure that cloud servers, for example, maintain an optimal security posture against external threats.
But that level of security comes at a price. Those same extended teams increase insider exposure to private data—which leads to a higher risk of an insider data breach and can complicate compliance efforts.
Recent developments in data security technology—in chips, software, and the cloud infrastructure—are changing that. New security capabilities transform the public cloud into a trusted, data-secure environment by effectively locking down data access against insiders and external attackers alike.
This eliminates the last security roadblock to full cloud migration for even the most sensitive data and applications. Leveraging this confidential cloud, organizations for the first time can now exclusively own their data, workloads, and applications—wherever they work.
Even some of the most security-conscious organizations in the world are now seeing the confidential cloud as the safest option for the storage, processing, and management of their data. The attraction to the confidential cloud is based on the promise of exclusive data control and hardware-grade minimization of data risk.
What is the confidential cloud?
Over the last year, there’s been a great deal of talk about confidential computing—including secure enclaves or TEEs (Trusted Execution Environments). These are now available in servers built on technologies such as AWS Nitro Enclaves, Intel SGX (Software Guard Extensions), and AMD SEV (Secure Encrypted Virtualization).
Today, rapid digitalization has placed a significant burden on software developers supporting remote business operations. Developers are facing continuous pressure to push out software at high velocity. As a result, security is continuously overlooked, as it doesn’t fit into existing development workflows.
The way we build software is increasingly automated and integrated. CI/CD pipelines have become the backbone of modern DevOps environments and a crucial component of most software companies’ operations. CI/CD has the ability to automate secure software development with scheduled updates and built-in security checks.
Developers can build code, run tests, and deploy new versions of software swiftly and securely. While this approach is efficient, major data breaches have demonstrated a significant and growing risk to the CI/CD pipeline in recent months.
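A minimal sketch of a built-in security check: a gate a pipeline could run before deploying, failing the build when a pinned dependency appears on a known-vulnerable list. The package names, versions, and vulnerability list here are entirely invented for illustration:

```python
# Hypothetical CI/CD security gate: refuse to ship a build whose
# pinned dependencies include a known-vulnerable version.

# Invented (package, version) pairs standing in for an advisory feed.
VULNERABLE = {("examplelib", "1.2.0"), ("oldcrypto", "0.9.1")}

def parse_requirements(text):
    """Extract (name, version) pins from requirements-style text."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.lower(), version))
    return pins

def gate(requirements_text):
    """Return vulnerable pins; an empty list means the step may proceed."""
    return [p for p in parse_requirements(requirements_text)
            if p in VULNERABLE]

reqs = """
requests==2.31.0
examplelib==1.2.0
"""
```

Wired into the pipeline as a required step, a non-empty result fails the build, so the check cannot be skipped under deadline pressure.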
New research suggests the overall state of cloud security continues to improve at a time when more organizations rely on multiple cloud service providers.
A survey of 1,900 security and IT professionals published this week by the Cloud Security Alliance (CSA) in collaboration with AlgoSec, a provider of network security tools, finds only 11% of respondents said they encountered a cloud security incident in the past year. The most common problems encountered were issues with a specific cloud provider (26%), security misconfigurations (22%) and attacks such as denial-of-service exploits (20%).
When asked about the impact of the cloud outages, more than a quarter of respondents said it took more than half a day to recover.
Despite growing confidence in cloud platforms, however, security remains a major area of focus. Top areas of concern include network security (58%), lack of cloud expertise (47%), migrating workloads to the cloud (44%) and insufficient staff to manage cloud environments (32%). In all, 79% of respondents noted some kind of issue involving IT staffing.
In the report, 52% of respondents reported they employed cloud-native tools to manage security as part of their application orchestration process, with half (50%) using orchestration and configuration management tools such as Ansible, Chef and Puppet. Less than a third (29%) said they used manual processes to manage cloud security.
Less clear, though, is who within the IT organization is responsible for cloud security. More than a third (35%) said their security operations team managed cloud security, followed by the cloud team (18%) and IT operations (16%). Other teams, such as network operations, DevOps and application owners, are all below 10%, the survey found.
The pandemic and lockdowns hit their first anniversary mark, and many companies continue to have their employees work from home for the foreseeable future. Over the past year, organizations have seen how important cloud computing is to business operations.
In fact, according to a MariaDB survey, 40% of respondents said that COVID-19 accelerated their migration to cloud, and IDC found that while cloud spending increased slightly during the early months of the pandemic, other IT-related spending decreased.
If nothing else, 2020 showed organizations the advantages of cloud services. Of course, with more cloud use, there is more cloud risk. With almost all cloud teams working remotely, there has been an uptick in security vulnerabilities and a concern that there are ongoing cloud security issues that have yet to be discovered. Organizations are migrating so quickly to the cloud that security is an afterthought, and that has consequences.
Instead, a new Deloitte study recommended, this move to the cloud should work with cybersecurity as a differentiator to gain consumer trust. “An integrated cloud cyber strategy enables organizations to use security in their transformation in a way that promotes greater consumer trust, especially in today’s digital age,” the report stated. Any migration to the cloud should take a security-first approach.
Why Security First?
With an integrated, security-by-design cloud cybersecurity strategy, organizations can use security in digital transformation as a driver rather than as an afterthought, said Bhavin Barot, a Deloitte risk and financial advisory principal in the cyber and strategic risk practice, in an email interview. Leveraging secure design principles during a digital transformation or cloud migration helps organizations in the following ways, Barot added:
Incorporating leading-edge, innovative approaches such as intelligent threat detection.
Reducing risks related to technology, insider threats and the supply chain.
Elevating the DevSecOps posture for developers and engineers, and
Establishing a cyber-forward approach that reinforces business objectives, enabling security principles such as zero trust.
Jean Le Bouthillier, CEO of Canadian data security startup Qohash, says that organizations have had many issues with solutions that generate large volumes of data that is often neither relevant nor actionable.
“My first piece of advice for organizations looking for the right data security solutions would be to consider whether they provide valuable metrics and information for reducing enterprise data risks. It sounds obvious, but you’d be surprised at the irrelevance and noisiness of some leading solutions — a problem that is becoming a nightmare with data volumes and velocity multiplying,” he told Help Net Security.
They should also analyze the pricing model of solutions and ensure that they are not presenting an unwelcome dilemma.
“If the pricing model for protecting your data is volume-adjusted, it will mean that over time, as data volumes increase, you’ll be tempted to reduce the scope of your protection to avoid cost overruns,” he noted. Such a situation should ideally be avoided.
Another important point: consider returning to basics and ensuring that you have a solid data classification policy and the means to automate it.
“Data classification is the fundamental root of any data security governance because it provides clarity and authority to support standards and other programs like user awareness efforts. In the context of data governance, data visibility and, ultimately, data-centric controls can’t work without data classification,” he explained.
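To illustrate what automating data classification might look like at its simplest, here is a hypothetical Python sketch that tags text with the most sensitive label whose pattern it matches. The labels and regex rules are invented examples, not a complete policy:

```python
# Toy automated data classifier: ordered rules, most sensitive first,
# so a document gets the strictest label that applies.
import re

RULES = [
    ("restricted",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),       # SSN-like
    ("confidential", re.compile(r"\b\d{16}\b")),                  # card-like
    ("internal",     re.compile(r"(?i)\binternal use only\b")),
]

def classify(text):
    """Return the first (strictest) matching label, else 'public'."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "public"
```

Once every document carries a label like this, downstream controls (access rules, retention, DLP) have something concrete to enforce against.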
“Think back on the millions of dollars spent on artificial intelligence projects that didn’t result in operational capabilities because little attention was paid to data quality, and accept that data protection projects – like any other ambitious project – can’t succeed without rock-solid foundations.”
The secret to resolving compliance and security issues before they escalate into costly audit penalties is to proactively add an automated compliance and security management system in the cloud environment. This way your company can take advantage of all the security benefits offered by the cloud provider while also managing other security aspects critical to your company’s operations, and gain an audit trail that can be used to help verify compliance.
In short, your company needs the means to detect specific issues and correct them prior to an official compliance certification audit. The top areas that auditors check are all centered on data access. That’s understandable given that Gartner predicts that “by 2023, 75% of security failures will result from inadequate management of identities, access, and privileges, up from 50% in 2020.”
Cloud security automation can scale along with your workloads in cloud environments and correct compliance issues and security vulnerabilities as they occur. Your company should consider the following when selecting an Identity Access Management (IAM) product to use in cloud environments to automate corrections and ensure compliance.
More easily visualize the current IAM posture and get alerts about excessive permissions
Get proof of regulatory compliance and data hygiene along with verification that relevant assets can only be accessed from specific areas in the application
Monitor any changes in the application that require updates in its security policy
If needed, create a new security policy that reflects the needs of each cloud-based asset
Deploy easily in both the pre-production and production environments
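As an example of the kind of automated check an IAM product performs for the first item above, the sketch below flags policy statements that grant wildcard actions or resources. The statement shape follows the AWS JSON policy format, but the policy content and the checker itself are hypothetical:

```python
# Hypothetical excessive-permissions scan over an AWS-style policy:
# flag Allow statements with wildcard actions or a wildcard resource.

def excessive_statements(policy):
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
```

The narrowly scoped first statement passes; the `s3:*` on `*` statement is exactly the excessive grant an automated remediation tool would tighten or alert on.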
In this article, we’ll outline the key areas you should consider if you want to keep your serverless architecture secure. While the solution that best fits your own ecosystem will be unique to you, the following will serve as strong foundations upon which to build your approach.
The sheer number of organizations moving to the cloud is staggering: we’re seeing three to five years’ worth of business transformation happening in just months due to the pandemic. As cloud-enabled digital transformation continues to accelerate, there are a variety of concerns.
For example, the visibility of data. Organizations (and users) must assess what controls cloud services providers offer in order to understand the security risks and challenges. If data is stored unencrypted, that implies significant additional risk in a multi-tenant environment. Or what about the ability of security models to mimic dynamic behavior? Many anomaly detection and predictive “risk-scoring” algorithms look for abnormal user behavior to help identify security threats. With the sudden and dramatic shift to remote work last year, most models require significant adjustments and adaptation.
Normally, companies begin exploring the move to a cloud service provider with a detailed risk analysis assessment. This often involves examining assets, potential vulnerabilities, exploitation probabilities, anticipated breach-driven outcomes, and an in-depth evaluation of vendors’ capacity to effectively manage a hybrid solution (including authentication services, authorization, access controls, encryption capabilities, logging, incident response, reliability and uptime, etc.).
AWS offers multiple services around logging and monitoring. For example, you have almost certainly heard of CloudTrail and CloudWatch, but they are just the tip of the iceberg.
CloudWatch Logs is the default logging service for many AWS resources (like EC2, RDS, etc.): it captures application events and error logs, and lets you monitor and troubleshoot application performance. CloudTrail, on the other hand, works at a lower level, recording the API calls made to various AWS services.
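To make CloudTrail's output concrete, the sketch below parses a trimmed, made-up CloudTrail record using only the Python standard library; the field names (eventName, eventSource, userIdentity) follow CloudTrail's documented log format, but the record content is invented:

```python
# Each CloudTrail log file is a JSON document with a "Records" array,
# one entry per API call. This pulls out who did what, where.
import json

SAMPLE = '''
{"Records": [{
    "eventTime": "2021-06-01T12:00:00Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "userIdentity": {"type": "IAMUser", "userName": "alice"}
}]}
'''

def summarize(records_json):
    """Return (eventName, eventSource, userName) per recorded API call."""
    data = json.loads(records_json)
    return [
        (r["eventName"], r["eventSource"],
         r["userIdentity"].get("userName", "?"))
        for r in data["Records"]
    ]
```

A security logging platform is largely a matter of collecting these records centrally and alerting on summaries like this one (an IAM `CreateUser` call is a classic event to watch).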
Although listing (and describing) all services made available by AWS is out of scope for this blog post, there are a few brilliant resources which tackle this exact problem:
“Logging in the Cloud: From Zero to (Incident Response) Hero” is the annotated slide deck (131 pages!) of a good talk delivered at RSA 2020 by the Secureworks team, which tries to answer questions like “What Should I Be Logging?”, “How Specifically Should I Configure It?”, and “What Should I Be Monitoring?”. It is especially interesting since it covers not only AWS, but also GCP and Azure.
“Overview of AWS Logs” lists the main AWS logging sources with a summary table, the log format, an example and a Grok regex to parse each log and ingest it into a tool like Elastic Stack (ELK).
In the remainder of this section, I’ll provide a summary of the main services we will need to design our security logging platform. Before doing so, though, it might be helpful to have a high-level overview of how these services communicate (special thanks to Scott Piper for the original idea).