InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
Today, rapid digitalization has placed a significant burden on software developers supporting remote business operations. Developers face continuous pressure to push out software at high velocity, and as a result security is often overlooked because it doesn’t fit into existing development workflows.
The way we build software is increasingly automated and integrated. CI/CD pipelines have become the backbone of modern DevOps environments and a crucial component of most software companies’ operations. CI/CD can automate secure software development with scheduled updates and built-in security checks.
Developers can build code, run tests, and deploy new versions of software swiftly and securely. While this approach is efficient, major data breaches have demonstrated a significant and growing risk to the CI/CD pipeline in recent months.
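One way a built-in security check can gate a pipeline is to fail the build when a scanner reports findings above a severity threshold. The sketch below illustrates the idea; the findings format and severity names are hypothetical, not taken from any particular tool:

```python
import json

# Hypothetical severity ordering; real scanners define their own scales.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def security_gate(findings, fail_at="high"):
    """Return True if the build may proceed, False if it should fail.

    findings: list of dicts like {"id": "...", "severity": "high"},
    as might be parsed from a scanner's JSON report.
    fail_at: minimum severity that blocks the pipeline.
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return not blocking

# Example report a CI step might feed in (invented finding IDs):
report = json.loads('[{"id": "SQLI-1", "severity": "critical"},'
                    ' {"id": "LOG-2", "severity": "low"}]')
print("pass" if security_gate(report) else "fail")
```

A pipeline step would run a script like this after the scan stage and use its exit status to stop the deployment, which is what turns a report into an enforced check.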
The fundamentals of a formal, effective application security plan start with business objectives, tools, processes and, above all, data; the primary driver for securing applications is protecting that data.
While it is important to surgically address the insecurities in a mission-critical application, it is equally important to continuously upskill the development and security teams and to create a culture where security is not treated as simply a ‘check-the-box’ item.
According to Setu Kulkarni, vice president of strategy at WhiteHat Security, the first step is to identify the right inflection points for injecting application security.
“CISOs need to recognize that no SDLC is built the same and no application is at the same level of maturity within its life cycle,” he said. “We have learned that testing applications continuously in production is critical to identify the real, exploitable vulnerabilities that create the maximum risk of being breached in production.”
Kulkarni noted one way to (almost always) ensure that security does not become an afterthought is to “top & tail” – in other words, make sure your team gets a voice when the exit criteria are being defined during the requirements phase, and make sure the team is testing in pre-production and production.
“Everything in between is really a negotiation based on the maturity of the SDLC and the application itself. The most consequential best practice is to ensure that the Dev, Sec and Ops teams get accurate and actionable insight from the AppSec tests that are executed,” he said. “After all, the only way to eventually have security operate at the speed of DevOps is through some level of automation, and the efficacy of automation is directly proportional to the accuracy of the data used to drive the automation.”
Doug Dooley, COO of Data Theorem, pointed out that the business driver for AppSec is about privacy, trust and reputation that is directly tied to the brand of those who build and publish the applications.
He noted that traditional AppSec testing focused on static application security testing (SAST) and dynamic application security testing (DAST).
“However, with a more modern application stack, AppSec programs are starting to factor in third-party risks introduced by open source and software development kits, covered by software composition analysis,” Dooley explained.
Further, cloud-native applications make infrastructure services just another software extension of the application buildout, so many AppSec programs increasingly add cloud security tools, such as cloud security posture management (CSPM).
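At its core, software composition analysis is a comparison of the dependency manifest against a database of known-vulnerable versions. The toy sketch below shows the idea; the package names, versions, and advisory data are all invented for illustration:

```python
# Hypothetical advisory database: package -> versions known to be vulnerable.
ADVISORIES = {
    "leftpadx": {"1.0.0", "1.0.1"},   # invented package and versions
    "fastjsonx": {"2.3.4"},
}

def scan_dependencies(manifest):
    """Return (package, version) pairs that match a known advisory.

    manifest: dict of package name -> pinned version,
    as a lock file or requirements file would list them.
    """
    return [(pkg, ver) for pkg, ver in manifest.items()
            if ver in ADVISORIES.get(pkg, set())]

deps = {"leftpadx": "1.0.1", "requestsx": "9.9.9"}
for pkg, ver in scan_dependencies(deps):
    print(f"vulnerable dependency: {pkg}=={ver}")
```

Real SCA tools add version-range matching, transitive dependency resolution, and live advisory feeds, but the matching step above is the heart of the technique.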
Now, on to DevSecOps culture: what kind of culture allows DevSecOps to thrive?
An important aspect is having a better understanding of the motivators and detractors within each element. I won’t review those here because they are covered well in this article: https://www.stackrox.com/post/2021/02/devops-vs-devsecops-heres-how-they-fit-together/ But I will say that this topic brings to mind the Stephen Covey quote, “Seek first to understand, then to be understood.”
Another cultural aspect is that the model requires people to “fail fast.” Failure must be allowed. It’s not the kind of failure that leads to company ruin, though it may be personally embarrassing. You know that main network cable that leads to your office, the one with the sign “Do not unplug!”? I’m the guy who accidentally unplugged it. I’m also the guy who returned a laptop without an RMA: it got lost on the return trip, so we never got our money back. I’m the one who worded something poorly in a policy and was glad that someone caught it before it went out. People make mistakes. Allowing failure will actually encourage people to fail, born of the idea that the more they do, the more they’ll fail AND succeed.
The culture also has to engender the attitude of “a failure is an event, not a person.” As I referenced earlier – we’re not talking about allowing failures that destroy companies and reputations; I’m talking about failures such as “Oops! I deleted that section of code because I didn’t think it was needed. I’ll get it back ASAP.”
This kind of culture leads, not to diminishing returns, but to cohesion in the team and growth in technical acumen. Do those failures get pointed out and documented? Of course – the team doesn’t really want to spend another 4 hours on another night correcting the same mistake. The person doesn’t get called out, but the failure gets pointed out.
The collaborators must be able to expose a vulnerability, have it prioritised, and get it fixed. No naming and shaming, because the goal is not a person’s desire never to fail, but to provide a secure and well-working product.
DevSecOps culture also lends itself to letting those doing the work determine what works best for them, which empowers them to be better professionals. Over time, the team notices patterns in failures and successes, and knows best what product or service would overcome those failures and automate successes.
There needs to be ample maker time, so DevSecOps needs to be free from an interrupt-driven culture. There’s a creative aspect to DevSecOps that requires time to think. Anyone in the arts knows about the sliding scale of concentration (though they may not call it that). On one end is complete focus on a task, but this extreme removes the emotional element. On the other end is pure emotion, but this extreme leaves out the technical part. Toward the middle is the proper mindset: the free-thinking, open sensitivity that creativity requires, combined with respect for the boundaries of techniques and protocols set by business requirements, customer demand and so on.
Perhaps, for whatever reason (e.g., current management attitude, a change in leadership), you aren’t currently part of a corporate culture in which DevSecOps can thrive. You could work on creating a subculture. You might have a co-worker with whom you can work to make improvements without negatively impacting the current speed of production. Or you may have some leeway to introduce a tool that can help slightly.
Technology changes frequently, and those making things happen need to stay up-to-date with training. Embrace it, incorporate it as part of the incentives, make it part of the day, make it happen.
DevSecOps teams are people, and they need rest. There’s only so much, and only so fast, that people can work, and that’s why we use technology. In DevSecOps, technology does not replace people, but enables them to perform their various duties at the speed of light.
Metrics have to be as concrete as possible. How does management determine whether personnel are doing things right and doing the right things? Judging success by hearsay and feeling is never a rational metric.
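One concrete metric a team could track is mean time to remediate: how long findings stay open between discovery and fix. The sketch below computes it from open/close dates; the finding IDs and dates are purely illustrative:

```python
from datetime import date

def mean_time_to_remediate(findings):
    """Average number of days between a finding being opened and closed."""
    durations = [(f["closed"] - f["opened"]).days for f in findings]
    return sum(durations) / len(durations)

# Illustrative findings, as might be exported from a ticketing system.
findings = [
    {"id": "VULN-1", "opened": date(2021, 1, 4), "closed": date(2021, 1, 11)},
    {"id": "VULN-2", "opened": date(2021, 1, 6), "closed": date(2021, 1, 9)},
]
print(f"MTTR: {mean_time_to_remediate(findings):.1f} days")  # → MTTR: 5.0 days
```

A number like this is concrete, trendable over time, and immune to hearsay, which is exactly what makes it usable as a management metric.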
Regardless of what else it’s called – DevOps with a security focus, DevOpsSec, Secure DevOps – the end result is to have Development, Operations, and Security work together to iteratively create a good and secure product that is delivered timely. When the culture adopts these elements, DevSecOps will flourish.
Research from Secure Code Warrior has revealed an attitudinal shift in the software development industry, with organizations bucking traditional practices for DevOps and Secure DevOps.
The global survey of professional developers and their managers found 70% of organizations recognize the importance of secure coding practices, with results indicating an industry-wide shift from reaction to prevention is underway.
Dr. Matias Madou, CTO at Secure Code Warrior, said, “We are seeing a fundamental shift in mindsets across the world, as the industry slowly moves from reactive, band-aid solutions rolled out after a breach, to the proactive and human-led practice of writing quality software that is intrinsically free from vulnerabilities right from the very first keystroke.”
“This research shows that ‘secure code’ is becoming synonymous with ‘quality code’ within software development, and security is becoming the responsibility of development teams and leaders—not just AppSec professionals,” he said.
Secure coding practices
Reactive practices like using tools on deployed applications and manually reviewing code for vulnerabilities were the top two practices respondents associated with coding securely.
However, a proactive shift in mindset was evidenced across the globe, with 55% of the developers surveyed also recognising secure coding as the active, ongoing practice of writing software protected from vulnerabilities.
The Spectre vulnerability, which stems from vulnerabilities at the CPU design level, has been known for over 3 years now. What’s so interesting about this PoC is that its feasibility for leaking the end-user’s data has now been proven for web applications, meaning that it’s no longer just theoretical.
The vulnerability in affected CPUs has to do with speculative execution, which in certain situations can leave behind observable side-effects and result in data leakage to the attacker. All the attacker needs is a way to execute exploit code in the same executing context as other JavaScript handling sensitive data.
The attacker could use the web supply chain, for instance, presenting itself as a useful library so that victims voluntarily add it to their webpages, or deliberately compromise a third-party library as a way to attack websites that use it. Another vehicle would be to find an injection vulnerability on the website and combine that with the Spectre exploit.
Regardless of the method, the list of victims would be long, as Spectre exploits the JavaScript engines of browsers across several different operating systems, processor architectures, and hardware generations.
It was a pirated and malware-tainted version of Apple’s XCode development app that worked in a devious way.
You may be wondering, as we did back in 2015, why anyone would download and use a pirated version of Xcode.app when the official version is available as a free download anyway.
Nevertheless, this redistributed version of Xcode seems to have been popular in China at the time – perhaps simply because it was easier to acquire the “product”, which is a multi-gigabyte download, directly from fast servers inside China.
The hacked version of Xcode would add malware into iOS apps when they were compiled on an infected system, without infecting the source code of the app itself.
The implanted malware was buried in places that looked like Apple-supplied library code, with the result that Apple let many of these booby-trapped apps into the App Store, presumably because the components compiled from the vendor’s own source code were fine.
As we said at the time, “developers with sloppy security practices, such as using illegally-acquired software of unvetted origin for production builds, turned into iOS malware generation factories for the crooks behind XcodeGhost.”
As you probably know, this sort of security problem is now commonly known as a supply chain attack, in which a product or service that you assumed you could trust turned out to have had malware inserted along the way.
AI and ML technologies have made great strides in helping organizations with cybersecurity, as well as with other tasks like chatbots that help with customer service.
Cybercriminals have also made great strides in using AI and ML for fraud.
“Today, fraud can happen without stealing someone else’s identity because fraudsters can create ‘synthetic identities’ with fake, personally identifiable information (PII),” explained Rick Song, co-founder and CEO of Persona, in an email interview. And fraudsters are leveraging new tricks, using the latest technologies, that allow them to slip past security systems and do things like open accounts where they rack up untraceable debt, steal Bitcoin holdings without detection, or simply redirect authentic purchases to a new address.
Some increasingly popular fraud tricks using AI and ML include:
Deepfakes that mimic live selfies in an attempt to circumvent security systems
Replicating a template across a dozen or more accounts to create fake IDs (these often use celebrity photos and their public data)
Mimicking the voice of high-level officials and corporate executives to extort personal information and money
“With this pace of evolution, companies are left at risk of holding the bag — they are not only losing money directly through things like loans and fees they can’t recoup and any restitution to impacted customers, but they’re also losing trust and credibility. Fraud costs the global economy over $5 trillion every year, but the reputational costs are hard to quantify,” said Song.
“Application security was traditionally very low on CISOs’ priority list but, as the attacks targeting applications increase in frequency, it’s getting more attention,” Eugene Dzihanau, Senior Director of Technology Solutions at EPAM Systems, told Help Net Security.
“The application layer is quickly becoming more exposed to the outside world, drastically increasing the attack surface. Applications are deployed on the public cloud, mobile phones and IoT devices. Also, applications process a lot more data than before, making them a more frequent target of an attack.”
In addition, modern applications and tech stacks are evolving and becoming increasingly complex: applications are integrating more external dependencies and are becoming highly interconnected through API calls. This increased complexity significantly increases the chance of security issues.
“SAST scan results are massive, with very little insight into prioritizing fixes for critical or exploitable vulnerabilities. DAST rarely brings desired results without additional steps; the out of the box crawlers can rarely traverse the modern web applications,” he explained.
“This leaves glaring gaps in the security of deployment pipelines, security defects on the architecture level and third party/open source dependencies checks.”
If you’ve ever used the Python programming language, or installed software written in Python, you’ve probably used PyPI, even if you didn’t realize it at the time.
PyPI is short for the Python Package Index, and it currently contains just under 300,000 open source add-on modules (290,614 of them when we checked [2021-03-07T00:10Z]).
You can download and install any of these modules automatically just by issuing a command such as pip install [nameofpackage], or by letting a software installer fetch the missing components for you.
Crooks sometimes Trojanise the repository of a legitimate project, typically by guessing or cracking the password of a package owner’s account, or by helpfully but dishonestly offering to “assist” with a project that the original owner no longer has time to look after.
Once the fake version is uploaded to the genuine repository, users of the now-hacked package automatically get infected as soon as they update to the new version, which works just as it did before, except that it includes hidden malware for the crooks to exploit.
Another trick involves creating Trojanised public versions of private packages that the attacker knows are used internally by a software company.
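One defensive habit against trojanised packages is to pin the checksum of an artifact when it is first vetted and verify it on every subsequent fetch, so a silently replaced upload no longer matches. A minimal sketch using SHA-256 (the file name and hash in the usage comment are placeholders):

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_hex):
    """True only if the file matches the pinned, previously-vetted hash."""
    return sha256_of(path) == expected_hex

# Usage (placeholder values): pin the hash at vetting time, check on every fetch.
# ok = verify_artifact("somepkg-1.2.3.tar.gz", "e3b0c442...")
```

pip supports this idea natively: a requirements file can pin `--hash=sha256:...` entries per package, and installs in hash-checking mode fail if any downloaded file doesn't match.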
Penetration testing is a method many companies use to reduce their exposure to security breaches. It is a controlled exercise: you hire a professional to try to hack your system and show you the loopholes you should fix.
Before doing a penetration test, it is mandatory to have an agreement that explicitly mentions the following parameters:
the timing of the penetration test,
the IP source of the attack, and
the penetration fields (scope) of the system.
Penetration testing is conducted by professional ethical hackers who mainly use commercial and open-source tools, automated tools and manual checks. There are no restrictions; the most important objective is to uncover as many security flaws as possible.
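As a minimal illustration of the kind of automated check a tester might run within the agreed scope, here is a simple TCP connect scan of a handful of ports. The target address in the usage comment is a placeholder; only ever scan systems you are authorised to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Usage (placeholder target, agreed upon in the testing contract):
# print(scan_ports("203.0.113.10", [22, 80, 443]))
```

Real engagements use far more capable tools, but even those start from this kind of reconnaissance: enumerate what is reachable, then probe what answers.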
As just one symptom, 83 percent of the Top 30 U.S. retailers have vulnerabilities which pose an “imminent” cyber-threat, including Amazon, Costco, Kroger and Walmart.
2020 is shaping up to be a banner year for software vulnerabilities, leaving security professionals drowning in a veritable sea of patching, reporting and looming attacks, many of which they can’t even see.
A trio of recent reports tracking software vulnerabilities over the past year underscore the challenges of patch management and keeping attacks at bay.
“Based on vulnerability data, the state of software security remains pretty dismal,” Brian Martin, vice president of vulnerability intelligence with Risk Based Security (RBS), told Threatpost.
Security researchers looked at CVE details across the Top 50 software vendors and found that since 1999, Microsoft is the hands-down leader with 6,700 reported, followed by Oracle with 5,500 and IBM with 4,600.
“New software is being released at a faster rate than old software is being deprecated or discontinued,” Comparitech’s Paul Bischoff told Threatpost. “Given that, I think more software vulnerabilities are inevitable. Most of those vulnerabilities are identified and patched before they’re ever exploited in the wild, but more zero days are inevitable as well. Zero days are a much bigger concern than vulnerabilities in general.”
The newly published Building Security in Maturity Model provides the software security basics organizations should cover to keep up with their peers.
As application security methodology and best practices have evolved over more than a decade, the Building Security in Maturity Model (BSIMM) has been there each year to track how organizations are making progress. BSIMM11, released last week by Synopsys, is based on the software security practices in place at 130 different firms across numerous industries, including financial services, software, cloud, and healthcare.
The practices were measured by the model’s proprietary yardstick, which groups 121 different software security activities into four major domains: governance, intelligence, secure software development lifecycle (SSDL) touchpoints, and deployment. Each of these domains is further broken down into three practice categories containing numerous activities that slide from simple to very mature.
Similar to previous reports, BSIMM11 shows that most organizations are at the very least hitting the basics — including activities like performing external penetration testing and instituting basic software security training across development organizations. The following are the most common activities cited for each practice category, providing an excellent yardstick for the bare minimum that organizations should be doing to keep up with their peers.
There is considerable demand for data-centric projects, which is why companies have quickly opened up their data to their ecosystems through REST or SOAP APIs.
To make sure a deleted file can’t be recovered, you’ll need to use a third-party shredding tool. Here’s a look at three such free programs: Eraser, File Shredder, and Freeraser.
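The core idea behind such shredding tools can be sketched as overwriting the file's bytes before unlinking it. This is a simplification: journaling file systems, SSDs, and wear levelling can still retain copies, which is why dedicated tools remain the safer choice.

```python
import os

def shred(path, passes=3):
    """Overwrite a file with random bytes several times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite through OS buffers to disk
    os.remove(path)
```

A plain delete only removes the directory entry, leaving the data blocks recoverable; overwriting first is what distinguishes shredding from deletion.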