Aug 04 2022

GitHub blighted by “researcher” who created thousands of malicious projects

Category: App Security, Malware | DISC @ 10:46 am

Just over a year ago, we wrote about a “cybersecurity researcher” who posted almost 4000 pointlessly poisoned Python packages to the popular repository PyPI.

This person went by the curious nickname Remind Supply Chain Risks, and the packages had project names generally similar to well-known projects, presumably in the hope that some of them would get installed by mistake thanks to slightly incorrect search terms or minor typos when entering PyPI URLs.

These pointless packages weren’t overtly malicious, but they did call home to a server hosted in Japan, presumably so that the perpetrator could collect statistics on this “experiment” and write it up while pretending it counted as science.
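As a rough illustration of the naming trick at work here, the sketch below (purely hypothetical: not the researcher's tooling, and the shortlist of popular packages is invented for the example) shows how a build step might flag dependencies whose names are suspiciously close to well-known PyPI projects:

```python
# Minimal sketch of a near-name ("typosquat") check for a dependency list.
# The popular-package shortlist is a stand-in; a real check would use a much
# larger list and smarter heuristics.
import difflib

POPULAR_PACKAGES = ["requests", "urllib3", "numpy", "pandas", "cryptography"]

def suspicious_names(dependencies, threshold=0.8):
    """Return (dependency, lookalike) pairs that are similar but not identical."""
    flagged = []
    for dep in dependencies:
        for known in POPULAR_PACKAGES:
            if dep.lower() == known:
                continue  # an exact match is the real package, not a lookalike
            ratio = difflib.SequenceMatcher(None, dep.lower(), known).ratio()
            if ratio >= threshold:
                flagged.append((dep, known))
    return flagged

if __name__ == "__main__":
    # "requestss" and "pandsa" are one keystroke away from real packages.
    print(suspicious_names(["requestss", "pandsa", "flask"]))
```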

A month after that, we wrote about a PhD student (who should have known better) and their supervisor (who is apparently an Assistant Professor of Computer Science at a US university, and very definitely should have known better) who went out of their way to introduce numerous apparently legitimate but not-strictly-needed patches into the Linux kernel.

They called these patches hypocrite commits, and the idea was to show that two peculiar patches submitted at different times could, in theory, be combined later on to introduce a security hole, effectively each contributing a sort of “half-vulnerability” that wouldn’t be spotted as a bug on its own.
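To make the "half-vulnerability" idea concrete, here is a purely illustrative sketch (written in Python for brevity; the real hypocrite commits were C kernel patches) of how two changes that each look reasonable on their own can combine into an unguarded code path:

```python
# Illustrative only: two individually plausible changes that combine into a bug.

ACCOUNTS = {"alice": 100, "bob": 50}

# Original helper: defends itself against bad input.
def get_balance(user):
    if user not in ACCOUNTS:
        raise KeyError(f"unknown user: {user}")
    return ACCOUNTS[user]

# "Patch 1": drop the check, arguing that every existing caller already
# validates the user name. Reads like harmless cleanup in isolation.
def get_balance_patched(user):
    return ACCOUNTS[user]

# "Patch 2", submitted separately later: add a caller that passes raw input
# straight through, arguing that the helper validates it. Also looks fine alone.
def handle_request(raw_user):
    return get_balance_patched(raw_user.strip())

# Only when both patches land does unvalidated input reach the unguarded
# lookup -- each change is the kind of "half-vulnerability" described above.
```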

As you can imagine, the Linux kernel team did not take kindly to being experimented on in this way without permission, not least because they were faced with cleaning up the mess:

Please stop submitting known-invalid patches. Your professor is playing around with the review process in order to achieve a paper in some strange and bizarre way. This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university…

GitHub splattered with hostile code

Accelerate DevOps with GitHub: Enhance Software Delivery Performance with GitHub Issues, Projects, Actions, and Advanced Security

Tags: DevOps, DevSecOps, malicious projects


Apr 08 2022

Developers Remediate Less Than a Third of Vulnerabilities

Category: Security vulnerabilities | DISC @ 8:28 am

Developers are regularly ignoring security issues as they deal with an onslaught of issues from security teams, even as they are expected to release software more frequently and faster than ever before.

In addition, developers fix just 32% of known vulnerabilities, and 42% of developers push vulnerable code once per month, according to Tromzo’s Voice of the Modern Developer Report.

The report, based on a survey of more than 400 U.S.-based developers who work at organizations where they currently have CI/CD tools in place, also found a third of respondents think developers and security are siloed.

Tromzo CTO and co-founder Harshit Chitalia pointed out that the top security incidents of the past few years (Log4j, SolarWinds, Codecov) have all involved the software supply chain.

“This has made AppSec an urgent and top priority for CISOs worldwide,” he said. “In addition, everything as code with Kubernetes, Terraform and so on have made all parts of the development stack part of AppSec.”

From his perspective, the only way this large attack surface can be covered is with security and development teams working hand in hand to secure the application at every step of the development cycle.

He added that developers ignoring security issues is one of the fundamental frustrations AppSec engineers face.

“Security teams put their blood, sweat and tears into finding different vulnerabilities in code through orchestrating scanners and manual testing,” he said. “After all the work, seeing the issue on Jira queue for months is disappointing and quite frustrating.”

Fighting Friction

On the other hand, he pointed to developers who are now asked not only to develop features and fix bugs but also to look after the DevOps, performance and security of their applications.

“This leads to friction in priorities and, if unresolved, leads to unhappy employees,” he said. “The C-suite is very much aware of this problem, but they are stuck with security tools which are not created for developers. As application security is going through a big transformation, we believe the tooling will also shift.”

He explained there were several concerning findings from the survey but that two, in particular, stood out.

The first thing Chitalia found deeply concerning was the fact that 62% of developers are using 11 or more application security tools.

He said application security has evolved in recent years with AppSec teams now responsible for source-code analysis, DAST, bug bounty, dependency, secrets scanning, cloud scanning and language-specific scanners.

“This means developers are constantly fed information from these tools without any context and they have to triage and prioritize the workload these tools generate for them,” he said. 
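A minimal sketch of the kind of aggregation layer that can add that missing context is shown below (the normalized finding format and tool names are invented for the example): it de-duplicates the same issue reported by several scanners and orders what remains by severity.

```python
# Hypothetical triage step: merge findings from multiple scanners, collapse
# duplicates of the same issue, and sort by severity so developers see the
# most important items first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """findings: dicts with 'tool', 'rule', 'file', 'line', 'severity' keys."""
    deduped = {}
    for f in findings:
        key = (f["rule"], f["file"], f["line"])  # same issue, whichever tool found it
        if key in deduped:
            deduped[key]["tools"].add(f["tool"])
        else:
            deduped[key] = {**f, "tools": {f["tool"]}}
    return sorted(deduped.values(), key=lambda f: SEVERITY_RANK.get(f["severity"], 99))

if __name__ == "__main__":
    raw = [
        {"tool": "sast", "rule": "sql-injection", "file": "db.py", "line": 42, "severity": "high"},
        {"tool": "dast", "rule": "sql-injection", "file": "db.py", "line": 42, "severity": "high"},
        {"tool": "deps", "rule": "CVE-2021-44228", "file": "pom.xml", "line": 10, "severity": "critical"},
    ]
    for finding in triage(raw):
        print(finding["severity"], finding["rule"], sorted(finding["tools"]))
```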

The second big worry was the fact that a third of vulnerabilities are noise.

“If someone told you that a third of the work you did needs to be thrown away every single day, how would you feel about that?” he asked. “But that’s the current state of application security.”

False Positives a Big Negative

Securing DevOps: Security in the Cloud

Tags: DevOps, DevSecOps, Securing DevOps


Feb 09 2022

Adding Data Privacy to DevSecOps

Category: Information Privacy | DISC @ 1:44 pm

Colorado and Virginia passed new data privacy laws in 2021. Connecticut and Oklahoma are among the states that could enact new legislation around data privacy protections in 2022. California, which kicked off the conversation around data privacy at the state level, is updating its laws. Couple that with the EU’s GDPR and other data privacy laws enacted worldwide, and it is clear that data privacy has become incredibly important within cybersecurity. And that includes within the DevSecOps process.

It’s been enough of a challenge to integrate security into the DevOps process at all, even though it is now recognized that adding security early in the SDLC can eliminate issues further along in app development and deployment. But adding data privacy? Is it really necessary? Yes, it is necessary, said Casey Bisson, head of product growth at BluBracket, in email commentary. Applications now include more and more personal data that needs protection, such as apps that rely on medical PII. Those apps must have security and privacy baked into each phase of the SDLC via DevSecOps.

“There have been far too many examples of leaks of PII within code, for instance, because many companies don’t secure their Git repositories,” said Bisson. “As more sensitive information has made its way into code, it’s natural that hackers will target code. True DevSecOps will bake privacy concerns into every stage and will make these checks automated.”
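As a sketch of what baking these checks in can look like at the simplest level, the hypothetical pre-commit style scanner below searches files for a few common credential patterns (real secret scanners cover far more patterns plus entropy analysis):

```python
# Hypothetical pre-commit check: refuse the commit if files appear to contain
# credentials. Patterns are a small illustrative subset.
import re
import sys

SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]{6,}['\"]", re.IGNORECASE),
}

def scan_file(path):
    hits = []
    with open(path, errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    hits.append((path, lineno, label))
    return hits

if __name__ == "__main__":
    findings = [hit for path in sys.argv[1:] for hit in scan_file(path)]
    for path, lineno, label in findings:
        print(f"{path}:{lineno}: possible {label}")
    sys.exit(1 if findings else 0)  # a non-zero exit blocks the commit
```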

Data in the Test Process

In DevSecOps, applications are often developed using test data. “If that data is not properly sanitized, it can be lost,” said John Bambenek, principal threat hunter at Netenrich, in an email interview. “There is also the special case of secrets management and ensuring that development processes properly secure and don’t accidentally disclose those secrets. The speed of development nowadays means that special controls need to be in place to ensure production data isn’t compromised from agile development.”

Beyond test data, real consumer data has to be considered. Ultimately, every organization has information it needs to protect, so it’s important to focus on data privacy early in development so the team working on the platform can build in the controls necessary to support the privacy requirements of the data, explained Shawn Smith, director of infrastructure at nVisium, via email. “The longer you wait to define the data relationships, the harder it is to ensure proper controls are developed to support them.”
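One concrete way to handle the test-data concern Bambenek describes is to pseudonymize production-derived records before they ever reach a fixture; the sketch below (field names are hypothetical, and a real pipeline would add a secret salt or use fully synthetic data) illustrates the idea:

```python
# Illustrative sanitizer for production-derived test data: PII fields are
# replaced with stable placeholders so records stay internally consistent
# without exposing real values. A real pipeline would salt the hash or use
# synthetic data generation instead.
import hashlib

PII_FIELDS = {"name", "email", "phone", "ssn"}

def sanitize(record):
    clean = {}
    for field, value in record.items():
        if field in PII_FIELDS and value is not None:
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            clean[field] = f"{field}_{digest}"  # same input always maps to the same placeholder
        else:
            clean[field] = value
    return clean

if __name__ == "__main__":
    prod_row = {"id": 17, "name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
    print(sanitize(prod_row))
```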

Bringing Privacy into DevSecOps

Putting a greater emphasis on privacy within DevSecOps requires two things—data privacy protocols already in place within the organization and a strong commitment to the integration of cybersecurity with data privacy. “An organization needs to start with a strong privacy program and an executive in charge of its implementation,” said Bambenek. “Especially if the data involves private information from consumers, a data protection expert should be embedded in the development process to ensure that data is used safely and that the entire development pipeline is informed with strong privacy principles.”

The DevSecOps team and leadership should have a strong understanding of the privacy laws and regulations—both overarching government rules and industry requirements. Knowing the compliance requirements that must be met offers a baseline for how data must be handled throughout the entire app development process, Smith pointed out, adding that once you have that base to build upon, the controls and steps to actually achieve the privacy levels you want will fall into place fairly easily.

Finally, Bisson advised DevSecOps professionals to shift security left and empower developers to prevent any credentials or PII from being inadvertently accessible through their code before it makes it to the cloud. “DevSecOps teams should scan code both within company repositories and outside in public repos; on GitHub, for instance. It’s so easy to clone code that these details and secrets can easily be leaked,” said Bisson.

Consumers don’t understand how or where in the development process security is added, and it’s not entirely necessary for them to understand how the sausage is made. The most important concern for them is that their sensitive data is protected at all times. For that to happen most efficiently, data privacy has to be an integral part of DevSecOps.

Understanding Privacy and Data Protection: What You Need to Know

#DevSecOps: A leader’s guide to producing secure software without compromising flow, feedback and continuous improvement

Tags: DevSecOps


Oct 07 2021

Divide Between Security, Developers Deepens

Category: App Security | DISC @ 9:16 am

Security professionals work hard to plan secure IT environments for organizations, but the developers who are tasked with implementing and carrying out these plans and procedures are often left out of security planning processes, creating a fractured relationship between development and security.

This was the conclusion from a VMware and Forrester study of 1,475 IT and security managers, including CIOs and CISOs and managers with responsibility for security strategy and decision-making.

The report found security is still perceived as a barrier in organizations, with 52% of developer respondents saying they believed that security policies are stifling their ability to drive innovation.

Only one in five (22%) developers surveyed said they strongly agree that they understand which security policies they are expected to comply with and more than a quarter (27%) of the developers surveyed are not involved at all in security policy decisions, despite many of these decisions greatly impacting their roles.

The research indicated that security needs a perception shift and should be more deeply embedded across people, processes and technologies.

This means involving developers in security planning earlier and more often; learning to speak the language of the development team rather than asking development to speak security; sharing KPIs and increasing communication to improve relationships; and automating security to improve scalability, the report recommended.

Set a Clear Scope for Security Requirements

“Regardless of whether it’s customer-facing functionality or a business logic concern, every line of code developed should prioritize security as a design feature,” he said. “Once security is taken as seriously as other drivers for DevOps adoption, then a fully holistic integration can be achieved.”

#DevSecOps: A leader’s guide to producing secure software without compromising flow, feedback and continuous improvement

Tags: DevSecOps, Software developer


Jul 12 2021

APPSEC TESTING APPROACHES

Category: App Security, Pen Test | DISC @ 1:59 pm

AppSec Testing Approach Cheat Sheet (PDF download)

5 Things a Pen Tester Looks for When Evaluating an Application

PenTest as a Service

Pentest as a Service Platform

The Web Application Hacker’s Handbook

Tags: #PenTest, AppSec, DevSecOps, PentestasaService


May 20 2021

Hiring remote software developers: How to spot the cheaters

Category: App Security | DISC @ 10:11 am

How are software development applicants cheating?

Prior to COVID-19, many companies had engineering applicants take coding skills assessments in person. On-premises testing allowed employers to control the environment and observe the applicant’s process. Now, employers are administering these assessments (and making those observations) remotely, and applicants (almost exclusively at the junior level) are gaming the platforms.

The two most common strategies are plagiarism and identity misrepresentation. In the former, applicants copy and paste code found on sites like GitHub, or they lift code from prior assessments administered by the same employer that have been published and/or sold online. (Companies that have only a few variations of a coding challenge will find, with a quick Google search, that prior test-takers have either posted it online or are offering the answers privately. They’ll even sprinkle in some minor differentiations so that it’s harder to catch.) Identity misrepresentation means asking or paying someone else to log in to the test platform and solve the test (or part of it) for the applicant.
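A minimal sketch of the kind of similarity check an assessment platform might run against previously published solutions is shown below (the normalization is deliberately crude; real systems tokenize code and handle renamed identifiers):

```python
# Hypothetical plagiarism check: normalize away comments and whitespace, then
# compare a submission against a known published solution.
import difflib
import re

def normalize(code):
    code = re.sub(r"#.*", "", code)            # drop Python-style comments
    return re.sub(r"\s+", " ", code).strip().lower()

def similarity(submission, known_solution):
    return difflib.SequenceMatcher(None, normalize(submission),
                                   normalize(known_solution)).ratio()

if __name__ == "__main__":
    known = "def add(a, b):\n    return a + b  # sum two numbers\n"
    submitted = "def add(x, y):\n    # my solution\n    return x + y\n"
    print(f"similarity: {similarity(submitted, known):.2f}")  # flag for human review above ~0.9
```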

Globally, the plagiarism rate in 2020 was 5.6%, and suspicious connectivity patterns – indicative of session handover to someone other than the applicant – appeared in 6.48% of sessions. We are seeing slight growth in the percentage of sessions with suspicious behaviors, both globally and in financial markets in particular.

Some industries have higher rates of cheating than others; for example, organizations in the government, education, and non-profit sectors can see up to double the global average for red-flag behavior. The general shortage of HR professionals with deep technical knowledge makes practically all employers vulnerable to inefficiencies and the perils of under-qualified tech candidates making it too far into the recruitment funnel. Higher rates of cheating mean that IT professionals need smarter tools to avoid mis-hires.

Addressing this problem needs to be a priority for employers looking to hire remotely on a larger scale or as a permanent practice, because the short- and long-term consequences are always more costly than whatever investments they put into preventative safeguards.

Hiring a person who cheated in the recruitment process is a recipe for disaster, both for the employer and the employee. Job seekers will typically cheat because they lack the qualifications to pass the recruitment process or, sometimes, just lack the confidence that they can succeed. In either case, if the recruitment leads to employment, the nascent working relationship is botched from day one. The lack of qualifications surfaces sooner or later, frequently damaging schedules, reliability, and security of software products and services, not to mention driving business costs up and reputation down.

More alarmingly, both common sense and academic research (Peterson et al., 2011; Schneider & Goffin, 2012) suggest that a lack of integrity has the potential to recur on the job, quite possibly leading to security breaches immensely more dangerous than software bugs. Last but not least, it is plainly emotionally difficult for many individuals to build a healthy relationship with the employer and the workplace when that relationship started with dishonesty.

Don’t Hire a Software Developer Until You Read this Book: The software survival guide for tech startups & entrepreneurs (from idea, to build, to product launch and everything in between) by K.N. Kukoyi

Tags: DevSecOps