The trust that forms the bedrock of GitHub's open-source community is facing a new challenge. In the latest attack, threat actors pose as recruiters with legitimate-looking profiles to target cybersecurity professionals and other developers. These “talent recruiters” invite their targets to collaborate on a GitHub repository that has been seeded with poisoned code. Once the target clones or executes the code, the malware spreads.
[Image: lure document used in the campaign, via bleepingcomputer.com]
“Poisoned code” is malicious or compromised code introduced into a software project. Attackers can keep it undetected by cloning a legitimate project and hiding malware in a few inconspicuous lines.
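Because a poisoned line is designed to look unremarkable, even a quick pass before running unfamiliar code can help. Below is a minimal Python sketch of that kind of triage: it walks a freshly cloned directory and flags a few common red-flag patterns (decode-and-exec, pipe-to-shell, encoded PowerShell). The patterns and the file-size limit are illustrative assumptions, and real poisoned code is often subtler, so treat this as a starting point rather than a substitute for review or dedicated tooling.

import re
import sys
from pathlib import Path

# Illustrative red-flag patterns; real poisoned code is often far subtler.
SUSPICIOUS = [re.compile(p) for p in (
    r"exec\s*\(\s*base64",        # exec of a decoded blob (Python)
    r"eval\s*\(\s*atob",          # the JavaScript equivalent
    r"curl[^\n]*\|\s*(ba)?sh",    # pipe-to-shell installers
    r"powershell[^\n]*-enc",      # encoded PowerShell commands
)]

def scan(repo: Path) -> int:
    hits = 0
    for path in repo.rglob("*"):
        # Skip directories and anything too large to be a hand-edited source file.
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in SUSPICIOUS):
                hits += 1
                print(f"{path}:{lineno}: {line.strip()[:120]}")
    return hits

if __name__ == "__main__":
    repo = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    print(f"{scan(repo)} suspicious line(s) found; review before running anything.")

A scan like this will miss anything more sophisticated than copy-paste tradecraft, which is exactly why the responsibility question below matters.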
While this recent example is attributed to the North Korean Lazarus Group, also tracked as Jade Sleet, it is just one example of a larger, troubling trend. According to an article by Dark Reading, poisoned code is growing rapidly: more than 35,000 malicious code injections, tied to a user called “Pl0xP,” were identified on GitHub just last month and rolled back after they were discovered.
Similarly, an assistant professor of computer science and a Ph.D. student introduced false patches into the Linux kernel that contained “half-vulnerabilities.” Each patch on its own escaped detection; combined with another false patch, it created a "whole" bug. They did this without considering the impact this poisoned code could have. You can read more about this here.
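To make the “half-vulnerability” idea concrete, here is a toy Python sketch; it is our own illustration, not the actual kernel patches. Each change looks harmless when reviewed on its own, and only when both land does untrusted input reach a string-built query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# "Patch 1": a query helper that splices the column name into the SQL text.
# At review time, every caller passes a hard-coded column, so nothing
# attacker-controlled ever reaches the f-string. Looks safe in isolation.
def find_users(connection, column, value):
    query = f"SELECT id FROM users WHERE {column} = ?"
    return connection.execute(query, (value,)).fetchall()

# "Patch 2", submitted later: a search feature that forwards a field name taken
# from a request. On its own it just calls an existing, widely used helper.
def search(connection, field_from_request, term):
    return find_users(connection, field_from_request, term)

print(search(conn, "name", "alice"))  # legitimate use prints [(1,)]
# Neither patch is a vulnerability by itself; combined, an attacker-chosen
# field_from_request reaches the f-string and can rewrite the query (SQL injection).

Splitting the flaw across patches is what lets this kind of change slip past reviewers who look at each contribution in isolation.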
Testing malware in open-source communities to see what will happen is not a way to win friends and influence people. It is damaging to the foundation of open-source ecosystems. There is the explicit harm of malware infecting machines and spreading, but there is also the cultural harm of diminishing the shared value of trust. Open-source projects have created and supported core capabilities of the Internet, and remain a critical source of innovation, experimentation, and collaboration.
These open-source projects rely on the goodwill and collaboration of contributors, and this trust powers knowledge-sharing in the community. The increased instances of poisoned code in open-source communities threaten to unravel this foundational principle. GitHub’s mission statement is to make it easier to “work together, to solve challenging problems, and to create the world’s most important technologies.” In the current climate, it’s a mission statement that is becoming harder to realize.
This increase in activity drives an important question: Who is responsible for ensuring that poisoned code is, at a minimum, identified and, at best, prevented from being introduced into GitHub projects? Should the community look to GitHub, to project owners, or to developers themselves for protection?
The core of this argument resembles other debates over content suitability, integrity, and value, and it applies to a wide range of media. Newspapers are responsible for the words written by their journalists and can be sued for libel. National Public Radio (NPR) handles responsibility for its content by adding a disclaimer before each program stating, “The views expressed in this program in no way reflect the views of NPR.” Facebook recently accepted responsibility for the content posted on its platform.
So, is it GitHub's responsibility to moderate the “content” in its code repositories to protect users from threat actors? Determining ownership and liability in these increasingly collaborative digital spaces is a new frontier. Simplistic arguments that GitHub should increase its security measures to better detect poisoned code ignore the actual nature and breadth of the challenge.
A more realistic and distributed approach is to improve our own skepticism and hygiene as developers. In Episode 181 of our Pwned podcast, on the latest GitHub breach, Jack Danahy, VP of Strategy/Innovation at NuHarbor Security, puts it this way: “It's always the responsibility of the last person who touches the code.” He points to the value of tools like CISA's software security self-assessment checklist and of a more robust and complete Software Bill of Materials (SBOM). Justin Fimlaid, CEO of NuHarbor Security, floats the idea of a universal stamp for approved software, a "trust mark" that could help developers identify safe software or software components.
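As one small example of putting an SBOM to work, here is a minimal sketch that checks a CycloneDX-style JSON SBOM against a locally maintained blocklist of known-bad components. The file name bom.json, the blocklist entry, and the assumption of a top-level "components" array whose entries carry "name" and "version" fields are all illustrative; in practice you would query a vulnerability database or use dedicated SBOM tooling.

import json
from pathlib import Path

# Illustrative blocklist of (package, version) pairs gathered from advisories.
BLOCKLIST = {
    ("example-package", "1.2.3"),  # made-up entry
}

def check_sbom(sbom_path: str) -> list[str]:
    # Assumes a CycloneDX-style JSON document with a top-level "components"
    # array whose entries carry "name" and "version" fields.
    bom = json.loads(Path(sbom_path).read_text())
    findings = []
    for component in bom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in BLOCKLIST:
            findings.append(f"{key[0]} {key[1]} appears on the blocklist")
    return findings

if __name__ == "__main__":
    for finding in check_sbom("bom.json"):  # assumed SBOM file name
        print(finding)

Even a simple check like this only works if the SBOM is complete and kept current, which is why a robust and complete SBOM matters in the first place.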
The answer lies somewhere between the individual and the platform but requires the commitment of both. What does a community-based software defense collective look like, and how can platform providers, developers, and security experts come together to address new measures to ensure open-source trust?
We don’t have the answer, but we’re looking forward to finding one, fast.
We’ve pulled together some resources to help you step up your security posture and cover your assets (CYA):
Key Initiatives to Join
Open Source Security Foundation (OpenSSF): A collaborative initiative that aims to improve the security of open-source software by providing best practices, tools, and resources.
Core Infrastructure Initiative (CII): A project managed by the Linux Foundation that supports critical open-source software projects by providing funding, security audits, and best practices.
Bountysource: A crowd-sourced platform that allows developers to receive financial rewards for finding and fixing security vulnerabilities in open-source projects.
Tools to Consider
Practices to Implement
Not sure where to start? Talk to one of our experts. We can help you develop a plan for when trust fails.
Hear our take on the GitHub breach, the disparity in the CI/CD pipeline, and ideas for securing open-source spaces in Episode 181 of Pwned.