Inside the Government's Quest to Safely Use Open-Source Code
One security company found that about 10 percent of individual software components contain a known vulnerability.
On July 29, 2017, the IT security team at Equifax noticed some unusual activity on one of the credit bureau’s public websites. The team blocked the suspicious traffic, but the next day, it came back.
The company started formally investigating the situation a few days later, but at that point it was too late. Hackers had already made off with sensitive data on millions of people, including the names, birthdays and Social Security numbers of nearly half the U.S. population.
The Equifax incident, which stands as the fifth largest data breach in history, grew out of a bug in the open source code the company used to build an application for people to dispute credit reports. The Homeland Security Department notified Equifax about the vulnerability in the Apache Struts software in March 2017, but the company never fixed the bug, leaving wide open a door that hackers used for more than two months to scoop up records on 145 million people.
The breach highlights one of the most pressing issues facing the cybersecurity community today: How do government agencies and private companies make sure the open source software that underlies nearly every piece of tech on the market is safe to use?
Inside the Open Source Supply Chain
Open source software is essentially chunks of code that are available online for anyone to use. While many non-coders may think software is written from scratch, in fact, much of the modern development process involves piecing together these blocks of code to create new applications. It’s kind of like building with Legos: You can stack the blocks in infinite ways, but you don’t mold the plastic yourself.
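To make the Lego analogy concrete, here is a minimal sketch that assumes the widely used open source web framework Flask as a hypothetical example. The developer writes only a few lines; the routing, HTTP handling and everything else comes from open source components pulled in as dependencies.

```python
# A minimal illustration of building with open source "blocks," assuming the
# open source Flask framework is installed (pip install flask). The few lines
# written here sit on top of thousands of lines of open source code supplied
# by Flask and its own dependencies.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def status():
    # The only "original" logic in this application; everything else,
    # routing, request parsing, JSON encoding, comes from open source code.
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run()
```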
The popularity of open source software has exploded in recent years to keep up with the growing demand for fresh tech, according to Derek Weeks, vice president of the software security company Sonatype. The system allows developers to churn out more code in less time and keeps them from constantly reinventing the wheel, he said.
“Every bit of software in every single market and every single agency is using open source components. It’s so ubiquitous now,” Weeks told Nextgov. Researchers at Sonatype estimate 80 to 90 percent of every modern application is composed of open source components.
But for all its efficiency, open source development can also pose serious cybersecurity risks. If a block of open source code contains a vulnerability, developers who use it are unknowingly building the bug into their software. And that happens pretty often.
For the popular open source coding language Java, Sonatype found about 10 percent of individual software components contain a known vulnerability, and other coding languages are no safer. In a recent survey, some 25 percent of developers in government and industry said their organization suffered a security breach as a result of an open source vulnerability in the past year, up more than 70 percent from 2014.
Historically, the tech community assumed open source code was comparatively secure because it’s touched by so many different developers, but that’s not necessarily the case, according to Emile Monette, a cyber supply chain risk specialist at the Cybersecurity and Infrastructure Security Agency. Most open source developers focus more on functionality than security, and don’t always keep up with the latest vulnerabilities and updates, he told Nextgov.
As such, it can be difficult for agencies to know if the software they’re buying contains an open source vulnerability that’s been overlooked by the vendor. And beyond known bugs, it’s likely there are even more components carrying defects that have yet to be discovered.
“No one can write perfect code,” Weeks said. “All code everywhere, anywhere, whether it’s an open source component or written from scratch, probably has a security flaw in it somewhere.”
How to Squish a Bug
When a bug is discovered and disclosed in a piece of software, open source or otherwise, it usually takes about three days for online adversaries to figure out how to exploit it, according to Weeks. During that window, organizations are racing to check if any of their applications contain the compromised software and patch the bug before hackers can get there.
Researchers at the firm Risk Based Security documented more than 22,000 new software vulnerabilities in 2018, meaning agencies and companies needed to potentially check their systems for some 420 bugs every week.
Locating and fixing vulnerabilities within 72 hours at that scale is a heavy lift for any large organization, and given the convoluted nature of the government’s IT systems, federal agencies may be particularly ill-equipped for the challenge.
Bob Metzger, who leads the D.C. office of the law firm RJO and specializes in cyber and supply chain security for federal contractors, said most agencies don’t have a systematic process for fixing vulnerabilities in their IT ecosystems. To issue patches, agencies must first know what systems contain the bug, but Metzger said most of the government probably doesn’t understand its IT systems in that much detail.
“I would be very surprised if even a small percentage of federal agencies today had a usable inventory of the open source components in the software that they rely upon for their critical agency functions,” he told Nextgov. According to government watchdogs, some agencies don’t have a complete list of the applications on their networks, much less the open source software each one is running.
In recent years, the government has started recognizing the need to take stock of its tech, but “we've seen not yet that interest coalesce into a clear direction that agencies would take, much less consistent policies or practices” for dissecting the open source components of each application, Metzger said.
CISA recently issued an order requiring agencies to patch critical IT vulnerabilities within 15 days—a significant step up from the previous 30-day standard—but agencies have historically struggled to meet such deadlines. According to Monette, the murkiness of the government’s tech ecosystem is one of the main reasons why.
“We don't know what's in [the software], and that transparency problem then does not allow us to appropriately manage the vulnerabilities and weaknesses that [arise],” he said. “You can't manage what you don't know about."
Getting a Grip on Your Tech
Government and industry tech leaders have proposed a handful of ways the government could better wrap its head around its IT systems.
One popular option is pushing agencies to purchase tech that comes with a so-called “software bill of materials,” or SBoM, which lists the various components that underlie a particular system. Because open source components are identical across applications, they’re easy to label and trace through the development and deployment process. That way, if a bug is discovered in an open source component, agencies could quickly scan the bill of materials to see whether an application contains the compromised code and issue patches as needed.
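As a rough illustration of that idea, the sketch below checks a hypothetical SBoM file ("sbom.json") that simply lists component names and versions against a hypothetical advisory feed ("advisories.json"). Real SBoMs typically use standard formats such as SPDX or CycloneDX, and real scanners draw on live vulnerability databases.

```python
# A minimal sketch of checking a software bill of materials against known
# vulnerabilities. The file names, JSON layout and advisory feed here are
# hypothetical stand-ins for standard SBoM formats and real vulnerability data.
import json

def load_components(sbom_path):
    """Read a simple SBoM: {"components": [{"name": ..., "version": ...}, ...]}."""
    with open(sbom_path) as f:
        return json.load(f)["components"]

def find_vulnerable(components, advisories):
    """Return the components whose name and version appear in the advisory list."""
    flagged = {(a["name"], a["version"]) for a in advisories}
    return [c for c in components if (c["name"], c["version"]) in flagged]

if __name__ == "__main__":
    components = load_components("sbom.json")
    with open("advisories.json") as f:
        advisories = json.load(f)
    for hit in find_vulnerable(components, advisories):
        print(f"Patch needed: {hit['name']} {hit['version']}")
```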
The push for component documentation is already gaining traction across government. The Homeland Security Department requires vendors to submit SBoMs for every tool offered under the Continuous Diagnostics and Mitigation program, and the National Telecommunications and Information Administration last year launched a program focused partly on exploring how SBoMs could improve software transparency.
In its high-profile “Deliver Uncompromised” framework published last year, MITRE recommended the Pentagon also start requiring SBoMs from defense contractors to help lock down the military’s supply chain against outside threats. Metzger, who co-authored “Deliver Uncompromised,” said the practice isn’t a magic fix for the government’s software security issues, but agencies shouldn’t make perfect the enemy of good.
“[A software bill of materials] is not going to guarantee security, but ... it can improve your knowledge of the inventory that's represented by your software,” he said. “When you find that something you have installed has either an operational problem or has been subject to a breach, you have a chance to fix it rather than wait until you've suffered the consequence of not knowing and not fixing.”
Given the sheer size of the government’s IT infrastructure, Metzger said it’s also crucial for agencies to automate the process of scanning for bugs, something Homeland Security officials have long advocated. Weeks said agencies should also lean on automation when creating software bills of materials, which he said can be built retroactively for systems they’ve already bought.
“There’s no way to employ enough people to manually assess what is going on” with open source components and vulnerabilities, he said. “The volume is just too large.”
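At the smallest scale, that kind of automation can be as simple as having a machine list its own components. The sketch below, a hypothetical illustration rather than any agency's actual tooling, builds a basic after-the-fact inventory of the open source packages installed in a Python environment using only the standard library.

```python
# A minimal sketch of retroactively inventorying the open source components
# installed in a Python environment (Python 3.8 or later), using only the
# standard library. This is an illustration, not a substitute for a full
# SBoM tool.
from importlib import metadata

def build_component_inventory():
    """Return a sorted list of (package name, version) pairs for this environment."""
    components = {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    }
    return sorted(components.items())

if __name__ == "__main__":
    # Print one "name==version" line per component, ready to feed a scanner.
    for name, version in build_component_inventory():
        print(f"{name}=={version}")
```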
SBoMs and automation would help agencies keep vulnerabilities in check, but protecting government tech en masse will require officials to prioritize security in every IT purchase they make, according to Monette. They haven’t always done so, he said, but today CISA is working with the General Services Administration and other agencies to make cyber protections standard in software acquisitions.
“Governmentwide policies on using open source ... don't really touch security. They're all about licensing,” Monette said. “Many [industry leaders] are already conducting themselves in accordance with best practices for software development, open source or otherwise, but it's not something that the government has necessarily valued in the source selection process."
By demanding security from its contractors, Monette believes the government could potentially drive more tech companies to adopt safer software development processes. Since CISA started requiring SBoMs and other best practices from CDM vendors, Monette said he’s already seen more companies working to outdo each other on security.
“We want to incentivize a race to the top,” he said. “We think that the approach that we've taken in CDM is a very positive first step. This is really an instance where the government is putting its money where its mouth is."
Cyanide in the Tylenol Factory
As the government works to secure its tech amid a flood of new vulnerabilities, a new threat is emerging in the open source software world that will make it even more important for agencies to lock down their IT supply chains.
In the last 18 months, security researchers have seen a rise in malicious code injections, a type of attack in which adversaries secretly insert bugs into software that’s still in the development stage. In the open source world, that means building a backdoor or other vulnerability into a block of code that could be downloaded millions of times by unknowing developers.
With traditional, publicly disclosed vulnerabilities, cyber attackers and defenders are operating on a level playing field, the former working to exploit the bug before the latter can patch it. But these hidden bugs allow bad actors to fly under the radar, covertly scooping up data or monitoring unsuspecting targets until the bug is discovered and revealed by security researchers.
“[It]’s kind of the equivalent of the person inside the Tylenol factory injecting cyanide into the tablets,” Weeks said. “From an adversary’s standpoint, it’s incredibly efficient.”
While software bills of materials and rapid patching practices could help agencies quickly close those backdoors once they’re revealed, they’re powerless against vulnerabilities that are unknown to the public. To protect against malicious code injections, the government needs to purge its entire software supply chain of potential perpetrators.
Supply chain security is becoming a top priority for federal cyber leaders, but while agencies like the Pentagon and Homeland Security Department have undertaken efforts to keep away dubious tech, the government largely lacks a comprehensive, scalable approach to ensuring it only does business with trusted partners. Monette pointed to the governmentwide ban on Kaspersky Lab software as a model for handling wholesale supply chain threats, but he said detecting malicious code at the individual component level remains a persistent challenge.
And the threat of malicious code injections isn’t going away any time soon, Weeks said.
On April 25, the security team at Docker Hub, the world’s largest open source repository for container images, revealed hackers had potentially accessed login credentials for some 190,000 users. Armed with that data, adversaries could potentially log into the system and insert malware into countless applications and IT systems around the globe.
“It’s not saying there was a malicious code injection into [Docker software], but someone just got the keys to be able to do that,” Weeks said. “[It’s] super dangerous in a supply chain-oriented context. The implications of this downstream in the supply chain could be tremendous.”