To improve national security, we need to turn the spotlight on the data
The United States is at increasing risk of cyberattacks from adversarial nations. Here’s how agencies can share sensitive data safely.
The U.S. government faces increasing cyber risk from adversarial foreign governments. This reality is underscored by Russia’s invasion of Ukraine and the heightened threat of cyberattacks against Ukraine’s supporters. So it’s no surprise that discussions on how to securely share sensitive data among U.S. agencies and allies have become a key aspect of the national defense conversation.
Agencies have made great strides in bolstering information sharing and cybersecurity measures – from moving to the cloud and protecting networks to securing boundaries and implementing tools for security information and event management. While this progress should be applauded, secure information sharing is still a massive challenge. For instance, some officials are still required to request sensitive information by phone or email because there’s no approved technical process for securely sharing intelligence.
That’s not just enormously inefficient. It’s also potentially dangerous. When information sharing becomes onerous, or when people don’t trust the security of data transfer, they might under-share information. This can inhibit collaboration and the ability to gain a complete and accurate picture of threats.
That’s not a situation we want to be in, especially in a world where threats evolve quickly. Instead, we must enable secure, efficient and timely sharing of information across boundaries, particularly for people who require access to sensitive and classified data.
Securing the data itself
Traditional approaches to cybersecurity largely focus on protecting identities, devices, networks and applications. These measures are important to a layered security strategy, but they don’t go far enough to protect the most essential asset — the data itself. Yet that deeper level of protection is fundamental for agencies seeking to implement zero trust cybersecurity practices.
Traditional approaches to data protection have been cumbersome and have introduced hurdles to collaboration. But the good news is that fast, secure collaboration is absolutely possible, and has been for some time, thanks to the Trusted Data Format (TDF). TDF is an open standard, used by the U.S. intelligence community (IC) and the private sector, that secures information at the data level.
TDF creates a protective wrapper around any type of data — including video feeds, sensor transmissions and other shared files — and tags it with attribute-based access controls that equip systems to identify who should or should not have access. With the data itself protected and tagged, information can easily pass back and forth between organizations without concern that it will be compromised while in motion or at rest. The tags also ensure that only people authorized to access the files can do so.
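To make that pattern concrete, here is a minimal, illustrative sketch in Python of a TDF-style envelope. It is not the actual TDF manifest schema – the field names, attribute names and inline key are assumptions made for the example, and the open standard brokers keys through a key access service rather than carrying them with the data – but it shows the basic idea: encrypt the payload, then bind attribute tags to it so any receiving system can evaluate access.

```python
# Illustrative sketch only – a simplified, TDF-inspired envelope, not the real
# TDF manifest schema. Field names and attribute names are invented for this
# example; the open standard defines its own format and wraps keys for a key
# access service instead of carrying them inline.
import base64
import json

from cryptography.fernet import Fernet  # third-party: pip install cryptography


def wrap(payload: bytes, attributes: list[str]) -> dict:
    """Encrypt a payload and bind attribute tags that govern who may open it."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(payload)
    return {
        "payload": base64.b64encode(ciphertext).decode(),
        "manifest": {
            # The access policy travels with the data itself.
            "policy": {"dataAttributes": attributes},
            # Shown in the clear only to keep the sketch self-contained; in
            # practice the key would be wrapped for a key access service.
            "keyAccess": {"wrappedKey": base64.b64encode(key).decode()},
        },
    }


envelope = wrap(b"sensor feed, frame 0042",
                ["clearance:secret", "releasable-to:USA"])
print(json.dumps(envelope["manifest"], indent=2))
```

Because the policy rides with the payload, any system that can evaluate the attributes can enforce it, no matter which network or data store the envelope passes through.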
While TDF has been an IC standard for several years, it has never been more important. The U.S. and its allies must come together to share ideas and defend against emerging threats. For this coalition to be effective, it requires technologies that achieve these four objectives:
Break down network barriers. We must eliminate the need to rely solely on network isolation. Instead, we must protect the data itself – as called for in the president’s Cybersecurity Executive Order, which promotes zero trust data protection. While network isolation does provide certain users with targeted access privileges, it can also pose barriers to collaboration and efficient data sharing. It protects, but perhaps a bit too much. Data must be allowed to flow more freely while still mitigating risk.
Operate independently of system-specific security protocols. Securing the data itself allows data exchange to operate independently of system-specific security enforcement mechanisms, which can change over time and vary from agency to agency. Data that can be accessed only by authorized individuals can be exchanged easily and safely, regardless of differing organizational protocols.
Make the location where the data resides far less important. Agencies spend a lot of time worrying about where their data resides and whether storing data on-premises is more secure than storing it in the cloud. Protecting each piece of data individually makes the location of the data much less of a concern. When location is less important, data assets can be co-located, making it easier for users to access all necessary information and further reducing friction around data sharing.
Support zero trust. At its core, zero trust requires a “deny by default” mentality, granting or denying access based on a user’s credentials, location and other factors. Organizations have tried to answer this call by building zero trust network architectures, but those architectures require segmentation and create silos. It is far more efficient to implement zero trust at the data level.
Delivering object-level data protection, along with easily customizable and highly targeted access privileges, is the best option. It allows collaboration among parties that might not otherwise have access to data on a secure network, while the information itself remains protected. That is what zero trust should look like.
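As a rough illustration of what “deny by default” looks like when it is enforced at the data level rather than at the network edge, the sketch below checks a user’s attributes against the policy carried in an envelope like the one above. The attribute names and the simple subset rule are assumptions made for the example; real deployments delegate this decision to a policy or key access service with far richer logic.

```python
# Hypothetical deny-by-default check at the data level. Attribute names and
# the subset rule are illustrative assumptions, not any agency's schema or
# the TDF standard's policy-evaluation logic.
def can_access(user_attributes: set[str], manifest: dict) -> bool:
    """Grant access only if the user holds every attribute the data demands."""
    required = set(manifest["policy"]["dataAttributes"])
    return required.issubset(user_attributes)  # anything less is denied


manifest = {"policy": {"dataAttributes": ["clearance:secret", "releasable-to:USA"]}}

analyst = {"clearance:secret", "releasable-to:USA", "role:analyst"}
contractor = {"clearance:confidential"}

print(can_access(analyst, manifest))     # True: every required attribute is held
print(can_access(contractor, manifest))  # False: denied by default
```

The decision depends only on the data’s own policy and the requester’s attributes, not on which network segment either of them happens to sit on.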
Trust isn’t just about data sharing. It’s also about believing that the information being shared will reach its destination securely and that the delivery mechanism is reliable. Without this confidence in the security of shared sensitive data, we’ll never be able to collaborate or exchange intelligence effectively, and we’ll continue to put ourselves at risk.
Let’s not do that any longer. The price is far too high.