Army researchers develop metrics for cyber defenders' agility
The cyber agility framework can help organizations better understand the effectiveness of their cybersecurity efforts.
Army researchers have developed a cyber agility framework – a new way to train defensive cyber operators to thwart attackers.
Applied like a set of rules or an algorithm, the framework can help organizations better understand the effectiveness of their cybersecurity efforts. It also serves as a foundation for developing software.
"Historically, when dealing with cybersecurity, analysts are looking at screens full of numbers, trying to identify where, and what kind of, cyberattacks are taking place by looking for patterns," said Purush Iyer, division chief of network sciences at Army Research Office, which is a part of Army Research Laboratory.
"The cyber agility framework offers a better way of identifying (and predicting) attacks, by taking into account past history of traffic, and allowing an analyst to concentrate on higher order reasoning. It's a big step in enhancing cybersecurity predictability."
In a partnership between the University of Texas at San Antonio (UTSA) and the Army Research Laboratory, cybersecurity researchers developed a set of metrics to help operators measure how well their methods and tactics work during an active intrusion.
Professor Shouhuai Xu, director of UTSA's laboratory for cybersecurity dynamics, said that agility, as the framework's name implies, is a cornerstone of cybersecurity.
"We realized that agility is an important aspect of cybersecurity and is poorly understood. Through the framework and limited empirical study, we observed that cyber attackers are more agile than cyber defenders in many cases," Xu said via email.
The framework uses a suite of 14 metrics to measure agility in terms of timeliness and effectiveness, he said: namely, how long it takes a cyber defender to counter an attacker and adapt. Xu said cyber practitioners can use the framework on real-world cyber datasets, even classified or sensitive ones.
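The article does not enumerate the 14 metrics, but the idea of scoring agility by how quickly a defender adapts can be illustrated with a minimal sketch. The event log, the attack_time and response_time fields, and the agility_ratio helper below are all hypothetical, not part of the actual framework.

```python
# Illustrative sketch only: the article does not spell out the 14 metrics,
# so this computes a single hypothetical timeliness-style measure from a
# toy log of attacker moves and the defender's responses to them.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Engagement:
    attack_time: float    # hours: when the attacker's new technique is first seen
    response_time: float  # hours: when the defender counters/adapts to it


def mean_response_lag(engagements):
    """Average time (hours) the defender needs to adapt to each attacker move."""
    return mean(e.response_time - e.attack_time for e in engagements)


def agility_ratio(attacker_lag, defender_lag):
    """Hypothetical ratio: values above 1 mean the defender adapts faster than the attacker."""
    return attacker_lag / defender_lag


if __name__ == "__main__":
    log = [
        Engagement(attack_time=0.0, response_time=6.0),
        Engagement(attack_time=10.0, response_time=13.0),
        Engagement(attack_time=24.0, response_time=31.5),
    ]
    defender_lag = mean_response_lag(log)
    # Assume the attacker needed roughly 4 hours on average to roll out each new technique.
    print(f"Defender mean adaptation lag: {defender_lag:.1f} h")
    print(f"Agility ratio (attacker/defender): {agility_ratio(4.0, defender_lag):.2f}")
```

The point of such a measure, in the spirit of the framework, is that it yields a single number that can be tracked over time or compared across tools, rather than a qualitative impression of how well a defense performed.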
Jose Mireles, who co-developed the framework as part of his graduate thesis at UTSA, said that the research team couldn't find any "quantitative number with respect to time" related to the interactions between attackers and defenders in existing literature.
"A lot of people will give you a qualitative number," such as a system is pretty good and catches 90% of attackers, "but the figures don't stack up" when comparing vendor products, Mireles said. So the researchers set out to find common ground.
"The framework's intent is to try to measure with an actual number some of these performance standards or questions that we've proposed," Mireles said. "If you [monitor] attackers and defenders …how are you doing? Are you effectively catching or not?" he said. Framework users get a hard number that they "can present and compare against for improvement."
For now, the framework is in the foundational research phase, but Xu said researchers are "investigating how to improve the framework and possibly establish some use cases to make it easier to adopt the framework in practice." The ultimate goal, he said, is for it to be widely adopted and improved upon via collaborations with industry.
Xu said the research team is reaching out to industry partners and the Army Science and Technology program, but the future is unclear.
Iyer said the project isn't quite ready for prime time, but given the current threat environment it is definitely necessary, with the U.S. and Iran "hacking each other ... and DOD is in the thick of things."
Hopefully, Iyer said, the framework will make its way to U.S. Cyber Command or elsewhere in the Defense Department within the next three to five years.