By Panayotis A. Yannakogeorgos and Lt Col David G. Vernal, USAF, Air Force Cyber College
/ Published June 09, 2017
Offensive cyberpower is a dark art of manipulating computer logic through misdirection and deception. We do not use the term cyberpower broadly herein. We differentiate it from the obvious benefits of employing computers and network technology to compute, store, and transmit information, or what we might think of as the intended uses by owners and authorized users. We are not writing about cybersecurity. Instead, we offer four axioms of exploiting cyber insecurity to achieve cyberpower: misdirecting functionality, stealth access, exploiting vulnerabilities, and superior human cognition.
Offensive cyberpower is based on superior knowledge about what logical statements computers can and will trust.
The foundation of cyberspace is control. Logical statements control computers and embedded devices to execute programmed functionality. In a cyber competition, unauthorized users win when they convince a computer to trust their commands. System owners can unwittingly lose control when a competitor uses greater system knowledge to employ commands that seize functionality. To best illustrate this, consider system vulnerabilities: logical errors in a computer that allow unauthorized users to convince it that their commands are legitimate instructions. Exploiting these vulnerabilities relies on logical statements the computer understands, used to trigger a function that preexists in the system. The logical statement does not add functionality. It allows the computer to do what it is already capable of doing but has not yet been told to do. Indeed, the system owner may not even know that this functionality exists.
The problem for the unauthorized user is that the "magic words" of these deceptive logic statements must remain secret. They lose their power if the system owner discovers the vulnerability and injects code to ensure the computer is not tricked into ignoring the owner's instructions. However, computer networks are complex, so "patching" vulnerabilities across a large enterprise is a complex endeavor. A system owner may not be able to get word to all computers in time to immunize them against the magic words and may not even know that some devices still believe them. The unauthorized user may still be able to use the magic words to exploit those computers, but an owner on alert for the words diminishes their power.
Hence, offensive cyberpower is the exploitation of the "unconceived" vulnerability, where your "unknown cyber insecurity" trumps your "known cybersecurity" in a complex, interactive system of computers, networks, and humans.
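To make the axiom concrete, here is a minimal, self-contained sketch (our illustration, not drawn from the article or any real incident) using a classic SQL-injection pattern. The attacker's "magic words" add no functionality; they trigger logic the database engine already possesses, and a patch that treats input strictly as data strips the words of their power.

```python
import sqlite3

# Hypothetical toy system: a database with two users and their secrets.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'alpha'), ('bob', 'bravo')")

def lookup_vulnerable(name):
    # The owner intended: return one named user's secret. Because input is
    # pasted into the query, the computer trusts attacker logic as instructions.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return [row[0] for row in db.execute(query)]

def lookup_patched(name):
    # The "patch": parameter binding ensures input stays data, never
    # instructions, so the magic words lose their power.
    return [row[0] for row in
            db.execute("SELECT secret FROM users WHERE name = ?", (name,))]

magic_words = "nobody' OR '1'='1"        # the deceptive logical statement
print(lookup_vulnerable(magic_words))    # every secret leaks
print(lookup_patched(magic_words))       # nothing leaks
```

Note that the injected clause invokes only preexisting functionality of the query engine, exactly as the axiom describes, and that the fix works only on hosts where the patched function has actually been deployed.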
Offensive cyberpower favors stealth access, enabling observation rather than the creation of noticeable effects.
Access is the basis of all cyberspace operations. Stealth access allows for the discovery of functions and processes within a system without the system owner's knowledge. Silent observation lets observers understand a system's processes and functionality: what a successful outcome looks like and what can go wrong. That knowledge is what allows an intruder to manipulate, disrupt, deny, degrade, or destroy a process and prevent the system from executing as the legitimate user intended. It also enables the collection of information and the building of a lexicon of trusted logical schemes. Stealth observation therefore deepens the understanding of a target system and expands the options for acting at the time and place of one's choosing.
As with all information-based choices, though, creating observable effects runs the risk of divulging how much one knows and how one knows it. Suddenly, magic words can lose their power. One can still act, but the further those actions are removed from the sources of knowledge (in space, time, or apparent causality), the more likely one is to protect the methods by which one convinced the computers to believe one's instructions over those of the system owner. The imperative to protect these sources and methods, and the bias toward gaining knowledge through observation, bind cyberpower inseparably with intelligence, creating potential information superiority that becomes actual information superiority when the observer chooses to act. The degree to which that information superiority remains an enduring advantage depends in large part on whether the observed and affected can discover that they have been observed and affected.
Computers are only as vulnerable to magic words as humans let them be.
Since cyberspace is overwhelmingly a man-made domain, we are only at risk to the extent that we accept functionality that can be exploited by known and unknown instruction sets to allow unauthorized observation and the creation of effects. For example, if everyone in America lived on hyper-locally sourced energy generation—down to the Tesla battery and rooftop solar array level—vulnerabilities in the national power grid that could affect millions under the current system would not worry us. If we chose otherwise, we could make ourselves much more secure, at the cost of sheer capital on the one hand and of the availability and convenience of communications and information flow on the other. This also applies to our business processes: with enough good design and forethought, we could make it much harder for a cyber vulnerability to translate into a consequential cascading failure through redundancy, well-designed mission assurance, limited liability, and graceful failures. The more certain a defender is that a computer will obey only her words, the better off she is; likewise, the more certain she is that a computer under the evil influence of magic words can still create only limited harm, the more secure she will be.
However, the economics of assuring that a computer obeys only commands from a legitimate user are prohibitive under the extant business model of software, firmware, and hardware producers. Reducing vulnerability requires more time to test for security flaws and fix them, which delays getting a product to market and risks competitive disadvantage. Since companies are motivated to make money, there is little incentive to create secure products, especially since there is no product liability in this domain. And trying to find the last bug in an entire system, or to stop the most sophisticated attack against all unknown cyber insecurities, requires unattainable investments of time and resources.
Therefore, until the producers who build the elements that compose the domain build security into their products, we have to live with "pretty good" security. Granted, no system is entirely invulnerable; however, the time and cost an adversary must invest to trick a system can be increased.
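The design goal of limiting the harm a tricked computer can do can be sketched in a few lines. This is a hypothetical illustration (the feed names and values are invented) of "graceful failure": when a primary component fails or can no longer be trusted, the system degrades to bounded, known-good behavior instead of letting the failure cascade.

```python
# Hypothetical sketch of graceful failure: one compromised or failed
# component yields bounded, degraded output rather than a cascading outage.

def primary_feed():
    # Stand-in for a component that has failed or can no longer be trusted.
    raise ConnectionError("primary data feed unavailable or untrusted")

def cached_feed():
    # Bounded fallback: the last known-good value, clearly marked degraded.
    return {"status": "degraded", "value": 41.9}

def read_sensor():
    try:
        return primary_feed()
    except ConnectionError:
        # Fail gracefully: fall back to known-good data instead of
        # propagating the failure through the rest of the system.
        return cached_feed()

print(read_sensor())  # mission continues in a degraded-but-safe mode
```

The design choice here mirrors the axiom: the defender's security comes less from preventing every trick than from ensuring a tricked component "can still create only limited harm."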
Cyberpower, more so than the other physical-based powers, is a purer form of cognitive struggle.
In the physical world, overmatches can occasionally make intelligence irrelevant—even if the residents of Hiroshima and Nagasaki had known the bombs were on their way, without a meaningful way for Japan to resist airborne attack, the cities would still have been destroyed. Cyberpower may be different. If cyberpower is based on superior system knowledge used to craft better overriding logical statements, it follows that cyber contests will rely more on outthinking the adversary than on outperforming it materially in the physical world. All domains rely on thought and discovery to learn how to exploit the domain and, by extension, how to create and employ weapons to achieve effects. Air-minded thinking became airpower and action through such discoveries, which led to physically fit and equipped troops turning the air domain of warfare into a physical battlefield. It is these physical aspects of the air and other domains that differentiate them from the cyber domain. A nation-state or violent nonstate actor that can deduce how to create its desired effects within a superpower's computers and networks does not have to develop comparable material and nonmaterial military might to compete with a superpower dependent upon cyberspace to command and control its forces. Likewise, even the most exquisite and ingenious cyber weapons created by the best-resourced minds can be rendered impotent if the right magic words lose their power, flattening the playing field so that even a clever prince can succumb to a cleverer pauper.
Therefore, cyberpower relies upon misdirection and deception. More so than in the physical world, contests in cyberspace may be decided by who can be sneakier and more adept at creating illusions by exploiting the unseen informational advantage. As an example, we are slowly waking up to the fact that unverifiable data may be more dangerous than destroyed data because at least with destroyed data, one knows someone's been using magic words. The side better able to hide the sources and methods of—and make more creative use of—its information superiority will have the advantage.
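The point about unverifiable data can be illustrated with a standard integrity-checking technique. The sketch below is our example, not the authors' (the key and messages are invented): a keyed hash (HMAC) makes silent manipulation of data detectable, turning unverifiable data back into verifiable data.

```python
import hashlib
import hmac

# Hypothetical shared secret between the data's producer and its verifier.
KEY = b"shared-secret-key"

def tag(message: bytes) -> bytes:
    # Compute a keyed integrity tag over the message.
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(tag(message), mac)

original = b"coordinates: 34.7N 86.6W"
mac = tag(original)

tampered = b"coordinates: 34.7N 96.6W"   # one silently altered digit
print(verify(original, mac))   # the genuine data checks out
print(verify(tampered, mac))   # the manipulation is detectable
```

Without such a check, the tampered reading is indistinguishable from the truth, which is precisely why silent manipulation can be more dangerous than visible destruction.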
All four axioms have implications for what a cyber workforce meant to create, employ, and defend against cyberpower should look like in terms of its skill sets and desirable attributes. If future conflicts are expected to have a cyber component, the nation should aim to recruit and develop intelligence and military personnel most capable of turning cyber thought rapidly into cyber action based on misdirecting functionality, stealth access, exploiting vulnerabilities, and superior human cognition. They should have as few constraints as possible imported from the physical world, where military thought has to be mediated by the tyrannies of time and distance and the operation of heavy equipment before it can be turned into military action. Since cyberwarfare is a cognitive art, more attention should be given to cultivating agile, creative, and critical thinkers who understand how to exploit both technology and the foibles of human nature and how to guard against their own cognitive blind spots.
Dr. Panayotis A. Yannakogeorgos is dean of the Air Force Cyber College, Maxwell AFB, Alabama.
Lt Col David G. Vernal, USAF, is a career intelligence officer who has served in a variety of roles, most recently as the commander of the 450th Intelligence Squadron, Ramstein Air Base, Germany.