Published February 14, 2014
Embracing the Fog of War: Assessment and Metrics in Counterinsurgency by Ben Connable. RAND, 2012, 340 pp.
War assessment serves two vital functions in wartime: it gives the taxpaying public transparency about the overall trajectory of a campaign, and it gives military strategists and policymakers guidance on the next steps toward success. Collecting accurate information in wartime is difficult enough; the greater and unavoidable task is to assess the state of the campaign in a way that is meaningful to decision makers. Timely and accurate assessments are therefore a necessary component of victory, yet they are hardly a hot-ticket debate item. No one disputes the importance of war assessment, but few truly understand the pitfalls and tradeoffs of data collection and interpretation. The assessment literature is heady, often abstract, and largely a conversation confined to methods scholars and technical analysts. It is therefore welcome and fitting that RAND policy analyst and former Marine Corps foreign area officer Ben Connable opens the conversation to a wider readership. In his monograph Embracing the Fog of War, Connable's task is to assess the state of war metrics for counterinsurgency (COIN) doctrine.
The central argument is that the current system of assessment (pattern, trend, and time-series analysis) relies on a centralized assessment process fundamentally at odds with the nature of the nation's COIN doctrine, which is a decentralized practice. To illustrate this, Connable examines two instances of COIN assessment that relied on centralized, aggregated assessments: Vietnam and Afghanistan. In both cases he reveals a struggle driven by the desire to rely on information that is countable and therefore easily compiled into an aggregate measure, and he cogently shows that the aggregate data in both cases produced wildly ambiguous and misleading results. His discussion of the Vietnam and Afghanistan assessments reveals three points in data collection and interpretation that render centralized quantitative assessment deeply troubling: the selection of the relevant variables for counting, the accuracy and consistency of the numbers generated, and the interpretation and use of the assessment once produced. On all three counts, the data generated in Vietnam and Afghanistan were faulty.
Overall, the picture of COIN assessment that emerges is complex and disappointing. Counting instances of any variable is fraught with peril for strategic-level analysis, as Connable shows with descriptive historical flair. Sadly, the reader is unprepared for the DoD's seeming lack of analytical depth in taking assessment seriously. Throughout the book the reader develops the suspicion that the centralized metric is intended only to placate the public rather than to provide deep data about the trajectory of the campaign, a suspicion Connable heaps onto the reader, leaving no doubt of his position.
As for his distrust of processes for interpreting aggregate data, Connable is knowingly working through a well-established debate in political science (one that those outside academia might miss) about the primacy of numbers in what is not a hard science but a social one. The book ultimately makes very strong claims about the nature of counterinsurgency, the act of interpretation through centralized aggregate data, and the utility of aggregate data in warfare. We are reminded that Vietnam and Afghanistan are kaleidoscopes of power contexts in which the United States hopes to create governing stability. The act of centralized assessment is akin to mixing the kaleidoscope's colors together: the colors come up brown every time, defeating the very attempt to interpret a complex environment. Connable reminds us that holism in COIN assessment is not about averages but about contextualized understanding.
The book's strongest chapter is the eighth, and likely the one that will endure longest in Connable's career. Here he provides a point-by-point critique of COIN assessment techniques against the DoD's own standards for assessment, finds that they fail on all counts, and pleads for a turn away from this tendency toward something far more sensitive to the truths in the field. Dissatisfied with the current system, Connable ambitiously closes with a final chapter that lays out a new framework for understanding the state of a campaign, one this reviewer hopes decision makers take seriously.
If there is any difficulty with the text, it lies in the focus of the project. Although the book was written as a critique of centralized quantitative assessment in nonconventional warfare, its points leave the reader wondering whether centralized assessment is incompatible with the conduct of all wars, not just counterinsurgencies. In this sense, the book expounds somewhat on the unique nature of COIN, but the critique of centralized aggregates is not specific to COIN. Since the monograph is ultimately a product of RAND's internal review and publication process, it is unclear whether earlier drafts patched these gaps in the argument before being edited to meet a specific project request. The book is not for the average reader but is thankfully accessible to anyone. It is likely to be best utilized by those with a dog in the fight: policymakers and those trying to increase military effectiveness in a time of war.
The critique could have been even more radical. If successful COIN operations are a function of decentralized, context-specific solutions to stability, then we cannot eliminate the context that frames social interaction, and here is why. Even given perfectly accurate and consistently collected raw data for universally agreed-upon variables (that is, even if hamlet A and hamlet B had identical profiles of countable objects), a change in those variables might have entirely different meanings. An increase in roadside bombs could be a good sign in one hamlet (the last gasp of an enemy on the run) and a bad sign in the other (evidence that more insurgents are embedding themselves in the local population). By definition, then, aggregating the rise or fall of those numbers would misinterpret them. Even if analysts could build an ironclad, eagle-eyed data collection mechanism, and even if every field commander reviewed a list of important variables and agreed universally that these were the important ones, the radical nature of these operations may mean that each variable matters for completely different reasons: not simply the weight of the variable, but the meaning of the variable itself.
Finally, Connable could go even further in his criticism of aggregate counting and effects-based assessment (EBA) as potentially inappropriate for collection by soldiers in the field, because the act of certain kinds of analysis can disrupt delicate contexts of interaction. Immediate and aggressive research is needed to determine whether soldiers exhibit perverse behavior in attempts to feed or starve the metric. That is, it is not enough to worry about the accuracy of the numbers for analysis. There is a real likelihood that the counting will take on the status of the objective itself, as if fewer civilians disobeying curfew, fewer soldier casualties, or more schools opened were not merely proxies for stability but the end itself. If this happens, we risk any and all of the following: exacerbated levels of violence, increased risk avoidance, oppression of civilian populations to reduce casualties, and severe morale problems among soldiers who meet the metrics perfunctorily. In short, winning the metric but losing the campaign.
The views expressed are those of the author(s) and do not reflect the official policy or position of the US government or the Department of Defense.