Military “Science”: What Have You Done For Me Lately?

Grant Martin
Feb 16, 2021 · 13 min read


A Response to COL Celestino Perez’s article, “What Military Education Forgets: Strategy is Performance.”

In the 1950s, Herbert Simon and Dwight Waldo, two political science professors, squared off against one another in an attempt to answer why the social sciences had made so little headway in solving society’s toughest problems. Poverty, war, hunger, drug use, healthcare, education: if anything, the social sciences had watched the world grow worse since universities had established those disciplines. These professors of the so-called “softer” sciences looked on with envy as physicists split the atom, engineers sent rockets into space, and medical doctors saved ever more lives. If public administrators ignored public administration researchers, then what good were they?

Likewise, with the ascendancy of the military profession in the West to a science beginning during the Enlightenment, one might ask the same question: why have the military “sciences” not solved society’s most pressing problems with respect to war? Even with all of the West’s impressive technological improvements (“smart bombs,” targeting methods, logistics supply chains, and tactical dominance), the recent record of First World militaries seems to many observers less than impressive. Today in Afghanistan, the U.S. and its allies face year twenty of a protracted war with little to show for it.

One possibility is that we do not conduct strategy formulation correctly. This is indeed the argument that COL Celestino Perez makes in his 2018 article, “What Military Education Forgets: Strategy is Performance.” Strategy is the preferred tool of the military sciences. It is taught in every professional military education school, written into all forms of doctrine, and fiercely debated in professional journals as to whether it is being done correctly.

COL Perez makes the case that the reason the military does not do strategy well enough is that military schools do not require enough practice from their students. This might be a pertinent argument if strategy were a sufficient approach for all times and places: just practice it more! Is that, however, a good assumption? What Simon and Waldo were arguing about was whether or not social phenomena were the same as physical phenomena. Is war a social phenomenon? Is there a difference? If it is and if there is, then are the best tools to apply to it found within the realms of science, art, or a combination of both? Further, should we just “do” strategy better? Is that our main problem today?

Social Phenomena and Physical Phenomena

Simon argued that the reason the social sciences were not solving social problems was that they were not applying the scientific method correctly. COL Perez and many others argue similarly that the military is not applying the strategic method correctly. Many strategists reason that the politicians are at fault for the military’s failure to do strategy well: they simply will not give the military a militarily achievable end state with which the military can conduct the strategic process. COL Perez offers a different explanation: the military is not producing competent strategists, specifically because they do not practice strategy enough.

Waldo, in contrast to Simon, attempted to make the case that social problems were not the same kind of thing as physical phenomena. Simon had been guided by the philosophy of logical positivism: the idea that all things in the world could be understood through empirical observation and data analysis using hypothesis testing. Waldo, on the other hand, was influenced by post-positivism: the belief that social phenomena were inherently different from physical phenomena, and thus required different types of methods. Simply put, a post-positivist would assert that war was not the same kind of thing as a solar system or a piece of rock. According to logical positivists, the same methods used to study solar systems and rocks could also be applied to war (and poverty, politics, and drug use for that matter).

Looking at the military sciences today, it is hard not to find many parallels with the logical positivist philosophy. Take, for instance, the U.S. Military’s preferred construct for meeting policy objectives: strategy. According to U.S. military doctrine,

“…strategy” is focused on “end states.” It is the “bridge between national strategic guidance and the …planning required to achieve national and regional objectives….” It “links … [c]ommand activities, operations, and resources to US Government policy…” It “should describe the regional end state and the objectives, ways, and means to achieve it. Strategy “defines national military objectives (i.e., ends), how to accomplish these objectives (i.e., ways), and addresses the military capabilities required to execute the strategy (i.e., means).”

The cognitive approach to developing strategies means organizing and employing “military forces by integrating ends, ways, and means.”

“When using the logic of geographical position, lines of operation link multiple tasks to focus efforts towards strategic conditions. The military, when using the “logic of purpose,” utilizes the same linkage within a line of effort while conducting “stability [operations]…” to accomplish strategic conditions.

Lastly, strategy is “a prudent idea… for employing instruments of national power in a synchronized and integrated fashion to achieve theater, national, and/or multinational objectives.”

All of these descriptions of how to conduct strategy share at least three common characteristics. First, they assume that whatever overarching political purpose is in play is internally consistent in its logic: that the purpose of the policy does not contain contradictions, paradoxes, false choices, competing political value propositions, generalizations for political obfuscation, attempts to divert attention, and the like. Second, they assume that the best way to reach these objectives is to plot out a linear and logical path towards them in a deterministic manner. Third, they assume that accomplishing political and military objectives is best done using reductive processes: for instance, breaking one’s approach down into “ends-ways-means,” or into linked tasks along a “line” of either geographic advance (which makes some sense, as it is physical in nature) or of “purpose,” naturally nested under other “higher” purposes.

But what has such thinking given us of late? Surely one can make the case that today’s security environment is much worse than it was a year after 9/11.

The Assumptions Behind the Strategic Method

The U.S. military by and large adopted Herbert Simon’s position on public administration research long ago: our methods are not wrong; it is our execution of those methods that is the problem. In his article, COL Perez castigates the normal military education practice of historical case studies and facilitated discussion. Instead, he recommends multiple iterations of:

“conduct[ing] research on a given problem, posit[ing] hypotheses…, propos[ing] desired outcomes, and posit[ing] recommended interventions.”

I would offer that this is not the same as “performing” strategy, even in a laboratory-like manner. The students are merely dealing in conjecture, without any benefit of seeing how their ideas, or, even better, their approach, would play out in the “real” world. Without somehow putting their ideas into real practice (impossible), they are only learning whatever steps COL Perez believes are necessary to conduct strategy. They are not learning anything about the viability of the steps themselves (maybe there is a better way to “do” strategy?); they are not learning whether their causal arguments would hold up (I would be surprised if the students utilized anything more than linear logical thought processes, not that complicated regression analyses would offer much more); and, most importantly, they are not learning whether strategy in and of itself is the right tool to use. Simulations could help somewhat here, but simulations come with their own issues.

The greater problem with COL Perez’s solution is the underlying assumptions behind the strategic approach. Indeed, these same problems also color his proposed steps for conducting strategy. The underlying assumptions that COL Perez, and, to be fair, the entire U.S. Military, makes are those associated with Herbert Simon’s logical positivism. That is, the U.S. Military assumes that war can be approached just like the study of an atom: conduct research, propose hypotheses, propose desired outcomes, and apply causal analysis and analytical skills related to complexity, institutional analysis, narrative framing, and ethics. COL Perez even goes so far as to suggest that those conducting strategy must “see” the environment of their future intervention in the same way a young officer builds a terrain model and conducts rehearsals, creating “a depiction of the strategic landscape…”

There are two fundamental problems, from my view, with the U.S. Military’s preferred approach. The first is that the institution assumes the logical positivist view that war is fundamentally the same as a solar system or a rock. In the military’s case this takes the form of conflating tactics with the aggregated effects of tactics: the U.S. Military assumes that tactical action is the same as aggregating that tactical action over time to accomplish policy objectives. The same cognitive tools a platoon leader might use to prepare for an ambush patrol are assumed to serve higher-ranking staff officers and leaders in the accomplishment of policy objectives; that is, cause and effect are treated as linear and deterministic, no matter what kind of phenomena one is confronted with. It is the same faulty logic that assumes the lines-of-operation construct can be used to accomplish objectives during “stability” missions simply by aligning the lines under their purpose, as if “purpose” and “stability” are best thought of as a geographic landscape or, as COL Perez describes, an “obstacle course.”

The second fundamental problem with the U.S. Military’s preferred approach is that it is akin to pseudo-science. Constructing “hypotheses” and utilizing “causal analysis” implies a use of the scientific method, replete with falsification testing. Falsification testing is the normal way in which scientists conduct hypothesis testing and is, at least in theory, the way in which social scientists also try to figure out what causes what by crunching lots of numbers. There are surely, one would think, quantitative methods that can rule out spurious claims about various things that supposedly led to a victory, or even a loss, in a specific example of war.
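
To make concrete what falsification testing actually involves, here is a minimal sketch of a permutation test, one standard quantitative method for ruling out spurious causal claims. Everything in it is invented for illustration: the “planning method X,” the campaign counts, and the win rates are synthetic, randomly generated numbers, not data about any real conflict.

```python
# A minimal sketch of falsification testing via a permutation test.
# All data are synthetic and illustrative; "method X" is hypothetical.
import random

random.seed(0)

# Hypothetical record: 20 campaigns planned with method X, 20 without.
# True = "victory", False = "defeat" (randomly generated for the sketch).
with_method = [random.random() < 0.60 for _ in range(20)]
without_method = [random.random() < 0.50 for _ in range(20)]

observed_diff = sum(with_method) / 20 - sum(without_method) / 20

# Null hypothesis: method X has no effect. If so, shuffling the labels
# should produce differences as large as the observed one fairly often.
pooled = with_method + without_method
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:20]) / 20 - sum(pooled[20:]) / 20
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:+.2f}, p = {p_value:.3f}")
# A large p-value means the data fail to falsify the null hypothesis:
# there is no warrant for claiming method X caused the victories.
```

The point of the sketch is not the arithmetic but the discipline it imposes: a causal claim about what “worked” must survive a test designed to show it could have arisen by chance, which is precisely what a cherry-picked historical case study never attempts.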

What kind of hypotheses, however, do military “scientists” test? What kinds of regression models do they build? Rarely, if ever, have I observed military “scientists” conducting regression analyses, the main tool for conducting hypothesis testing in the social sciences. The average military “scientist,” assuming that is what we should call a military officer teaching military “science” in a university classroom, simply trains military students in the steps of military leadership, patrolling fundamentals, and aerobic conditioning. Principles are taught through historical case studies.

Likewise, most military officers involved in higher-level studies do not apply the kind of quantitative analytical tools to their hypotheses that most social scientists might. The U.S. Army War College, where COL Perez teaches, makes assumptions about strategic theory in much the same way a fundamentalist religious college might about how life began on Earth. That is, the College does not encourage its students to question the underlying theory. As its website states: “Graduates of U.S. Army War College programs… [t]hink strategically and skillfully develop strategies…” If the College expects its graduates to think strategically, then there is little room to think critically about whether strategic thinking is even a valid concept. This simply reflects, however, what is assumed in the U.S. Military’s doctrine: in short, the doctrine does not question the assumptions of logical positivism. Even more curious, the U.S. military does not actually apply the methods of the social sciences (multivariate regression, for one, but even proper methodological case analysis), even though it preaches them in its doctrine. Instead, most U.S. Military doctrine and theory are supported by case studies using historical methodologies.

I assert that most, if not all, of these case studies are cursory at best: they are cherry-picked to support the assertions made in the doctrine. This is not surprising, as there is a difference between the purposes of historical case studies and social science case studies: the former attempt to make sense of a specific context, while the latter attempt causal analysis.

Take Center of Gravity (CoG) analysis, for instance. COL (ret.) Dale Eikmeier, a professor at the U.S. Army War College, in a podcast for the website Principles of War, used a case study of the Battle of the Atlantic during World War II to prove that his way of using CoG analysis works. This is disingenuous for two reasons: first, there is no guarantee that the same tool would be successful in a situation in which the key variables in play are different; and second, there is no evidence that anyone actually used CoG analysis during the Battle, so by definition it proves nothing.

Pseudo-Science?

The physical sciences often scoff at the social ones for producing next to nothing in terms of rigorous scholarship and repeatable experiments. Much the same can be said within the military: artillerists, logisticians, and soldier performance specialists have made great strides in targeting, logistics, and battlefield performance, but strategists have largely failed of late. If social scientists and military scientists cannot offer up causal explanations for their fields, then what good are they?

There are two options. One is that the military could double down on its current methods, much as the fans of quantitative methods do in the social sciences, dreaming of the day when the right algorithm will reveal the magic variable, or group of variables, that explains everything. This means trying to execute our current methods better. The other option is to question whether the current method is the correct method in all situations; that current method being, of course, strategic thinking and the logical positivist philosophy behind it.

What, however, can military “science” offer military practitioners? Causal assertions mapped out linearly on white boards with nothing to back them up other than linear logic or conjecture? Hypothesis testing through cherry-picked case studies? Linearly-constructed theories of action within reductive frames that, when confronted with failure, are blamed on either poor execution of the theory of action or a flawed overarching political logic? This is pseudo-science at best.

I suggest instead that military science should view itself as a “postnormal science,” the label Norma Riccucci describes Funtowicz and Ravetz applying to the discipline of public administration. This stands in contrast, first, with “normal science.” “Normal science,” as Riccucci describes Thomas Kuhn’s idea, is a science governed by a well-developed set of paradigms: fundamental assumptions about the nature of the world and how humans gain knowledge of it. Sciences that can be described as “normal” work well enough for whatever problems they were originally meant to handle (think physics and rocket trajectories), and their paradigms are only replaced when they are unable to handle a given set of problems (think quantum mechanics). When the prevailing explanations can no longer explain things sufficiently, a revolution in thought occurs, replacing the old paradigm with a new one. These sciences remain, however, normal in the sense that they deal with phenomena for which good explanations are constantly improved upon.

Postnormal science, however, describes those disciplines that are not governed by one set of well-developed paradigms. To put it another way, a postnormal science is one which is not able to solve the fundamental problems it was set up to solve because of the nature of its subject. “A postnormal science is a process of inquiry for which objectivity is not always achievable. Environmental factors, particularly politics, interfere with the quest for objectivity, and consequently, prediction and control are limited.” It “is one that is relevant when high risks, uncertainty, and divergent values prevail.” Since the military is beholden to politics, this kind of thinking arguably matches the activities the military engages in more closely than those with which a “normal” science would deal.

A postnormal science, according to Riccucci, requires multiple methods, each borrowing from different sets of assumptions about the nature of things and how humans gain knowledge about them. She borrows the term “triangulation” from Michael Patton, or “mixed methods,” to describe the use of multiple methods. The key, as she points out, is to use methods that carry different assumptions, as this yields a closer approximation of useful insight than a researcher would get from one method alone.

In a military context, it might seem on the surface that this would only apply to how the military approaches situations and learns from them. I would posit this may be true: actual action “within the world” could still follow much the same pattern the military uses now (set an objective, drive towards the objective, re-evaluate), and the ways in which military planners and commanders think and learn, during and afterwards, might be the only things that need changing. The alternative is that the military has simply not developed this idea sufficiently, and that applying a mixed methods approach could entail designing everything, including one’s actions, in a mixed methods manner. If true, this would be more akin to reflective practice and learning-in-action.

In conclusion, I encourage military thinkers, especially the young, up-and-coming ones (us old guys are institutionalized into our own self-constructed psychic prisons), to question the assumptions that lie behind strategic thinking and the associated tools of the logical positivists that have held sway in the U.S. Military since its early days. We might prefer a world that matched our intuition and a battlefield that lent itself to the simple gathering of empirical evidence to tell us whether we are winning. We should not, however, pretend that case studies offer up proof that can help us falsify hypotheses, nor should we assume that the only things that are real are those that can be measured or observed through our senses; the hard sciences, especially quantum mechanics, no longer even hold that to be true. Instead, we might want to investigate the worldview of the post-positivists, who believe that social phenomena are different from physical phenomena and that our methods for understanding them, if not for interacting with them as well, should thus also be different.

Grant M. Martin is a lieutenant-colonel in the U.S. Army, assigned to U.S. Special Operations Command. He is a graduate of The Citadel and holds an MBA from George Mason University, an MMAS from the School of Advanced Military Studies, and is currently a Ph.D. candidate at NC State University. His views are his own and do not represent the position of the U.S. Army or USSOCOM.
