Moneyball and the Air Force

By Col. Stuart Pettis
I've often been asked what books influenced me as a leader. The first book I mention usually turns heads; it's "Moneyball: The Art of Winning an Unfair Game" by Michael Lewis. While some may have read the book, others are probably more familiar with the 2011 movie starring Brad Pitt and Jonah Hill.

Moneyball is the story of the 2002 and 2003 Oakland A's, a small-market baseball team trying to compete on a frugal budget against behemoths with vast resources, such as the New York Yankees. Both the book and the movie show how the A's and their general manager, Billy Beane (played by Brad Pitt in the movie), were able to compete successfully through a better understanding of which measures and metrics were important. The A's believed that traditional baseball metrics measured things that weren't important while neglecting other, more important aspects of the game.

For example, traditional baseball valued saves, earned when a pitcher came in late in a game and preserved a win ... or, as the Oakland A's thought, didn't mess up. Because it was easy to take a modest pitcher, put him into the game and ensure he didn't mess up, the A's were able to create pitchers with a lot of saves ... something traditional baseball minds overvalued. They were then able to trade these pitchers to other teams for more than they were really worth, creating value from nothing. Another example was that they valued walks while rejecting steals ... as Brad Pitt's character explains in the movie, "I pay you to get to first, not get thrown out at second." The book is filled with these sorts of examples, and the takeaway for me is the question, "Are we measuring something valuable?"

Early in my Air Force career, we embarked on a Quality Air Force kick which led to a slew of metrics being developed. In fairness, Quality Air Force, which came from the civil sector's Total Quality Management, had many great attributes; in particular, its focus on process improvement, pushing decision making down and meeting customers' requirements was valuable.

However, what many remember were extensive briefings full of metrics, many of which seemed arbitrary at best. One of my favorite Quality Air Force stories is from when a squadron commander asked me to create a metric that captured what I did. When I asked what exactly he wanted, he replied, "anything, as long as it says I'm doing a good job." With that clear guidance, I was able to meet his intent, although the metric itself was worthless.

This isn't to say that all metrics are bad or that the people who use them are nefarious. Rather, we as Air Force professionals need to be careful, just as Billy Beane and the A's were, that the metrics we use measure something valuable, and that we're not focusing on metrics that measure unimportant things.

For example, one common metric I've seen measures our timeliness in submitting OPRs, EPRs and decorations to a higher headquarters. No one would argue that getting our Airmen's reports and decorations into their records prior to a board is unimportant. But is getting a draft report to higher headquarters what matters, or is it tracking when reports actually make it into someone's records? I'd argue it is the latter.

Another example comes from my time in Iraq. When I was there, the Army headquarters I supported inherited a metric that measured close air support (CAS) effectiveness by comparing the amount of CAS we requested against the amount we received. Fortunately, I was able to show that the metric contained a fixed variable (the total number of CAS aircraft available) and several uncontrollable variables (the amounts of CAS requested by other organizations). Since the only variable I could control was the amount of CAS we requested, the only way to improve the metric was to request less CAS ... something incredibly dumb. Fortunately, my Army counterparts agreed, and we were able to focus on more worthwhile metrics.
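To make the arithmetic concrete, here is a minimal sketch of that kind of "requested versus received" ratio, with purely illustrative numbers (the sortie counts and names are my assumptions, not figures from the deployment):

```python
# Sketch of a "requested vs. received" effectiveness ratio (all numbers hypothetical).
TOTAL_CAS_AVAILABLE = 100  # fixed variable: total CAS sorties the theater can fly
OTHER_UNITS_DEMAND = 70    # uncontrollable variable: CAS requested by other organizations

def effectiveness(our_requests: int) -> float:
    """The sorties we receive are capped by whatever supply is left over."""
    received = min(our_requests, TOTAL_CAS_AVAILABLE - OTHER_UNITS_DEMAND)
    return received / our_requests

print(effectiveness(60))  # request 60, receive 30 -> 0.50
print(effectiveness(30))  # request 30, receive 30 -> 1.00: "better" only by asking for less
```

The only lever the metric leaves you is the denominator, so the score improves as requests shrink ... exactly the perverse incentive described above.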

My suggestion for Airmen is to stop and think any time you're asked to prepare a metric or are presented with one. The questions I ask myself are:

Does this measure something valuable to our organization?

Are there variables in the metric that are beyond our control?

Is this metric specific enough that it allows us to improve something?

We also need to accept that sometimes metrics will look bad. Often they look bad for reasons beyond an organization's control; for example, a squadron preparing for a major functional inspection may be letting other things slide ... which is understandable, provided they catch back up afterward. Finally, we need to understand that if we're measuring worthwhile things, there should be areas where we are lagging ... otherwise we'd have nothing to improve.