
Armed with Data - Turning Common Sense into Common Practice

Updated: Mar 11, 2022

Common sense does not necessarily mean common practice, especially when it comes to the use of data to drive organizational decision making.

Weaponizing data is no different than weaponizing the humans you’re collecting the data from. It’s a collective effort that requires providing purpose, equipment, and education that empowers the end users to operate on behalf of an organization that serves the agenda of elected officials and, hopefully, the citizens who elected them. Citizens might see world peace, or they might just see lower gas prices, on the other side of armed conflict, but we cannot expect soldiers to adopt the same purpose as the people they’ve sworn to defend. They need their own reasons for going to war. Similarly, we can’t expect the individual who’s sweating all over some uncomfortable wearable before placing it into the hands of some untrusted “scientist” to share the human performance team and leadership’s passion for heart rate variability and respiratory rate.

Let’s take a look at how we can more effectively weaponize data in the tactical setting, with a common theme of consideration for the professional who closes with and engages the enemy, enters the burning building while the rest of us run out, or regularly interacts with the dark side of humanity while patrolling the streets where we roll the windows up, turn the music down, and drive a little faster.

Burn Down the Silos

Include as many stakeholders in the process as possible. The room can get crowded quickly, so keep in mind that while everyone might get a voice and a vote (and even if every vote counts), not every vote can win. Ultimately, leaders are responsible for everything their organizations do or fail to do, so the decision rests in their hands. The human performance staff can inform and even attempt to influence a leader’s decision but must execute the unit’s agenda and not their own, no matter how far from their ideals it might fall.

Some folks to include in the process:

  • Leaders at All Levels - down to the one who’s bunking alongside the end user

  • Relevant Human Health and Performance Staff - including anyone who might need to make adjustments to how they do business based on the information collected

  • Logistical Support Personnel - such as those who can troubleshoot hiccups in the collection process

  • Uninvested Third Party - at least one set of eyes from an emotionally detached entity can help maintain objectivity

  • Sample Representation of End Users - the people who are providing the data

Unfortunately, the Sample Representation of End Users is too often neglected when setting up a system for how data is going to be incorporated into operations. They bring an invaluable element of reality-based skepticism that is inherently invisible to leadership.

Involving end users early and often helps to:

  1. Avoid implementation pitfalls only they can see from their perspective on the ground

  2. Build the trust needed to ensure efficacy of the data collected (especially subjective information)

  3. Instill ownership that increases the likelihood that actionable feedback is actually actioned

Each stakeholder should have their own WHY for being involved (besides FOMO).

Observe the Process as Much as the Outcome

Shoutout to Dan Bornstein (an actual scientist) for this one. If we only watch the marionette and see it dance, we won’t know how the hand that manipulates it generated such sweet moves. Data is typically collected in an effort to enhance outcomes, or dependent variables that we’d like to see improved. The hand that manipulates those outcomes consists of process-oriented independent variables. BUT almost every variable is dependent on something…

Even the Fingers that Pull the Strings Have a Brain that They Depend On

We’re trying to evaluate data in the real world and not a randomized, double-blind, placebo-controlled study… We cannot account for, let alone control, every last variable, but we can at least be aware of their existence.

Let’s say the outcome we’d like to observe is lethality, as measured by marksmanship accuracy. Maybe we hypothesize an improved outcome associated with faster heart rate recovery. We can evaluate aerobic fitness as part of the process and we can evaluate shooting accuracy as part of the outcome. Keep in mind, however, that even aerobic fitness is an outcome (dependent variable) that has its own preceding process (independent variable) such as physical training, body composition, genetics, etc. We might want to evaluate those inputs too.

Chances are that almost every layer of the process is also an outcome with a preceding process. So… how deep do you take it?

In addition to your outcome, look to evaluate the cog in the process that is the most:

1. Controllable - If the cog cannot be manipulated then it’s not worth wasting much time collecting information on it

Trying to increase a candidate’s sleep quantity at Ranger School isn’t going to happen because it’s not in that candidate’s control. Instead, maybe focus on sleep quality so they can make the most of the little sleep they do get.

2. Influential - Look for the domino that’s most likely to knock down other dominoes that influence the outcome you’re measuring

Basic trainees are shown to be undernourished - just imagine how many dominoes of physical and cognitive health and performance upgrading their nutrition would knock down.

3. Implementable - If you provide feedback to the person who holds the domino, will they actually implement it, consistently and sustainably?

Even soldiers are humans, and humans have free will. Ignoring their preferences, willingness to change, and other behavioral patterns might leave you with an unexecuted ideal: an extremely controllable and influential process-oriented solution that would surely enhance your outcome, if only anyone actually followed it.

Last note on process monitoring. Let’s say, like many tactical entities, you decide to record injury stats as an outcome measure of success for a human performance program. Now let’s say, like many programs, those injury numbers increase significantly.

If you only measure the outcome, you’ll miss the likelihood that what appears to be an outcome failure can be attributed to a process success.

In this case, injury numbers are increasing because more soldiers/first responders are seeking care for injuries they would have ignored in the past, due to increased interaction and trust in the performance program staff.


Provide Frequent & Meaningful Feedback

Much like data collection should inform and influence up the chain, feedback from that data should inform and influence the end user (soldier, firefighter, officer, etc.).

The appropriate frequency depends on the information shared and how quickly it can be actioned. Frequent, consistent touch points can help hold people accountable without the threat of repercussions. For example, a weekly recap of quantity and focus of physical training serves as a reminder that physicality is valued by the organization and its leadership, without the heavy-handedness of a potentially punitive and logistically intensive PT test administration. Coaches are extremely savvy at weaving small doses of tests into training to provide actionable, non-threatening feedback. Some feedback, like sleep quality and quantity, might be better delivered daily, while other feedback, like heart rate variability, might be best looked at periodically to identify trends over time and how they correlate to personal and professional events.
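The trend-over-time idea can be sketched in a few lines of code. This is a hypothetical illustration, not a prescribed protocol: the 7-day window, the 1.5-standard-deviation threshold, and the sample RMSSD values are all made-up assumptions chosen to show the concept of judging a reading against the individual's own rolling baseline rather than against single days or an arbitrary ideal.

```python
# Hypothetical sketch: flag days where a daily HRV reading (RMSSD, ms)
# drops well below that individual's own trailing baseline.
from statistics import mean, stdev

def hrv_trend_flags(daily_rmssd, window=7, threshold=1.5):
    """Return indices of days where HRV falls more than `threshold`
    standard deviations below the trailing `window`-day baseline."""
    flags = []
    for i in range(window, len(daily_rmssd)):
        baseline = daily_rmssd[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and daily_rmssd[i] < mu - threshold * sigma:
            flags.append(i)
    return flags

# A stable week followed by a sharp dip on day 7 (index 7)
readings = [62, 65, 63, 66, 64, 63, 65, 48]
print(hrv_trend_flags(readings))  # -> [7]
```

Flagged days can then be cross-referenced against personal and professional events (field exercises, night shifts, family stress) before anyone draws conclusions.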

Feedback should have meaning to the person providing it - otherwise, why are you collecting the data that drives it? Feedback should also have meaning to the person receiving it. Remember, you cannot impose your WHY on someone else. You can, however, include education that empowers the end user to incorporate the feedback provided, likely increasing their willingness to continue to provide the data that leads to it.

Education that accompanies feedback should:

  • Interpret the Information - Rather than provide raw data, explain the implications of that data in a way that’s relevant to the end user

Example: Instead of simply telling someone how many hours they spent in REM sleep, explain what REM sleep is and why that matters for them personally and professionally.

  • Suggest Potential Courses of Action - Don’t just identify problems, provide solutions. Make sure solutions are not so idealistic that they don’t account for the reality of the end user.

Example (continued from above): Increasing sleep quantity might be impossible due to current operational tempo, but suggestions could focus on components of sleep hygiene that help improve the quality of sleep.

  • Provide Comparative Norms - Let the end user know where they stand compared to their peers. Competition can be a powerful motivator within the tactical setting and a comparison to people who have similar challenges instead of an arbitrary ideal is easier to relate to and harder to excuse.
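A peer comparison can be as simple as a percentile against the unit's own numbers. The sketch below is illustrative only: the `peer_percentile` helper and the squad push-up counts are invented for this example, and it assumes a higher-is-better metric.

```python
# Hypothetical sketch: frame an individual's score against peers who
# face similar challenges, rather than against an arbitrary ideal.
def peer_percentile(score, peer_scores):
    """Percent of peers scoring at or below this score (higher = better)."""
    return round(100 * sum(s <= score for s in peer_scores) / len(peer_scores))

# Made-up push-up counts for one squad
squad = [42, 55, 61, 48, 70, 66, 58, 52]
print(peer_percentile(61, squad))  # -> 75
```

"You're at the 75th percentile of your squad" is harder to shrug off than a comparison to a population norm drawn from people who don't share the end user's workload.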

A related word of caution...

Selection processes often collect data and never provide feedback, perhaps because that data is used for candidate evaluation or for driving extremely high level decisions that do not directly affect the candidate. Many of those candidates continue to carry their distaste for data and distrust of the human performance professionals who collected it on to their assigned organization, creating quite an uphill battle for the receiving staff to win them back. Treat candidates less like guinea pigs and more like the prospective professional assets that they are.


Tactical professionals hate nothing more than providing personal data that disappears into a black hole while serving someone else’s agenda. Include them in the process, along with as many stakeholders as possible without excessively bogging things down. Be sure to evaluate the processes that lead to valued outcomes, and sometimes even evaluate those processes’ processes. Lastly, provide frequent and meaningful feedback that empowers the end user to act - because even actionable feedback requires actual action.

