Showing posts with label Army. Show all posts

Wednesday, March 25, 2026

Sooner or Later, Someone Is Going to Need to Think


The military doesn't do PT every day because every day requires a high level of physical fitness.  The military runs every day because there are days when they will have to run, and they want to be ready.

Nobody questions this.  Nobody argues that physical training is a waste of time because most duty days don't involve sprinting.  The logic is obvious: peak demand is unpredictable, preparation must be continuous, and the time to build the capacity is before you need it, not during.

We have no equivalent practice for thinking.

We should.  And the fact that we don't is about to become one of the most consequential gaps in how we prepare people, in the military, in business, in education, for a world saturated with artificial intelligence.

The Wrong Way to Hear This

Don't get me wrong.  This is not an argument against using AI. I am not about to tell you to put down the chatbot and pick up a pencil.  That argument is boring, it's wrong, and it misunderstands the problem completely.

The case for cognitive independence is not anti-AI any more than the morning run is anti-vehicle.  Soldiers run every day.  They also drive vehicles, fly helicopters, and ride in the backs of Strykers.  The running doesn't replace the vehicles.  The running makes them better at everything they do, including the things they do from vehicles.  Cardiovascular fitness affects alertness, stress tolerance, decision-making under fatigue, and recovery time.  You don't run instead of driving.  You run so that when you're driving, or planning, or leading, or making a call under pressure, you're operating from a higher baseline.

That's the argument for cognitive independence.  Not that you should think without AI.  That you should be able to think without AI, so that when you think with AI, you're actually thinking and not just accepting.

The Muscle You Don't Know You're Losing

The research on this is early but it's pointing in a direction that should make anyone paying attention uncomfortable.

In 2025, a team at MIT's Media Lab ran an experiment.  They had people write essays under three conditions: with ChatGPT, with a search engine, or with no tools at all.   Then they measured what happened in their brains using EEG.  The people who used AI produced their work faster.  But they also showed weaker brain connectivity, lower memory retention, and (this part is striking) a fading sense of ownership over what they'd written.  The AI-assisted group didn't just think less hard.  They stopped experiencing the work as theirs.

That study is small and not yet peer-reviewed, so I want to be careful not to overweight it.  But it's consistent with a pattern the automation bias literature has documented for decades.  When autopilot systems became standard in commercial aircraft, researchers discovered that pilots who relied on automation for routine flight operations showed measurable degradation in their ability to fly manually when the automation failed.  The skills were still there, somewhere.  But the reaction times were slower, the judgment was less crisp, and the confidence was lower.  This wasn't because the pilots were lazy or bad.  It was because the skill wasn't being exercised, and unexercised skills atrophy.  That's not a moral failing.  That's physiology.

A theoretical perspective paper published in Cognitive Research: Principles and Implications laid out what likely makes AI-specific atrophy particularly insidious.  The researchers identified what they called "illusions of understanding": people who work with AI develop a false sense that they understand more than they actually do.  They believe they've considered all the options when they've only considered the ones the AI surfaced.  They believe they grasp a problem deeply when they've actually just accepted the AI's framing of it.  And they believe the AI's output is objective when it carries the biases of its training data.

The worst part?   These illusions remain hidden until the AI is removed.  Performance looks fine.  The person feels competent.  The gap only becomes visible at exactly the moment you can least afford to discover it, when you need the independent judgment and it isn't there.

There's another dimension that I think the literature is just starting to catch up with.  A 2025 study published in Scientific Reports ran four experiments with over 3,500 participants.  People who worked with generative AI and then transitioned to working alone reported significant decreases in intrinsic motivation and increases in boredom.  Some of that is predictable.  If you've been using a powerful tool and someone takes it away, of course the old workflow feels slower and more tedious.  Going back to fat-fingering Python after a month of Claude Code is going to feel boring.  That's rational.

But the study found something harder to explain away.  Even people who kept the AI for both tasks showed declining motivation.  The contrast explanation doesn't cover that.  If boredom were just about losing the better tool, the people who never lost it should have been fine.  They weren't.

I think what's happening is something that anyone familiar with the research on intrinsic motivation would predict.  People are intrinsically motivated by three things: autonomy, mastery, and purpose.  The sense that you're directing your own work.  The feeling that you're getting better at something difficult.  The belief that the difficulty matters.  When a new technology takes over the parts of the work where those three things lived (the challenge, the craft, the small acts of problem-solving that prove to you that you're good at what you do), intrinsic motivation drops.  Not because the person got lazy, but because the fuel is gone.

This is predictable.  It has happened every time a technology has displaced skilled craft work and it is, at bottom, a leadership problem.  When you introduce a technology that strips autonomy, mastery, and purpose out of someone's workflow, you should expect a motivation collapse unless you actively manage the transition.  Unless you help people find the new sources of mastery in the AI-augmented workflow.  Unless you rebuild purpose around the capabilities that remain distinctly human.  Left unmanaged, the gap between "more productive" and "less engaged" will widen until the productivity gains are eaten by the disengagement they created.

Whether you frame it as skill atrophy, illusions of understanding, or the erosion of intrinsic motivation, the direction is the same.  The person who used to draft a planning estimate from scratch now edits one that the AI produced. The person who used to argue with a source's methodology now skims the AI's summary and moves on. The person who used to stare at a blank page until the right framing emerged now never sees the blank page at all.

These are capacities.  They require exercise.  And if the early research is any indication, they are at serious risk of quiet degradation across an entire generation of knowledge workers who are using AI every day without maintaining the underlying cognitive fitness that makes their AI use worth anything.

Every Day Is Leg Day

Most versions of this argument focus on the wrong scenario.  They say the danger is the rare day when the technology fails, the network goes down, the power cuts out, the system crashes.  And sure, that's real.  If your AI tools go offline and you've lost the ability to think without them, you're in trouble.

But the strongest case for cognitive independence is that it matters every single time you use AI.  Not just on the day the system fails.  Every day.  Every interaction.

Every interaction with AI is an evaluation task.  The AI produces something. You have to decide: Is this good enough?  Is this framed correctly?  Is something missing?  Should I act on this?  Every one of those decisions requires independent judgment, judgment that didn't come from the AI, that exists prior to the AI's output, and that you bring to the interaction from your own thinking.

If you can't do that, if you can't form an independent take before or alongside the AI's output, then you're not using a tool.  You're being used by one.  You're a rubber stamp with a salary.

The military doesn't just run for the rare day someone has to chase an insurgent through an alley.  Cardiovascular fitness affects everything: how clearly you think at hour fourteen of a planning cycle, how quickly you recover from a bad night's sleep, how well you regulate your stress response when the plan falls apart.  The fitness isn't for the emergency.  The fitness is the baseline that makes everything else work.

Cognitive independence is the same.  It's not for the day the network goes down.  It's the baseline that makes every AI-assisted decision trustworthy.  Without it, you're not collaborating with AI.  You're just surrendering to it in slow motion.

The Organizational Blind Spot

If this were just an individual problem, it would be serious but manageable.  People can decide to maintain their own cognitive fitness, just like people can decide to go for a run.

But PT in the military isn't optional.  It isn't left to individual motivation.  It is institutional.  It is scheduled.  It is led.  It is, in many units, the first thing that happens every duty day.  The organization decided that physical readiness was too important to leave to personal choice, because personal choice is unreliable when the thing you're choosing is difficult and the consequences of skipping are invisible in the short term.

Every condition that justified making PT institutional applies to cognitive fitness, and then some.  Cognitive atrophy is even more invisible than physical atrophy.  You can look in the mirror and see that you've gained weight.  You can't look in the mirror and see that you've lost the ability to independently evaluate an AI-generated planning estimate.  The degradation is silent, the consequences delayed.  And by the time you discover the gap, the moment you need the judgment and it isn't there, it's too late to build it.

This is a leadership problem, not a personal development problem.  When leaders introduce a technology that displaces the autonomy, mastery, and purpose their people used to find in their work, they own the consequences.  Expecting individuals to find new sources of meaning on their own, without organizational support, is like issuing Humvees and canceling PT because "they have vehicles now."  Nobody would do that.  But that is, functionally, what every organization adopting AI without investing in cognitive independence is doing.

Yet no organization I'm aware of has built cognitive independence maintenance into its daily rhythm the way the military builds in PT.  I teach senior military officers.  I watch them work with AI every day. The ones who came up solving hard problems on their own still push back on the machine, still catch the framing errors, still say "that's not quite right" and know why. 

But no one is scheduling twenty minutes of "think without the machine" before the workday starts.  No one appears to be assessing whether their team can still frame problems independently, generate alternatives without AI assistance, or catch errors in AI-generated analysis.  We seem to be measuring AI adoption rates, how many people are using the tools, how often, for what tasks, and treating that as progress.  We don't seem to be measuring whether the humans in the loop are maintaining the capacity that makes the loop meaningful.

We are tracking how far people drive.  We are not checking whether they can still run.

The Invisible Bet

Every organization that has adopted AI without investing in cognitive independence has made a bet.  Most of them don't know they've made it.

The bet is: our people will maintain the ability to think independently without any deliberate effort to ensure it.  They'll just... keep being sharp.  The AI will handle more and more of the cognitive work, but somehow the humans will retain the judgment to evaluate that work, to catch errors, to recognize when the framing is wrong, to know when to override the machine.

That bet has been tested in other domains (aviation, nuclear power, automated trading) and it has lost every time.  The more reliable the automation, the harder it was for humans to catch the automation's failures.  We already know how this goes.

The AI systems people are using today are more persuasive, more fluent, and more confident-sounding than any automation that came before.  They produce outputs that look like expert human work.  They structure arguments, cite evidence, and anticipate objections.  The psychological pull toward acceptance is enormous, and it increases over time as the user's own independent capacity decreases.  It's a flywheel, and it turns in only one direction.

None of this requires AI to be malicious or deceptive.  The system doesn't have to be trying to undermine your judgment.  It just has to be good enough that you stop exercising your own and the rest takes care of itself.

The Morning You Find Out

There is a moment, and it's coming for a lot of people, that will feel like stepping off a treadmill you didn't know you were running on.

Maybe it's the analyst who has been using AI to draft intelligence assessments for eighteen months and then gets asked, in a meeting with no laptop, to walk a general through her reasoning on a developing situation.  The AI isn't there.  The polished structure isn't there.  And she discovers, in real time, in front of people who matter, that she can't reconstruct the thinking that used to come naturally.  She's been editing AI drafts for so long that she's lost the ability to generate one.

Maybe it's the lawyer who has delegated research memos to AI for a year and then gets deposed.  Opposing counsel asks how he arrived at a particular legal theory.  He knows the answer is in the memo.  He can picture the paragraph.  But he can't explain the reasoning because the reasoning was never his.  He approved it.  He didn't build it.

Maybe it's simpler than that.  Maybe it's the moment you sit down to write an email, not a report, not an analysis, just an email, and you open the AI out of habit, and then you stop, and you try to write it yourself, and you notice that the words come slower than they used to.

Everyone is doing what they are supposed to be doing, what some organizations require that they do.  They used a powerful tool the way it was designed to be used.  They got more efficient. They produced more output.  They looked, by every metric their organizations track, like high performers.  The gap in their capacity was invisible right up until the moment it wasn't.

We know how to prevent this.  The military figured it out for physical fitness a long time ago. You don't wait for the moment someone needs to run.  You build the running into the daily rhythm so the capacity is there when it matters. 

We have not done any of this for cognitive fitness. Not in the military. Not in business. Not in education. We are fielding the most powerful cognitive tools in human history and we have not asked — seriously, institutionally, as a matter of policy — how we keep the humans sharp enough to use them well.

Sooner or later, we are going to need to think for ourselves. Will we still be able to?

Tuesday, May 25, 2021

What If "Innovator" Was A Job Title?

I have been thinking a lot about innovation recently.  It occurred to me that the US Army has a number of official specialties.  We have Strategists and Simulators and Marketers, for example.  Why not, I thought, make Innovator an Army specialization?  

I tried to imagine what that might look like.  I know my understanding of Army manpower regulations and systems is weak, but bear with me here.  This is an idea, not a plan.  Besides, what I really want to focus on is not the details, but how the experience might feel to an individual soldier.  So, this is one of their stories...

I made it! The paperwork just became final. Beginning next month, I am--officially--a 99A, US Army Innovator.

The road to this point wasn’t easy. I graduated college with a degree in costume design and a ton of student debt. After my plans to work on Broadway fell through (Who am I kidding? They never even got off the ground), I had to do something. The Army looked like my best option.

For the last two years, I have been a 68C, a "practical nursing specialist", working out of a field hospital at Ft. Polk. My plan had always been to make sergeant and then put in my OCS packet. Things changed for me after a Joint Readiness Training Center rotation.

Patients kept coming to us with poorly applied field dressings. They were either too tight and restricted blood flow or too loose and fell off. As I thought about it, it occurred to me that there might be a combination of fabrics, that, if sewn together correctly, would be easy to apply, form a tight seal to the skin, and still be easy to change or remove.

As soon as I got back to the barracks, I hit the local fabric store, pulled out my sewing machine, and made a prototype. It took a few tries (and lots of advice and recommendations from the doctors and nurses in the unit) but eventually I got it to work. I never thought I would be able to use both my nursing skills and my costume design skills in one job but here I was, doing it!


I wasn’t sure what I was going to do with my new kind of field dressing until one of the RNs made me demonstrate it for the hospital commander. He watched without saying a word. He finally asked a few questions to make sure he knew how it worked, and then things got quiet.

Finally, my RN spoke up, “I think we could really use something like this, Sir.” He stood up straight and said, “I agree.” Then he looked at me. “I’m going to hate to lose you, Specialist,” he said, “but I think you need to put in for an MOS reclassification.”

Until the hospital commander told me about it, I had never even heard of 99A. There were some direct appointments, of course, but those were coming out of places like MIT and Silicon Valley. For normal soldiers like me, getting into the Innovation Corps was more like going into Civil Affairs or Special Forces. You had to have some time in service but, more importantly, you had to have a good idea.

At first, it was easy. I simply submitted my idea to a local Innovation Corps recruiter.  I included some pictures and a short video that I shot on my cell phone of my prototype in action.  The recruiter told me that the Army used the same “deal flow” system used by venture capitalists. I’m not sure what that all entails but, in the end, it meant that my idea was one of the 50% that moved on to the next level.

For more info on deal flows, see Basics of Deal Flow.

My next step was a lot more difficult. You can think of it as the Q course for Army innovators. I went TDY for a month to the Army's Innovation Accelerator in Austin, Texas. Like all business accelerators, the goal was to give me time, space, mentorship and (a little) money to flesh out my idea. I worked with marketing experts and graphic designers to come up with a good name and logo. I worked with experts in the manufacturing of medical equipment to help refine the prototype. I even had a video team come in and make a great two-minute video showcasing the product. It was exciting to see all of the other ideas and to have a chance to talk about them with the enlisted soldiers, officers, and even some college students and PhDs--all trying to bring their ideas to life.

The Army crowdsourced the decision about which projects got to move on from the accelerator. That meant that each of us put together a “pitch page,” kind of like what you would see on Kickstarter or IndieGoGo. Units all across the Army had a fixed number of tokens they could spend on innovative projects each quarter. Each of us needed to get a set number of tokens or we would not be allowed to move on. In the end, out of the hundreds of applications and the dozens of people at the accelerator, I was one of the 10 chosen to move forward, one of 10 who gets to call themselves a US Army Innovator.

That’s where I am today. My next step is a PCS move to a business incubator. I could stay here in Austin with the Army’s business incubator, but the Army has deals with incubators all over the country. I am hoping to get a slot in one of the better medtech incubators in Boston or Buffalo. It will be a two year tour (with the possibility of extension), which should give me plenty of time to bring my idea to market, with the Army as my first customer.

For me, the best part is that I am now getting Innovation Pay. It is a lot like foreign language proficiency pay or dive pay. I’m not getting rich but it sure is better than what I got as a specialist. More importantly, there are ten tiers, and each time you move up, you get a pretty substantial raise. This means that once you become an Innovator, you are going to want to stay an Innovator.

The other great part about this system is that you can move up as fast as you can move up. There are no time-in-service requirements. If I am successful in the business incubator, for example, I could be a CEO (Innovator Tier 6) in just a couple of years. Running my own company at 28? Yes, thank you!

And if I fail? I know there are still bugs to work out with my idea. I have to get the cost of production down, and there are lots of competitors in the medical market. Failure could happen. While I won’t be happy if it does, the truth is that, by some estimates, 90% of all start-ups fail. The Army has thought about this, of course, and gives Innovators three options if their projects fail. 

First, I could go back to nursing. I would need some refresher training but my promotion possibilities wouldn’t take a hit. The Army put my nursing career on pause while I was in the Innovation Corps. 

The second option is that I come up with a new idea or re-work my old one. The Innovation Corps has developed a culture of “intelligent failure,” which is just a fancy way of saying “learn from your mistakes.” In an environment where 90% of your efforts are going to fail, it is stupid to also throw away all of the learning that happened along the way. Besides, the Army also knows that persistence is a key attribute of successful entrepreneurs. The Army wants to keep Innovators who can get up, brush themselves off, and get back in the saddle. 

Finally, I might be able to go back to the accelerator as an instructor or take a staff position in Futures Command or one of the other Army organizations deeply involved in innovation.

I’ve had a chance to talk to a lot of soldiers, enlisted, NCOs, and officers, on my journey. The Innovation Corps is pretty new and, while many have heard about it, almost none of them really understand what it takes to become an Innovator. That doesn’t seem to matter though. Almost all of them, and particularly the old-timers, always say the same thing: “The Army has been talking about innovation my whole career. I am glad they finally decided to do something about it.”

For me? I’m just proud to be part of it. Proud to help my fellow soldiers, proud to help the country, and proud to be a US Army Innovator.