The dominant question about AI is what people will do when machines can do everything. The question is badly framed, and the framing has consequences.
It assumes that meaning lives in labor, that what people do for a living is what makes them people. Lose the job, lose the meaning. This turns every advance in automation into an existential threat, and every response into either a defense of human employment or a promise that new jobs will appear. Both responses accept the premise. Neither examines it.
The real anxiety isn't about jobs. It's about whether humans will retain the capacity to set the terms of what machines optimize for. The question isn't what we'll do. It's whether we'll still be able to say what counts as good.
That distinction matters practically. A world where machines handle most production but humans retain the capacity to evaluate and judge what's produced is a different world from one where machines handle production and humans have lost the ability to tell whether the results are worth having. Both worlds might have full employment. Both might have impressive GDP. Only one has anyone at the wheel.
What follows is an argument about what sustains that capacity, in individuals and in the organizations that increasingly run on machine output. The answer involves something most people assume is either innate or trivial: the ability to look at what a system has produced and say, before you can fully explain why, that something isn't right, and then to do the work of explaining why.
That capacity is called taste. It is more fragile, and more dependent on the conditions in which people work, than it first appears. To understand why, it helps to start with how judgment develops in the first place.
The Philosophy of Friction
You don't arrive in the world knowing what you think. You discover it by running into resistance.
You try to build something and the material won't cooperate. You try to write a sentence and it refuses to say what you meant. You try to lead a team and the team pushes back. The friction between your intention and the world's response is where self-knowledge gets produced. You learn what you actually value by discovering what you're willing to fight for when things go wrong.
This is a description of how competence and judgment develop in practice. The carpenter who has only ever used a nail gun doesn't understand fastening the way the one who started with a hammer does, because the resistance taught something the automation skipped. The engineer who has only ever accepted AI-generated code doesn't understand the architecture the way the one who wrestled with it manually does, because the wrestling is where understanding lives.
Work, in the sense that matters here, is any sustained encounter with something that resists your will and forces you to revise your understanding. Parenting is work in this sense. So is writing. So is debugging. So is learning to read: the child who struggles through a sentence that resists their current understanding of the world is revising their model of what's possible, not just their decoding. An AI that reads aloud to them eliminates the friction of decoding, but also the encounters where comprehension develops.
The automation threat, stated plainly: if machines absorb the resistance, if they smooth the path between intention and execution, what happens to the process that produces people capable of having intentions worth executing?
This is not nostalgia for manual labor or an argument against tools. Good tools introduce new forms of friction even as they eliminate old ones. The printing press eliminated the friction of manual copying but introduced the friction of evaluating competing texts, because more material demanded more judgment about what to trust. The calculator eliminated arithmetic but introduced the friction of deciding what to calculate and interpreting the results. In both cases, the tool displaced difficulty upward, into territory that required more judgment.
The question is whether the current wave of automation is doing that, displacing friction upward, or whether it's doing something different in kind. AI absorbs cognitive friction across levels simultaneously. It handles the coding, the writing, the analysis, the design, and increasingly the evaluation.
Whether new forms of judgment-developing friction will emerge fast enough to compensate is an open question. But treating this as just another turn of a familiar cycle is a bet that previous patterns will hold under categorically different conditions. It is a bet, not an observation.
And the bet is not just about individuals. Individuals don't choose their own friction. Organizations distribute it through the work they assign, the tools they provide, and the decisions they automate. The way a system arranges who encounters resistance and who is shielded from it determines, over time, the distribution of judgment itself. The question stops being about the self and starts being about the whole: whether friction produces meaning, and who governs a world in which the friction is disappearing.
Who Steers?
Every system that regulates itself has two layers: the process and the governor. The thermostat executes; the person who sets the temperature decides what comfortable means. The assembly line produces; someone upstream decides what's worth producing.
As systems grow more complex, governing them doesn't get simpler. It gets harder. A thermostat is easy to set. A city's energy grid is not. A hospital triage system can sort patients by severity faster and more consistently than any human. But deciding what "severity" means, whether to prioritize the patient most likely to die without intervention or the one most likely to survive with treatment, requires judgment about what the hospital is for. That's the governance question. And the person approving the algorithm's priority weights may not even realize they're making it.
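To make that concrete, here is a minimal sketch (the function, fields, and weights are hypothetical, not drawn from any real triage system). The arithmetic is trivial; the governance lives entirely in two numbers.

```python
# Hypothetical triage score. The code is the easy part. The weights
# encode a choice between two theories of what the hospital is for:
# rescuing those who would otherwise die, or maximizing lives saved.
def triage_score(p_death_untreated: float,
                 p_survival_if_treated: float,
                 w_rescue: float = 0.7,
                 w_benefit: float = 0.3) -> float:
    # w_rescue weights the patient most likely to die without intervention;
    # w_benefit weights the patient most likely to survive with treatment.
    return w_rescue * p_death_untreated + w_benefit * p_survival_if_treated
```

Whoever signs off on w_rescue = 0.7 has answered the question of what "severity" means, whether or not they noticed the question.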
The thing doing the steering has to be at least as sophisticated as the thing being steered. A simple rule can govern a simple process. But any organization deploying AI has become a complex, adaptive, self-modifying system, and governing it requires a capacity that can match its complexity. That capacity is the ability to look at what the system is doing and determine whether it's doing the right thing, even when "the right thing" hasn't been fully specified yet.
No single mind holds that capacity. It is held, or lost, by the social practice through which judgment gets exercised and contested across the people responsible for the system. The governor of a complex system isn't a person. It's an ecology of judgment. And governance degrades when the practice of contesting judgment breaks down, when the people steering stop challenging each other's assumptions or when too few people are left doing the steering at all.
When machines handle all the first-order work, humans don't get freed from work. They get promoted to the hardest kind of work, which is deciding what the machines should care about. But this promotion is invisible. It doesn't feel like work. It feels like reviewing outputs, rubber-stamping suggestions, choosing from machine-generated options. And if the human in the loop treats this as a clerical task rather than a judgment task, the system loses its governor. It runs, it optimizes, it produces, but for what?
Information, in any system that regulates itself, is selection: the act of deciding what matters from what doesn't. When machines handle the first-order selections, human meaning-making operates at the meta-level. Which differences matter, to whom, and why. That's governance in its deepest sense: the ongoing, effortful determination of what this system should be for.
The Optimist's Trap
The strongest response to everything above is that meaning doesn't come from friction. It comes from connection. Art isn't meaningful because it's hard. It's meaningful because it creates new relationships between things that didn't previously belong together. Meaning isn't found through struggle. It's assembled from the available materials, and the more materials available, the richer the assembly.
In this view, automation isn't a threat. It's a liberation. Freed from repetitive execution, humans gain access to an unprecedented range of materials for meaning-making. The end of drudgery could be the beginning of the most creatively abundant period in human history.
This is appealing, and it is incomplete. We already have a machine that proliferates connections at scale. It's called the internet. Social media, recommendation algorithms, and generative AI are prolific assemblers of new relationships. They produce novelty constantly. They put more things in front of more people than ever before. And the result is not liberation. The result, overwhelmingly, is noise, exhaustion, and the flattening of attention. More connections, less meaning.
What's missing is discrimination. A world flooded with machine-generated content is a world of infinite assembly and zero curation. Proliferation without judgment produces volume, not value.
But the absence of discrimination doesn't just produce noise. It erodes the conditions under which people learn to discriminate in the first place.
Thought starts when something doesn't fit. When the thing in front of you resists your existing categories and forces you to revise them. Learning is what happens when your usual way of making sense gets interrupted by something it can't absorb. The interruption is where thinking begins.
Consider Khan Academy and its descendants. Unlimited tutorials, instant feedback, personalized pacing. The liberation model applied to learning. And the results are genuinely good for certain kinds of skill acquisition. Students get better at solving problems within frameworks. But the places where this model falls short are revealing: it develops fluency without developing the capacity to question the framework itself. Students learn to perform within the system without ever encountering the resistance that would teach them to evaluate whether the system is the right one. They become fluent without becoming thoughtful. The ease is real. The loss is invisible until you need someone who can do more than apply a method -- someone who can see that the method itself is the problem.
Stupidity, in the sense that matters here, is not ignorance. Ignorance is a lack of information, and machines cure it easily. Stupidity is the inability to be affected by what's in front of you.
The recognition machinery runs so smoothly that nothing surprises, nothing interrupts, nothing forces genuine thought. You process everything and encounter nothing. You have access to all the world's information and none of it changes your mind.
An environment optimized for smooth processing, where machines pre-digest every input and pre-solve every problem, doesn't just make people idle. It makes them stupid in this precise sense. It eliminates the encounters that force thought. The organization full of AI tools that nobody pushes back on isn't efficient. It's stupefied. It recognizes patterns beautifully and notices nothing.
The connection-optimist is right that meaning is assembled, not merely suffered through. But assembly requires genuine encounter with difference, with things that don't yet fit your framework. And genuine encounter requires that the thing in front of you be allowed to resist you, to stay strange long enough to force a new thought. Smooth that resistance away too quickly and you don't just reduce friction. You foreclose learning.
The Return of Taste
Neither friction alone nor connection alone produces meaning. Friction without connection is mere suffering. Connection without friction is mere noise. And an environment that eliminates both friction and genuine encounter produces something worse than either: a smoothly functioning stupidity.
What's needed is a capacity that operates at the boundary: the ability to evaluate a particular case without a predetermined rule. To look at something the machine has produced and say, this isn't right, before you can fully explain why. And then to do the work of explaining why.
There are two kinds of judgment. The first applies an existing rule to a new case: Is this code syntactically correct? Does this building meet fire code? Has the patient's blood pressure exceeded the threshold? Machines are already better at this than humans, and the gap will only widen. Call this rule-following.
The second kind of judgment evaluates a particular case without a rule to apply. The code compiles and passes tests, but something about the architecture feels brittle. The building meets code, but the space doesn't work: people avoid the lobby, conversations die in the conference room. The patient's numbers are normal, but the doctor senses something the chart doesn't show. A manager looks at a team that's perfect on paper and knows it isn't going to work, because something about the combination doesn't fit. Call this taste.
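The asymmetry is easy to see in code. A sketch, with illustrative thresholds: the first kind of judgment fits inside a function; the second is precisely what no function can yet contain.

```python
# Rule-following: an existing rule applied to a new case. Fully
# mechanizable, and machines already do it better than we do.
def within_thresholds(vitals: dict) -> bool:
    return (vitals["systolic_bp"] <= 140
            and vitals["heart_rate"] <= 100
            and vitals["temperature_c"] <= 38.0)

# Taste has no equivalent function. "The numbers are normal, but
# something the chart doesn't show is wrong" is a judgment about a
# case for which the rule has not been formalized -- there is, as
# yet, nothing to write.
```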
Taste provides the friction that self-knowledge requires: evaluating without a rule is effortful, and the effort of articulating why something is wrong is where you discover what you actually think. It performs the governance function that complex systems demand: taste is what sets the target, what decides what the system should be optimizing for when the objective hasn't been formalized yet. And it supplies the discrimination that pure connection lacks: taste is what distinguishes productive assembly from noise, signal from proliferation.
Taste is a practiced capacity. It develops through exposure, repetition, feedback, and above all through the effort of articulating your reasons. You develop taste by saying why something is good or bad, and discovering in the process whether your reasons hold up. The wine novice who says "I don't like this" becomes a connoisseur not by drinking more wine but by learning to say what they don't like and why, and subjecting that account to the judgment of others.
In technical work, taste looks like the ability to review a system's output and say "this is wrong" before you can say why it's wrong. It's the senior engineer who looks at an architecture diagram and feels the brittleness before the failure materializes. It's the editor who cuts the sentence that's technically correct but tonally dead. It's pre-formal knowledge, and it's the hardest thing to automate because it precedes formalization.
An obvious counter: isn't taste just pattern-matching? And aren't machines already better at that? The history of AI is a history of formalizing capacities that were previously considered ineffable. Medical imaging, protein folding, fraud detection: in each case, something that looked like human intuition turned out to be a pattern recognizable by a sufficiently powerful system. What reason is there to think taste is any different?
The reason is structural. Taste doesn't recognize patterns within a formal system. It operates at the boundary where the formal system meets the not-yet-formalized. The senior engineer who senses brittleness isn't matching against a database of known brittle architectures. They're perceiving a relationship between the specific system and principles of good design that haven't been codified yet, and their perception is the first step of codification. Each act of taste moves the boundary.
The machine can match patterns within the boundary. Taste operates at its edge.
As machines absorb more of the formal, the boundary doesn't disappear. It moves, and the demand on taste intensifies. This is not a claim that machines will never be good enough. It's a claim that judging at the boundary of the formal is structurally different from pattern recognition, no matter how powerful the recognizer becomes.
If taste is a practiced capacity that develops through friction and articulation, then it doesn't survive on its own. It requires conditions. Taste develops through articulation because the boundary of the formal is precisely where you have to struggle to put something into words for the first time -- and that struggle is social. You articulate your judgment to others, they contest it, you refine it, and the boundary moves. Alone, taste remains intuition. Among practitioners who challenge each other, it becomes shared judgment. That shifts the question from what taste is to what kind of organization sustains it.
The Organizational Ecology
Automation discourse tends toward the adversarial: humans versus machines, preservation versus replacement. But the actual situation in any organization adopting AI is ecological, in the specific sense that judgment, like any living capacity, depends on the conditions of the environment in which it operates. Machines and humans occupy the same environment, depend on each other's outputs, and jointly determine the quality of what the system produces. The relevant question isn't how to protect human meaning from machines. It's what kind of ecology produces and sustains good judgment when the ratio of machine work to human work keeps shifting.
Visible reasoning is the organizing principle of this ecology: the practice of reasoning in the open, performing judgment where others can witness it, challenge it, and learn from it. When a human reasons visibly about a machine's output, they're training their own taste through the friction of articulation. They're creating the occasion for other humans to develop taste by witnessing judgment in action, which is apprenticeship rather than knowledge transfer. And they're generating signal that makes the machine's future outputs better.
This works because reasoning becomes robust when people who see a system differently are forced to articulate what they see and why. Visible reasoning is how that social practice happens in concrete terms: written rationales, argued code reviews, explained decisions. And the machines are part of the culture being shaped. An organization's AI systems reflect the judgment, or the lack of it, of the humans who prompt, evaluate, fine-tune, and decide when to override them. A strong culture of visible reasoning produces better human-machine collaboration, because the quality of the human judgment that governs the machine is higher and more contested.
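What might the concrete artifact look like? One hedged sketch, treating the decision record as a data structure (the fields are illustrative, not any standard format): the useful property is that it cannot be completed without someone having pushed back.

```python
# Hypothetical decision record. The value is not the data but the
# practice it occasions: the objections field presupposes that
# someone contested the judgment before the record could be written.
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    decision: str                  # what was chosen
    rationale: str                 # why, in the author's own words
    objections: list[str] = field(default_factory=list)  # who pushed back, and how
    resolution: str = ""           # how the objections were answered

record = DecisionRecord(
    decision="Keep the monolith; defer the service split",
    rationale="Six engineers; operational overhead would exceed the coupling cost",
    objections=["Coupling is already slowing the billing team"],
    resolution="Cost acknowledged as real; revisit after billing ships v2",
)
```

The structure itself guarantees nothing; it matters only as an occasion for the contestation it records.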
What the ecology rewards determines what it produces. If the incentive structure rewards speed and volume, tickets closed and features shipped, then machines win on every metric, and the humans in the loop learn to rubber-stamp rather than evaluate. The ecology selects for compliance. Taste atrophies not because it's impossible but because nothing in the environment rewards its exercise. The culture dies while the productivity metrics improve.
Consider an engineering organization that automated all code review through AI tools. Reviews became faster, more consistent, more thorough on surface metrics. But the team stopped arguing about code. Reviews became transactions rather than occasions for judgment. Within a year, the senior engineers couldn't articulate why the architecture was shaped the way it was, because they'd stopped being forced to defend their choices to anyone who pushed back. The organization didn't lose information. It lost the social practice through which understanding stayed alive. The artifacts were still there. The fire was out.
The written artifact, the design rationale or the contested code review or the post-mortem that names what the automated pipeline couldn't detect, is not a record. It's an occasion for judgment to be practiced and refined. But mistake the artifact for the fire itself and you've lost the point. If organizations treat visible reasoning as a documentation exercise, or as a knowledge base to feed back into AI to eliminate the need for human judgment, they will produce more text and less meaning. The artifacts multiply. The culture collapses.
Visible reasoning, as a practice, resists the compression that machines are optimized to perform. It is inherently inefficient, social, and generative of friction. These aren't bugs. They're the properties that sustain the conditions for taste. An organization that protects this practice, that builds its incentives and its promotion criteria around the exercise of judgment rather than the production of output, is cultivating the process through which the human-machine system makes better decisions, catches more errors, and develops better judgment in new people.
Whether organizations will actually do this is a different question, and the answer determines more than organizational performance.
The Stakes
The effort to ensure that increasingly powerful systems do what humans actually want rests on an assumption so foundational it's rarely examined: that humans will retain the capacity to evaluate machine outputs and correct the trajectory. That capacity is taste. And taste is an ecological product, emerging from and sustained by organizational cultures that reward its exercise.
There are three ways this can fail. First, the machine optimizes a proxy rather than the real objective, because no one with sufficient taste is setting the reference signal. Second, what humans value changes or degrades without anyone noticing, because the ecology has stopped rewarding evaluation and the capacity to detect drift has atrophied. Third, the system generates endless outputs but no one can distinguish signal from noise, because the culture of discrimination has collapsed.
These describe the default trajectory of an organization that adopts powerful AI tools without deliberately cultivating the ecology described above. And they compound: as human judgment degrades, the machine outputs it governs degrade too, but more slowly and less visibly. By the time the gap is obvious, the capacity to close it may have been lost.
Beneath all three failures sits a further question, one of access: who gets to participate in the ecology at all.
Taste develops through practice. Practice requires access to the workshop, to the friction, to the occasions where judgment is exercised and contested. When organizations automate away junior and mid-level work, they eliminate the rungs on which taste develops. The senior engineer who "just knows" something is wrong learned that capacity by spending years getting things wrong in environments that let them fail, reflect, and try again. Automate the early work and you don't produce an organization that needs fewer tasteful people. You produce one that can no longer grow them.
A structural tendency in the current economy makes this worse: the returns from machine-augmented production flow disproportionately to the people who already own the systems and set the objectives. Meaning-making power -- the power to decide what the machines should optimize for -- concentrates in fewer hands, answerable to the owners rather than to the people affected by the outputs. The ecology restricts the circle of people whose taste matters, and in doing so it makes the governing judgment less diverse and more brittle. An ecology where only five people's taste counts is not an ecology. It's a bottleneck pretending to be governance.
None of this happens naturally under incentive structures that treat human judgment as overhead. The default gradient pushes toward fewer humans making more consequential decisions with less scrutiny. Some institutional forms push against this. Employee-owned companies, for instance, create incentives for decision-makers to optimize for the wellbeing of the people doing the work, not distant shareholders. They don't guarantee a culture of taste, but they create conditions where one can develop, because the people doing the work have standing to contest the judgment of the people setting the direction. Whether these forms can hold against the concentrating pressures of machine-augmented production is an open question.
But the argument doesn't require optimism about the outcome. It requires clarity about what's at stake. Whether powerful AI systems end up doing what humans actually want is an ecological question, answered daily by the thousands of organizations that deploy them. The builders can make the system steerable. Only the organizations can sustain the culture that keeps someone's hands on the wheel -- and keeps those hands capable.
"The organizations" means specific people making specific choices about who gets to practice judgment, whose evaluations count, and whether the circle of taste-makers is widening or narrowing. The civilizational bet is not just that someone will steer. It's that enough people, with enough diversity of perspective and enough room to disagree, will be allowed to.
What Remains
What will humans do when machines do everything? The same thing they've always done when they were doing it well. Exercise judgment under conditions of uncertainty. Articulate why something isn't right before the failure materializes. Show their work, not as documentation but as the practice through which judgment develops and endures.
What changes is not the nature of meaning but the stakes of neglecting it. The machines are not the threat. The threat is the assumption that because machines can produce, humans need not judge. That because the output looks right, someone must be steering. That because the system is running, it must be running toward something worth reaching.
Somewhere right now, a person is looking at a machine's output and pausing. Not because they have a rule that says it's wrong, but because something doesn't sit right. They can't yet say what. The pause is uncomfortable. The effort of figuring out why is real work, harder in some ways than the work the machine just did for them.
That pause is where meaning lives. Everything depends on whether we build cultures that reward it -- or whether we optimize it away in the name of speed, and never notice what we've lost until the capacity to notice is gone.