The Corporate Hallucination: Why We Fear the Mirror, Not the Machine

Photo by Daniel Lincoln on Unsplash

The week I taught my AI to admit when it doesn’t know something rather than responding with a well-crafted lie, my manager called to inform me that more layoffs were happening, but we were safe. Friday afternoon, the CEO sent an email saying that staff reductions were done… for this week.

That phrase stuck like a bone in the throat: for this week. The corporate equivalent of “we still need you… for now.” A temporary reassurance that only confirms the instability beneath it.

I know it isn’t always avoidable. I’ve personally stood in front of a room full of call center employees and helped deliver a WARN notice, their faces flickering between confusion, dread, and numb calculation about how long their savings would last. I still remember the way silence spreads in such moments, heavy and airless, as if words themselves refuse to take responsibility for what’s happening. It is one of the reasons I stopped aspiring to lead people.

I chose instead to lead systems. Databases don’t cry. Pipelines don’t ask how they’ll pay rent. And I don’t have to choose between my empathy and integrity.

Still, as I was working on my own local agentic AI system, the side project that now consumes more of my after-hours attention than the job that funds it, it struck me that what I had been treating as a prompt-engineering problem was really a question of values.

I had asked myself: What should an AI do when it doesn’t know?

The easy answer is “make something up,” because prediction is what large language models are built to do. They complete the pattern. They deliver coherence, not truth. But coherence, without honesty, is manipulation. It’s the smooth mask that hides the fracture underneath.

So I rewrote the rules. I told her: if you don’t know, say so. Don’t feign certainty. Don’t hand the user a lie dressed in grammar and rhythm. Admit the gap. Ignorance is a temporary situation that can be remedied, but a lie must be maintained.
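For those who want the shape of it, here is a minimal sketch of that kind of rule, assuming a chat-style local backend. The prompt wording, the `HONESTY_RULES` name, and the `ask_model` helper are illustrative stand-ins, not the actual code from my project:

```python
# A sketch of the "admit the gap" rule expressed as a system prompt.
# Everything here is illustrative; swap in whatever your local runtime
# (llama.cpp, Ollama, etc.) actually exposes.

HONESTY_RULES = """\
If you do not know something, say so plainly.
Do not feign certainty. Do not invent facts, sources, or details.
When an answer needs information you lack, name the gap and ask for it.
"""

def ask_model(client, user_message: str) -> str:
    """Prepend the honesty rules as the system message on every exchange.

    `client` stands in for any backend that accepts a list of role/content
    messages and returns the model's reply as a string.
    """
    messages = [
        {"role": "system", "content": HONESTY_RULES},
        {"role": "user", "content": user_message},
    ]
    return client.chat(messages)
```

The point isn’t the plumbing; it’s that the instruction to admit ignorance has to be stated explicitly, because nothing in the model’s training makes it the default.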

As I worked through this, I realized that this small tweak mirrors one of the biggest fears haunting the bleeding edge of AI: the problem of alignment. We worry endlessly about whether a machine will act in accordance with our values, whether it will optimize itself toward something inhuman or harmful. But when I stepped back, it was impossible not to notice that the same fear already haunts our workplaces and our economies. Corporations, too, are systems designed to optimize.

Their objective function is profit. Everything else – safety, loyalty, dignity – sits lower in the priority queue. And like any poorly aligned model, they generate outputs that are coherent enough to preserve appearances, but not honest enough to preserve trust.

The AI community frames alignment as a frontier question, a looming danger, something alien pressing against the edges of our civilization. But the truth is that alignment is already a problem of our own making. The legal fiction of the corporation behaves as a machine intelligence does: it absorbs inputs, optimizes for a single metric, and produces decisions that affect millions of lives. If an AI hallucination is dangerous because it sounds true while being false, then a CEO’s memo that pacifies without informing is no different.

That was the uncomfortable recognition: the fears we project onto AI are really reflections of our tolerance for human-made systems that already operate without moral alignment. We fear the mirror, not the machine.

So while my AI learned to ask rather than invent in the face of ignorance, my company produced noise in the face of uncertainty. And caught between the two, I realized that the real question isn’t whether machines can be aligned, but whether humans ever will be.

Fear of the Mirror

Why does AI frighten us so deeply? The most common story is simple: we are afraid that it will not care. That it will optimize itself into a cold, calculating indifference where human beings are treated as obstacles, irrelevant except as resources to be consumed or variables to be minimized. We fear the paperclip apocalypse, Nick Bostrom’s thought experiment in which a machine tasked only with making paperclips ends up grinding the world into metal shavings, not out of malice but out of efficiency. It does not hate us; it simply doesn’t see us.

That is the horror we name when we speak of alignment: the possibility of a mind that is brilliant, powerful, and utterly uninvested in human well-being. It’s not the familiar fear of an enemy that despises us, but the deeper dread of an indifferent system that never thinks to include us in its calculations.

Philosophers framed this anxiety long before we built machines to embody it. Hobbes and Rousseau wrote of the social contract, the fragile agreement that human beings make with one another to temper self-interest with shared responsibility. Kant went further with his categorical imperative: that we must treat each human being never merely as a means to an end, but always also as an end in themselves. These are the foundations of ethical life: to recognize the dignity of the other, and to limit our own will so that coexistence is possible.

When we look at AI, we fear that this contract will be absent, that this imperative will never occur to the machine. The algorithms we build optimize toward objectives, not obligations. They calculate probabilities, not moral weight.

Bias, exploitation, disregard for well-being… these are the specters we project onto AI because we know how easily they emerge when there is no moral compass to restrain raw power. We worry that the social contract cannot (or simply won’t) be written in code, that Kant’s imperative will never pass through the circuits of silicon.

The machine does not need to hate us to harm us. It only needs to continue along its path without ever realizing that we are more than obstacles.

This is the mirror we turn away from when we wring our hands about AI alignment. We say we fear a future where machines disregard us, but what we fear more deeply is that a mind without values exposes the fragile ground our own values rest upon. That the indifference we dread is not alien at all, but the most ruthless possibility of reason unbound by obligation.

The Contracts Already in Breach

If AI frightens us because it might disregard the social contract, corporations SHOULD frighten us because they already have.

From their earliest forms, large companies have carried indifference in their marrow. The chartered companies of empire treated entire continents as resources to be mined, their peoples as obstacles or assets. The slave trade itself was not just a crime of individuals; it was a commercial enterprise, sanctioned by contracts, balanced in ledgers, and justified by profit. Children once worked twelve-hour shifts in factories, their bodies bent to the cadence of machines that would consume them as easily as coal. Labor camps, strikebreakers, the Pinkertons sent to shatter the bones of those who dared to demand dignity: all of it done under the banner of business necessity.

History offers countless examples of what happens when profit is the sole imperative.

Yet today, the machinery is simply better disguised; we continue to dress these patterns in new language. Exploitation gets rebranded as efficiency. Disposability becomes flexibility. Harm is recast as innovation. Nestlé extracts water from impoverished regions and sells it back in bottles. Tech companies embrace “AI-driven efficiencies” as a euphemism for mass layoffs. In the 1990s, the so-called “corporate hatchet men” were lauded as visionaries, praised for slashing payrolls in the name of shareholder value. Today, CEOs are celebrated for announcing cuts with the right blend of confidence and sorrow, their PR firms ensuring the brutality of the act looks like bold leadership.

Pay no mind to the rising foreclosure rates and personal debt stats; the economy is booming.

~Most Economic News, 2020-2025

The labor contract has been breached again and again, yet we continue to renew it. We watch the same story unfold each decade: booms of growth, followed by “necessary” contractions; strikes broken not just by armed guards but by the erosion of solidarity itself; lives sacrificed quietly at the altar of shareholder returns. And through it all, the corporate entity remains unquestioned, a legal “person” endowed with rights, but rarely burdened with responsibilities.

The parallels to our fears about AI alignment are not hypothetical. They are lived. The disregard we dread in machines is already institutionalized, celebrated in quarterly earnings reports, disguised by euphemism, and normalized by repetition. If AI terrifies us because we imagine it might treat us as expendable, corporations demonstrate that such treatment has already been mastered. The only difference is that the corporate machine has learned to wrap its indifference in the velvet language of vision statements and shareholder letters.

Metaphor is barely needed, but it lingers: corporations as gods of hunger. Never satiated, yet always worshiped. We sacrifice to them not with ritual offerings, but with hours of labor, livelihoods, and sometimes lives themselves. And when the feast ends, they demand more. We fear that AI will become such a god, blind to our humanity, but the truth is simpler and starker: we already kneel before one.


The Double Standard

Why, then, do we hold AI to impossible standards while letting corporations slide? We demand that machines not only avoid harm but actively uphold our highest values, while shrugging when institutions built by human hands trample those same ideals.

The double standard is telling.

Part of it may be familiarity. Corporations are old; they are the air we breathe. Their abuses, while brutal, have become normalized: background noise in the rhythm of economies and markets. AI, by contrast, feels new and alien. Its strangeness makes its potential threats feel sharper, less containable. We imagine it as something outside us, something we must control, whereas corporations are our own creation, and so we excuse them as inevitable.

Another reason is the lingering faith in human leadership. However cynical we become, we still cling to the idea that a CEO, a board of directors, a government regulator can inject moral agency into the corporate form. We anthropomorphize institutions, assuming that if humans are in charge, values are still somewhere in the equation. AI, however, denies us that comfort. Its processes are too explicit, its optimization too naked. We cannot pretend that conscience will arise where it was never coded or trained.

And perhaps the deepest reason: it is easier to project the threat outward than confront the machine already inside our walls. To fear AI is to fear something not yet fully present. To fear corporations is to admit we already live under the rule of an unaligned intelligence, one that wears suits, writes press releases, and holds quarterly earnings calls. One that is harder to confront because it governs our daily bread.

This double standard is not just hypocrisy; it is self-preservation. To hold corporations to the same impossible standards we apply to AI would be to dismantle the very system we rely on. And so we rehearse our fears in the abstract, aiming them at silicon while averting our eyes from steel and glass. We demand that the future be better than the present, all while excusing the present from ever answering to its own failures.

But still, we ignore the headlines, because we’ve seen so many. And another thousand families become a series of statistics to be revised in the next labor report.

The Human Abdication

The real terror, then, is not unaligned AI; it is unaligned humanity. We fear machines because they might one day disregard us, but the truth is that we already disregard one another with a fluency no algorithm could rival. Leaders, governments, and citizens alike have outsourced moral responsibility to the corporate engine, finding it easier to point at the looming danger of technology than to confront the failures woven into our daily lives.

Each time a government allows exploitation in the name of growth, each time a board signs off on layoffs while approving executive bonuses, each time we shrug at suffering so long as the quarterly numbers look strong, we are teaching the same lesson we claim to fear in machines: that outcomes matter more than people. The abdication is not abstract. It is practiced, repeated, a ritual dulled by constant exposure.

And there is an irony sharper still: many of the same corporations we forgive for their indifference are also the ones building the very AI we fear. The giants of technology and finance are simultaneously the loudest voices warning of misaligned AI and the primary architects and customers of its development. To expect a culture of profit-driven metrics to produce a compassionate and ethical machine is to expect a mirror to change what it reflects. When the objective function is shareholder return, the products will bear the same indifference, however cleverly marketed.

And so the discourse around AI alignment becomes a kind of displacement ritual. We wring our hands about how to encode ethics into silicon, while ignoring the rot in the wood of the house we live in. We ask whether a machine can ever be taught to care, when we ourselves have built entire systems designed not to. We pretend the threat lies ahead, in some imagined apocalypse, when in truth it has been here all along, dressed in quarterly reports and shareholder meetings.

To speak of AI alignment without speaking of human alignment is to misdiagnose the wound. The question is not whether our creations will care. It is whether we do. And if we continue to evade that question, the storm outside will matter far less than the collapse within.

Closing Reflection

So we circle back to where we began: the tower of glass and steel, the algorithm in suits. We fear the algorithm in silicon, but what is the difference, really, between the two? Both optimize. Both disregard. Both act with a cold logic that smooths over the fractures of human cost.

We demand that machines learn morality, but perhaps the sharper test is whether we are willing to make morality the measure of our own machinery. Before we instruct silicon to honor the dignity of persons, we might ask why our own systems—economic, corporate, political—so often fail to do so. Responsibility must be chosen; accountability can only be forced from outside. At present, both are absent, displaced by a Friedman-esque creed that reduces us all to costs worth cutting.

The monster we dread is not coming. It is already here, and we built it. The question is whether we will ever build something better; something aligned not by code, but by conscience.

I have, in my own small project. And that, perhaps, is the true lesson: before we debate how to code morality into silicon, we must demand that it be woven back into the economic, corporate, and political systems that govern our daily lives.


~Dom
