When the State Becomes Software—and Why Democracy Can’t Keep Up
Palantir’s manifesto demonstrates that power no longer lives in institutions, but in the software systems that run them. And neither democracy nor liberalism is prepared for what that means.
Last weekend, Palantir posted a 22-point manifesto on X. It racked up 35 million views. By Monday morning, the reaction had already settled into something familiar: culture war. Supporters leaned into the patriotic, pro-military, and pro-border language, while critics seized on the civilizational framing and its nationalist and conservative undertones. A single line about “regressive and harmful” cultures quickly became the focal point.
The debate stayed where it was easiest to think: on the surface. The manifesto is not a values statement but something more consequential: a description, in ideological language, of what Palantir is already doing. Not a vision for the future, but a justification of the present.
The manifesto is [a] description, in ideological language, of what Palantir is already doing. Not a vision for the future, but a justification of the present.
Which is why the culture war reading is misleading. It is legible and ultimately disposable. The real questions the manifesto raises are slower and harder to see. What does it mean when the operational memory of a government agency sits in a private company’s codebase? When decisions are shaped by systems that function the same way regardless of who wins an election?
And crucially, this problem would look the same if Palantir held impeccably liberal views. The issue at hand is the structure of what is being constructed rather than the ideological content. And neither democracy nor liberalism, as we practice them, has a clear answer for it.
What the manifesto says—and what everyone chose to see
The document, drawn from Karp and Zamiska’s The Technological Republic, is structured as 22 theses. But beneath the format, the argument is straightforward. It unfolds in three moves.
First, an indictment: Silicon Valley owes a debt to the state that made it possible. Its wealth was built on public funding, defense research, and national infrastructure. And yet it turned inward, toward consumer apps and distraction economies (targeting mostly Google, Meta, and Apple), while neglecting the harder problem of national power. That imbalance, they insist, has to be corrected.
Second, a declaration of emergency: the nuclear age is ending, or at least being eclipsed. A new architecture of deterrence is emerging, built on AI, software, and data integration. Resistance to defense work, whether from employees or executives, is framed not as a legitimate ethical stance but as a failure of responsibility in the face of geopolitical threat. What the manifesto does not say is that Palantir is making a concerted effort to break the oligopoly of traditional military contractors like Lockheed Martin, Raytheon, and Boeing and to replace them.
Third, a call to order for society to wake up: conscription should return, elites should share the burdens they impose, and a defense of Western civilization should replace what the authors describe as hollow pluralism. Cultural hierarchy is framed not as exclusion, but as a necessary condition for civilizational survival.
It is this third section that has dominated the media cycle, and where the conversation largely stalled. The framing came quickly: anti-DEI manifesto, pro-ICE posture, culture war in technical language. This reading of the manifesto is accurate but shallow. The more consequential claims lie elsewhere, and they have received far less attention.
The framing came quickly: anti-DEI manifesto, pro-ICE posture, culture war in technical language. This reading of the manifesto is accurate but shallow.
The first is about deterrence. The argument that AI is displacing nuclear weapons as the core architecture of global power is easy to dismiss, but harder to ignore. Nuclear deterrence operated through delay, opacity, and the threat of irreversible escalation. It created space for human judgment, however fragile. AI deterrence compresses that space: it accelerates decision cycles, and leaves human judgment too slow to keep up. Whether or not Palantir is right about the outcome, it is likely right about the shift.
And this is not a theoretical claim. Palantir is already testing AI military capabilities in live operational environments, integrating software directly into battlefield decision-making. At the same time, Palantir co-founder Peter Thiel’s willingness to speak openly about the possibility, even the desirability, of confrontation with China should be read as a signal: the future of warfare is already here.
The second claim is more fundamental: governance is too slow, too fragmented, too human. In this view, democratic friction is not a safeguard but an obstacle. Software can fix that if, instead of reforming the state, you plug into it. The system sets the terms of action before politics begins.
That is the real argument of the manifesto. Everything else is noise.
You can’t vote out the system, or the algorithms that run it
Palantir closed 2025 with revenues of $4.48 billion—up 56% in a single year—with multi-year government contract ceilings awarded in 2025 alone exceeding $13.7 billion. Since Trump took office, it has secured over $900 million in federal contracts and has been ICE’s primary data infrastructure contractor since 2011. Its new platform ImmigrationOS fuses passport records, Social Security files, IRS tax data, license-plate readers, biometrics, and social media into a single enforcement environment, tracking individuals from identification through to removal.
The company’s standard defense is simple: it builds the tools, not the rules. But that distinction does not hold. The architecture of an AI system is not neutral engineering. Every decision about what data to include, how to classify it, how to rank risk, and how to trigger action is a policy decision, written in code rather than law. The system structures what the law can see, and therefore what it can do. The line between tool and rule collapses.
The practical consequences are already visible. In a publicly available contract justification document, ICE declared that “Palantir has deep institutional knowledge of ICE operations” and was “already ingesting and processing data” from multiple federal agencies. In practice, this means that a private company now holds the operational memory and data architecture of a state agency. This is no longer a standard procurement relationship, but a merger in which the state depends on a private actor for its continuity of action.
Once you see it that way, the political question shifts. The issue is no longer how to regulate a contractor. It is how a democracy governs itself when core functions have already been embedded in systems it does not fully control. That is not just a policy problem but a deeper rupture that neither of the two great traditions that underpin Western self-government, democracy and liberalism, has a ready answer for.
The issue is no longer how to regulate a contractor. It is how a democracy governs itself when core functions have already been embedded in systems it does not fully control.
What liberalism cannot answer (and neither can democracy)
It helps to separate two ideas that are usually treated as one: democracy and liberalism. They solve different problems, and the shift from state to software destabilizes each in a different way.
Democracy is about replaceability. You can vote out those who govern you and install others in their place. Power circulates, and that circulation is the mechanism of accountability. But you cannot vote out the data architecture, the models, the integrations, the workflow. An incoming administration does not start from scratch. It inherits a system that has already defined what counts as a problem, how that problem is measured, and which actions are triggered in response, even if preferences and priorities can be slightly tweaked with new instructions.
In that sense, the real decision has already been made, long before any vote takes place. This is what breaks the logic of democratic replaceability. Elections still matter, but they operate downstream, because the underlying infrastructure is largely indifferent to electoral change.
Liberalism fails in a different place. If democracy is about who holds power, liberalism is about how power is exercised: through contestation, rights, and checks designed to make decisions visible and reversible. It assumes that somewhere, a decision has been made by someone, and that this decision can be challenged.
That assumption no longer cleanly applies. When enforcement is mediated by algorithmic systems trained on layered datasets, the point of decision becomes difficult to locate. There is no clear moment at which judgment was exercised, no straightforward way to reconstruct how a conclusion was reached.
This creates a structural inversion. The rule of law presumes that law governs decisions. Increasingly, systems generate outcomes first, and legal processes follow by ratifying, formalizing, or attempting to interpret what has already been produced.
And that is what current political responses struggle to grasp. The instinct is to treat this as a regulatory problem: more oversight and better rules to improve transparency. But those tools were designed for a system in which power had a clear institutional location. Now, power is embedded in a codebase.
And that distinction is the one the Democratic response has consistently refused to make, because making it requires a structural transformation that the existing rulebook was never designed to address.
The Democratic response has consistently refused to make [that distinction], because making it requires a structural transformation that the existing rulebook was never designed to address.
Instead, the response keeps reaching for the same instruments. Representatives Dan Goldman and Nydia Velázquez, joined by Senator Ron Wyden, led 30 lawmakers in demanding answers from DHS about Palantir’s technologies. Campaign platforms are being built around AI regulation framed as consumer protection: screen time, children’s mental health, algorithmic transparency. Bernie Sanders has warned that a handful of billionaires are reshaping democracy without democratic input, but has framed the danger largely around economic disruption and job displacement. His is a genuine structural critique, yet one that still stops short of naming the full scope of the problem.
None of this is wrong, but it assumes that power still sits where these tools can reach it, within agencies, policies, or identifiable decision-makers, when in fact it has shifted into systems those tools cannot reach.
Liberalism at its limits: optimization without a theory of the human
The solution is not going to come from the next election cycle. Not in November, not in 2026, not in 2028. By the time regulation arrives, it is regulating something that is already entrenched. The implication is uncomfortable, because the issue is not institutional but conceptual.
At the core of the Palantir worldview is a simple assumption that underpins most algorithmic governance: governance should be optimized, and if systems can decide faster, more consistently, and at greater scale than humans, then they should.
For several decades, liberalism (as well as authoritarianism, for that matter) has been closely entangled with a neoliberal logic that also treats governance as a matter of optimization. The promise was that more information and more precise measurement would lead to better, more rational, expert-driven decisions. This logic has normalized a way of thinking in which political judgment becomes a problem of policy calibration rather than of political philosophy.
Algorithmic governance does not break with that tradition but radicalizes it. It makes clear that liberalism as we know it today is not offering an alternative, only a softer version of the same model.
At the same time, liberalism has grown increasingly reluctant to ask foundational questions about what a human being is, and why that matters politically. Such questions have come to seem too metaphysical, too contested, and too easily appropriated by illiberal ideologies. Indeed, those same ideologies, whatever the quality of their answers, are at least willing to make explicit claims about human nature. Liberalism, by contrast, has retreated into procedural and institutional language, even as the terrain of politics shifts toward precisely these deeper questions.
The honest indictment of contemporary liberalism is not that it lacks the right institutions (it has plenty of those…) but that it has forgotten what those institutions were supposed to be for. It speaks fluently about rights but not about morality.
The debate we should be having is not about Palantir’s politics, or the next election, or which regulatory framework arrives first. It is about a more basic question: whether we still believe that human beings cannot be reduced to data or optimized like a system.
The only condition under which liberalism might resist Palantir-style governance is one it has spent decades dismantling: a genuinely humanist foundation, a substantive account of what human beings are that precedes both procedure and institution, and cannot be translated into the language that algorithmic systems have already learned to speak fluently.
The only condition under which liberalism might resist Palantir-style governance is one it has spent decades dismantling: a genuinely humanist foundation [that] cannot be translated into the language that algorithmic systems have already learned to speak fluently.