The Return of the Jurandes
On privilege, compute, and the return of what the French Revolution tried to destroy
This is the first of a two-part reflection on what AI does to the structure of economic participation. This part follows one trajectory — the one I think is most strongly taking shape if the current signals hold. The second part will follow a different thread: the possibility that AI, especially in its open agentic forms, radically empowers the individual and disintermediates the very institutions I describe here. Both scenarios are real. Neither is inevitable. But to see the fork clearly, you have to follow each path to the end.
Sam Altman said something recently that deserves more attention than it got. In an interview, he acknowledged that AI shifts returns from labor to capital — and that this represents “a real change to how capitalism has worked.” When the person constructing the engine tells you the engine changes the rules, it is worth pausing to ask: what, exactly, are the new rules?
The old rules — imperfect, brutal, but at least legible — went something like this. Capital needed labor. Not out of generosity, but out of necessity: someone had to do the work. And because capital needed labor, labor had leverage — however unequal, however mediated by exploitation and coercion. That need created a crude but functional mechanism for distributing returns. You worked, you got paid, you participated in the economy. The entire architecture of modern capitalism — wages, careers, social insurance, the middle class itself — was built on that foundation.
A study published in Scientific Reports earlier this year models what happens when that foundation cracks. The researchers found that even a moderate increase in the substitution of AI capital for labor could double labor underutilization, cut per capita disposable income by 26%, and shrink consumption by 21% by mid-century. Merely to prevent the decline in disposable income, the economy would need a 10.8-fold increase in new job creation.
But I am less interested now in the macroeconomic projections than in the question they leave open. If labor ceases to be the mechanism that connects people to economic returns — if working, however hard, however skillfully, no longer guarantees participation — then what does determine who participates and who doesn’t?
I think the answer is already visible. And it is a very old one.
There is a passage in Marx’s Grundrisse — the so-called “Fragment on Machines,” which I stumbled upon some weeks ago — where Marx describes a tendency within capitalism toward what he calls the “General Intellect”: the point at which the accumulated knowledge of society — scientific, technical, linguistic, cultural — becomes the primary productive force, and individual human labor becomes peripheral to the process of production.
He was writing about machinery and the factory system. But the structure of the argument maps onto AI with unsettling precision. What are large language models if not the General Intellect made operational — trained on the accumulated written output of human civilization, encoding patterns of reasoning that took centuries to develop, and deployed to perform cognitive tasks that previously required living human minds?
The crucial question, for Marx, was not whether this would happen. It was who would own it. The General Intellect, he assumed, was inherently social — it emerged from collective human activity, from the shared commons of knowledge and culture. The danger was that capital would enclose it, privatize it, extract rent from what belonged to everyone.
The Italian post-operaist thinkers of the late 20th century — Virno, Negri, Lazzarato — picked up this thread and ran with it. Writing about post-Fordist capitalism and technology broadly — not AI specifically, but the general shift toward cognitive, immaterial, and communicative labor — they identified the core dynamic of their era: that collective knowledge, linguistic competencies, and social capacities were becoming directly productive, and that capital was moving to enclose them. Virno called the General Intellect the “score” that the entire orchestra of contemporary production plays from. The question was whether the orchestra would own the score, or whether it would be locked behind a paywall.
Today, the dominant answer is clear. The score is locked behind a paywall — and the paywall is controlled by roughly five companies.
The cloud infrastructure oligopoly maps almost perfectly onto the AI model oligopoly. Access to the General Intellect — the collective cognitive output of human civilization, encoded in proprietary weights — is mediated by a handful of companies who also happen to own the physical means of running it. Open-source models exist, and they are improving — but as of now, the most powerful capabilities, the largest context windows, the deepest reasoning, remain proprietary. The trajectory could change. But the signals, today, point toward concentration.
This is enclosure. Not of land, as in the 15th through 19th centuries. Not of industrial machinery, as in the early factory era. But of the General Intellect itself — the accumulated commons of human knowledge, privatized, optimized, and sold back as a service.
But it is important to understand what kind of enclosure this is — because the analogy with previous rounds of privatization understates what is happening. When Standard Oil controlled petroleum, it controlled what powered the economy. When Microsoft controlled Windows, it controlled the platform on which people worked. AI is neither energy nor operating system, though it resembles both. It is something more total: a substrate that does not merely power or organize economic activity, but performs it. It analyzes, drafts, codes, reasons, designs, decides. It is as if the electrical grid didn’t just deliver power to your factory but also ran your production line, managed your supply chain, and wrote your contracts — and the grid operator could see everything, reprice at will, or cut you off. Controlling AI inference is not controlling a tool. It is controlling the medium through which an increasing share of economic life is conducted. The companies that own this substrate don’t merely have market power. They have something closer to the power to set the terms on which economic participation itself operates.
And this enclosure — to the extent that it holds — matters. Not only in the abstract, not only as a political-economic principle, but because it shapes what comes next. If the General Intellect were a genuine commons — open models, public compute infrastructure, universally accessible — then the displacement of labor by AI would at least open a possibility: anyone could use this collective intelligence to create value, to build, to participate in the economy on new terms. The transition would still be violent, but the door would be open. That possibility is real, and I will return to it in the second part of this reflection. But the direction of travel right now — the consolidation of model ownership, the compounding cost of frontier training and of the data it requires, the tightening integration of models with proprietary cloud infrastructure — points the other way. For the moment, access to AI’s productive power runs through a handful of corporate gatekeepers. You can use the General Intellect, but on their infrastructure, at their price, within their terms of service. Your participation in the AI economy is not a right. It is a subscription.
If this trajectory holds, it changes the nature of the crisis entirely. The question is no longer just “what happens when labor is displaced?” It is: “in a world where the cognitive commons has been enclosed, what determines who gets to participate — and who is locked out?”
Here is where the story takes a turn that neither Marx nor the autonomists quite anticipated. In the classical Marxist narrative, the displacement of labor by machinery creates a crisis — but the crisis is, in a sense, egalitarian. Everyone whose labor is displaced is in the same position. The contradiction is between capital and labor as classes. The resolution — revolutionary or otherwise — is collective.
But AI does not displace everyone equally. And the world it is creating is not a crisis of labor in general. It is something more specific, and in some ways more sinister: a crystallization of privilege.
Under capitalism, “meritocracy” was always partly a fiction. Bourdieu demonstrated that credentials do not measure competence — they disguise inherited advantage as earned achievement. He described this as misrecognition: the process by which social capital, cultural capital, and economic capital are laundered through educational institutions and professional networks until privilege looks like talent. The son of a lawyer becomes a lawyer not primarily because he is more capable, but because he has absorbed the habitus, possesses the cultural codes, and has access to the networks that make legal careers legible and accessible.
But the fiction of meritocracy at least required a substrate of actual labor. Capital needed competent people. Even if access to professions was shaped by privilege, the professions themselves demanded performance. You had to do the work. And because you had to do the work, there was — however narrow, however unequal — a pathway through competence. If you could perform, you could, in theory, rise.
Even Friedrich Hayek — not exactly a Marxist — saw the limits of this arrangement. In The Constitution of Liberty, he argued that the concept of “merit” is epistemologically incoherent: we cannot objectively assess individual desert, and what markets reward is not moral virtue but social utility at a given moment. For Hayek, meritocracy as an organizing principle was not just impractical but illiberal — it required a central authority capable of judging the unjudgeable. What he defended was not meritocracy but the price system: an impersonal mechanism that rewarded function without pretending to assess worth.
Now: if the current trajectory holds, AI will strip away even that impersonal mechanism. When the cognitive work itself is automatable — when the analysis, the drafting, the modeling, the coding can be performed by a system that runs on compute rather than wages — then what “merit” proxied for (labor, output, demonstrable competence) loses its economic function. And what remains is not competence. It is position.
There is a response forming to this — an intuitive, almost instinctive one — that deserves attention precisely because it sounds reasonable. As AI absorbs more and more cognitive work, people across industries are converging on the same question: what can AI not do? And the answer they are arriving at, with increasing conviction, is: it cannot be someone. It cannot bear legal responsibility. It cannot carry social trust. It cannot walk into a room and be recognized as a person of standing. It cannot authenticate.
This feels, at first, like a humanist insight — a reaffirmation of what makes us irreplaceable. But follow the logic one step further. If the remaining economic value attaches not to cognitive work (which the machine does) but to identity — to the human who signs off, who vouches, who lends their name and their credential to the output — then the question of who captures value becomes: whose identity counts? And that question has nothing to do with competence. It has everything to do with prior position.
And here is where the enclosure of the General Intellect and the authentication economy reveal themselves as not just parallel developments but structurally linked. Proprietary models are, by definition, opaque. You cannot inspect the reasoning. You cannot audit the weights. You cannot verify how an output was produced — only what it says. This opacity is not incidental. It is the business model. And it creates a trust deficit that only human intermediaries can fill. In any high-stakes context — legal, medical, financial, regulatory — someone credentialed must vouch for the black box. The closed model generates the demand for authentication. If models were open and auditable, you could trust — or at least verify — the process itself. The need for a human to stand between the model and the world would diminish. But proprietary models, by their very architecture, produce the conditions under which the credentialed professional becomes indispensable — not for their cognition, but for their signature.
Consider what is happening, right now, in the legal profession — though the pattern extends far beyond it.
There is a growing consensus — supported by empirical research and increasingly by the firms themselves — that AI can perform a significant portion of legal analysis at a level comparable to, and in some cases exceeding, that of junior associates. Document review, contract analysis, legal research, due diligence, even elements of strategic reasoning: the cognitive substance of legal work is migrating, task by task, to the machine.
But the income does not follow the substance. It stays with the human — specifically, with the credentialed human. A licensed attorney must still sign off. A bar-admitted professional must still appear in court. A partner must still sit across the table from the client and assure them that a person of appropriate standing has reviewed the output. The analysis may be done by a model. The authentication — the signature, the identity, the institutional weight — is provided by the lawyer. And it is the authentication, not the analysis, that commands the fee.
Notice what has happened here. The cognitive justification for the profession’s income — “we do complex analytical work that requires years of training” — is quietly dissolving. What remains is the positional justification: “we are licensed, credentialed, institutionally authorized to validate what the machine produces.” The substance shifts. The gate stays. And access to the gate is determined not by analytical capacity — which the machine now provides — but by the prior accumulation of social, cultural, and economic capital that bought entry to the credentialed class in the first place. The right school, the right network, the right internship, the right habitus.
The relationship between the model oligopoly and the credentialed professions is not coincidental. It is symbiotic. The law firm does not compete with OpenAI or Anthropic. It becomes their customer — wrapping the model’s output in a human credential and charging for the wrapper. The oligopoly provides the cognitive substrate. The professions provide the social legitimacy that makes the substrate usable in regulated, high-stakes contexts. The oligopoly needs the authentication layer to make its product deployable. The authentication layer needs the oligopoly’s models to have anything to authenticate. Together, they form a closed circuit — a complete system from which the uncredentialed and unsubscribed are structurally excluded. If AI is the new substrate of economic life, the credentialed professional is the socket — the interface through which the substrate’s power enters the world of legal, institutional, and social reality.
The same structure is emerging across the entire landscape of knowledge work — and it has found its ideology. OpenAI co-founder Greg Brockman recently declared that “taste is the new core skill.” The framing is seductive: AI handles execution, so what matters now is the human capacity to curate, to direct, to look at infinite possibilities and say that’s the one. It sounds democratic — anyone can have taste, surely? But Bourdieu would have recognized the move immediately. Le goût — taste — was the central concept of his Distinction (1979), and his entire argument was that taste is never neutral, never individual, never simply “good judgment.” It is a class disposition: shaped by upbringing, education, social milieu, access to cultural capital. The person who “just knows” what good design looks like, what the right product direction is, what “quality” means — that person was formed by a specific social trajectory. Taste is habitus made aesthetic.
So when Silicon Valley says the future belongs to people with taste, it is — perhaps without realizing it — saying the future belongs to people who already possess the cultural capital that taste encodes. The knowledge worker who can “direct” AI with refined judgment is not anyone. It is someone who went to the right school, absorbed the right references, internalized the right aesthetic codes. The shift from “doing the work” to “having taste” is not a liberation from hierarchy. It is hierarchy’s final refinement: the point at which privilege no longer needs to justify itself through labor at all, only through the ineffable quality of discernment — which happens, by sheer coincidence, to track perfectly with class origin.
Across law, finance, design, technology — the pattern is the same. Professional licensing, credentialing, and now “taste” were all, at various points, designed to serve a real function: protecting the public, ensuring quality, guiding judgment. In the AI era, they may be becoming something structurally different: mechanisms that preserve income and status for insiders in a world where the cognitive basis for the gate has been automated away. What remains of the profession is the act of authentication. And authentication — whether it wears the costume of a bar license or the costume of “taste” — is a function of position, not capacity.
The professions, in other words, might be becoming guilds.
That word — guilds — is not a metaphor. And to understand what it means, we have to remember what it took to destroy them the first time.
Under the Ancien Régime, access to professions and markets in France was controlled by the guild system — the jurandes and maîtrises. To practice a trade, you had to belong to the guild. And belonging to the guild was not a matter of competence. It was a matter of lineage, patronage, and fee — in practice, a system of hereditary privilege dressed in the language of craft. The son of a master became a master. The outsider paid ruinous fees or was simply excluded. Economic participation was gated by social position.
One of the central demands of the French Revolution — less remembered than liberty, equality, and fraternity, but arguably more structurally consequential — was the abolition of this system. The Allarde Decree of March 1791 dismantled the guilds in their entirety. Pierre d’Allarde declared to the National Assembly that “the right to work is one of the fundamental rights of man.” Three months later, the Le Chapelier Law completed the demolition: “the abolition of any kind of citizen’s guild is one of the fundamental bases of the French Constitution.”
The principle was liberté du travail — the freedom to work, to access any profession based on capacity, not birth. It was, in its way, one of the most radical economic ideas in history: the assertion that what you could do mattered more than who you were. That function should determine position, not the reverse. That competence — however imperfectly assessed — was the legitimate basis for economic participation.
It took a revolution to establish this principle. It took two centuries of institutional construction — public education, professional examinations, antitrust law, labor rights — to give it even approximate reality. And it is now, quietly, being reversed.
As AI automates the cognitive substance of professional work, what remains is precisely the shell the Revolution tried to destroy: positional privilege, credentialed access, social status as economic gatekeeper. The new guilds do not call themselves guilds. They call themselves “professions requiring human oversight” or “roles demanding the human touch” or “positions where trust and authentication matter.” But the structure is identical: access controlled by who you are, not what you can do. Two centuries of liberté du travail, and we are circling back to the jurandes.
If this is where we are heading — and the signals suggest we are — then the historical parallel is not with the Industrial Revolution, which for all its violence eventually created new forms of social mobility. It is with something older: the feudal logic of status over function, where what you are matters more than what you can do.
The conversation about AI and work is stuck in the wrong paradigm. “How do we retrain workers?” assumes the labor market continues to function as a mechanism for distributing returns. It may not. “How do we prepare people for the jobs of the future?” assumes there will be jobs, structured roughly as we know them, performing roughly the same distributive function. That assumption is increasingly difficult to defend.
There is, of course, a seductive counter-narrative — one I hear constantly in the tech world. The solo entrepreneur. The one-person startup. The individual who uses Claude or GPT to build a product, launch a company, create value without needing anyone’s permission or credential. If AI democratizes execution, the argument goes, then anyone with a laptop and a subscription can participate. Who needs a commons when you have agency?
But look at what that “agency” actually rests on. You are renting access to an enclosed commons — the General Intellect, privatized — at whatever price the oligopoly sets, on whatever terms they dictate, subject to change without notice. Your independence is a tenancy. And tenancies can be revoked. The model can be deprecated, the API repriced, the terms of service rewritten. The solo entrepreneur’s freedom exists entirely within a structure of dependence they do not control.
And the tenancy is not equal for everyone. The oligopoly operates on tiers. The firms that become the credible “authenticated” layer for AI in law, medicine, or finance are the ones that can afford enterprise contracts, negotiate custom deployments, access fine-tuning capabilities and compliance certifications. The small practitioner, the solo operator, the independent analyst — they get the consumer-tier product. The oligopoly doesn’t just enable the authentication economy. It shapes who gets to be an authenticator, by deciding who gets the powerful tools and who gets the demo.
The real question is about the structure of access itself — who gets to participate in the value AI creates, and on what basis. If not labor, if not competence, then what?
This is where the counter-logic to enclosure becomes essential — and where it connects to the argument I have made before. Open-source models, decentralized compute, public AI infrastructure: these are not technical preferences. They are structural responses to a structural problem. If the proprietary vision of AI represents, to my mind, the enclosure of the General Intellect, then open models represent the insistence that the commons remain common. If the authentication economy threatens to re-establish guild-like privilege, then public AI infrastructure represents the contemporary equivalent of the promise of liberté du travail — not because it guarantees outcomes, but because it keeps the door open. It ensures that access to the productive capacity of collective human knowledge is not gated by subscription, by credential, or by the accident of social origin.
The Marxist answer and the liberal answer converge here, oddly. Both traditions, for fundamentally different reasons, reject a world where position replaces function. Marx because it is exploitation of the commons — the appropriation of collectively produced knowledge by a narrow class. Hayek because it is illiberal — a system that rewards status rather than utility is, by his own logic, indistinguishable from the aristocratic order that liberalism was designed to replace.
The question is whether we build the institutions that prevent the crystallization — public access to compute, open models as infrastructure, the dismantling of credentialist gatekeeping that no longer serves its original purpose — or whether we watch privilege re-solidify behind the language of “human oversight” and “professional standards” and “trust.”
The jurandes were abolished because a generation recognized that gating economic participation by birth was incompatible with a free society. The new jurandes are forming now, gating participation by credential, network, and institutional affiliation in a world where the cognitive justification for those gates has been automated away.
Whether we abolish them again — or whether we let them harden into the permanent architecture of the AI economy — is not a technical question. It is a political one. And like last time, it will not resolve itself.
But there is, I think, another possibility — one that cuts against everything I have described here. AI does not only concentrate. It also disintermediates. It kills the middlemen. It collapses the layers of mediation — the SaaS platforms, the search portals, the entire infrastructure of access that charged rent for connecting people to things. If that force is strong enough, and if the tools become genuinely open and agentic, the result might not be crystallization at all. It might be something closer to radical individual empowerment — a world where the person with judgment and agency does not need the guild, because they can do the whole thing themselves. That is the subject of the second part of this reflection. The jurandes are forming. But so, perhaps, is the force that could make them irrelevant.

