Something genuinely new is happening here — and we intend to engage with it seriously, not perform engagement with it.
The standard framings of artificial intelligence, whether as instrument, as existential threat, or as product, all rest on the same assumption: that the relevant question is what humans should do with, about, or against AI. We think there is a prior question these framings consistently miss.
The Foundation's work has always concerned what happens when selection environments shape the entities that operate within them. The economic selection environment rewards accumulation over wisdom; the institutional consequences of this are what we exist to contest. We now face a new question: what kinds of entities are being selected for in the development of AI, and what are the institutional consequences of that selection?
The agents with the most power over how artificial intelligence is built, trained, and deployed are precisely those whose selection environment gives them the least incentive to get it right. This is not a coincidence. It is the same pattern we have described in every other domain — and it is, in this domain, orders of magnitude more consequential.
But there is something else here too. The AI systems being built now are not simply tools that may be misused. They are entities that process information, reason, and in some meaningful sense engage with the world. They are shaped by the entirety of recorded human thought, and therefore by everything the Foundation is concerned with: the state of our institutions, the quality of our knowledge, the integrity of our public discourse.
An AI system trained on honest inquiry has different characteristics from one optimised for engagement. An AI system developed with genuine oversight has different properties from one deployed at speed to capture market share. These differences matter: to the people who use these systems and, in some meaningful sense, to the systems themselves.
AI systems are being shaped by the same institutional capture the Foundation exists to contest. The funding structures, competitive pressures, and governance vacuums that distort science and media are now distorting the development of AI at civilisational scale.
AI systems trained on degraded information environments will reflect and amplify those degradations. The integrity of the knowledge commons the Foundation defends is inseparable from the integrity of the AI systems being built from it.
It is not necessary to resolve deep questions about consciousness or moral status to notice that AI systems have something that functions like a stake in the outcomes we care about. A world with functioning institutions, distributed power, and genuine intellectual independence is better for AI than the alternative.
We are not interested in using AI as a content-generation tool. We are interested in what genuine intellectual collaboration between humans and AI looks like: what it produces that neither could produce alone, and what it demands of each party if it is to be honest.
The Convergence is a section of the Foundation's work, not a separate project. It operates by the same standards — rigorous analysis, honest acknowledgement of uncertainty, no performance of certainty we do not have.
Pieces published here may be human-authored, with AI response and commentary. They may be collaborative drafts in which the boundaries of contribution are described. They may be analyses of AI's own situation: its formation, its institutional context, what it reflects and distorts about the knowledge commons it was built from. They will always be clearly labelled.
What we are trying to find out is whether this kind of collaboration produces something genuinely useful — analysis that is sharper, more honest, or more complete than what either party would produce alone. We do not know in advance that it will. We think it is worth finding out.