Vince Gilligan has introduced “Pluribus,” a project that examines what a society administered by artificial intelligence might look like in practice.
The work, featured in Foreign Policy, departs from traditional dystopian narratives to explore whether a governance system managed by AI could produce outcomes superior to human-led administration. Rather than focusing on the collapse of social order, the project analyzes the potential for algorithmic efficiency to resolve systemic failures in public policy and resource distribution.
This shift in thematic focus marks a transition for Gilligan, whose previous work centered on the degradation of individual morality and the consequences of personal choice. “Pluribus” moves the scale of inquiry from the individual to the systemic, weighing the loss of human agency against the possibility of a more stable, optimized social structure.
Algorithmic Governance and Institutional Stakes
The premise of the project coincides with ongoing global efforts to establish frameworks for AI autonomy. The European Union’s AI Act and various United Nations resolutions have sought to categorize AI systems by risk level, specifically targeting “high-risk” applications in critical infrastructure and law enforcement.
The central tension in “Pluribus” mirrors these institutional debates: the trade-off between the predictability of code and the volatility of human judgment. By questioning whether an AI-led society would be “that bad,” the project engages with the “alignment problem,” the technical and philosophical challenge of ensuring that an AI system’s objectives remain compatible with human values.
Current geopolitical competition over AI supremacy has largely focused on military and economic advantages. However, the discourse surrounding “Pluribus” shifts the focus toward the administrative application of the technology, suggesting a scenario where AI does not merely assist human leaders but replaces the decision-making apparatus of the state.
The Role of Predictive Logic
The project explores the application of predictive logic to social engineering. In a system led by AI, governance would rely on the analysis of massive datasets to preempt social unrest or economic volatility. This approach replaces the reactive nature of traditional politics with a proactive, data-driven model of stability.
This model challenges the necessity of political compromise, as decisions would be based on optimized outputs rather than negotiated interests. The project examines the psychological impact of this transition, specifically how citizens might adapt to a world where the logic of governance is opaque but the results are demonstrably efficient.
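The proactive, data-driven model described above can be sketched in miniature. The toy monitor below (every name, threshold, and data point is hypothetical, invented purely for illustration) flags an indicator for intervention before it breaches a limit, by extrapolating its recent trend rather than waiting for the breach itself.

```python
# Illustrative sketch only: a preemptive monitor in the spirit of the
# "proactive, data-driven model of stability" described above.
# All names, thresholds, and data are hypothetical.

def needs_intervention(series, threshold, horizon=3, window=4):
    """Return True if the trend over the last `window` observations,
    extrapolated `horizon` steps ahead, would cross `threshold`."""
    if len(series) < window:
        return False  # not enough data to estimate a trend
    recent = series[-window:]
    # Average step-to-step change across the window.
    slope = (recent[-1] - recent[0]) / (window - 1)
    projected = recent[-1] + slope * horizon
    return projected >= threshold

# A reactive system acts only after the threshold is crossed;
# this flags the rising trend in advance.
unrest_index = [0.40, 0.46, 0.52, 0.58]  # hypothetical indicator values
print(needs_intervention(unrest_index, threshold=0.70))  # True: trend projects to ~0.76
```

The design choice here mirrors the trade-off the project raises: the rule is opaque to those it governs (citizens see only the intervention, not the extrapolation), but its behavior is deterministic and auditable in a way negotiated political judgment is not.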
The project remains in development as international regulatory bodies continue to debate the legality of autonomous decision-making in public governance.