There is a certain kind of gravity that accompanies the halls of the Real Academia de Jurisprudencia y Legislación (RAJL). It is a space where the ink of the law meets the weight of history and where the most influential legal minds in Spain gather to shape the future of the judiciary. When Manuel Marchena—a man whose name has become synonymous with some of the most contentious legal battles in modern Spanish history—stepped into this sanctuary, it wasn’t just a professional milestone. It was a statement.
For those who have followed the trajectory of the Spanish courts, Marchena is more than a judge; he is a lightning rod. From his leadership of the Supreme Court’s Criminal Chamber to his presidency of the trials surrounding the Catalan independence movement, his career has been a masterclass in navigating the intersection of law and political volatility. His induction into the Academy signals a transition from the heat of the courtroom to the reflective heights of legal scholarship, but the themes he chose to highlight during his entry suggest he isn’t ready to stop fighting—he’s just changing the battlefield.
The core of the matter isn’t merely the prestige of the appointment. It is Marchena’s urgent warning regarding the digital frontier. In an era where algorithms can predict recidivism and Large Language Models can draft briefs, Marchena is sounding the alarm: the law cannot be a passive observer of technology. He argues that while Artificial Intelligence (AI) can assist, it can never legitimize a criminal investigation. To let a machine dictate the “truth” of a case is not progress; it is a surrender of judicial sovereignty.
The Algorithmic Gavel: Why AI Cannot Be the Final Arbiter
Marchena’s entry into the RAJL focused heavily on the danger of “technological shortcuts” in criminal proceedings. The risk he identifies is the seductive nature of efficiency. In a judicial system often bogged down by bureaucracy, the promise of an AI that can scan millions of documents or identify patterns in evidence is intoxicating. Still, Marchena posits that the “human element”—the ability to weigh intent, nuance, and moral culpability—is precisely what AI lacks.

This isn’t just a theoretical concern. Across Europe, the implementation of the EU AI Act is attempting to create a risk-based framework for AI, specifically categorizing “justice and democratic processes” as high-risk areas. Marchena’s stance aligns with the most cautious interpretations of this law: that AI must remain a tool for the judge, never the judge itself.
The danger lies in “automation bias,” where human operators trust the output of a machine even when it contradicts their own senses. In a criminal trial, where liberty is at stake, a “black box” algorithm that cannot explain how it reached a conclusion is an affront to the right to a fair trial. As Marchena noted, “not everything is permitted” in the pursuit of a conviction, and the use of unverified tech to justify a search or an arrest is a line that must not be crossed.
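The contrast between an auditable score and a black box can be made concrete. The toy sketch below is purely illustrative: the feature names, weights, and defendant data are invented, and no real risk-assessment tool works this simply. The point it demonstrates is Marchena’s: a transparent model can show a judge exactly which input drove the score, whereas an opaque one offers only a number to trust or reject.

```python
import math

# Hypothetical weights for a toy linear risk model -- illustrative only,
# not derived from any real judicial tool or dataset.
WEIGHTS = {
    "prior_convictions": 0.8,
    "age_at_first_offense": -0.05,
    "months_since_release": -0.02,
}
BIAS = -1.0

def explain(features):
    """Per-feature contributions a judge (or auditor) could inspect."""
    return {name: WEIGHTS[name] * features[name] for name in WEIGHTS}

def risk_probability(features):
    """Logistic score built from the same auditable contributions."""
    z = BIAS + sum(explain(features).values())
    return 1.0 / (1.0 + math.exp(-z))

# Invented example defendant.
defendant = {
    "prior_convictions": 3,
    "age_at_first_offense": 19,
    "months_since_release": 6,
}

print(explain(defendant))          # each term of the score is visible
print(round(risk_probability(defendant), 3))
```

A black-box model would return only the final probability; here, the `explain` step is what makes cross-examination of the output possible at all.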
Bridging the Gap Between Ancient Codes and Neural Networks
The tension Marchena highlights is a macro-trend affecting judiciaries worldwide. We are witnessing a collision between the gradual, deliberate pace of jurisprudence and the exponential speed of silicon. The “Information Gap” in current legal discourse is often the lack of technical literacy among senior judges, which leads to either blind trust or blanket rejection of new tools.
To understand the gravity of this, we look to the broader European legal landscape. The European Court of Human Rights has frequently grappled with the admissibility of digital evidence. The challenge is that law is based on precedent and stability, while AI is based on probability and iteration. When these two worlds clash, the result is often a legal loophole that can be exploited by both the state and the defense.
“The challenge for the modern judiciary is not simply to adopt AI, but to govern it. We are moving toward a ‘hybrid justice’ where the ability to audit an algorithm will be as important as the ability to cross-examine a witness.”
This perspective, shared by leading digital rights analysts, underscores why Marchena’s focus is so timely. By bringing this debate into the Real Academia, he is pushing the Spanish legal elite to move beyond the “wow factor” of AI and start drafting the restrictive frameworks necessary to protect civil liberties.
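What “auditing an algorithm” means in practice can be sketched in a few lines. One standard check is whether a tool’s false-positive rate—how often it wrongly flags people who do not go on to reoffend—differs across groups. The records below are synthetic and the group labels hypothetical; a real audit would draw on actual case outcomes.

```python
from collections import defaultdict

# Synthetic audit records: (group, predicted_high_risk, actually_reoffended).
# Invented for illustration; a real audit would use documented case files.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, True), ("B", True, True), ("B", False, False), ("B", False, False),
]

def false_positive_rate(rows):
    """Share of non-reoffenders the tool wrongly flagged as high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return 0.0
    return sum(1 for r in negatives if r[1]) / len(negatives)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in sorted(by_group.items()):
    print(group, round(false_positive_rate(rows), 2))
```

A gap between the two groups’ rates is exactly the kind of finding a court would need disclosed before admitting such a tool—the statistical analogue of cross-examining a witness.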
The Political Echoes of a Scholarly Induction
One cannot ignore the optics of Marchena’s move into the Academy. For his critics, he is the architect of a restrictive legal regime; for his supporters, he is the bulwark against institutional collapse. His induction into the RAJL provides him with a new platform—a scholarly shield—from which to influence the long-term direction of Spanish law.

By pivoting his public discourse toward the “neutral” ground of technology and AI, Marchena is effectively expanding his legacy. He is moving from being a judge of specific, politically charged cases to becoming a theorist on the future of the state. This represents a classic power move in the legal world: transitioning from the application of the law to the definition of the law.
The winners in this transition are the institutions that seek a modernized, yet disciplined, approach to digital evidence. The losers are those who hoped the “old guard” of the judiciary would remain ignorant of the digital shift. Marchena is proving that the establishment is not only watching the tech revolution—it is preparing to regulate it.
The Verdict on the Future of Justice
Manuel Marchena’s entry into the Real Academia de Jurisprudencia y Legislación is a reminder that the most powerful tool in any courtroom isn’t a piece of software, but a well-reasoned argument. His warning is clear: the law must adapt, but it must not be consumed. If we allow the efficiency of AI to replace the rigor of the law, we aren’t optimizing justice—we are automating injustice.
As we move toward a world where “deepfakes” can challenge the validity of video evidence and AI can predict a defendant’s likelihood of re-offending, the human judge becomes the last line of defense. The question is no longer whether AI will enter the courtroom, but whether the people wearing the robes have the courage to say “no” when the machine is wrong.
What do you think? If an AI could analyze 10,000 hours of evidence in seconds but couldn’t explain its reasoning, would you trust it to help decide a verdict? Let’s discuss the ethics of the “algorithmic gavel” in the comments below.