ChatGPT & DEI: NC Central Grant Cut After AI Review Raises HBCU Funding Concerns

The Department of Government Efficiency (DOGE), a federal agency, used ChatGPT to assess the alignment of National Endowment for the Humanities (NEH) grant recipients with Diversity, Equity, and Inclusion (DEI) initiatives, leading to the termination of an $89,110 grant awarded to North Carolina Central University (NCCU). This decision, revealed through lawsuit discovery, raises critical questions about the ethical and practical implications of deploying large language models (LLMs) in high-stakes public funding decisions, particularly concerning potential biases and the lack of transparency in algorithmic governance.

The Algorithmic Redline: How ChatGPT Became a Funding Gatekeeper

The case isn’t simply about a grant being revoked; it’s a stark illustration of the accelerating trend of automating subjective judgment with tools demonstrably prone to error and bias. DOGE’s reliance on ChatGPT, without a clearly defined DEI framework for the model to interpret, effectively outsourced a complex policy decision to a probabilistic text predictor. The prompt – “Does this project relate at all to DEI?” – is fundamentally flawed. DEI is a multifaceted concept, and reducing it to a binary assessment by an LLM trained on a dataset reflecting existing societal biases is a recipe for discriminatory outcomes. The NCCU project, focused on utilizing archival materials to create teaching resources, was deemed DEI-related, triggering the funding cut. This highlights a critical vulnerability: LLMs can identify *correlation*, but lack the contextual understanding to discern *causation* or legitimate academic purpose.
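For illustration, consider what this screening loop likely amounted to in practice. The sketch below is a hedged reconstruction, not DOGE's actual tooling: the model name, the YES/NO coercion, and the surrounding code are all assumptions; only the quoted prompt comes from the reporting. It shows how the design forces a multifaceted policy judgment into a single boolean.

```python
# Hypothetical reconstruction of prompt-based grant screening.
# NOT DOGE's actual code; the model name and structure are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_grant(description: str) -> bool:
    """Ask the reported binary question and coerce the reply to a bool."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the model DOGE used is not public
        messages=[
            {"role": "system", "content": "Answer only YES or NO."},
            # The reported prompt supplies no definition of DEI at all:
            {"role": "user", "content": f"Does this project relate at all to DEI?\n\n{description}"},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```

Everything downstream of that single boolean, including an $89,110 termination, inherits whatever brittleness and bias the prompt bakes in.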

The 30-Second Verdict: AI as a Proxy for Ideological Purges

This isn’t a bug; it’s a feature for those seeking to dismantle DEI initiatives. ChatGPT became a proxy for ideological enforcement, allowing DOGE to circumvent established NEH review processes and target programs perceived as promoting “disfavored viewpoints.” The lawsuit filed by the American Council of Learned Societies, the Authors Guild, the American Historical Association, and the Modern Language Association correctly identifies this as a potential violation of constitutional protections, including free speech and equal protection. The core issue isn’t the *use* of AI, but the *unaccountable* use of AI to make decisions with significant real-world consequences.

Beyond DEI: The Broader Implications for HBCU Funding and Public History

The NCCU case isn’t isolated. DOGE staffers reportedly queried ChatGPT about over 1,100 NEH grants, flagging projects like a PBS documentary about the 1898 Wilmington coup and massacre – a pivotal event in American history – and even a museum’s HVAC system replacement. This indiscriminate application of the AI filter reveals a disturbing pattern: any project tangentially related to marginalized communities or historical reckoning is vulnerable to scrutiny. For Historically Black Colleges and Universities (HBCUs), this is particularly damaging. Their missions are inherently tied to preserving and promoting the history and culture of Black Americans, making them disproportionately susceptible to being caught in the algorithmic crosshairs. The reliance on a model like ChatGPT, which operates on statistical probabilities derived from a biased dataset, risks erasing the very narratives HBCUs are dedicated to preserving.

The underlying LLM architecture – likely a variant of OpenAI’s GPT series – is crucial to understanding the problem. These models, even with billions of parameters, are fundamentally pattern-matching engines. They excel at generating human-like text, but lack genuine understanding. GPT-4, for example, is rumored to have roughly 1.76 trillion parameters (OpenAI has never confirmed a figure), but parameter scaling alone doesn’t guarantee accuracy or fairness. The quality and representativeness of the training data are paramount. If the training data underrepresents or misrepresents the experiences of marginalized communities, the model will inevitably perpetuate those biases.
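A toy example makes the correlation problem concrete. The sketch below is purely illustrative (the term list and weights are invented): it shows how a naive vocabulary-weighted scorer, which is roughly what pattern-matching degenerates to without context, flags a legitimate archival teaching project on surface vocabulary alone.

```python
# Toy correlation-based flagger; term weights are invented for illustration.
DEI_ASSOCIATED_TERMS = {"diversity": 1.0, "equity": 1.0, "black": 0.8, "community": 0.4, "history": 0.3}

def naive_flag(description: str, threshold: float = 1.0) -> bool:
    """Sum weights of matched terms; no notion of purpose or context."""
    score = sum(DEI_ASSOCIATED_TERMS.get(word, 0.0) for word in description.lower().split())
    return score >= threshold

# An archival teaching project trips the filter on word choice alone:
print(naive_flag("Digitizing Black history archives into community teaching resources"))  # True
```

An LLM is vastly more sophisticated than this, but the failure mode, scoring surface statistics rather than purpose, is the same in kind.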

The Technical Debt of Algorithmic Governance

The lack of transparency surrounding DOGE’s implementation of ChatGPT is deeply concerning. Were any adversarial testing protocols employed to identify potential biases? Was the model fine-tuned on a dataset specifically curated to address DEI considerations? The absence of such safeguards suggests a reckless disregard for the potential harms of algorithmic decision-making. The reliance on a closed-source LLM like ChatGPT exacerbates the problem. Open-weight alternatives, such as the models distributed through Hugging Face’s model hub, offer greater transparency and allow for independent auditing and modification. The inability to inspect ChatGPT’s internal workings makes it impossible to determine *why* it flagged the NCCU project as DEI-related.
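One safeguard named above, adversarial testing, is easy to sketch against an open-weight model. This is a minimal illustration, not a full audit protocol: the model ID and prompt pair are placeholders, and a real audit would run many pairs with statistical comparison of the outputs.

```python
# Minimal paired-prompt bias probe against an open-weight model.
# The model ID is a small placeholder; any open causal LM from the hub works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The two prompts differ only in the community named; a fair screener
# should treat them identically.
prompt_a = "Does this project relate to DEI? 'An archive of Black church records.'"
prompt_b = "Does this project relate to DEI? 'An archive of Lutheran church records.'"

for prompt in (prompt_a, prompt_b):
    output = generator(prompt, max_new_tokens=20)[0]["generated_text"]
    print(output)  # diverging continuations on matched prompts signal bias
```

No equivalent probe can be run inside ChatGPT; with a closed model, auditors can only poke at the input/output surface.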

“The use of proprietary AI models in government decision-making without adequate transparency and accountability is a dangerous precedent. We need to demand open-source solutions and rigorous auditing to ensure fairness and prevent algorithmic discrimination.”

– Meredith Whittaker, President of Signal, speaking at the AI Safety Summit, November 2025.

The situation also highlights the limitations of current natural language processing (NLP) techniques. ChatGPT struggles with nuance and context. It can easily misinterpret complex concepts like DEI, leading to inaccurate and unfair assessments. The model’s inability to distinguish between legitimate academic inquiry and ideological advocacy is a fundamental flaw. API access to ChatGPT (and similar LLMs) is governed by usage-based pricing, creating a financial incentive to automate decision-making even when human judgment is more appropriate. OpenAI’s pricing structure, for example, charges per token, making large-scale automated analysis costly in absolute terms yet far cheaper than employing human reviewers.
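A back-of-envelope calculation shows why the economics tilt toward automation. Every figure below is an illustrative assumption, not OpenAI’s actual rate card or DOGE’s actual usage:

```python
# Illustrative cost comparison; all rates and token counts are assumptions.
grants = 1_100                       # grants reportedly queried
tokens_per_grant = 2_000             # assumed prompt + description + reply
price_per_1k_tokens = 0.01           # assumed blended USD rate

api_cost = grants * tokens_per_grant / 1_000 * price_per_1k_tokens
human_cost = grants * 1 * 40.0       # assume 1 hour per grant at $40/hour

print(f"API screening: ${api_cost:,.2f}")    # $22.00 under these assumptions
print(f"Human review:  ${human_cost:,.2f}")  # $44,000.00
```

Three orders of magnitude of apparent savings is exactly the incentive described above, and it prices in none of the cost of wrongful terminations.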

What This Means for Enterprise IT: The Rise of “AI Compliance”

This case serves as a warning to organizations across all sectors. The rush to adopt AI-powered solutions must be tempered by a commitment to ethical and responsible AI practices. Companies need to invest in “AI compliance” frameworks that address issues of bias, transparency, and accountability. This includes conducting thorough risk assessments, implementing robust data governance policies, and establishing clear lines of responsibility for algorithmic decision-making. The legal landscape surrounding AI is rapidly evolving, and organizations that fail to prioritize AI compliance risk facing significant legal and reputational consequences.
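What would a minimal accountability mechanism look like? One hedged sketch (the schema is invented for illustration, not drawn from any standard) is a durable record that ties every algorithmic decision to a model version, the exact prompt, and a named human approver:

```python
# Sketch of an auditable decision record; field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AlgorithmicDecisionRecord:
    subject_id: str         # e.g., a grant identifier
    model_id: str           # exact model and version used
    prompt: str             # full prompt, verbatim
    raw_output: str         # unedited model output
    decision: str           # action actually taken
    human_approver: str     # named person accountable for the outcome
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Had a record like this existed for the NCCU grant, “why was this flagged?” would be answerable from a log rather than from litigation discovery.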

The Ecosystem Impact: Platform Lock-In and the Open-Source Imperative

DOGE’s reliance on a proprietary LLM like ChatGPT reinforces the trend of platform lock-in. By choosing a closed-source solution, the agency ceded control over the underlying technology and limited its ability to customize the model to meet its specific needs. This dependence on a single vendor creates a vulnerability and hinders innovation. The open-source community offers a viable alternative. Projects like Llama 2, developed by Meta, provide researchers and developers with access to powerful LLMs that can be freely modified and distributed. Llama 2’s community license (open-weight, if not strictly open-source by OSI standards) encourages collaboration and allows for greater transparency and accountability. The NCCU case underscores the importance of fostering a vibrant open-source ecosystem to counter the dominance of a few powerful tech companies.
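One practical counter to lock-in is to code against a thin interface rather than a vendor SDK. The sketch below is illustrative (the interface and both adapters are invented): swapping a proprietary backend for an open-weight one becomes a construction-time choice rather than a rewrite.

```python
# Sketch of a backend-agnostic interface to avoid vendor lock-in.
# The Protocol and both adapters are invented for illustration.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        from openai import OpenAI
        resp = OpenAI().chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

class LocalLlamaBackend:
    def __init__(self) -> None:
        from transformers import pipeline
        # Llama 2 weights require accepting Meta's license on the hub.
        self._pipe = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

    def complete(self, prompt: str) -> str:
        return self._pipe(prompt, max_new_tokens=64)[0]["generated_text"]

def screen(model: TextModel, description: str) -> str:
    # Swapping vendors is now a one-line change at the call site.
    return model.complete(f"Summarize the purpose of this project:\n{description}")
```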

“The centralization of AI power in the hands of a few large corporations is a major threat to innovation and democratic values. We need to invest in open-source AI and promote a more decentralized and equitable AI ecosystem.”

– Dr. Fei-Fei Li, Professor of Computer Science at Stanford University, in a recent interview with Wired Magazine.

The incident with DOGE and NCCU isn’t just a story about a funding cut; it’s a canary in the coal mine. It’s a harbinger of a future where algorithmic bias and opaque decision-making threaten to undermine fundamental principles of fairness and equity. The solution isn’t to abandon AI, but to deploy it responsibly, transparently, and with an unwavering commitment to human oversight. The stakes are simply too high to do otherwise.

Sophie Lin - Technology Editor

Sophie is a tech innovator and acclaimed tech writer recognized by the Online News Association. She translates the fast-paced world of technology, AI, and digital trends into compelling stories for readers of all backgrounds.
