
Federal Judge Rules DOGE's Use of ChatGPT to Vet Grants Was Unconstitutional, Slams $100M Cancellation

By Satoshi Itamoto • May 9, 2026

In a landmark decision, US District Judge Colleen McMahon has ruled the Department of Government Efficiency's cancellation of more than $100 million in grants unconstitutional. The 143-page ruling criticizes DOGE's unorthodox method of using ChatGPT to assess whether grant proposals aligned with diversity, equity, and inclusion (DEI) principles. According to Judge McMahon, the approach was not only misguided but illegal, because it disqualified grants based on the presence of specific, protected characteristics.

The lawsuit, filed in 2025 by humanities groups, challenged the abrupt termination of funding from the National Endowment for the Humanities (NEH). The plaintiffs argued that the cancellation process lacked transparency and was biased against projects promoting diversity and inclusion. Judge McMahon's ruling vindicates those concerns, finding that DOGE's use of ChatGPT to vet grants amounted to an unconstitutional approach to funding allocation.

The implications of this ruling extend beyond the immediate restoration of the cancelled grants. It sets a significant precedent for how government agencies can and cannot use AI tools in decision-making processes, particularly when it comes to sensitive issues like diversity and inclusion. For everyday users, this could mean a more transparent and fair allocation of public funds, as agencies will be required to adopt more nuanced and human-centric approaches to grant evaluation.

From an industry perspective, the ruling highlights the need for caution when integrating AI into critical decision-making processes. While tools like ChatGPT can be powerful, they are not substitutes for human judgment and empathy. DOGE's attempt to use ChatGPT as a shortcut for evaluating complex social issues serves as a stark reminder of the limitations and potential biases of AI systems.

As the tech industry continues to push the boundaries of AI innovation, this ruling offers a timely reminder of the importance of responsible AI development and deployment. It underscores the need for developers, policymakers, and users to work together to ensure that AI systems are designed and used in ways that promote fairness, transparency, and human values.

The broader market and societal effects of this ruling will likely be significant, as it challenges the status quo of AI-driven decision-making in the public sector. It may lead to a reevaluation of how AI is used in various government agencies, with a greater emphasis on human oversight and ethical considerations. Ultimately, this shift could reshape how we think about the role of AI in public policy, promoting a more balanced and equitable approach to technology adoption.