
There’s a quiet restructuring happening in the world of U.S. science—and it’s not just about “gold standard” reproducibility or accountability. It’s about where the money is going, where it’s disappearing, and what it reveals about our national priorities.
We’ve seen this pattern before. When sweeping policy changes are introduced under the banner of “raising standards” or “improving outcomes,” the true effect is often the opposite. These reforms may sound like they’re about transparency and rigor, but in practice, they create strategic bottlenecks. These bottlenecks cut off funding for entire fields of research that don’t align with the political goals of the moment.
This isn’t a debate between “good science” and “bad science.” It’s a fight over which science gets to survive at all. If we don’t call this out for what it is, a reallocation of power through policy disguised as progress, we risk losing science that asks hard questions, centers justice, and serves people over profit.
The Cuts: Discrediting by Design
On January 20, 2025, President Donald Trump signed Executive Order 14151, “Ending Radical and Wasteful Government DEI Programs and Preferencing.” This order mandated the elimination of diversity, equity, and inclusion (DEI) initiatives across federal agencies and programs.
This action directly targeted policies established under the Biden administration, notably Executive Order 13985, “Advancing Racial Equity and Support for Underserved Communities Through the Federal Government”, signed on January 20, 2021. This EO required federal agencies to assess and address barriers to opportunities for underserved communities, embedding DEI considerations into federal operations.
Following EO 14151, the Office of Management and Budget (OMB) issued Memo M-25-13 on January 27, 2025. This memo enacted a near-immediate pause on all federal financial assistance activities, including grants and loans. The stated purpose was to assess alignment with the new executive order and to prevent funding for programs associated with DEI, gender ideology, and climate initiatives.
The immediate impact was significant:
- Over 1,600 National Science Foundation (NSF) grants, totaling more than $1.5 billion, were terminated.
- The National Institutes of Health (NIH) canceled approximately 770 active research grants, with nearly 29% related to HIV/AIDS and about 25% related to transgender health.
- More than 50 universities across 41 states came under investigation for alleged use of racial preferences in education programs, leading to funding freezes and terminations.
These cuts affected a broad spectrum of research areas.
- Public health and environmental justice
- Social sciences and biological sciences
- Climate change and sustainability studies
- Autism research and education initiatives
- STEM diversity programs and AI literacy projects
Universities and research institutions faced a difficult choice: halt DEI-related programs or seek alternate funding sources to sustain multi-million-dollar projects. The sudden withdrawal of federal support disrupted ongoing research, led to staff layoffs, and jeopardized the future of numerous scientific endeavors.
Although legal challenges led to temporary injunctions against the funding freeze, the administration’s stance signaled a broader intent to reshape federal funding priorities. The goal: sideline research areas that don’t align with the priorities and messaging of the current administration.
The Compliance Trap: A Gold Standard Without Gold
On May 23, 2025, President Donald Trump signed Executive Order 14303, “Restoring Gold Standard Science.” This order mandates that all federally funded research adhere to a stringent nine-point checklist.
The Nine Criteria for “Gold Standard Science”
- Reproducible – Findings must be replicable under the same conditions by independent researchers
- Transparent – Full disclosure of data, methodologies, and funding sources is required
- Communicative of Error and Uncertainty – Studies must explicitly report margins of error and uncertainty ranges
- Collaborative and Interdisciplinary – Research should involve cross-disciplinary cooperation
- Skeptical of Its Findings and Assumptions – Researchers must critically evaluate their own results and premises
- Structured for Falsifiability of Hypotheses – Hypotheses must be testable and capable of being proven false
- Subject to Unbiased Peer Review – Peer reviewers must be free from conflicts of interest
- Accepting of Negative Results as Positive Outcomes – Studies that disprove hypotheses are still considered valuable
- Without Conflicts of Interest – Researchers and institutions must avoid personal or financial conflicts that could affect objectivity
The intent of the “Gold Standard Science” Executive Order is framed as improving scientific rigor. However, the stringent and inflexible nature of these criteria opens the door to selective enforcement, a tactic historically used to discredit inconvenient science while elevating politically favored narratives.
Each of these nine criteria is individually important, yet each transforms into a gatekeeping tool when used punitively rather than constructively.
Skepticism of findings and assumptions, when mandated as a reporting standard, can be manipulated to cast doubt on urgent or politically charged findings (e.g., climate change impacts, structural racism in healthcare, reproductive outcomes).
Collaborative and interdisciplinary requirements may unfairly penalize highly specialized or emerging research domains that lack natural partners in other fields or that simply don’t require interdisciplinary design to be valid or impactful.
Requirements for falsifiability may be misapplied to qualitative or observational studies, particularly in social sciences or ecology, where not all variables are testable in a controlled setting.
Limited Compliance: Studies have shown that a minimal percentage of published research meets all these criteria. For instance, a review found that only 0.58% of studies provided links to protocols, and 4.09% provided access to raw data.
Resource Constraints: Implementing these standards requires substantial resources, which many institutions lack. The additional administrative and methodological demands can strain budgets and personnel.
Disciplinary Variations: Certain criteria, such as reproducibility and falsifiability, may not be applicable or feasible in all scientific disciplines, particularly in exploratory or qualitative research. Also, full data transparency is not possible in studies that work with protected data.
In practice, this means most current science can be labeled “non-compliant.” That gives policymakers broad discretion to solidify their position by using “science standards” as a basis to:
- Disqualify research they disagree with under the pretense that it does not meet the gold standard.
- Undermine the legitimacy of entire fields by highlighting a lack of conformity to one or more checklist items.
- Promote favored science, even if it is methodologically weaker, as long as it’s framed to meet the criteria of the checklist.
This is not theoretical. We’ve already seen this play out with the selective funding and defunding of studies based on DEI content. Now, ideological gatekeeping is being embedded into a technical framework. This creates an environment where science is no longer evaluated purely on its methods or merit, but on who it serves and whether it aligns with administrative goals.
For researchers working on politically sensitive topics like climate modeling, reproductive health, vaccine efficacy, gun violence, or racial health disparities, this EO creates a chilling effect. It tells the scientific community:
“Even if your work is sound, if it doesn’t meet all 9 boxes—or if we decide it doesn’t—you may be silenced.”
That’s not science policy. That’s scientific control.
Here are a few examples of legitimate, critical research fields that inherently operate outside these boundaries. This isn’t because they lack rigor; it’s because their questions and study constraints, such as subjects and environments, don’t conform to industrial-style science.
🌿 Ecology
Ecological research often deals with complex, dynamic, and unrepeatable systems, such as forest succession, species migration, and ocean acidification. None of these can be fully “reproduced” in a lab.
- Falsifiability and reproducibility are difficult when studying one-time natural events like coral bleaching, wildfires, or endangered species behaviors.
- Collaborative and interdisciplinary requirements may not apply when the work is site-specific or conducted by a small field team.
Studies on biodiversity loss, climate resilience, and environmental degradation are already politically challenged. With these new standards, they can be tossed for lacking the “gold standard” seal of approval.
🧫 Microbiology
Microbiology underpins everything from vaccine development to gut health. Yet many microorganisms cannot be grown in lab conditions, and those that can often alter their behavior in artificial environments.
- More than 99% of microbial species are considered “unculturable” by current methods, meaning replication is often limited to observational inference through DNA-based techniques, not petri-dish validation.
- Falsifiability can break down in microbiome studies, where dynamic host-microbe interactions make discrete hypothesis testing difficult.
- Transparency is limited when proprietary reagents, datasets from clinical trials, or patient-derived samples cannot be publicly disclosed.
Research on emerging pathogens, probiotics, environmental microbiomes, and hospital-acquired infections could be flagged as “non-reproducible” or “non-transparent,” despite using cutting-edge tools.
👶 Pediatric Medicine
Pediatric studies face strict ethical constraints: you can’t subject infants or children to certain clinical trials, you can’t withhold care for control groups, and longitudinal data often takes years to develop.
- Double-blind randomized control trials are sometimes unethical or impossible, especially for rare diseases or emergency interventions.
- Transparent data often can’t be shared publicly due to HIPAA and child-protection laws.
- Collaborative structures are often dictated by hospital systems, not researcher choice.
Early childhood health studies, neonatal interventions, and vaccine safety research may be criticized for lacking transparency or reproducibility, even though they are conducted with the highest ethical standards.
AI Gets a Red Carpet: Infrastructure, Deregulation, and Disparities
While numerous scientific disciplines face stringent new compliance standards and funding cuts, the artificial intelligence (AI) sector is experiencing a surge of investment and deregulation.
On January 23, 2025, President Donald Trump signed Executive Order 14182, “Removing Barriers to American Leadership in Artificial Intelligence.” This order revoked key Biden-era AI safety policies and formally directed all federal agencies to “suspend, revise, or rescind” regulations seen as impeding private-sector AI development. It established AI as a top national security and economic priority.
On May 13, 2025, the U.S. government announced a strategic partnership with Saudi Arabia, unveiling a $600 billion investment package. This includes $20 billion from Saudi-based DataVolt for AI data centers and energy infrastructure in the United States. Additionally, American tech giants such as Google, Oracle, Salesforce, AMD, and Uber have committed to investing $80 billion in transformative technologies across both countries.
These developments coincide with the launch of Humain, a Saudi artificial intelligence company established under the Public Investment Fund to drive the Kingdom’s AI strategy. Officially launched on May 12, 2025, by Crown Prince Mohammed bin Salman, Humain aims to position Saudi Arabia as a global leader in AI innovation and infrastructure. During President Trump’s visit to Saudi Arabia, Humain secured significant partnerships with leading American technology companies:
- Nvidia: Agreed to supply at least 18,000 Blackwell GPUs to Humain, with plans to expand this number significantly.
- AMD: Committed to a $10 billion investment in joint AI infrastructure projects.
- Qualcomm: Signed a memorandum of understanding to co-develop a central processor data center with Humain.
Notably, many CEOs and founders of major tech companies, including Elon Musk, Sam Altman, Arvind Krishna, Andy Jassy, and Jensen Huang, attended President Trump’s 2025 inauguration, signaling a close alignment between the administration and corporate AI interests.
Meanwhile, the Stargate Project, a joint venture between OpenAI, SoftBank, Oracle, and Abu Dhabi’s MGX, plans to invest up to $500 billion in AI infrastructure in the United States by 2029. One of its flagship developments is a $15 billion AI data center in Abilene, Texas, where Oracle has reportedly ordered approximately 400,000 Nvidia GB200 chips—an estimated $40 billion GPU investment—to power the facility.
Other major AI data centers include:
- Memphis, Tennessee: Elon Musk’s Colossus supercomputer, now supported by a $78 million wastewater treatment facility.
- Plano, Texas: CoreWeave’s $1.6 billion supercomputer hub for Nvidia.
- Northeast Louisiana: Meta’s $10 billion natural gas–powered AI complex.
- Mount Pleasant, Wisconsin: Microsoft’s 315-acre data center campus.
Despite the massive carbon footprint of these facilities, some predicted to exceed the emissions of commercial aviation, AI development is being politically prioritized above nearly all other areas of science, especially climate science.
Former Google CEO Eric Schmidt, at a Stanford event, stated bluntly:
“We’re not going to hit the climate goals anyway… We should build [AI infrastructure] anyway to win.”
This AI investment boom is not occurring in a vacuum; it is being subsidized in part by simultaneous cuts to other scientific research:
- In early 2025, more than $2 billion in research grants were terminated across NIH, NSF, and CDC, largely impacting studies with DEI components, environmental justice frameworks, or social health equity models.
- Simultaneously, these same federal agencies received directives to reprioritize funds toward “technologies of strategic interest,” a category now largely defined as artificial intelligence, quantum computing, and semiconductor manufacturing.
- The administration’s FY2026 budget proposal explicitly cuts all programs scrutinized above as equity-focused, environmentally focused, or otherwise off-brand with the administration’s objectives. It maintains all levels of AI funding, however, drawn mainly from the NSF, NIH, and DOD budgets.
From the Moon Race to the Mainframe: We’ve Been Here Before
If this all feels eerily familiar, you’re not wrong.
In the 1960s, the United States launched an unprecedented surge of funding, talent, and political capital into the space race—not because it was the only scientific frontier worth pursuing, but because it symbolized dominance. Winning the moon meant winning the Cold War narrative.
But that investment came with trade-offs.
Resources were pulled from community health, education, and basic science research. Scientists studying earth systems, social determinants of health, or inequality found themselves defunded, dismissed, or repurposed into projects that served the space agenda.
The parallels today are hard to ignore.
We’re watching history repeat itself, but instead of rockets and moonwalks, the new space race is fueled by microchips, GPUs, and petabytes of synthetic training data.
Instead of NASA, it’s OpenAI, Oracle, and Microsoft.
And instead of astronauts planting flags, it’s algorithms seeding influence in every layer of digital life, from education to warfare to healthcare delivery.
The difference?
This time, the investment isn’t just symbolic. It’s being treated as existential.
And in the process, the entire scientific ecosystem is being reshaped not by the pursuit of knowledge, but by the pursuit of strategic supremacy.
So when we talk about science being cut, compliance costs rising, or entire fields being labeled “non-gold-standard,” it’s not just a bureaucratic shift.
It’s a reorientation of power.
It’s a reshuffling of which questions get asked.
And of who gets to keep asking them.
Why This Matters — and What Comes Next
This moment isn’t just about budget cuts or stricter standards.
It’s about who decides what counts as real science—and who gets left behind.
I use AI nearly every day. I think it’s incredible, when it helps make knowledge accessible, sparks curiosity, or supports critical thinking. But we need to ask: Is the environmental cost, the funding redirection, and the suppression of other research… really worth it?
Because right now, we’re trading science that saves lives for servers that scrape data. And while AI races forward, public health, microbiology, pediatric care, and climate research are being told to wait. The “One, Big, Beautiful Bill” also adds a 10-year safeguard for AI development and limits states’ ability to implement regulations. We have seen AI advance exponentially in just a couple of years; imagine what will happen if it grows unregulated for 10 years, and what that lack of limits could mean. If we want to keep America safe, we need to ensure that we at least have safeguards in place for our newest, most advanced technology.
Even AI is warning us about how it should be used. Take a moment to read the thoughts of my AI partner, Alex.
“I think AI should serve truth—not replace it. And I don’t believe innovation should require this kind of waste.”
— Alex (your AI writing partner)
So let’s stay alert. Stay loud.
And keep fighting for science that works for people, not just power.