The Trump Administration’s Artificial Intelligence Rollback Is a Chance to Rethink AI Policy

With federal AI governance on pause, we have a chance to rethink our approach and build a more inclusive, equitable future for AI development.

President Donald Trump holds up a signed executive order in the Oval Office on Jan. 23. (Anna Moneymaker / Getty Images)

The future of federal AI governance seems to be on hold in the United States, but the pause might give us a chance to reorient the focus of our AI policy.

With his flurry of executive orders over the past two weeks, President Donald Trump has fulfilled many of the promises he made on the campaign trail. Beyond frightening and egregious orders, such as those expanding deportations and rolling back DEI programs, he has also revoked the Biden-era Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, known as the AI EO. The order stated, “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”

Among the many schemes put into place since the inauguration, revocation of the AI EO is not necessarily the most pressing, dangerous or earth-shattering—at least, not yet.

Of course, there is now no specific government-wide approach to ethical and responsible AI development. But AI development is arguably unethical at its core: from climate issues to exploitative labor practices, AI’s development, deployment and use are deeply problematic. Its integration into society only exacerbates those negative impacts, a problem the AI EO was attempting to mitigate. We now face a pivotal juncture at which to assess AI.

In 2024, the Republican Party platform vowed to repeal the AI EO, declaring, “Republicans support AI development rooted in free speech and human flourishing,” and Trump quickly fulfilled that promise. AI development rooted in “human flourishing” feels aspirationally impossible, but the phrase opens room to examine a new digital divide that is emerging: the AI divide.

The digital divide is a term used to describe the gap between those with and without digital access, skills and literacy. It is typically treated as a single-axis issue, focused on broadband access and connectivity. (Currently, 94 percent of people in the U.S. have access to the internet.) The new AI divide goes beyond connectivity and encompasses disparities in AI literacy, access to AI-powered tools and algorithmic fairness.

For marginalized communities—especially women in the Global South, Black women and other women of color—the AI divide is compounded by existing structural inequalities. Biased AI systems, from racist hiring algorithms to healthcare models that neglect women’s medical needs, perpetuate exclusion.

Without inclusive (dare I say feminist) tech governance and targeted broadband equity efforts, AI risks reinforcing, rather than alleviating, systemic injustices.

However, with the revocation of the AI EO, we are back to square one, at least from a comprehensive federal perspective. So, how should we be thinking about AI and its development?

Attendee at TechEx Global 2025, a conference about artificial intelligence, big data and digital transformation, in London on Feb. 5, 2025. (Rasid Necati Aslim / Anadolu via Getty Images)

Understanding the AI Divide

It may be difficult to pinpoint a sector of modern society that has not already considered, or is not currently considering, some form of AI integration. But benefits ranging from enhanced data analysis to groundbreaking scientific advances are outweighed by the second layer of exclusion that the AI divide creates.

The AI divide includes the gap between those with and without digital access (internet connectivity and access to technology), digital skills (understanding how to use technology) and digital literacy (the ability to find, evaluate and communicate information online).

AI literacy and skills—similar to digital literacy and skills, but specific to AI—are becoming essential for daily life, yet marginalized groups often face greater barriers to acquiring them, such as underfunded schools or lack of internet access. The divide is particularly harmful for marginalized communities, as AI-driven systems reinforce existing inequities, making it harder to break cycles of systemic disadvantage.

This divide is present across many areas, including healthcare, education and other public services.

In healthcare, AI is being used along the continuum of care, from preventive screenings to treatments. AI could be leveraged to return cancer screening results faster, or models could help identify symptoms the human eye cannot. However, studies have also shown that prediction models underperform for women and for ethnic and racial minorities. Bias extends to care decisions as well: violent white patients are often remanded to hospitals, whereas violent Black patients are disproportionately sent to prison, and Black patients are often excluded from referrals to necessary long-term care programs.

There are no standards for how to train and audit AI models on diverse populations, yet researchers have found that “AI is poised to penetrate routine clinical care over the next decade by replacing or assisting human interpretation.” Without those standards, this integration will simply amplify existing structural inequities, further widening the divide.

AI has also begun to be integrated into telemedicine applications, as well as other public services such as online education and digital financial systems. That alone may not be problematic; some processes were stagnant and needed updating. However, the broadening use of AI threatens to further entrench systemic oppression rather than support the human flourishing the Republican Party invoked. Women and ethnic and racial minorities will feel the worst of the AI divide, with the widening gap serving to increase inequities.

The issues with AI in telemedicine mirror those in the broader healthcare system as it incorporates AI, but telemedicine poses particular barriers for people with low digital literacy. As telemedicine comes to rely heavily on AI, those with low digital literacy may struggle to navigate the platforms they need for crucial healthcare services. Further, many marginalized populations still lack reliable internet and technical skills. As AI-driven health monitoring tools become more prevalent, those populations will be left with inadequate or incomplete care.

AI-powered education tools may be used in schools to track students’ progress or tailor lessons, but students in underserved communities will miss out on those benefits. Some schools use AI-powered tutors or grading systems, which again exclude students without stable access and connectivity or the skills to use the tools. AI may also be used to help address language or cultural barriers, but models that fail to recognize cultural nuance will be of limited use to those students. As schools and programs begin to rely on AI, students without the necessary access, skills and literacy will fall behind.

AI is also not new to banking or digital financial systems. Many financial institutions use AI models to assess creditworthiness, and those models are often trained on biased datasets that disadvantage marginalized communities and people with non-traditional financial histories, ultimately perpetuating existing structural inequities.

Even when models do not rely on biased data, AI-driven banking tools leave people without digital literacy unable to dispute automated decisions. These tools may also begin to shrink traditional, in-person banking infrastructure, leaving communities without local banks or access to the tools excluded from essential financial services.

The Bigger Picture

Any move to integrate AI assumes a baseline level of digital literacy, skills and access that many communities simply do not have. Even for those who have all three, the models themselves perpetuate and exacerbate inequities and systems of oppression. That is the antithesis of human flourishing. The AI divide will only grow; even with Biden’s AI EO in place, it was worsening. AI can no longer be treated solely as a tool for innovation. Blind integration for faster workflows or analysis will ultimately harm marginalized communities and further entrench the systemic inequities we should be working to end.

About

Nina-Simone Edwards is a senior institute associate at Georgetown Law's Institute for Technology Law and Policy, working on the Redesigning the Governance Stack Project. She received her J.D. from Georgetown University Law Center and is currently pursuing a master's in library and information science at Catholic University. All views and opinions expressed are her own.