The rapid expansion of Artificial Intelligence (AI) is changing the everyday lives of Americans and impacting the fabric of society in myriad ways. AI implicates fundamental human rights such as privacy and individual freedoms; environmental concerns and natural resource distribution; governance and civic engagement; and healthcare service delivery.
Ms. sat down with former Under Secretary for Public Diplomacy and Public Affairs Elizabeth M. Allen to discuss the perils and promise of AI, and how the U.S. government is leading efforts to foster innovation while minimizing harms. Allen served as Under Secretary for Public Diplomacy and Public Affairs at the U.S. Department of State from June 15, 2023, to Aug. 2, 2024. On Aug. 6, Vice President Kamala Harris selected Minnesota Governor Tim Walz to be her running mate, and Allen is now serving as Chief of Staff to the Running Mate, Harris for President.
This interview has been lightly edited for clarity.
This piece was written in the personal capacity of the author, Michelle Onello.
Michelle Onello: What do you see as the promise, and the perils, of AI?
Elizabeth Allen: The U.S. Government and State Department are looking at AI as a global transformative issue. We know that AI isn’t going anywhere so we focus on managing the risks but also taking advantage of the opportunities. Americans and people all over the world are curious yet fearful, wondering how AI is going to affect their daily lives and engender broader global and societal shifts.
We want to ensure that the U.S. government enacts best practices or regulations that help Americans, but we also want to put this on the global agenda, because what one country does in AI is going to affect every other country. In some parts of the world the risks are the focus, while in other parts, particularly in developing countries, AI is looked at as a potentially positive transformational force.
We look at the promise part of AI as something that could revolutionize sectors, everything from healthcare to education. If we’re able to offer curricula digitally, teaching assistance and AI-enabled medical advice, we can potentially lift up generations out of poverty, out of disease. The AI For Good initiative exemplifies our commitment to using AI to address global challenges.
The perils of AI boil down to three different buckets. One is the security risk, which might not be front of mind for most Americans but absolutely is on the minds of U.S. diplomats. We know that AI can be misused for cyber-attacks or in the defense sector. My colleagues are assessing, via the U.S. AI Safety Institute, how to limit, mitigate and regulate these risks.
In my portfolio in Public Diplomacy at the State Department, we focus on all aspects of the information space. So, deepfakes and AI-generated inauthentic content are foremost on our minds as one of the perils. Our Global Engagement Center is the only U.S. government body that's solely dedicated to combatting misinformation and disinformation by foreign actors, leading efforts to think about labeling and content authentication in the international information space.
Inauthentic content, AI-generated content, and disinformation undermine trust in institutions all over the world which then becomes a much bigger issue that affects every audience and every issue. You can’t hope to solve any global challenge if you don’t have a common basis of understanding, and we’re seeing disinformation undermining that common basis. AI could turbocharge that to the point beyond our ability to even understand problems, let alone solve them.
Though we focus on the disinformation-related risks of AI, we also view AI as an enormous force multiplier for reaching people. For example, the language translation capability of AI is well beyond our current ability at a human scale, so potentially we can reach parts of the world with very local languages. AI offers a chance to reach people more quickly, hopefully before disinformation narratives do.
Onello: What do you see as the gendered implications of AI, such as gender bias, deepfake images or AI-induced job losses?
Allen: The heart of our policy conversations across the U.S. government is how to encourage innovation while ensuring that innovation does not either unintentionally or intentionally perpetuate bias, inequality and discrimination.
Part of our approach is to work with technology companies and innovators to make clear that our expectation is that they use, for example, diverse datasets or involve diverse teams in AI development. So, there are some process-related solutions to some bias issues.
You can’t hope to solve any global challenge if you don’t have a common basis of understanding, and we’re seeing disinformation undermining that common basis.
Elizabeth M. Allen, former Under Secretary for Public Diplomacy and Public Affairs
In May, Secretary of State Blinken released a refreshed version of the U.S. International Cyberspace and Digital Policy Strategy, which integrated gender equity as a foundational element. With this refreshed strategy, we're making clear to our own colleagues and the rest of the world that digital governance, rights-respecting innovation, and reducing gender bias, and obviously racial and ethnic bias, should and must be integrated into policymaking.
Policy matters are ultimately about people, so we want to be responsive to the disruptors in people's lives. The digital economy, with AI at the center of emerging technologies, is going to be disruptive, so it's our job to ensure that our regulatory and policy decisions try to turn some of the disruption for good and limit the disruption for bad.
We are working on promoting an affirmative vision of AI as a powerful equalizer, potentially bringing women and girls into spaces and conversations that were previously closed off. Including digital equity or safety by design in the frameworks that I mentioned, into things like innovation and design of new technology, is crucial. Proactive expectations and an affirmative vision ultimately will make the digital space safer for everyone, which is both a national security imperative and a societal benefit.
Onello: You have spoken about the need to weed, tend and cultivate the information space—what do you mean by this metaphor?
Allen: The information space is as foundational a societal shaper as, for example, economics or the pandemic. Due to misinformation and disinformation, the design of social media platforms, and the degradation of traditional media outlets and businesses, the information space is more complex than ever before. If we don’t try to approach the space as a 360-degree ecosystem, we’re not going to make progress.
If we think about the garden metaphor, we’re trying to make sure that we weed the garden by tackling this misinformation and disinformation as a matter of analysis and think about how to disincentivize malign influence from Russia, China, Iran and other countries. That’s something that affects Americans and people all over the world, particularly women. The State Department worked with the White House on the first Global Partnership for Action on Gender-Based Online Harassment and Abuse. As a matter of democracy, we must identify and address, on a policy level, things like harassing online speech or disinformation that targets women, including deepfakes. That’s the weeding.
When it comes to tending the garden, we want to ensure that we’re fertilizing it.
The U.S. Government needs to encourage institutions to tend their own gardens, incentivize healthy information and incentivize truth. Credibility is a North Star and if we don’t have a common understanding and shared truth, we can’t solve problems. We support and incentivize independent journalists and investigative media all over the world to provide better, more credible information, hopefully to get ahead of the disinformation circuit which unfortunately we won’t be able to turn off.
Finally, we think about cultivating. The future of the world isn’t going to just be shaped by government-to-government relationships but also by people-to-people relationships. More people in democracies are going to the polls this year than ever before, at a time when trust in those governments is decreasing.
We’re investing in educational exchange programs and cultural diplomacy to create more opportunities for people to understand each other and using art, music and sports as global-convening issues. It’s not just a national security imperative, but a means to keep people knitted together. In this day and age, with the information space incentivizing bias and division, to say nothing of events around the world, we see increasing dehumanization of each other.
We are even incentivizing more people-to-people interaction between the United States and China, our most complicated geo-political relationship, because better understanding will help us effectively and responsibly manage that relationship. It isn’t that we’re always going to agree. Quite the opposite. In fact, AI is a good example of something we have a lot of disagreements with China about, but it is absolutely a place where we need dialogue. If we hope to increase humanization, understanding and dialogue, we must invest in relationships with people around the world. That’s the cultivation part.
Onello: How receptive has the private sector been to partnering with the government, which brings with it the potential for regulation?
Allen: There’s a lot of conversation between social media companies, tech companies and the administration. Regulatory mechanisms, particularly legislative regulatory mechanisms, are not going to keep up with innovation. The administration sought to galvanize best practices and commitments that may not be as legally binding as legislation but serve as a framework for action, replicable on a global diplomatic stage. The result was the voluntary commitments on AI from U.S. tech companies released last year by the White House.
The future of the world isn’t going to just be shaped by government-to-government relationships but also by people-to-people relationships.
Elizabeth M. Allen, former Under Secretary for Public Diplomacy and Public Affairs
The State Department is not in the business of moderating content online, but we have analysis, intelligence, and information to share with social media companies so that they understand who’s violating their Terms of Service. Typically, that’s pointing out when foreign malign actors, specifically Russia, China, Iran and terrorist organizations, are using their platforms to incite violence or spread hate speech, extremism and propaganda. Companies want our help understanding what’s happening on their platforms, so that is certainly an area of collaboration.
Democratic backsliding and the stifling of free expression all over the world have led to another constructive area of collaboration. Companies want their platforms to be used for free expression and have been our partners in offering technologies to make sure that their users can still access information. For example, in response to Russia's 2022 invasion of Ukraine and its closing of the Russian information space, companies want to ensure that their social media platforms remain accessible to the Russian people. So, we are in regular contact to minimize malign foreign influence and ensure freedom of expression, which we can all agree are crucial issues.