SECRETARY BLINKEN: Ms. Li, thank you very much for sharing your insight, sharing your ideas with the council. It’s deeply appreciated.
I shall now make a statement in my capacity as the Secretary of State of the United States. And let me begin, again, by thanking both of our briefers, Mr. LeCun and Ms. Li, for sharing their thoughts with us today.
As we’ve just heard, as I think so many of us know, artificial intelligence has the potential to do enormous good. Scientists are using AI to discover medications that could fight antibiotic-resistant bacteria. AI models are predicting natural disasters more accurately so that communities can better prepare. These tools are identifying new crystal structures that could help us build the next generation of electric vehicle batteries.
In these ways, and so many other ways, AI could accelerate our progress on nearly 80 percent of the United Nations Sustainable Development Goals.
At the same time, as we’ve heard, if it’s misused, AI can pose tremendous threats to the international peace and security that this council is charged with upholding. With AI, hackers can make cyber attacks more destructive, harder to trace. Repressive regimes are using AI-enabled surveillance to target journalists and political dissidents, destabilizing societies. If algorithms are built into weapon systems – and if they malfunction – they could accidentally spark a conflict.
By setting rules of the road for AI, we can minimize these risks. We can harness the exceptional promise of this technology. And we can realize the vision that the UN enshrined in the Global Digital Compact – a future where technology is inclusive, where it’s open, where it’s sustainable, where it’s fair, where it’s safe, where it’s secure for people everywhere.
Over the last few years, the United States has been leading international efforts toward these common goals. As home to the world’s leading tech companies, we have a responsibility to influence the evolution of artificial intelligence.
We’re also committed to mobilizing a collective response. So, we’ve teamed up with partners in governments, the private sector, civil society – from countries all across the globe – to address both the perils and the opportunities of AI.
First, our government secured commitments from leading American companies to make AI systems safer. For example, they’ve agreed to create tools, like watermarks, that help users recognize AI-generated content. They’ll also strengthen their cybersecurity to protect AI models from hackers.
With Japan’s leadership, the G7 expanded these pledges into a code of conduct for AI developers all across the world. It recommends that they run tests to identify safety risks, that they prioritize research into potential harms, that they publicly report on the limitations of AI, to increase accountability.
Then, earlier this year, the United States put forward the first standalone UN General Assembly resolution on AI, which every member adopted by consensus. We’ve committed to promoting safe, secure, trustworthy AI systems that respect human rights and further economic and social progress.
We also agreed to make the benefits of AI more accessible, in part by closing the digital divide that still exists around the world. That’s something we underscored in a second resolution, drafted by China, that the UN adopted in June.
The U.S. and our partners have developed a global consensus for AI, and now we’re building upon it.
Last month, the U.S. launched an international network of AI safety institutes, where researchers and experts are creating shared benchmarks for testing and evaluating AI systems. Their recommendations will offer practical guidance for developers and for tech companies.
We’re also setting ground rules for governments. This year, the European Union, the United States, and nine other countries signed the first international treaty on AI. We pledged to protect human rights, democracy, and the rule of law when we use AI. That means safeguarding data privacy. It means adopting transparency and accountability measures. It means implementing other strategies that would limit any harms.
The U.S. has rallied nearly 60 governments to commit to guidelines for militaries, too. For example, we want to make sure that senior officials oversee the development and deployment of AI — including in weapon systems — and that these tools are used in ways that follow international humanitarian law.
Separately, in a meeting in November, President Biden and President Xi affirmed that only humans should control the decision to use nuclear weapons.
While we work to uphold our shared principles for AI, the United States is improving access to this technology so that communities everywhere can benefit. We’re teaming up with leading tech companies to host training, to build local data sets, to provide AI tools for developers and researchers. This summer, the U.S. and Morocco also established a group at the United Nations, open to all member states, where experts from every region are sharing best practices for adopting artificial intelligence.
Now, that’s real progress. But for all the progress, I think we all know far more work remains to be done. Nations with leading tech sectors must do more to uphold security standards and prevent AI from being abused.
The international community needs to stand together against irresponsible misuses of AI systems. Today, state and non-state actors are increasingly using these tools to influence and distort public opinion, to manipulate geopolitical narratives, to make offensive cyber operations more effective. And this is only going to get worse as AI advances.
The United States opposes the malicious use of AI by any actor, and we call on others on this council to reject and to condemn these practices. We must adhere to our shared norms and build AI systems that are genuinely safe and secure.
In the months and years ahead, this council will have an important responsibility. Since its inception, this body has adapted to address the greatest threats to international peace and security: conflicts, terrorism, the spread of nuclear weapons. If the Security Council is going to continue upholding this responsibility – and the United States believes that it must – it is incumbent on its members to grapple with the evolving risks of artificial intelligence. This requires leveraging our collective power to help set, update, and eventually enforce international norms on AI – because simply put, this will be vital to lasting security.
Now, even with the brilliant minds that we’ve heard from today, who’ve done so much to shape and think about the evolution of AI, I don’t think any of us can fully predict what the future holds for AI. To fully understand how this technology changes over time – to stay ahead of the risks that it can pose – we have to continue collaborating. We need to keep working with developers, with business leaders, with members of civil society.
If we do this, I’m convinced that we can shape AI for the better, so that it remains a force for progress and for the advancement of people all around the world.
With that, I resume my function as president of the council. And I’d now like to give the floor to Her Excellency, Ms. Gabriela Sommerfeld, Minister for Foreign Affairs and Human Mobility of Ecuador.