What Are AI’s Rules of the Road?


If 2023 was artificial intelligence’s breakout year, then 2024 was when the rules of the road were established. This was the year that U.S. government agencies acted on the White House executive order on AI safety. Over the summer, the European Union’s AI regulation became law. In October, the Swedes weighed in as the Nobel Prizes became a referendum on the technology’s use and development; Bhaskar Chakravorti, a frequent writer for Foreign Policy on the subject of AI, suggested the committee’s choice of recipients could be read as a “recognition of the risks that come with AI’s unfettered growth.”

Just how fettered that growth should be was top of mind for FP contributors in 2024. Some, such as Viktor Mayer-Schönberger and Urs Gasser, think countries should go their own way in the spirit of experimentation—as long as they can find productive ways to come together and learn from each other’s mistakes. Rumman Chowdhury is dismayed this isn’t happening, especially for residents of global-majority countries who are just being introduced to AI without adequate tools to use and consume it safely. And Chakravorti worries about a regulatory trap—that, in a bid to establish guardrails, governments may inadvertently contribute to the problem of AI monopolies.

In a preview of where the AI debate may be going in 2025, Ami Fields-Meyer and Janet Haven suggest we’re all worrying about the wrong thing: Rather than focus exclusively on AI’s deleterious effects on misinformation and disinformation in elections, as happened in the lead-up to this year’s U.S. presidential election, governments need to recognize the technology’s potential for a broader dismantling of civil liberties and personal freedom. Meanwhile, Jared Cohen points to the coming collision of AI and geopolitics and makes the case that the battle for data will build or break empires in the years to come.


1. What if Regulation Makes the AI Monopoly Worse?

By Bhaskar Chakravorti, Jan. 25

The accelerationists won in the competition to steer AI development, writes Chakravorti, the dean of global business at Tufts University’s Fletcher School. But as regulators rush to corral bills into law, they may inadvertently add to the accelerationists’ market power, he argues in this prescient piece.

How could regulators tasked with preserving the public interest take actions that make matters worse? Because, Chakravorti writes, AI regulation is emerging haphazardly in a “global patchwork,” and smaller companies are automatically disadvantaged as they lack the resources to comply with multiple laws. Then there are the regulations themselves, which typically entail red-teaming requirements to identify security vulnerabilities. That preemptive approach is costly and demands expertise not readily available to start-ups.

Fortunately, Chakravorti identifies several ways that governments can work to head off this concentration in the AI market without having to forfeit regulation altogether.


2. A Realist Perspective on AI Regulation

By Viktor Mayer-Schönberger and Urs Gasser, Sept. 16


An illustration shows a robot-like representation of AI covered in various modes of regulation: chains, caution tape, and ropes.

George Wylesol illustration for Foreign Policy

From two professors of technology governance—one at Oxford University and the other at the Technical University of Munich—comes a different take on AI regulation through a realist lens. Mayer-Schönberger and Gasser argue that AI’s regulatory fragmentation worldwide is a feature, not a bug, because the goals of regulating the technology are not yet clearly defined.

In this “concept and search phase,” open channels of communication and innovation are most important. However, the world lacks institutions to facilitate regulatory experimentation, and the existing institutions—such as the post-World War II Bretton Woods setup—are ill-suited to the task. “Perhaps we need different institutions altogether to aid in this experimentation and learning,” the authors conclude, before suggesting some possible paths forward based on past technological breakthroughs.


3. What the Global AI Governance Conversation Misses

By Rumman Chowdhury, Sept. 19

More digitally established countries are already grappling with how to protect their citizens from generative AI-augmented content. How will a family in Micronesia, introduced to reliable internet access for the first time, be equipped to avoid these same problems? That’s the question posed by Chowdhury, a U.S. science envoy for AI, who returned from a trip to Fiji concerned by a lack of attention to this issue for those in global-majority countries.

This disconnect is not due to a lack of interest, Chowdhury writes. But solutions are often too narrow—focusing on enhancing digital access and capability without also providing appropriate funding to develop safeguards, conduct thorough evaluations, and ensure responsible deployment. “Today, we are retrofitting existing AI systems to have societal safeguards we did not prioritize at the time they were built,” Chowdhury writes. As investments are made to develop infrastructure and capacity in global-majority nations, there is also an opportunity to correct the mistakes made by early adopters of AI.


4. AI’s Alarming Trend Toward Illiberalism

By Ami Fields-Meyer and Janet Haven, Oct. 31

Fears about the impacts of AI on electoral integrity were front and center in the lead-up to November’s U.S. presidential election. But Fields-Meyer, a former policy advisor to Vice President Kamala Harris, and Haven, a member of the National AI Advisory Committee, point to an “equally fundamental threat” posed by AI to free and open societies: the suppression of civil rights and individual opportunity at the hands of opaque and unaccountable AI systems.

Reversing this drift, they write, will mean countering the currents that power it. Going forward, Washington needs to create a new, enduring paradigm in which the governance of data-centric predictive technologies is a core component of a robust U.S. democracy. A range of policy proposals must be complemented, the authors write, by a separate but related project of ensuring that individuals and communities have a say in how AI is used in their lives—and how it is not.


5. The Next AI Debate Is About Geopolitics

By Jared Cohen, Oct. 28

Cohen, president of global affairs at Goldman Sachs, makes the case that data is the “new oil,” shaping the next industrial revolution and defining the haves and have-nots in the global order. There is a crucial difference from oil, however: Nature determines where the world’s oil reserves lie, but nations decide where to build data centers. And with the United States facing bottlenecks it cannot break at home, Washington must look abroad to plan a global AI infrastructure buildout. Cohen calls this “data center diplomacy.”

As demand for AI grows, so does the urgency of the data center bottleneck. Cohen argues that the United States should develop a set of partners with whom it can build data centers—not least because China is executing its own strategy to lead in AI infrastructure. Such a strategy is not without risks, and it runs counter to the current trend in geopolitical competition of turning inward and building capacity at home. Still, with greater human prosperity and freedom at stake, the United States must act now to put geography at the center of technological competition, and Cohen outlines the first necessary steps.


