(CNN) — AI ethicists are cautioning that the rise of artificial intelligence may bring with it the commodification of even one’s motivations.
Researchers from the University of Cambridge’s Leverhulme Center for the Future of Intelligence say in a paper published Monday in the Harvard Data Science Review that the rise of generative AI, such as chatbots and virtual assistants, brings growing opportunities for persuasive technologies to gain a strong foothold.
“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” Yaqub Chaudhary, a visiting scholar at the Leverhulme Center for the Future of Intelligence, said in a statement.
When interacting even casually with AI chatbots — which can range from digital tutors to assistants to even romantic partners — users share intimate information that gives the technology access to personal “intentions” like psychological and behavioral data, the researcher said.
“What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary added.
In fact, AI is already subtly manipulating and influencing motivations by mimicking the way a user talks or anticipating the way they are likely to respond, the authors argue.
Those conversations, as innocuous as they may seem, leave the door open for the technology to forecast and influence decisions before they are made.
“We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes,” Chaudhary said.
Jonnie Penn, a technology historian at the center, worked with Chaudhary on the paper, which focused on how the “intention economy” will surpass the attention economy as AI further integrates into daily life.
“For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy,” Penn said. “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions.”
A large language model AI could be used to gather information about its users — such as their politics, vocabulary, preferred communication style, age, gender, browsing history — and use that information to suggest choices determined by companies.
“While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers,” said Chaudhary.
In a typical attention economy, advertisers can purchase access to users’ attention in the present through online ads and in the future through planned physical ad space like a billboard.
But in an intention economy, which the researchers wrote is still “more aspiration than reality” currently, advertisers will be able to bid for access in real-time and against possible futures.
“This transition would empower diverse actors to intervene in new ways on shaping human actions,” the authors say in the paper.
For instance, a large language model linked with a brokered bidding network may suggest a user see a particular movie or suggest purchasing a movie ticket in the future.
The researchers cautioned about what such an intention economy may bring with it.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences,” Penn said.
Leaders at tech companies such as OpenAI, Shopify and Nvidia have spoken about chatbots focusing on human intent. Apple launched a developer framework to connect apps to its voice-controlled personal assistant, Siri, which includes options to predict actions a user may take in the future or suggest an app intent to a user.
“These companies already sell our attention,” Chaudhary said. “To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are.”
Still, Penn noted that the developments aren’t inherently bad but do pose a risk.
“Public awareness of what is coming is the key to ensuring we don’t go down the wrong path,” Penn said.