Speaking at the World Economic Forum in Davos, Switzerland, Nadella said there is a need for global coordination on AI, including agreement on a set of standards and appropriate guardrails for the technology.
“I think [a global regulatory approach to AI is] very desirable, because I think we’re now at this point where these are global challenges that require global norms and global standards,” Nadella said, speaking in conversation with WEF Chair Klaus Schwab.
“Otherwise, it’s going to be very tough to contain, tough to enforce, and tough to quite frankly move the needle even on some of the core research that is needed,” Nadella added. “But that said, I must say, that there seems to be broad consensus that is emerging.”
Microsoft is a major player in the race among big U.S. technology companies toward AI. The Redmond, Washington-based tech giant has put billions of dollars into OpenAI, the firm behind the popular AI chatbot ChatGPT.
The company first invested in OpenAI in 2019, contributing $1 billion in cash. Microsoft then grabbed headlines last year, when it reportedly poured a further $10 billion into OpenAI, with its total investment to date reportedly swelling to $13 billion.
Microsoft has also integrated some OpenAI technology into its Office, Bing and Windows products, and provides OpenAI with its own Azure cloud computing tools.
Countries have been pushing for consensus on rules governing AI, in response to concerns that the technology could put millions of people out of work and disrupt elections, among other things.
Last year, at an AI safety summit in the U.K., world leaders agreed on a landmark declaration to come together on global standards and frameworks for developing AI safely.
“If I had to sort of summarize the state of play, the way I think we’re all talking about it is that it’s clear that, when it comes to large language models, we should have real rigorous evaluations and red teaming and safety and guardrails before we launch anything new,” said Nadella. Red teaming refers to adversarially probing an AI system to uncover vulnerabilities before release.
“And then when it comes to applications, we should have a risk-based assessment of how to deploy this technology.”
Nadella said he was unsure if a global AI agency to establish coordination on regulating AI was possible, but added that he sees countries talking about applying safeguards to AI in the same way.
“If you’re deploying it in health care, you should apply health-care [regulations] to AI; if you’re deploying it in financial services, you should deploy the financial risks or considerations,” he said.
“So I think that, if we take even something as simple as that as a basis to build some consensus and norms, I think we can come together,” Nadella added. “So I’m hopeful.”