ABU DHABI, United Arab Emirates (AP) — Artificial intelligence poses an “existential risk” to humanity, a key innovator warned during a visit to the United Arab Emirates on Tuesday, suggesting an international agency like the International Atomic Energy Agency oversee the ground-breaking technology.
OpenAI CEO Sam Altman is on a global tour to discuss artificial intelligence.
“The challenge that the world has is how we’re going to manage those risks and make sure we still get to enjoy those tremendous benefits,” said Altman, 38. “No one wants to destroy the world.”
OpenAI's ChatGPT, a popular chatbot, has grabbed the world's attention as it offers essay-like answers to prompts from users. Microsoft has invested some $1 billion in OpenAI.
ChatGPT's success, offering a glimpse into how artificial intelligence could change the way humans work and learn, has also sparked concerns. Hundreds of industry leaders, including Altman, signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Altman made a point to reference the IAEA, the United Nations nuclear watchdog, as an example of how the world came together to oversee nuclear power. That agency was created in the years after the U.S. dropped atomic bombs on Japan at the end of World War II.
“Let's make sure we come together as a globe — and I hope this place can play a real role in this,” Altman said. “We talk about the IAEA as a model where the world has said ‘OK, very dangerous technology, let's all put some guard rails.' And I think we can do both.
“I think in this case, it’s a nuanced message ’cause it’s saying it’s not that dangerous today but it can get dangerous fast. But we can thread that needle.”
Lawmakers around the world also are examining artificial intelligence. The 27-nation European Union is pursuing an AI Act that could become the de facto global standard for artificial intelligence. Altman told the U.S. Congress in May that government intervention will be critical to governing the risks that come with AI.
But the UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, offers the flip side of the risks of AI. Speech remains tightly controlled. Rights groups warn the UAE and other states across the Persian Gulf regularly use spying software to monitor activists, journalists and others. Those restrictions affect the flow of accurate information — the same information that machine-learning systems like ChatGPT rely on to generate answers for users.
Among the speakers opening for Altman at the event at the Abu Dhabi Global Market was Andrew Jackson, CEO of the Inception Institute of AI, which is described as a G42 company.
G42 is tied to Abu Dhabi's powerful national security adviser and deputy ruler Sheikh Tahnoun bin Zayed Al Nahyan. G42's CEO is Peng Xiao, who for years ran Pegasus, a subsidiary of DarkMatter, an Emirati security firm under scrutiny for hiring former CIA and NSA staffers, as well as others from Israel. G42 also owns a video and voice calling app that reportedly was a spying tool for the Emirati government.
In his remarks, Jackson described himself as representing “the Abu Dhabi and UAE AI ecosystem.”
“We are a political powerhouse and we will be central to AI regulation globally,” he said.
OpenAI is a research organization that says it is creating artificial intelligence (AI) that can “benefit humanity without causing harm or being misused.” Its most famous product is ChatGPT, an app that has become popular for generating near-human text for essays, scripts and poems, and for help with computer code. ChatGPT is essentially a chatbot; GPT stands for Generative Pre-trained Transformer.
On May 16, 2023, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, about the impacts and risks of AI, and the need for government regulation. Here are some of the main points of his testimony:
AI is becoming increasingly powerful and could cause significant harm to the world if not regulated properly. Altman cited examples of potential harms, such as misinformation, bias, job displacement and cyberattacks.
He suggested that an international agency like the UN’s nuclear watchdog could oversee AI and ensure its safety and ethics. He proposed that a new regulatory agency should license the most powerful AI systems and have the authority to revoke those licenses if the systems violate safety standards.
He said that OpenAI is committed to creating beneficial and trustworthy AI, and that it does not make money from its products. He also said that ChatGPT is not intended for children and has safety measures in place, such as filters, warnings, and human oversight.
He acknowledged that ChatGPT and other generative AI tools could be used for cheating, plagiarism, impersonation, and other malicious purposes. He said that OpenAI is working on developing tools to detect and counter such misuse, and that it supports legal actions against abusers.
He also said that ChatGPT and other AI tools could have positive impacts on society, such as enhancing education, creativity, communication, and innovation. He said that OpenAI is collaborating with various partners and stakeholders to ensure that AI benefits everyone.
Altman’s testimony was one of the first major events as lawmakers dug into how to regulate the new technology. He faced questions from senators who expressed concerns about the potential threats AI poses to democracy, security, privacy and human dignity. He also received praise from some senators who appreciated his willingness to work with them on the issue.