December 23, 2024

OpenAI CEO agrees on need to regulate AI technology: Here’s why

Sam Altman, the chief executive of OpenAI, agreed on the need to regulate AI technology while testifying before members of the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said.

Altman said his company was founded on the belief that AI has the potential to transform human lives, but that it also creates risks. Citing concerns about disinformation, job security and other issues, he called for regulatory intervention by the government.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said.

He suggested establishing an agency that would issue licences for the development of large-scale AI models, set safety rules and require tests that AI models must pass before being made public. He also said the authority should have the power to “take that licence away and ensure compliance with safety standards”.

At the start of the hearing, Senator Richard Blumenthal played a computer-generated recording of a voice that sounded like his own reading a prepared text.

“If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine,” the senator said.

Europe has already made considerable progress on its AI Act, which is set to go to a vote in the European Parliament next month, a point one of the subcommittee’s senators also noted.

“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Altman said.

Senator Blumenthal said the hearing was the first in a series intended to learn more about the benefits and harms of AI before writing rules for it.

Subcommittee members proposed a separate agency to regulate AI, rules requiring companies to disclose how their models work and the data sets they use, and antitrust measures to stop tech giants such as Microsoft and Google from dominating the emerging market.

One of the subcommittee’s senators noted that OpenAI has not been transparent about the data it uses to develop its systems.

Earlier this month, Geoffrey Hinton left his job at Google to warn the world about the dangers of AI technology. He believes that as companies improve their AI systems, the technology is becoming increasingly dangerous, and he has said he regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr Hinton said in an interview with The New York Times.
