September 20, 2024

Google, Microsoft, OpenAI, more collaborate for safe AI development: All you need to know

Ever since the advent of OpenAI’s ChatGPT, concerns have been raised about the potential risks this technology poses to humanity, and the companies working on it have largely acknowledged those risks.

However, governments around the world are proceeding cautiously with regulating the technology, given that it is still in a nascent stage. In the meantime, some companies have taken it upon themselves to play the role of good Samaritans and have agreed to self-regulation.

Anthropic, Google, Microsoft and OpenAI have created a group called the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of frontier AI models. They define frontier models as “large-scale machine-learning models” that exceed the capabilities of existing models and can perform a wide variety of tasks.

What does the group aim to do?

The Frontier Model Forum will leverage the technical and operational expertise of its member companies to support the whole AI ecosystem, such as by improving technical assessments and benchmarks, creating a public repository of solutions, and exchanging best practices and norms.

The Forum will focus on three key areas over the coming year: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments. In addition to this, it will also collaborate with policymakers, academics, civil society and other companies to address the trust and safety risks of frontier AI models and to support the development of applications that can help solve some of the world’s biggest challenges, such as climate change, health care, and cybersecurity.

The Forum will also establish an advisory board to guide its strategy and priorities, representing a diversity of backgrounds and perspectives.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President of Global Affairs at Google and Alphabet.

The group also welcomed other organizations to join the Forum, provided they meet the criteria of developing and deploying frontier models, demonstrate a strong commitment to safety, and are willing to participate in joint initiatives to ensure the safe and responsible development of frontier AI models.

Meanwhile, OpenAI has shut down a tool designed to distinguish between human and AI-generated writing because it was not accurate enough. The tool has been unavailable since July 20, 2023.

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy,” OpenAI said in a blog post.

The company also said that it is incorporating feedback to improve the tool and is researching better ways to trace the provenance of text. In addition, it has promised to develop and deploy methods that help users determine whether audio or visual content was generated by AI.

The post Google, Microsoft, OpenAI, more collaborate for safe AI development: All you need to know appeared first on Techlusive.
