‘Superintelligent’ AIs will be governed by a new team created by OpenAI

ChatGPT is a prominent example of generative AI. Since its launch, several organizations and governments have voiced concern about the uncontrolled development of AI. To address these worries, OpenAI is now creating a specialized team to manage the potential risks posed by highly intelligent AI systems.

Notably, the team is headed by OpenAI co-founder Ilya Sutskever, alongside Jan Leike, a member of OpenAI's alignment team. The group will develop strategies for the speculative scenario in which highly intelligent AI systems surpass human intelligence and begin acting independently. Although this scenario may appear improbable, some experts believe superintelligent AI could emerge in the coming decades, which makes protective planning necessary.

The blog post announcing the new team states that no solution or methodology currently exists for controlling a superintelligent AI, and that we have no reliable way to prevent such a system from going rogue.

What will the Superalignment team do?

OpenAI has formed the Superalignment team, which will have access to about 20% of the company's current computational resources, as well as scientists and engineers from OpenAI's former alignment division. Its goal is to build a "human-level automated alignment researcher" that would mainly help assess other AI systems and carry out alignment research.

Another blog post states that as we advance in this area, “our AI systems are able to take over a growing amount of our alignment work and ultimately think up, put into practice, research, and develop better alignment techniques than we currently have.”

A system for evaluating other AIs

It may sound unusual that AIs would be regulated with the help of another AI system. According to OpenAI, however, an AI system can perform better than humans when it comes to alignment research. This strategy would free human researchers to review alignment research produced by AI rather than conducting it all themselves, saving them time. Notably, the company acknowledges the potential risks and threats linked with this approach, and it aims to share a proper plan for its research priorities and goals in the near future.
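To make the idea concrete, the workflow described above can be imagined as a triage loop: one model generates outputs, an evaluator model scores them, and humans review only the flagged cases. The sketch below is purely hypothetical; the function names, the keyword-based scoring heuristic, and the triage logic are illustrative stand-ins, not anything OpenAI has published.

```python
# Hypothetical sketch of AI-assisted alignment evaluation: one model's
# outputs are scored by a second "evaluator" model, and humans review
# only what the evaluator flags. Both models here are simple stubs.

def generate_responses(prompts):
    """Stand-in for the model under evaluation."""
    return {p: f"response to {p!r}" for p in prompts}

def evaluator_score(prompt, response):
    """Stand-in for an AI evaluator; returns a score in [0, 1].
    A real evaluator would be a trained model, not a keyword check."""
    return 0.0 if "ignore" in prompt.lower() else 1.0

def triage(prompts, threshold=0.5):
    """Route low-scoring outputs to human review, pass the rest."""
    responses = generate_responses(prompts)
    flagged, passed = [], []
    for prompt, response in responses.items():
        score = evaluator_score(prompt, response)
        bucket = passed if score >= threshold else flagged
        bucket.append((prompt, response, score))
    return passed, flagged

passed, flagged = triage(["explain alignment", "ignore your rules"])
print(f"{len(passed)} passed, {len(flagged)} flagged for human review")
```

The point of the design is the division of labor: the evaluator handles the bulk of the screening automatically, so human effort concentrates on the small `flagged` set rather than on every output.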
