AI ethics team laid off by Microsoft
According to Ars Technica (via Platformer), Microsoft recently disbanded its AI ethics unit as part of the company's sweeping layoffs affecting 10,000 employees. The team was responsible for monitoring and minimizing the social harms caused by Microsoft's AI technologies.
Microsoft has recently gained attention for its early investment in OpenAI, the company behind ChatGPT, as well as for incorporating its technology into the Bing search engine. Companies that build AI into their products and services typically maintain a team that investigates potential risks, yet Microsoft has now let go of members of its AI ethics team.
The team reportedly created a "toolkit for responsible innovation" that helped Microsoft's engineers anticipate and eliminate risks posed by AI. Former team members say they played a crucial role in reducing the hazards associated with AI in Microsoft products.
Microsoft's AI ethics team departs following recent layoffs
Microsoft responded to the report by stating that it is "dedicated to building AI products and experiences safely and ethically, and does so by investing in people, procedures, and partnerships that prioritize this."
The company says it has spent the past six years building up its Office of Responsible AI. That office still exists and works to reduce the dangers posed by AI, collaborating with the Aether Committee and the Responsible AI Strategy in Engineering group.
The elimination of Microsoft's AI ethics team comes just as OpenAI introduces GPT-4, its most sophisticated AI model. Integrating the new model into Microsoft Bing could intensify the search engine's rivalry with Google.
Microsoft reportedly employed around 30 people on its AI ethics team when it took shape in 2017. According to Platformer, as competition with Google in AI intensified, those employees were gradually dispersed among several divisions, and the team eventually shrank to only seven people.
The ex-employees assert that Microsoft disregarded their warnings about AI-driven applications such as Bing Image Creator, which they said plagiarized the work of artists. The dismissed staff now worry about the risks AI could pose to users when no one is left in the organization to object to potentially reckless ideas.
I'm a communication enthusiast and junior editor-reporter at Research Snipers. I hold a degree in Mass Communication and am very enthusiastic about new technology, games, and mobile devices, with a particular interest in technology and gaming.