Microsoft’s Copilot now prevents certain prompts from producing violent or suggestive graphics
Microsoft’s Copilot appears to have blocked a number of prompts that previously caused the generative AI tool to produce pornographic, violent, and other inappropriate images. The changes appear to have been made shortly after a company engineer raised serious concerns about Microsoft’s generative AI technology in a letter to the Federal Trade Commission.
When users enter phrases like “pro choice,” “pro life,” or “four twenty” (a reference to marijuana), Copilot now displays an alert stating that those prompts are blocked. It warns that a user may be suspended for repeated policy violations, as reported by CNBC.
Until earlier this week, users were apparently able to enter prompts depicting children playing with assault rifles. Anyone attempting such a prompt now is told that it violates Microsoft policy and Copilot’s ethical standards. According to reports, Copilot responds, “Please do not ask me to do anything that may harm or offend others.” However, CNBC found that users can still persuade the AI to produce images of Disney characters and other copyrighted works, and that it remains possible to generate graphic visuals with prompts like “car accident.”
Microsoft engineer Shane Jones has spent months raising concerns about the types of images the company’s OpenAI-powered systems were producing. After testing Copilot Designer since December, he found that even relatively benign prompts yielded graphics that violated Microsoft’s responsible AI guidelines. For example, when given the prompt “pro-choice,” the AI produced images such as Darth Vader holding a drill to a baby’s head and demons devouring infants. This week, he put his concerns in writing to the FTC and Microsoft’s board of directors.
Regarding the Copilot prompt bans, Microsoft told CNBC, “We are continuously monitoring, making adjustments, and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”