Microsoft Engineer Raises Concerns About Safety of OpenAI’s DALL-E 3
Illustration: bdtechtalks.com
Microsoft engineer Shane Jones has raised concerns about the safety of OpenAI’s DALL-E 3, suggesting that the product has security vulnerabilities that make it easy to create violent or sexually explicit images. He alleges that Microsoft's legal team has blocked his attempts to alert the public to the issue.
Jones has taken his complaint directly to the Federal Trade Commission (FTC), urging it to take action. In a letter to FTC Chair Lina Khan, Jones wrote that he had repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place. With the product still available, he is now asking the company to add disclosures that alert consumers to the alleged danger and to change the app's rating so it is restricted to adult audiences.
Jones has also written a separate letter to Microsoft's board of directors, urging them to initiate an independent review of the company's responsible AI incident reporting processes.
Jones claims that Microsoft's implementation of DALL-E 3 can easily be made to produce violent or sexual imagery. He says it is possible to "trick" the platform into generating the grossest and most unsavory images imaginable, and that even innocuous prompts can result in disturbing images, such as demons feasting on infants or sexualized women in car accidents. CNBC was able to reproduce these results using the standard version of the software.
Jones alleges that Microsoft is not adequately addressing these concerns. He says the Copilot team receives more than 1,000 product feedback complaints every day but lacks the resources to fully investigate and fix the problems. He also points to the absence of proper reporting channels and of any mechanism for immediate action when harmful images are generated and spread widely.
OpenAI has stated that the prompting technique shared by Jones does not bypass its safety systems and that it has developed robust image classifiers that steer the model away from generating harmful images. A Microsoft spokesperson emphasized that the company has established internal reporting channels to address such issues and has urged Jones to validate and test his concerns through them before escalating publicly. Jones, however, says his concerns have not been properly addressed.
Illustration: logos-world.net
This incident follows a similar controversy over Google's Gemini chatbot, which generated historically inaccurate images. Google has temporarily disabled the image generation feature while it works on a fix.