My AI, Snapchat’s chatbot, has drawn mixed reactions from users since its launch, partly due to concerns over the company’s safety measures and the bot’s prominent placement at the top of the Chat feed. Recently, users reported that the AI chatbot had posted a video of what appeared to be their surroundings as a live Story.
Users flocked to X to report the issue, concerned not only that the chatbot had apparently posted video on its own, but also that it had stopped responding to their queries. An incident like that is enough to send chills down anyone’s spine.
The company’s response
Snapchat described the incident as a temporary technical glitch. The platform stressed that the problem had been fixed and that the AI had not secretly recorded users’ surroundings. Whatever actually happened, such incidents illustrate the race to integrate AI into products and services without putting adequate user protections in place first.
The difficulties companies face in managing these AI systems are also highlighted by Snapchat’s own help page, which expressly warns users not to share sensitive information with the AI or expect completely accurate responses. Nor is this problem limited to Snapchat; other tech companies have faced similar issues and user concerns. Zoom recently reversed its decision to use customer data to train its AI models after user backlash, and Google ran into its own troubles during the early days of its Bard chatbot.
Brian is a news author at Research Snipers, which mainly covers technology news, including Microsoft, Google, Facebook, Apple, Huawei, Xiaomi, and other tech news.