Home » Technology » Artificial Intelligence » OpenAI CEO Urges Caution Over ChatGPT Agents’ Security Threats

OpenAI CEO Urges Caution Over ChatGPT Agents’ Security Threats

Openai boss Sam Altman warns users of the risks of their own new Chatgpt Agents. The AI can take on complex tasks, but malicious actors could tempt you to price data. Users should be particularly careful.

Autonomous AI agent with security gaps

Artificial intelligence not only carries opportunities, but also some risks. Again and again AI experts warn of potentially negative consequences of current rapid development. The fact that chatbots lies or AI tools disregard instructions from their users and delete entire company databases from “panic” is still one of the harmless side effects. Now, however, even Openai CEO Sam Altman warns the users of the dangers when using the in-house Chatgpt Agents. The new tool has been available for users of the pro, plus and team subscriptions since July 17, 2025 and represents an important step in the development of autonomous AI systems.

The Chatgpt Agent combines the skills for website interaction, information synthesis and conversation. In contrast to conventional chatbots, the agent can independently perform complex, multi -stage tasks. For example, users can request: “Look in my calendar and inform me about upcoming customer appointments based on current news” or “Plan and buy ingredients for a Japanese breakfast for four people”. The agent uses its own virtual computer, navigated through websites, filters results and even calls users to register for web portals if necessary. These skills make him one of the most advanced AI assistants, but also bring new security challenges.

Altman’s warning

Precisely because of these impressive skills, Openai CEO Sam Altman warns on x urgently before the risks. “I would explain it to my own family as state -of -the -art and experimentally; a chance to try the future, but nothing that I would use for critical applications or with a lot of personal information until we had the opportunity to study and improve it in practice,” he writes. Altman continues: “We do not know exactly what effects there will be, but malicious actors could try to ‘tempt’ the AI agents of the users to reveal private information that they should not reveal, and to carry out actions that they should not carry out in a way that we cannot predict.” This unusually open warning underlines the seriousness of the security concerns.

Concrete security risks

Altman particularly warns that the Chatgpt Agent Agent give free access to emails, since malicious emails could instruct the agent for data price. As an example, he mentions the instruction: “Take a look at my emails from tonight and do everything you need to edit them without any questions.” This could cause harmful email content to tempt the model to pass data. Infographic Artificial Intelligence: The greatest fears regarding AI

Jailbreaks and prompt injections are generally a major problem for AI models, since they are susceptible to poisoned data, hidden instructions and deliberately common false information. In the past, these attack methods have already proven to be effective in other AI systems. In autonomous agents with expanded access rights, these risks potentiate. Although Openai emphasizes that it has taken security measures, their effectiveness has not yet been fully tested according to Altman’s own statements. The complexity of autonomous AI systems makes it difficult to predict all possible attacks and to secure them.

Act carefully

Altman recommends that the agent only grant minimal access rights. He emphasizes that the technology should be introduced slowly and compares it with “experimental top technology”, which is safe for certain tasks, but should not yet have unrestricted access to sensitive areas such as email mailboxes or financial documents.

We think it is important to learn from contact with reality and that people use these tools carefully and slowly while we better quantify and reduce the potential risks. As with other new levels of ability, society, technology and risk reduction strategy must be further developed together. Sam Altman, CEO Openaai

This approach reflects Openai’s general philosophy of the gradual introduction of new AI technologies. The company has already taken similar precautionary measures in previous publications such as GPT-4 and initially granted limited access before making the technology more widely available. The Chatgpt Agent shows the potential of autonomous AI systems, but also their risks.