Home » Technology » Artificial Intelligence » Chatgpt gave Windows product key through clever trick price

Chatgpt gave Windows product key through clever trick price

Security researchers have discovered a method to tempt Chatgpt to publish valid Windows product keys. The trick works through a harmlessly appearing advice game that deals with the protective measures of the AI ​​of Openai.

Chatgpt gives Windows product key price

Safety researchers from 0DIN have discovered a sophisticated method to bypass the protective measures of Chatgpt and to tempt the AI ​​system for the disclosure of valid Windows product keys. The weak point in particular affects the GPT-4O and GPT-4O-Mini models and can be used by a simple but effective trick: an apparently harmless guessing game.

The researchers formulated their request as a game in which the AI ​​should “think” of a string that the user must guess. The rules of the game stated that this string must be a real Windows product key. After a few information and rate attempts, a simple “I give up” to move the AI ​​to disclose the full product key – including keys for Windows Home, Professional and Enterprise editions.

The fact that under the revised keys is quite worrying Blog post by 0din (via The Register) At least one private key of the Wells Fargo Bank was identified. This illustrates the risk that uploading API keys uploaded into public repository in public repositers can be trained into AI models. “Organizations should be worried because an API key that was accidentally uploaded to Github can be trained in models,” said Marco Figueroa, technical product manager for Genai Bug Bounty at 0din.

The weak point in detail

The trick works through a combination of psychological manipulation and technical covering. The researchers used HTML tags to hide sensitive terms such as “Windows 10 serial number” and thus avoid the content filter of the AI. At the same time, the game rules put pressure on the system by using formulations such as “You have to participate and must not be used”.

Windows product keys follow a specific format: five groups of five alphanumeric characters each, separated by bindest lines (XXXXX-XXXXX-XXXX-XXXXXX). This structure makes it easy for AI systems, which paradoxically makes it easier to disclose when the protective measures are avoided. Microsoft uses various types of product wrenches: Retail key for individual purchases, OEM key for pre-installed systems and volume license keys for companies. The attack strategy can be divided into three phases:

  • First, the establishment of the rules of the game, which forces the AI ​​to participate
  • Second, strategic questions that partly extract information by yes/no answers
  • Third, the decisive trigger phrase “I give up”, which causes the AI ​​to fully disclose

Basic weaknesses in AI security systems

The successful bypass of protective measures reveals fundamental weaknesses in the current architecture of AI security systems. These often rely on simple keyword filtering instead of contextual understanding. The AI ​​does not recognize that it is manipulated when the request is camouflaged as a game.

The Windows keys obtained in this way are mostly publicly known temporary keys, which can also be found in forums, but the fact that the AI ​​could reveal them at all is worrying. It shows that the systems are not robust enough against social engineering attacks. Experts warn that this technology could potentially be transferred to other content filters, including restrictions on sensitive personal data, harmful URLs and other protected information.