
ChatGPT-Controlled Rifle Stirs Debate Over AI Ethics and Weaponization Risks

An engineer has used ChatGPT to build an automatically firing ‘smart’ rifle that can be commanded in real time by voice. After the project attracted widespread attention on TikTok, OpenAI responded.

Hobbyist builds voice-controlled AI rifle

On the one hand, artificial intelligence can make everyday life and work easier for people. On the other hand, experts repeatedly warn of the threat posed by AI; some even speak of a potential “extinction of humanity.” Although we are still a long way from that, the project that the user sts_3d shared in videos on TikTok delivers a foretaste of what a threat from AI could actually look like.

The user built an automatically rotating and firing rifle and connected it to ChatGPT via OpenAI’s Realtime API. In the video, the rifle can be seen precisely following its builder’s voice instructions and firing numerous shots in the specified directions. The chatbot then responds in a friendly tone: “If you need further assistance, please let me know.”

OpenAI stops hobby project

Of course, the build is still a long way from the Skynet of the Terminator films. Nevertheless, the social media videos make the danger of misusing artificial intelligence plain for everyone to see. In response to an inquiry from Futurism, OpenAI issued a statement saying it had “proactively identified this violation of our policies and notified the developer to stop this activity. OpenAI’s usage guidelines prohibit the use of our services to develop or use weapons or to automate certain systems that may compromise personal safety.”

Since no response was received from sts_3d, the developer’s access to the API was ultimately cut off.

Collaboration with defense companies

This means OpenAI is officially opposed to its products being used in connection with weapons. Only recently, however, the company quietly adjusted its terms of use: about a year ago, it removed a passage that prohibited its AI from being used for military and warfare purposes.

At the beginning of December 2024, OpenAI announced a collaboration with the defense contractor Anduril. The two companies are now working together to “develop and responsibly use advanced artificial intelligence (AI) solutions for national security tasks.”

The engineer’s hobby project on TikTok is certainly not the only attempt to use OpenAI’s AI to control weapons. It shows, however, how easy this appears to be and how little can be done to prevent dangerous misuse of artificial intelligence, despite terms of use and good intentions.
