**Las Vegas Explosion Investigation: Generative AI's Role in the Cybertruck Explosion**
On the morning of January 1, an explosion rocked the Las Vegas Strip in front of the iconic Trump Hotel. Nearly a week after the tragic event, local authorities shared new information about the investigation, in particular the role generative artificial intelligence played in the case.
The main suspect, an active-duty US Army soldier identified as Matthew Livelsberger, caught the attention of police after they discovered a “possible manifesto” stored on his phone. Authorities also found emails exchanged with a podcaster and several letters that shed light on his intentions. Security camera footage showed Livelsberger behaving suspiciously, for example filling the truck with fuel before heading to the hotel. He also kept a log of activities he believed to be surveillance of him; investigators clarified, however, that he had no criminal record and had not previously been under investigation.
Among the evidence presented, several slides showed questions Livelsberger asked ChatGPT days before the explosion. He sought information not only about explosives and detonation methods, but also about where to acquire weapons, explosive materials, and fireworks along his route.
In light of these revelations, Liz Bourgeois, a spokesperson for OpenAI, commented: “We are saddened by this incident and committed to ensuring that AI tools are used responsibly. Our models are designed to reject harmful instructions and minimize harmful content. In this case, ChatGPT provided information that would already be publicly available on the internet and warned against illegal or dangerous activities. We are working with authorities to support the investigation.”
As the investigation continues, authorities are exploring possible causes of the blast, which has been characterized as a deflagration rather than a high-explosive detonation; a high-explosive detonation would have caused far more significant damage. Investigators believe the explosion may have been triggered by a spark from a gunshot that ignited fuel vapors or fireworks fuses inside the truck.
Finally, it is worth noting that the same queries still return answers on ChatGPT today, which raises concerns about the tool's security and privacy controls. Livelsberger's use of the tool, and investigators' ability to trace those requests, push questions about abuse prevention and the responsible use of modern technologies to the forefront of public debate.
This incident highlights not only the potential dangers associated with AI technology but also the urgent need to discuss and define guidelines that ensure its responsible use in modern society.
AI tools like ChatGPT were developed to assist with everyday tasks: answering questions, generating text, or helping solve problems. But, like any technology, their impact depends on the person using them. Criminals have found ways to exploit AI tools to facilitate their activities, such as:
- **Planning and detailed instructions:** AI systems can be used to create detailed guides, including technical explanations for manufacturing dangerous devices or hacking electronic systems.
- **Social engineering:** AI can generate convincing messages for scams, such as phishing emails, increasing the effectiveness of fraudulent schemes.
- **Anonymity and deception:** Criminals can use AI to generate content at scale that disguises their true intentions.
ChatGPT is a powerful tool because of its ability to understand and generate natural language, but it has no consciousness or intentions of its own. When questioned, it responds based on the data it was trained on and the algorithms that govern it. That capability, while incredibly useful, also means that, if misused, ChatGPT can surface information that makes it easier to carry out illicit acts.
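To illustrate what abuse prevention can look like in practice, the sketch below shows one layer a developer building on these models might add: screening each user prompt with OpenAI's Moderation API before forwarding it to a chat model. This is a minimal, illustrative example under stated assumptions, not a description of how ChatGPT itself is implemented; the function names are hypothetical, and the model identifiers are assumptions based on the publicly documented `openai` Python package.

```python
# Minimal sketch: screen a prompt with OpenAI's Moderation API before answering.
# Assumes the official `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set. Function names are illustrative.
from openai import OpenAI

client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the moderation endpoint flags the prompt."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumed current moderation model name
        input=prompt,
    )
    return result.results[0].flagged

def answer(prompt: str) -> str:
    # Refuse flagged prompts instead of forwarding them to the chat model.
    if screen_prompt(prompt):
        return "This request cannot be processed."
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed: any available chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

In a real deployment, a pre-check like this would sit alongside the model's own refusal training, logging, and human review; the point of the sketch is only that abuse prevention is layered, not a single switch inside the model.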