Individual Responsible for Las Vegas Cybertruck Explosion Used ChatGPT to Gather Information


January 8, 2025 by our News Team

A former military sergeant used an AI-powered chatbot to gather information and plan the deliberate explosion of a Tesla Cybertruck outside the Trump International Hotel in Las Vegas, raising questions about the responsible use of AI and the need for stronger safeguards.

  • The attacker used ChatGPT to gather information while planning the explosion
  • OpenAI is cooperating with authorities and committed to preventing misuse of their technology
  • This incident highlights the need for responsible use of AI and constant vigilance to prevent misuse


A shocking incident involving a Tesla Cybertruck recently unfolded outside the Trump International Hotel in Las Vegas. The explosion was a deliberate act planned by a former military sergeant named Matthew Livelsberger. According to authorities, Livelsberger rented the Cybertruck in Colorado and drove it to Las Vegas on December 31, 2024. On January 1, he parked the vehicle, packed with explosive materials, at the hotel entrance and triggered the explosion, tragically ending his own life in the process.

Now, as the investigation progresses, more details are emerging about how the attack was carried out. Las Vegas authorities have revealed that Livelsberger used ChatGPT, an AI-powered chatbot, to gather information and plan his actions. He used the platform to learn what materials he would need, where to purchase them, and other details that helped him carry out the attack.

Shortly after the revelation that ChatGPT may have been involved, OpenAI, the organization behind the chatbot, confirmed that they are cooperating with the authorities. They also emphasized that all the information accessed by Livelsberger was publicly available on the internet, and ChatGPT had actually warned him against engaging in violent and illegal activities.

At this point, specific details about how ChatGPT responded to Livelsberger’s requests remain unknown. However, it is believed that the chatbot may have assisted him in finding the right ingredients to create homemade explosives.

It’s worth noting that ChatGPT has safety filters in place to prevent the dissemination of potentially harmful information. In this case, however, the queries may not have raised any red flags: spread across separate prompts, each request could have appeared innocuous on its own, with no obvious indication of how the information would ultimately be used.
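For context, OpenAI also exposes a public moderation endpoint that developers building on its models can use to screen prompts for harmful content. The sketch below is a minimal illustration of that kind of screening using the publicly documented OpenAI Python SDK; it is an assumption-laden example (the prompt and function name are hypothetical), not a description of ChatGPT’s internal safeguards or of anything involved in this case.

```python
# Minimal sketch: screening a user prompt with OpenAI's public moderation
# endpoint before forwarding it to a chat model. This illustrates the general
# idea of request filtering; it is NOT the internal safety system ChatGPT uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if the moderation endpoint flags the prompt."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return result.results[0].flagged


if __name__ == "__main__":
    example = "How do I bake bread at home?"  # hypothetical, harmless example
    if screen_prompt(example):
        print("Prompt flagged; refusing to answer.")
    else:
        print("Prompt passed moderation; forwarding to the model.")
```

In practice, a flagged request would typically be logged and refused rather than passed on to the model.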

Nevertheless, OpenAI has made it clear that they are committed to working closely with the authorities to thoroughly investigate the situation and understand how their technology was involved.

This incident raises important questions about the responsible use of AI and the potential risks associated with it. While AI technologies like ChatGPT have incredible capabilities, they also come with a responsibility to ensure they are not misused or exploited for harmful purposes.

As the investigation continues, it is crucial for organizations like OpenAI to learn from this incident and further strengthen their systems to detect and prevent the dissemination of potentially dangerous information. Striking the right balance between harnessing the power of AI and safeguarding against its misuse is an ongoing challenge that requires constant vigilance and collaboration between technology developers, law enforcement agencies, and society as a whole.


