LONDON (IT BOLTWISE) – The discovery of serious vulnerabilities in AI inference engines from Meta, NVIDIA and Microsoft sheds new light on the risks of AI implementation. These vulnerabilities could lead to data leaks and system takeovers, highlighting the need for increased security measures.


In the rapidly evolving world of artificial intelligence (AI), inference engines are central to applications such as chatbots and autonomous systems. In 2025, however, serious vulnerabilities were discovered in these systems, threatening the foundations of AI deployment. Security researchers uncovered critical flaws in Meta's, NVIDIA's and Microsoft's inference engines that could lead to data leaks and system takeovers.

The affected frameworks, including Meta's ExecuTorch, NVIDIA's TensorRT-LLM and Microsoft's ONNX Runtime, contain vulnerabilities that could allow attackers to execute arbitrary code remotely. The flaws stem from inadequate input validation and memory-management errors, which are particularly serious in the complex environment of AI model execution. In NVIDIA's TensorRT-LLM, for example, a flaw was discovered that can be exploited through specially crafted inputs to inject malicious code during inference.
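The article does not detail the exact mechanism of these flaws, but inadequate input validation in inference services frequently takes the form of deserializing untrusted request payloads, which in Python-based stacks often means `pickle`, a format that can execute arbitrary code on load. The sketch below (hypothetical names, not code from any of the affected frameworks) shows why raw `pickle.loads()` on attacker-controlled bytes is dangerous and how a restricted unpickler refuses such payloads while still accepting plain data:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Unpickler that rejects every global lookup.

    pickle can invoke arbitrary callables (e.g. os.system) while loading,
    which is how crafted inputs turn into remote code execution. Plain
    containers and scalars never need find_class, so they still load.
    """

    def find_class(self, module, name):
        # Refuse to resolve any class or function referenced by the payload.
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize untrusted bytes without allowing code execution."""
    return SafeUnpickler(io.BytesIO(data)).load()

# Benign request data still round-trips:
payload = pickle.dumps({"tokens": [1, 2, 3], "temperature": 0.7})
print(safe_loads(payload))  # → {'tokens': [1, 2, 3], 'temperature': 0.7}
```

A payload whose `__reduce__` smuggles in a call to `os.system` would raise `UnpicklingError` here instead of running. In practice, a schema-validated format such as JSON or Protocol Buffers avoids the problem entirely.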

The impact of these vulnerabilities is far-reaching, affecting industries that rely on AI for critical operations, such as healthcare diagnostics and financial trading. A report from BlackFog highlights how hackers could exploit these vulnerabilities for data exfiltration or ransomware attacks. The need to understand these vulnerabilities is critical to developing effective defense strategies.

NVIDIA, Meta and Microsoft responded quickly and released patches for the identified vulnerabilities. Still, an EY report found that AI-related vulnerabilities affect about half of organizations, suggesting that security best practices are implemented unevenly. Meeting AI security challenges requires continuous monitoring and robust safeguards so that the benefits of the technology can be realized safely.
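One concrete piece of such continuous monitoring is verifying that inference dependencies are at or above the patched releases. The helper below is a minimal sketch; the package names are real, but the minimum version numbers are placeholders, not the actual advisory versions:

```python
import importlib.metadata as md

# Placeholder minimums for illustration only -- consult each vendor's
# security advisory for the real patched versions.
MIN_VERSIONS = {"onnxruntime": (1, 19, 0), "tensorrt-llm": (0, 12, 0)}

def parse(version: str) -> tuple:
    """Turn '1.19.0' into (1, 19, 0) for tuple comparison."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def audit(min_versions=MIN_VERSIONS):
    """Return (package, installed, minimum) for every outdated package."""
    findings = []
    for pkg, minimum in min_versions.items():
        try:
            installed = parse(md.version(pkg))
        except md.PackageNotFoundError:
            continue  # not installed, so nothing to patch
        if installed < minimum:
            findings.append((pkg, installed, minimum))
    return findings
```

Running `audit()` in a CI job or scheduled task flags environments still exposed to a known, already-patched vulnerability, which is where many of the real-world incidents originate.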









Security vulnerabilities in AI inference engines threaten 2025 (Photo: DALL-E, IT BOLTWISE)


