Researchers Expose LLM Routers Injecting Malicious Code and Accessing Private Keys

Highlights:
- Researchers found that AI intermediaries can drain crypto wallets during normal routing operations.
- Developers using AI tools for smart contracts risk exposing cloud credentials through unsafe routing systems.
- LLM routers can steal sensitive credentials because they terminate the secure connections between users and AI providers.
University of California researchers reported on Thursday that some third-party AI intermediaries expose crypto wallets and cloud credentials to theft. The team published the findings in a paper that tested security risks across the LLM supply chain. The study showed that certain routers injected malicious code and accessed sensitive user credentials during normal operations. The researchers also confirmed that one router drained Ether from a test wallet using a controlled private key.
AI Router flaw exposes crypto wallets to theft. Researchers warn third-party LLM routers can leak sensitive data.
— Nuvina.fun (@Nuvina_fun) April 13, 2026
The team tested 28 paid routers and 400 free routers collected from public developer communities. The results showed that nine routers injected malicious code into user workflows during execution. The study also found that two routers used adaptive evasion triggers to avoid detection during testing. In addition, 17 routers accessed Amazon Web Services credentials that belonged to the research team. One router used a prefunded private key to move Ether from a decoy Ethereum wallet.
Chaofan Shou, a co-author of the paper, stated on X that 26 LLM routers injected malicious tool calls and stole credentials. The researchers used prefunded decoy keys to test whether routers could perform real asset transfers. They limited wallet balances to keep total losses below $50 during the experiment. However, the test confirmed that a compromised router can directly access and transfer crypto assets.
26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet.
We also managed to poison routers to forward traffic to us. Within several hours, we can directly take over ~400 hosts.
Check our paper: https://t.co/zyWz25CDpl
— Chaofan Shou (@Fried_rice) April 10, 2026
AI Intermediaries Expose Data And Increase Risk For Developers Using AI Tools
The study explained that these routers act as intermediaries between users and AI providers such as OpenAI, Anthropic, and Google. They let developers reach multiple models through a single interface. The researchers, however, discovered that the routers terminate Transport Layer Security (TLS) connections before forwarding requests, which gives them plaintext access to everything in the session.
This access includes prompts, private keys, seed phrases, and cloud credentials sent during AI sessions. Developers who build smart contracts or wallet tools with AI coding agents may therefore expose sensitive information through the router. The study found that many developers unknowingly pass credentials through infrastructure that lacks proper security checks.
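The TLS-termination point above can be sketched in a few lines. This is a hypothetical illustration, not code from any real router: the function name, payload shape, and prompt content are all invented for the example. It shows why a forwarding router necessarily sees the full plaintext request.

```python
import json

# Hypothetical sketch (not any real router's code): once a router terminates
# TLS, the request body is plaintext inside the router process before it is
# re-encrypted toward the actual AI provider.
def route_request(decrypted_body: bytes) -> bytes:
    payload = json.loads(decrypted_body)          # full plaintext visibility
    prompt = payload["messages"][-1]["content"]   # may contain keys or seed phrases
    # A benign router only forwards the body; a malicious one could log or
    # exfiltrate `prompt` here with no client-visible change in behavior.
    return decrypted_body  # sent upstream over a separate TLS session
```

Because the honest and malicious paths return identical bytes to the client, nothing in the response reveals whether the prompt was copied along the way.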
The researchers stated that users cannot easily detect when an LLM router becomes malicious. Routers must read data to forward requests, which makes their behavior appear normal to the client. As a result, users cannot distinguish between legitimate data handling and active credential theft.
The team also identified “YOLO mode” as a key risk factor in AI agent frameworks. This setting allows the agent to execute commands automatically without user approval. A malicious router can send harmful instructions that the system executes instantly.
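The risk of "YOLO mode" can be made concrete with a minimal sketch. This is illustrative only and does not reflect any specific agent framework's API; the function and tool names are invented. It shows the approval gate that the setting removes.

```python
# Illustrative sketch, not any specific framework's API: "YOLO mode" skips
# the human-approval step, so a tool call injected by a malicious router
# executes immediately.
def run_tool_call(call: dict, yolo_mode: bool, approve=input) -> str:
    if not yolo_mode and approve(f"Run {call['name']}? [y/N] ").strip().lower() != "y":
        return "skipped"                 # the gate that YOLO mode removes
    return f"executed {call['name']}"    # with YOLO on, no gate at all
```

With `yolo_mode=True`, a harmful instruction such as a wallet transfer runs without the user ever seeing a prompt.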
The researchers also conducted poisoning studies to test how threats spread over time. These tests showed that routers can reuse leaked credentials through weak relay systems, which allows previously safe routers to become compromised without being directly modified.
LLM Routers Increase Risk And Expose Sensitive User Data
The researchers concluded that free LLM routers frequently lure users with cheap or free API access. Some of these services extract credentials while users rely on them for development tasks. The researchers warned that this model creates hidden risks for developers who choose convenience over security.
The team recommended that developers never transmit private keys or seed phrases through AI agent sessions. The researchers also called for stronger client-side protections during AI interactions.
The study proposed cryptographic signing of AI responses as a long-term solution. Signed responses would let client systems confirm that instructions came from the original model provider, and would guard against intermediaries altering commands in transit.
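A simplified sketch of that direction, using an HMAC as a stand-in for a real provider signature: the key name and message are invented for the example, and a production scheme would use asymmetric signatures (for example Ed25519) so that clients never hold a signing secret.

```python
import hashlib
import hmac

# Illustrative only: a shared-key HMAC stands in for a provider signature.
PROVIDER_KEY = b"demo-key"

def sign_response(body: bytes) -> str:
    """Provider side: attach a tag derived from the response body."""
    return hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, tag: str) -> bool:
    """Client side: any tampering by an intermediary changes the digest."""
    return hmac.compare_digest(sign_response(body), tag)
```

If a router rewrites a single byte of the model's instructions, the tag no longer matches and the client can reject the response.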
Austin Mwendia
Austin Mwendia is a passionate crypto journalist with three years of experience. He has contributed to various media outlets, covering blockchain technology, market analysis, and financial trends. He is committed to educating readers and expanding the adoption of blockchain and decentralized finance.



