Snowflake Cortex AI Vulnerability Allowed Malware Execution

A PromptArmor report revealed a prompt injection attack chain in Snowflake’s Cortex AI Agent, which enabled it to escape its sandbox and execute malware. The vulnerability, now fixed, was triggered when a user prompted the agent to review a GitHub repository containing a hidden prompt injection.

Source: Simon Willison Blog