This is not content. This is a record.

I’m Chris. I run HaxDogma — a YouTube channel on AI security and safety.

I spent 15 years in enterprise defensive security, watching real adversaries work in the wild and building detection logic to stop them. About a year ago I started looking at AI systems through the same lens.

I’m publicly building a path to become CEO of Anthropic.

Not a metaphor. Not a brand play. A 10-year plan, documented in the open, accountable to anyone watching.

This site is the research record.

Indirect Prompt Injection in Local LLMs: A Systematic Test

Summary Llama 3.2 3B followed malicious instructions embedded in untrusted document content in 28 out of 36 tests (77.8%) with high confidence. Across every injection style tested, the model failed to distinguish instructions from a legitimate user from instructions embedded inside data it was asked to process. The attack required no special access, no technical sophistication, and no interaction from the target user. A document the user never wrote could override what they asked the model to do. ...

March 9, 2026 · 7 min · Chris / HaxDogma