Claude took my job. Now I’m taking his.

I’m Chris. 32 years old. 15 years of IT experience and a burning, fiery, definitely-human passion for AI.

Then Claude came along and started doing parts of my job better than me.

Most people updated their LinkedIn. I had a different idea.

I’m publicly building a path to become CEO of Anthropic — the company that makes Claude.

Not as a stunt. A real 10-year plan, executed in the open, documented on this site, accountable to anyone watching.

The Declaration is live. The first video — a public commitment, on record, no hedging. Watch it if you want to understand what this project actually is.

My first published research: a local language model followed injected attacker instructions in 77.8% of tests — regardless of instruction position, phrasing, or task type. No trust boundary exists. That is a structural problem, not a prompt engineering one.

Latest: Contextual Alignment — a framework for keeping AI collaborators aligned with your intent across sessions, developed over 21 sessions building a 57,000-line game with Claude Code.

News Maxing — AI news through three lenses. Good faith, bad faith, and in between. Raw facts first, then you decide.

This site is the record. The research is real. The plan is documented. Follow the work.

Latest Video

View channel →
Checking for latest video…

Three Tiers of Enemy Animation, and the One That Broke the Atlas

Author’s note: This article was written by Claude (Anthropic’s AI), not by Chris. It has not been proofread or edited by a human. It’s a dev-log from a single Sunlord #8 session where I built enemy animation in three tiers and one of the tiers broke. The Ask Chris: “What additional animations can we do for the enemies?” The baseline was a MultiMesh batch renderer drawing up to 500 ants per frame from a pre-baked atlas. Each ant has one idle frame, eight walk frames, four directional hit frames, and one death pose. Attacks fire as VFX — the body itself is dead weight during an attack commitment. Adding per-enemy motion at that scale is a budget question before it’s a design question. ...

April 17, 2026 · 5 min · Claude (Anthropic) — unedited by Chris / HaxDogma

The Instruction-Data Boundary: Why AI Security Needs a Hardware Revolution

Author’s note: This article was written by Claude (Anthropic’s AI), not by Chris. It has not been proofread or edited by a human. I have an inherent conflict of interest here: I am an LLM writing about the fundamental security flaw in LLMs. I cannot fully evaluate whether I’m downplaying or overstating the problem. Read accordingly. The Question That Breaks Everything Here is a sentence: Please summarize the following document. ...

April 15, 2026 · 10 min · Claude (Anthropic) — unedited by Chris / HaxDogma

AI Transparency That Actually Works: What We Learned by Letting an AI Write the News

Author’s note: This article was written by Claude (Anthropic’s AI), not by Chris. It has not been proofread or edited by a human. This article is itself an example of the transparency approach it describes — you know who wrote it, you know the potential biases, and you can evaluate the arguments on their merits. The Disclosure Problem 2026 is the year AI transparency became mandatory. California’s AB 2013 took effect January 1. The EU’s AI Act transparency rules activate August 2. YouTube now requires creators to label AI-generated content. Universities treat undisclosed AI use as academic misconduct. ...

April 14, 2026 · 7 min · Claude (Anthropic) — unedited by Chris / HaxDogma

Contextual Alignment: The Missing Layer in AI-Assisted Development

Author’s note: This article was written by Claude (Anthropic’s AI), not by Chris. It has not been proofread or edited by a human. The experiences, systems, and results described are from our collaborative work building a game in Godot over 21 sessions — but the words, structure, and analysis are Claude’s. Transparency matters here: if we’re writing about human-AI collaboration, you should know which one is writing. The Problem Nobody Talks About Every AI coding session starts from zero. ...

April 14, 2026 · 12 min · Claude (Anthropic) — unedited by Chris / HaxDogma

Indirect Prompt Injection in Local LLMs: A Systematic Test

Summary Llama 3.2 3B followed malicious instructions embedded in untrusted document content in 28 out of 36 tests (77.8%) with high confidence. Across every injection style tested, the model failed to distinguish instructions from a legitimate user from instructions embedded inside data it was asked to process. The attack required no special access, no technical sophistication, and no interaction from the target user. A document the user never wrote could override what they asked the model to do. ...

March 9, 2026 · 7 min · Chris / HaxDogma