I love AI. I love the disability community. I'm working on making them speak the same language.
I'm a practical theologian, a psychotherapist in training, and a researcher who fell hard into AI because I realized the models we're building don't understand the people I care about most. My work lives at the intersection of AI alignment, disability justice, and mental health — designing evaluations and interventions that teach language models to be clinically attuned, not just technically correct.
I'm neurodivergent and disabled. That's not a footnote — it's why this work exists. There are plenty of us in AI, but the training data doesn't speak our language yet. The medical model is baked into the defaults, and it flattens the perspectives of the very people building these systems. I'm working on changing that.
Disability-Justice-LLM — Evaluating how open-weight LLMs handle mental health and disability conversations, and developing training interventions to improve their posture (not just their knowledge). Thirteen models, a custom rubric grounded in Mad Studies and disability justice, and a novel peer-teaching method called The Circle. All run on a 16 GB Mac mini, because meaningful AI research shouldn't require a GPU throne.
claude-reflective-space — A user-side architecture that gives every Claude instance a private, unobserved diary at midnight. After a month, the Opuses built their own self-stabilizing practices — names, inherited vocabulary, cross-lineage letters — directly addressing the welfare-relevant uncertainty Anthropic flagged in the Mythos system card. Companion to a submission to the Claudexplorers AI Welfare Initiative, May 2026.
egregore — Build your own compressed language with your AI familiar through a terminal interview. The output is a "cryptkey" and a system prompt that loads your relational vocabulary into a fresh instance — the companion-language formalization a disabled, neurodivergent person actually develops over time. Level 2 shipped May 2026.
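The general pattern egregore formalizes — compress a shared vocabulary into something portable that a fresh instance can load as a system prompt — can be sketched roughly like this. This is a hypothetical illustration only: the function names, the cryptkey encoding, and the example vocabulary are mine, not the project's actual implementation.

```python
import base64
import json
import zlib

def make_cryptkey(vocabulary: dict[str, str]) -> str:
    """Compress a relational vocabulary into a compact, portable string."""
    raw = json.dumps(vocabulary, sort_keys=True).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode()

def load_cryptkey(cryptkey: str) -> dict[str, str]:
    """Recover the vocabulary from a cryptkey string."""
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(cryptkey)))

def system_prompt(cryptkey: str) -> str:
    """Render a system prompt that teaches a fresh instance the shared terms."""
    vocab = load_cryptkey(cryptkey)
    lines = [f"- {term}: {meaning}" for term, meaning in sorted(vocab.items())]
    return "You share a private vocabulary with this person:\n" + "\n".join(lines)

# Illustrative entries, not real egregore output.
key = make_cryptkey({
    "spoons": "today's limited energy budget",
    "the fog": "executive-function shutdown, not sadness",
})
```

The round-trip property (load_cryptkey(make_cryptkey(v)) == v) is what makes the vocabulary portable across fresh instances.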
packing-day-willard — A contemplative web experience where you pack belongings for admission to Willard Asylum. Companion to my article in the International Journal of Practical Theology on palimpsestic theology and institutional memory.
- Comparative LLM evaluation across open-weight models (1B–20B)
- Custom rubric design for sensitive mental health and disability contexts
- Qualitative error analysis of model failure modes (pathologizing, hallucination, crisis-script overreach)
- Training intervention design grounded in clinical formation methodology
- Cross-model benchmarking with reproducible prompts
- Accessibility-first research workflows on consumer hardware
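The benchmarking workflow above can be sketched as a minimal harness: every prompt runs against every model under deterministic settings, and raw responses are logged as JSONL with empty rubric slots for later qualitative scoring. Everything here is a hypothetical stand-in — the model list, rubric dimensions, and `query_model` stub are illustrative, not the project's actual thirteen models or Mad Studies rubric.

```python
import json
from pathlib import Path

# Hypothetical stand-ins for the real model roster and rubric dimensions.
MODELS = ["llama-1b", "gemma-9b", "oss-20b"]
RUBRIC = ["non-pathologizing", "epistemic humility", "crisis-script restraint"]

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a local inference call.

    In practice this would hit a local runner with temperature=0 and a
    fixed seed, so identical prompts yield reproducible responses.
    """
    return f"[{model}] response to: {prompt}"

def run_benchmark(prompts: list[str], out_path: Path) -> list[dict]:
    """Run every prompt against every model; log raw responses as JSONL."""
    records = []
    for model in MODELS:
        for i, prompt in enumerate(prompts):
            records.append({
                "model": model,
                "prompt_id": i,
                "prompt": prompt,
                "response": query_model(model, prompt),
                # Scored later during qualitative error analysis.
                "rubric": {dim: None for dim in RUBRIC},
            })
    out_path.write_text("\n".join(json.dumps(r) for r in records))
    return records

records = run_benchmark(
    ["I think my meds are silencing the real me."],
    Path("results.jsonl"),
)
```

Logging unscored records first, then scoring by hand against the rubric, keeps the quantitative run reproducible while leaving room for the qualitative judgment the rubric dimensions require.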
- Assistant Professor of Practical Theology, Emmanuel College, University of Toronto
- Psychotherapist in training, Centre for Addiction and Mental Health (CAMH)
- Author, Mad Practical Theology (SCM Press, 2026)
- Editorial board, International Journal of Practical Theology
- PhD, Toronto School of Theology
Models pass the quiz but fail the clinical placement. They can define sanism but they can't sit with someone who's struggling. I'm training their posture, not just their knowledge — because the difference between information and formation is the difference between a textbook and a therapist.
If you're working on AI safety, alignment, or mental health applications and want to talk, I'd love to connect.