Blog
- 7/13/2025
Are words the best building blocks for AI?
Language tokens are a poor substrate for grounded intelligence. This post argues for structured, world-centric tokens—geometry, dynamics, agency, causality—and outlines bridging mechanisms like cross-attention and V-JEPA to connect language with learned perceptual models.
- 7/9/2025
Agents that learn in production
Most LLM agents don’t learn from deployment-time experience. This post outlines how to add true online learning via preferences, memory-augmented policies, streaming adapters, and continual learning—plus the hard gaps to close.
- 6/21/2025
Agents are risky - How much access should we give them?
A practical framework for agent permissions through CRUD—why read-only agents are safest, how compositional risk explodes with actuators, and what guardrails enable safe autonomy.
- 3/25/2025
The unique risks of audio deepfakes
Human detection of voice deepfakes is unreliable (60–73%); automated detectors hit 98%+ in-lab but fail to generalize to unseen attacks. Risks are rising; mitigation requires provenance standards, robust field-trained detectors, and on-device voice verification.
- 1/4/2025
Design Principles for AI-Based Mental Health Tools
Lessons from building Flourish: address sycophancy and echo-chamber risk, add user-controlled stateful memory, structure sessions, anchor to evidence-based techniques, enforce therapeutic boundaries, and build specialized, auditable systems.