Beyond detection. Beyond removal.

Toward lasting change.

Current approaches to online hate and health misinformation focus on detection and removal. Flag the post. Ban the account. Move on.

But removal doesn't change minds. It doesn't rebuild trust. It doesn't create the conditions for people to think and act differently.

This site presents a different approach—one grounded in behavioral science research on how people actually change.

The core insight

Research on social contagion by Damon Centola reveals something that challenges conventional wisdom: lasting behavior change requires multiple reinforcing interactions within trusted networks—not one-time interventions.

A single AI chatbot isn't enough. A single fact-check isn't enough. A single removed post definitely isn't enough.
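The threshold intuition behind complex contagion is easy to see in a small simulation. The sketch below (plain Python, with networkx as an assumed dependency; the network and every parameter are illustrative, not drawn from Centola's studies) contrasts a single seed with two reinforcing seeds on a clustered ring network where adoption requires signals from at least two neighbors:

    # Minimal sketch of complex contagion: a node adopts only after at
    # least `threshold` of its network neighbors have adopted.
    import networkx as nx  # assumed dependency, illustrative only

    def spread(graph, seeds, threshold):
        """Iterate to a fixed point; return the final set of adopters."""
        adopted = set(seeds)
        changed = True
        while changed:
            changed = False
            for node in graph:
                if node in adopted:
                    continue
                reinforcing = sum(1 for nb in graph[node] if nb in adopted)
                if reinforcing >= threshold:
                    adopted.add(node)
                    changed = True
        return adopted

    # A ring lattice (each node tied to its 4 nearest neighbors) has the
    # local clustering that complex contagion needs to travel.
    g = nx.watts_strogatz_graph(n=30, k=4, p=0.0)

    one_touch = spread(g, seeds={0}, threshold=2)      # single intervention
    reinforced = spread(g, seeds={0, 1}, threshold=2)  # neighbors reinforce
    print(len(one_touch), len(reinforced))  # prints "1 30"

One seed reaches no one, because no node ever sees two adopting neighbors; two adjacent seeds cascade through the whole network. That is the gap between a one-time intervention and reinforcement inside a trusted network.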

What actually works involves:

  • Multiple touchpoints across different contexts
  • Trusted messengers—nurses, peers, community members
  • Different interfaces for different roles—tools designed for each stakeholder
  • Visible community data—showing what others in your network are doing

AI can be a component—but only one component in a larger system designed around how behavior actually changes.
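As a purely hypothetical illustration of how those four ingredients might translate into software, here is a minimal data-model sketch. Every name in it (Role, Touchpoint, plan_touchpoints, the 0.5 visibility cutoff) is invented for this page, not taken from any existing system:

    from dataclasses import dataclass
    from enum import Enum

    class Role(Enum):
        PARENT = "parent"
        NURSE = "nurse"
        PEER = "peer"

    @dataclass
    class Touchpoint:
        role: Role     # trusted messenger who delivers it
        channel: str   # interface designed for that role
        order: int     # position in the multi-touch sequence

    def plan_touchpoints(network_uptake: float) -> list[Touchpoint]:
        """Sequence touchpoints across contexts and messengers."""
        plan = [
            Touchpoint(Role.PEER, "group conversation", order=1),
            Touchpoint(Role.NURSE, "clinic visit", order=2),
        ]
        # Visible community data: surface the dashboard only once there is
        # real progress to show (the 0.5 cutoff is an arbitrary placeholder).
        if network_uptake >= 0.5:
            plan.append(Touchpoint(Role.PEER, "community dashboard", order=3))
        return plan

    print(plan_touchpoints(network_uptake=0.6))

The point of the sketch is the shape, not the specifics: messengers, interfaces, sequencing, and community data are first-class parts of the design, and AI would slot in as one channel among several.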

A framework that integrates the research

This approach draws on seven complementary lines of research:

  • Complex Contagion
  • Contact Theory
  • Restorative Practices
  • Social-Emotional Learning (SEL)
  • Bystander Intervention
  • Nudge Theory
  • Moral Development

These aren't just abstract theories. They inform concrete design decisions: which intervention to use when, which messenger to deploy, how to sequence touchpoints, how to make community progress visible.

Explore the framework →

Two areas of application

I've been exploring how this framework applies to two challenging domains:

Responding to Online Hate

How might we move beyond detection and removal to restorative responses on decentralized social networks like Bluesky?

Read more →

Addressing Vaccine Hesitancy

What would it take to help parents navigate vaccine decisions—combining AI conversations with trusted nurses and community data?

Read more →

Why this matters

We're at an inflection point. AI makes new kinds of interventions possible—but we need to think carefully about what actually creates change.

The easy path is building more chatbots, more detection systems, more content moderation. The harder path is designing systems that account for trust, networks, multiple touchpoints, and the messy reality of how humans change their minds.

This site documents an ongoing exploration of that harder path.