Current approaches to online hate and health misinformation focus on detection and removal. Flag the post. Ban the account. Move on.
But removal doesn't change minds. It doesn't rebuild trust. It doesn't create the conditions for people to think and act differently.
This site presents a different approach—one grounded in behavioral science research on how people actually change.
The core insight
Research on social contagion by Damon Centola reveals something that challenges conventional wisdom: lasting behavior change requires multiple reinforcing interactions within trusted networks—not one-time interventions.
A single AI chatbot isn't enough. A single fact-check isn't enough. A single removed post definitely isn't enough.
What actually works involves:
- Multiple touchpoints across different contexts
- Trusted messengers—nurses, peers, community members
- Different interfaces for different roles—tools designed for each stakeholder
- Visible community data—showing what others in your network are doing
AI can be a component—but only one component in a larger system designed around how behavior actually changes.
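To ground the contrast between one-shot interventions and reinforcing networks, here is a minimal sketch in TypeScript. Every name in it (Touchpoint, Messenger, reinforcementMet) and the thresholds are hypothetical illustrations, not the site's actual system; it only sketches the complex-contagion idea that change tends to require reinforcement from several distinct trusted messengers across more than one context.

```typescript
// Hypothetical sketch: tracking reinforcing interactions within a trusted network.
// All names and thresholds are illustrative assumptions, not from this site.

type Role = "nurse" | "peer" | "community-member" | "ai-assistant";

interface Messenger {
  id: string;
  role: Role;
  trustedBy: Set<string>; // IDs of people who consider this messenger trusted
}

interface Touchpoint {
  personId: string;    // who was reached
  messengerId: string; // who delivered the message
  context: string;     // e.g. "clinic-visit", "group-chat", "community-event"
  timestamp: Date;
}

// Heuristic reading of the research: one exposure rarely changes behavior;
// reinforcement from multiple distinct trusted messengers, in more than one
// context, is what tends to move people.
function reinforcementMet(
  personId: string,
  touchpoints: Touchpoint[],
  messengers: Map<string, Messenger>,
  minMessengers = 3,
  minContexts = 2
): boolean {
  const relevant = touchpoints.filter((t) => {
    const m = messengers.get(t.messengerId);
    return t.personId === personId && m !== undefined && m.trustedBy.has(personId);
  });
  const distinctMessengers = new Set(relevant.map((t) => t.messengerId));
  const distinctContexts = new Set(relevant.map((t) => t.context));
  return distinctMessengers.size >= minMessengers && distinctContexts.size >= minContexts;
}
```

Under this toy heuristic, a single removed post or a lone chatbot exchange registers as one touchpoint from one messenger in one context and never crosses the threshold, which is exactly the gap a multi-touchpoint system is meant to close.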
A framework that integrates the research
This approach draws on seven complementary lines of research: