Applied Testbeds
EverCurious AI uses a mixed-methods approach to study how AI interfaces affect reasoning and agency. Our research is conducted across three primary workstreams:
1. Academic & Pedagogical Environments
Testbed output: Classroom-ready protocols and micro-lessons deployed in middle school, high school, and undergraduate settings.
What it tests: Whether repeatable routines and "Inquiry Labs" reduce AI-enabled "shallow mastery" and strengthen students' independent thinking habits.
Rationale: To ensure students develop the habits needed to think with AI rather than outsourcing reasoning to it, protecting learning outcomes and public trust.
2. "In-the-Wild" Field Intelligence
Testbed output: Mapping of real-world usage through interviews, diary studies, and lightweight instrumentation.
What it tests: The environmental triggers—such as time pressure, confidence, and high stakes—that lead people to defer to, override, or interrogate AI systems.
Rationale: To build a Deference Conditions Atlas that identifies cognitive failure modes in the naturalistic settings where AI is becoming the default gateway to knowledge.
Note: Recruitment for these studies is supported by the EverCurious community and existing digital audience networks.
3. Collaborative Product Pilots
Testbed output: A/B/C testing environments and "Reflective-AI" variants scaled via partner channels.
What it tests: Comparison of standard "answer-first" flows against reflective mediation layers, using the CDAI agency metric set as the primary outcome.
Rationale: To generate actionable product guidance for AI labs and tech firms, ensuring that "durable trust" is built through systems that support inquiry rather than quietly suppress it.
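The A/B/C comparison above depends on each participant seeing the same variant every session. One common way to achieve that is deterministic hash-based bucketing; the sketch below is illustrative only (the variant labels and experiment name are hypothetical, not EverCurious's actual pipeline):

```python
import hashlib

# Hypothetical variant arms: a standard answer-first flow plus two
# reflective mediation layers. Labels are illustrative placeholders.
VARIANTS = ["answer_first", "reflective_prompt", "reflective_delay"]

def assign_variant(participant_id: str, experiment: str = "pilot-01") -> str:
    """Deterministically map a participant to one of the variant arms.

    Hashing the (experiment, participant) pair keeps assignment stable
    across sessions without storing any assignment table.
    """
    digest = hashlib.sha256(f"{experiment}:{participant_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Because assignment depends only on the hashed ID, a returning participant is always routed to the same arm, which keeps the between-group comparison clean.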
Interested in participating in pilot studies or serving as a testbed?
Join a pilot study