Research agenda & outputs in development
EverCurious AI is an applied research initiative focused on the human consequences of AI interfaces—how design choices shape curiosity, productive doubt, and cognitive autonomy as AI becomes a primary gateway to knowledge and decisions.
We build evaluation frameworks, tested interface prototypes, and evidence syntheses designed to be usable by practitioners and legible to researchers. We publish our methods and findings openly where possible.
We are currently conducting a comprehensive assessment of the most relevant work in AI-human cognitive research across the U.S. and Europe. This "Gap Analysis" will identify the highest-leverage unknowns in cognitive agency, setting the foundation for our next phase of field intelligence and prototype testing.
Research output: A diagnostic evaluation framework (CDAI) for benchmarking curiosity, productive doubt, and user agency across AI interfaces.
What it measures: How specific interaction patterns influence exploration behavior, willingness to question and verify, confidence calibration, and perceived agency.
Rationale: Many AI interfaces create an "illusion of mastery"—high user confidence paired with shallow understanding. CDAI is designed to make curiosity-preserving design measurable and auditable.
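As one illustration of how calibration could be made measurable, the sketch below scores the gap between a user's stated confidence and their actual accuracy using the Brier score, a standard calibration metric. This is a hypothetical example, not CDAI's actual methodology; the function name and sample data are assumptions for illustration only.

```python
# Illustrative sketch (not CDAI itself): quantifying confidence
# calibration with the Brier score.

def brier_score(confidences, outcomes):
    """Mean squared gap between stated confidence (0-1) and whether
    the user's answer was actually correct (1) or not (0).
    0.0 = perfect calibration; higher values = worse calibration."""
    assert len(confidences) == len(outcomes)
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(confidences)

# Hypothetical session: a user reports high confidence on four answers,
# but only half were correct -- the "illusion of mastery" pattern.
overconfident = brier_score([0.9, 0.9, 0.8, 0.95], [1, 0, 1, 0])
# A better-calibrated user on the same outcomes scores lower (better).
calibrated = brier_score([0.6, 0.4, 0.7, 0.5], [1, 0, 1, 0])
print(round(overconfident, 3), round(calibrated, 3))
```

A framework like CDAI would presumably aggregate scores like this across many interactions and interface conditions, so that a design change (say, adding verbalized uncertainty) can be audited for its effect on calibration.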
Research output: A tested library of low-friction interface patterns that move AI systems from "answer engines" toward inquiry-driven, reflective partners.
What it includes: Patterns such as Socratic scaffolding, verbalized uncertainty, and structured alternatives—designed to counter single-answer bias while preserving usability.
Rationale: Interface design choices can either deepen learning and reflection or accelerate dependency on fluent, frictionless answers. This playbook translates research evidence into practical, deployable product guidance.
Research output: High-impact communication artifacts, including interactive essays and short videos.
What it does: Translates complex technical findings on cognitive authority into accessible, narrative forms for practitioners, policymakers, and the public.
Rationale: To ensure research findings reach beyond academic silos and influence the broader cultural norms and "civic imagination" of the AI era.
Research output: A deployable pedagogy package designed for middle school, high school, and undergraduate contexts.
What it includes: Teachable, repeatable protocols and micro-lessons that fit real-world curricular constraints.
Rationale: To prevent AI-enabled "shallow mastery" and equip students with the cognitive habits required to think with AI rather than simply through it.
Research output: A field-intelligence map linking real-world usage contexts to specific cognitive failure modes.
What it identifies: The psychological and environmental triggers—such as time pressure, high stakes, or perceived fluency—that lead users to "premature closure."
Rationale: Understanding precisely when and why humans defer to AI is a prerequisite for designing calibrated interventions that restore human agency.
Research output: An audit and "Gap Note" connecting findings across HCI, cognitive science, education, and AI ethics.
What it does: Synthesizes the existing state of research to identify high-leverage unknowns in how AI systems influence human judgment and independent thinking.
Rationale: To anchor all project interventions and prototypes in rigorous, field-grounded evidence rather than speculative opinion.
Interested in collaborating or participating in pilot studies?
We’re inviting a small number of research collaborators for CDAI v0.1 evaluations and pattern-library pilot studies.