AI-Assisted Decision-Making Under Uncertainty

This research project examines how individuals adapt their decision-making strategies when working with AI under uncertain circumstances. By studying trust, over-reliance, and strategic adaptation, the research helps ensure AI tools in healthcare strengthen — rather than compromise — patient safety.

What is this project about?

This study is designed to examine how individuals adapt their decision-making strategies when working with AI decision-support tools in time-sensitive, high-uncertainty environments.

Participants play a modified version of the strategy game Battleship. In each round, they must identify hidden targets under uncertainty while managing limited moves. Alongside their own reasoning, they receive probabilistic recommendations from an AI system that varies in accuracy across experimental conditions.

The study systematically manipulates AI performance levels, from highly accurate to moderately accurate to marginally better than chance, in order to observe (a simulation sketch follows this list):

  • Reliance behavior: Do participants defer to AI when it performs well?
  • Under-reliance behavior: Do participants abandon AI prematurely when its accuracy drops, even if it remains statistically superior to guessing?
  • Cognitive offloading: Do individuals stop independently analyzing patterns when AI accuracy is high?
  • Strategic adaptation: Do participants integrate AI analysis intelligently, adjusting strategy based on its strengths and weaknesses?
  • Trust calibration: Does participant trust track true AI performance, or does it drift due to isolated errors or streaks?
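
To make the task concrete, the sketch below simulates one accuracy condition end to end: an AI that points at the true cell with a configurable probability, and a fully reliant player who always fires where it points. It is a minimal illustration, not the study software; the 10x10 grid, the accuracy levels, and the `ai_recommendation` helper are all assumed for the example.

```python
import random

GRID = 10  # assumed 10 x 10 board; the study's actual grid may differ

def ai_recommendation(target, accuracy):
    """Return the true target cell with probability `accuracy`;
    otherwise recommend a uniformly random cell (which may still
    be correct by chance)."""
    if random.random() < accuracy:
        return target
    return (random.randrange(GRID), random.randrange(GRID))

def hit_rate_if_fully_reliant(accuracy, rounds=10_000):
    """Hit rate of a player who always fires where the AI points."""
    hits = 0
    for _ in range(rounds):
        target = (random.randrange(GRID), random.randrange(GRID))
        hits += ai_recommendation(target, accuracy) == target
    return hits / rounds

# Illustrative accuracy conditions: highly accurate, moderately
# accurate, and only marginally better than the 1/100 chance baseline.
for accuracy in (0.90, 0.60, 0.05):
    print(f"AI accuracy {accuracy:.0%}: "
          f"full-reliance hit rate ~{hit_rate_if_fully_reliant(accuracy):.2%} "
          f"(chance baseline 1.00%)")
```

In the actual experiment, participants remain free to accept or ignore each recommendation, which is what makes reliance, under-reliance, and trust calibration observable.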

This experiment provides a structured and measurable framework for studying human–AI interaction in dynamic decision environments — particularly where outcomes matter and uncertainty is inherent.

Why is this important?

As AI systems increasingly support decisions in medicine, national security, and critical infrastructure, a core question emerges:

How do humans behave when AI becomes part of the decision loop?

In healthcare, clinicians are already using AI for:

  • Diagnostic imaging interpretation
  • Risk stratification and predictive analytics
  • Clinical decision support alerts
  • Treatment recommendations

However, the safety impact of AI does not depend on algorithmic accuracy alone. It depends on how humans respond to that accuracy.

Poorly calibrated trust can create two major risks:

  1. Over-reliance
    If clinicians defer too heavily to highly accurate AI systems, they may:
    • Stop critically evaluating recommendations
    • Miss rare but consequential errors
    • Experience skill degradation over time
  2. Under-reliance
    If clinicians abandon moderately accurate AI tools after observing a few mistakes, they may (a worked example follows this list):
    • Ignore statistically beneficial decision aids
    • Reduce system-level safety gains
    • Reinforce cognitive biases
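
A back-of-the-envelope calculation shows what under-reliance forfeits. The numbers below (a 60%-accurate aid, a 100-cell grid) are illustrative assumptions, not parameters from the study:

```python
# Illustrative figures only: a 60%-accurate aid versus blind guessing
# on a 100-cell grid. Neither number comes from the actual study.
decisions = 100
p_ai = 0.60          # moderately accurate AI
p_chance = 1 / 100   # blind guess among 100 cells

print(f"Follow the AI:  ~{decisions * p_ai:.0f} expected hits")
print(f"Guess randomly: ~{decisions * p_chance:.0f} expected hit")

# A short error streak is weak evidence that the aid is broken: even a
# 60%-accurate AI misses three times in a row with probability
# (1 - 0.6) ** 3 = 6.4%. Abandoning it on that basis forfeits a
# sixty-fold edge over chance.
print(f"P(3 misses in a row) = {(1 - p_ai) ** 3:.1%}")
```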

The Battleship Experiment allows us to isolate and quantify these behaviors in a controlled environment before studying them in clinical contexts.

Understanding trust calibration, strategy adaptation, and cognitive offloading is essential for ensuring that AI improves — rather than inadvertently harms — patient safety.

Where can it be applied?

While the experimental environment uses a game framework, the implications extend directly to healthcare and other high-stakes domains.

1. Safer Clinical Decision Support Systems

Results can inform the design of AI tools that:

  • Communicate uncertainty more effectively
  • Adjust feedback mechanisms to prevent blind reliance
  • Encourage continued clinician engagement rather than passive acceptance

2. Training for Human–AI Collaboration

Findings can support the development of:

  • Medical training modules that teach appropriate AI trust calibration
  • Simulation environments for practicing AI-assisted diagnosis
  • Protocols that reduce automation bias and alert fatigue

3. Risk Governance and Policy Design

At the institutional level, this research can guide:

  • AI implementation strategies in hospitals
  • Oversight policies for clinical AI deployment
  • Metrics for monitoring appropriate human–AI interaction (one candidate metric is sketched below)
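
As one illustration of such a metric, the sketch below scores "appropriate reliance": the share of decisions where the human followed the AI when it was right and overrode it when it was wrong. The `Decision` record and the toy log are hypothetical, not a real clinical schema.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    ai_was_correct: bool
    human_followed_ai: bool

def appropriate_reliance(log):
    """Fraction of decisions where the human relied on the AI exactly
    when it was correct (followed when right, overrode when wrong)."""
    matches = sum(d.human_followed_ai == d.ai_was_correct for d in log)
    return matches / len(log)

log = [
    Decision(ai_was_correct=True,  human_followed_ai=True),   # relied when right
    Decision(ai_was_correct=False, human_followed_ai=False),  # overrode when wrong
    Decision(ai_was_correct=True,  human_followed_ai=False),  # under-reliance
    Decision(ai_was_correct=False, human_followed_ai=True),   # over-reliance
]
print(f"appropriate reliance: {appropriate_reliance(log):.0%}")  # 50%
```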

4. Improving Patient Safety in the AI Era

Ultimately, the study contributes to a central patient safety challenge:

Not just “Is the AI accurate?”
But “Do humans use it in ways that improve outcomes?”
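
The distinction is easy to make quantitative. Under purely illustrative accuracy figures (assumed, not measured), team accuracy depends on the human's reliance policy, not on the AI alone:

```python
# Illustrative sketch: outcomes depend on the reliance policy, not just
# on standalone AI accuracy. Both probabilities are assumed for the
# example, and decisions are treated as independent.
P_AI_CORRECT = 0.85     # standalone AI accuracy (assumed)
P_HUMAN_CORRECT = 0.75  # standalone human accuracy (assumed)

def team_accuracy(p_follow):
    """Expected accuracy when the human follows the AI with
    probability p_follow and otherwise decides alone."""
    return p_follow * P_AI_CORRECT + (1 - p_follow) * P_HUMAN_CORRECT

for p_follow in (0.0, 0.5, 1.0):
    print(f"follows AI {p_follow:.0%} of the time -> "
          f"team accuracy {team_accuracy(p_follow):.1%}")

# Calibrated trust can beat both alone: if the human followed the AI
# exactly when it is right and decided alone otherwise, team accuracy
# would be 0.85 + 0.15 * 0.75 = 96.25%.
```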

By rigorously studying behavioral adaptation under varying AI accuracy conditions, the Battleship Experiment helps build a foundation for AI systems that enhance human judgment — without replacing it or undermining it.

For healthcare, where lives are at stake, calibrated human–AI collaboration is not optional — it is essential.

What are the results?

This project is still in progress. Sign up for our mailing list to be notified when results are published.

Research You Can Rely On

Did you know that many research findings are manipulated—or even outright false? Some estimates suggest that up to 90% of published research may be unreliable. Meanwhile, more than $167 billion in taxpayer money is spent annually on research and development.

At BRITE Institute, we believe research should do more than just look credible. It should be credible. That’s why we go above and beyond typical standards, with rigorous practices that ensure honesty, transparency, and accuracy at every step. The FAQ below outlines some of the ways we safeguard the integrity of our work.


Frequently Asked Questions

What does BRITE Institute do?

BRITE Institute is a research and development nonprofit organization dedicated to advancing the science of risk. We conduct both basic and applied research, and we develop tools and technologies to improve risk management.

Is BRITE Institute a 501(c)(3) organization?

Yes, BRITE Institute is proud to be recognized as a 501(c)(3) nonprofit organization. All donations to BRITE Institute are tax deductible.

What kind of research does BRITE Institute do?

Our research includes basic studies for understanding complex system risks and applied studies for developing effective risk management technologies.

Why should we trust BRITE Institute?

As a public charity, we believe we need to go above and beyond to earn and keep your trust. We have adopted a four-pillar framework that goes far beyond what is required by law. Our four pillars of integrity are independent audits, transparency, expert oversight, and compliance. These pillars guide our operations and are central to maintaining the highest standards of integrity and effectiveness in our work. You can read more about our governance here.

How can I donate to BRITE Institute?

Donations are vital to our mission and operations. To support us financially, you can visit our website's donation page. Your contribution is greatly appreciated, and we take seriously our responsibility to spend funds wisely!

Is there a way I can support BRITE Institute if I cannot afford to make a donation?

There are many ways to support BRITE Institute, including volunteering and engaging with our social media. Visit our support page to learn more!

How can I contact BRITE Institute?

We welcome your queries and interest. You can reach out to us via email at info@briteinstitute.org or through our website's contact page.

Where are you located?

BRITE Institute's headquarters is in Arizona, but we are a remote team with team members across the USA and the world. You can find more detailed information about our operations here and state specific donation disclosures here.
