Contribute to groundbreaking projects in artificial intelligence and healthcare – all from the comfort of your home, and on a schedule that works for you.

At BRITE Institute, our research program examines how artificial intelligence affects patient safety in real clinical environments. We study the risks, failure modes, and human–AI interactions that influence medical decisions, developing evidence-based strategies and tools that help healthcare systems deploy AI responsibly while reducing preventable harm and improving patient outcomes.

This study examines the product safety culture of AI companies, evaluating whether foundation model developers and medical AI firms meaningfully prioritize safety, risk awareness, and responsible deployment.

This research project examines how individuals adapt their decision-making strategies when working with AI under uncertainty. By studying trust, over-reliance, and strategic adaptation, the research helps ensure that AI tools in healthcare strengthen, rather than compromise, patient safety.
Did you know that many published research findings are manipulated, or even outright false? Some estimates suggest that up to 90% of published research may be unreliable. Meanwhile, more than $167 billion in taxpayer money is spent annually on research and development.
At BRITE Institute, we believe research should do more than just look credible. It should be credible. That’s why we go above and beyond typical standards with rigorous practices that ensure honesty, transparency, and accuracy at every step. Below are just some of the ways we safeguard the integrity of our work:
BRITE Institute never p-hacks or manipulates data to achieve a desired outcome. When a paper relies on complex statistical analyses, we engage an external statistician to ensure objectivity and validity.
BRITE Institute prioritizes transparency at every stage of the research process. Whenever possible, we publish our full data sets and use open access publishing.
BRITE Institute does not publish for the sake of publishing. Our research is built with end-users in mind, whether policy-makers, engineers, or community leaders, ensuring that findings are not only trustworthy but also actionable.