Generative artificial intelligence is shaping how people search for information and make decisions, including many decisions about risk. How can we safely and effectively use AI to help us assess these risks?
AI is already well integrated into the risk management of technical and financial systems, where the relevant factors are easily quantified. We are working to uncover the benefits and pitfalls of AI in decision-making when the stakes are high.
When used improperly, AI can amplify risk instead of reducing it. In high-stakes environments—such as national security, healthcare, and public safety—decisions often rely on context, human judgment, and nuanced qualitative data that AI may misinterpret or overlook. If we treat AI-generated insights as infallible, we risk making critical decisions based on incomplete, biased, or misunderstood information.
Without clear guidelines, AI systems may be trusted too quickly, used inappropriately, or embedded in decision processes without proper oversight. This can lead to cascading failures, erosion of public trust, and serious harm. That’s why it is essential to establish robust, practical frameworks that ensure AI is used responsibly, transparently, and with proper human evaluation—especially when lives, livelihoods, or national interests are on the line.
This project is still in progress.
Want to be the first to know what we discover? Join our newsletter for early access to our findings, giving you a head start before the rest of the world.
Did you know that many research findings are manipulated—or even outright false? Some estimates suggest that up to 90% of published research may be unreliable. Meanwhile, more than $167 billion in taxpayer money is spent annually on research and development.
At BRITE Institute, we believe research should do more than just look credible. It should be credible. That’s why we go above and beyond typical standards with rigorous practices that ensure honesty, transparency, and accuracy at every step. Below are just some of the ways we safeguard the integrity of our work:

BRITE Institute never p-hacks or manipulates data to achieve a desired outcome. If a paper relies on complex statistical analyses, we engage an external statistician to ensure objectivity and validity.
BRITE Institute prioritizes transparency at every stage of the research process. Whenever possible, we publish our full data sets and use open access publishing.
BRITE Institute does not publish for the sake of publishing. Our research is built with end-users in mind—whether it’s policy-makers, engineers, or community leaders—ensuring that findings are not only trustworthy but also actionable.