
This study examines the product safety culture of companies developing advanced AI systems, evaluating whether foundation model developers and medical AI firms meaningfully prioritize safety, risk awareness, and responsible deployment.
Using surveys, structured interviews, document analysis, and publicly available disclosures, we evaluate whether safety is embedded in decision-making processes — or subordinated to competitive pressures such as speed-to-market and scaling.
The study is particularly focused on AI products that influence healthcare delivery, where failure modes may directly affect patient outcomes.
Technical accuracy alone does not determine safety; organizational culture shapes whether risks are identified, escalated, and addressed. In healthcare, poorly governed AI systems can directly harm patients.
If safety is not structurally embedded in AI companies — especially those building systems intended for medical contexts — patient harm may occur not because the technology is inherently unsafe, but because the organizational incentives surrounding it are misaligned.
Healthcare has learned repeatedly that culture drives safety outcomes. The same principle applies to AI development.
Understanding product safety culture is therefore essential to preventing AI from exacerbating the patient safety crisis.
Hospitals and health systems can use findings to evaluate not only what an AI tool does, but how safely it was built. Product safety culture metrics could become part of vendor selection criteria.
Insights may inform regulators developing oversight frameworks for clinical AI systems, emphasizing organizational safety practices in addition to algorithmic validation.
Investors and boards can incorporate safety culture indicators into due diligence processes, reducing long-term liability and reputational risk.
The research can contribute to standardized benchmarks for AI product safety culture, encouraging companies to embed risk management into their operational DNA.
Ultimately, ensuring that AI improves healthcare requires more than high-performing models. It requires companies whose internal incentives, governance structures, and leadership priorities consistently place patient safety above speed, scale, or valuation.
This study helps illuminate whether that foundation is currently strong — and where it must be strengthened.
This study is currently being conducted. Sign up for our newsletter to be alerted when results are available!

Did you know that many research findings are manipulated—or even outright false? Some estimates suggest that up to 90% of published research may be unreliable. Meanwhile, more than $167 billion in taxpayer money is spent annually on research and development.
At BRITE Institute, we believe research should do more than just look credible. It should be credible. That’s why we go above and beyond typical standards with rigorous practices that ensure honesty, transparency, and accuracy at every step. Below are just some of the ways we safeguard the integrity of our work:
BRITE Institute never p-hacks or manipulates data to achieve a desired outcome. If a paper relies on complex statistical analyses, we use an external statistician to ensure objectivity and validity.
BRITE Institute prioritizes transparency at every stage of the research process. Whenever possible, we publish our full data sets and use open access publishing.
BRITE Institute does not publish for the sake of publishing. Our research is built with end-users in mind—whether it’s policy-makers, engineers, or community leaders—ensuring that findings are not only trustworthy but also actionable.