
Cerner’s AI guiding principles

  1. Governance Structures. We have established processes to provide oversight over the lifecycle of our AI/ML products. Oversight includes:
    • product lifecycle management — focused on ensuring execution of our product development framework during each phase of development;
    • hazard analysis — monitoring before, during, and after a feature is implemented or modified to evaluate performance and mitigate patient safety, financial, regulatory, security, privacy, and data integrity risks; and
    • oversight by patient safety committee — consisting of clinical engagement, peer review, data analysis and review of the potential for bias.

    We continually monitor applicable regulations and guidance for changes and update our governance structures accordingly. Cerner also engages with regulatory agencies to discuss our approaches to responsible AI/ML development and provide feedback on best practices.1

  2. Consideration of Potential Bias and Inequities. Cerner develops and continually monitors its AI/ML products for the potential presence of bias and inequities. When we develop AI/ML product algorithms, we research applicable third-party literature, review the work of outside experts, and consult internal data scientists, product managers, and clinicians to understand likely sources of bias, including bias inherent in the data itself, and consider whether modifications to the AI/ML product(s) are appropriate to address the potential bias in the data. While models are in production, our data science teams remain engaged to monitor each model's performance and investigate feedback from stakeholders and clients on potential bias or disparities.
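    As a minimal illustration of the kind of in-production monitoring described above, the sketch below compares a model's true positive rate across patient subgroups and flags any subgroup whose gap from the overall rate exceeds a tolerance. The metric choice, the 10% tolerance, the record schema, and the function names are all illustrative assumptions for this sketch — they are not Cerner's actual method or tooling.

    ```python
    # Hypothetical sketch: flag subgroups whose recall (true positive rate)
    # diverges from the overall rate. Metric, threshold, and field names
    # are assumptions for illustration only.
    from collections import defaultdict

    def true_positive_rate(records):
        """Fraction of actual positives the model flagged (recall)."""
        positives = [r for r in records if r["label"] == 1]
        if not positives:
            return None
        flagged = sum(1 for r in positives if r["prediction"] == 1)
        return flagged / len(positives)

    def disparity_report(records, group_key="group", tolerance=0.10):
        """Compare each subgroup's TPR to the overall TPR and mark
        any gap larger than the tolerance for investigation."""
        overall = true_positive_rate(records)
        by_group = defaultdict(list)
        for r in records:
            by_group[r[group_key]].append(r)
        report = {}
        for group, recs in by_group.items():
            tpr = true_positive_rate(recs)
            gap = None if tpr is None or overall is None else abs(tpr - overall)
            report[group] = {
                "tpr": tpr,
                "flagged": gap is not None and gap > tolerance,
            }
        return report
    ```

    A flagged subgroup would not by itself establish bias; it would be a trigger for the kind of investigation and root-cause analysis described in the remediation principle below.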

  3. Remediation of Bias and Inequities. If a potential disparity or bias in Cerner’s AI/ML product is discovered by, or reported to, Cerner, we investigate and notify our clients. We leverage our corrective action process to determine the root cause of the potential disparity or bias, any patient safety or other clinical impacts, and necessary corrective actions. If corrective actions are necessary, we update the model or remove it from production.

  4. Transparency. Cerner embraces transparency to promote trust and confidence in our AI/ML products. We disclose to clients how our AI/ML products use data within their client systems to generate risk scores and alerts. We work closely with stakeholders, data experts and clients to define expectations for transparency and explainability so that when end users see an alert or risk score, they understand why they are seeing it.


1 Examples include formal comments to HHS, the Agency for Healthcare Research and Quality, the White House Office of Science and Technology Policy, and the National Science Foundation.