2023 was a breakout year for generative AI, as companies began to show how it could meet consumer needs in ways nearly as varied as the content it can produce. The excitement is measurable: gen AI adoption in the US has been nothing short of revolutionary by tech standards, outpacing even the smartphone's relative adoption curve.
Venture funding reflects the same momentum. AI-enabled solutions in healthcare led the pack in 2023, with nearly $2.8B in funding across 101 deals (as of Q3 2023), and the pace has not slowed: so far in 2024, over $400M has been invested in AI-enabled U.S. digital health startups.
In other words, we’re at the very forefront of an emerging field of AI applications – applications we can’t even begin to anticipate as more and more resources are directed toward development and integration.
This is an exciting time to be at BrightInsight, a company that strives to transform patient care through digital health solutions. It is also a time for discernment and proactive risk management, as we develop products for patients and clinicians that may be used in clinical decision support and disease management.
Further, at BrightInsight we always have an eye on data privacy and security. Our BrightInsight Platform regularly undergoes rigorous independent verification to ensure conformance with compliance controls, achieving certifications against global standards to earn the trust of our clients and business partners.
BrightInsight applies this same exacting level of accountability to the AI-powered features of our Disease Management Solution. We have developed concrete, industry-specific principles that move past generic buzzwords and guide the proper and effective use of AI in our product development. I am pleased to provide them for your review below and welcome your feedback.
BrightInsight’s AI Principles
- Accountability: Responsibility, safety and respect
- Innovate with accountability. We are responsible to our clients and product end-users for our deployed AI use cases and the results they generate. BrightInsight must have a fundamental understanding of how an underlying large language model (LLM) works and how AI inputs are curated.
- We must be steadfast in designing and implementing appropriate clinical and other requirements for the veracity, consistency and completeness of AI outputs within the overall validation and verification of our products. Risks (such as AI hallucinations) are researched, documented and addressed within a proactive risk management model. In collaboration with our clients, we bring the right scientific rigor to the AI screening process. In short, if patients are to rely on AI in managing their disease, dependence on unknown “black box” elements is unacceptable.
- Respect the rights of patients and societal values in our use of AI, including acknowledging and addressing unjust bias, cultural differentiators, compliance and overall legal norms.
- Patient-centric approach: Put the end-user first
- AI use must center on helping product end-users through the patient journey and in managing their disease. AI deployment should lead to worthwhile outcomes and satisfaction for patients and their caregivers, such as a responsive, personalized patient experience, improved ease of use and streamlined care delivery.
- Patient safety and efficacy cannot be compromised.
- Privacy, security & quality by design: Build a solid foundation for compliant deployment
- We prioritize privacy, security, and quality by design practices at BrightInsight.
- We obtain upfront end-user consent to, and provide end-user control over, AI use (including specific opt-ins as appropriate). We strictly adhere to data minimization requirements and other applicable personal data laws and regulations.
- Deployed security controls and practices must be effective in ensuring data preservation, confidentiality, authenticity, and integrity. We must also properly guard against improper manipulation of AI inputs or algorithms.
- End-user inputs may be used to train LLMs only as consented to and appropriate under the circumstances, and must be properly screened for sensitive personal data before such use.
- The quality of AI results must conform with BrightInsight’s strict Quality Management System.
- Transparency: Dutifully and fully inform on AI use cases and associated risks
- We know our audience and are upfront and clear in communicating all AI use cases to our clients and end-users.
- We recognize in particular that patients cannot be presumed to be AI experts. It is our duty to educate them on the practical implications of each AI use case – what it means to them, what personal data it uses and what to expect (or not expect) from the results.
- We promptly and fully inform clients and end-users of the benefits and limitations of our chosen AI solutions and establish the right expectations for use.
- As appropriate, we communicate the need for independent review and approval by a healthcare professional (HCP) or other qualified user before AI-generated clinical decision support is assumed valid and reliable. We provide appropriate, timely and effective disclaimers at the point of use.
- Active monitoring & feedback: Ensure continued fit-for-purpose use
- Production use of AI must be properly and actively monitored to ensure conformance with requirements and assumptions for safe and effective use.
- We solicit and encourage client and end-user feedback and incorporate learnings in our product development lifecycle.
- Ongoing insight from deployments is critical to selecting and improving the “best fit” AI use cases.