
by Ariane Siegel, BA, LL.B., LL.M.
General Counsel and Chief Privacy Officer
As Chief Privacy Officer and General Counsel at OntarioMD (OMD), my mandate is to stay ahead of critical new digital health tools, legal and regulatory updates, and trends that impact physicians’ responsibilities and risks. Artificial Intelligence (AI) technologies present us with game-changing use cases. Last November, I attended the AI Governance Global Conference, an education and training event hosted by the International Association of Privacy Professionals (IAPP). I would like to share some key takeaways.
- To facilitate innovation, we need to develop tools and guidelines that build trust. In a presentation on technological implementation, J. Trevor Hughes, President and Chief Executive Officer of the IAPP, spoke about lessons we can learn from the introduction of the automobile. Before the advent of automotive brakes, traffic lights and stop signs, it was customary for people to hold signs along roadways warning drivers of impending danger and the need to slow down. With the introduction of brakes and safety regulations, however, drivers gradually began to trust in their vehicles’ safety. Ultimately, cars travelled faster and accidents congested roadways less often. The key takeaway is that trust supports innovation and adoption. Similarly, people need to build their trust in AI technologies and in the use of the data that feeds the algorithms. As a society, we need to build the legal, regulatory, governance and accountability mechanisms that keep users safe and fuel trust. These mechanisms and fail-safes will help us feel confident in our use of these tools and allow us to take advantage of their benefits.
- We need practical ground rules on the use of AI in health care. Federal, provincial, and territorial authorities have launched principles to guide the responsible development and use of generative AI technologies in Canada. There is still important detailed work to be done within the health-care sector to create a practical framework for the implementation of AI. This includes:
- Identifying risk-management, legal-compliance, and governance models (including opportunities for their evolution) to address key considerations.
- Ensuring medical practices align with privacy and data protection regulations such as the Personal Health Information Protection Act (PHIPA) and Personal Information Protection and Electronic Documents Act (PIPEDA).
- Creating policies to address the purpose and intended use of AI technologies.
- Conducting risk assessments and mitigation strategies for concerns related to personal health information (PHI) and data privacy, security, bias, and unintended consequences.
- Engaging patients and caregivers to understand and address their concerns and establish mitigation strategies.
- Implementing data governance practices to protect PHI accessed by AI and defining data ownership and access/management controls.
- Creating patient, clinic staff, and clinician buy-in through trust, and developing training programs that incorporate best practices and external standards.
- We need to find the right balance between progress and ethical responsibility. AI technologies are transforming industries of every kind. They require implementation guidelines that align with our own economic, legal, and ethical considerations. It will be important to strike the right balance: a framework that supports the promise of new technologies and innovation while managing their risks.
- AI technology cannot replace human interaction between physicians and patients. AI technologies are not a substitute for physicians, but rather tools that can complement their decision-making and enhance their roles. According to one study, employing AI increased the amount of time physicians spend with patients by alleviating the administrative workload associated with documentation. Another study suggests that the use of AI contributes to the increased efficiency of patient visits. Overall, the use of AI may improve interaction with patients by supporting more meaningful engagement to provide better care.
- Patients should be engaged in the use of AI in their care. The discussions surrounding AI are replete with both positive and negative opinions. Investigating patient perspectives on its application and value in delivering care may enhance health outcomes and foster its acceptance. Within Ontario’s Trustworthy Framework and the Office of the Privacy Commissioner of Canada’s principles on generative AI technologies, transparency and openness play pivotal roles in establishing trust and confidence in the use of data-enhancing technologies. Alongside obtaining informed consent for AI usage during medical visits, clinicians should communicate the reasons for its use and assure patients of secure data management and confidentiality.
- Human oversight and diverse engagement will promote healthy AI practices. There are critical concerns regarding the use of AI in health care, specifically related to privacy, surveillance, bias, discrimination, and the potential for human rights violations. Effectively managing these risks will require open dialogue and collaboration among physicians, patients, advocates, regulators, technology innovators, policymakers, and vendors. This includes implementing audit mechanisms, evaluating AI usage, establishing structures for clear and transparent reporting, and developing technical solutions that prioritize safety, privacy, equality, fairness, and transparency.
AI technologies offer promising opportunities to improve our health-care system by helping to reduce physicians’ administrative burden, assisting in the diagnosis of medical conditions, facilitating open communications with patients, and improving patient outcomes. Diverse stakeholders who align and collaborate to embrace the opportunities of technological innovation while practically managing its risks will help chart a workable path for implementation and adoption. This path must be designed in accordance with the economic, social, legal, and ethical framework Ontarians value, and must leverage the critical trust that underlies the physician-patient relationship. In the months ahead, OMD, together with the Ontario Medical Association, will work to tackle some key AI-related issues to ensure our health-care system is well-positioned to leverage innovation.
If you are interested in learning more, join me at the next OMD Educates webinar on Feb. 28, from noon to 1:00 pm, where I will discuss the potential impact of AI on physicians’ practices and other critical privacy and security issues in primary care in 2024. OMD is committed to helping clinicians navigate the evolving privacy and security landscape to optimize the security and efficiency of their practices.
