
In the complicated corridors of Canadian jurisprudence, negligence claims remain one of the most common avenues for holding professionals accountable. Their sway is nowhere more profound than in domains where human health and well-being hang in the balance, such as dentistry. This legal scrutiny involves a tripartite test assessing duty of care, breach of the standard of care, and causation. These elements collectively determine whether an individual’s actions or failures have unjustly harmed others. Recently, dentistry has undergone a transformative integration of Artificial Intelligence (AI), offering improvements in diagnostic precision and treatment efficiency. However, this technological leap introduces complex challenges, including diagnostic inaccuracies, questionable treatment recommendations, privacy concerns, changes in informed consent dynamics, and the potential for algorithmic bias.
These emerging issues highlight the urgent need for an evolved legal and ethical framework that not only comprehends but also adeptly navigates the intricacies introduced by AI advancements. As AI reshapes dental practices, it tests the traditional boundaries of negligence law and prompts a reevaluation of how duty of care and causation are conceptualized within this new context. This essay explores the intersection of AI and negligence in dentistry, delving into how AI’s application challenges established legal norms and what this means for maintaining patient safety and professional integrity. Against the backdrop of increasing dental negligence claims in Canada, this analysis seeks to illuminate the complexities of adapting legal principles to the digital age of dentistry, advocating for a regulatory approach as dynamic and innovative as the technologies it seeks to govern.
Success produces confidence; confidence relaxes industry, and negligence ruins the reputation which accuracy had raised.
— Samuel Johnson
Duty of Care in The Age of AI
The duty of care principle is a fundamental aspect of negligence law, obliging individuals and entities to act in ways that prevent foreseeable harm to others. In the landmark Cooper v. Hobart [2001] decision, the Supreme Court of Canada affirmed the well-known Anns/Cooper test, establishing a methodological approach to determining the existence of a duty of care through assessing proximity and the foreseeability of harm. This judicial framework is crucial when considering AI technologies in dentistry, where the potential for harm must be meticulously evaluated and mitigated.
AI’s application in dental diagnostics and treatment planning introduces a novel paradigm in which traditional patient-dentist interactions are augmented or even replaced by algorithms. The question then arises: how do we apply the concepts of foreseeability and proximity in this new context? By their nature, AI technologies process vast amounts of data at speeds and complexities beyond human capability, potentially identifying risks and recommending treatments with unprecedented precision. However, this capability also introduces risks of misdiagnosis or treatment errors due to algorithmic biases or data inaccuracies, thereby challenging the boundaries of foreseeable harm. The dentist remains ultimately responsible for interpreting patient data and treating the patient. However, since the technology developer stands to gain the most economically from the technology’s use, the developer should also bear an additional layer of responsibility and risk.
Breaching the Standard of Care with AI
The standard of care, traditionally evaluated against the actions of a ‘reasonable person’ with similar expertise, enters uncharted territory with the incorporation of AI in dental care. Dental and medical professionals must apply their knowledge and instruments carefully. However, when those instruments include AI systems, determining what constitutes a breach of the standard of care becomes complex. If an AI diagnostic instrument overlooks a critical condition that a competent dentist would have caught, does the reliance on AI technology lower the standard of care, or does it represent a failure to apply it appropriately?
Causation and Harm: The AI Complexity
Establishing causation in AI-driven dental care necessitates proving that harm would not have occurred ‘but for’ the application of AI technology. This aspect of negligence law, exemplified in Clements v. Clements [2012], highlights the challenge of establishing causation in scenarios where AI recommendations play a significant role. The complexity is magnified by the opaque nature of some AI decision-making processes, which are not always transparent or understandable to practitioners and patients alike. Furthermore, integrating AI in dentistry involves not only diagnosing and treating known conditions but also predicting potential future health issues based on data analysis. This predictive capability raises questions about the extent of a dental professional’s duty to act on AI-generated insights and the legal implications of failing to prevent harm that AI technologies forecast.
In practice, establishing this causal link requires a multifaceted approach: understanding AI algorithms, meticulous documentation, comparative analysis, expert testimony, data analysis, attention to legal precedents, regulatory compliance, and continuous monitoring. Dentists and legal experts must comprehend how AI algorithms function and keep detailed records of AI-driven processes and patient outcomes. Comparative analysis between cases managed with and without AI recommendations, supported by expert testimony, can help elucidate the causal relationship. Data analysis techniques and adherence to regulatory standards further support the establishment of causation, while continuous monitoring of AI outputs ensures ongoing assessment and refinement of AI systems to optimize patient care and safety.
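The comparative analysis described above can be sketched in code. Everything below is hypothetical and purely illustrative: the cohorts, the `misdiagnosed` field, and the rates are invented, and a real audit would draw on the documented patient outcomes the paragraph calls for.

```python
# Hypothetical sketch: compare misdiagnosis rates between cases handled
# with and without AI assistance. All records below are invented.

def misdiagnosis_rate(cases):
    """Fraction of cases where the recorded diagnosis was later found wrong."""
    return sum(1 for c in cases if c["misdiagnosed"]) / len(cases)

# Toy cohorts; in practice these come from meticulous clinical records.
ai_assisted = [
    {"misdiagnosed": False}, {"misdiagnosed": True},
    {"misdiagnosed": False}, {"misdiagnosed": False},
]
unassisted = [
    {"misdiagnosed": True}, {"misdiagnosed": True},
    {"misdiagnosed": False}, {"misdiagnosed": False},
]

rate_ai = misdiagnosis_rate(ai_assisted)
rate_no_ai = misdiagnosis_rate(unassisted)
print(f"AI-assisted: {rate_ai:.2f}, unassisted: {rate_no_ai:.2f}")
print(f"Naive difference associated with AI use: {rate_no_ai - rate_ai:+.2f}")
```

A comparison this simple cannot by itself prove ‘but for’ causation — confounders, case mix, and sample size all matter — but it illustrates the kind of quantitative record that documentation and continuous monitoring make possible.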
Navigating Legal Challenges with AI in Dentistry
As AI technologies become increasingly prevalent in clinical settings, healthcare practitioners and technologists must thoroughly understand and address their legal ramifications. This necessity underscores a broader dialogue between law and technology, aiming to reconcile rapid technological advancements with established legal principles. Cases like Bolton v. Stone, which examines the reasonable foreseeability of harm, and Hughes v. Lord Advocate, which delves into the reasonably foreseeable consequences of a breach of duty, provide essential legal precedents for understanding how traditional negligence principles apply in the AI context.
Integrating AI in dentistry presents a unique set of legal challenges that necessitate a nuanced understanding and application of negligence law. By examining critical cases and legal theories, this discussion underscores the imperative for an evolved legal framework that can effectively address the complexities introduced by AI technologies. This framework must ensure the safety and efficacy of AI applications in dental care, balancing the potential benefits of technological advancements with the need to prevent harm and uphold professional standards. As law and technology intersect, the goal remains clear: to foster an environment where innovation thrives within the bounds of patient welfare and professional accountability.
There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors.
— Tony Hoare
Diagnostic Errors
AI algorithms are increasingly used in dentistry to analyze dental images to detect cavities, periodontal diseases, and oral cancers. For example, AI-powered software can analyze dental X-rays to identify areas of concern, potentially assisting dentists in making more accurate diagnoses.
AI in dentistry also has profound legal implications, potentially leading to negligence claims with significant ramifications for practitioners and patients. For instance, imagine an AI-driven diagnostic tool erroneously labelling a benign lesion as malignant, precipitating unnecessary surgical intervention and causing undue emotional distress and physical trauma. This scenario underscores the duty of care, as established in Donoghue v. Stevenson [1932] AC 562, where the obligation to avoid acts or omissions likely to cause harm to others was firmly established.
Moreover, the foreseeability of harm from diagnostic errors introduces a nuanced challenge, as delineated in Palsgraf v. Long Island Railroad Co. [1928] 248 NY 339, which elucidates the scope of foreseeability in determining liability. This principle is critical when assessing the impact of AI, necessitating a re-examination of what can be reasonably foreseen in light of technological advancements.
The mental injury resulting from diagnostic inaccuracies further complicates the legal landscape of AI diagnostics, as highlighted in Saadati v. Moorhead [2017] 1 SCR 543. This case broadened the understanding of harm and included psychological impacts. It underscored the need for rigorous vetting of AI tools for their potential to cause mental distress alongside physical harm.
In ensuring the reliability and accuracy of AI in healthcare, the principles laid out in Clements v. Clements [2012] 2 SCR 181 on the “but for” test for causation become critical. Dental professionals integrating AI into diagnostics must navigate this intricate legal terrain, ensuring technologies are accurate and reliable to maintain the duty of care and mitigate potential harm. This balance between innovation and patient safety calls for a continuous evaluation of AI tools against established legal standards, ensuring the welfare of patients remains at the forefront of dental practice in the age of AI.
Treatment Recommendations
AI systems are also employed to assist dentists in developing personalized patient treatment plans. These systems analyze patient data, including medical history and diagnostic images, to recommend appropriate treatment options. Integrating AI into treatment recommendation processes introduces a multifaceted layer of complexity within dentistry, impacting the landscape of negligence law. Envision an AI system proposing an aggressive treatment regimen for a condition traditionally managed through conservative means, potentially placing the patient at undue risk of harm. This is currently the case with caries detection, where algorithms tend to overestimate dental decay, leading to overtreatment and patient harm. This scenario underscores the legal standard of care, as highlighted in landmark decisions like Ter Neuzen v. Korn [1995] 3 SCR 674, obliging healthcare providers to anchor their treatment advice in well-established medical standards and a comprehensive assessment of associated risks and benefits.
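The overdiagnosis tendency noted above is often a matter of where a model’s decision threshold is set. The sketch below is hypothetical — the scores, ground-truth labels, and thresholds are invented for illustration — but it shows how a low operating threshold converts borderline scores into flagged surfaces, and false positives into overtreatment.

```python
# Hypothetical sketch: a caries-detection model emits a score per tooth
# surface; the operating threshold determines how many sound surfaces
# get flagged for treatment (false positives => overtreatment).

# (score, truly_carious) pairs -- invented values for illustration.
predictions = [
    (0.95, True), (0.80, True), (0.65, False),
    (0.55, False), (0.40, False), (0.20, False),
]

def flagged_for_treatment(preds, threshold):
    """Surfaces whose model score meets or exceeds the threshold."""
    return [(s, carious) for s, carious in preds if s >= threshold]

for threshold in (0.5, 0.7, 0.9):
    flagged = flagged_for_treatment(predictions, threshold)
    false_pos = sum(1 for _, carious in flagged if not carious)
    print(f"threshold={threshold}: {len(flagged)} flagged, "
          f"{false_pos} sound surfaces would be overtreated")
```

The design point is that the threshold is a clinical policy choice, not a neutral technical default: a vendor tuning for sensitivity shifts the overtreatment risk onto the patient, which is precisely where the standard-of-care analysis bites.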
This potential breach is further complicated by principles elucidated in Athey v. Leonati [1996] 3 SCR 458, which delineates methodologies for apportioning losses between tortious and non-tortious causes. When AI-recommended treatments intersect with pre-existing conditions or external factors, discerning their contribution to patient harm becomes imperative. This resonates with the principles presented in Sunrise Co. v. Lake Winnipeg [1991] 1 SCR 3, which addresses the apportionment of loss between successive causes.
When AI-generated recommendations diverge significantly from medical standards without adequate rationale or neglect individual patient profiles, it raises concerns of breaching the standard of care, potentially constituting negligence. Such breaches, leading to patient harm, may invite legal scrutiny and potential liability for healthcare professionals involved, emphasizing practitioners’ need to thoroughly inform patients about treatment risks and benefits, as underscored in Hopp v. Lepp [1980] 2 SCR 192.
Moreover, Snell v. Farrell [1990] 2 SCR 311 delves into the causation aspect of medical negligence, necessitating a demonstration of how a practitioner’s deviation from the standard of care results in harm. This aspect becomes particularly salient when adverse outcomes are associated with AI’s treatment suggestions, emphasizing the critical need for healthcare professionals to assess AI-generated advice rigorously and, if necessary, challenge it.
As AI’s role in shaping treatment recommendations becomes increasingly prominent within dentistry and broader healthcare contexts, aligning these technological advancements with established legal standards of care is paramount. This alignment ensures that healthcare providers prioritize patient safety while capitalizing on AI’s potential benefits, fulfilling their legal and ethical obligations amidst evolving medical practices.
Sometimes, giving up your privacy is a little like going to the dentist and we have let him have access that no one’s ever had.
— Tom Petty
Privacy Breaches and Bill C-27
Integrating AI technologies in dentistry, while heralding advancements in patient care and operational efficiency, simultaneously unveils a complex maze of challenges associated with ensuring patient privacy. The digitization of patient records, combined with AI’s pivotal role in diagnostics and treatment planning, markedly amplifies the potential for data breaches, posing a considerable risk to the sanctity of sensitive patient information. This risk is not merely theoretical but is underscored by scenarios in which AI systems, compromised due to lax security protocols, facilitate unauthorized access to patient data. Such breaches starkly contravene legal obligations designed to protect patient information, precipitating a significant erosion of trust between patients and healthcare providers. The Hopkins v. Kay, 2015 ONCA 112 decision emerges as a pivotal legal benchmark, elucidating the repercussions of failing to protect personal health information.
Bill C-27 and the AIDA (Artificial Intelligence and Data Act) are poised to have significant implications for dentistry. They will shape how technology, particularly AI, is integrated into dental practices and patient care.
1. Regulatory Compliance: Bill C-27 and the AIDA will likely introduce new regulatory requirements and compliance standards for dental practices utilizing AI technology. Dentists and dental clinics must ensure that their use of AI complies with the specific regulations outlined in these bills, which may include data protection measures, transparency requirements, and guidelines for AI-driven diagnostics and treatment planning.
2. Data Security and Privacy: Data security and privacy are both key focuses for Bill C-27 and the AIDA. As dental practices increasingly rely on AI systems to process and analyze patient data for diagnostic and treatment purposes, ensuring the confidentiality and integrity of patient information will be paramount. Dental professionals must implement robust data security measures to protect patient privacy and comply with the requirements outlined in these bills.
3. Quality of Care: Bill C-27 and the AIDA may also impact the quality of care dental practices provide. By leveraging AI technology, dentists can enhance diagnostic accuracy, treatment outcomes, and patient experience. However, adherence to the regulations outlined in these bills will be essential to ensure that AI-driven care maintains high standards of quality, safety, and efficacy.
4. Ethical Considerations: Introducing AI into dentistry raises ethical considerations regarding patient autonomy, consent, and the responsible use of technology. Bill C-27 and the AIDA may include provisions addressing these ethical concerns, such as requirements for informed consent for AI-driven procedures and transparency about the role of AI in patient care. Dental professionals will need to navigate these ethical considerations to uphold the trust and confidence of their patients.
5. Professional Development and Training: As AI becomes more prevalent in dentistry, dental professionals must acquire the necessary skills and training to leverage these technologies effectively. Bill C-27 and the AIDA may allocate resources for professional development programs and training initiatives to educate dentists and dental staff on using AI in patient care.
Overall, Bill C-27 and the AIDA represent a significant step forward in regulating the use of AI in dentistry and prioritizing patient safety, privacy, and quality of care. By complying with the regulations outlined in these bills and addressing the associated challenges and opportunities, dental practices can harness AI’s transformative potential while upholding their commitment to patient well-being.
Informed Consent
In the evolving landscape of dentistry, where AI increasingly plays a pivotal role in diagnostics and treatment planning, the doctrine of informed consent encounters complex new dimensions. As traditionally understood, informed consent mandates a comprehensive disclosure to patients about the nature of their treatment, encapsulating potential risks and viable alternatives. This principle is enshrined in landmark cases such as Reibl v. Hughes [1980] 2 SCR 880, which underscores the necessity for patients to be thoroughly informed to make autonomous healthcare decisions, establishing a baseline for informed consent that becomes even more critical when AI’s role in treatment is introduced.
However, integrating AI into clinical decision-making processes adds layers of complexity to this already intricate duty. For instance, a scenario where a patient agrees to undergo a treatment recommended by an AI system without fully grasping the AI’s operational intricacies or limitations could compromise informed consent’s integrity. The case of Ciarlariello v. Schacter [1993] 2 SCR 119 extends the discourse on informed consent to include the requirement that patients must understand the immediate treatment and the broader context and means through which diagnoses and recommendations are made, including any AI involvement.
Furthermore, Clements v. Clements [2012] 2 SCR 181 elaborates on ‘but-for’ causation and material contribution to harm, an aspect that is particularly pertinent to AI-informed consent. This principle necessitates a clear explanation of how AI recommendations could materially contribute to treatment outcomes, ensuring patients understand the causal links between AI-driven advice and their care. Moreover, the application of AI in healthcare necessitates expanding the dialogue around consent to encompass data privacy concerns, as highlighted in Hopkins v. Kay, 2015 ONCA 112. While primarily addressing privacy breaches, this case indirectly emphasizes the importance of informed consent in the context of data handling and processing—an area where AI systems are extensively involved.
Additionally, introducing AI technologies, such as Scribeberry, challenges the traditional paradigms of patient-physician interactions, as discussed in Malette v. Shulman (1990), 72 OR (2d) 417 (Ont CA). This case, which dealt with the nuances of informed consent in emergency medical contexts, underscores patients’ fundamental right to be the architects of their medical care. This principle gains additional significance when AI systems influence treatment pathways. Patients may decline to consent to the use of AI in their care due to distrust of the technology or personal preferences.
Thus, as AI becomes more ingrained in dental practices, it is paramount to ensure that patients are adequately informed about how AI tools influence their treatment options and how dentists reach their decisions. This entails explaining the potential risks and benefits of AI-recommended treatments and addressing how data is utilized and protected within these systems. The goal is to foster an environment where consent is not just a procedural formality but a manifestation of patient autonomy and understanding in the age of digital healthcare.
Algorithm Biases
Integrating AI in dentistry introduces novel challenges, notably the potential for algorithmic biases to impact patient care. Algorithmic biases refer to the tendency of AI systems to produce results that systematically disadvantage specific individuals or groups based on factors such as race, gender, or socioeconomic status. This issue raises significant ethical and legal concerns, particularly in healthcare, where fair and equitable treatment is paramount. Landmark cases such as R. v. Oakes [1986] 1 SCR 103, which established the framework for assessing constitutional rights infringements, provide a foundation for understanding the legal principles relevant to algorithmic biases. These cases underscore the importance of ensuring that AI systems do not disproportionately harm specific demographics or perpetuate existing healthcare access inequalities.
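In concrete terms, one common way to audit for the systematic disadvantage described above is to compare a model’s error rates across patient groups. The sketch below is hypothetical — the group labels, outcomes, and the choice of false-negative rate as the metric are all assumptions for illustration:

```python
# Hypothetical bias audit: compare false-negative rates (missed disease)
# across patient groups. All records below are invented.

records = [
    # (group, truly_diseased, model_flagged)
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

def false_negative_rate(records, group):
    """Among truly diseased patients in `group`, the fraction the model missed."""
    diseased = [r for r in records if r[0] == group and r[1]]
    missed = [r for r in diseased if not r[2]]
    return len(missed) / len(diseased)

for group in ("group_a", "group_b"):
    print(f"{group}: FNR = {false_negative_rate(records, group):.2f}")

# A materially higher miss rate for one group is a red flag that warrants
# investigation of the training data and model before clinical deployment.
```

Which fairness metric is appropriate (false-negative rate, false-positive rate, calibration) is itself a normative choice, which is why the legal and ethical frameworks discussed here matter alongside the arithmetic.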
Moreover, the application of AI in dentistry must adhere to the principles outlined in Andrews v. Grand & Toy Alberta Ltd. [1978] 2 SCR 229, which emphasizes the duty of care owed to patients. Dentists and AI developers are responsible for mitigating algorithmic biases that could lead to discriminatory outcomes in diagnosis or treatment recommendations. Additionally, Seneca College v. Bhadauria, [1981] 2 SCR 181 underscores the legal obligation to provide equal opportunities and access to services free from discrimination. This principle extends to developing and deploying AI algorithms in dentistry, necessitating measures to identify and rectify biases that may adversely affect patient outcomes.
Furthermore, the legal framework established in Law Society of British Columbia v. Trinity Western University [2018] 1 SCR 101, which addresses discrimination based on sexual orientation, offers insights into combating biases within AI systems. While not directly related to healthcare, this case highlights the importance of proactive measures to promote inclusivity and prevent discriminatory practices.
As AI technologies become more prevalent in dentistry, it is imperative to address algorithmic biases to ensure equitable patient care. Dentists and AI developers must work collaboratively to identify and mitigate biases, leveraging legal principles and ethical frameworks to uphold the principles of fairness, non-discrimination, and patient-centred care. By doing so, they can help foster a healthcare environment where all patients receive the highest standard of treatment, regardless of their background or demographic characteristics.
In cases where AI algorithms make diagnostic errors or recommend inappropriate treatments, determining liability becomes complex. Dentists may be held accountable for negligence in using or interpreting AI-generated data. At the same time, AI developers or manufacturers could also face liability for producing faulty algorithms or inadequate training data.
The last thing I want my robot to be is sarcastic. I want them to be pragmatic and reliable – just like my dishwasher.
— Sebastian Thrun
Moffatt v. Air Canada and Its Implications
The Moffatt v. Air Canada, 2024 BCCRT 149 decision marks a pivotal moment in legal accountability for digital interactions, specifically addressing the liabilities associated with automated chatbots. This case underlines the evolving legal landscape, where companies can no longer disassociate from the misrepresentations made by their automated systems, including AI-driven chatbots. This development is particularly relevant in negligence law as it intersects with the use of AI in sectors like dentistry, where chatbots are becoming more common on dental websites. It highlights a shift towards greater corporate responsibility for technology-driven interactions.
In Moffatt v. Air Canada, the tribunal’s decision to hold Air Canada liable for inaccurate information provided by its chatbot serves as a precedent for the principle that organizations are responsible for the acts or omissions of their computer systems. The case emerged from an automated chatbot misleading a customer, Jake Moffatt, regarding bereavement fares, leading to financial loss and the broader question of digital misrepresentation. The tribunal’s finding emphasizes that a company’s duty of care extends to ensuring the accuracy and reliability of the representations made by its automated systems. This establishes that using an electronic agent, like a computer program or chatbot, does not absolve an entity from liability resulting from the tool’s actions. This perspective reinforces the notion that automated systems are extensions of the entity that employs them, attributing the intentionality and outcomes of these tools to their human operators or owners.
The legal reasoning in Moffatt v. Air Canada can be extrapolated to using AI in dentistry, where diagnostic tools and treatment recommendation systems play a crucial role. Similar to the chatbot scenario, AI systems in dentistry must adhere to a standard of care that ensures accurate and non-misleading information is provided to patients. This standard is critical in avoiding misdiagnoses, inappropriate treatment plans, or breaches of patient privacy, each of which could lead to significant harm and legal liability for negligence.
The Need for Evolving Legal Frameworks
The broader implications of the Moffatt v. Air Canada decision and the principles outlined in Google Inc. v. Equustek Solutions Inc., 2017 SCC 34 advocate for legal frameworks that evolve with technological advancements. These cases underscore the judiciary’s readiness to extend traditional legal obligations to encompass modern technological interactions, reflecting a growing expectation for entities to exercise due diligence in deploying automated systems.
In the context of AI in healthcare, including dentistry, this evolution calls for legal standards that address AI systems’ reliability, testing, and validation. Future litigation may delve deeper into the methodologies behind AI training and deployment, scrutinizing whether entities have taken sufficient steps to mitigate the risks of inaccurate AI outputs.
The Moffatt v. Air Canada decision is emblematic of a legal shift towards holding entities accountable for the digital tools they employ, setting a precedent that likely extends beyond customer service chatbots to include AI systems across various sectors. For AI in dentistry, this ruling highlights the importance of ensuring that AI-driven diagnostics and treatments meet rigorous accuracy standards to prevent negligence. As legal frameworks continue to adapt, the focus on AI’s role in professional settings underscores the need for a proactive approach in aligning technological practices with legal obligations, ensuring entities are prepared to meet the evolving standards of care and accountability in the digital age.
Navigating AI, dentistry, and negligence law requires a differentiated approach to managing complexity. Innovation brings inherent chaos, challenging legal and ethical frameworks. Regulation should not stifle innovation but provide a structured framework for the safe and ethical use of AI technologies. This requires identifying potential risks to patient safety and privacy, setting rigorous standards for accuracy, reliability, and fairness, and implementing flexible guidelines that adapt to rapid technological advancements. A balanced approach ensures that legal and ethical frameworks evolve with AI, promoting innovation while prioritizing patient care and safety.
Recommendations
Enhanced Training and Education: Dental professionals should receive comprehensive training on using AI technologies, including how to interpret AI-generated reports and recommendations. Continuing education programs could help dentists stay updated on the latest advancements in AI and best practices for integrating AI into clinical practice.
Transparent Documentation and Informed Consent: Dentists should communicate to patients when AI technologies are used in their diagnosis or treatment planning. Patients should be informed about the limitations of AI systems and have the opportunity to ask questions and provide consent before AI-generated recommendations are implemented.
Quality Assurance and Oversight: Regulatory bodies and professional organizations should establish standards for developing, validating, and deploying AI technologies in dentistry. Regular audits and quality assurance measures can help ensure that AI systems meet predefined accuracy and reliability benchmarks, reducing the risk of errors and patient harm.
Collaboration and Interdisciplinary Approach: Collaboration between dental professionals, AI developers, legal experts, and policymakers is essential for developing robust legal frameworks that effectively address AI’s challenges in dentistry. Interdisciplinary research initiatives and forums can facilitate knowledge sharing and the development of consensus-based guidelines for AI integration in dental practice.
Harnessing these strategies and insights, stakeholders stand poised to navigate the dynamic landscape of AI in dentistry, forging a path that not only tames the potential hazards but also unleashes AI’s full potential to revolutionize patient care and outcomes.
Oral Health welcomes this original article.
About the Author

Dr. Peter Fritz is a distinguished periodontist, academic, and business leader with an extensive background in clinical practice, research, and education. He is known for his innovative approaches to patient care and dedication to interdisciplinary collaboration. His leadership in integrating AI and digital technologies into dental practice showcases his forward-thinking vision and commitment to advancing healthcare.

Abdi Aidid is a lawyer and an Assistant Professor at the University of Toronto, researching and teaching the areas of torts, procedure and privacy. With Benjamin Alarie, he is the co-author of “The Legal Singularity: How Artificial Intelligence Can Make the Law Radically Better,” which was released in 2023.