AI is revolutionizing healthcare — but is your patient data safe?

by Tony Anscombe, Chief Security Evangelist at ESET


Artificial intelligence is transforming healthcare before our eyes. A key benefit is faster, more accurate diagnosis, both in routine care and for more serious concerns: examples include 3D dental imaging and computed tomography scans used to examine thyroid tumours.

There are many examples of medical care benefiting from advanced technology that uses AI to assist medical teams, and most share a commonality: they are digital. The digitization of medical treatment, as opposed to just medical records, brings new challenges, as third-party service providers may be involved in processing images and data; where this is handled in-house, the storage and systems used become an extension of patient record data.

Patient records are also likely to be the subject of increased processing by advanced AI algorithms that identify patterns and predict concerns, allowing medical teams to offer preventative rather than reactive care.

But as small and mid-sized medical and oral healthcare practices adopt AI-powered tools, often without the benefit of in-house IT departments, the question remains: How secure is your patient data?

Medical records are among the most valuable assets on the black market. They contain not just names and addresses but full health histories. For cybercriminals, they’re a gold mine. For healthcare providers, a single data breach can mean significant remediation costs and irreparable damage to patient trust.

As AI tools become embedded in everything from appointment scheduling to transcription services and clinical decision support, many providers may not fully understand where patient data is going, how it’s being stored, or who ultimately has access. Healthcare providers must consider safety protocols and scenarios: Are AI systems handling sensitive data in secure, encrypted environments? What happens to patient information if you change service providers or cloud platforms? Could AI tools inadvertently leave fragments of data unprotected across multiple systems?

These aren’t abstract concepts – they’re real concerns that every healthcare provider, regardless of size, must address.

The key is not to fear AI, but to approach its adoption with clear-eyed diligence. AI can be a powerful ally in improving healthcare efficiency and accuracy, but it must be implemented thoughtfully and securely.

  1. Ensure any AI solution complies with healthcare regulations like HIPAA. Find out how and where data is stored, encrypted, and accessed, and get those answers in writing. Using a system where data is located in a different jurisdiction may require additional consideration to meet health and privacy legislation in your location.
  2. Understand data ownership. When switching AI providers, clarify how and when your patient data will be deleted or transferred, and whether it can be done securely. Audit providers regularly to ensure they are fulfilling their contractual obligations.
  3. Limit data sprawl. Many practices use multiple tools for scheduling, records, and communication. Ensure data isn’t duplicated or left vulnerable across disconnected platforms. If patients have the convenience of using an app for such tasks, ensure the app provider understands the privacy requirements for your location and that the app is regularly updated.
  4. Train your staff. The best security tools are only effective if the people using them understand basic cybersecurity hygiene, especially when handling AI outputs.
  5. Build cybersecurity into your AI strategy. Don’t treat security as an afterthought. It should be central to your digital transformation.

The promise of AI in healthcare, including oral health, is undeniable. It has the potential to revolutionize how providers deliver care and streamline workflows. But innovation without caution can be costly. As AI tools become more deeply embedded in day-to-day operations, often behind the scenes, it can be easy to overlook the cybersecurity implications until it’s too late.

Every healthcare provider, whether in a large hospital or a small dental practice, has a responsibility to ensure that the tools they adopt are effective and secure. That means asking tough questions, setting clear expectations with vendors, and building a culture where cybersecurity is everyone’s job, not just IT’s.

Patient trust is hard-won and easily lost. By approaching AI adoption with equal parts enthusiasm and vigilance, providers can protect that trust while still embracing the future of care.


Tony Anscombe is the Chief Security Evangelist for ESET, an industry-leading IT security software and services company for businesses and consumers worldwide. With over 20 years of security industry experience, Anscombe is an established author, blogger and speaker on the current threat landscape, security technologies and products, data protection, privacy and trust, and Internet safety. His speaking portfolio includes industry conferences such as RSA, CTIA, MEF, GlobSEC and Child Internet Safety (CIS). He has been quoted in security, technology and business media, including BBC, the Guardian, the Times and USA Today, with broadcast appearances on Bloomberg, BBC, CTV, KRON and CBS. Anscombe has served on the boards of MEF and FOSI and held an executive position with the Anti-Malware Testing Standards Organization (AMTSO).