ChatGPT, the OpenAI chatbot capable of writing text and responding to conversation prompts, has been generating buzz since its launch in 2022. It has spawned multiple iterations and been heralded as a technological marvel that will transform entire industries, the way we work, and even how we go about our daily lives.

While we have yet to realize the full power (for better or worse) of ChatGPT and similar programs, it is already impacting healthcare, and cardiovascular medicine in particular.

The Growth of AI in Cardiology Care

The field of cardiology has been at the forefront of embracing AI medicine. Of the more than 500 clinical AI algorithms approved for use by the U.S. Food and Drug Administration, 58 are being used in the area of cardiology; only radiology has more.

Most of these are for use in imaging, including CT, MRI, nuclear imaging, and cardiac ultrasound. Additionally, many of the radiology-specific algorithms are used in cardiovascular and peripheral vascular imaging.

By training the AI on numerous images and the accompanying diagnostic data, the algorithms are helping to ensure consistency in image measurements and reduce variability in image readings between technicians. Researchers at the Mayo Clinic are in the process of developing AI that looks at ultrasound images to detect disease by identifying “radiomic signatures” in the image that may not be evident to the human eye.

With AI acting as an “extra set of eyes” in image reading, cardiologists have additional data-backed support that can help save lives.

How AI Chatbots Are Revolutionizing Patient Engagement and Doctor Workloads

While AI is already assisting in diagnostic imaging, AI-powered chatbots have the potential to assist doctors in their day-to-day tasks and impact the doctor-patient relationship through enhanced consultations and improved communication. This could save doctors valuable time and better engage patients to be proactive in their own healthcare.

ChatGPT’s use of natural language processing (NLP) is what enables it to understand conversations and write text. By analyzing a patient’s medical history, lifestyle, symptoms, and conversations with their doctor, ChatGPT has the potential to provide personalized treatment recommendations and diagnostic advice to cardiologists.

This ability to process large amounts of information can save doctors time, helping them reach diagnostic decisions faster. It also has potential time-saving applications in administrative tasks and in streamlining doctor-patient communication.

In an experiment by cardiologists from the Cleveland Clinic and Stanford University, ChatGPT showed potential as a way to “assist clinical workflows by augmenting patient education and patient-clinician communication” in answering questions about cardiovascular disease prevention.

This could provide doctors with a readily available question-and-answer list to educate their patients, even before an appointment. Paired with a patient’s health data and current symptoms, it could then be used to personalize those questions and answers, better facilitating the patient’s consultation with their doctor. That saves time for both and gets to a diagnosis and treatment plan faster.

A similar experiment by researchers at the University of Amsterdam also concluded that “the true potential of AI in healthcare lies in its ability to transform clinical workflows by automating routine and time-consuming tasks. This not only improves efficiency and productivity, but also frees up healthcare providers to focus on more critical and complex tasks, where AI can serve as a valuable support tool.”

Challenges to Implementing AI Chatbots in Cardiology Care

This amazing diagnostic and time-saving potential is not without concerns, particularly around information accuracy and patient privacy.

Apart from their own knowledge, the team of cardiologists testing ChatGPT’s question-answering capabilities had no way to verify the accuracy of the information it provided, as “the AI tool’s responses did not include references to evidence to support any statements.”
The algorithm learns from the information it is fed and accesses, meaning the accuracy of its responses and recommendations depends on the accuracy and quality of the data being used. Without a reference to a verified source, doctors can be left scrambling to verify the information (the antithesis of saving them time), or doubting whether it should be used at all.

There are also concerns around whether chatbots are accessing and training on the most current information. With constant innovation and new research being published in the field of cardiology, there is a high likelihood that a chatbot is working with outdated information.

In the Cleveland Clinic/Stanford University study, ChatGPT provided a response with outdated information on the availability of a heart medication. If the model was fed information that is now out of date, there is a risk of it sharing that outdated information with a patient, especially if the patient is using ChatGPT (or a similar chatbot) for their own research, without the oversight of a doctor.

In the study from the Netherlands, the chatbot struggled when asked to provide an “expert-level consultation,” with the accuracy of answers varying by topic and “half of the responses being incomplete, inconclusive or just flat-out incorrect.”

These findings underscore that while it has great potential to assist knowledgeable physicians, a chatbot is no replacement for an actual “consultation” with a doctor. Unlike the FDA-approved AI algorithms, ChatGPT (and similar AI chatbots) are not specifically designed to be a healthcare resource. Patients would be wise to keep that in mind before turning to a chatbot to diagnose their heart issues.

However, there seems to be little cause for concern there, as patients indicate they are reluctant to turn to ChatGPT or any AI for their healthcare. A recent Pew Research poll found that 6 in 10 American adults feel “uncomfortable” with the use of artificial intelligence in their healthcare and would not want their doctor using it to provide treatment recommendations. Their top concerns are the privacy and security of their health information.

The chatbot collects and stores any data and information typed into it; that is partly how it “learns” to provide the responses a user wants. Any information entered is visible to the AI training staff and to a number of “entities” ChatGPT shares its data with. Currently, ChatGPT (like any chatbot) is under no obligation to be HIPAA compliant. The ChatGPT privacy policy explains that if you intend to enter the personal data of others into the chat, you need to provide those individuals with adequate privacy notices.

Patients need to be careful about any personally identifiable information they may share, and doctors need to take precautions and give the appropriate notices when entering patient information into any chatbot.

While the AI chatbot revolution is coming to cardiology care, both doctors and patients need to be aware of how it is being used, thoughtful in how it is implemented, and clear in communicating the expectations and potential for how it can help improve heart health and care.