Could AI provide a prescription to treat an overtaxed health-care system?
Back in early 2020, Mahshid Yassaei found herself deeply worried about the burnout levels of health-care workers — and this was before COVID-19 even hit. She’d just spent six months helping the Public Health Agency of Canada with an information platform, and she’d heard plenty of stories of exhaustion from physicians and nurse practitioners. “It was shocking to me that they were spending 15 hours a week just on data entry,” Yassaei says. “These are some of the most intelligent, passionate people in the country, and they’re losing so much of their time to admin tasks.”
That realization spurred a new invention: Tali AI, an AI assistant that can record and summarize doctor-patient conversations, facilitate medical searches and send off prescriptions. (Think of it as a personal secretary, quietly handling all of the details.) Four years ago, when Yassaei, as CEO, first approached investors about Tali, she tended to hear back: “Sure, the problem is obvious, but can you actually do this?” But improvements in generative AI (the type of artificial intelligence that can produce a boatload of content from fairly simple prompts) have been fast, and they’ve been considerable. Now, Yassaei says investor response is more likely to be: “OK, how are you different from the 100 other companies doing this?”
As the pace of generative AI development has accelerated, so have its promises: virtual assistants will make you more productive, more creative, better informed, better entertained. One Microsoft exec said of its version, Copilot, “you won’t be able to imagine your life without it.”
For some, that might sound ominous, but for care providers navigating a system stretched to the max, it holds the promise of restoring crucial work-life balance and offering an alternative to leaving the profession altogether. In Ontario alone, 65 per cent of GPs anticipate closing (or dramatically reconfiguring) their practice in the next half-decade, according to a 2024 analysis from the Ontario College of Family Physicians.
Outsized administrative burdens play a role in this exodus, says Hassaan Ahmed, the CEO of Phelix.AI, which has developed software tools for health-care organizations. In the past, he notes, clinics would simply hire more staff to lend a hand, “but in recent years, especially post COVID, margins are thinner than ever, and operating expenses keep soaring. This creates a vicious cycle where those who are left are expected to do more with less, creating even more admin burden and burnout.”
Last summer, the federal government announced that it would be directing more than $15 million in funding toward the Health Compass II project, a Vancouver-based hub for AI in health care. Tali AI is one of five ventures included in that hub, which François-Philippe Champagne, Canada’s minister of innovation, science and industry, described as a “game-changer.” It could certainly change the outlook for doctors and nurses across the country struggling to manage paperwork without sacrificing the time they allocate for patient visits. Canada Health Infoway’s 2024 National Survey of Canadian Physicians found that more than 44 per cent of respondents experienced symptoms of burnout — and more than half regularly spend two or more hours outside of their regular workday tackling administrative tasks related to their job. In this context, a virtual assistant could be a real-world lifeline.
AI 101: A virtual assistant primer
Thirteen years ago, when Apple unveiled its iPhone 4S, there was something new tucked into the software: Siri, a no-nonsense assistant that could read your texts, convert measurements on the fly and recommend nearby restaurants. Since then, chatbots have moved from primarily saying things to us to increasingly doing stuff for us. Powered by large language models (or LLMs), which recognize and generate text based on gigantic data sets taken from the internet, these assistants can perform a variety of tasks, says Frank Rudzicz, a faculty member at Toronto’s Vector Institute. “You can ask it to make a report, come up with code, write poetry, pass the bar exam, tell you where to go this weekend — and it does a pretty good job.”
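The machinery is less mysterious than it sounds: under the hood, most of these assistants come down to an application sending a model some instructions plus a request and getting generated text back. Here’s a minimal sketch, assuming the OpenAI Python client; other providers differ in the details, and the model name is just an example:

```python
# A minimal sketch of how an app hands a task to an LLM "assistant".
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Other providers vary, but
# the shape (instructions + request in, generated text out) is the same.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in your provider's offering
    messages=[
        {"role": "system", "content": "You are a concise scheduling assistant."},
        {"role": "user", "content": "Summarize this meeting in three bullet points: ..."},
    ],
)

print(response.choices[0].message.content)
```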
Unsurprisingly, the biggest tech players are making the biggest splash. OpenAI’s ChatGPT has become practically synonymous with chatbots: since its launch in November 2022, it has proved to be not only the fastest-growing virtual assistant but the fastest-growing application of all time, hitting 100 million weekly users in less than a year. Microsoft built Copilot off of OpenAI’s GPT-4, while Toronto-based Ada, a 2021 unicorn that provides customer service automation for the likes of Verizon and Square, said it would partner with OpenAI as well. Cohere, another Toronto-based generative AI startup, has developed a ChatGPT competitor for enterprise use that could help the company raise $500 million from investors. Earlier this year, Google combined its two main AI labs, DeepMind and Google Brain, to design a new LLM to power the digital assistant Gemini, touted as a significant improvement on its somewhat lacklustre chatbot, Bard.
Broadly speaking, helper bots have an impressive range: this tech has been used to edit a Sundance documentary on Brian Eno and can mix a cocktail while engaging in sympathetic bartender banter. In a medical setting, they could even serve as an extra set of hands — a skilled robot can learn how to sew a tidy row of stitches on a simple wound. But mostly, virtual assistants dispense with more mundane, annoying tasks. AI can help with the drudgery of summarizing a meeting; it can tackle programming jobs and dump data into a spreadsheet; it can fill out insurance claims and input invoicing information into accounts payable software. And it can help employees answer the sort of human resources questions — Does my dental plan cover this procedure? How do I get these expenses reimbursed? — that can take a shockingly long time to actually resolve.
There are certain scenarios in which a virtual assistant’s particular skillset surpasses that of its human counterparts. As Yassaei notes, Tali AI was developed with an aim to “improve the quality and accuracy of clinical notes” by documenting a visit in real time, avoiding the risk of details getting lost or distorted through hasty transcription and allowing clinicians to focus on actively listening to patients. Phelix’s AI-powered “inbox agent,” on the other hand, functions much like a hyperefficient air-traffic controller for inbound and outbound communication, categorizing faxes, generating digestible summaries, directing files for review to the appropriate person, performing patient searches and generating charts. The platform, says Ahmed, was “built from the ground up to be generalized for most health-care use cases, so it’s able to be implemented out of the box without custom training” for many tasks, although the company does train custom models using information unique to each clinic, such as its naming conventions.
“In one instance, we helped a clinic streamline 4,000 document types down to 300, which we then used to train a custom classifier with close to 90 per cent accuracy,” he says. “It’s a bit time intensive, but it makes a huge difference.” For now, Phelix’s technology still includes a human as part of the process in most use cases to review and approve the virtual assistant’s work, but Ahmed says this will shift over time.
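Ahmed doesn’t say what model powers that classifier, but the general recipe is well established. Here’s a toy sketch, assuming scikit-learn, with TF-IDF features, a linear classifier and made-up documents and labels standing in for a clinic’s real data:

```python
# A toy sketch of training a document-type classifier, in the spirit of the
# approach Ahmed describes. Phelix hasn't published its method; this uses
# scikit-learn's TF-IDF features and logistic regression as a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text extracted from incoming faxes, each
# tagged with one of the clinic's consolidated document types.
docs = [
    "Referral for cardiology consult re: atrial fibrillation",
    "Lab results: HbA1c 7.2%, fasting glucose elevated",
    "Prescription renewal request for metformin 500mg",
    "Referral to orthopedics for knee MRI follow-up",
]
labels = ["referral", "lab_result", "rx_renewal", "referral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(docs, labels)

# Route a new incoming fax based on its predicted type.
print(model.predict(["Lab results attached: fasting glucose back to normal"]))
# expected: ['lab_result']
```

A real deployment would train on thousands of labelled faxes per document type and measure accuracy on held-out data before letting the model route anything on its own.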
Benefits and caveats
There are tangible benefits to adopting this technology. In a report released in February 2024, the Canadian Centre for the Innovation Economy suggested generative AI could add nearly two per cent to Canada’s GDP; Boston Consulting Group estimated last year that the annual productivity gains in the U.S. public sector alone could total US$1.75 trillion by 2033. But helper bots can also bolster overall job satisfaction. “When employees offload the tasks that are a burden, and automate all those hours spent on monotonous things, they’re freed up to focus on the more interesting stuff,” Rudzicz says.
Right now, virtual assistants are pretty handy at responding to commands, but they haven’t proven themselves to be proactive yet. In the early 2000s, when Rudzicz was getting started, “the promise of AI was that these agents would be working on your behalf, telling you to work on this today or booking time in your schedule for that,” he says. “We haven’t seen that kind of thing yet.”
Accuracy is another sticking point. Chatbots still show a troubling tendency to “hallucinate,” which is a gentler way of saying that they flat-out make up facts. For months, Air Canada argued that it wasn’t responsible for a fake refund policy that its chatbot cobbled together; instead, the airline insisted “the chatbot is a separate legal entity that is responsible for its own actions.” In February, a B.C. Civil Resolution Tribunal member called that position “remarkable” (not in a good way) and ordered Air Canada to pay up.
Last July, Meta released the third version of its LLM, Llama, marking the latest step in Mark Zuckerberg’s quest to build general AI that’s freely available to everyone. He maintains that the most robust and useful AI framework requires less gatekeeping, drawing a parallel with the Linux operating system, which was widely available for public use in the 1990s and 2000s. Meanwhile, many experts want to hammer out concrete regulations before adopting an open-source approach to this tech. It’s worth noting that any data a user feeds into an AI tool, or anything they create with that tool, can be used to train the next generation of these models, and often is by default. (In the case of ChatGPT, users can change their settings to ensure their conversations aren’t used for training purposes, although that option also cuts off access to their chat history.)
Given the sensitivity of personal medical information, there are additional considerations for any company developing AI tools for health-care applications. Adhering to a closed-source model (that is, ensuring that any information added is fully encrypted and restricted so that it can only be accessed by a designated pool of users) is one guardrail. Phelix’s proprietary, pre-trained AI model, for instance, does not save data or exchange it with third parties such as OpenAI. Tali’s tech does not store audio gathered during patient consultations, and all identifying details are stripped from its recordings and transcripts.
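What does stripping identifying details look like in practice? Neither company has published its pipeline, so the following is purely illustrative: a simple rule-based scrubber of the kind often used as one layer of de-identification. Production systems typically layer trained named-entity recognition on top of rules like these.

```python
# An illustrative sketch of stripping identifiers from a transcript. This is
# not Tali's or Phelix's actual pipeline; the patterns below are deliberately
# simplistic and would be combined with NER models in a real system.
import re

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[HEALTH_CARD]": re.compile(r"\b\d{4}[-\s]?\d{3}[-\s]?\d{3}\b"),  # e.g. Ontario's 10-digit format
    "[DATE]": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def deidentify(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

transcript = "Patient called 416-555-0199 on 2024-03-02; health card 1234-567-890."
print(deidentify(transcript))
# Patient called [PHONE] on [DATE]; health card [HEALTH_CARD].
```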
This cautious approach does create some hiccups, as Yassaei notes. While the limitations around data collection are intended to protect patients and clinicians, they also affect the speed and quality of documentation. “Finding the right balance between improving clinical workflows and minimizing privacy and security risks highlights the intricate challenges of integrating AI into health care,” she says. This process requires a thorough understanding of the regulatory environment and the ethical considerations unique to this sector, as well as a firm grasp of how best to deploy this technology to enhance services and reduce burnout.
Rudzicz cautions people to resist binary thinking around generative AI — it’s not going to be society’s saviour or its downfall — and encourages organizations to play around with the technology. “It’s a good idea to turn the faucet on a little bit, try to find strategic areas where these agents can be useful, but not just go all in,” he says. “I think it’s too early to do that. We still need a human in the driver’s seat.”
CTVNews.ca has partnered with MaRS to highlight Canadian innovations in health care.