The Healthcare Divide – Episode 2
The Digital Therapist
Artificial Intelligence is rapidly changing the healthcare industry. While it’s creating new opportunities for treatment, there are ongoing concerns about the ways AI perpetuates and amplifies bias. We explore the promises and the perils of this emerging technology in the mental health field with the creators of the Mind-Easy app and a digital health researcher.
Dalia Ahmed and Akanksha Shelat, two of the co-founders of the Mind-Easy app
Dr. Nelson Shen, project scientist with the Digital Innovations Unit within the Centre for Complex Interventions at CAMH.
The Healthcare Divide
Dr. Alika Lafontaine Before we begin, I want to let you know that we’re talking about mental health and suicide in this episode. If you’re experiencing a mental health crisis or just need to talk to someone, we encourage you to reach out to talksuicide.ca. We’ll include other resources in the show notes.
Akanksha Shelat There is already a very obvious shortage of therapists and clinicians in the world. And that number drops significantly when you talk about clinicians that are culturally competent.
Dr. Alika Lafontaine Canada’s healthcare system should provide equal access to everyone. But in reality, it’s a system of haves and have nots. I’m talking to the people who have experienced these inequities first-hand, and those who are working to create change.
Dalia Ahmed There’s gaps in the ways that we use AI and the field needs to develop a lot more until we can feel like this is something that we can use all the time in different ways. But just knowing that it’s a tool right now that can help us, I think is the best way to look at it.
Dr. Alika Lafontaine There’s a major conversation going on in healthcare right now about how artificial intelligence could change almost every aspect of health service delivery. In this episode, we’ll talk about what the introduction of AI means for the field of mental health, and how it could amplify existing inequities if we’re not careful. I’m Dr. Alika Lafontaine, an Anesthesiologist and the first Indigenous physician to have led the Canadian Medical Association. From the Canadian Race Relations Foundation…this is The Healthcare Divide.
Maya AI Hi! Welcome to Mind-Easy.
Dr. Alika Lafontaine So, do you want to talk to Maya?
Hailey Choi Yes. Do you want to click the start button?
Dr. Alika Lafontaine Sure. So I’m going to click start here. Let’s see what Maya has to say.
Maya AI Let’s do your daily check in. There is no right or wrong answer. I am here to help you…[fade under VO]
Dr. Alika Lafontaine Maya is the name of an avatar in a wellness app called Mind-Easy. My producer Hailey and I sat down together to check out the app.
Maya AI How are you feeling today?
Dr. Alika Lafontaine I’m given a few different options. Sad. Stressed. Happy. Calm. Anger. Yeah, I’m feeling pretty calm today, so I’m going to click the calm button. We’ll see where that takes me.
Maya AI Feeling calm and being in a state of relaxation may not be the easiest emotions to come by. So it is nice to take the time to fully enjoy this feeling. Let us dive through how it feels…[fades out]
Dr. Alika Lafontaine Though her voice has that robotic, algorithmic quality to it, the video looks human. Maya is a Black woman, dressed in a suit, standing in front of a simple background. She is just one of many avatars in this app. If you switch to a lesson in Arabic, for example, the avatar changes. Now she has a different skin tone and her mannerisms have changed.
Arabic Avatar [speaking Arabic]
Dr. Alika Lafontaine The avatars deliver lessons in topics like “Standing Up To Your Inner Critic,” “Coping with Miscarriage” and “Challenging Procrastination.” I click on that last one.
Maya AI Are you afraid of failure? Are you trying to be perfect? Once you identify the root cause, you can start to work on overcoming it. Remember, you’re not lazy. You’re just human. We all have moments of procrastination…[fades out]
Dr. Alika Lafontaine It’s a pretty good affirmation, actually.
Maya AI The key is to not let it take over your life.
Dr. Alika Lafontaine Remember you’re not lazy. You’re just human.
Hailey Choi Hmm. Has there been anything that you’ve been procrastinating recently?
Dr. Alika Lafontaine You know, there’s a variety of different tasks that I’ve been procrastinating recently. I usually clean the garage at the beginning of summer. It’s been about four weeks, so.
Dr. Alika Lafontaine We recorded this in the summer. I still haven’t cleaned my garage.
Dr. Alika Lafontaine I’m going to click finish. Oh, and now I’ve just started my one day streak.
Dr. Alika Lafontaine The reason we’re trying out this app is to explore the question of how artificial intelligence is changing the field of mental health. One of the long-standing concerns with AI is that it amplifies bias against marginalized people.
[sound clip] MP Pat Kelly What happens when artificial intelligence goes wrong?
[sound clip] MP Michelle Rempel Garner A man allegedly committed suicide after interacting with an AI chatbot.
[sound clip] Professor Woodrow Hartzog AI systems are notoriously biased along lines of race, class, gender, and ability.
Dr. Alika Lafontaine Algorithms are only as good as the data you train them on. And studies have repeatedly shown that medical research falls short when it comes to representing marginalized communities.
[sound clip] Senator Josh Hawley I think some of what we’re learning about the potentials of AI is exhilarating.
[sound clip] MP Ryan Williams We have AI working right now with health care diagnostics. Assist doctors in diagnosing diseases like cancer, enabling earlier detection and improved treatment outcomes.
[sound clip] MP Mark Gerretsen It is going to transform just about everything in our lives…
Dr. Alika Lafontaine On the other hand, AI is being used to create new access to care in a field where human resources can’t keep up with demand. That’s where the founders of Mind-Easy thought they could make a difference. I sat down with two of Mind-Easy’s co-founders, Dalia Ahmed and Akanksha Shelat, to talk about their app and the expanding role of technology in mental health.
Dalia Ahmed I’m Dalia. I am the co-founder and chief clinical officer at Mind-Easy. My background is in the clinical space, so I’m a registered psychotherapist, qualifying.
Akanksha Shelat And I’m Akanksha Shelat. I am one of the three co-founders of Mind-Easy, as well as the CTO. I have two degrees in computer science and cognitive science from the University of Toronto, so I’m a full stack developer.
Dr. Alika Lafontaine Along with Alexandra Assouad, Dalia and Akanksha developed Mind-Easy to address poor availability of mental health services and the lack of personalized, culturally appropriate mental healthcare.
Akanksha Shelat The two areas that we focus on is one, understanding that there isn’t enough human capital in the world to really target every single demographic and every single identity that we want to focus on. And the other aspect is in the current mental health industry and the way most people understand it, it is very much you’re either in therapy or you’re not in therapy. There’s no in-between. And so we use digital avatars that look like you, speak like you, sound like you!
Dr. Alika Lafontaine The idea is, the avatar can help users feel a sense of comfort that they may not get with a therapist who doesn’t share their cultural background.
Akanksha Shelat You know, all the clinicians that they’re going through, they look different from them and having to go through that whole process of, I’m already dealing with stress and now it’s still my job to educate you and to understand what I’m going through. So we take all of that out and we deliver all of our resources through human-like avatars.
Dr. Alika Lafontaine Mind-Easy’s founders were all international students living in Canada, which informed their approach to the app.
Akanksha Shelat I’m originally from India, raised in the Middle East. Dalia is originally from Yemen, raised in the Middle East. Our third co-founder, Alexandra, she’s originally from Lebanon. And so while we were dealing with the pandemic together, I think there was a sense from all of us where we recognized that in this space there wasn’t as much information and resources that were directed specifically for minority groups. You hear a lot of people being in therapy. You hear a lot of people, you know, trying really hard to find the right clinician to get to. But no one really addresses this area of, well, people are different and people from different parts of the world experience these stressors differently.
Dr. Alika Lafontaine In some ways, this was a very personal endeavour.
Dalia Ahmed You know, I’m from Yemen. There’s war in Yemen. It’s kind of, just being able to process all of that. There are therapists who are very knowledgeable about this. They live in Detroit. I lived in Toronto, for example, and I wasn’t able to access that therapy. I wasn’t able to get that validation or the acknowledgment or the potential specific interventions that could speak to my identity. It was too far. There were all these like, barriers in place.
Dr. Alika Lafontaine A report from 2019 by the Mental Health Commission of Canada says that immigrant, refugee, ethnocultural, and racialized populations are less likely to seek mental health treatment than other Canadians. And more likely to use emergency rooms if their mental health challenges reach a breaking point. The reasons for this divide include language barriers, limited access to services, fear, and stigma. The report also says, quote: “In many cases mainstream mental health care is inconsistent with the values, expectations, and patterns of immigrant and refugee populations.” End quote. Back to Akanksha.
Akanksha Shelat It’s not like other cultures and places don’t practice psychotherapy. You know, if you go to East Asia, they have a version. If you go to South Asia, there’s another version. I think in North America we sometimes have that perspective that, you know, the way we do it is the way that works, and that’s not necessarily true. And so what we have done is we’ve built a network of over 70 different experts that we work with globally. These are experts that have actually spent their whole lives in that demographic, learning from that demographic, not just, oh, this is what’s prevalent, but this is how, how they see mental health.
Whether that’s the language that they use around mental health or what are some interventions that actually work in those spaces.
Dr. Alika Lafontaine Research supports the thesis that there is a gap that needs to be closed between the needs of culturally diverse populations and culturally specific mental health care. For example, Toronto’s Centre for Addiction and Mental Health, otherwise known as CAMH, notes that South Asians have relatively high rates of anxiety and mood disorders, but they are 85% less likely to seek treatment for those challenges when compared to other Canadians. CAMH has found that using culturally informed approaches to therapy, which they call culturally adapted CBT, have more positive results than traditional CBT. Working and studying in the psychotherapy field, Dalia saw this type of groundbreaking work being done.
Dalia Ahmed Coming across culturally adapted resources has been kind of a mind blowing experience where I was like, whoa, like, why do we not have this everywhere? Why? Like, this is why people don’t access therapies because they don’t get resources that speak to them in the way that they understand and communicate and live life. Maybe there is a way to make scalable change through this approach in a way where you can include, you know, diversity, you can include different voices and you can make it a collective efficacy approach that can really serve a diverse set of people.
Dr. Alika Lafontaine So I know AI has kind of been having its moment. To some degree, it’s almost become a meme. But I have seen and in a couple of your published interviews that you have talked about AI and it kind of it having a role in all of this. Maybe you could explain a bit about that.
Akanksha Shelat Yeah, so when people think of AI now, I think their mind jumps to ChatGPT and, you know, some of the trends that they’re seeing. But almost every single thing that we use does have some form of AI in it. One is obviously the avatars themselves. There’s a lot of tracking that happens behind the scenes. So there is a mechanism that assesses, okay, this person’s at this stage so this is how they need to be prompted. So there’s some assessment that happens there. We do want to expand that more and more as we go forward. But when we initially started, that was pre-ChatGPT boom, so we are now working towards incorporating a lot more of it. Except, you know, we also want to be careful in how we incorporate it. You might have seen there’s actually quite a boom in, like, chatbots that are like mental health companions and mental health coaches or whatever. And it’s actually very easy to break those. It’s very, very easy to say something inappropriate and have it say something inappropriate back to you. So we’re working towards, you know, how do we work with some of these giant language models? And how do we incorporate not just language itself, which they’re getting better and better at. For example, if you spoke to ChatGPT in Hindi, it’s actually going to respond to you in Hindi. But it doesn’t have any kind of, like, clinical information behind it to give you something that’s clinically helpful as opposed to just saying, oh, you’re not feeling well. Okay, great. Go on a walk. You know, that’s the extent of its knowledge.
Dalia Ahmed Yeah, I think that AI has been really helpful in making a scalable solution as well. We have been able to also provide resources in the Middle East, you know, after times of crisis where, you know, there are no clinicians available or there might not be resources like the traditional community center for that education. And so this is where I see that the scalability is really, really helpful. Like, I grew up in the Middle East where the topic of mental health was taboo. Like, I was studying psychology and people, they would laugh and be like, well, you’re not going to have a job. And so now being able to say, hey, look, we can actually provide something that people can watch privately in their homes. This can be something that has been really, really amazing. And I’m not from the tech space, so I’ve just been super mind blown by all the things that, you know, the accessibility that has been made possible by it.
Dr. Alika Lafontaine So it sounds like you put a lot of effort into making sure that this is grounded in things that work and reaching out to humans who are interacting. But one of the promises of the AI is obviously the scalability that you talked about. And so we now have generative AI that is so good at convincing us that it’s a human that sometimes we just walk away and do what it tells us because we assume that it must know what it’s talking about. It’s AI, right?
Akanksha Shelat Yep.
Dr. Alika Lafontaine And I think with the advances in generative images and video, you know, increasingly it’s going to be much more difficult to tell. Is this a person? Is there a person behind this machine? Or is it just the machine? From both your viewpoints in the space, where do you think this all can go horribly wrong?
Akanksha Shelat It’s really funny you ask that because I don’t know if it’s just because I’m a cynical person in general, but I feel like, out of the three of us, I’ve always been the one that’s, like, overly careful about, you know, even when we saw some of the other trends come up in Web3 and some of the other AI transformations that have happened even in like the last two, three years, I’ve always been the person going, mmm, that sounds iffy. Like, I’m not sure about that! We have seen some of these like AI assistants and chatbots not have the capacity for…forget empathy, I mean, that’s a different conversation, too. But just even that sense of, just because someone says they want to do something harmful to themselves and maybe they can rationalize their own conversation with you, doesn’t mean they should do something harmful to themselves. Right? And we as humans can say, no, don’t harm yourself, but it becomes really easy for us to convince the, the machine that I’m doing this because I have these three logical reasons why I want to do it and the machine to go, yeah, you’re right, do it. And it’s such a scary thought that it can go in that extreme direction. And that’s what we always think of, I think, as, you know, entrepreneurs in the space for mental health and in the space of AI. It’s our job, quite literally, to think of the worst case scenarios and go, okay, how do we prevent that? How do we not go down that slippery slope? And the one thing that I think keeps us motivated enough to stay in the space is this change is happening whether or not we’re a part of it.
Dalia Ahmed We’re not trying to replace therapy. I don’t think we’re ever going to replace therapists. I think it’s important to have therapists. But to optimize their resources. And then the other piece is the data that’s out there is not something that we can just very easily trust and use. Just knowing that it’s a tool right now that can help us, I think is the best way to look at it.
Akanksha Shelat Yeah, and I think we haven’t even touched the fact that, like, on the other technical side, like it’s humans dissecting this data and humans are inherently also biased. And so, you know, what do we do about that? So I think it’s just good questions to ask. And I think the more we incorporate it in situations where we can trust that there is a fallback for a clinician or a human to validate and to understand if this machine and if this AI is doing the right thing. I don’t think we’re going to be replaced anytime soon.
Dr. Alika Lafontaine Other people have asked this same question: What do we do about bias in the data? One review from 2022 by a research team from CAMH and the University of Toronto found serious limitations in the way race-based data is collected and used for mental health research. The problem isn’t just that there isn’t enough data. It’s also that the methods for measuring it are inconsistent. Around a third of the studies they looked at didn’t provide clear definitions of race or ethnicity. In addition, marginalized groups—Indigenous people in particular—were often excluded entirely from these studies due to the small sample sizes. The review also brings up AI. It reads: “Rapid advancements in machine learning make it important to revisit how race or ethnicity are measured and operationalized in mental health research, since biases can be amplified when baked into machine learning data and models.” Dr. Nelson Shen is a co-author of that review.
Dr. Nelson Shen You know, we did a review and we found that, you know, even just the way we operationalize the term “race” in data sets can mean many different things. And, you know, oftentimes it’s used as a proxy for discrimination. But, I think we need to be very careful about how we structure these data.
Dr. Alika Lafontaine Dr. Shen is a project scientist at CAMH. He’s part of their Digital Mental Health Lab, where his research involves engaging clinicians, patients, and others in digital health and AI initiatives. Dr. Shen says that even with the best intentions, promising technology can widen the gap between haves and have nots.
Dr. Nelson Shen Good intentions are not enough. I mean, my favourite example is an app that I really dislike, but Pokemon Go, it’s just because everyone always had their head down. But you know, Pokemon Go, great intentions, and I thought it’s transformative in the way that it got people to be active without really thinking about it, right? You got people walking all around the city facing their phones. But what we quickly found was people started calling Pokemon Go racist because all the good Pokemon were in the high density areas, in the urban areas, whereas people in more remote areas or low socioeconomic status areas, there were no good Pokemons around. And why was that? It was because they weren’t really mindful of how they designed the algorithms and they used these maps from a previous app that covered predominantly white, affluent, and commercial areas. And that’s what happened, right? So a potentially well intentioned app somehow gets labelled as being racist.
Dr. Alika Lafontaine But overall, Dr. Shen remains optimistic.
Dr. Nelson Shen There’s a lot of stuff going on and people are really excited. You know, this past year with all the generative AI, people are really excited about leveraging AI in healthcare and in all aspects of life, really.
Dr. Alika Lafontaine He’s excited about the ways artificial intelligence can help patients get better access and providers distribute more personalized care. One example is in triaging, the process of deciding the nature and urgency of care each patient needs.
Dr. Nelson Shen Recently, they had an announcement with Kids Help Phone in collaboration with the Vector Institute here at U of T, where they’ll be using AI to really understand who’s on the other side of the line by analyzing their voice, their speech patterns, their word choices to provide more personalized and precise services. So if there is something that is high risk, you can escalate to some crisis responders. Whereas if someone’s at low risk, you can have that conversation with a chat bot.
Dr. Alika Lafontaine New technology has also revolutionized the way that doctors can collect data in real time, using it to predict where a patient’s health journey is heading. And then there are possibilities for treatment: helping AI and clinicians to work more seamlessly together.
Dr. Nelson Shen There’s kind of cognitive behavioural therapy, which is kind of standard practice across a lot of the mental health conditions. So there’s a lot of guided treatments where AI recommendations are being made in lieu of actual in-person care. But what we’re finding is that you need people to be there as well.
Dr. Alika Lafontaine You know, as a clinician, you know, I provide anaesthesia. I read through, you know, thousands of pages of medical records every month. It’s clear that there’s a lot of bias and often inaccuracies within those medical records. Patients are being labelled one thing versus another. And, you know, that really drives the way that you treat and interact with them in the healthcare encounter. You know, you have machine learning models and algorithms trained on this exact same data. How do you think this will affect patients as these learning models continue to mature and become a bigger part of receiving health care?
Dr. Nelson Shen I think there needs to be understanding across the board with everyone what this data will do, right? Everyone’s starting to understand, you know, if we have data that’s not correct or have biases baked into them, you’re going to exacerbate inequities that are already there. So if we keep on training and using, it’s a perpetual cycle that’s going to continue to exacerbate existing inequities. Also, the data is not complete, as well. There are people that are hesitant to engage in this. And, you know, sometimes, at the point of care, might not be as forthcoming or may not be willing to share the data in these large data sets. And that poses an issue as well. Right? Because if they don’t participate, then by trying to protect their own privacy, they may inadvertently also create these inequities based on whatever data we’re training the AI on. So that is a serious risk. And I don’t know what the solution is for that. I think we’re trying to figure that out right now.
Dr. Alika Lafontaine Do you have any ideas on how, from your standpoint and experience, we could mitigate this moving forward? Because I know no one’s solved this problem, but based on what you know, how could we go about solving the problem?
Dr. Nelson Shen The approach we’re taking, in both, from a clinical standpoint and from a patient standpoint, we’re doing a lot of engagement work, you know, across the spectrum, whether it’s consultation or getting in there and, you know, co-designing these solutions with people. So some work I’ve been doing with Canada Health Infoway where we’ve been doing some pan-Canadian surveys to just kind of gauge, what is the temperature on privacy in the public? And we found that, you know, the more you trust in the system, the more you’re willing to share your data. And, you know, a lot of that is, how do we build trust in the system?
Dr. Alika Lafontaine Just to touch on that point about trust. If we rely so heavily on trust in the system, what’s the unique danger that marginalized communities, you know, equity-deserving communities have? Because trust tends to be lower if you’ve had bad experiences or less access to health care. You know, are we creating a situation where they just continue to be left out? Or, you know, how do we chart our way to a future where we’re not worsening inequities?
Dr. Nelson Shen You know, that’s where really meaningful engagement is necessary to understand, well, where are you at and how can you bridge that gap that’s there?
Dr. Alika Lafontaine This is obviously a passion of yours. What makes you most excited looking into the future? And the other side of that question: what scares you the most, looking in the future, if it was to happen?
Dr. Nelson Shen Should I start with the negative or the positive first? [laughs] You know, what scares me the most is still a lot of uncertainty about what AI is and where it’s going. And you know, how the public views it. And I think we need to start being more collaborative in how we approach AI. What excites me? So, you know, what got me into health informatics, you know, was this idea of patient empowerment and giving patients the tools they need to feel that they can take better care of themselves. But for me, I’m interested in how AI can be used to help advocate for patients. And be a platform for them. So, you know, in mental health, we have our own stories, but oftentimes we don’t know how to tell them. So how can we use AI to help us tell that story? Help us put the right words in there so that we’re not discriminated against, but we also have the avenue to share our rich experiences with people so that people don’t see us in such a stigmatized light, but really understand what our experiences are. Yeah, those are my, I guess, fears and dreams. Ha.
Dr. Alika Lafontaine The inertia against change in medicine can be pretty heavy. History and tradition are as much a part of medical practice as science and humanity. To make this more concrete, major changes in medical practice take an average of 17 years to become standard. Every so often, however, overwhelming crisis drives momentum. And things that seemed like they’d never change, suddenly do. There are few areas of medicine where the crisis is more severe than in mental health. And it’s going to get worse. Today 6.7 million Canadians struggle with mental illness, and one in two Canadians will have experienced a mental illness by age 40. The consequences of access gaps are dire, whether it’s looking at the heavy burden of individual suffering or the ongoing cost to Canadian society as a whole. So what’s the future of mental health access in Canada? Will we solve access gaps with technology, or replace providers with artificial intelligence? Our guests today leave me with the thought that the future is less likely to be either/or, and more likely a new path where we fuse the availability and efficiency of technology with health providers’ judgment and humanity. Technology won’t replace people. But people who don’t use technology will likely be replaced by people who do. We really are stepping into the unknown. But we owe folks struggling with mental illness the effort to step through the darkness and find what’s on the other side.