Kids Help Phone seeks help from AI technology to meet the demand for mental health support
TORONTO – Kids Help Phone says it is turning to artificial intelligence to help respond to the “massive need” as more and more young people seek mental health help and support.
“Young people are changing fast and technology is changing faster,” said Michael Cole, senior vice president and chief information officer for Kids Help Phone.
The helpline is partnering with the Toronto-based Vector Institute, which bills itself as a consultant that helps organizations, businesses and governments develop and deploy “responsible” AI programs.
The planned AI will be able to recognize key words and speech patterns from young people who contact Kids Help Phone, helping busy counselors pinpoint what they need and tailor their support accordingly.
But Kids Help Phone says it is well aware that the term ‘artificial intelligence’ can alarm people if it conjures up images of a computer or chatbot, rather than a human being, on the other end of the helpline.
That’s not how the AI program will work, said Katherine Hay, the organization’s president and CEO.
“It’s always person to person,” Hay said. “It doesn’t take the place of a human-to-human approach.”
Instead, the information gathered by AI will be available to human counselors as they work with the young person on the other end of the call or text, she said.
The 24-7 national support line for children and youth has seen a huge surge in demand for its services since the start of the COVID-19 pandemic. After receiving about 1.9 million calls, texts, live chats or visits to its website in 2019, Kids Help Phone has seen that number rise to more than 15 million since 2020, according to figures from the organization.
The organization is already using some AI technology to help triage texts, Hay said.
“For example, if someone uses trigger words or phrases like ‘I feel hopeless, I think I want to die,’ or something along those lines, that conversation will be front and center (to talk to a counselor),” she said.
Roxana Sultan, Vector’s chief data officer and vice president of its health division, said treating AI as a tool, not a replacement for humans, is a critical part of using the technology responsibly in healthcare.
“We have been very clear with all our partners that the tools we develop are always intended to support the clinicians. They are never intended to replace physician judgment or physician involvement,” Sultan said.
The Kids Help Phone AI tool uses “natural language processing” to “identify keywords or trigger words that correlate with specific types of problems,” she said.
“If a young person uses a specific word in their communication that is related to a specific problem or issue, this model will flag it and alert the professional staff,” Sultan said.
For example, AI can be trained to recognize words that suggest a possible eating disorder, allowing a counselor to steer the conversation in that direction and provide specific resources and support.
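Neither Kids Help Phone nor Vector has published technical details of the model, but the kind of keyword flagging Sultan describes can be illustrated with a minimal sketch. The example below is purely hypothetical: the trigger phrases, concern categories and function name are invented for illustration, and a production system would rely on a trained natural language processing model rather than a hard-coded list.

```python
# Hypothetical sketch of keyword/trigger-word flagging.
# The phrases and categories below are invented for illustration only;
# a real system would use a trained NLP model, not a hard-coded list.

TRIGGER_PHRASES = {
    "suicide risk": ["want to die", "feel hopeless", "end it all"],
    "eating disorder": ["stopped eating", "hate my body", "purging"],
}

def flag_concerns(message: str) -> list[str]:
    """Return the concern categories whose trigger phrases appear in a message."""
    text = message.lower()
    return [
        category
        for category, phrases in TRIGGER_PHRASES.items()
        if any(phrase in text for phrase in phrases)
    ]

if __name__ == "__main__":
    # Flags are surfaced to a human counselor, never acted on automatically.
    print(flag_concerns("I feel hopeless, I think I want to die"))
    # ['suicide risk']
```

In this sketch, the output is only a prompt for the human counselor, consistent with Hay's and Sultan's point that the tool supports rather than replaces person-to-person support.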
AI can also be trained to identify new words and trends related to situations that cause anxiety and fear, such as a pandemic, climate change, wildfires or a mass shooting.
“It really aims to improve the services provided by the professional staff,” Sultan said. “(It) helps them be more efficient and effective in terms of how they then deal with the issues that come up over the course of the conversation.”
The key, Sultan said, is to ensure that the AI tools are thoroughly tested by clinicians before they are launched. Kids Help Phone and Vector expect to launch the new technology sometime in 2024.
After it’s launched, it’s critical that the frontline workers who use it continually evaluate the information they’re given.
“You really can’t blindly follow what an algorithm tells you to do in practice. So no matter how high-quality the model may be, how well-trained it may be, it is never intended to replace your judgment and your experience as a clinician,” Sultan said.
If the AI generates something that seems “a little off,” that should be flagged and investigated, she said.
Another concern people may have about AI is the confidentiality of their personal information, she said.
“It’s very important to be clear that all data used to train the models is anonymized,” Sultan said.
“So there’s no risk of information being known about someone’s name or, you know, any identifying factors; those are all removed beforehand.”
The use of AI in mental health care is on the rise across the country, said Maureen Abbott, manager of the division of access to quality mental health care at the Mental Health Commission of Canada.
Current uses range from individual services to monitoring social trends, and they are driving growth in mental health apps, she said.
“AI is being used in speech recognition to pick up a cadence in a person’s voice and help diagnose manic episodes and depression,” Abbott said.
“It is used in machine learning chatbots, and also in social media to identify trends in suicidal ideation, for example by scanning for phrases and words.”
Abbott said there is a need to develop and implement standards governing the use of AI in mental health care in Canada to keep up with its rapidly increasing prevalence.
“AI is already being used in our daily lives, whether we see it or not,” Abbott said.
“So it’s only natural for it to happen for mental health as well, and it’s happening fast.”
This report from The Canadian Press was first published on July 5, 2023.
Canadian Press health coverage is supported by a partnership with the Canadian Medical Association. CP is solely responsible for this content.