UL’s Prof Pepijn van de Ven discusses his research into using simple AI models to improve mental health interventions.
AI’s use in healthcare has been a prominent topic in the tech world of late.
Last month, prominent generative AI companies OpenAI and Anthropic both launched dedicated healthcare-focused services for their respective chatbots.
While both features – ChatGPT Health and Claude for Healthcare – were developed to assist users with tasks such as understanding test results and preparing for appointments, some researchers are exploring AI’s potential in more specialised areas of healthcare.
One such researcher is Prof Pepijn van de Ven, a professor in the Department of Electronic and Computer Engineering at the University of Limerick (UL).
With a background in electronic engineering and a PhD in artificial intelligence, van de Ven is currently the course leader of Ireland’s National Master’s in AI, delivered by UL in close collaboration with ICT Skillnet. He is also the founding director of UL’s D2iCE research centre, which researches the development and deployment of AI with ethical, sustainable and trustworthy use in society at its core.
Currently, van de Ven’s research focuses on the use of AI in mental health interventions.
“I’ve been very lucky and have had the opportunity to collaborate with some of the trailblazers in what we call internet interventions, which is any intervention delivered via the web,” he tells SiliconRepublic.com.
“In the last 15 years, I have contributed to research programmes which focused on the use of smart technologies in the delivery of mental health interventions with partners across Europe, Australia, North and South America, and of course also Ireland.”
He explains that the contributions he and his team have made to these projects revolve around using artificial intelligence to improve how those interventions are delivered.
“For example, we have shown that AI can do the time-consuming screening of patients that a clinician would otherwise have to do, thus freeing up that person for contact with patients,” he says. “Such screening interviews tend to use a battery of questionnaires that can be a real burden on patients. We do a lot of work around analysing the questionnaires typically used in mental health during screening to see if these can be shortened.”
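To give a rough sense of how such questionnaire shortening can work, the sketch below trains a simple classifier on synthetic questionnaire responses and keeps only the items it finds most predictive. It is purely illustrative: the data, item counts and model choice are assumptions made for this example, not the team’s actual method.

```python
# Illustrative sketch only: shortening a screening questionnaire by keeping
# the most predictive items. All data and item counts here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_items = 500, 20  # e.g. a 20-item screening questionnaire

# Simulated 0-3 Likert-style responses and a synthetic screening label
X = rng.integers(0, 4, size=(n_patients, n_items)).astype(float)
y = (X[:, [1, 4, 7]].sum(axis=1) + rng.normal(0, 1.5, n_patients) > 5).astype(int)

# Baseline: cross-validated accuracy using the full questionnaire
baseline = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Rank items by coefficient magnitude and keep the five strongest
full_model = LogisticRegression(max_iter=1000).fit(X, y)
top_items = np.argsort(np.abs(full_model.coef_[0]))[::-1][:5]
short_acc = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, top_items], y, cv=5).mean()

print(f"Full {n_items}-item accuracy: {baseline:.2f}")
print(f"Shortened {len(top_items)}-item accuracy: {short_acc:.2f}")
```

In this toy setting, a handful of well-chosen items typically recovers most of the full questionnaire’s screening accuracy, which is the intuition behind shortening.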
‘We’ll need to think very carefully about the use of AI wherever we consider its use to prevent unintended consequences.’
Benefits and caution
Van de Ven considers his research important because of its potential to assist an area of healthcare that has long suffered from a lack of proper attention.
“Unfortunately, there is still a massive stigma on mental health and services tend to be under-resourced. The well-considered use of AI has the potential to reduce thresholds to access in these services and can also make the provision of these services more efficient.
“As our population ages, the need for healthcare services, including, of course, mental healthcare services, will only increase. I think it’s a simple fact that the only way we can ensure high quality services for everybody is through the use of AI.”
One misconception he says people have about his work is the belief that “AI equates to generative technologies such as ChatGPT”.
“This misconception, given all the remarkable advances with generative AI, has led to a lot of hesitance around the use of AI,” he says. “The models that we use are really simple compared to ChatGPT.”
He explains that by using simple AI models within such a sensitive area, the risk of harm to patients is lessened – adding that he cautions against the use of generative AI and large language models to replace human staff in services such as counselling.
“We should be very careful,” he says. “I am a proponent of the careful use of AI to support healthcare providers in their roles and to allow them to spend more time with patients where possible.
“We’ve all heard the stories of people using generative models such as ChatGPT to discuss their mental health issues and really confiding in these AI models. And unfortunately, this has led to catastrophic outcomes in some cases.”
For instance, in December OpenAI was sued over claims that ChatGPT encouraged a man with mental illness to kill his mother and himself.
“As it stands, we cannot guarantee how a generative model will respond to a prompt and for this reason such use requires further research and careful testing before it can become mainstream.
“Although any AI model can cause harm just like most other technologies, the simple models we develop help with a very narrow task and often do so in a way that can be understood by a clinician,” he says. “As a result, their capability to do harm is limited and well understood.”
Personae
One project that van de Ven and his team are involved with – as the only non-Danish partner, he adds – is the Personae project, which aims to adapt a fully online mental health service already used in the Danish healthcare system to what van de Ven calls a “stepped care model”.
He explains that this model presents support for patients across three different steps, or levels.
At the lowest level, patient engagement is entirely self-directed. The second level takes a blended approach: patients have access to self-directed treatment while also being able to avail of online sessions with a therapist.
The final step is the “traditional approach”, he says, where patients see a therapist for every session, albeit in an online format.
“The expectation is that this stepped-care approach will result in more efficient use of healthcare resources and thus an opportunity to treat more people with the available resources,” he says. “Our role in this project is to create AI models that can predict what type of intervention a patient requires based on assessing the information people provide when they enter the service.
“Down the line, the hope is that our models can also inform what step in the stepped care model a patient should receive.”
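As a rough illustration of the kind of triage model described here, the sketch below maps entirely invented intake features to one of the three stepped-care levels using a multi-class classifier. The features, labels and model are illustrative assumptions and are not drawn from the Personae project.

```python
# Illustrative sketch only: predicting a stepped-care level from intake data.
# Features, labels and model are invented; this is not the Personae code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 600

# Invented intake features: symptom-severity total, prior-treatment flag, age
X = np.column_stack([
    rng.integers(0, 28, n),   # PHQ-style severity score (0-27)
    rng.integers(0, 2, n),    # previously treated? (0/1)
    rng.integers(18, 80, n),  # age in years
]).astype(float)

# Synthetic care levels loosely tied to severity, plus noise:
# 0 = self-directed, 1 = blended, 2 = therapist-led
y = np.digitize(X[:, 0] + rng.normal(0, 4, n), bins=[9, 18])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["self-directed", "blended", "therapist-led"]))
```

In practice, such a model would support rather than replace a clinician’s judgment, in keeping with van de Ven’s emphasis on narrow, well-understood tools.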
As for Personae’s current progress, van de Ven says his project partners in Denmark have created a new intervention suitable for delivery across the three levels, as well as a brand-new mobile platform to support its delivery.
“After two years of hard work, the trial was started recently and it’s going well. In the very near future we hope to receive lots of interesting data to improve the performance of our AI models further.”
Speaking of the future, what are van de Ven’s hopes for the long-term impact of his work?
“I’m hopeful that we can do right by mental health patients and their loved ones by improving the services provided to them,” he says. “Internet interventions and AI will play an important role in this process, but AI is very much a double-edged sword.
“We’ll need to think very carefully about the use of AI wherever we consider its use to prevent unintended consequences.”