Dr. Pim Cuijpers returns!
In part two, Dan and Dr. Cuijpers explore the evolving landscape of psychotherapy, particularly in the context of AI and digital tools. They discuss the risks and benefits of AI in therapy, the importance of simplifying psychotherapy, and the need for effective training for therapists. Dr. Cuijpers emphasizes the significance of understanding the mechanisms behind psychotherapy and the necessity for large-scale research to improve treatment outcomes. Then, Dr. Cuijpers breaks down the role of Metapsy in advancing meta-analytic research in psychotherapy.
Dr. Pim Cuijpers is director of the WHO Collaborating Centre for Research and Dissemination of Psychological Interventions in Amsterdam and professor emeritus of Clinical Psychology at the Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute.
Special Guest: Dr. Pim Cuijpers
Metapsy: Database and interactive meta-analytic tool of psychotherapy RCTs for depression
Innovations to improve outcomes and uptake of psychotherapies for mental disorders: a state-of-the-art review
Five Decades of Research on Psychological Treatments of Depression: A Historical and Meta-Analytic Overview
Effect of matching therapists to patients vs assignment as usual on adult psychotherapy outcomes: A randomized clinical trial
[Music] In the first part of my conversation with our guest, we laid out the larger public health challenge facing psychotherapy: how do we help more people when time, money, and trained clinicians are all limited? In this second part, we get into what that means in practice. My guest explains why guided digital interventions can work as well as face-to-face therapy, why simplifying treatment may sometimes help us reach more people, and why clinicians and researchers should be cautious about claiming that we know exactly how psychotherapy works. But first, if you're new here, I'm your host, Dr. Dan Cox, a professor of counseling psychology at the University of British Columbia. This is Psychotherapy and Applied Psychology, where I talk with leading researchers about what actually matters in practice, what's behind the findings, and what they wish clinicians knew sooner. And if you enjoy the show, please subscribe in your podcast player, or like and subscribe on YouTube. That small click makes a surprisingly big difference for the podcast. And also be sure to check out our website over at psychotherapyandappliedpsychology.com. This episode begins with my guest talking about internet-based therapy and digital mental health tools. So without further ado, here's my conversation with my very special guest, Dr. Pim Cuijpers. [Music] That's something that's very interesting from that public health perspective, because that requires much less therapist time. And I've been looking at guided self-help since forever, because I think that's really fascinating. I mean, digital interventions used to come in book format: you had a psychological treatment, and you learned in a book how you could apply it to yourself. And then you talked once a week to a therapist to check whether things were going well, whether you understood everything, whether you did your homework.
And so you had weekly telephone calls. And that research goes back to the 1960s, so it's not new at all. In the 1960s, you had studies in anxiety disorders where therapists recorded their instructions on LPs, long-play records, so that people could listen to the instructions. So it's not new at all. But now, in the digital age, these methods to learn how to apply a psychological treatment to yourself are on the internet. And that research has been going on for more than 25 years. These are the classical internet digital interventions, so to say. They're very old: 25 years old. I got my first grant for developing a digital intervention in 1998, so that's a long time ago. And what we have seen over the years, and we have done these fancy network meta-analyses on this where you look at the treatment format, we published one in JAMA Psychiatry in 2019, I think, in which we only looked at CBT. And CBT can be done individually, in groups, by telephone, as guided self-help, mostly digital, or as unguided self-help, where you just go to the internet and learn how to apply the therapy to yourself. And we found no significant difference between these treatment formats as long as there is human support. So no difference between individual, groups, guided self-help, telephone: it makes no difference, they all work. Unguided interventions are less effective than guided or individual ones. So that means you can apply CBT in whatever format you want, as long as there's a human involved. And the human support in these classical digital interventions doesn't require highly trained clinicians. In our trials we used trained clinical psychology master's students, and they did it very well. And it takes much less time than individual therapy. But these are the classical, old-fashioned internet interventions. They work.
So if you do that, they work just as well, or just as badly, as individual therapy. Which is interesting for many reasons, because it means that if you go, for example, to low and middle income countries with digital interventions, you need much fewer resources than with individual therapy. And an interesting thing is that in low and middle income countries, we did not find that difference between guided and unguided interventions. Unguided interventions work just as well there. But that's a whole different story. These are the old-fashioned digital interventions where you have to go to your computer, read, and do your own homework assignments. But of course nobody does that anymore. People have their phones, and they work on iPads, not on the computer. And so you have all these smartphone apps, and that's booming as well; there's a lot of research on digital tools. Most apps in the App Store and Google Play Store have not been examined in terms of effects. But there is a lot of research on apps that have been tested in randomized trials. And what you see is that they work, but the effect sizes are pretty small, between 0.3 and 0.4, for most mental health problems. But they do work; if you manage to complete them, they can be effective. And now you have, of course, the large language models, the GPTs, and they are dangerous. We know they're dangerous because they can, in a way, help people with suicidal thoughts to increase these thoughts; we have examples of that. And we know that large language models can also strengthen, for example, psychotic thoughts, hallucinations, and all kinds of delusions that people have.
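A brief note on the effect sizes mentioned above: a standardized mean difference of 0.3 to 0.4 can be translated into a rough number needed to treat (NNT), a metric clinicians may find more intuitive. The sketch below uses the Kraemer and Kupfer (2006) conversion; this particular formula is my choice for illustration, not something specified in the conversation, and other conversions (for example Furukawa's) give somewhat different values.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nnt_kraemer_kupfer(d: float) -> float:
    """Kraemer & Kupfer conversion: NNT = 1 / (2 * Phi(d / sqrt(2)) - 1)."""
    return 1.0 / (2.0 * phi(d / math.sqrt(2.0)) - 1.0)

# Small app-style effects versus a conventional "large" effect.
for d in (0.3, 0.4, 0.8):
    print(f"d = {d:.1f} -> NNT ~ {nnt_kraemer_kupfer(d):.1f}")
```

Under this conversion, d = 0.3 corresponds to an NNT of roughly 6, which puts the "small but real" app effects in concrete terms.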
So they are dangerous now, but you can build safeguards into large language models so that that doesn't happen. And there is an increasing number of randomized trials in which large language models are tested to see if they work as a treatment for depression or for other mental health problems. And until now, it seems that they do work. But the question is, of course: do they work better than old-fashioned internet interventions? Or are they just as good as guided digital tools, in the sense that you don't need human support anymore when you apply them? Well, the evidence is not there yet, so we don't know. What I expect is that they will make a big difference in terms of dissemination of therapies. But I'm not too optimistic about the possibility of these tools to improve outcomes. So will they be better than the 42% response rates in digital interventions or face-to-face therapy? Yeah, that's interesting. With the large language models, unlike the old tools where you create the modules and it's static, it is what it is, and the patient is going to get the same piece out of the same module, with the LLMs there's still this random stuff that's happening, right? It's not as scripted, so it can go off script. And so, in my experience, and I really love these models, there's always going to be more of a risk of them doing something harmful than with the old-school approaches. Absolutely, although I do think that these models can be trained to do the right thing. On the other hand, when I think of treatments for depression, what works for many people, for example, is behavioral activation. And you can talk with a chatbot about behavioral activation, but you still have to do it.
And you still have to think about what pleasant events you like, how you build them into your schedule, and what you do if you cannot find time for them. And I think that's where the change comes from. The same goes, for example, for cognitive restructuring: you can think about your thinking and about your thoughts and how you can challenge them, but changing them during your daily life, that's what matters. With a large language model it's so easy to talk, so easy to exchange things; it explains to you how it works and what you should do, but you still have to do it, and that's not a given. So I'm not sure, I'm not sure, but we will see. I hope they improve things, I really do hope that, but I have to see the results first. Yeah, in my own experience with it, it's such a useful tool for coming up with creative solutions. So you have to be relatively high-functioning to start with to be able to use it as a tool for that. But if you're trying to figure out, okay, what are some pleasant activities that I could do given whatever constraints, you can have a five-minute conversation with an LLM and it'll give you things you never thought of. And you have to be able to say, oh, 70% of this is terrible, but there are a couple of gems in here that I maybe never would have come up with. For those sorts of things, I think it could be tremendously beneficial to patients. Yeah. I hope it works out that way. Let's see. So, you know, another thing you've talked about is simplifying psychotherapy, removing unnecessary components of psychotherapy, which I haven't heard many other folks talk about. Could you talk a little bit about that? Yeah, I mean, that's my public health perspective.
What we also found in a lot of research is that psychotherapies for depression work regardless of the number of sessions. We did not find an association between the number of sessions and the outcome. So a five-session therapy works just as well as a 20-session therapy. I know it sounds crazy, but that's what the evidence says. And I've talked a lot about this; I have had these discussions at conferences and also on podcasts with Bruce Wampold. He says that all therapies work through common mechanisms, and that it doesn't matter so much what the specific techniques are that you learn; it's the common factors that make therapies work. Well, the evidence supports that up to a certain level, but not enough, because we don't know how therapies work. So I wouldn't go as far as Bruce Wampold does. But certainly in depression, we see that all therapies work and that they don't differ significantly from each other. So during these discussions with Bruce Wampold, I always say: the most fascinating thing about this is the question, if everything works, what can we take away without losing its effects? That's important, because if we can minimize therapies without them being less effective, it means we can help more people with the same resources we have now. But in one way or another, that's not what Bruce wants to hear. He just wants to hear that everything works through common mechanisms. But I think this is one of the most important implications of the finding that all therapies work in the same way, I mean, have comparable effects: that we can try to take away parts without reducing the effects. And you can do that in many ways. Digital tools are one important way. And that doesn't mean that I think everybody should be treated with digital tools.
There are people who cannot handle that, who just want to talk with a person, and I think we should offer that. But it also says, for example, that you can start with 10 sessions, and if you don't get better after 10 sessions, then maybe you should switch to another therapy, or you should get antidepressants in addition, or you should get another therapist. Or maybe you say, okay, after 10 sessions, I'm not there yet, but I do expect that in the next 10 sessions I'm going to improve, and you trust that. Or you start with digital tools, and if that doesn't work, you move to another therapy. So the finding that all therapies have comparable effects, and that it's not the number of sessions that determines how well it works, should stimulate people to experiment more. As I said, 42% of people respond to therapy. Why would you need 20 sessions to get that 42% if you can also get it in ten sessions? And then it's still only 42%. But with the 58% that's remaining, you can try another therapy, or you can try drugs, or you can switch to another type of therapy, or whatever. So, I listened to at least one podcast with you and Bruce, and I want you to tell me if my interpretation of what you're saying is correct. What I heard you saying was this: let's imagine that we're training a bunch of therapists, or therapists-in-training, in CBT for depression. But we find, and I'm just picking something randomly here and making this up, that the cognitive restructuring doesn't matter: based on the literature, CBT with cognitive restructuring versus CBT without cognitive restructuring makes no difference.
Well, then if we can remove that from the training, that simplifies the training by 20% or 30% or whatever it happens to be, which means you can focus on the more important things. The training might not be as long. You might not need folks who have as much experience or expertise to do the treatment, because all of a sudden the training for the treatment gets easier. And so more people can do it. If that component or those components don't matter, then why don't we do that? That's how I interpreted what you were saying. Am I on the right page? Yeah, although I don't think I talked with Bruce about training in that sense. That was in my head; that was how I was making sense of what you were saying, of what that could look like in real life. Yeah, I mean, in that respect transdiagnostic treatments are very interesting, because then you have one treatment and you apply it to many different problem areas, and that's much more simple. As for training, I do think that training is very important, and also that you need to be trained in a lot of different areas. But not everybody has to do that. For depression, which is where I have done most of my research, 42% respond to the first treatment. And then after the second treatment, 42% of the remaining 58% respond. But there is a small group of people who don't respond after five therapies, and in psychiatry they call that treatment resistance. I think that's a false term, but that's a different story again. The point is that after a while, when you have done these protocolized treatments, which are based on a lot of evidence, there is still that small group of people who do not respond.
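The sequential response figures above imply a simple cumulative calculation: if each successive treatment helps about 42% of those who have not yet responded, the total fraction helped grows quickly over a few steps. The constant 42% per step is Dr. Cuijpers' simplifying description, not an exact empirical law, but the arithmetic can be sketched like this:

```python
def cumulative_response(per_step: float, steps: int) -> float:
    """Fraction responding after `steps` treatments, assuming each
    treatment helps `per_step` of those who have not yet responded."""
    remaining = 1.0  # fraction still not responding
    for _ in range(steps):
        remaining *= (1.0 - per_step)
    return 1.0 - remaining

for n in range(1, 6):
    print(f"after {n} treatment(s): {cumulative_response(0.42, n):.0%}")
```

After two treatments about 66% have responded, and after five about 93%, which is why a small "non-responding after five therapies" group remains.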
And for these people, I think you need psychotherapists who are really good at their job, not in terms of doing protocols, but in terms of looking at the person in front of them. What's really going on? Where do these problems come from? What are the things that could help this person in this situation, in this specific context? These are the very well trained clinicians, and we need them as well, and they need to be trained in everything possible, because they are the last resort for a lot of patients. One of the things that you've written about, and were actually talking with Bruce about in that conversation, which I think about a lot, is that we don't have good empirical evidence that our psychotherapies work for the reasons we say they work. Just talking about CBT here: does it actually work by changing thoughts? We don't have compelling evidence for that. So because I have you here as an expert, and just for my own personal curiosity: if you were to design the ideal study for doing that type of work, where does your head go? What do you think would be the most compelling study to get there? There is not one study that will solve the issue, and that's the whole problem. Randomized trials are the best scientific design we have, because you randomize people and split them into two groups; one group gets a treatment and the other does not, and then after the treatment the only difference between the two groups is that one had the treatment and the other did not. So from a scientific perspective, that's the best design we have to show whether a therapy works. But how it works, that's a completely different question. If you want to examine that, that's a whole different story. And that's not unique to psychotherapy. We also don't know how antidepressants work.
We also don't know exactly how smoking causes cancer: what happens when people inhale tobacco smoke into the lungs, what happens in the cell with the smoke and the tobacco, and how that causes cancer. We know parts of it, but we don't understand the whole process. And that's the same for psychotherapy. I mean, there is a lot of research on mediating factors. You measure cognitive errors, so to say, and when you get cognitive therapy for depression, you can see, okay, these negative thoughts improve and the depression improves. But that's not evidence that the change in the negative thoughts causes the improvement in depression. It's also very well possible that it works the other way around: that it's the change in the depression that makes people think more positively. Or it could be a third factor. For example, Bruce would say it's the therapeutic alliance that causes both, that people get less depressed but also that they think more positively. And there's no way to examine that causally, and that's something we know from epidemiology. If you want to examine things in the population, you also often cannot do randomized trials. So you have these population studies where you have to examine associations and then try to find out if there is a causal chain. And to do that, you need a lot of research. So in the case of psychotherapy, you cannot just look at the start of the therapy and the end of the therapy. You have to take measurements in between, and you have to show that the change in cognitions comes before the change in depression. But that's still not evidence of causality. It helps if there is a dose-response relationship, so that people who change more in their thinking also change more in their depression. But even then, you're never certain whether that's causal.
And this is the reasoning that the tobacco industry used for decades: it's not the tobacco that causes the cancer, it's all the other factors. And Bruce makes a similar argument: okay, if you believe that tobacco causes cancer, then why not believe that it's the common factors that cause the change? But I'm too much of a scientist to make that step too fast. In epidemiology, we know that you need a lot of evidence. So it's not only the temporal association, the change in thinking coming before the change in depression, and the dose-response relationship. You also have to have a good theoretical framework to explain why the cognitions cause the change. And you have to have supporting evidence, for example from lab studies or from animal studies, to support your claim that this is the causal mechanism of psychotherapy. And when you look at psychotherapies, there is no psychotherapy for which we can say: okay, for this one we now know how it works. Unfortunately, we don't have that. So from your perspective, in one way, we could almost never know, because the number of alternative explanations is so vast. But in another way, even if we could never know for sure, or likely never know for sure, I don't want to say never, if folks were to do the work, we could have more and more evidence, let's say, for the argument that changing maladaptive thoughts changes depression. It's one of those cases where, yes, we could do good science and get further and further toward a causal claim, and at some point you could feel good saying: it seems most likely this is what's causing this, but I can't say 100%. I feel close to certain, very confident, but as a good scientist, still not certain. Yeah, that's a good summary, but I would be willing to say that at 95%. And we're not yet at 50%. Right. Right.
Well, and that percentage is just for the sake of argument; it's not based on anything. I'm also thinking about my own career: okay, what are the future directions, what are the big questions that we really haven't answered? Yeah, I think it is more than just an academic puzzle. Partially it's an academic puzzle, a scientific puzzle, but to truly understand these things, to be able to say it's not this but it is that, that would be great, and I think that's really important. I mean, one of the problems of our field is that we have done the simple things. We have examined the simple research questions that can be answered relatively easily. That's all done; we don't need to do that again. I see all these new randomized trials coming up every year. They're not needed. What we do need is very large trials in which we have a lot of statistical power to examine predictors and moderators, and to follow what happens exactly with people in these trials. But those trials are very expensive, and it's very difficult to get them funded. We don't have a drug company or something like that supporting this kind of work, but for the field it's really very important. I worked with the Wellcome Trust a couple of years ago to build up a new grant scheme for mental health research, and they're trying to do this; I think they're funding very large trials. And of course, in the US there used to be these large trials. I don't know whether they are still being done. But that's the way forward, not the simple, small studies that we've been doing for the past 50 years. That's not needed.
We need large, well-designed, well-powered trials of a few thousand people. There are a few of these trials, and I really think that's the way forward, the only way forward. When I was preparing for this, I came across a paper you wrote, and it relates to what you were just talking about, on something like how to ensure that the study of your treatment comes out as effective. Do you remember what the title was? "How to prove that your therapy is effective, even when it is not." That was it. It's a guideline. I loved it. I think we could use more humor in our writing and our thinking. What's that? Not everybody likes it. Not everybody can have fun. You know, it's okay to smile every once in a while, and to laugh. But it was tremendously helpful, because it's telling researchers: hey, here are some common pitfalls, some things to watch out for when you're designing your studies. And also for reviewers; that's how I read it. So when I'm a reviewer on an RCT, I can go to your paper and say, okay, let's make sure they didn't commit any of these cardinal sins. I have a list now of the sins. It's tremendously helpful. And why not make it a little fun when you write? But anyway, that's my opinion. So could you talk a little bit about, if I'm pronouncing this correctly, Metapsy, your open-access meta-analytic database? Yeah. I mean, I'm retired now, since two and a half years. Judging by your output, you'd never know it. That means I don't get paid anymore, but I continue with the work that I like, and Metapsy is the thing that I continue with.
So I stopped with all the other research and continue with Metapsy and a few other things that I do for the WHO and the APA. Metapsy, what we try to do there is collect all the randomized controlled trials examining the effects of psychotherapies for mental disorders. And they are all living systematic reviews. That means we search the literature at least once a year, and for the bigger data sets every couple of months, and add new studies to the databases that we have. At this moment, we have about 30 databases that are ready. Twenty of them are online, and it's all open science, so you can go to the website and download the whole data set. Or you can use the shiny apps, where you can run meta-analyses online at the Metapsy website without any statistical software. So if you want to know whether CBT works for older adults with depression, you can go to the website, select those studies, and run a meta-analysis online with everything on it: risk of bias, heterogeneity, numbers needed to treat, forest plots, whatever. The ambition is that we keep up with all the research on psychotherapies for mental disorders. There are now about 30 teams across the whole world who each coordinate one or more data sets. So it's not only me or my team doing this; it's a collaborative effort with 30 research teams across the world, each leading one of the data sets. And the big advantage is not only that we have a standard methodology that we use across all databases, but also that we can do research across different disorders. So we published last year in Psychological Bulletin a meta-analysis of more than a thousand randomized trials of psychotherapies versus control groups for, I think, 12 mental health problems. And I published a paper last year in JAMA Psychiatry about CBT for 10 mental health conditions.
And so we can look at psychotherapy research from a broader perspective. I think it's also efficient. All these people doing meta-analyses are doing these searches again, selecting studies, doing the ratings and all that. You can just go to Metapsy, see the search strings we use, and download the data sets. There's no need to do all that work all over again. It can take psychotherapy research to the next level, because we can look at it from a much broader public health perspective. I think it's a great initiative, and I really enjoy working on it. Yeah, for anybody who hasn't checked it out, it's a wonderful tool. It's very easy to use. Obviously you can get really nuanced and sophisticated, but you can also keep it a little more surface-level. It's great for researchers and that sort of thing, but it's also really nice if you're a student and want to actually get a picture of what's happening. And if you're learning how to do meta-analysis, it's a very user-friendly way to get started, and Dr. Cuijpers has a free book on how to do a meta-analysis, which is available on the website. So there are tons of resources there that are definitely worth taking advantage of. One quick nerdy meta-analysis question, because I was just on a dissertation defense where the student's dissertation was a methods dissertation, some more theoretical stuff, a simulation study and a review study. She was arguing that we should weight studies within a meta-analysis based on study quality, and of course that needs to be defined beforehand. Just because this is top of mind for me, do you have any thoughts about that? I understand where it comes from, but the problem is: how do you measure quality?
And I don't think there is a good measure that can be used everywhere, in all trials. You can look at bias, so you can look at risk of bias. But, for example, the Cochrane risk of bias tool is so detailed that it also doesn't work anymore. We know that there are biases; some of them are very obvious, and some of them are more difficult to detect. I don't think you should weight effect sizes based on quality. But you should always look at what the studies with low risk of bias say, because those are the best studies. They are the gold standard; they are the ones you can trust. And if the other studies deviate from that, then you have a problem, and maybe it's better then to ignore the studies that are not at low risk of bias. So I need to let you go. But before I do: based on your work, based on your thinking, do you have one or two suggestions for practitioners who might be listening? And then do you have one or two suggestions for training programs, in terms of what we should be focusing on more in our training? And all that in one minute, right? I know. Well, for clinicians, I think it's important to remember that when somebody does not respond to the treatment, that has nothing to do with how you do the therapy, and it is not always evidence that your diagnosis is wrong. It's just that psychological therapies work, but not that well. And you should be honest about that with patients, because therapies do not work for everyone; in most cases, they do not work. So it's about searching for the right therapy that helps this patient, and that can take a while. That can take several therapies, maybe different therapists, maybe drugs, maybe other things that need to be done. When something is evidence-based, that doesn't mean it always works. It's better than doing nothing, that's for sure.
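Dr. Cuijpers' suggestion above, don't down-weight studies by quality, but instead check what the low-risk-of-bias studies say on their own, can be sketched as a toy sensitivity analysis. The effect sizes below are made-up illustrative numbers, and the simple inverse-variance (fixed-effect) pooling is an assumption for brevity; real meta-analyses of psychotherapy trials typically use random-effects models.

```python
# Hypothetical effect sizes (Hedges' g) with variances and a
# low-risk-of-bias flag. Illustrative numbers, not real trial data.
studies = [
    {"g": 0.80, "var": 0.04, "low_rob": False},
    {"g": 0.30, "var": 0.02, "low_rob": True},
    {"g": 0.65, "var": 0.05, "low_rob": False},
    {"g": 0.35, "var": 0.03, "low_rob": True},
    {"g": 0.40, "var": 0.02, "low_rob": True},
]

def pooled_fixed_effect(subset):
    """Inverse-variance (fixed-effect) pooled estimate."""
    weights = [1.0 / s["var"] for s in subset]
    total = sum(w * s["g"] for w, s in zip(weights, subset))
    return total / sum(weights)

all_g = pooled_fixed_effect(studies)
low_rob_g = pooled_fixed_effect([s for s in studies if s["low_rob"]])
print(f"all studies:           g = {all_g:.2f}")
print(f"low risk of bias only: g = {low_rob_g:.2f}")
```

When the two estimates diverge, as they do here by construction, that is the "problem" Dr. Cuijpers describes, and the low-risk-of-bias subset is the one to trust.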
But it's not true that it should work for everyone. That's my most important recommendation. For research, it's basically the same thing: we should stop chasing dreams and get back to the reality that therapies work, but not that well, and that we should try to improve them stepwise. And for training, it's the same thing. You can and should learn techniques; that's part of the training. But that doesn't mean you can help everyone, and not everyone will get better. It's also a training in modesty about what you can do for patients, and in the fact that you need a long breath. Wonderful. Well, Dr. Cuijpers, I cannot tell you how much I appreciate this. It was nice talking to you. Ladies and gentlemen, Dr. Pim Cuijpers. That's a wrap on our conversation. As I noted at the top of the show, it'd be much appreciated if you spread the word to anyone else who you think might enjoy it. Until next time. [Music]