|Written by Rhonda Osborne, LPC|
For the past fifty years or so a rumor has been floating around that “shrinks” and therapists tend to be nuts. It is the running joke within psychology departments, and unfortunately, a fair number of professors in the field walk ridiculously close to the cliff of both creepy and clinically insane.
Graduate psych students are easily identified by the passenger lab rats riding on their right shoulders to class, the dorky shirts with the screen-printed brain on the front, and their habit of showing up at Halloween parties costumed as Sigmund Freud. We are a weird bunch.
Then take the subcategory of mental health professionals: we are not only weird, but the vast majority of us carry a long list of trauma and unfinished business that too often interferes with good clinical work. And after all of that, we are still offended when the science world questions the validity of our research and the public is leery of engaging in counseling.
It is unfortunate that Psychobabble speaks so poorly of its own kind. At the same time, how else is the world to learn that some professionals within the field acknowledge there are plenty of kooks out there who make the job even more difficult for those committed to the science of the emotional brain? “Professionals” in this field cannot be lumped together, any more than all Middle Easterners can be considered terrorists. A broad spectrum of quality exists in this field, much as it does in every professional field.
I was reminded of this recently when I attended an equine-assisted psychotherapy training in Washington. For those in mental health work, it is fairly well accepted that incorporating animals into therapy improves clinical outcomes for those suffering from depression, anxiety, boundary and assertiveness issues, and confidence and communication problems. The research in the area of equine-assisted treatment is just beginning, and at this time the results look promising.
However, one must keep in mind who owns such research. Most of the authors are therapists themselves. If you are uncertain why this would matter, please begin reading this article again. Second, the method by which we measure outcomes is somewhat ambiguous. Pre- and post-measures are often taken with the “procedure” in between being abstract, with few controlled variables. Whether the horse is the reason post-test scores improve remains a mystery.
Back to Washington: this seminar leader was a licensed clinical social worker who had trained horses for years. She had a background in community mental health and was trained in both EAGALA and EPONA (organizations that use horses in therapy).
All seemed legitimate, and to tell you the truth she was a heck of a nice lady. She shared the results of a five-participant study of female trauma victims who agreed to participate in a study regarding the benefits of equine-assisted psychotherapy. According to her results, all five participants vastly improved after 12 weeks of therapy, far more than the improvement seen with most evidence-based modalities. When questioned about her procedures and variable controls, however, interesting responses emerged.
Pre- and post-test measurements were self-determined, so improvements were based on whether the client reported feeling better, regardless of whether changes in their life had occurred. There was no random assignment to the study group, meaning these five women had already decided equine therapy would be helpful to them before participation in the study was even offered—thus, they were not exactly unbiased participants.
There were no control groups for comparison, such as a group of individuals who received another form of 12-week therapy in the same environment (which happened to be a majestic countryside with a view of the mountains), or a group who received no therapy at all.
When the question of a placebo effect arose, the leaders seemed offended. One of them even said “Oh, I assure you, this was not a placebo effect. This stuff works.” And I’m sitting there thinking, after six years of research standards being hammered into my head, “Well, that’s good enough for me. Let’s go with it.”
Even a realistic comment from one participant (i.e., Rhonda Osborne), pointing out that people tend to find what they go looking for, and that this alone will skew the study results, was frowned upon.
The point is the mental health world is full of nutty people, and I’m not talking about the clients. It must be acknowledged, it must be confronted, and the consumer must be wise in their selection of both provider and educator.
But along with the bad, there is some good. The study just discussed, for all its limitations, still offered valuable learning points. One must sift through the material to determine what part, no matter how small, is legitimately meaningful—but it’s in there. I am hopeful that you remain skeptical while also opening yourself to the possibility of finding a gem in a pile of plain old gravel.