Summaries And Meta Analyses

There are, despite the claims of Seligman, a multitude of studies involving random assignment that attempt to assess what is actually done in the field, without limiting the practitioner to following a carefully crafted protocol. See, for example, the article by Landman and Dawes, discussed in greater detail later, for a description of the diversity of the studies involving random assignment.

These diverse studies have been either summarized qualitatively, analyzed by "vote counts" based on their outcomes, or subjected to meta-analysis to reach general conclusions, because each in fact concerns one type of distress in one setting, often with a single or only a few therapists implementing the procedure under investigation. (Note that the same limitation applies to the "validated efficacy studies" as well.) Most summaries and meta-analyses consider reductions in the symptoms that the people entering therapy find distressing or debilitating. Some measure of these symptoms' severity is obtained after the random assignment to treatment versus control; the degree to which the people in the randomly assigned experimental group differ from those in the control group on the symptoms is assessed, and the differences are averaged across studies. (Occasionally, difference scores are assessed as well.) The summaries and meta-analyses concentrate on symptoms, which can be justified because it is the symptoms that lead people to come to psychotherapists. The summaries and meta-analyses involve "combining apples and oranges," which can be justified by the fact that the types of nonprotocol therapies are themselves extraordinarily diverse (fruits). For example, simply providing information on a random basis to heart attack victims in an intensive care unit can be considered to be psychotherapy.
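The averaging step described above (a standardized difference in post-treatment symptom severity per study, then a mean across studies) can be sketched as follows. The study data here are entirely hypothetical and serve only to illustrate the computation:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardized mean difference in symptom severity (control minus
    treated, so a positive d means the treated group ended up less
    symptomatic)."""
    n1, n2 = len(treated), len(control)
    pooled_sd = sqrt(((n1 - 1) * stdev(treated) ** 2 +
                      (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
    return (mean(control) - mean(treated)) / pooled_sd

# Each study contributes post-treatment symptom scores for the two
# randomly assigned groups (numbers invented for illustration).
studies = [
    ([4, 5, 3, 6], [7, 8, 6, 9]),
    ([5, 6, 4], [6, 7, 8]),
]
effect_sizes = [cohens_d(t, c) for t, c in studies]
pooled_effect = mean(effect_sizes)  # the averaged effect size
```

A real meta-analysis would weight each study (e.g., by inverse variance) rather than take a simple mean, but the logic is the same.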

The "classic" meta-analysis of psychotherapy outcomes was published by Smith and Glass in 1977, and there has been little reason since that time to modify its conclusions. In general, psychotherapy is effective in reducing symptoms, to the point that the average severity of symptoms experienced by the people in the experimental group after completion of therapy is at the 25th percentile of the control group (i.e., less severe than the symptoms experienced by 75% of the people in the control group after the same period of time). That translates roughly (assuming normality and equal variances of the two groups) into the statement that if we chose a person at random from the experimental group and one at random from the control group, the one from the experimental group has a .67 probability of having less severe symptoms than the one from the control group. The other major conclusions were that the type of therapy did not seem

to make a difference overall, the type of therapist did not seem to make a difference, and even the length of psychotherapy did not seem to make a difference. These conclusions are based both on evaluating the consistency of results and on evaluating their average effect sizes. They have survived two main challenges.
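Under the stated normality and equal-variance assumptions, the 25th-percentile figure and the .67 probability are two transformations of the same standardized effect size. A quick check of that arithmetic (a sketch of the standard conversion, not Smith and Glass's actual computation):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# The treated-group mean sitting at the 25th percentile of control-group
# symptom severity corresponds to an effect size d with phi(d) = 0.75.
d = 0.6745  # approximate normal quantile for 0.75

# Probability that a randomly chosen treated person has less severe
# symptoms than a randomly chosen control person: phi(d / sqrt(2)).
p_superiority = phi(d / sqrt(2.0))
```

The computation yields a probability near .68, in line with the roughly .67 figure quoted above.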

The first is that while Smith and Glass included an overall evaluation of the "quality" of each study, they did not specifically look at whether the assignment was really random. To address that problem, Landman and Dawes published a paper in 1982 reporting an examination of every fifth study selected from the Smith and Glass list (which had grown to 435 studies by the time it was given to Landman and Dawes); these researchers concluded, with a very high degree of inter-rater reliability based on independent judgments, that fully a third of the studies did not involve true random assignment. A particularly egregious example involved recruiting students in a psychology department with posters urging group psychotherapy to address underachievement; the authors then compared the students who self-selected for this treatment with some students with similar GPAs "randomly" chosen from the registrar's list, who for all the experimenters knew had given up and left town. A more subtle example may be found in comparing people who persist in group psychotherapy with the people in an entire randomly selected control group. Yes, the two groups were originally randomly constructed, but the problem is that we do not know which people in the control group would have stayed with the group psychotherapy had they been assigned to the experimental group, thereby invalidating the control group as a comparison to the experimental one. While it is possible to maintain that it seems bizarre to include in an evaluation of group psychotherapy those who did not actually participate in the groups, if there is really an effect of a particular treatment and assignment is random, then that effect will exist, albeit in attenuated form, when the entire experimental group is compared to the control group. (A mixture of salt and fresh water is still salt water.)
The way to deal with selective completion is to study enough subjects to have a study powerful enough to test effects based on subsets of the people assigned to the experimental manipulation (e.g., those who completed it). Landman and Dawes deleted the 35% of their studies that they believed not to be truly random from their meta-analysis, and reached exactly the same conclusions Smith and Glass had earlier.

A second problem is the "file-drawer" one. Perhaps there are a number of studies showing that psychotherapy does not work, or even having results indicating that it might be harmful, which simply are not published in standard journals, either because their results do not reach standard criteria of "statistical significance" or because reviewers, reacting to the unpopular conclusions, note flaws that might have been (often unconsciously) overlooked had the conclusions been more popular. The problem has been addressed in two ways. First, the number of such studies would have to be so large that it appears unreasonable to hypothesize their existence in file drawers. Second, it is possible to examine the distribution of the significance statistics actually presented in the literature and show that their values exceed (quite radically, in fact) those that would be predicted from random sampling above some criterion level that leads to publication of the results.
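The first of these responses is often formalized as Rosenthal's "fail-safe N": the number of unpublished null-result studies that would have to be sitting in file drawers to pull the combined result below significance. A minimal sketch, with hypothetical per-study z-scores:

```python
from math import sqrt

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N for k studies combined via summed z-scores.

    With n additional null (z = 0) studies, the combined statistic is
    sum(z) / sqrt(k + n); solving sum(z) / sqrt(k + n) = z_alpha for n
    gives the number of file-drawer studies needed to reach exactly the
    significance threshold.
    """
    k = len(z_scores)
    return (sum(z_scores) / z_alpha) ** 2 - k

# Hypothetical z-scores from five published studies:
zs = [2.1, 1.8, 2.5, 1.2, 2.9]
n_needed = fail_safe_n(zs)
```

When the required number of hidden studies vastly exceeds any plausible count of unpublished work, the file-drawer explanation becomes untenable.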

Another problem concerns the identity of the psychotherapists. Here, there is some ambiguity, because the studies attempting to "refute" the conclusion of Smith and Glass are generally poorly conceived, in that the psychotherapy subjects rather than the psychotherapists themselves are sampled and used as the unit of measurement, especially for statistical tests. But if we want to generalize about psychotherapists, then it is necessary to sample psychotherapists. For example, if a standard analysis of variance design is used where therapists are the "treatment" effect, then generalization to therapists, or to various types of therapists, requires a "random effects" analysis rather than a "fixed effects" one. One study did in fact follow this prescription, but then, after finding no evidence for an overall therapist effect, drew post hoc conclusions about how the more successful therapists differed from the less successful ones!
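The random-effects point can be made concrete. With therapists as the sampled units in a balanced one-way layout, the between-therapist variance component is estimated from the ANOVA mean squares, and generalization to therapists in general rests on that component rather than on client-level comparisons. A sketch with hypothetical outcome scores (three therapists, three clients each; all numbers invented):

```python
from statistics import mean

def variance_components(groups):
    """One-way random-effects ANOVA for a balanced design.

    groups: one list of client outcomes per therapist.
    Returns (MS_between, MS_within, between-therapist variance component).
    """
    k = len(groups)                  # number of therapists sampled
    n = len(groups[0])               # clients per therapist (balanced)
    grand = mean(x for g in groups for x in g)
    ms_between = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - mean(g)) ** 2
                    for g in groups for x in g) / (k * (n - 1))
    # Method-of-moments estimate; truncated at zero.
    sigma2_therapist = max(0.0, (ms_between - ms_within) / n)
    return ms_between, ms_within, sigma2_therapist

outcomes = [[5, 6, 7], [6, 5, 6], [7, 6, 8]]  # hypothetical data
mb, mw, s2 = variance_components(outcomes)
```

A fixed-effects analysis would test only these particular therapists; the variance component above is what licenses statements about therapists as a population.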

The results of studies treating each psychotherapist as a separate sample observation generally indicate that beyond a very rudimentary level of training, credentials and experience do not correlate (positively) with efficacy, as summarized by Dawes in Chapter 4 of his 1994 book. There is some slight evidence that people who are considered "empathetic" tend to achieve better outcomes (where this characteristic is assessed by colleagues, not in a circular manner by clients who themselves improve); also there is some

evidence that when therapists agree to engage in different types of therapy, they do best applying the ones in which they have the greatest belief. (It is possible to question the importance of the latter finding, given that outside of randomized controlled studies, therapists tend to provide only the type of psychotherapy that they believe to be most helpful to their clients.)
