Assessing The Value Of Data

Assessment methods differ in their accuracy. For example, self-reports from clients or significant others may not accurately reflect what occurs in real life. Observers may be biased and offer inaccurate data. Measurement inevitably involves error. One cause of systematic error is social desirability: people present themselves in a good light. Criteria that are important to consider in judging the value of assessment data include: (1) reliability, (2) validity, (3) sensitivity, (4) utility, (5) feasibility, and (6) relevance. Reliability refers to the consistency of results (in the absence of real change) provided by the same person at different times (time-based reliability), by two different raters of the same events (individual-based reliability), as in inter-rater reliability, or by parallel forms or split halves of a measure (item-bound reliability). Reliability places an upper bound on validity. For example, if responses on a questionnaire vary from time to time (in the absence of real change), it will not be possible to use the results of the measure to predict what a person will do in the future.
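The two most common reliability checks described above can be sketched numerically. The code below is a minimal illustration, not part of any assessment instrument; the rating and questionnaire data are hypothetical values invented purely for demonstration.

```python
# Minimal sketch of two reliability checks: inter-rater agreement
# (two observers coding the same events) and test-retest correlation
# (same measure given twice in the absence of real change).
# All data below are hypothetical, for illustration only.

def percent_agreement(rater_a, rater_b):
    """Inter-rater reliability: share of events both raters coded identically."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

def pearson_r(x, y):
    """Test-retest reliability: correlation of scores across two occasions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two observers coding the same ten events (1 = behavior occurred).
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(percent_agreement(rater_a, rater_b))  # 0.8

# Same questionnaire completed twice, two weeks apart.
time1 = [12, 15, 9, 20, 11]
time2 = [13, 14, 10, 19, 12]
print(round(pearson_r(time1, time2), 3))  # high consistency, near 1.0
```

A correlation near 1.0 across occasions, or agreement near 100% across raters, indicates consistent (reliable) measurement; low values signal that observed scores are dominated by error.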

Validity concerns the question: Does the measure reflect the characteristic it is supposed to measure? For example, does behavior in a role play correspond to what a client does in similar real-life situations? Assessment is more likely to be informative if valid methods are used—methods that have been found to offer accurate information. Direct measures (e.g., observing teacher-student interaction) are typically more valid than indirect ones (e.g., asking a student to complete a questionnaire assumed to offer information about classroom behavior). Validity (accuracy) is a concern in all assessment frameworks; however, the nature of the concern differs in sign and sample approaches. In a sign approach, behavior is used as a sign of some entity (such as a personality trait) at a different level. The concern is with vertical validity: Is the sign an accurate indicator of the underlying trait? Horizontal validity is the concern in a sample approach, where different levels (e.g., behavior and personality dispositions) are not involved. Examples include: (1) Does self-report provide an accurate account of behavior and related circumstances? (2) Does behavior in a role play reflect what occurs in real life? Different responses (overt, cognitive, and physiological) may or may not be related to an event. For example, clients may report anxiety but show no physiological signs of anxiety. This does not mean that their reports are inaccurate; for those individuals, the experience of anxiety may be cognitive rather than physical.

The sensitivity of measures is also important to consider: will a measure reflect changes that occur? The utility of a measure is determined by its cost (time, effort, expense) balanced against the information it provides. Feasibility is related to utility; some measures will not be feasible to gather. Utility may be compromised by the absence of empirically derived norms for a measure. Norms offer information about the typical (or average) performance of a group of individuals and allow comparison of data obtained from a client with data from similar clients. The more representative the normative sample is of the client, the greater the utility of a measure for that client. Relevance should also be considered. Is a measure relevant to presenting problems and related outcomes? Do clients and significant others perceive it as relevant?
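The use of norms described above amounts to a simple standardization: expressing a client's score relative to the normative mean and spread. The sketch below is illustrative only; the normative mean and standard deviation are hypothetical numbers, not values from any published instrument.

```python
# Minimal sketch of comparing a client's score to empirically derived norms.
# The normative mean and SD are hypothetical, invented for illustration.

def z_score(client_score, norm_mean, norm_sd):
    """Number of standard deviations the client falls from the normative mean."""
    return (client_score - norm_mean) / norm_sd

# Suppose a questionnaire has a normative mean of 50 and SD of 10
# in a sample similar to the client.
z = z_score(client_score=68, norm_mean=50, norm_sd=10)
print(z)  # 1.8 -> the client scores well above the typical respondent
```

The value of such a comparison depends directly on the point made in the text: if the normative sample is unlike the client, the resulting z-score can mislead rather than inform.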
