Published tests are developed by authors who have a superior understanding of a subject and of measurement. The author's goal in developing a commercially published test is often to provide other professionals with a means of assessing some characteristic of other people.
Commercially published tests provide three great benefits....
Tests can be used in research or in individual assessment. Individual assessment is a process of human interaction. During the process, one individual tries to form an understanding (opinion or belief) about the person being assessed. In some cases, it is possible to provide questionnaires or present tasks which collect a meaningful sample of behavior which may assist the assessor in forming his/her professional opinion. Standardized tests may be used to attempt to measure the person's skills and abilities, prevalence of target behaviors, his/her reported feelings and beliefs, etc.
Tests which measure skills and abilities may be variously called aptitude tests, achievement tests, diagnostic tests, IQ tests, tests of maximum performance, etc. These may be classed as falling within the "hard" area of measurement. All this means is that an examinee cannot adopt a responding strategy on these tests so as to "fake good" and obtain a higher score than honest effort would produce. However, it is still possible to "fake bad", or simply to put forth less than maximum effort, and obtain a lower score than honest effort would produce.
Tests which ask questions about an individual's past behaviors, beliefs or feelings are classed as operating in the "soft" area of measurement. These inventories may be referred to as personality tests, interest inventories, honesty and reliability tests, etc. These tests do not have a single "right" answer in the same way that one may give a correct response on a math test. Usually, items are self-reports of thoughts, beliefs and feelings, or past behavior.
For soft area tests, there are no right or wrong responses -- only honest and dishonest ones. Herein lies a problem. It is possible for examinees to design their responses in order to present themselves in a favorable light to an assessor (called faking good), or to present themselves in an unfavorable light (called faking bad). Sometimes people may even be uncooperative and respond randomly. Psychometricians today seldom use the terms fake-good or fake-bad, preferring instead to refer to impression management (IM).
It is also possible for examinees to respond differently from one day to another (depending on changes in feelings, beliefs, or due to the influence of some outside factor). Their answers may still be honest even if they are quite different from a previous occasion.
From the foregoing, one might deduce that an examiner must interpret the scores of soft-area tests. Often the examiner must assess the client's responding strategy before (s)he even begins to interpret what a soft area test suggests about an examinee. Many such tests include "lie" scales (or negative-IM scales) which attempt to identify examinees whose responding strategy is grossly dishonest. However, even the best of these scales cannot control very well for some individuals' natural tendency toward "openness" and "frankness" and the tendency of others to be more circumspect.
Soft area tests may be very useful. For example, counsellors who have the cooperation of their clients often rely on the results of soft-area tests. Vocational counselling is greatly facilitated by a group of such tests called interest inventories.
The great danger with all soft area tests is that beginners (or those from other disciplines -- e.g., lawyers) often want to read more into a test result than is there.
How much reliance can be placed on soft-area tests as a source of measurement data when conducting assessments? Most well-designed and well-researched soft-area scales comprise 15 or more unique items, and research shows that their scores may be used to rank an individual as Low, Normal, or High on some target characteristic. Attempting to read much more than this into a test score is seldom supportable. The experienced professional knows that a difference of a couple of score points either way should signify nothing on a soft area test; the problem is that at some specific point it nonetheless becomes necessary to begin calling a score high or low. This fact often frustrates computer analysis of test scores, because a computer requires an algorithm that specifies exactly when a score is to be considered high.
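The dilemma above can be made concrete with a small sketch. The cutoff values here (40 and 60, on a hypothetical scale) are assumptions chosen purely for illustration, not taken from any published test; the point is only that a computerized report must commit to exact thresholds even though a one-point difference should carry no meaning.

```python
# Illustrative sketch of score banding in a computerized report.
# The cutoffs (40 and 60) are hypothetical, for demonstration only.

def band_score(score: int, low_cutoff: int = 40, high_cutoff: int = 60) -> str:
    """Rank a raw scale score as Low, Normal, or High."""
    if score < low_cutoff:
        return "Low"
    if score > high_cutoff:
        return "High"
    return "Normal"

# The weakness described above: scores of 60 and 61 differ by a
# single point, yet the algorithm must label them differently.
print(band_score(60))  # Normal
print(band_score(61))  # High
```

An experienced examiner would treat 60 and 61 as equivalent; the program, by construction, cannot.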
If you require the type of information that is provided by standardized tests, a good solution may be to retain the services of an experienced professional who can help you design your assessment program. Best practices in the use and interpretation of test scores have been well studied, and are the subject matter of advanced-level courses in Education, Psychology, and Business.
Copyright © 1996, M.D. Angus & Associates Ltd.