CHAPTER II

PROCEDURES OF THE STUDY

The Experimental Plan

     This study utilizes an experimental plan known as a complex factorial design.1  The design is presented schematically in Figure I.  Referring to the front face of Figure I, it will be noted that there are three variables: therapy, age, and education.  Therapy is varied in three ways, age in two ways, and education in two ways.  Thus, there are twelve separate conditions (or cells), which vary systematically from those (in the first cell) who are characterized by therapy group one, age group one, and educational group one, to those (in the last cell) who represent the combination of therapy group three, age group two, and educational group two.
     Within each of these twelve cells (or sets of conditions) there are three numbers representing three persons—all of whom meet the conditions for the particular cell.
     Figure I represents three dimensions.  The depth in the design depicts simultaneous measurement in the areas of intelligence, personality, and mathematics.  The same subjects, then, are measured in each of these areas.
     In any area, the score for an individual is entered into its appropriate place within the particular cell that characterizes him.
     The score for each individual indicates the amount of change that occurred over the interval between testings.
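     As an illustration of the cell structure just described, the following sketch in Python may be helpful; it is not a reproduction of Figure I, and the group labels are placeholders rather than the study's own category definitions.

```python
from itertools import product

# Illustrative sketch of the 3 x 2 x 2 factorial layout described above.
# The category labels are placeholders, not the study's own definitions.
therapy_groups = ["therapy 1", "therapy 2", "therapy 3"]
age_groups = ["age 1", "age 2"]
education_groups = ["education 1", "education 2"]

SUBJECTS_PER_CELL = 3  # three persons meet the conditions of each cell

cells = list(product(therapy_groups, age_groups, education_groups))
assert len(cells) == 12  # 3 x 2 x 2 = 12 conditions

# The "depth" of Figure I: each subject in each cell contributes a change
# score in each of the three areas of measurement.
areas = ["intelligence", "personality", "mathematics"]
design = {cell: {area: [None] * SUBJECTS_PER_CELL for area in areas}
          for cell in cells}

print(len(design), "cells, each holding", SUBJECTS_PER_CELL, "subjects")
```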

Selection of Subjects

     The dianetic center publicized rather widely (in newspapers and correspondence) the advent of a new series of sessions of dianetic therapy, and called for a meeting of all those interested in participating.  At this meeting, the director first talked generally about dianetic therapy.  He pointed out that this next series was planned for the following two-month period.  He requested that only those apply who could definitely set aside a number of hours each week during this period.  The director then discussed the cost of this series.  He asked all those who could fulfill the obligation of time and money to come to the secretary at the end of the meeting for the purpose of recording names, addresses, and free times.
     After the meeting, letters were sent to the first twenty-four applicants, notifying them that another meeting would be held for the purpose of routine psychological testing.  No other selective device was utilized.

The Test Materials

Interview

     The examiner brought to each personal interview a prepared sheet, which called for the name, amount of previous exposure to dianetic therapy, date of birth, and educational history of each subject.  Although this statistical information might have been obtained with less trouble by including a specially prepared form with the regular test materials, the interview served as a vehicle for another purpose.  The situation provided an opportunity to stimulate motivation.  This was attempted by impressing upon each subject the idea that maximal effort would yield test results the dianetic center could use to plan his therapeutic procedure for greater benefit.
     The results of the interview and tests were not made available to the center until after the completion of the study.

Tests of Intellectual Functioning

     There is a high intercorrelation among most of the standard tests of intellectual functioning.  Because such tests vary in content, however, it is desirable to have more than one measure, so that the mean result will be more valid and reliable in terms of internal ecological considerations.2  That is to say, the combined score is a more representative measure than either of its components.
     The first test in this area was the SRA Non-Verbal Form.3 The alternate form for this test is the SRA Verbal Form.4 The forms are highly correlated and both show significant validity and reliability.5,6 The Non-Verbal Form was given in the first testing situation and the Verbal Form in the second.
     The second test in this area was the Revised Alpha Examination, Form 5.7  The alternate form for this test is the Revised Alpha Examination, Form 7.8  The forms are highly correlated, and both show significant validity and reliability.9,10,11  Form 5 was given in the first testing situation and Form 7 in the second.
     Since both of these types of tests must be taken into account for better representativeness in the area of intellectual functioning, some combination of their scores is necessary.  The SRA Manual12 gives enough data for the calculation of normative standard deviations, as does the Wells Manual.13  Since both of these error terms reflect the variation of a normal population, the difference between them is due mostly to differences in test construction.  The standard scores are therefore comparable, being corrected for differences in test construction.  The raw scores were converted, by means of the appropriate standard deviation, into standard scores, and these were combined for each subject to represent his performance in the area of intellectual functioning.
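     The conversion just described can be illustrated with a brief sketch.  The normative means and standard deviations below are invented placeholders rather than the values given in the SRA or Wells manuals; only the arithmetic of the conversion is intended to be exact.

```python
def standard_score(raw, norm_mean, norm_sd):
    """Convert a raw score to a standard (z) score using normative values."""
    return (raw - norm_mean) / norm_sd

# Placeholder normative values; the actual figures come from the test manuals.
SRA_MEAN, SRA_SD = 30.0, 8.0
ALPHA_MEAN, ALPHA_SD = 110.0, 15.0

def combined_intelligence_score(sra_raw, alpha_raw):
    """Combine the two standardized scores into a single measure of
    intellectual functioning, as described in the text."""
    return (standard_score(sra_raw, SRA_MEAN, SRA_SD)
            + standard_score(alpha_raw, ALPHA_MEAN, ALPHA_SD))

print(combined_intelligence_score(sra_raw=34, alpha_raw=119))  # 0.5 + 0.6 = 1.1
```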

Tests of Arithmetical Ability

     In the area of mathematical ability, test constructors have taken cognizance of the factors of manipulation of fundamentals and of special reasoning processes.
     The reasoning factor was measured by the Arithmetical Reasoning Test.14,15  This test has alternate forms (A and B).  The forms are highly correlated, and both show significant validity and reliability.16  Form A was administered in the first testing situation and Form B in the second.
     The manipulation of fundamentals was measured by the Schorling-Clark-Potter Hundred Problem Arithmetic Test.17,18  This test has alternate forms (V and W).  The forms are highly correlated and both show significant validity and reliability.19,20  Form V was administered in the first testing situation and Form W in the second.
     Since both of these factors must enter into any consideration of arithmetical ability, some combination of them would best represent performance in this area.  Thus, it was necessary to find some means of equating the two tests.  The Schorling Manual21 presents normative standard deviations, while the Cardall Manual22 gives enough data for these to be calculated.  Since both of these error terms reflect the variation of a normal population, the difference between them is due mostly to differences in test construction.  The standard scores are therefore comparable, being corrected for differences in test construction.  The raw scores were converted, by means of the appropriate standard deviation, into standard scores, and these were then combined for each subject to represent his performance in the area of arithmetical ability.
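     Where a manual supplies normative data rather than a standard deviation, the deviation must first be computed before raw scores can be equated.  A minimal sketch follows; the normative scores are invented, and the real figures come from the Cardall and Schorling manuals.

```python
import statistics

# Hypothetical normative scores such as a manual might tabulate; the real
# normative data come from the test manuals.
normative_scores = [42, 55, 48, 61, 50, 58, 45, 52]

norm_mean = statistics.mean(normative_scores)
norm_sd = statistics.stdev(normative_scores)  # sample standard deviation

# A raw score on either test can then be expressed as a standard score
# relative to its own norms, making the two tests comparable.
raw_score = 57
print((raw_score - norm_mean) / norm_sd)
```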

Test of Personality Conflicts

     To measure personality conflicts, the test chosen was Rotter's Incomplete Sentences Blank, Adult Form.23  This provided a valid and reliable score indicating the intensity of conflicts in personality.24
     The area that subsumes personality conflicts is probably the least clearly delineated in psychology.  This emphasizes the need for representativeness in measurement.  However, a study of the literature of available tests did not reveal any two group tests whose scores were comparable.  Thus, the choice was narrowed to a single measure.
     The advantages of Rotter's form were that it was specifically designed to measure personality conflicts, and that it made it more difficult than the other tests for the subjects to anticipate what was being scored.

General Remarks

     It will be noted that the tests chosen have met the criteria of being practical for group administration, having equivalent alternate forms, and being both valid and reliable measures.  Group tests were used because the time involved in the administration of individual measures would have constituted an undue interference with the dianetic center's schedules.  Alternate forms were used because, in the retest situation, it was desirable to avoid the complications that arise with increasing familiarity with the test material.  The criterion of a high degree of validity and reliability was set beyond the levels that are usually accepted.  This was desirable in that it provided a finer degree of measurement, so that subtle variations in change, if present, would be isolated by the refined statistical analysis.
     Each of the tests in this study has a manual with specific directions for administration.  These were followed exactly.

The Testing Situations

     The dianetic center offered the use of a large auditorium for the test sessions. The arms of the chairs were equipped with writing surfaces.  There was enough room for the subjects to be seated both a row and a seat apart to forestall collaboration.
     The subjects were tested simultaneously at one uninterrupted session.

Tabulation of Data

     The tests were first scored by the experimenter and then checked independently by two graduate students in psychology who were enlisted for this purpose.
     The same procedure was followed with other data.
     The number of therapeutic hours for each subject during the experimental period was cross-checked.  First, this information was kept as a continuous record by the dianetic center.  Second, it was obtained directly from the subjects during the second testing session, after the therapeutic interval, while they worked on the untimed Rotter test.

Statistical Treatment of the Data

     The method for the statistical treatment of the data, the analysis of variance of a complex factorial design, was chosen for three reasons: (1) it affords the maximum surety of the result with the smallest number of cases; (2) it enables an analysis of the interactions of the variables, with maximum surety of the result, because of its simultaneous nature; and (3) it is a refined technique which is sensitive to slight changes.25
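     A present-day reader could reproduce this kind of analysis with standard statistical software.  The sketch below, using the Python statsmodels library on invented data, shows one way to fit a three-way factorial analysis of variance of this general form; the column names, scores, and replicate counts are illustrative only.

```python
import itertools

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented coded difference scores: two replicates per cell of a 3 x 2 x 2
# layout (the study itself uses three subjects per cell).
rows = []
for i, (t, a, e) in enumerate(itertools.product(["t1", "t2", "t3"],
                                                ["a1", "a2"],
                                                ["e1", "e2"])):
    rows.append({"therapy": t, "age": a, "education": e, "score": 5 + i % 4})
    rows.append({"therapy": t, "age": a, "education": e, "score": 6 + i % 3})
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all two- and three-way interactions.
model = ols("score ~ C(therapy) * C(age) * C(education)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```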

Difference Scores

     For each subject, in each area of measurement, there was a first testing session score and a second testing session score.  The first score was subtracted from the second.  Thus, a positive difference indicated a greater numerical performance on the second test.  A zero difference indicated that the performance on the first test was the same as the performance on the second.  A negative difference indicated a lesser numerical performance on the second test.
     Scanning the array of difference scores for all subjects in each area, the greatest negative value was noted.  Then, one was added to the absolute value (disregarding sign) of this greatest negative difference, and the resultant number was taken as a constant to be added to each difference score in the array for that area.  Thus, these final coded values preserved the relative amounts of change among the subjects in each area.  The coding also removed all negative values, a condition necessary for the statistical analysis.26
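     The coding just described amounts to the following arithmetic; the scores below are invented solely to show the computation.

```python
def code_difference_scores(first, second):
    """Compute second-minus-first difference scores, then add a constant
    (one plus the absolute value of the largest negative difference) so
    that all coded values are positive."""
    diffs = [b - a for a, b in zip(first, second)]
    largest_negative = min(diffs)
    constant = abs(largest_negative) + 1 if largest_negative < 0 else 0
    return [d + constant for d in diffs]

# Invented scores for three subjects in one area of measurement.
print(code_difference_scores(first=[12, 15, 9], second=[14, 11, 9]))
# differences are [2, -4, 0]; the constant is 5; coded scores are [7, 1, 5]
```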
     The coded scores were then entered into a table of analysis for each area similar to the front face of the design represented in Figure I.

Prerequisite Test of Homogeneity

     Within each area of measurement, the main extractable variables (age, education, therapy, and random sequence) were each subjected to the test for homogeneity of variance.27  This is a necessary condition that must obtain in the data before the extraction and analysis of variances.28
     In all of the tests of homogeneity except one, the hypothesis was upheld.  In that case, the data were transformed in scale29 until homogeneity became a tenable hypothesis.
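     The particular test of homogeneity is described in the footnoted sources; Bartlett's test, shown below via the Python scipy library with invented groups standing in for the categories of one controlling variable, is one common form of such a test.  Where the hypothesis fails, a transformation of scale is applied, as was done for the single exception noted above.

```python
from scipy import stats

# Invented coded scores grouped by one controlling variable
# (for example, the three therapy categories).
group_1 = [7, 5, 6, 8, 6, 7]
group_2 = [4, 9, 5, 10, 6, 8]
group_3 = [6, 7, 6, 8, 7, 6]

# Bartlett's test: a small p-value would argue against the hypothesis
# of homogeneous variances.
statistic, p_value = stats.bartlett(group_1, group_2, group_3)
print(f"chi-square = {statistic:.2f}, p = {p_value:.3f}")
```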

Extraction of Variances

     For each of the areas of measurement, the variances were extracted and tabled.
     When this was completed, a summary of the variances in each area of measurement was tabled.

Test of Variances

     The choice of an appropriate error term with which to test the mean variances of the summary tables depended upon the possible hypotheses which might derive from the experimental design.
     The first possible error term is that of the highest order interaction.30  However, this makes the assumption that the categories within each of the controlling variables constitute a random selection.31  This assumption had not been met in this study, and that error term was discarded.  The other possible error term is that of the residual mean variance.32  The use of this error term confines speculation to these particular age categories, these particular educational categories, and these particular therapy categories.  It provides no test for speculations beyond the limits actually incorporated in the raw data.33
     The mean variances for the variables and their interactions (for each summary table) could be tested against the appropriate residual variance.  However, since this error term could be broken down into two components (variance due to random variation and residual error variance), a finer test of the differences is afforded by using the residual error after the extraction of the sampling error.  This was done, and the results were incorporated into the summary tables.
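     In either case, each mean variance is ultimately tested by forming an F ratio against the chosen error term.  A minimal sketch, with invented numbers, follows.

```python
def f_ratio(effect_mean_square, error_mean_square):
    """Test a mean variance (mean square) against the chosen error term."""
    return effect_mean_square / error_mean_square

# Invented values: a therapy mean square tested against the residual error
# mean square remaining after extraction of the sampling component.
print(f_ratio(effect_mean_square=12.4, error_mean_square=3.1))  # F = 4.0
```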

A Brief Note

     This study was designed to afford an objective test of the claims for dianetic therapy, and to do this with definitiveness.  It provided for adequate information without anticipating the direction of the effects of dianetic therapy.  The data derived permitted an extensive analysis of the therapy because of the range of the measured controlling factors.  Since dianetic claims specifically emphasize only the areas of mathematical ability, intellectual functioning, and personality conflicts, this study utilized standardized tests that were especially designed to measure these areas.  The total design is somewhat complex, but an attempt was made to clarify it by representing it diagrammatically (see Figure I).




1. A. Edwards, Experimental Design in Psychological Research, p. 237.
2. E. Brunswik, Systematic and Representative Design of Psychological Experiments, p. 3.
3. R. McMurray and J. King, SRA Verbal Form.
4. T. Thurstone and L. Thurstone, SRA Verbal and Non-Verbal Forms.
5. Examiner Manual for the SRA Verbal and Non-Verbal Forms.
6. O. Buros, The Third Mental Measurements Yearbook, pp. 263-264.
7. Revised Alpha Examination Form 5.
8. Revised Alpha Examination Form 7.
9. F. Wells, Manual of Directions. Revised Alpha Examination Forms 5 and 7.
10. F. Finch and M. Odoroff, "The Reliability of Certain Intelligence Tests," Journal of Applied Psychology, 21 (February, 1937), pp. 104-106.
11. G. Bennett, "Distribution of Scores of Army Alpha," Journal of Applied Psychology, 27 (April, 1943), pp. 100-101.
12. Examiner Manual for the SRA Verbal and Non-Verbal Forms.
13. F. Wells, op. cit.
14. A. Cardall, Arithmetical Reasoning Test. Form A.
15. A. Cardall, Arithmetical Reasoning Test. Form B.
16. A. Cardall, Preliminary Manual for the Arithmetical Reasoning Test.
17. R. Schorling, J. Clark, and M. Potter, Hundred Problems Arithmetic Test. Form V.
18. R. Schorling, J. Clark, and M. Potter, Hundred Problems Arithmetic Test. Form W.
19. R. Schorling, J. Clark, and M. Potter, Hundred Problems Arithmetic Test. Manual of Directions.
20. O. Buros, The Third Mental Measurements Yearbook, p. 344.
21. Schorling, et al., op. cit.
22. Cardall, op. cit.
23. J. Rotter, Incomplete Sentences Blank – Adult Form.
24. J. Rotter and J. Rafferty, Manual. The Rotter Incomplete Sentences Blank, pp. 7-10.
25. A. Edwards, Experimental Design in Psychological Research, pp. 174-175.
26. Ibid., p. 203.
27. Ibid., p. 196.
28. C. Peters and Van Voorhis, Statistical Procedures and Their Mathematical Bases, p. 334.
29. Edwards, op. cit., p. 199.
30. Edwards, op. cit., p. 248.
31. Loc. cit.
32. Loc. cit.
33. Loc. cit.

