Validity Study Summary

By Robert Soltysik

Instrument and Methodologies

Organizational Engineering survey instrumentation (DecideX®, "I Opt"™), group consolidation methodologies (TeamAnalysis™), leader-group assessments (LeaderAnalysis™, OrgAnalysis™) and two-person comparisons ("One-to-One"™, TwoPerson Analysis™) are addressed in this validation study.

Face Validity

An expert panel of 50 professionals administered 14,655 surveys and found disagreement with the survey report in less than 1% of the cases (n=128, 0.87%). The group-based TeamAnalysis™ was tested by 44 experts in 921 administrations and was found to be inaccurate in less than 1% of the cases (n=1, 0.1%). The face validity of both the instrument and the consolidation methodology, as represented by TeamAnalysis™, is judged to be very high.
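For reference, the reported rates follow directly from the counts above; the short Python sketch below simply recomputes the two disagreement percentages from the figures given in this summary.

    # Recompute the face-validity disagreement rates reported above.
    survey_disagreements, survey_administrations = 128, 14_655
    team_disagreements, team_administrations = 1, 921

    survey_rate = 100 * survey_disagreements / survey_administrations
    team_rate = 100 * team_disagreements / team_administrations

    print(f"Survey disagreement rate: {survey_rate:.2f}%")        # 0.87%
    print(f"TeamAnalysis disagreement rate: {team_rate:.1f}%")    # 0.1%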

Construct Validity

Differential population methodology was applied to three occupational categories involving 75 distinct groups and 887 people, which were compared against a database population (N~8,700). The findings are statistically significant at the .05 standard adopted in this study (p = .0152). In addition, the theory's reliance on only a single assumption minimizes exposure to the undefined assumptions inherent in any theory. Overall, Organizational Engineering appears to meet or exceed the standards of construct validity within the discipline.
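This summary does not identify the specific test behind the differential population comparison. As a purely illustrative sketch, the Python fragment below assumes a chi-square goodness-of-fit comparison of one occupational group's strategic-style counts against the proportions observed in the reference database; all counts shown are hypothetical placeholders rather than study data.

    # Illustrative only: assumes a chi-square goodness-of-fit comparison of one
    # group's style counts against the database population's proportions.
    # All numbers below are hypothetical placeholders, not study data.
    import numpy as np
    from scipy.stats import chisquare

    group_counts = np.array([31, 12, 9, 23])                 # hypothetical counts, one group
    population_props = np.array([0.30, 0.25, 0.20, 0.25])    # hypothetical database proportions

    expected = population_props * group_counts.sum()
    stat, p_value = chisquare(f_obs=group_counts, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")      # judged against the .05 standard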

Content Validity

Content validity is more a matter of logic than of statistics. However, a nomological net demonstrates that between 84% and 92% of the survey responses can be directly traced to specific dimensions of the underlying theory. In addition, 100% of the 50 members of the expert panel agree that the response structure incorporated in the survey is not contaminated by respondent misunderstanding. These findings suggest that the content validity is at least equal to, and perhaps superior to, that of other theories within the discipline.

Convergent Validity

Convergent validity was tested by comparing 19 plants of the same character involving 188 people. Individuals at the 19 plants were tested on all four strategic styles, and every test and associated multiple contrast failed to find any difference attributable to location at the standard p < .05 level of significance, providing evidence of convergent validity.
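The omnibus test and contrasts are not named in this summary. The sketch below assumes, for illustration only, a Kruskal-Wallis test of location effects run once per strategic style across the 19 plants; the scores and style labels are synthetic placeholders.

    # Illustrative only: a Kruskal-Wallis test of "does plant location shift
    # scores?" run once per strategic style; scores and labels are synthetic.
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(0)
    n_plants = 19
    styles = ["style_1", "style_2", "style_3", "style_4"]     # placeholder labels

    for style in styles:
        # One array of synthetic scores per plant (roughly 188 / 19 = 10 people each).
        scores_by_plant = [rng.normal(50, 10, size=10) for _ in range(n_plants)]
        stat, p_value = kruskal(*scores_by_plant)
        print(f"{style}: H = {stat:.2f}, p = {p_value:.3f}")  # p >= .05 means no location effect found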

Discriminant Validity

Discriminant validity was tested using an unsupervised learning method, cluster analysis. The PAM (Partitioning Around Medoids) algorithm, run with k = 3 on the 887 individuals, was able to discriminate among three groups that should be different at a p < 10⁻²⁹ significance level, a level substantially in excess of the generally accepted p < .05 standard of significance.
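A minimal sketch of this kind of check appears below. It assumes scikit-learn-extra's KMedoids class as the PAM implementation and uses synthetic style-score data; the recovered clusters are then cross-tabulated against the known occupational categories and tested with a chi-square test of independence. None of the parameters or data reflect the actual study.

    # Illustrative only: PAM clustering with k = 3 via scikit-learn-extra's
    # KMedoids, followed by a chi-square test of independence between the
    # recovered clusters and the known categories. Data are synthetic.
    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn_extra.cluster import KMedoids

    rng = np.random.default_rng(1)
    # Synthetic 4-dimensional style scores for three occupational categories.
    X = np.vstack([rng.normal(center, 5, size=(100, 4))
                   for center in ([40, 60, 50, 50], [60, 40, 50, 50], [50, 50, 60, 40])])
    true_category = np.repeat([0, 1, 2], 100)

    labels = KMedoids(n_clusters=3, method="pam", random_state=0).fit_predict(X)

    # Cross-tabulate recovered clusters against known categories.
    table = np.zeros((3, 3), dtype=int)
    for cat, lab in zip(true_category, labels):
        table[cat, lab] += 1
    chi2, p_value, _, _ = chi2_contingency(table)
    print(f"chi-square = {chi2:.1f}, p = {p_value:.2e}")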

Concurrent Validity

This dimension of validity relied upon the judgment of the expert panel of 50 professionals. Between 32 and 48 experts responded to the various instruments and methodologies tested under concurrent validity. The experts reported that in their administrations, the number of inaccurate reports was zero (0%). The concurrent validity of the instrumentation and methodologies is judged to be high.

Predictive Validity

The assessment of the predictive dimension relies upon the judgment of the expert panel. Of the 50 expert professionals available, 39 believed themselves positioned to judge the predictive accuracy of the TeamAnalysis™ methodology. The experts reported that in their administrations the number of inaccurate reports was zero (0%). The predictive validity of the instrumentation and methodologies is judged to be high.

Conclusion Validity

The large number of individuals (N = 8,721) and groups (1,003) encompassed by the study provides assurance of generalizability. The statistical tests performed were shown to satisfy the applicable criteria (e.g., identical dispersions, equality of variances), minimizing exposure to issues of statistical power. In addition, cross-validation across multiple dimensions of validity strengthens confidence in the underlying theory and its expression in instrumentation and methodology. In the author's judgment, the theory and methodology fully meet the standards of validity as applied within the discipline of organizational development.

Reliability

Reliability is technically not a form of validity. The reliability of the instrument was tested for all pairwise combinations of the years 1994 through 1999 (15 individual contrasts) using the Kruskal-Wallis test. In all cases, the findings confirmed reliability by showing that differences in the data between years could not be established. The survey instrument is judged reliable by the accepted standards of the discipline.
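A minimal sketch of this year-over-year check, using SciPy's Kruskal-Wallis test over all 15 pairwise combinations of the years 1994 through 1999 with synthetic placeholder data, is shown below.

    # Illustrative only: the Kruskal-Wallis test applied to every pairwise
    # combination of the years 1994-1999 (15 contrasts); data are synthetic.
    from itertools import combinations
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(2)
    years = range(1994, 2000)
    scores_by_year = {year: rng.normal(50, 10, size=200) for year in years}

    stable = 0
    for y1, y2 in combinations(years, 2):      # 15 pairwise contrasts
        _, p_value = kruskal(scores_by_year[y1], scores_by_year[y2])
        if p_value >= 0.05:                    # no between-year difference established
            stable += 1
    print(f"{stable} of 15 contrasts showed no significant difference")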
