For example, we might have started with the method of weighted summation, without any preliminary determination of the self-correlations by simple summation. We could then have begun by seeking the best-fitting factorial matrix of rank one; then of rank two; and so on: i.e. the first fit would contain in each diagonal cell the square of one saturation only; the second fit would contain the sum of the squares of two; and so on. This would again lead to a factorial matrix of minimum rank, with every column uncorrelated with every other (a computational sketch of such a successive fitting is given below). Such an approach might be of theoretical interest, but is hardly a practical procedure, since, as a rule, it leads to precisely the same figures by what in the end is not a shorter but a more cumbersome procedure.

Again, if we assume that the results of correlating persons and correlating tests should be equivalent (as they should be under certain conditions, e.g. when the selection of both persons and tests is 'random'), we should expect each bipolar column of saturation coefficients to add up to zero, both when unweighted and when weighted by one of the other columns. A process of successive approximation, with weighted and unweighted summation used alternately, will usually yield a close approach to this result. But whatever self-correlations we assume, and whatever rank we accept, there is an obvious gain in having a factorial matrix whose columns are uncorrelated: any further linear transformations into which the correlation matrix enters—e.g. rotating axes, calculating regressions, comparing results with those of correlating persons—are greatly simplified.

If, however, the self-correlations or variances are known from the outset, then, as we have seen, successive approximation may be entirely avoided by 'table-by-table' multiplication—a procedure that is sometimes convenient with a very small table of covariances (for illustrations, see [102], p. 185). Or again, a triangular matrix of positive saturations may be obtained by the earlier procedure due to Lagrange (described, e.g., in Bôcher's Higher Algebra, 1907, p. 131), which will be found to fit certain correlation problems quite well.
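The successive fitting of rank-one matrices described at the opening of this section is readily illustrated. The sketch below is a minimal modern rendering in Python, not the author's own computing routine: each column of saturations is estimated by repeated weighted summation (in matrix terms, the power method), its rank-one contribution is subtracted, and the process is repeated on the residual table. The correlation values, the function name, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def extract_saturations(R, n_factors, n_iter=200):
    """Fit a rank-one matrix by repeated weighted summation, subtract it,
    and pass to the residual table; repeat for each factor.

    Returns a matrix F whose k-th column holds the saturation
    coefficients of the k-th factor, so that F @ F.T approximates R
    while F.T @ F is diagonal (every column uncorrelated with every other)."""
    residual = np.array(R, dtype=float)
    columns = []
    for _ in range(n_factors):
        v = residual.sum(axis=0)          # trial vector: simple summation
        v = v / np.linalg.norm(v)
        for _ in range(n_iter):           # weighted summation, repeated
            v = residual @ v
            v = v / np.linalg.norm(v)
        variance = v @ residual @ v       # variance carried by this factor
        columns.append(v * np.sqrt(max(variance, 0.0)))
        residual = residual - np.outer(columns[-1], columns[-1])
    return np.column_stack(columns)

# A small hypothetical correlation table for illustration:
R = np.array([[1.00, 0.60, 0.50, 0.40],
              [0.60, 1.00, 0.45, 0.35],
              [0.50, 0.45, 1.00, 0.30],
              [0.40, 0.35, 0.30, 1.00]])
F = extract_saturations(R, n_factors=2)
print(np.round(F.T @ F, 4))       # off-diagonal cells approximately zero
print(np.round(R - F @ F.T, 4))   # residual correlations after two fits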
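The triangular reduction mentioned above may be sketched in the same spirit. Lagrange's reduction of a quadratic form to a sum of squares, as described by Bôcher, corresponds in matrix terms to what is now called the Cholesky decomposition: a lower-triangular matrix L, with positive diagonal entries, such that R = L Lᵀ. The following is a minimal sketch under the assumption that the table is positive definite; the function name and test values are illustrative.

```python
import numpy as np

def lagrange_reduction(R):
    """Peel off one squared linear form at a time (Lagrange's reduction),
    leaving a lower-triangular matrix L of saturations with R = L @ L.T."""
    residual = np.array(R, dtype=float)
    n = residual.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        L[k:, k] = residual[k:, k] / np.sqrt(residual[k, k])
        residual = residual - np.outer(L[:, k], L[:, k])
    return L

R = np.array([[1.00, 0.60, 0.50],
              [0.60, 1.00, 0.45],
              [0.50, 0.45, 1.00]])
L = lagrange_reduction(R)
print(np.round(L, 4))                         # triangular matrix of saturations
print(np.allclose(L @ L.T, R))                # True: the table is reproduced
print(np.allclose(L, np.linalg.cholesky(R)))  # agrees with the library routine
```

With all correlations positive, as here, every saturation in the triangular matrix comes out positive; in general only the diagonal entries are guaranteed positive.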
Partial Correlations.—It should be noted that the residual correlations obtained as above by simple subtraction do not give the correlations that would be found in a sample population selected so as to be homogeneous for the factors subtracted. For example, we may desire to know what degree of correlation would exist between Reading, say, and Spelling (or Arithmetic) in a class or form where the children had been so selected as to be upon practically the same level of general ability. For this purpose the full formula for partial correlation must be employed. This is equivalent to dividing the residual correlation by the mean residual variance of the two correlated tests¹—a process which greatly enlarges the figures. However, in

¹ [93], p. 306, and L.C.C. Report [35], p. 57. In my earliest article [16] this further adjustment was not applied, because the problem was to determine the relative importance of the first or general factor and the remainder. And in most forms of multiple-factor analysis it is, as a rule, omitted. That, however, should not obscure the fact that multiple-factor analysis (and, indeed, single-factor analysis according to Spearman's approach) is essentially a development of partial correlation: cf. Amer. J. Psychol., XV, 1904, p. 256 (where Yule's formula for partial correlation is cited), and Yule's determinantal solution for the partial variances ([110], pp. 267 et seq.).
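For reference, the 'full formula' in question, Yule's formula for the partial correlation of tests 1 and 2 with the general factor g held constant, may be written in modern notation as

\[
r_{12\cdot g} \;=\; \frac{r_{12} - r_{1g}\, r_{2g}}{\sqrt{\bigl(1 - r_{1g}^{2}\bigr)\bigl(1 - r_{2g}^{2}\bigr)}} .
\]

The numerator is the residual correlation left after subtracting the contribution of the general factor, and the denominator is the geometric mean of the two residual variances; since that mean is less than unity, the division is what 'greatly enlarges the figures'.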