SIMPLE AND WEIGHTED SUMMATION METHODS

was suggested that any matrix of correlations or covariances could be regarded as a sum of 'hierarchies,' i.e. of matrices of rank one. In matrix notation this structure can be clearly expressed by what I have called the canonical expansion of any correlation matrix: viz.

$$R = v_1 E_1 + v_2 E_2 + \dots + v_n E_n,$$

where $v_j$ denotes the factor variances numbered in descending order of magnitude, $E_j$ the 'reduced or unit hierarchies,' and $H_j$ the series of rank-one matrices obtained by post-multiplying the vector of saturation coefficients for the $j$th factor by its transpose in the ordinary way, so that $H_j = v_j E_j$. The 'unit hierarchies,' it will be remembered, possess two important properties characteristic of selective operators [115]: viz. (i) $E_j^m = E_j$; (ii) $E_i E_j = 0$ $(i \neq j)$.

Now, unless $n$ (the number of tests) is very small, the determinants required for the evaluation of the latent roots and vectors are much too large for explicit calculation. But from these two properties it follows that

$$R^m = v_1^m E_1 + v_2^m E_2 + \dots + v_n^m E_n = v_1^m E_1 \ (\text{approx.}) = v_1^{m-1} H_1,$$

provided $m$ is taken large enough to make the ratio $v_2^m / v_1^m$ (and a fortiori the ratios $v_3^m / v_1^m, \dots, v_n^m / v_1^m$) negligible. $R^m$ can be easily computed. We have then merely to add up its columns; and thus, by applying 'simple summation' at a higher stage, we can at once obtain values closely proportional to the 'saturation coefficients' or 'factor loadings.'

The error incurred by taking $R^m$ in place of $R^\infty$ depends primarily on $v_2^m / v_1^m$, and will therefore diminish almost in geometrical progression: the smaller the size of $m$, the larger the amount of error. Evidently, therefore, the figures obtained for the factor loadings by giving $m$ the smallest possible value, namely 1 (i.e. summing the columns of $R$ just as they stand), form the first and simplest approximation to the figures that would be obtained by taking $m \to \infty$ or by attempting a direct solution. The figures reached by Hotelling's method and by Kelley's are virtually equivalent to those obtained by summing $R^m$, where $m$ is still not infinitely large; hence we must regard these values too, as their advocates admit, as being equally approximations, though doubtless much closer than those derived from the initial $R^1$. It follows, therefore, that by repeated self-multiplication we can in theory reduce any matrix of correlations or covariances to a matrix of rank one.¹ In practice there

¹ The self-multiplication of a determinant or matrix (really a special case of the familiar root-squaring device) has long been in use by physicists and
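The canonical expansion and the two 'selective operator' properties above lend themselves to a direct numerical check. The sketch below is an illustration of mine, not part of the original text: it assumes NumPy and an invented 4-by-4 correlation matrix, builds the unit hierarchies $E_j$ from the unit latent vectors of $R$, and verifies properties (i) and (ii) together with the expansion of $R^m$.

```python
import numpy as np

# Hypothetical 4x4 correlation matrix, for illustration only.
R = np.array([
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.45, 0.35],
    [0.50, 0.45, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])

# Latent roots v_j and unit latent vectors e_j; eigh returns them in
# ascending order, so reverse to number the variances in descending order.
vals, vecs = np.linalg.eigh(R)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Unit hierarchies E_j = e_j e_j'; the rank-one hierarchies are H_j = v_j E_j.
E = [np.outer(vecs[:, j], vecs[:, j]) for j in range(R.shape[0])]

# (i)  E_j^2 = E_j            (each unit hierarchy is idempotent)
assert all(np.allclose(Ej @ Ej, Ej) for Ej in E)
# (ii) E_i E_j = 0 for i != j (distinct hierarchies annihilate one another)
assert np.allclose(E[0] @ E[2], np.zeros_like(R))

# Canonical expansion R = sum_j v_j E_j, whence R^m = sum_j v_j^m E_j.
assert np.allclose(sum(v * Ej for v, Ej in zip(vals, E)), R)
m = 5
assert np.allclose(sum(v**m * Ej for v, Ej in zip(vals, E)),
                   np.linalg.matrix_power(R, m))
print("canonical expansion verified")
```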
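Likewise, the central computational claim, that the column sums of $R^m$ approach proportionality with the first-factor saturations as $m$ grows, and that $m = 1$ (simple summation of $R$ itself) is only the crudest member of the series, can be illustrated under the same assumptions. Again a hedged sketch, not the author's own procedure:

```python
import numpy as np

R = np.array([
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.45, 0.35],
    [0.50, 0.45, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])

def summation_loadings(R, m):
    """'Simple summation' applied at stage m: column sums of R^m,
    rescaled to unit length. Since R^m ~ v1^m E1, these sums are
    nearly proportional to the first-factor saturation coefficients."""
    s = np.linalg.matrix_power(R, m).sum(axis=0)
    return s / np.linalg.norm(s)

# Direct solution (the m -> infinity limit): leading unit latent vector.
vals, vecs = np.linalg.eigh(R)
e1 = vecs[:, -1] * np.sign(vecs[:, -1].sum())

for m in (1, 2, 4, 8):
    err = np.max(np.abs(summation_loadings(R, m) - e1))
    print(f"m = {m}:  max deviation from direct solution = {err:.2e}")
# The deviation shrinks roughly like (v2/v1)^m, i.e. in geometrical
# progression, exactly as the text argues.
```

In keeping with the footnote's remark on the root-squaring device, the successive powers $R^2, R^4, R^8, \dots$ can be obtained by repeatedly squaring the matrix already computed, so that a large $m$ costs only about $\log_2 m$ matrix multiplications.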