Full text of "ESTIMATION OF (CO)VARIANCE COMPONENTS BY WEIGHTED AND UNWEIGHTED SYMMETRIC DIFFERENCES SQUARED, AND SELECTED MIVQUE'S: RELATIONSHIPS BETWEEN METHODS AND RELATIVE EFFICIENCIES"






UMI 



Dissertation 
Information Service 



University Microfilms International 

A Bell & Howell Information Company 

300 N. Zeeb Road, Ann Arbor, Michigan 48106 



8618794 



Keele, John William 



ESTIMATION OF (CO)VARIANCE COMPONENTS BY WEIGHTED AND 
UNWEIGHTED SYMMETRIC DIFFERENCES SQUARED, AND SELECTED 
MIVQUE'S: RELATIONSHIPS BETWEEN METHODS AND RELATIVE 
EFFICIENCIES 



The Ohio State University Ph.D. 1986 

University 
Microfilms 

International, 300 N. Zeeb Road, Ann Arbor, MI 48106



ESTIMATION OF (CO)VARIANCE COMPONENTS BY WEIGHTED AND UNWEIGHTED
SYMMETRIC DIFFERENCES SQUARED, AND SELECTED MIVQUE's: RELATIONSHIPS 
BETWEEN METHODS AND RELATIVE EFFICIENCIES 

DISSERTATION 

Presented in Partial Fulfillment of the Requirements for 
the Degree Doctor of Philosophy in the Graduate 
School of The Ohio State University 

By 

John William Keele, B.S., M.S. 
* * * * * 

The Ohio State University 
1986 



Dissertation Committee:

Walter R. Harvey

Francis R. Allaire

Michael E. Davis

Keith M. Irvin

Approved by

Adviser
Department of Dairy Science






ACKNOWLEDGMENTS 

I would like to thank Walter R. Harvey for his suggestions, 
endless support, patience, example and the idea to do this research. 
Thanks are due to the other members of my committee, Drs. Frank R. 
Allaire, Mike E. Davis, and Keith M. Irvin for their helpful 
suggestions and comments. The painstaking task of typing this thesis 
was accomplished by Debbie Gallagher to whom I am most grateful. David 
and Kathy Keller deserve special thanks for making copies and providing 
a place for me to stay while taking my final exam. I am grateful to 
the Gilmore family for the Gilmore Award. I owe a debt of gratitude to 
the Department of Dairy Science, The Ohio State University, and the 
taxpayers of Ohio for my associateship and the privilege to study at 
this great University. I would like to thank my wife, Wendy for her 
understanding, faith, and willingness to put up with all of this. I 
would also like to thank my son Ben for being himself and being a good 
kid despite my inexperience as a parent. Most of all, I would like to 
thank God who provided the sequence of events that made this work 
possible. 






VITA 



April 2, 1957 Born - Challis, Idaho

1979 B.S., University of Idaho, 

Moscow, Idaho 

1979-1982 Research Assistant, Department of 

Animal Science, University of 
Idaho, Moscow, Idaho 

1982 M.S., University of Idaho, 

Moscow, Idaho 

1982-1986 Research Associate, Department of 

Dairy Science, The Ohio State 
University, Columbus, Ohio 



PUBLICATIONS 



Keele, J.W. 1982. Digestion and metabolism of diets with whole 
cottonseed or extended soybeans fed to cows. Unpublished Master's
thesis, University of Idaho, Moscow, Idaho.

Keele, J.W. and R.E. Roffler. 1982. Nutrient digestion in cows fed 
rations containing whole cottonseed or extended soybeans. J. Dairy 
Sci. (suppl. 1) 65:137 (abstract). 






TABLE OF CONTENTS 

ACKNOWLEDGMENTS ii

VITA iii 

LIST OF TABLES vi 

LIST OF FIGURES vii 

INTRODUCTION 1 

REVIEW OF LITERATURE 6 

VARIANCE COMPONENT ESTIMATION PROBLEM 6 

MIVQUE AND REML FOR ANIMAL MODEL 7 

NONADDITIVE RELATIONSHIP MATRICES 9 

REDUCED ANIMAL MODEL 9 

FEASIBILITY OF MIVQUE AND REML 10 

METHODS MORE FEASIBLE THAN MIVQUE AND REML 11 

COMPUTING A 14 

UNWEIGHTED MIVQUE 14 

CONCLUSIONS 16 

THEORY 17 

THE EQUIVALENCE OF MIVQUE AND WEIGHTED SYMMETRIC DIFFERENCES 

SQUARED 17 

1. Definition of Symmetric Differences Squared .... 20

2. Expectation of Symmetric Differences Squared .... 21 

3. Symmetric Differences Squared Equations 23

4. Variance-Covariance Matrix Among Symmetric 

Differences Squared 23 

5. The Inverse of the Error Variance-Covariance Matrix 

of Y 25 

6. Weighted Symmetric Differences Squared Equations . . 26




7. The Equivalence of SDS Weighted by the Inverse of 

the Error Variance-Covariance Matrix and 

MIVQUE(0) 27

8. The Equivalence of SDS Weighted by the Inverse of the 

Total Variance-Covariance Matrix and MIVQUE . . 34 

COMPUTATIONAL REQUIREMENTS OF WSDS OR MIVQUE(0) 37

METHODOLOGY 49 

RESULTS AND DISCUSSION 57 

SUMMARY AND CONCLUSIONS 73 

LITERATURE CITED 77 

APPENDICES 80 

A 80 

B 84 

C 92 






LIST OF TABLES 
TABLE PAGE 

1. Storage, multiplications, and additions for each type of 
element required to set up the MIVQUE(0) equations when
relationship categories are not considered 38 

2. Multiplications, additions and storage needed to compute and 
save the matrices needed to set up the WSDS equations when 
relationship categories are considered 43 

3. Multiplications, additions and storage required to save 
matrices needed to set up WSDS and SDS equations for mice data 
of Grimes and Harvey (1980) when relationship categories are
considered 46 

4. Storage, multiplications, and additions required to compute 
each type of element needed to obtain MIVQUE(0) or WSDS
equations for the data of Grimes and Harvey (1980) when 
relationship categories are not considered 47 

5. Comparison of standard errors of SDS estimates of variance 
components obtained by simulation study of Grimes and Harvey 
(1980) for a maternal effects model with standard errors 
obtained by the numerical method of the current study .... 57 

6. Efficiency of SDS estimates of (co)variance components
relative to MIVQUE estimates computed from mating designs of 
Thompson (1976) and Eisen (1967), replicated 200 times each . 59 

7. Efficiency of WSDS estimates of (co)variance components
relative to MIVQUE estimates computed from mating designs of 
Thompson (1976) and Eisen (1967), replicated 200 times each . 62 

8. Efficiency of MIVQUE(0,M,E) estimates of (co)variance 
components relative to MIVQUE estimates computed from mating 
designs of Thompson (1976) and Eisen (1967), replicated 200 
times each 66 

9. The efficiency of MIVQUE(1) estimates of (co)variance
components relative to MIVQUE estimates computed from 200 sets 

of designs described by Thompson (1976) and Eisen (1967) . . 70 






LIST OF FIGURES

FIGURE PAGE 

1. Two mating designs of Thompson (1976) 50 

2. The 18 individuals comprising a set of an Eisen (1967) design 51 






INTRODUCTION

Variance and covariance components and ratios of these are used to 
characterize the type and quantity of genetic and environmental 
variation present in a population of animals. In addition to being 
unbiased and efficient (small sampling variance), it is desirable that 
estimates of (co)variance components be computable with a reasonable 
amount of computer time and memory. Even though the amount of computer 
time and memory considered to be reasonable has increased with improved 
technology, computational feasibility is still an important attribute 
of variance component estimation in light of the large amount of data 
usually needed to obtain relatively precise estimates. The challenge 
is to choose from the computable unbiased methods, the most efficient 
method. 

The computational requirements for estimating genetic variance 
components also depend on the type of family structures that are 
present in the population. Populations of farm animals usually contain 
many different types of relatives. One way to account for 
relationships among animals is to use an animal model. In general, 
analysis of variance (ANOVA) procedures are not adequate under the 
animal model, although they have been used as ad hoc procedures in the 
past (Dickerson, 1942; Hazel et al., 1943; Eisen, 1967). 

Henderson (1985a and 1985b) showed how to obtain estimates of 
additive and nonadditive genetic variance components by minimum 


variance quadratic unbiased estimation (MIVQUE) (Rao, 1971) for assumed
prior values, and restricted maximum likelihood estimates (REML)
(Patterson and Thompson, 1971), under the animal model. Henderson's
approach for obtaining MIVQUE and REML requires the inverse of one
matrix of order n and the g-inverse of another matrix of order n+p,
where n is the number of observations, and p is the number of fixed
effects. REML can be obtained by iterative MIVQUE, or by the
expectation maximization (EM) algorithm (Dempster et al., 1977).
Neither MIVQUE nor REML is computationally feasible for large (greater
than 500-1,000 animals) data sets under the animal model.

When the genetic model is additive, the computational requirements 
for REML and MIVQUE can be reduced considerably. With the Henderson 
approach to obtaining MIVQUE and REML, the mixed model equations (MME) 
(Henderson et al., 1959) and best linear unbiased predictions of random
effects (BLUP) are obtained as intermediate results. Quaas and Pollack 
(1980) showed that a reduced animal model (RAM) with MME only for 
parents of progeny with records was equivalent to the full animal model 
with MME for every animal. Quaas and Pollack (1980) described RAM for 
a multitrait model appropriate for beef cattle, and Henderson (1985c) 
described RAM for the unitrait case. Hudson and Kennedy (1985) were 
able to obtain BLUP for parents and nonparents under RAM on swine data. 
In this data set, there were 5.6 times as many animals as there were 
parents with progeny records. Therefore, using RAM instead of the 
animal model saves computer storage because fewer MME are needed. 
However, even under RAM, the number of MME probably were too large to 
obtain MIVQUE or REML estimates of variance components because Kennedy 



et al. (1985) used a simplified model for data in which the animal 
model was appropriate. For the swine data set of Hudson and Kennedy 
(1985), there were 4,000 to 20,000 MME under RAM for the four breeds 
considered. A need exists for relatively efficient variance component 
estimation procedures that can be computed under RAM, or the animal 
model if there are nonadditive genetic effects for large data sets. 

Henderson (1985a) suggested an approximate MIVQUE called diagonal 
MIVQUE or Henderson's Simple Method for data sets in which the g- 
inverse of the left hand side of MME is too large to compute. The 
relative efficiency of this method has not been well characterized 
under the animal model, but Henderson (cited by Hudson and Van Vleck, 
1982) found this method to be more efficient than method 3 of Henderson 
for selected data sets in which method 3 could be used. 

Another method of reducing the computations is to assume that all 
variances except the error variance are zero prior to applying MIVQUE 
(Rao, 1971). These estimates are MIVQUE if unknown variances except 
the error variance actually are near zero. This method is known as 
MIVQUE(0). However, MIVQUE(0) was found to be inefficient relative to
ANOVA when the variances other than error variance were larger than the 
error variance for selected 2 stage nested designs (Brocklebank and 
Giesbrecht, 1984). 

Grimes and Harvey (1980) extended the symmetric sums of products 
(SSP) method of Koch (1968) to the animal model. They used their 
method, which they called symmetric differences squared (SDS), to
estimate genetic and environmental (co)variance components for weight 
and gain traits of 1,780 mice. While this method required little 



computer storage, it required a large amount of computer time. 

Christian (1980) showed how to reduce the computer time needed to 
perform SDS by grouping SDS resulting from pairs of individuals with 
the same relationship before obtaining expectations. In addition, 
Christian (1980) suggested that SDS could be corrected for 
inquadmissibility (uniformly less efficient than some other method) by
weighting SDS by the inverse of the error variance-covariance matrix 
among SDS (WSDS). However, completing the analysis by WSDS as 
described by Christian would require computational steps proportional 
to n raised to a power of four, where n is the number of observations. 

Setting some of the parameters other than the error variance to 
zero prior to applying MIVQUE can result in a method that is less 
computationally demanding than MIVQUE. For example, when permanent 
maternal environmental effects are important, the permanent maternal 
environmental and residual variance could be set to nonzero priors and 
all other (co)variances set to zero. This method might have a higher 
efficiency than MIVQUE(0), without greatly increasing the computational
requirements over that of MIVQUE(0). This method will be referred to
in this paper as MIVQUE(0,M,E). 

In addition to reducing computational requirements of variance 
component estimation methods, it is convenient to avoid the assignment 
of prior values. Rao (1971) suggested that all priors be given a 
value of one when nothing is known a priori about the (co)variance 
components. This method will be referred to in this paper as 
MIVQUE(1).



The objectives of this study are to: 1) demonstrate that SDS 
weighted by the inverse of the error variance-covariance matrix among 
SDS (WSDS) is equivalent to MIVQUE(0); 2) demonstrate that SDS weighted
by the inverse of the total variance-covariance matrix among SDS is 
MIVQUE when the priors are correct; 3) present a computationally 
feasible algorithm for obtaining WSDS; and 4) evaluate the influence of 
the unknown parameters on the efficiency of SDS, WSDS, MIVQUE(0,M,E),
and MIVQUE(1) relative to MIVQUE.



REVIEW OF LITERATURE 
VARIANCE COMPONENT ESTIMATION PROBLEM 

Methods of estimating genetic and environmental (co)variance 
components utilize records and the relationships among individuals who 
produced the records. Farm populations have a family structure that 
consists of many different types of relatives. One way to account for 
these relationships when estimating (co)variance components is to use
the animal model (Henderson and Quaas, 1976; Quaas and Pollack, 1980; 
Henderson, 1985a,b,c; Hudson and Kennedy, 1985). Analysis of variance
(ANOVA) procedures are not appropriate under the animal model because 
the random part of the model can not be written as the sum of sets of 
mutually uncorrelated random effects. When the animal model is 
appropriate, ANOVA procedures should be weighted by the inverse of the 
variance-covariance matrix among the estimated covariances among 
relatives to be efficient (Eisen, 1967). This method is usually not 
computationally feasible. The methods that can be used to estimate 
(co)variance components assuming an animal model are (i) minimum
variance quadratic unbiased estimation (MIVQUE) (Rao, 1971); (ii) 
maximum likelihood (ML) (Hartley and Rao, 1967); (iii) restricted 
maximum likelihood (REML) (Patterson and Thompson, 1971); (iv) 
symmetric differences squared (SDS) (Grimes and Harvey, 1980); and (v) 
weighted symmetric differences squared (WSDS) (Christian, 1980). In 
addition, various methods which are adaptations of MIVQUE (Henderson,




1984, 1985a) can be applied to an animal model. 
MIVQUE AND REML FOR ANIMAL MODEL 

Henderson (1985a, b) showed how to obtain MIVQUE and REML 
estimates of additive and nonadditive genetic variance components under 
an animal model. Henderson first obtained best linear unbiased
predictions (BLUP) for total genotypic merits (m̂) and residuals (ê),
then expressed the quadratics needed for MIVQUE and REML as quadratic
functions of m̂ and ê. This process avoids the need to obtain the
inverse of the n x n variance-covariance matrix among the observations
(V), where n is the number of observations. However, the inverse of the
variance-covariance matrix (M) among m is still needed to set up the
mixed model equations (MME) of Henderson et al. (1959). In general,
the inverse of M will be as difficult to obtain as the inverse of V
when there is one record per animal. Once the MME are obtained, m̂ can
be calculated by iteration without finding the g-inverse of C, where C
is the left hand side of MME. Submatrices of the g-inverse of C are
needed, however, to find the expectations of the quadratics in m̂ and ê.
The quadratics in m̂ and ê are set equal to their expectations and the
resulting equations are solved to obtain MIVQUE. Computing the g-
inverse of C is costly because this matrix is of order n+p, where p is
the number of fixed constants in the model.

In practice, the parameters are not known; therefore guesses or 
estimates by other methods serve as priors for the MIVQUE algorithm. 
The closer the priors are to the true parameters the closer the 
estimates will be to MIVQUE. However, the closeness of the priors to 
the true parameters is unknown with experimental data. 



REML can be obtained by iterative MIVQUE when only estimates in 
the parameter space are allowed. The solution from the k-th iterate of
iterative MIVQUE is obtained by applying MIVQUE using as priors the
solution from the (k-1)-th iterate. This process is continued until
prespecified convergence criteria are met. Estimates from the
literature, ANOVA estimates, MIVQUE(0) estimates, MIVQUE(1) estimates,
or guesses can be used as priors for the first iterate. 

REML can also be obtained by applying an expectation maximization 
(EM) (Dempster et al., 1977) algorithm. The EM algorithm results in 
much simpler calculations at each iterate of REML than iterative 
MIVQUE. The quadratics in m̂ and ê are equated to their expectations
under the assumption that the prior (co)variances are correct.

Advantages of the EM algorithm are that if prior values are 
assigned from within the parameter space, estimates will not converge 
outside the parameter space, and at each iterate the likelihood is 
guaranteed to increase. A disadvantage of the EM algorithm is that it 
converges very slowly. This can be a real problem if each iterate 
requires 20 minutes to an hour of computer time. Iterative MIVQUE 
converges more rapidly than EM; however, estimates outside the
parameter space can occur, and computational requirements per iterate can
be much greater than for the EM algorithm. If the computational requirements
of an iterate of iterative MIVQUE are not too much greater than that of 
an iterate of the EM algorithm, it is this author's opinion that 
iterative MIVQUE should be the method chosen because iterative MIVQUE 
usually converges in 3 to 5 iterations whereas the EM algorithm can 
take well over 50 iterations to converge. 
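The iterative MIVQUE scheme described above is a fixed-point iteration: each iterate applies one MIVQUE step using the previous solution as priors, stopping on a convergence criterion. The following is only a sketch of that control flow, not of the MIVQUE quadratics themselves; `iterate_mivque` and the toy `step` update are hypothetical stand-ins.

```python
# Sketch of iterative MIVQUE as a fixed-point iteration. `mivque_step` stands
# in for one full MIVQUE solve using the current estimates as priors.

def iterate_mivque(mivque_step, priors, tol=1e-6, max_iter=50):
    """Iterate theta_k = mivque_step(theta_{k-1}) until relative change < tol."""
    theta = list(priors)
    for k in range(1, max_iter + 1):
        new = mivque_step(theta)
        # largest relative change across components, guarding against zeros
        change = max(abs(a - b) / max(abs(b), 1e-12) for a, b in zip(new, theta))
        theta = new
        if change < tol:
            return theta, k          # converged at iterate k
    return theta, max_iter           # convergence criterion never met

# Toy stand-in update with fixed point at (2.0, 1.0); a real MIVQUE step would
# instead equate quadratics in the predictions to their expectations.
step = lambda th: [0.5 * th[0] + 1.0, 0.5 * th[1] + 0.5]
est, iters = iterate_mivque(step, [1.0, 1.0])
```

With this contracting toy update the iteration converges well inside the 50-iterate budget; a slowly converging step, as with EM, would exhaust it.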



Swallow and Monahan (1984) compared MIVQUE with ANOVA estimates 
used as priors (MIVQUE(A)) to REML by iterative MIVQUE for 10 different 
number patterns for a random one-way classification using simulation. 
They found that the efficiencies of MIVQUE(A) and REML were of such a 
similar magnitude that the extra iterations required for REML to
converge were not justified.
NONADDITIVE RELATIONSHIP MATRICES 

Henderson (1985a) showed how the matrices of coefficients of 
relationship for dominance (D), additive by additive epistasis (AA), 
additive by dominance epistasis (AD), etc. could be obtained from 
Wright's numerator relationship matrix (A) using results due to 
Cockerham (1954) when no inbreeding has occurred. When inbreeding has 
occurred, the nonadditive genetic effects are difficult to interpret 
(Cockerham, 1954); therefore an additive genetic model is usually 
assumed as an approximation when inbreeding exists. When an additive 
genetic model is assumed, the computational requirements of MIVQUE or 
REML can be reduced. The greatest reduction comes about because of the 
structure of the inverse of A, and because the inverse of A can be 
obtained directly without first finding A (Henderson, 1973; Quaas, 
1976).
REDUCED ANIMAL MODEL 

Quaas and Pollack (1980) showed that a reduced animal model (RAM) 
with equations in the MME only for parents who have progeny records is 
equivalent to a full animal model with equations in the MME for all 
animals. Nonparent BLUPs can be obtained from the BLUPs of their 
parents, the herd-year-season effects, and the residuals. Their model 



was a multiple trait model which included both direct and maternal 
genetic effects and a correlation between direct and maternal genetic 
effects. Henderson (1985c) described the RAM for single traits. 
Utilizing RAM greatly reduces the size of the MME which reduces the 
computer storage required to obtain BLUP. For example, Hudson and 
Kennedy (1985) obtained BLUP's for swine using Ontario Record of 
Performance data of Yorkshire, Landrace, Hampshire, and Duroc breeds. 
In this data set, there were 5.6 times as many total animals as there 
were parents and ancestors with progeny with records. However, the 
computation of MIVQUE or REML estimates of (co)variance components 
using quadratics in BLUP still is not feasible because the g-inverse of 
the left-hand side of the MME is needed for these calculations; and 
this matrix can be quite large even under RAM. 

In practice, BLUP's are computed by substituting estimates of 
variance components for the corresponding parameters in the MME. These 
estimates can come from either the same data set or from an independent 
data set. When estimates of variance components are used in the MME in 
place of true parameters, the resulting estimates of genetic values are 
not actually BLUP but are probably close approximations if the 
estimates of variance components are precise. Also, if the estimates 
of variance components used with BLUP are obtained from the same data 
set as BLUP then the BLUP estimates obtained are biased, but this bias 
is likely to be very slight if the data set is large. 
FEASIBILITY OF REDUCED ANIMAL MODEL 

Kennedy et al. (1985) estimated the variance components used by 
Hudson and Kennedy (1985) using an approximate REML procedure 




originally due to Henderson (cited by Hudson and Van Vleck, 1982). 
Kennedy et al. (1985) used the same, or some of the same, data to 
estimate variance components as Hudson and Kennedy (1985) used to 
obtain BLUP. Kennedy et al. (1985) used a model in which the random 
effects were sires and litters. The sires and litters were assumed to 
have constant variances, and the correlations among sires, among 
litters, and between sires and litters were assumed to be zero. If RAM 
was appropriate for obtaining BLUP, why did these authors not use RAM 
to estimate variance components? The answer is that it is not feasible 
to use REML or MIVQUE to obtain estimates of variance components under 
RAM when the MME contain 4,000 to 20,000 equations as was the case in 
these studies. It is not known whether it is better to use an 
approximate genetic model and an efficient method such as REML as was 
done in this study, or to use a more accurate genetic model such as RAM 
and a less efficient method such as diagonal MIVQUE, SDS, or WSDS when
the more efficient method cannot feasibly be used with RAM. In the 
remainder of this paper, it will be assumed the animal model or RAM is 
appropriate, and methods that are more computationally feasible than 
REML or MIVQUE will be discussed. 
METHODS MORE FEASIBLE THAN MIVQUE AND REML 

Henderson (1985a) proposed a method called diagonal MIVQUE when 
the number of MME is too large to feasibly obtain the g-inverse of the 
coefficient matrix (left hand side of MME). For this method, 
approximate solutions for genetic random effects (g) are obtained by 
dividing the right hand sides of MME by the corresponding diagonal 
element of the coefficient matrix. Quadratics in terms of approximate 



g (i.e., g̃) and approximate e (i.e., ẽ) are then computed. These
quadratics are then equated to their expectations, and the resulting 
equations are solved for estimates of variance components. The 
expectations of these quadratics are easier to compute than 
expectations of quadratics in BLUP for g and e because the inverse of 
the diagonal elements of the coefficient matrix are used in place of 
the g-inverse of the coefficient matrix. 
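The diagonal MIVQUE shortcut above can be made concrete with a small sketch: each approximate solution is the right-hand side of the MME divided by the corresponding diagonal of the coefficient matrix, so no g-inverse is formed. The matrix and right-hand side below are illustrative, not from any real data set.

```python
# Diagonal-MIVQUE-style approximate solutions: s_i = rhs_i / C[i][i], using
# only the diagonal of the MME coefficient matrix C.

def diagonal_solutions(C, rhs):
    """Approximate MME solutions from diagonal elements alone."""
    return [r / C[i][i] for i, r in enumerate(rhs)]

# Illustrative 3 x 3 coefficient matrix and right-hand side.
C = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
rhs = [8.0, 10.0, 6.0]

approx = diagonal_solutions(C, rhs)   # [2.0, 2.0, 2.0]
```

The quadratics in these approximate solutions are then equated to expectations that involve only the diagonal inverses, which is what makes the method cheap.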

The inverse of V is proportional to I_n for most models if all
variances and covariances except the residual variance are assumed to 
be zero for the purpose of simplifying calculations. If MIVQUE is 
applied in this case the resulting estimates are MIVQUE if all 
(co) variances except the error variance are close to zero relative to 
the error variance. This method was suggested by Rao (1971) to help 
improve computational feasibility. However, these estimates have been 
shown to be poor relative to ANOVA when variances other than the error 
variance are as large as eight times the size of the error variance for 
some hierarchical designs (Brocklebank and Giesbrecht, 1984). The
extent to which this is true for the animal model is not known. 

Grimes and Harvey (1980) extended the use of symmetric sums of 
products (SSP) of Koch (1967, 1968) to the animal model. They called 
their method symmetric differences squared (SDS). They compared the 
standard errors of SDS and ANOVA for a random paternal half sib 
analysis using simulated data and found that SDS was slightly less 
efficient than ANOVA for this case. Grimes and Harvey (1980) also 
used simulation of a maternal effects model to evaluate the standard 
errors of SDS estimates of variances due to direct genic effects,



maternal genic effects, permanent maternal environmental effects, and
residual effects, and the covariance between direct and maternal genic
effects computed from data simulated using mating designs A and B 
described by Thompson (1976). They found that the standard errors of 
the estimates were quite large even for data sets of 1,600 animals 
resulting from simulating 200 A or B sets. Grimes and Harvey (1980) 
demonstrated the computational feasibility of SDS by using it to 
estimate the same (co)variances as in their maternal effects simulation 
described above from the weight and gain traits of 1,780 mice in a 
random mating control population. This procedure required very little 
computer storage but used considerable computer time because of the 
calculation of 1,583,310 symmetric differences squared and their 
expectations.
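The 1,583,310 quantities mentioned above are simply all n(n-1)/2 distinct pairwise squared differences for n = 1,780 animals. A deliberately naive sketch of the SDS computation for a small record vector, with the pair count checked against the mice data:

```python
# All symmetric differences squared (y_i - y_j)^2 for i < j, as used by
# Grimes and Harvey (1980). Quadratic in n, which is why the full mice
# analysis was expensive in computer time.

def symmetric_differences_squared(y):
    n = len(y)
    return [(y[i] - y[j]) ** 2 for i in range(n) for j in range(i + 1, n)]

sds = symmetric_differences_squared([3.0, 1.0, 4.0])
# pairs (3,1), (3,4), (1,4) -> [4.0, 1.0, 9.0]

n = 1780
n_pairs = n * (n - 1) // 2            # 1,583,310 for the mice data
```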

Christian (1980) showed how the computer time requirements of SDS 
can be reduced by summing groups of symmetric differences squared that 
have the same expectation and then computing the expectation of the sum 
of each group of SDS. These sums and their expectations are equated, 
and the resulting equations solved to obtain estimates of (co)variance
components. Christian (1980) proved that this method is equivalent to 
SDS. Computer time was less because fewer multiplication operations 
were needed. The number of addition operations required for this 
method was not different from SDS. In addition, Christian (1980) 
suggested that SDS weighted by the inverse of the error variance- 
covariance matrix among SDS (WSDS) would correct for the 
inquadmissibility (inefficiency) of SDS. However, to complete the 
analysis by this method as Christian described, it would require 



computational steps proportional to n raised to the fourth power, where
n is the number of observations.
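Christian's grouping can be sketched as follows: SDS from pairs with the same relationship, and hence the same expectation, are summed first, so only one expectation per category is needed instead of one per pair. The relationship labels and records below are illustrative.

```python
# Group symmetric differences squared by relationship category before taking
# expectations, in the spirit of Christian (1980).

from collections import defaultdict

def group_sds(records, relationship):
    """Sum (y_i - y_j)^2 within each relationship category.

    `relationship(i, j)` returns a hashable category label for the pair.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(records)
    for i in range(n):
        for j in range(i + 1, n):
            cat = relationship(i, j)
            sums[cat] += (records[i] - records[j]) ** 2
            counts[cat] += 1
    return dict(sums), dict(counts)

# Toy example: animals 0 and 1 are full sibs; all other pairs unrelated.
rel = lambda i, j: "full-sib" if {i, j} == {0, 1} else "unrelated"
sums, counts = group_sds([2.0, 4.0, 7.0], rel)
# sums == {"full-sib": 4.0, "unrelated": 34.0}
```

Grouping saves multiplications in forming expectations, while the number of additions over pairs is unchanged, consistent with the comparison above.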
COMPUTING A 

The numerator relationship matrix (A) is needed for SDS, WSDS, 
REML, MIVQUE and diagonal MIVQUE under the additive and nonadditive 
genetic animal model. The computation and storage of this matrix can be 
costly. Hudson et al. (1982) described a computer algorithm that can 
compute and use A by storing only the diagonal elements and the nonzero 
off diagonal elements of A. This algorithm requires storage for three 
vectors of order n and two vectors of order equal to the number of 
nonzero elements of A. They found that only 6.6 to 22.6% of the 
n(n+l)/2 elements of the upper triangle of A were nonzero for samples 
from five populations of sires. 
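A small sketch of how A itself can be built by the familiar tabular recursion (with animals coded so parents precede their offspring, and -1 marking an unknown parent). This produces the dense matrix; a sparse scheme like that of Hudson et al. (1982) would store only the nonzero entries. The pedigree below is illustrative.

```python
# Tabular method for Wright's numerator relationship matrix A:
#   A[i][j] = 0.5 * (A[j][sire_i] + A[j][dam_i])  for j < i
#   A[i][i] = 1 + 0.5 * A[sire_i][dam_i]          (inbreeding term)

def numerator_relationship(pedigree):
    """pedigree[i] = (sire, dam) indices, -1 if unknown. Returns A."""
    n = len(pedigree)
    A = [[0.0] * n for _ in range(n)]
    def rel(j, p):                       # A[j][p], taken as 0 for unknown parent
        return A[j][p] if p >= 0 else 0.0
    for i in range(n):
        s, d = pedigree[i]
        for j in range(i):
            A[i][j] = A[j][i] = 0.5 * (rel(j, s) + rel(j, d))
        A[i][i] = 1.0 + 0.5 * (A[s][d] if s >= 0 and d >= 0 else 0.0)
    return A

# Two unrelated parents (0, 1) and two full-sib offspring (2, 3).
A = numerator_relationship([(-1, -1), (-1, -1), (0, 1), (0, 1)])
# A[2][3] == 0.5 (full sibs), A[0][2] == 0.5 (parent-offspring)
```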
UNWEIGHTED MIVQUE 

In addition to computational feasibility, it would be convenient 
if methods of estimating (co)variance components did not depend on the 
assignment of prior values. Methods such as REML and ML do not depend 
on the assignment of prior values. Such methods are iterative and can 
require large amounts of computer time when more than 500 to 1,000 
parents are included in a data set, especially when the animal model is
applied. MIVQUE is not computationally feasible under the animal model 
with large data sets, and it depends on prior values for its optimal or 
near-optimal properties. 

Rao (1971) suggested that MIVQUE could be applied with all prior 
(co)variances assigned a value of one including the residual variance 
so that the experimenter would not have to assign prior values. This 



method will be abbreviated MIVQUE(1) in the remainder of this paper.
However, this method would not be computationally feasible for the 
animal model with large data sets. 

Brocklebank and Giesbrecht (1984) compared MIVQUE(1) to ANOVA for
4,225 different parameter combinations involving 15 different 2 stage
nested designs. For all but one of the 15 designs, MIVQUE(1) resulted
in lower standard errors than did ANOVA for more than one-half of the 
parameter combinations. 



CONCLUSIONS 

REML or MIVQUE with reasonable priors yield efficient estimates of 
(co)variance components. However, when the animal model is appropri- 
ate, estimates by REML and MIVQUE require too much computer time for 
large data sets with greater than 500 to 1,000 parents. For additive 
genetic models, Quaas and Pollack (1980) described a reduced animal 
model equivalent to the animal model that has MME only for parents of 
progeny with records. This reduces the size of the MME needed for 
obtaining BLUP estimates of genic values and for obtaining genic (co)-
variance components (Henderson, 1985a,b,c). However, with today's
computers it appears not to be feasible to obtain REML or MIVQUE under 
even the reduced animal model as evidenced by the choices made in the 
studies of Hudson and Kennedy (1985) and Kennedy et al. (1985). There- 
fore, the breeder is forced to choose an approximate model and use an 
efficient method, such as REML as it appears that Kennedy et al. (1985) 
did, or choose the animal model or RAM and a less efficient, more 
computationally feasible method for estimating variance components. 
Hopefully, in the future with improving computer technology and innova- 
tive computing shortcuts, this choice will not have to be made. 

There is a need to find a computationally feasible algorithm for 
computing WSDS, and to compare the efficiency of WSDS with other 
computationally feasible methods such as MIVQUE(0), SDS, and diagonal 
MIVQUE under the animal model. 




THEORY 
THE EQUIVALENCE OF MIVQUE AND WEIGHTED SYMMETRIC DIFFERENCES SQUARED 

It is shown in this section that Symmetric Differences Squared 
(SDS) (Grimes and Harvey, 1980) weighted by the inverse of the 
variance-covariance matrix among squared differences (Y) is MIVQUE 
(Rao, 1971) when the true parameters are used to construct Var(Y). 
The idea of weighting SDS to obtain more efficient estimates of 
variance components was suggested by other workers (Forthofer and Koch, 
1974; Christian, 1980). 

Forthofer and Koch (1974) suggested that Symmetric Sums of 
Products (SSP) (Koch, 1967) might be weighted in some way to obtain 
more efficient estimates of variance components. Grimes and Harvey 
(1980) showed that SDS estimates are equivalent to SSP estimates (Koch, 
1967) for the one-way classification random model. Christian (1980) 
suggested that SDS could be corrected for inadmissibility 
(inefficiency) by weighting by the inverse of the error variance- 
covariance matrix of Y. While MIVQUE is appropriate for both mixed and 
random models, SDS was defined only for the random model (Grimes and 
Harvey, 1980). Therefore, only a random model need be considered when 
showing the equivalence of MIVQUE and weighted SDS. 

The linear model for the phenotypes or observations to be assumed 
is 

y = 1_n μ + Σ_{i=1}^{c} u_i + e,  [1] 

where: 

μ is a fixed constant, 
n is the total number of observations, 
y is an n x 1 vector of observations, 
1_n is an n x 1 vector of 1's, 
c is the number of random sources of variation not including e, and 
u_i and e are n x 1 normal random vectors. 

The first and second moments of the random effects are: 

E(e) = E(u_i) = 0, 
Var(u_i) = V_i σ_i^2, 
Var(e) = I_n σ_e^2, 
Cov(u_i, e') = 0, 
Cov(u_i, u_j') = 0 if i ≠ j, and 

Var(y) = Σ_{i=1}^{c} V_i σ_i^2 + I_n σ_e^2,  [2] 

where the V_i and I_n are known matrices. For example, V_1 might be 
Wright's numerator relationship matrix and V_2 might be the dominance 
relationship matrix. The σ_i^2 and σ_e^2 represent unknown variance 
and covariance components. It should be noted that elements within a 
set of random effects can be correlated in many different ways. 
However, correlations among random sets of effects are assumed to be 
zero. 

First, the equivalence of MIVQUE and weighted SDS will be shown 
for a case where the observations are uncorrelated. When all variances 
and covariances except the residual error variance are assumed to be 
zero, the observations are uncorrelated. In the second part of the 

proof, it is shown that weighted SDS and MIVQUE are equivalent for 
cases in which correlations exist among observations in y. The vector 
of observations y is transformed to a random vector y* whose elements 
are uncorrelated. It is shown that weighted SDS on y* results in the 
same estimates of variance components as weighted SDS on y. Then, 
applying the first part of the proof for uncorrelated observations in 
y, weighted SDS on y* is equivalent to MIVQUE on y* which is equivalent 
to MIVQUE on y. 

The proof of the equivalence of weighted SDS and MIVQUE can be 
divided into the following steps. 

1. The vector of symmetric differences squared Y is defined using 
slightly different notation than that of Grimes and Harvey 
(1980). Y is defined as a vector of squares (HY_1) minus two 
times a vector of crossproducts (Y_2). 

2. The expectation of Y is given. These expectations are needed 
to set up the SDS and weighted SDS equations. 

3. The equations to solve to obtain estimates of variance 
components by SDS (unweighted) are given. 

4. The variance-covariance matrix for Y is given. The inverse 
of this matrix is needed to obtain estimates by weighted SDS. 

5. The inverse of the error variance-covariance matrix among 
differences squared in Y is given. Note that this is 
proportional to the inverse of the variance-covariance matrix 
of Y if there are no correlations among observations in y. 

6. The equations to solve to obtain estimates of variance 
components by weighted SDS are given. 




7. It will be shown that SDS weighted by the inverse of the error 
variance-covariance matrix of Y (WSDS) is equivalent to 
MIVQUE(0). 

8. Using the results from step 7, it will be shown that SDS 
weighted by the inverse of the total variance-covariance 
matrix is equivalent to MIVQUE. 

1. Definition of Symmetric Differences Squared 

The q = n(n-1)/2 x 1 vector of symmetric differences squared is 
formed by taking all possible unique differences among observations. 
These differences are then squared. Define these differences squared as 

Y = [(y_1-y_2)^2 (y_1-y_3)^2 ... (y_1-y_n)^2 (y_2-y_3)^2 ... (y_2-y_n)^2 ... (y_{n-1}-y_n)^2]', 

which may be written as: 

Y = [(y_k - y_l)^2]  [3] 

for k = 1, 2, ... n-1; l = k+1, k+2, ... n. 

Expand the squares in [3] and express Y as a vector of squares (HY_1) 
minus two times a vector of crossproducts (Y_2) in [4]: 

Y = [y_k^2 - 2y_k y_l + y_l^2] 
  = [y_k^2 + y_l^2] - 2[y_k y_l] 
  = HY_1 - 2Y_2,  [4] 

where: 

Y_1 = [y_1^2 y_2^2 ... y_n^2]', and  [5] 

Y_2 = [y_1y_2 y_1y_3 ... y_1y_n y_2y_3 ... y_{n-1}y_n]'.  [6] 

H is a q x n matrix containing zeros and ones, which specifies the 
elements of Y_1 to be incorporated into Y. It is defined as 



H = [ 1 1 0 0 ... 0 0 
      1 0 1 0 ... 0 0 
      . . . 
      1 0 0 0 ... 0 1 
      0 1 1 0 ... 0 0 
      . . . 
      0 0 0 0 ... 1 1 ].  [7] 

The row of H corresponding to the pair (k, l) has ones in columns k 
and l and zeros elsewhere. 
There are three useful matrix identities involving H that will be 
needed in the proof that follows. These are 

H1_n = 2·1_q,  [8] 

H'1_q = (n-1)1_n, and  [9] 

H'H = (n-2)I_n + 1_n1_n'.  [10] 

Identity [8] can be explained by noting that every row of H has two 
ones and n-2 zeros. Likewise, identity [9] can be explained by noting 
that every column of H has n-1 ones and (n-1)(n-2)/2 zeros. The 
diagonals of [10] are all n-1 because each column of H has n-1 ones and 
(n-1)(n-2)/2 zeros. Each of the off-diagonal elements of [10] is one 
because each pair of columns of H has a one in the same row only one 
time. 
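The identities above are easy to check numerically. The sketch below (our own illustration in NumPy; the helper name build_H is not from the text) constructs H for a small n and verifies [8] through [10].

```python
import numpy as np

def build_H(n):
    """H is q x n, q = n(n-1)/2; the row for pair (k, l) has ones in columns k and l."""
    pairs = [(k, l) for k in range(n) for l in range(k + 1, n)]
    H = np.zeros((len(pairs), n))
    for row, (k, l) in enumerate(pairs):
        H[row, k] = H[row, l] = 1.0
    return H

n = 6
q = n * (n - 1) // 2
H = build_H(n)

# Identity [8]: H 1_n = 2 * 1_q  (each row has exactly two ones)
assert np.array_equal(H @ np.ones(n), 2 * np.ones(q))
# Identity [9]: H' 1_q = (n-1) 1_n  (each column has n-1 ones)
assert np.array_equal(H.T @ np.ones(q), (n - 1) * np.ones(n))
# Identity [10]: H'H = (n-2) I_n + 1_n 1_n'
assert np.array_equal(H.T @ H, (n - 2) * np.eye(n) + np.ones((n, n)))
```

Because the entries of H are zeros and ones, all three checks hold exactly in floating point.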
2. Expectation of Symmetric Differences Squared 

The expected value of Y, E(Y), is needed to set up the SDS and 
weighted SDS equations. E(Y) is written as a function of the unknown 
variance components and the elements of the V_i as follows: 

E(Y) = E(HY_1 - 2Y_2) = HE(Y_1) - 2E(Y_2).  [11] 

The expected value of Y_1 is 

E(Y_1) = 1_n μ^2 + X_1 σ^2,  [12] 

where: 

σ^2 = [σ_1^2 σ_2^2 ... σ_c^2 σ_e^2]',  [13] 

X_1 = [ (v_11)_1 (v_11)_2 ... (v_11)_c 1 
        (v_22)_1 (v_22)_2 ... (v_22)_c 1 
        ... 
        (v_nn)_1 (v_nn)_2 ... (v_nn)_c 1 ],  [14] 

and (v_kk)_i is the k-th diagonal element of V_i. 

The expected value of Y_2 is 

E(Y_2) = 1_q μ^2 + X_2 σ^2,  [15] 

where 

X_2 = [ (v_12)_1     (v_12)_2     ... (v_12)_c     0 
        (v_13)_1     (v_13)_2     ... (v_13)_c     0 
        ... 
        (v_1n)_1     (v_1n)_2     ... (v_1n)_c     0 
        (v_23)_1     (v_23)_2     ... (v_23)_c     0 
        ... 
        (v_2n)_1     (v_2n)_2     ... (v_2n)_c     0 
        ... 
        (v_n-1,n)_1  (v_n-1,n)_2  ... (v_n-1,n)_c  0 ],  [16] 

and (v_kl)_i is the element in the k-th row and l-th column of V_i. 

It should be noted that when there is no inbreeding, X_1 is 

X_1 = 1_n 1_{c+1}'.  [17] 


Substitute [12] and [15] into [11] and E(Y) can be written as: 

E(Y) = H1_n μ^2 + HX_1 σ^2 - 2·1_q μ^2 - 2X_2 σ^2.  [18] 

Substitute [8] into [18] and E(Y) can be written as: 

E(Y) = 2·1_q μ^2 + HX_1 σ^2 - 2·1_q μ^2 - 2X_2 σ^2, 

which, after combining terms and rearrangement, is 

E(Y) = (HX_1 - 2X_2)σ^2, or 

     = Xσ^2, 

where 

X = HX_1 - 2X_2.  [19] 

3. Symmetric Differences Squared Equations 

Estimates of variance and covariance components can be obtained by 
solving 

X'X σ̂^2 = X'Y.  [20] 

It should be emphasized that [20] are the same equations as used by 
Grimes and Harvey (1980) to obtain estimates of variance components. 

4. The Variance-Covariance Matrix Among Symmetric Differences 
Squared 

The inverse of the variance-covariance matrix of Y is needed to 
obtain estimates of variance components by weighted SDS. Define Var(Y) 
as 

Var(Y) = Cov(Y,Y') 
       = [Cov((y_k - y_l)^2, (y_m - y_o)^2)], or 
       = [2(v_km - v_lm - v_ko + v_lo)^2],  [21] 

for k = 1, 2, ... n-1; l = k+1, k+2, ... n, 

and 

m = 1, 2, ... n-1; o = m+1, m+2, ... n, 

where v_km is the element in the k-th row and the m-th column 
of Var(y) = V. 

The inverse of [21] is difficult to represent algebraically because V 
can be any positive definite nonsingular matrix. Fortunately, this 
inverse is not needed to accomplish the objectives of this paper. 



First, the inverse of [21] will be obtained for the special case when V 
is proportional to I_n, which can be written as: 

V ∝ I_n. 

It will then be shown that even when Var(y) is complicated, y can be 
transformed to a y* with a variance proportional to I_n, and that 
estimates of variance and covariance components by MIVQUE and weighted 
SDS are identical whether the analyses are completed on y or y*. 

The variance of Y can be rewritten as 

Var(Y) = Var(HY_1 - 2Y_2) 
       = Cov(HY_1 - 2Y_2, Y_1'H' - 2Y_2') 
       = H Var(Y_1)H' - 2H Cov(Y_1,Y_2') - 2Cov(Y_2,Y_1')H' + 4Var(Y_2).  [22] 

When all variance and covariance components except σ_e^2 are assumed to 
be zero, the second moments involving Y_1 and Y_2 are 

Var(Y_1) = 2σ_e^4 I_n,  [23] 

Var(Y_2) = σ_e^4 I_q,  [24] 

Cov(Y_1,Y_2') = 0, and  [25] 

Cov(Y_2,Y_1') = 0.  [26] 

Substitute [23], [24], [25], and [26] into [22] and the Var(Y) is 

Var(Y) = 2σ_e^4 HH' + 4σ_e^4 I_q 
       = 2σ_e^4 (HH' + 2I_q), or 
       = 2σ_e^4 W, 

when σ_e^2 is the only nonzero prior value, where 

W = HH' + 2I_q.  [27] 

It should be noted that SDS weighted by the inverse of W is equivalent 
to SDS weighted by the inverse of Var(Y) if 

W ∝ Var(Y). 

It should be emphasized that 

Var(Y) ∝ W 

whenever 

Var(y) ∝ I_n. 

This is true if all variance components except σ_e^2 are assumed to be 
zero, or if y is a random vector with variance proportional to I_n. 
5. The Inverse of the Error Variance-Covariance Matrix of Y 

The inverse of W can be written as: 

W^{-1} = (HH' + 2I_q)^{-1}.  [28] 

Now, let us apply equation [17] of Henderson and Searle (1981) to find 
the inverse of the sum of two matrices. Their equation [17] is 

(A + UBU')^{-1} = A^{-1} - A^{-1}U(B^{-1} + U'A^{-1}U)^{-1}U'A^{-1}.  [29] 

If we let A = 2I_q, 

U = H, and 

B = I_n, and substitute into [29], 

W^{-1} can be written as 

W^{-1} = (1/2)I_q - (1/4)H(I_n + (1/2)H'H)^{-1}H'.  [30] 

After rearrangement, [30] is 

W^{-1} = (1/2)[I_q - H(2I_n + H'H)^{-1}H'].  [31] 

Substitute [10] into [31], and observe that 

(2I_n + H'H)^{-1} = (2I_n + (n-2)I_n + 1_n1_n')^{-1}, or after combining terms, 

                  = (nI_n + 1_n1_n')^{-1}.  [32] 

Now we can apply equation [3] of Henderson and Searle (1981) to 
evaluate [32]. Their equation [3] is 

(A + buv')^{-1} = A^{-1} - [b/(1 + bv'A^{-1}u)]A^{-1}uv'A^{-1}.  [33] 

We let 

A = nI_n, 
u = 1_n, 
v' = 1_n', 
b = 1, 

and substitute into [33] to obtain 

(nI_n + 1_n1_n')^{-1} = (1/n)I_n - [1/(1 + n/n)](1/n^2)1_n1_n', 

which after simplification is 

(nI_n + 1_n1_n')^{-1} = (1/n)[I_n - (1/(2n))1_n1_n'].  [34] 

Substitute [34] into [31] to obtain 

W^{-1} = (1/2)[I_q - (1/n)H(I_n - (1/(2n))1_n1_n')H'], 

which simplifies after rearrangement (using H1_n = 2·1_q from [8]) to 

W^{-1} = (1/2)[I_q - (1/n)HH' + (2/n^2)1_q1_q'].  [35] 
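As a numerical sanity check on [35], the following sketch (our own, in NumPy) builds W = HH' + 2I_q for a small n and confirms that the closed-form expression is indeed its inverse.

```python
import numpy as np

n = 7
pairs = [(k, l) for k in range(n) for l in range(k + 1, n)]
q = len(pairs)
H = np.zeros((q, n))
for row, (k, l) in enumerate(pairs):
    H[row, k] = H[row, l] = 1.0

W = H @ H.T + 2 * np.eye(q)                       # [27]
Winv = 0.5 * (np.eye(q) - H @ H.T / n
              + 2.0 / n**2 * np.ones((q, q)))     # [35]
assert np.allclose(W @ Winv, np.eye(q))
```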

6. Weighted Symmetric Differences Squared Equations 

Estimates of variance components by weighted SDS can be obtained 
by solving 

X'W^{-1}X σ̂^2 = X'W^{-1}Y.  [36] 

The solution to [36] yields the same estimates that Christian (1980) 
suggested after correcting SDS for inadmissibility. 

7. The Equivalence of SDS Weighted by the Inverse of the Error 
Variance-Covariance Matrix and MIVQUE(0) 

It will now be shown that the solution to [36] is equivalent to 
MIVQUE if all variances and covariances except σ_e^2 are zero, or if 
Var(y) ∝ I_n. It will first be shown that 

X'W^{-1}X = [tr(MV_i MV_j)],  [37] 

for 

i, j = 1, 2, ... c, c+1, 

where 

V_{c+1} = I_n, 

M = I_n - (1/n)1_n1_n',  [38] 

and [tr(MV_i MV_j)] is a (c+1) x (c+1) matrix that contains the left 
hand side of the equations used by Rao (1971) to obtain MIVQUE(0) 
estimates of variance and covariance components. It will then be shown 
that 

X'W^{-1}Y = [y'MV_i My],  [39] 

for 

i = 1, 2, ... c, c+1, 

where [y'MV_i My] is a (c+1) x 1 vector that is the right hand side of 
the equations used by Rao (1971) to obtain MIVQUE(0) estimates of 
variance components. Let estimates of variance components by MIVQUE(0) 
and SDS weighted by W^{-1} be denoted by σ̃^2 and σ̂^2, respectively, 
where 

σ̂^2 = (X'W^{-1}X)^{-1}X'W^{-1}Y,  [40] 

and 

σ̃^2 = [tr(MV_i MV_j)]^{-1}[y'MV_i My].  [41] 

Therefore, if equalities [37] and [39] are true, then [37] and [39] can 
be substituted into [40] to obtain 

σ̂^2 = [tr(MV_i MV_j)]^{-1}[y'MV_i My], 

which after applying [41] is 

σ̂^2 = σ̃^2. 

Therefore, if equalities [37] and [39] are true and if Var(y) ∝ I_n, 
estimates of variance components by MIVQUE and SDS weighted by W^{-1} 
are equivalent. 

If [35] is substituted into [36], the left hand sides of [36] are 

X'W^{-1}X = (1/2)(X_1'H' - 2X_2')[I_q - (1/n)HH' + (2/n^2)1_q1_q'](HX_1 - 2X_2), 

which after expanding through the center parentheses can be written as: 

= (1/2)[(X_1'H' - 2X_2')(HX_1 - 2X_2) 
       - (1/n)(X_1'H' - 2X_2')HH'(HX_1 - 2X_2) 
       + (2/n^2)(X_1'H' - 2X_2')1_q1_q'(HX_1 - 2X_2)].  [42] 

The first term within the brackets of [42] is 

(X_1'H' - 2X_2')(HX_1 - 2X_2) = X_1'H'HX_1 - 2X_2'HX_1 - 2X_1'H'X_2 + 4X_2'X_2, 

which after substituting [10] for H'H simplifies to 

(n-2)X_1'X_1 + X_1'1_n1_n'X_1 - 2X_2'HX_1 - 2X_1'H'X_2 + 4X_2'X_2.  [43] 

It should be noted that [43] is the same as the left hand side of 
the SDS equations used by Grimes and Harvey (1980) to obtain estimates 
of variance and covariance components. 

The second term within the brackets of [42] is 1/n multiplied by 

(X_1'H' - 2X_2')HH'(HX_1 - 2X_2) = X_1'H'HH'HX_1 - 2X_1'H'HH'X_2 
                                 - 2X_2'HH'HX_1 + 4X_2'HH'X_2, 

which after substituting [10] for H'H simplifies to 

(n-2)^2 X_1'X_1 + (3n-4)X_1'1_n1_n'X_1 - 2(n-2)X_1'H'X_2 
- 4X_1'1_n1_q'X_2 - 2(n-2)X_2'HX_1 - 4X_2'1_q1_n'X_1 + 4X_2'HH'X_2.  [44] 

The third term within the brackets of [42] is 2/n^2 multiplied by 

(X_1'H' - 2X_2')1_q1_q'(HX_1 - 2X_2) = X_1'H'1_q1_q'HX_1 - 2X_1'H'1_q1_q'X_2 - 2X_2'1_q1_q'HX_1 
                                     + 4X_2'1_q1_q'X_2, 

which after substituting [9] for H'1_q simplifies to 

(X_1'H' - 2X_2')1_q1_q'(HX_1 - 2X_2) = (n-1)^2 X_1'1_n1_n'X_1 - 2(n-1)X_1'1_n1_q'X_2 

- 2(n-1)X_2'1_q1_n'X_1 + 4X_2'1_q1_q'X_2.  [45] 

We can substitute [43], [44], and [45] into [42] to obtain 

X'W^{-1}X = (1/2){(n-2)X_1'X_1 + X_1'1_n1_n'X_1 - 2X_2'HX_1 - 2X_1'H'X_2 + 4X_2'X_2 

          - (1/n)[(n-2)^2 X_1'X_1 + (3n-4)X_1'1_n1_n'X_1 - 2(n-2)X_1'H'X_2 
          - 4X_1'1_n1_q'X_2 - 2(n-2)X_2'HX_1 - 4X_2'1_q1_n'X_1 + 4X_2'HH'X_2] 

          + (2/n^2)[(n-1)^2 X_1'1_n1_n'X_1 - 2(n-1)X_1'1_n1_q'X_2 

          - 2(n-1)X_2'1_q1_n'X_1 + 4X_2'1_q1_q'X_2]},  [46] 

or 

X'W^{-1}X = ((n-2)/n)X_1'X_1 + (1/n^2)X_1'1_n1_n'X_1 - (2/n)X_2'HX_1 - (2/n)X_1'H'X_2 + 2X_2'X_2 
          + (2/n^2)X_1'1_n1_q'X_2 + (2/n^2)X_2'1_q1_n'X_1 - (2/n)X_2'HH'X_2 + (4/n^2)X_2'1_q1_q'X_2,  [47] 

which after further rearrangement is 

X'W^{-1}X = (X_1'X_1 + 2X_2'X_2) 

          - (2/n)(X_2'H + X_1')(H'X_2 + X_1) 

          + (1/n^2)(X_1'1_n + 2X_2'1_q)(1_n'X_1 + 2·1_q'X_2).  [48] 

The following matrix equalities are useful: 

[tr(V_iV_j)] = X_1'X_1 + 2X_2'X_2,  [49] 

for i, j = 1, 2, ... c+1, 

[V_i 1_n] = H'X_2 + X_1,  [50] 

and 

[1_n'V_i 1_n] = 1_n'X_1 + 2·1_q'X_2.  [51] 

Equalities [49] through [51] are proven in Appendix A. Substitute [49] 
through [51] into [48] to obtain 

X'W^{-1}X = [tr(V_iV_j)] - (2/n)[tr(V_i1_n1_n'V_j)] + (1/n^2)[tr(1_n1_n'V_i1_n1_n'V_j)], which 

after rotating traces is 

= [tr(V_iV_j)] - (1/n)[tr(V_i1_n1_n'V_j)] - (1/n)[tr(1_n1_n'V_iV_j)] 

+ (1/n^2)[tr(1_n1_n'V_i1_n1_n'V_j)], or 

X'W^{-1}X = [tr(V_iV_j - (1/n)V_i1_n1_n'V_j - (1/n)1_n1_n'V_iV_j + (1/n^2)1_n1_n'V_i1_n1_n'V_j)],  [52] 

          = [tr((V_i - (1/n)1_n1_n'V_i)(V_j - (1/n)1_n1_n'V_j))]. 

Finally, substitute [38] into [52] to obtain 

X'W^{-1}X = [tr(MV_i MV_j)], which proves [37]. 
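Equality [37] can be checked numerically. The sketch below (our own illustration; the V_i are arbitrary symmetric matrices standing in for relationship matrices) builds X_1, X_2, X, and W^{-1} for a small n and compares X'W^{-1}X with [tr(MV_i MV_j)].

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 6, 2
pairs = [(k, l) for k in range(n) for l in range(k + 1, n)]
q = len(pairs)
H = np.zeros((q, n))
for row, (k, l) in enumerate(pairs):
    H[row, k] = H[row, l] = 1.0

# arbitrary symmetric matrices playing the role of V_1, ..., V_c, plus V_{c+1} = I_n
Vs = []
for _ in range(c):
    A = rng.normal(size=(n, n))
    Vs.append(A + A.T)
Vs.append(np.eye(n))

X1 = np.column_stack([np.diag(V) for V in Vs])            # [14]: diagonals of each V_i
X2 = np.array([[V[k, l] for V in Vs] for k, l in pairs])  # [16]: off-diagonals of each V_i
X = H @ X1 - 2 * X2                                       # [19]
Winv = 0.5 * (np.eye(q) - H @ H.T / n
              + 2.0 / n**2 * np.ones((q, q)))             # [35]

M = np.eye(n) - np.ones((n, n)) / n                       # [38]
lhs = X.T @ Winv @ X
rhs = np.array([[np.trace(M @ Vi @ M @ Vj) for Vj in Vs] for Vi in Vs])
assert np.allclose(lhs, rhs)                              # equality [37]
```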
Substitute [35] into the right hand sides of [36] to obtain 

X'W^{-1}Y = (1/2)(X_1'H' - 2X_2')[I_q - (1/n)HH' + (2/n^2)1_q1_q'](HY_1 - 2Y_2), 

which after expanding through the center parentheses becomes 

X'W^{-1}Y = (1/2)[(X_1'H' - 2X_2')(HY_1 - 2Y_2) 

          - (1/n)(X_1'H' - 2X_2')HH'(HY_1 - 2Y_2) 

          + (2/n^2)(X_1'H' - 2X_2')1_q1_q'(HY_1 - 2Y_2)].  [53] 

The first term within the brackets of [53] is 

(X_1'H' - 2X_2')(HY_1 - 2Y_2) = X_1'H'HY_1 - 2X_2'HY_1 - 2X_1'H'Y_2 + 4X_2'Y_2, 

which after substituting [10] for H'H simplifies to 

(n-2)X_1'Y_1 + X_1'1_n1_n'Y_1 - 2X_2'HY_1 - 2X_1'H'Y_2 + 4X_2'Y_2.  [54] 

It should be noted that [54] is the same as the right hand side of the 
SDS equations used by Grimes and Harvey (1980) to obtain estimates of 
variance and covariance components. 

The second term within the brackets of [53] is 1/n multiplied by 

(X_1'H' - 2X_2')HH'(HY_1 - 2Y_2) = X_1'H'HH'HY_1 - 2X_1'H'HH'Y_2 
                                 - 2X_2'HH'HY_1 + 4X_2'HH'Y_2, 

which after substituting [10] for H'H simplifies to 

(n-2)^2 X_1'Y_1 + (3n-4)X_1'1_n1_n'Y_1 - 2(n-2)X_1'H'Y_2 
- 4X_1'1_n1_q'Y_2 - 2(n-2)X_2'HY_1 - 4X_2'1_q1_n'Y_1 + 4X_2'HH'Y_2.  [55] 

The third term within the brackets of [53] is 2/n^2 multiplied by 

(X_1'H' - 2X_2')1_q1_q'(HY_1 - 2Y_2) = X_1'H'1_q1_q'HY_1 - 2X_1'H'1_q1_q'Y_2 - 2X_2'1_q1_q'HY_1 
                                     + 4X_2'1_q1_q'Y_2, 

which after substituting [9] for H'1_q simplifies to 

(n-1)^2 X_1'1_n1_n'Y_1 - 2(n-1)X_1'1_n1_q'Y_2 

- 2(n-1)X_2'1_q1_n'Y_1 + 4X_2'1_q1_q'Y_2.  [56] 

We can substitute [54] through [56] into [53] to obtain (after 
simplification) 

X'W^{-1}Y = ((n-2)/n)X_1'Y_1 + (1/n^2)X_1'1_n1_n'Y_1 - (2/n)X_2'HY_1 - (2/n)X_1'H'Y_2 + 2X_2'Y_2 
          + (2/n^2)X_1'1_n1_q'Y_2 + (2/n^2)X_2'1_q1_n'Y_1 - (2/n)X_2'HH'Y_2 + (4/n^2)X_2'1_q1_q'Y_2,  [57] 

which after further rearrangement is 

X'W^{-1}Y = (X_1'Y_1 + 2X_2'Y_2) 

          - (2/n)(X_2'H + X_1')(H'Y_2 + Y_1) 

          + (1/n^2)(X_1'1_n + 2X_2'1_q)(1_n'Y_1 + 2·1_q'Y_2).  [58] 

The following matrix equalities are useful: 

[y'V_iy] = X_1'Y_1 + 2X_2'Y_2,  [59] 

yy'1_n = H'Y_2 + Y_1, and  [60] 

1_n'yy'1_n = 1_n'Y_1 + 2·1_q'Y_2.  [61] 

Proofs of [59] through [61] are in Appendix A. 
We can substitute [59] through [61] into [58] to obtain 

X'W^{-1}Y = [y'V_iy - (2/n)1_n'V_iyy'1_n + (1/n^2)1_n'V_i1_n1_n'yy'1_n] 

          = [y'V_iy - (1/n)y'V_i1_n1_n'y - (1/n)y'1_n1_n'V_iy + (1/n^2)y'1_n1_n'V_i1_n1_n'y] 

          = [y'V_i(y - (1/n)1_n1_n'y) - (1/n)y'1_n1_n'V_i(y - (1/n)1_n1_n'y)] 

          = [(y'V_i - (1/n)y'1_n1_n'V_i)(y - (1/n)1_n1_n'y)] 

          = [y'(I_n - (1/n)1_n1_n')V_i(I_n - (1/n)1_n1_n')y].  [62] 

Finally, substitute [38] into [62] to obtain 

X'W^{-1}Y = [y'MV_i My], 

Therefore, it has been shown that the equations for computing estimates 
of variance and covariance components by MIVQUE are equivalent to the 
SDS equations weighted by the inverse of Var(Y) if Var(y) ∝ I_n. Since 
the equations are equivalent, so are the estimates of variance 
components. 
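Equality [39] can be verified the same way as [37]. The sketch below (again our own, with arbitrary symmetric matrices standing in for the V_i) compares X'W^{-1}Y with [y'MV_i My] for a random y.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 6, 2
pairs = [(k, l) for k in range(n) for l in range(k + 1, n)]
q = len(pairs)
H = np.zeros((q, n))
for row, (k, l) in enumerate(pairs):
    H[row, k] = H[row, l] = 1.0

Vs = []
for _ in range(c):
    A = rng.normal(size=(n, n))
    Vs.append(A + A.T)
Vs.append(np.eye(n))                                      # V_{c+1} = I_n

X1 = np.column_stack([np.diag(V) for V in Vs])            # [14]
X2 = np.array([[V[k, l] for V in Vs] for k, l in pairs])  # [16]
X = H @ X1 - 2 * X2                                       # [19]
Winv = 0.5 * (np.eye(q) - H @ H.T / n
              + 2.0 / n**2 * np.ones((q, q)))             # [35]

y = rng.normal(size=n)
Y1 = y**2                                                 # [5]
Y2 = np.array([y[k] * y[l] for k, l in pairs])            # [6]
Y = H @ Y1 - 2 * Y2                                       # [4]

M = np.eye(n) - np.ones((n, n)) / n                       # [38]
lhs = X.T @ Winv @ Y
rhs = np.array([y @ M @ V @ M @ y for V in Vs])
assert np.allclose(lhs, rhs)                              # equality [39]
```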

8. The Equivalence of SDS Weighted by the Inverse of the Total 
Variance-Covariance Matrix and MIVQUE 
Now it will be shown that MIVQUE and weighted SDS are equivalent 
even when Var(y) is not proportional to I_n. One strategy is to obtain 
a transformation of y, say Ty, such that Var(Ty) = I_n, and then show 
that weighted SDS completed on Ty is equivalent to MIVQUE. This is the 
procedure to be followed here. Let V = LDL', where L'L = I_n and D is a 
diagonal matrix of positive real numbers. Then, the inverse of V is 

V^{-1} = LD^{-1}L'.  [63] 

If we obtain the transformation of y -> Ty, and let 

T = D^{-1/2}L',  [64] 

V^{-1} can be written as a function of T as follows: 

V^{-1} = LD^{-1/2}D^{-1/2}L' = T'T.  [65] 

The variance of Ty is 

Var(Ty) = TVT' 
        = D^{-1/2}L'(LDL')LD^{-1/2} = I_n, or 

        = Σ_{i=1}^{c} σ_i^2 TV_iT' + σ_e^2 D^{-1}.  [66] 

There are two important points to be made here. First, Var(Ty) ∝ I_n, 
so that the results given previously on the equivalence of weighted SDS 
and MIVQUE are valid when the analysis is on Ty. Second, the Var(Ty) 
in [66] is written in terms of the same unknown parameters as the 
Var(y) in [2]. 
Let us denote the vector of transformed observations as 

y* = Ty = [y*_1 y*_2 ... y*_n]', and  [67] 

the vector of symmetric differences squared as 

Y* = [(y*_k - y*_l)^2], for  [68] 

k = 1, 2, ... n-1; l = k+1, k+2, ... n. 

The columns of X*_1 are now the diagonals of TV_iT' and D^{-1} instead 
of the diagonals of V_i and I_n. Also, the columns of X*_2 are the q 
off-diagonal elements of TV_iT' and D^{-1} instead of the off-diagonal 
elements of V_i and I_n. Hence, 

Var(Y*) = 2W*, where W* = HH' + 2I_q, and  [69] 

W*^{-1} = (1/2)[I_q - (1/n)HH' + (2/n^2)1_q1_q'].  [70] 

W*^{-1} [70] is identical to W^{-1} [35] and W* [69] is identical to W [27]. 
The weighted SDS equations, using y, are 

X'[Var(Y)]^{-1}X σ̂^2 = X'[Var(Y)]^{-1}Y,  [71] 

and the weighted SDS equations using y* are 

X*'W*^{-1}X* σ̂^2 = X*'W*^{-1}Y*.  [72] 

The solutions to equations [71] and [72] are identical. 
Now let 

M* = I_n - T1_n(1_n'T'T1_n)^{-1}1_n'T', 

which after substituting [65] for T'T is 

M* = I_n - T1_n(1_n'V^{-1}1_n)^{-1}1_n'T'.  [73] 

By equality [37] the left hand side of the weighted SDS equations 
using y* is 

X*'W*^{-1}X* = [tr(M*TV_iT'M*TV_jT')], 

which after rotation of traces is 

= [tr(T'M*TV_iT'M*TV_j)],  [74] 

and since 

T'M*T = T'(I_n - T1_n(1_n'V^{-1}1_n)^{-1}1_n'T')T 
      = T'T - T'T1_n(1_n'V^{-1}1_n)^{-1}1_n'T'T, or 
      = V^{-1} - V^{-1}1_n(1_n'V^{-1}1_n)^{-1}1_n'V^{-1} = P,  [75] 

then 

X*'W*^{-1}X* = [tr(PV_iPV_j)],  [76] 

for i, j = 1, 2, ... c+1, 

which is the left hand side of the equations used to compute MIVQUE 
(Rao, 1971) for model [1]. Now by equality [39] the right hand side of 
the weighted SDS equations using y* is 

X*'W*^{-1}Y* = [y'T'M*TV_iT'M*Ty], 

which after substituting [75] for T'M*T is 

X*'W*^{-1}Y* = [y'PV_iPy], 

which is the right hand side of the MIVQUE equations (Rao, 1971). It 
has therefore been shown by these derivations that weighting the 
symmetric differences squared by the inverse of the total variance- 
covariance matrix to estimate the variance components yields estimates 
that are minimum variance unbiased estimates if the prior values used 
are near the true parameters. It must be emphasized that the 
transformation of y to y* was simply a device used to prove the above; 
it would not be recommended as a computational approach in practice. 
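The key algebraic step, T'M*T = P in [75], can be checked numerically. In the sketch below (our own; V is an arbitrary positive definite matrix, and an eigendecomposition supplies the factorization V = LDL' with L'L = I_n), we verify [65], the transformed variance in [66], and [75].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)            # arbitrary positive definite V
evals, L = np.linalg.eigh(V)           # V = L diag(evals) L', with L'L = I_n
T = np.diag(evals**-0.5) @ L.T         # [64]: T = D^{-1/2} L'
one = np.ones((n, 1))
Vinv = np.linalg.inv(V)

assert np.allclose(T.T @ T, Vinv)              # [65]
assert np.allclose(T @ V @ T.T, np.eye(n))     # Var(Ty) = I_n, as in [66]

Mstar = np.eye(n) - T @ one @ np.linalg.inv(one.T @ Vinv @ one) @ one.T @ T.T  # [73]
P = Vinv - Vinv @ one @ np.linalg.inv(one.T @ Vinv @ one) @ one.T @ Vinv
assert np.allclose(T.T @ Mstar @ T, P)         # [75]
```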






COMPUTATIONAL REQUIREMENTS OF WSDS OR MIVQUE(0) 

The computational requirements to complete an analysis by MIVQUE(0) 
or WSDS will now be discussed. The left and right hand sides of the 
MIVQUE(0) or WSDS equations are as follows: 

[tr(MV_iMV_j)] = [tr(V_iV_j)] - (2/n)[1_n'V_iV_j1_n] 
               + (1/n^2)[(1_n'V_i1_n)(1_n'V_j1_n)],  [77] 

for i, j = 1, 2, ... c+1, with V_{c+1} = I_n (so that tr(V_iV_{c+1}) = tr(V_i), 
tr(V_{c+1}^2) = n, 1_n'V_iV_{c+1}1_n = 1_n'V_i1_n, and 1_n'V_{c+1}1_n = n), 

and 

[y'MV_iMy] = [y'V_1y y'V_2y ... y'V_cy y'y]' 
           - 2ȳ[1_n'V_1y 1_n'V_2y ... 1_n'V_cy 1_n'y]' 
           + ȳ^2[1_n'V_11_n 1_n'V_21_n ... 1_n'V_c1_n n]'.  [78] 

The multiplications, additions, and storage needed to obtain and save 
the elements in the right hand sides of [77] and [78] are in table 1. 
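Each element of [78] can be computed without forming M explicitly, since y'MV_iMy = y'V_iy - 2ȳ(1_n'V_iy) + ȳ^2(1_n'V_i1_n). A short numerical check of this identity (our own sketch, with an arbitrary symmetric V standing in for a V_i):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.normal(size=(n, n))
V = A + A.T                            # arbitrary symmetric V_i
y = rng.normal(size=n)
one = np.ones(n)
M = np.eye(n) - np.ones((n, n)) / n    # [38]
ybar = y.mean()

direct = y @ M @ V @ M @ y
via_78 = y @ V @ y - 2 * ybar * (one @ V @ y) + ybar**2 * (one @ V @ one)
assert np.isclose(direct, via_78)
```

The identity follows from My = y - ȳ1_n, which is why only the element types listed in table 1 are needed.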



Table 1. Storage, multiplications and additions for each type 
of element required to set up the MIVQUE(0) or WSDS equations 
when relationship categories are not considered. 

Type of          Number of       Total multiplications   Total additions 
element          elements of     for all elements        for all elements 
                 this type^a     of this type^b          of this type 

tr(V_iV_j)       c(c+1)/2        n(n+1)c(c+1)/4          n(n+1)c(c+1)/4 
tr(V_i)          c               0                       nc 
n                1               0                       n 
1_n'V_iV_j1_n    c(c+1)/2        nc(c+1)/2               cn^2 + nc(c+1)/2 
1_n'V_i1_n       c               0                       cn(n+1)/2 
y'V_iy           c               cn(n+1)/2               cn(n+1)/2 
1_n'V_iy         c               nc                      n(n+1)c 
y'y              1               n                       n 
ȳ                1               1                       n 

^a c is the number of sets of random effects other than error. 
^b n is the total number of observations. 



Computing [tr(MV_iMV_j)] and [y'MV_iMy] from the elements in Table 1 
and solving for the estimates of variance components also require 
computer time and storage, but these requirements are small relative to 
the requirements needed to obtain the elements represented in Table 1. 
Additional computer resources are needed to compute and save the V_i 
matrices and to store y. 

For animal breeding problems, the elements of V_i are usually 
coefficients of relationship which specify the fraction of the i-th 
variance or covariance component's contribution to the covariance 
between two members of y. The most widely used V_i matrix in animal 
breeding problems is Wright's numerator relationship matrix (A). If 
inbreeding has occurred, a strictly additive genetic model is usually 
invoked as an approximation because the nonadditive effects under 
inbreeding are difficult to interpret (Cockerham, 1954). If no 
inbreeding has occurred, nonadditive effects such as dominance and 
epistasis can also be considered. Maternal effects and the covariance 
between direct and maternal effects can be important in animals. 
Matrices of dominance and epistatic coefficients of relationship can be 
constructed from Wright's numerator relationship matrix (A) if there is 
no inbreeding (Henderson, 1985). A can be saved for later use by 
storing only its p nonzero elements, where p <= n(n+1)/2 and p is the 
number of nonzero elements in the upper triangle of A (Hudson et al., 
1982). Three vectors of order n and one vector of order p, in addition 
to the p-vector containing the nonzero elements of A, are needed to 
locate the nonzero elements in A. This process can save considerable 
computer storage. Hudson et al. (1982) found that only 6 to 22.6% of 
the n(n+1)/2 upper triangular elements of A were nonzero for five 
breeds of dairy sires. 

The multiplications required to complete the WSDS or MIVQUE(0) 
equations can be reduced if the related pairs of individuals having 
records in y can be classified into a relatively small number of 
relationship categories. For example, with the mice data of Grimes and 
Harvey (1980), there were 107,398 related pairs of individuals 
distributed among only 39 relationship categories. Christian (1980) 
showed how to take advantage of these relationship categories to reduce 
the computer time required to compute SDS by reducing the number of 
multiplications needed. It will now be shown how to reduce the number 
of multiplications required to do WSDS or MIVQUE(0) when there are a 
relatively small number of relationship categories. 

If we can assume that no inbreeding has occurred, X_1 = 1_n1_{c+1}' [17] 
can be substituted into [47] and [57] to obtain (after simplification) 

X'W^{-1}X = (n-1)1_{c+1}1_{c+1}' - (2/n)X_2'1_q1_{c+1}' - (2/n)1_{c+1}1_q'X_2 

          + 2X_2'X_2 - (2/n)X_2'HH'X_2 + (4/n^2)X_2'1_q1_q'X_2,  [79] 

and 

X'W^{-1}Y = (y'y - nȳ^2)1_{c+1} + 2X_2'Y_2 - 2ȳX_2'Hy 

          + 2ȳ^2X_2'1_q  [80] 

for the left and right hand sides of the WSDS equations. For purposes 
of comparison, it seems appropriate to give the SDS equations in this 
same notation. Substitute [17] into [43] and [54] to obtain 

X'X = 2n(n-1)1_{c+1}1_{c+1}' - 4X_2'1_q1_{c+1}' - 4·1_{c+1}1_q'X_2 

    + 4X_2'X_2, and  [81] 

X'Y = 2n(y'y - nȳ^2)1_{c+1} - 2X_2'HY_1 + 4X_2'Y_2  [82] 

for the left and right hand sides of the SDS equations. 
We can let 

X_2 = ZU,  [83] 

where U is an r x (c+1) matrix whose i-th column contains the 
coefficients for the i-th variance component among the r relationship 
categories, and Z is a q x r matrix of zeros and ones that specifies 
the relationship category to which the pair of individuals belongs whose 
records are multiplied together to make an element of Y_2. 
We can also let 

B = Z'Z,  [84] 

where B is a diagonal matrix whose m-th diagonal element, for m = 1, 2, 
... r, is the number of pairs of individuals related by the m-th 
relationship category. This is the same as the W matrix defined by 
Christian (1980). 

Now we can let 

N = H'Z,  [85] 

or 

N = [ n_11 n_12 ... n_1r 
      n_21 n_22 ... n_2r 
      ... 
      n_n1 n_n2 ... n_nr ],  [86] 

where n_km is the number of individuals in the entire sample related to 
individual k by the relationship of the m-th relationship category, for 
k = 1, 2, ... n, and m = 1, 2, ... r. 

Hence, 

N'N = [ Σ_k n_k1^2       Σ_k n_k1 n_k2   ...  Σ_k n_k1 n_kr 
        Σ_k n_k1 n_k2    Σ_k n_k2^2      ...  Σ_k n_k2 n_kr 
        ... 
        Σ_k n_k1 n_kr    Σ_k n_k2 n_kr   ...  Σ_k n_kr^2 ].  [87] 

We can let 

Q = Z'1_q,  [88] 

where Q is an r x 1 vector whose i-th element is the same as the i-th 
diagonal element of B. Substitute [83], [84], [85], and [88] into [79] 
and [80] to obtain 

X'W^{-1}X = (n-1)1_{c+1}1_{c+1}' - (2/n)U'Q1_{c+1}' - (2/n)1_{c+1}Q'U 

          + 2U'BU - (2/n)U'N'NU + (4/n^2)U'QQ'U,  [89] 

and 

X'W^{-1}Y = (y'y - nȳ^2)1_{c+1} + 2U'Z'Y_2 - 2ȳU'N'y + 2ȳ^2U'Q,  [90] 

for the left and right hand sides of the WSDS equations. 
Likewise, substitute [83], [84], [85] and [88] into [81] and [82] to 
obtain 

X'X = 2n(n-1)1_{c+1}1_{c+1}' - 4U'Q1_{c+1}' - 4·1_{c+1}Q'U + 4U'BU,  [91] 

and 

X'Y = 2n(y'y - nȳ^2)1_{c+1} - 2U'N'Y_1 + 4U'Z'Y_2,  [92] 

for the left and right hand sides of the SDS equations. Matrices on 
the right hand side of [89], [90], [91] and [92] that can be stored 
economically are in table 2. Also in table 2 are the storage 
requirements for the matrices, and the number of multiplications and 
additions required to compute the matrices. It is assumed that square 
symmetric matrices will be half stored. Further computations required 
to obtain estimates of variance components by WSDS or SDS are small 
relative to the computations needed to construct some of the matrices 
in table 2; therefore, these are not given. 

The r diagonal elements of B can be stored as Q so that B does not 
need to be stored separately. It is clear that estimates of variance 
components by SDS will take less time to compute than estimates of 
variance components by WSDS because N'N is needed for WSDS but not for 
SDS. A computational example for obtaining SDS and WSDS by [89] 
through [92] is presented in Appendix C. 
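The reduction in [89] and [90] can also be verified against the direct computation on a toy example. In the sketch below (our own; the assignment of pairs to categories is hypothetical, and U is an arbitrary coefficient matrix whose error column is zero), X_2 = ZU is built per [83] and the right hand side [90] is compared with X'W^{-1}Y computed directly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, c = 6, 3, 2
pairs = [(k, l) for k in range(n) for l in range(k + 1, n)]
q = len(pairs)
H = np.zeros((q, n))
Z = np.zeros((q, r))
for i, (k, l) in enumerate(pairs):
    H[i, k] = H[i, l] = 1.0
    Z[i, (l - k - 1) % r] = 1.0      # hypothetical relationship category of the pair

U = rng.normal(size=(r, c + 1))
U[:, c] = 0.0                         # error column: off-diagonals of I_n are zero
X2 = Z @ U                            # [83]
X1 = np.ones((n, c + 1))              # [17]: no inbreeding
X = H @ X1 - 2 * X2                   # [19]
Winv = 0.5 * (np.eye(q) - H @ H.T / n + 2.0 / n**2 * np.ones((q, q)))  # [35]

y = rng.normal(size=n)
Y2 = np.array([y[k] * y[l] for k, l in pairs])   # [6]
Y = H @ y**2 - 2 * Y2                            # [4]
ybar = y.mean()
N = H.T @ Z                           # [85]
Q = Z.T @ np.ones(q)                  # [88]

rhs_90 = ((y @ y - n * ybar**2) * np.ones(c + 1)
          + 2 * U.T @ Z.T @ Y2
          - 2 * ybar * U.T @ N.T @ y
          + 2 * ybar**2 * U.T @ Q)
assert np.allclose(X.T @ Winv @ Y, rhs_90)       # [90]
```

Only the small matrices U, Q, N, and the category totals Z'Y_2 are needed, which is the point of the reduction.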



Table 2. Multiplications, additions and storage 
needed to compute and save the matrices needed to set 
up the WSDS equations when relationship categories are 
considered. 

Matrix   Storage^a,b   Multiplications^c   Additions 

U        rc            0                   rc 
Q        r             0                   n(n-1)/2 
N'N      r(r+1)/2      nr(r+1)/2           n^2 + nr(r+1)/2 
Z'Y_2    r             n(n-1)/2            n(n-1)/2 
N'y      r             nr                  nr 
N'Y_1    r             nr                  nr 
n        1             0                   n 
y'y      1             n                   n 
ȳ        1             1                   n 

^a r is the number of relationship categories. 
^b c is the number of random effects other than residual. 
^c n is the total number of observations. 

The computational requirements for completing WSDS using the mice 
data of Grimes and Harvey (1980) will now be discussed. Arithmetic 
steps and storage required to compute WSDS by [89] and [90] and by [77] 
and [78] will be compared. For the mice data analyzed by Grimes and 
Harvey (1980), U, Q, r, c, n, and p are as follows: 



U = [the 39 x 5 matrix of relationship coefficients taken from table 5 
of Grimes and Harvey (1980); the matrix is not legible in this 
reproduction], 


Q = [4,739 10,215 536 1,367 556 936 40 4,419 256 14,862 19,203 743 747 
1,280 304 621 302 6,323 264 56 107 176 4,603 2,546 9,214 127 821 56 
491 347 2,546 190 9,475 144 170 5,578 1,458 61 1,519]', 

r = 39, 

c = 4, 

n = 1,780, and 

p = 109,178. 
The first three columns of U were obtained directly from the last three 
columns, excluding the first row, of table 5 of Grimes and Harvey 
(1980). The columns of U are the coefficients for the variance and 
covariance components for the covariances among individuals related by 
the relationship categories. The first column contains the 
coefficients for the direct additive genetic variance, the second for 
the maternal additive genetic variance, the third for the direct- 
maternal additive genetic covariance, the fourth for the permanent 
maternal environmental variance, and the fifth for the residual 
variance. Q was obtained from the first column, excluding the first 
row, of table 5 of Grimes and Harvey (1980). 

The basic matrices needed to construct WSDS and SDS equations are 
given in table 3 with their storage requirements and the numbers of 
multiplications and additions required to obtain them for the mice data 
of Grimes and Harvey (1980) . 

A small amount of additional computer resources is needed to 
compute the WSDS or SDS equations from the matrices in table 3 and to 
solve for estimates of variance components. 



A large amount of computer time and storage is needed to compute
and store A. Grimes and Harvey (1980) traced pedigrees back through two
generations to compute A. Assuming that their A is essentially
correct, the procedure of Hudson et al. (1982) would require storage
for 223,696 numbers. In addition, computer time proportional to
n² = 1,780² = 3,168,400 would be needed to compute A. The elements of the
other V_i can be obtained from A. In order to determine how much is gained by
taking advantage of relationship categories by using [89] and [90], we
need to evaluate computer requirements when we do not take advantage of
relationship categories, as in [77] and [78].



Table 3. Numbers of multiplications, additions, and stored real
numbers required to save the matrices needed to set up the WSDS and
SDS equations for the mice data of Grimes and Harvey (1980).

Matrix    Storage    Multiplications    Additions
D         156        0                  156
Q         39         0                  1,583,310
N'N       780        1,388,400          4,556,800
Z'Y2      39         1,583,310          1,583,310
N'y       39         69,420             69,420
N'Y1      39         69,420             69,420
n         1          0                  1,780
y'y       1          1,780              1,780
y         1          0                  1,780

The storage, multiplications, and additions required to obtain the
matrices needed for MIVQUE(0) or WSDS if we do not take advantage of
relationship categories for the mice data of Grimes and Harvey (1980)
are presented in table 4.



Table 4. Numbers of stored real numbers, multiplications, and
additions required to compute each type of element needed to obtain
the MIVQUE(0) or WSDS equations for the data of Grimes and Harvey
(1980) if relationship categories are not considered.

Type of element      Storage    Multiplications    Additions
tr(V_iV_j)           15         15,850,900         15,850,900
tr(V_i)              4          0                  7,120
n                    1          0                  1,780
1'_nV_iV_j1_n        10         17,800             12,691,400
1'_nV_i1_n           4          0                  6,340,360
y'V_iy               4          6,340,360          6,340,360
V_iy                 4          7,120              12,680,720
y'y                  1          1,780              1,780
y                    1          1                  1,780


Comparison of tables 3 and 4 demonstrates that consideration of
relationship categories increases the requirement for computer storage
but reduces the requirement for multiplication and addition operations.
Therefore, consideration of relationship categories by using
expressions [89] and [90] instead of expressions [77] and [78] to set
up the WSDS equations will reduce the computer time required substantially
and increase the computer storage requirement only slightly when the
number of relationship categories is small. Computer resource
requirements for obtaining A are the same whether or not relationship
categories are considered.
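As a rough check on this conclusion, the per-matrix operation counts of tables 3 and 4, as read from this reproduction, can be totaled. The sketch below is illustrative only; the individual counts are transcribed from table 3, not derived.

```python
# Totaling the operation counts of table 3 (relationship categories used).
# The per-matrix (multiplications, additions) pairs are transcribed from the
# table as printed; blanks are taken as zero.
table3 = {
    "D": (0, 156), "Q": (0, 1_583_310), "N'N": (1_388_400, 4_556_800),
    "Z'Y2": (1_583_310, 1_583_310), "N'y": (69_420, 69_420),
    "N'Y1": (69_420, 69_420), "n": (0, 1_780), "y'y": (1_780, 1_780),
    "y": (0, 1_780),
}

mult3 = sum(m for m, a in table3.values())  # total multiplications
add3 = sum(a for m, a in table3.values())   # total additions
print(mult3, add3)
```

The totals (about 3.1 million multiplications and 7.9 million additions) are an order of magnitude below the roughly 16 million multiplications of the tr(V_iV_j) elements alone in table 4, which is the saving the text describes.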



METHODOLOGY
It was established in the current study that estimates of
(co)variance components by SDS weighted by the inverse of the error
variance-covariance matrix (WSDS) are MIVQUE if all (co)variances
except σ²_e are near zero relative to σ²_e. Since SDS and WSDS are more
computationally feasible than MIVQUE and REML under the animal model,
it would be of interest to know how efficient SDS and WSDS estimates
are relative to MIVQUE as σ²_i/σ²_e departs from zero. Other approximate
MIVQUE methods that reduce the computational requirements, or remove
the arbitrariness of assigning priors without markedly reducing
efficiency, would also be of interest. Permanent maternal environmental
effects (variance = σ²_m) will usually account for a portion of the
phenotypic variance of measurements taken on young animals. MIVQUE
with all priors set to zero except σ²_m and σ²_e (MIVQUE(0,M,E)) is
easier to compute than MIVQUE because Var(y) under the assumption
that all variances except σ²_m and σ²_e are zero can be easily inverted
using partitioned matrix techniques (Henderson and Searle, 1981). To
avoid the arbitrariness of assigning prior values, Rao (1971)
recommended assigning priors σ²_i = 1 for all i and σ²_e = 1. This
method will be referred to in this paper as unweighted MIVQUE or
MIVQUE(1).

A numerical study was conducted to evaluate the influence of the
unknown (co)variances on the efficiency of SDS, WSDS, MIVQUE(0,M,E),
and MIVQUE(1) relative to MIVQUE (true parameters used as priors) under
the animal model.

Two mating designs (figure 1) described by Thompson (1976) and
three mating designs (figure 2) described by Eisen (1967) were chosen
for this evaluation. Thompson's (1976) designs were chosen because
they are appropriate for species with one or two offspring, whereas
Eisen's (1967) designs were chosen because they are appropriate for
litter-bearing species. For the Thompson designs, a random sire (S) is
mated to two random dams (D1 and D2) to produce a male and a female
offspring from each mating. In design A, a female progeny from one dam
and a male progeny from the other dam are bred to random mates (M1 and
M2) to produce two progeny each. In design B, the female progeny from
both dams are bred to random males to produce two progeny each. This
results in eight progeny to form a set of a Thompson design.




Figure 1. Two mating designs of
Thompson (1976). In design A,
P1 and P3 are males and P2 and
P4 are females. In design B, P2
and P3 are males and P1 and P4
are females.



For the Eisen designs, s sets are sampled from initial random
matings. The 18 individuals that comprise a set are shown in figure 2.
S1 and S2 are male parents; D1, D2, ... D8 are female parents; and O1,
O2, ... O8 are offspring. Each set contains three unrelated families.
In design I, S1 and S2 are a full-sib family; D1, D2, D3, and D4 are a
full-sib family; and D5, D6, D7, and D8 are a half-sib family. In
design II, S1, D1, and D2 are a full-sib family; S2, D3, and D4 are a
full-sib family; and D5, D6, D7, and D8 are a full-sib family. Design
III is the same as design II with the exception that D5, D6, D7, and D8
are a half-sib family instead of a full-sib family.




Figure 2. The 18 individuals comprising a set of an Eisen (1967)
design. S1 and S2 are male parents; D1, D2, ... D8 are female
parents; and O1, O2, ... O8 are offspring.

The model assumed in this study for the phenotype of the ith
individual from the jth dam is

y_ij = μ + g_i + g^m_j + m_j + e_ij,   [93]

where μ is a fixed unknown constant,

g_i is the direct genic effect of the ith individual,

g^m_j is the maternal genic effect of the jth dam,

m_j is the permanent maternal environmental effect of the jth dam, and

e_ij is the temporary environmental effect,

E(e_ij) = E(m_j) = E(g^m_j) = E(g_i) = 0,

Var(g_i) = σ²_g, Var(g^m_j) = σ²_gm, Var(m_j) = σ²_m, Cov(g_i, g^m_i) = σ_ggm, and
Var(e_ij) = σ²_e, and
Cov(g^m_j, m_j) = Cov(e_ij, e_i'j') = Cov(g_i, m_j) = Cov(m_j, e_ij) = Cov(g^m_j, e_ij) = 0.
Since individuals of different sets are unrelated, it is convenient to
define the variance of terms in the formulas for sampling variance as a
function of the number of sets (s). Let y be the vector of phenotypes
for one set. The variance of y can be written

Var(y) = V = V1σ²_g + V2σ_ggm + V3σ²_gm + V4σ²_m + I_nσ²_e,   [94]

where n is the number of individuals in a set (n = 8 for the Thompson
designs, n = 18 for the Eisen designs); and V1, V2, V3, and V4 are n x n
matrices of coefficients which, when multiplied by σ²_g, σ_ggm, σ²_gm, and σ²_m,
specify the contribution of σ²_g, σ_ggm, σ²_gm, and σ²_m to the variance among
individuals in a set. V1, V2, V3, and V4 for the Eisen and Thompson
designs are given in Appendix B. The coefficients in V_i, for i = 1, 2, ... 4,
were obtained from table 1 of Thompson (1976) and table 1 of Eisen (1967). For
the V_i matrices in Appendix B, y was sorted for the Thompson designs as P1, P2, ... P8
(figure 1) and for the Eisen designs as S1, S2, D1, D2, ... D8, O1, O2,
... O8 (figure 2).
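The assembly in [94] can be sketched numerically. The coefficient matrices below are toy stand-ins for illustration only (the real V1-V4 are tabulated in Thompson, 1976 and Eisen, 1967, and reproduced in Appendix B); the parameter values are borrowed from the rank 1 set of the later tables (σ²_g = 15, σ²_gm = 15, r_ggm = -.3 so σ_ggm = -4.5, σ²_m = 20, σ²_e = 59).

```python
import numpy as np

n = 4  # a toy "set"; the Thompson designs use n = 8, the Eisen designs n = 18
I = np.eye(n)
J2 = np.array([[1.0, 0.5], [0.5, 1.0]])   # toy additive relationships: two full sibs
Z22 = np.zeros((2, 2))
V1 = np.block([[J2, Z22], [Z22, J2]])     # direct genetic coefficients (hypothetical)
V2 = np.zeros((n, n))                     # direct-maternal covariance (toy: none)
V3 = np.zeros((n, n))                     # maternal genetic (toy: none)
V4 = np.kron(np.eye(2), np.ones((2, 2))) # each sib pair shares one dam (hypothetical)

s2g, sggm, s2gm, s2m, s2e = 15.0, -4.5, 15.0, 20.0, 59.0
V = V1 * s2g + V2 * sggm + V3 * s2gm + V4 * s2m + I * s2e   # equation [94]

# V must be a valid covariance matrix: symmetric and positive definite
assert np.allclose(V, V.T) and np.all(np.linalg.eigvalsh(V) > 0)
print(V[0, 0])  # phenotypic variance of one individual: 15 + 20 + 59 = 94
```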

Relative efficiency in this study was computed as:

Relative efficiency = sqrt( Var(σ̂²_i) / Var(σ̃²_i) ),   [95]

where σ̃²_i is an SDS, WSDS, MIVQUE(0,M,E), or MIVQUE(1) estimate of
the ith variance component and σ̂²_i is the MIVQUE estimate of the ith
variance component.
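Equation [95] is just a ratio of standard errors, so it can be computed directly from two sampling variances. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
import math

# Relative efficiency per equation [95]: standard error of the MIVQUE
# estimate divided by the standard error of the competing estimator.
var_mivque = 9.3 ** 2   # hypothetical sampling variance of a MIVQUE estimate
var_sds = 11.5 ** 2     # hypothetical sampling variance of an SDS estimate

rel_eff = math.sqrt(var_mivque / var_sds)
print(round(rel_eff, 2))  # 0.81
```

A value below 1 means the competing estimator has a larger standard error than MIVQUE, as in tables 6 and 7.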



The sampling variances of σ̂² and σ̃² can be written as a function
of the number of sets (s) and the V_i matrices defined previously. The
sampling variance among MIVQUE estimates of the variance components for
data resulting from s sets is

Var(σ̂²) = 2{s([tr(V⁻¹V_iV⁻¹V_j)] − 2(1'_nV⁻¹1_n)⁻¹[1'_nV⁻¹V_iV⁻¹V_jV⁻¹1_n]
        + (1'_nV⁻¹1_n)⁻²[1'_nV⁻¹V_iV⁻¹1_n][1'_nV⁻¹V_jV⁻¹1_n])}⁻¹   [96]

for i, j = 1, 2, ... 5, where

V5 = I_n.
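The bracketed expression in [96] is the (i, j) element of an information-type matrix; it equals tr(PV_iPV_j) for the projection P = V⁻¹ − V⁻¹1(1'V⁻¹1)⁻¹1'V⁻¹. A toy two-component sketch (one random effect plus residual; model and numbers are illustrative, not the study's):

```python
import numpy as np

n, s = 4, 200
V1 = np.kron(np.eye(2), np.ones((2, 2)))  # toy random-effect coefficient matrix
V2 = np.eye(n)                            # residual coefficient matrix
V = 10.0 * V1 + 30.0 * V2                 # true (co)variances used as priors

ones = np.ones((n, 1))
Vinv = np.linalg.inv(V)
# P absorbs estimation of the fixed mean, as in equation [96]
P = Vinv - Vinv @ ones @ np.linalg.inv(ones.T @ Vinv @ ones) @ ones.T @ Vinv

comps = [V1, V2]
info = np.array([[s * np.trace(P @ Vi @ P @ Vj) for Vj in comps] for Vi in comps])
sampling_var = 2.0 * np.linalg.inv(info)  # equation [96]
print(np.sqrt(np.diag(sampling_var)))     # standard errors of the two estimates
```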

The sampling variance among SDS estimates of (co) variance components is 



Var(a 2 ) = (X'X) -1 Var(X'Y) (X'X)" 1 , 



[97] 



where for s sets 



Var(X'Y) = BstsCs^trtV^nljIjV 2 :^ + (l n Vl n ) 2 )l c+1 l c+1 

- sn[ [tr (FiV 2 ) ] 1^ +1 + l c+1 [tr (FjV 2 ) ] 3 
+ sn[ [tr (V^ 2 ) ] 1^ +1 + l c+1 [tr (VjV 2 ) ] 3 

+ flA^i^n^c+l + W^j^nJ 

- HAWiYlnll^! - l c+1 [lAWjVl n ] 
+ [trfFiVFjV)] 

- ItrtFiWjV)] - [tr^VFjV)] 
+ [tr(V i W j V)]3, 



[98] 



where 



P i = 



Z [v n ) ± 








Z (v 21 ) ± 








? < v nl)i 



[99] 



for 1 = 1, 2, 



n, and 



X'X = 2s{sn²1_{c+1}1'_{c+1} − [1'_nV_i1_n]1'_{c+1} − 1_{c+1}[1'_nV_j1_n]' + [tr(V_iV_j)]}.   [100]

The sampling variance among WSDS estimates of (co)variance components
is

Var(σ̂²) = (X'W⁻¹X)⁻¹ Var(X'W⁻¹Y) (X'W⁻¹X)⁻¹,   [101]

where, for s sets,

Var(X'W⁻¹Y) = 2s{[tr(V_iVV_jV)]
        − (2/n)([1'_nV_iVV_jV1_n] + [1'_nV_jVV_iV1_n])
        + (1/n²)(2[1'_nV1_n][1'_nV_iVV_j1_n] + 2[1'_nV_iV1_n][1'_nV_jV1_n]
          + [1'_nVV_iV1_n][1'_nV_j1_n] + [1'_nV_i1_n][1'_nVV_jV1_n])
        − (2/n³)[1'_nV1_n]([1'_nV_iV1_n][1'_nV_j1_n] + [1'_nV_i1_n][1'_nV_jV1_n])
        + (1/n⁴)(1'_nV1_n)²[1'_nV_i1_n][1'_nV_j1_n]},   [102]

and

X'W⁻¹X = s{[tr(V_iV_j)] − (2/n)[1'_nV_iV_j1_n] + (1/n²)[1'_nV_i1_n][1'_nV_j1_n]}.   [103]
The sampling variance among MIVQUE(0,M,E) or MIVQUE(1) estimates of
(co)variance components is as follows:

Var(σ̂²) = (LHS)⁻¹ Var(RHS) (LHS)⁻¹,   [104]

where

σ̂² = (LHS)⁻¹RHS.   [105]

For s sets,

Var(RHS) = 2s[tr(P_aV_iP_aVP_aV_jP_aV)],   [106]

where

P_a = V_a⁻¹ − (1'_nV_a⁻¹1_n)⁻¹V_a⁻¹1_n1'_nV_a⁻¹,

V_a = σ²_mV4 + σ²_eI_n for MIVQUE(0,M,E), and

V_a = V1 + V2 + V3 + V4 + I_n for MIVQUE(1); and

LHS = s[tr(P_aV_iP_aV_j)].   [107]

Two hundred sets were considered, resulting in 1,600 individuals
for the Thompson (1976) designs and 3,600 individuals for the Eisen designs.

Each parameter other than the error variance was set to two levels:
direct genic variance at 15 or 30, maternal genic variance at 15 or 30,
correlation between direct and maternal genic effects at -.1 or -.3, and
permanent maternal environmental variance at 20 or 50. The phenotypic
variance was held constant at 100. Taking all possible combinations of
the four parameters at two levels each results in 16 parameter sets.
The error variance was computed as the difference between 100 and the
sum of all other components. However, for the parameter set in which
σ²_g = 30, σ²_gm = 30, r_ggm = -.1, and σ²_m = 50, the sum of these
components is 104 > 100. Therefore, for this set, σ²_g = 27.8, σ²_gm = 27.8,
r_ggm = -.1, σ²_m = 46.3, and σ²_e = 3.70 to hold the phenotypic variance
constant and to keep σ²_e positive. In addition, the parameter set
σ²_g = 9.8, σ²_gm = 9.8, σ_ggm = -1.96, σ²_m = 9.8, and σ²_e = 21.56 was
used to make a direct comparison of this numerical procedure with the
simulation study of Grimes and Harvey (1980).
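The residual variances in the tables that follow can be reproduced from the other components. The factor of 2 on the covariance below is my reading of why 30 + 30 + 50 with r_ggm = -.1 "sums to 104" (the direct-maternal covariance enters the phenotypic variance twice):

```python
import math

def residual_variance(s2g, s2gm, r_ggm, s2m, phenotypic=100.0):
    """Hypothetical reconstruction: s2e = 100 - (s2g + s2gm + 2*sggm + s2m),
    with sggm = r_ggm * sqrt(s2g * s2gm)."""
    sggm = r_ggm * math.sqrt(s2g * s2gm)
    return phenotypic - (s2g + s2gm + 2.0 * sggm + s2m)

print(residual_variance(30, 30, -.1, 50))  # -4.0: sum is 104, so the set was rescaled
print(residual_variance(15, 15, -.3, 20))  # 59.0, matching rank 1 in tables 6-8
```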



RESULTS AND DISCUSSION
Standard errors of SDS estimates of (co)variance components,
computed as the square root of the result of [97], agree quite well with
the standard errors obtained in the simulation study of Grimes and
Harvey (1980) (table 5).

Table 5. Comparison of standard errors of SDS estimates of variance
components obtained by the simulation study of Grimes and Harvey (1980) for
a maternal effects model with standard errors obtained by the numerical
method of the present study.

                              Mating
Item                          design^a   σ²_g   σ_ggm   σ²_gm   σ²_m   σ²_e^b

Parameter values                         9.8    -1.96   9.8     9.8    21.56

Standard errors
Current study                 A          9.3    9.2     15.9    9.8    5.1
Grimes and Harvey (1980)      A          9.4    10.1    17.1    10.3   5.0
Current study                 B          12.1   6.2     7.9     4.9    6.2
Grimes and Harvey (1980)      B          12.8   6.8     7.9     5.0    6.4

^a The mating designs were due to Thompson (1976), of which 200
replicates were used.

^b σ²_g, σ²_gm, σ²_m, and σ²_e are variances due to direct genic, maternal
genic, permanent maternal environmental, and environmental effects,
and σ_ggm is the covariance between direct and maternal genic effects.



The efficiencies [95] of SDS and WSDS estimates of (co)variance
components relative to MIVQUE are in tables 6 and 7. In tables 6 and
7, the efficiencies corresponding to each parameter set are ranked by
σ²_e from high to low. Efficiency of SDS and WSDS relative to MIVQUE
seems to follow, for the most part, the same ranking as σ²_e (tables 6
and 7). This result is to be expected for the relative efficiency of WSDS
because the assumptions underlying the MIVQUE(0) property of WSDS are
violated to a greater and greater extent as σ²_e becomes smaller. This
is because σ²_i/σ²_e increases as σ²_e decreases in this study.
Discrepancies between the rankings of efficiency of WSDS and σ²_e are
due to certain combinations of parameters having more effect on
efficiency than other combinations. The efficiencies of SDS and WSDS
relative to MIVQUE were quite sensitive to decreases in σ²_e for all
designs except Thompson B. For the Thompson B design, efficiency of
SDS and WSDS varied little over all parameter sets studied. Comparison
of tables 6 and 7 showed that WSDS is more efficient than SDS for
almost all parameter sets, estimates of variance components, and mating
designs. Exceptions to this result occur when σ²_m is estimated from
Eisen designs I, II, and III or when σ²_e is estimated from Eisen design
III.



Table 6. Efficiency^a of SDS estimates of (co)variance components
relative to MIVQUE estimates computed from mating designs of Thompson^b
and Eisen^b replicated 200 times each.

                  Parameters^c                       Efficiency of
Rank^d  σ²_e    σ²_g   σ²_gm   r_ggm   σ²_m     σ²_g  σ_ggm  σ²_gm  σ²_m  σ²_e

Thompson A Design
  1     59.00   15     15      -.3     20       .66   .67    .68    .71   .66
  2     53.00   15     15      -.1     20       .66   .66    .68    .72   .66
  3     47.72   15     30      -.3     20       .65   .64    .66    .71   .64
  3     47.72   30     15      -.3     20       .68   .67    .68    .72   .67
  5     39.24   15     30      -.1     20       .65   .63    .66    .71   .63
  5     39.24   30     15      -.1     20       .67   .66    .68    .72   .65
  7     38.00   30     30      -.3     20       .66   .64    .66    .71   .64
  8     29.00   15     15      -.3     50       .62   .58    .61    .68   .59
  9     26.00   30     30      -.1     20       .65   .62    .65    .71   .62
 10     23.00   15     15      -.1     50       .60   .55    .60    .67   .57
 11     17.72   15     30      -.3     50       .57   .50    .56    .65   .54
 11     17.72   30     15      -.3     50       .61   .56    .60    .68   .58
 13      9.24   15     30      -.1     50       .51   .44    .52    .63   .48
 13      9.24   30     15      -.1     50       .57   .51    .57    .66   .54
 15      8.00   30     30      -.3     50       .56   .49    .55    .65   .52
 16      3.70   27.8   27.8    -.1     46.3     .52   .45    .53    .64   .49

Thompson B Design
  1     59.00   15     15      -.3     20       .80   .89    .88    .95   .82
  2     53.00   15     15      -.1     20       .80   .90    .89    .95   .81
  3     47.72   15     30      -.3     20       .80   .89    .89    .95   .81
  3     47.72   30     15      -.3     20       .82   .91    .88    .94   .83
  5     39.24   15     30      -.1     20       .80   .90    .89    .95   .80
  5     39.24   30     15      -.1     20       .82   .91    .89    .94   .82
  7     38.00   30     30      -.3     20       .82   .91    .89    .95   .83
  8     29.00   15     15      -.3     50       .82   .88    .82    .93   .83
  9     26.00   30     30      -.1     20       .80   .91    .89    .95   .80
 10     23.00   15     15      -.1     50       .82   .89    .81    .93   .82
 11     17.72   15     30      -.3     50       .82   .88    .80    .93   .82
 11     17.72   30     15      -.3     50       .84   .90    .81    .93   .84
 13      9.24   15     30      -.1     50       .81   .88    .77    .93   .81
 13      9.24   30     15      -.1     50       .83   .90    .80    .93   .83
 15      8.00   30     30      -.3     50       .83   .89    .79    .93   .83
 16      3.70   27.8   27.8    -.1     46.3     .81   .90    .79    .93   .81

Eisen I Design
  1     59.00   15     15      -.3     20       .53   .72    .61    .67   .65
  2     53.00   15     15      -.1     20       .53   .71    .62    .67   .62
  3     47.72   15     30      -.3     20       .51   .71    .64    .69   .57
  3     47.72   30     15      -.3     20       .56   .72    .62    .67   .63
  5     39.24   15     30      -.1     20       .49   .69    .64    .69   .53
  5     39.24   30     15      -.1     20       .55   .71    .62    .67   .58
  7     38.00   30     30      -.3     20       .53   .71    .64    .68   .55
  8     29.00   15     15      -.3     50       .46   .64    .56    .64   .46
  9     26.00   30     30      -.1     20       .49   .68    .64    .67   .48
 10     23.00   15     15      -.1     50       .44   .62    .56    .64   .42
 11     17.72   15     30      -.3     50       .40   .58    .55    .63   .37
 11     17.72   30     15      -.3     50       .46   .63    .56    .63   .43
 13      9.24   15     30      -.1     50       .35   .52    .54    .62   .31
 13      9.24   30     15      -.1     50       .42   .58    .56    .63   .37
 15      8.00   30     30      -.3     50       .40   .56    .55    .62   .35
 16      3.70   27.8   27.8    -.1     46.3     .36   .53    .55    .62   .30

Eisen II Design
  1     59.00   15     15      -.3     20       .62   .76    .72    .75   .67
  2     53.00   15     15      -.1     20       .61   .76    .73    .76   .64
  3     47.72   15     30      -.3     20       .59   .74    .72    .78   .60
  3     47.72   30     15      -.3     20       .66   .78    .72    .75   .66
  5     39.24   15     30      -.1     20       .56   .73    .73    .79   .56
  5     39.24   30     15      -.1     20       .64   .78    .72    .76   .62
  7     38.00   30     30      -.3     20       .62   .75    .72    .78   .60
  8     29.00   15     15      -.3     50       .54   .69    .66    .79   .50
  9     26.00   30     30      -.1     20       .58   .74    .72    .78   .54
 10     23.00   15     15      -.1     50       .51   .67    .66    .80   .47
 11     17.72   15     30      -.3     50       .47   .62    .65    .80   .42
 11     17.72   30     15      -.3     50       .55   .69    .66    .78   .50
 13      9.24   15     30      -.1     50       .41   .57    .65    .81   .36
 13      9.24   30     15      -.1     50       .50   .66    .65    .79   .44
 15      8.00   30     30      -.3     50       .48   .62    .64    .79   .42
 16      3.70   27.8   27.8    -.1     46.3     .44   .59    .65    .80   .37

Eisen III Design
  1     59.00   15     15      -.3     20       .67   .87    .90    .91   .71
  2     53.00   15     15      -.1     20       .66   .87    .89    .91   .72
  3     47.72   15     30      -.3     20       .65   .85    .87    .91   .66
  3     47.72   30     15      -.3     20       .70   .88    .88    .90   .70
  5     39.24   15     30      -.1     20       .63   .84    .85    .89   .62
  5     39.24   30     15      -.1     20       .68   .87    .86    .88   .66
  7     38.00   30     30      -.3     20       .67   .84    .84    .89   .65
  8     29.00   15     15      -.3     50       .61   .83    .85    .89   .57
  9     26.00   30     30      -.1     20       .63   .82    .81    .85   .58
 10     23.00   15     15      -.1     50       .59   .82    .83    .88   .53
 11     17.72   15     30      -.3     50       .56   .76    .78    .85   .48
 11     17.72   30     15      -.3     50       .62   .83    .81    .86   .54
 13      9.24   15     30      -.1     50       .50   .72    .74    .82   .41
 13      9.24   30     15      -.1     50       .57   .80    .77    .83   .48
 15      8.00   30     30      -.3     50       .56   .75    .75    .82   .46
 16      3.70   27.8   27.8    -.1     46.3     .51   .72    .73    .80   .41

^a Efficiency is the standard error of the MIVQUE estimate of the
(co)variance component divided by the standard error of the SDS
estimate.

^b Mating designs are defined in the text.

^c Parameters are defined as follows: σ²_g, σ²_gm, σ²_m, and σ²_e are variances
due to direct additive genetic, maternal additive genetic, permanent
maternal environmental, and residual error effects; and r_ggm and σ_ggm
are the correlation and covariance between direct and maternal
additive genetic effects.

^d Parameter sets are ranked by σ²_e, from high to low.



Table 7. Efficiency^a of WSDS estimates of (co)variance components
relative to MIVQUE estimates computed from mating designs of Thompson^b
and Eisen^b replicated 200 times each.

                  Parameters^c                       Efficiency of
Rank^d  σ²_e    σ²_g   σ²_gm   r_ggm   σ²_m     σ²_g  σ_ggm  σ²_gm  σ²_m  σ²_e

Thompson A Design
  1     59.00   15     15      -.3     20       .96   .95    .95    .97   .97
  2     53.00   15     15      -.1     20       .96   .93    .94    .95   .96
  3     47.72   15     30      -.3     20       .94   .91    .91    .94   .95
  3     47.72   30     15      -.3     20       .93   .92    .93    .95   .94
  5     39.24   15     30      -.1     20       .92   .89    .89    .91   .93
  5     39.24   30     15      -.1     20       .92   .90    .91    .93   .92
  7     38.00   30     30      -.3     20       .91   .89    .90    .93   .92
  8     29.00   15     15      -.3     50       .87   .80    .83    .89   .87
  9     26.00   30     30      -.1     20       .88   .84    .85    .89   .88
 10     23.00   15     15      -.1     50       .84   .76    .80    .87   .85
 11     17.72   15     30      -.3     50       .80   .70    .75    .84   .80
 11     17.72   30     15      -.3     50       .83   .75    .80    .87   .83
 13      9.24   15     30      -.1     50       .72   .61    .69    .80   .72
 13      9.24   30     15      -.1     50       .76   .69    .75    .84   .77
 15      8.00   30     30      -.3     50       .76   .67    .73    .83   .76
 16      3.70   27.8   27.8    -.1     46.3     .71   .62    .69    .80   .71

Thompson B Design
  1     59.00   15     15      -.3     20       .99   .99    .97    .99   .99
  2     53.00   15     15      -.1     20       .98   .99    .97    .99   .98
  3     47.72   15     30      -.3     20       .97   .99    .96    .99   .97
  3     47.72   30     15      -.3     20       .97   .98    .97    .99   .97
  5     39.24   15     30      -.1     20       .96   .99    .95    .98   .96
  5     39.24   30     15      -.1     20       .96   .98    .97    .99   .96
  7     38.00   30     30      -.3     20       .96   .98    .96    .98   .96
  8     29.00   15     15      -.3     50       .99   .95    .87    .96   .99
  9     26.00   30     30      -.1     20       .93   .98    .95    .98   .93
 10     23.00   15     15      -.1     50       .98   .96    .86    .95   .98
 11     17.72   15     30      -.3     50       .98   .95    .83    .95   .98
 11     17.72   30     15      -.3     50       .98   .95    .87    .96   .98
 13      9.24   15     30      -.1     50       .96   .95    .80    .95   .96
 13      9.24   30     15      -.1     50       .96   .95    .85    .95   .96
 15      8.00   30     30      -.3     50       .97   .95    .83    .95   .97
 16      3.70   27.8   27.8    -.1     46.3     .95   .95    .82    .95   .95

Eisen I Design
  1     59.00   15     15      -.3     20       .92   .94    .92    .82   .80
  2     53.00   15     15      -.1     20       .90   .93    .90    .80   .76
  3     47.72   15     30      -.3     20       .87   .89    .85    .74   .68
  3     47.72   30     15      -.3     20       .87   .90    .87    .78   .75
  5     39.24   15     30      -.1     20       .82   .86    .80    .69   .62
  5     39.24   30     15      -.1     20       .82   .88    .83    .74   .69
  7     38.00   30     30      -.3     20       .82   .85    .81    .70   .65
  8     29.00   15     15      -.3     50       .84   .86    .85    .64   .52
  9     26.00   30     30      -.1     20       .74   .80    .74    .63   .55
 10     23.00   15     15      -.1     50       .80   .83    .82    .61   .47
 11     17.72   15     30      -.3     50       .74   .75    .75    .56   .41
 11     17.72   30     15      -.3     50       .76   .80    .80    .60   .48
 13      9.24   15     30      -.1     50       .64   .67    .69    .53   .34
 13      9.24   30     15      -.1     50       .68   .74    .75    .56   .40
 15      8.00   30     30      -.3     50       .66   .69    .71    .54   .38
 16      3.70   27.8   27.8    -.1     46.3     .59   .64    .68    .52   .33

Eisen II Design
  1     59.00   15     15      -.3     20       .94   .95    .94    .88   .91
  2     53.00   15     15      -.1     20       .91   .94    .92    .86   .88
  3     47.72   15     30      -.3     20       .88   .90    .88    .80   .84
  3     47.72   30     15      -.3     20       .90   .93    .90    .83   .88
  5     39.24   15     30      -.1     20       .84   .88    .86    .78   .79
  5     39.24   30     15      -.1     20       .86   .91    .88    .80   .84
  7     38.00   30     30      -.3     20       .86   .88    .85    .77   .82
  8     29.00   15     15      -.3     50       .83   .85    .87    .76   .71
  9     26.00   30     30      -.1     20       .79   .84    .82    .73   .74
 10     23.00   15     15      -.1     50       .79   .82    .85    .74   .67
 11     17.72   15     30      -.3     50       .73   .74    .80    .69   .61
 11     17.72   30     15      -.3     50       .79   .81    .84    .72   .68
 13      9.24   15     30      -.1     50       .64   .67    .77    .67   .52
 13      9.24   30     15      -.1     50       .71   .76    .80    .69   .61
 15      8.00   30     30      -.3     50       .69   .71    .78    .67   .59
 16      3.70   27.8   27.8    -.1     46.3     .63   .67    .76    .66   .53

Eisen III Design
  1     59.00   15     15      -.3     20       .95   .97    .95    .87   .83
  2     53.00   15     15      -.1     20       .93   .97    .94    .85   .80
  3     47.72   15     30      -.3     20       .91   .94    .90    .80   .74
  3     47.72   30     15      -.3     20       .92   .95    .93    .82   .78
  5     39.24   15     30      -.1     20       .88   .92    .87    .77   .67
  5     39.24   30     15      -.1     20       .89   .94    .90    .79   .72
  7     38.00   30     30      -.3     20       .89   .92    .88    .76   .70
  8     29.00   15     15      -.3     50       .86   .91    .88    .72   .58
  9     26.00   30     30      -.1     20       .83   .89    .84    .70   .60
 10     23.00   15     15      -.1     50       .83   .89    .87    .70   .53
 11     17.72   15     30      -.3     50       .78   .83    .80    .65   .48
 11     17.72   30     15      -.3     50       .82   .88    .85    .68   .53
 13      9.24   15     30      -.1     50       .69   .77    .76    .61   .40
 13      9.24   30     15      -.1     50       .76   .85    .81    .63   .46
 15      8.00   30     30      -.3     50       .74   .80    .77    .61   .45
 16      3.70   27.8   27.8    -.1     46.3     .68   .77    .75    .59   .40

^a Efficiency is the standard error of the MIVQUE estimate of the
(co)variance component divided by the standard error of the WSDS
estimate.

^b Mating designs are defined in the text.

^c Parameters are defined as follows: σ²_g, σ²_gm, σ²_m, and σ²_e are variances
due to direct additive genetic, maternal additive genetic, permanent
maternal environmental, and residual error effects; and r_ggm and σ_ggm
are the correlation and covariance between direct and maternal
additive genetic effects.

^d Parameter sets are ranked by σ²_e, from high to low.
The efficiencies of MIVQUE(0,M,E) estimates of (co)variance
components relative to MIVQUE are in table 8. In table 8, efficiencies
corresponding to each parameter set are ranked by σ²_m + σ²_e from high to
low. The efficiency of MIVQUE(0,M,E) appears to follow the same
ranking, with some exceptions, as σ²_m + σ²_e (table 8). Unlike SDS and
WSDS, the efficiency of MIVQUE(0,M,E) relative to MIVQUE remained
relatively high (greater than .80) over all parameter sets studied. In
practice, this is an upper bound on the efficiency that could be
attained for MIVQUE(0,M,E) because estimates of σ²_m and σ²_e would
need to be used in place of the true parameters.



Table 8. Relative efficiency^a of MIVQUE(0,M,E) estimates of
(co)variance components relative to MIVQUE estimates computed from
mating designs of Thompson^b (1976) and Eisen^b (1967) replicated 200
times each.

                      Parameters^c                        Efficiency of
Rank^d  σ²_m + σ²_e   σ²_g   σ²_gm   r_ggm   σ²_m     σ²_g  σ_ggm  σ²_gm  σ²_m  σ²_e

Thompson A Design
  1     79.00         15     15      -.3     20       .99   .99    .99    .99   .99
  1     79.00         15     15      -.3     50       .99   .99    .99    .99   .99
  3     73.00         15     15      -.1     20       .98   .99    .98    .98   .99
  3     73.00         15     15      -.1     50       .99   .99    .99    .99   .99
  5     67.72         15     30      -.3     20       .98   .98    .98    .98   .98
  5     67.72         15     30      -.3     50       .99   .98    .98    .98   .99
  5     67.72         30     15      -.3     20       .97   .98    .98    .98   .97
  5     67.72         30     15      -.3     50       .97   .98    .98    .98   .97
  9     59.24         15     30      -.1     20       .97   .97    .96    .96   .97
  9     59.24         15     30      -.1     50       .99   .98    .98    .97   .99
  9     59.24         30     15      -.1     20       .96   .97    .97    .97   .96
  9     59.24         30     15      -.1     50       .95   .95    .97    .97   .95
 13     58.00         30     30      -.3     20       .96   .98    .97    .97   .96
 13     58.00         30     30      -.3     50       .96   .96    .97    .97   .96
 15     50.00         27.8   27.8    -.1     46.3     .91   .92    .95    .95   .95
 16     46.00         30     30      -.1     20       .95   .96    .96    .95   .91

Thompson B Design
  1     79.00         15     15      -.3     20       .99   .99    .99    .99   .99
  1     79.00         15     15      -.3     50       .99   .99    .99    .99   .99
  3     73.00         15     15      -.1     20       .98   .99    .99    .99   .98
  3     73.00         15     15      -.1     50       .98   .98    .98    .98   .98
  5     67.72         15     30      -.3     20       .97   .99    .99    .99   .97
  5     67.72         15     30      -.3     50       .98   .98    .98    .97   .98
  5     67.72         30     15      -.3     20       .97   .98    .99    .98   .97
  5     67.72         30     15      -.3     50       .98   .97    .97    .97   .98
  9     59.24         15     30      -.1     20       .96   .98    .99    .97   .96
  9     59.24         15     30      -.1     50       .96   .96    .96    .95   .96
  9     59.24         30     15      -.1     20       .96   .97    .98    .97   .96
  9     59.24         30     15      -.1     50       .96   .94    .93    .94   .96
 13     58.00         30     30      -.3     20       .96   .98    .99    .97   .96
 13     58.00         30     30      -.3     50       .97   .95    .94    .94   .97
 15     50.00         27.8   27.8    -.1     46.3     .95   .92    .90    .91   .95
 16     46.00         30     30      -.1     20       .93   .95    .97    .95   .93

Eisen I Design
  1     79.00         15     15      -.3     20       .95   .96    .95    .95   .96
  1     79.00         15     15      -.3     50       .96   .97    .96    .96   .96
  3     73.00         15     15      -.1     20       .93   .95    .93    .93   .95
  3     73.00         15     15      -.1     50       .94   .96    .94    .95   .95
  5     67.72         15     30      -.3     20       .91   .92    .89    .89   .92
  5     67.72         15     30      -.3     50       .92   .94    .91    .91   .92
  5     67.72         30     15      -.3     20       .90   .93    .91    .92   .92
  5     67.72         30     15      -.3     50       .92   .95    .93    .94   .92
  9     59.24         15     30      -.1     20       .87   .90    .85    .86   .89
  9     59.24         15     30      -.1     50       .91   .94    .90    .89   .90
  9     59.24         30     15      -.1     20       .87   .91    .87    .89   .89
  9     59.24         30     15      -.1     50       .90   .92    .91    .91   .89
 13     58.00         30     30      -.3     20       .88   .89    .86    .87   .88
 13     58.00         30     30      -.3     50       .91   .93    .91    .89   .90
 15     50.00         27.8   27.8

-.1 


46.3 


.88 


.90 


.88 


.86 


.87 


16 


46.00 


30 


30 


-.1 


20 


.81 


.85 


.81 


.81 


.82 










Eisen 


II Design 










1 


79.00 


15 


15 


-.3 


20 


.97 


.98 


.97 


.96 


.98 


1 


79.00 


15 


15 


-.3 


50 


.97 


.98 


.98 


.98 


.98 


3 


73.00 


15 


15 


-.1 


20 


.95 


.97 


.96 


.95 


.96 


3 


73.00 


15 


15 


-.1 


50 


.96 


.98 


.97 


.97 


.96 


5 


67.72 


15 


30 


-.3 


20 


.93 


.94 


.93 


.92 


.95 


5 


67.72 


15 


30 


-.3 


50 


.94 


.96 


.96 


.96 


.95 


5 


67.72 


30 


15 


-.3 


20 


.94 


.96 


.94 


.93 


.95 


5 


67.72 


30 


15 


-.3 


50 


.94 


.96 


.96 


.95 


.94 


9 


59.24 


15 


30 


-.1 


20 


.89 


.93 


.91 


.91 


.91 


9 


59.24 


15 


30 


-.1 


50 


.92 


.95 


.96 


.95 


.93 


9 


59.24 


30 


15 


-.1 


20 


.90 


.94 


.93 


.91 


.92 


9 


59.24 


30 


15 


-.1 


50 


.90 


.93 


.93 


.93 


.90 


13 


58.00 


30 


30 


-.3 


20 


.91 


.93 


.91 


.90 


.92 


13 


58.00 


30 


30 


-.3 


50 


.91 


.93 


.94 


.93 


.91 


15 


50.00 


27.8 


27.8 


-.1 


46.3 


.88 


.90 


.92 


.91 


.88 


16 


46.00 


30 


30 


-.1 


20 


.85 


.90 


.90 


.88 


.87 



68 



Table 8 continued. 







Parameters 








Efficiency 


of 




Rank d 


m e 


°i 


°jm 


r ggm 


°2 

m 


^2 

a g 


CT ggm 


°gm 


a m 


°e 










Eisen 


III 


Design 










1 


79.00 


15 


15 


-.3 


20 


.98 


.98 


.98 


.97 


.98 


1 


79.00 


15 


15 


-.3 


50 


.98 


.98 


.99 


.98 


.98 


3 


73.00 


15 


15 


-.1 


20 


.96 


.98 


.97 


.96 


.96 


3 


73.00 


15 


15 


-.1 


50 


.97 


.98 


.98 


.98 


.97 


5 


67.72 


15 


30 


-.3 


20 


.95 


.96 


.94 


.94 


.94 


5 


67.72 


15 


30 


-.3 


50 


.96 


.96 


.96 


.97 


.97 


5 


67.72 


30 


15 


-.3 


20 


.95 


.97 


.96 


.95 


.95 


5 


67.72 


30 


15 


-.3 


50 


.95 


.96 


.97 


.96 


.95 


9 


59.24 


15 


30 


-.1 


20 


.93 


.95 


.93 


.93 


.92 


9 


59.24 


15 


30 


-.1 


50 


.94 


.96 


.96 


.96 


.94 


9 


59.24 


30 


15 


-.1 


20 


.93 


.95 


.95 


.93 


.92 


9 


59.24 


30 


15 


-.1 


50 


.91 


.92 


.94 


.93 


.92 


13 


58.00 


30 


30 


-.3 


20 


.93 


.94 


.93 


.92 


.93 


13 


58.00 


30 


30 


-.3 


50 


.92 


.94 


.96 


.94 


.93 


15 


50.00 


27.8 


27.8 


-.1 


46. 


3 .88 


.88 


.90 


.90 


.88 


16 


46.00 


30 


30 


-.1 


20 


.89 


.92 


.91 


.90 


.89 



a Relative efficiency is the standard error of the MIVQUE estimate of 
( co ) variance component divided by the standard error of the MIVQUE 
(0, M, E) estimate. 

b Thompson and Eisen Mating designs are defined in the text. 

c Parameters are defined as follows: cig, a 2 ™, a^, and CT | are variances 
due to direct additive genetic, maternal additive genetic, permanent 
maternal environmental, and residual error effects; and *g and a ( 
are the correlation and covariance between direct and mater 
additive genetic effects. 

d Parameter sets are ranked by o| + o 2 , from high to low. 
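The relative efficiencies tabulated above are ratios of sampling standard errors, as footnote (a) defines. A minimal sketch of that computation follows; the function name and the example numbers are illustrative, not taken from the dissertation:

```python
def relative_efficiency(se_mivque, se_method):
    """Efficiency of a competing estimator relative to MIVQUE.

    Per footnote (a) of Table 8: the sampling standard error of the
    MIVQUE estimate divided by the sampling standard error of the
    competing estimate, so values <= 1 mean the competing method is
    less precise than full MIVQUE.
    """
    return se_mivque / se_method

# Hypothetical example: a MIVQUE standard error of 1.90 against a
# MIVQUE(0,M,E) standard error of 2.00 would be tabulated as .95.
print(round(relative_efficiency(1.90, 2.00), 2))
```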



The efficiencies of MIVQUE(1) estimates of (co)variance components
relative to MIVQUE are presented in table 9. Like MIVQUE(0,M,E),
the efficiency of MIVQUE(1) relative to MIVQUE remained relatively high
(greater than .8) over all parameter sets studied. The efficiencies of
MIVQUE(0,M,E) and MIVQUE(1) are similar, with MIVQUE(0,M,E) being
higher for some parameter sets and MIVQUE(1) being higher for others.



Table 9. The efficiency^a of MIVQUE(1) estimates of (co)variance
components relative to MIVQUE estimates computed from 200 sets of
designs described by Thompson^b (1976) and Eisen^b (1967).

               Parameters^c                      Efficiency of
σg²    r_ggm   σgm²   σm²    σe²       σ̂g²   σ̂ggm   σ̂gm²   σ̂m²    σ̂e²

Thompson A
15     -.1     15     20     53        .93    .89    .89    .92    .94
                      50     23        .97    .95    .95    .96    .97
               30     20     39.24     .97    .93    .94    .95    .97
                      50      9.24     .91    .89    .93    .95    .91
       -.3     15     20     59        .91    .87    .87    .90    .93
                      50     29        .97    .94    .94    .95    .97
               30     20     47.72     .95    .90    .91    .93    .95
                      50     17.72     .95    .92    .94    .95    .95
30     -.1     15     20     39.24     .96    .93    .93    .94    .97
                      50      9.24     .96    .95    .96    .96    .96
               30     20     26        .99    .97    .97    .97    .99
27.8   -.1     27.8   46.3    3.70     .92    .91    .95    .96    .91
30     -.3     15     20     47.72     .94    .90    .89    .92    .95
                      50     17.72     .98    .95    .96    .96    .97
               30     20     38        .97    .93    .93    .94    .97
                      50      8        .94    .92    .95    .96    .94

Thompson B
15     -.1     15     20     53        .90    .95    .95    .95    .90
                      50     23        .93    .92    .92    .93    .92
               30     20     39.24     .94    .96    .95    .96    .94
                      50      9.24     .96    .93    .88    .93    .96
       -.3     15     20     59        .87    .95    .94    .94    .89
                      50     29        .91    .92    .91    .92    .90
               30     20     47.72     .91    .95    .94    .94    .91
                      50     17.72     .94    .92    .89    .92    .93
30     -.1     15     20     39.24     .93    .97    .97    .97    .93
                      50      9.24     .95    .94    .92    .94    .95
               30     20     26        .96    .97    .97    .97    .96
27.8   -.1     27.8   46.3    3.70     .97    .94    .91    .95    .97
30     -.3     15     20     47.72     .90    .95    .96    .96    .90
                      50     17.72     .92    .93    .93    .93    .92
               30     20     38        .93    .96    .95    .96    .92
                      50      8        .95    .92    .90    .93    .94

Eisen I
15     -.1     15     20     53        .84    .91    .88    .89    .87
                      50     23        .91    .90    .84    .89    .90
               30     20     39.24     .91    .93    .90    .93    .91
                      50      9.24     .88    .80    .80    .89    .85
       -.3     15     20     59        .81    .89    .85    .86    .85
                      50     29        .89    .90    .82    .87    .88
               30     20     47.72     .87    .91    .87    .90    .88
                      50     17.72     .90    .85    .80    .90    .87
30     -.1     15     20     39.24     .92    .95    .92    .93    .93
                      50      9.24     .91    .89    .86    .91    .89
               30     20     26        .96    .95    .93    .96    .95
27.8   -.1     27.8   46.3    3.70     .86    .81    .84    .91    .83
30     -.3     15     20     47.72     .88    .93    .88    .90    .89
                      50     17.72     .92    .90    .84    .90    .90
               30     20     38        .92    .93    .90    .93    .92
                      50      8        .88    .82    .81    .89    .84

Eisen II
15     -.1     15     20     53        .88    .92    .93    .95    .90
                      50     23        .92    .93    .92    .94    .92
               30     20     39.24     .93    .95    .93    .95    .93
                      50      9.24     .88    .85    .89    .92    .86
       -.3     15     20     59        .85    .91    .91    .94    .88
                      50     29        .90    .93    .91    .93    .90
               30     20     47.72     .89    .93    .91    .94    .90
                      50     17.72     .90    .89    .88    .91    .89
30     -.1     15     20     39.24     .93    .96    .95    .97    .94
                      50      9.24     .91    .92    .94    .96    .90
               30     20     26        .96    .97    .95    .97    .96
27.8   -.1     27.8   46.3    3.70     .87    .86    .92    .94    .86
30     -.3     15     20     47.72     .90    .94    .93    .96    .91
                      50     17.72     .92    .93    .93    .95    .91
               30     20     38        .93    .95    .93    .96    .93
                      50      8        .89    .87    .90    .93    .87

Eisen III
15     -.1     15     20     53        .88    .94    .90    .93    .90
                      50     23        .92    .91    .90    .94    .91
               30     20     39.24     .93    .94    .93    .95    .93
                      50      9.24     .88    .82    .86    .92    .87
       -.3     15     20     59        .86    .92    .88    .91    .87
                      50     29        .90    .90    .89    .93    .90
               30     20     47.72     .89    .93    .90    .94    .90
                      50     17.72     .90    .85    .87    .92    .89
30     -.1     15     20     39.24     .94    .96    .93    .96    .94
                      50      9.24     .92    .90    .91    .95    .90
               30     20     26        .97    .96    .95    .97    .96
27.8   -.1     27.8   46.3    3.70     .89    .85    .89    .94    .87
30     -.3     15     20     47.72     .91    .95    .90    .94    .91
                      50     17.72     .92    .91    .90    .95    .91
               30     20     38        .93    .94    .92    .96    .92
                      50      8        .90    .85    .87    .93    .87

^a Efficiency is the standard error of the MIVQUE estimate of the
(co)variance component divided by the standard error of the MIVQUE(1)
estimate.

^b Thompson and Eisen mating designs are described in the text.

^c Parameters are defined as follows: σg², σgm², σm², and σe² are
variances due to direct genetic, maternal genetic, permanent
maternal environmental, and residual effects, and r_ggm and σ_ggm
are the correlation and covariance between direct and maternal
genetic effects.



SUMMARY AND CONCLUSIONS 

In populations of farm animals, there are many different kinds of
relatives. In general, analysis of variance (ANOVA) type methods are
not adequate to estimate (co)variance components from data in which the
animal model is needed because of the complex covariance structure
among the observations. Minimum variance quadratic unbiased estimates
(MIVQUE) (Rao, 1971) and restricted maximum likelihood estimates (REML)
(Patterson and Thompson, 1971) have been extended by Henderson (1985a
and b) to an animal model which includes both additive and nonadditive
genetic effects. With Henderson's (1985a and b) approach, the mixed
model equations (MME) and best linear unbiased predictions (BLUP) for
genetic merits are obtained as intermediate results.
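The MME referred to above can be illustrated for the simplest additive animal model, y = Xb + Zu + e with var(u) = Aσa² and var(e) = Iσe². Everything in the sketch below is hypothetical (the toy records, the unrelated-animal assumption A = I, and the variance ratio lam = 3); it shows only the familiar coefficient-matrix layout, not Henderson's full nonadditive extension:

```python
def gauss_solve(M, v):
    """Solve M x = v by Gauss-Jordan elimination (tiny helper)."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * z for x, z in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Toy data (hypothetical): 4 records, an overall mean only (X = 1),
# and 2 unrelated, noninbred animals (so A = I), 2 records each.
y = [10.0, 12.0, 8.0, 9.0]
Z = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
lam = 3.0                       # assumed ratio sigma_e^2 / sigma_a^2
n_rec, n_ani = len(y), 2

# MME: [[X'X, X'Z], [Z'X, Z'Z + A^{-1} * lam]] [b, u]' = [X'y, Z'y]'
XtZ = [sum(Z[k][i] for k in range(n_rec)) for i in range(n_ani)]
ZtZ = [[sum(Z[k][i] * Z[k][j] for k in range(n_rec))
        for j in range(n_ani)] for i in range(n_ani)]
C = [[float(n_rec)] + XtZ]
for i in range(n_ani):
    C.append([XtZ[i]] + [ZtZ[i][j] + (lam if i == j else 0.0)
                         for j in range(n_ani)])
rhs = [sum(y)] + [sum(Z[k][i] * y[k] for k in range(n_rec))
                  for i in range(n_ani)]
sol = gauss_solve(C, rhs)   # [mean, BLUP animal 1, BLUP animal 2]
```

With a fitted mean and A = I, the two BLUPs sum to zero (here .5 and -.5 about a mean of 9.75), and it is these same solutions that appear as intermediate results in the MIVQUE and REML computing strategies.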

Quaas and Pollak (1980) described a reduced animal model (RAM)
with MME only for parents that have progeny records, equivalent to the
animal model with MME for all animals when the genetic model is
strictly additive. Using RAM instead of the animal model can save a
great deal of computer storage when analyzing performance records from
some species. Hudson and Kennedy (1985) found that there were 5.6
times as many total animals as there were parents and ancestors that
had progeny with records in Ontario record of performance swine data.

REML and MIVQUE under RAM or the animal model are not
computationally feasible with 1986 computers when data sets are large.
Therefore, under the animal model or RAM with large data sets, methods
more computationally feasible than REML or MIVQUE are needed to obtain
estimates of (co)variance components.

Symmetric differences squared (SDS) (Grimes and Harvey, 1980),
weighted symmetric differences squared (WSDS) (Christian, 1980),
MIVQUE(0) (Rao, 1971), and diagonal MIVQUE (Henderson) are
computationally feasible methods of estimating (co)variance components
under RAM or the animal model.

Estimates of (co)variance components by symmetric differences
squared (SDS) weighted by the inverse of the error variance-covariance
matrix among SDS (WSDS) are minimum variance quadratic unbiased
estimates (MIVQUE) when all parameters other than the error variance
are near zero. Likewise, estimates of (co)variance components by SDS
weighted by the inverse of the total variance-covariance matrix among
SDS are MIVQUE for the prior parameters chosen.

The algorithm presented in this paper for obtaining WSDS greatly 
reduces the computations needed to complete the analysis compared to 
the WSDS methodology described by Christian (1980). The number of 
multiplication steps required can be further reduced if the 
relationships between pairs of animals can be classified into a 
relatively small number (5 to 100) of relationship categories. This 
algorithm follows the same approach for reducing the number of 
multiplications required as Christian (1980) used to reduce the number 
of multiplications needed to complete SDS. However, WSDS requires more 
multiplication and addition steps to complete than SDS. 
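The saving from classifying pairs of animals into relationship categories can be sketched as follows. The pooling idea is the one described above; the coefficient tuples and squared differences in the example are hypothetical:

```python
from collections import defaultdict

# Each pair (k, l) carries a tuple of relationship coefficients
# (e.g. direct and maternal additive coefficients) and its squared
# difference s_kl.  Pairs sharing the same coefficient tuple form one
# "relationship category", so the coefficient multiplications are done
# once per category instead of once per pair.
pairs = [
    ((0.5, 0.25), 4.0),   # hypothetical full-sib pairs
    ((0.5, 0.25), 6.0),
    ((0.25, 0.0), 9.0),   # hypothetical half-sib pairs
    ((0.25, 0.0), 1.0),
    ((0.25, 0.0), 2.0),
]

# category -> [sum of SDS within category, number of pairs]
sums = defaultdict(lambda: [0.0, 0])
for coeffs, sds in pairs:
    sums[coeffs][0] += sds
    sums[coeffs][1] += 1

# Any multiplication by a relationship coefficient is now applied to
# the two pooled sums, not to the five individual pairs.
```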

Matrices containing coefficients of relationship for additive and 
nonadditive genetic effects are needed to apply WSDS or MIVQUE(O) to 



75 
models in which these effects are important. Relationship matrices for 
dominance and epistatic effects can be computed from the numerator 
relationship matrix (A) when there is no inbreeding (Henderson, 1985a). 
Normally, an additive genetic model is assumed as an approximation when 
there is inbreeding because the nonadditive effects are difficult to 
interpret under inbreeding (Cockerham, 1954). 
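As an illustration of computing a nonadditive relationship from A in a noninbred population, the dominance relationship between animals i and j can be built from the numerator relationships among their parents, d_ij = ¼(a(si,sj)·a(di,dj) + a(si,dj)·a(di,sj)). This form is consistent with Henderson's (1985a) noninbred result; the tiny full-sib example below is ours:

```python
def dominance_rel(a, si, di, sj, dj):
    """Dominance relationship d_ij for noninbred animals i and j,
    built from numerator relationships a[(.,.)] among their parents
    (si, di) and (sj, dj)."""
    return 0.25 * (a[(si, sj)] * a[(di, dj)] + a[(si, dj)] * a[(di, sj)])

# Two unrelated, noninbred parents s and d; i and j are full sibs.
a = {("s", "s"): 1.0, ("d", "d"): 1.0,
     ("s", "d"): 0.0, ("d", "s"): 0.0}
print(dominance_rel(a, "s", "d", "s", "d"))  # full sibs -> 0.25
```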

For large data sets under the animal model, the numerator
relationship matrix (A) is too large to half-store in computer core if
all n(n+1)/2 elements are stored. However, A can be stored and used by
storing only the p < n(n+1)/2 nonzero elements (Hudson et al., 1982).
This process requires the storage of three vectors of order n and two
vectors of order p. Hudson et al. (1982) found that only 6 to 22.6% of
the n(n+1)/2 elements of the upper triangle of A were nonzero for five
dairy sire populations. The computational steps required to obtain A
are proportional to n² even with the storage saving procedures of
Hudson et al. (1982).
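Nonzero-only storage of the upper triangle of A can be sketched with a row-indexed layout. The particular arrangement below (one pointer vector of order n+1 plus column-index and value vectors of order p) is only one plausible realization, not necessarily the exact vectors of Hudson et al. (1982):

```python
def pack_upper(A):
    """Store only the nonzero elements of the upper triangle of A.

    Returns a row-pointer vector and two vectors of order p (column
    indices and values), where p is the number of nonzeros actually
    kept out of the n(n+1)/2 upper-triangle elements.
    """
    n = len(A)
    ptr, cols, vals = [0], [], []
    for i in range(n):
        for j in range(i, n):          # upper triangle, row i
            if A[i][j] != 0.0:
                cols.append(j)
                vals.append(A[i][j])
        ptr.append(len(cols))          # row i occupies ptr[i]:ptr[i+1]
    return ptr, cols, vals

# Toy A: animals 1 and 2 are half sibs, animal 3 is unrelated.
A = [
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
ptr, cols, vals = pack_upper(A)   # stores p = 4 of the 6 elements
```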

The sampling standard errors computed from 200 sets of designs A 
and B of Thompson (1976) agree with the standard errors obtained by 
simulation of the same designs by Grimes and Harvey (1980). These 
results substantiate the validity of the numerical and simulation 
methods used in these two studies. 

WSDS was more efficient than SDS for most of the design-parameter 
set combinations studied. Because WSDS was less than 75% efficient 
relative to MIVQUE (parameters known a priori) for many of the design- 
parameter set combinations studied, there is a need for computationally 
feasible methods that are more efficient than WSDS. 



MIVQUE with all priors set to zero except for the permanent
maternal environmental and residual variance, which were set to their
true values (MIVQUE(0,M,E)), and MIVQUE with all priors including the
error variance set to one (MIVQUE(1)) were greater than 80% efficient
relative to MIVQUE for all design-parameter combinations studied, and
were greater than 90% efficient for over half of the design-parameter
combinations studied. For MIVQUE(0,M,E), the efficiencies obtained in
this study are an upper bound because in practice the priors chosen for
permanent maternal environmental and residual variances would have to
be estimates instead of the true parameters. MIVQUE(0,M,E) is much
easier to compute than MIVQUE or MIVQUE(1) because the inverse of the
variance-covariance matrix among observations is easy to compute when
all parameters except the permanent maternal environmental and residual
variances are assumed to be zero. MIVQUE(1) offers no computational
advantage over MIVQUE, but it frees the experimenter from the
responsibility of assigning prior values.
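The easy inverse can be made concrete: with only a permanent maternal environmental variance (call it σc²) and a residual variance σe² nonzero, V is block diagonal by dam family, and each n × n block σe²·I + σc²·J (J a matrix of ones) has the standard compound-symmetry inverse (1/σe²)(I − σc²/(σe² + nσc²)·J). The symbols and litter size below are ours:

```python
def block_inverse(n, s2c, s2e):
    """Closed-form inverse of the n x n dam-family block
    s2e*I + s2c*J, using the compound-symmetry formula
    (1/s2e) * (I - (s2c / (s2e + n*s2c)) * J)."""
    off = -s2c / (s2e * (s2e + n * s2c))
    diag = 1.0 / s2e + off
    return [[diag if i == j else off for j in range(n)]
            for i in range(n)]

# Check on a hypothetical litter of 3: block * inverse == identity.
n, s2c, s2e = 3, 2.0, 5.0
V = [[s2e + s2c if i == j else s2c for j in range(n)] for i in range(n)]
Vi = block_inverse(n, s2c, s2e)
P = [[sum(V[i][k] * Vi[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
```

Because each family block is inverted independently in closed form, no large dense inverse is ever needed, which is why MIVQUE(0,M,E) is far cheaper than full MIVQUE.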

A method not considered in the current study which offers potential
as far as computational feasibility and efficiency relative to
MIVQUE is diagonal MIVQUE (Henderson, 1985a). Numerical studies
similar to the current one are needed to study the efficiency of
diagonal MIVQUE (Henderson, 1985a) relative to MIVQUE under the animal
or reduced animal model. Henderson (1984) states that diagonal MIVQUE
is more efficient than MIVQUE(0) when the unknown parameters deviate
greatly from zero. Diagonal MIVQUE is only slightly more
computationally burdensome to complete than MIVQUE(0) under the animal
model.



LITERATURE CITED 

Brocklebank, J. and F.G. Giesbrecht. 1984. Estimating variance 
components using alternative MINQE's in selected unbalanced 
designs. In: Experimental Design, Statistical Models, and Genetic 
Statistics. Vol. 50. Edited by Klaus Hinkelmann. pp 177-211.

Christian, L.E. 1980. Modification and simplification of symmetric 
differences squared procedure for estimation of genetic variances
and covariances. Ph.D. Dissertation. The Ohio State University,
Columbus. 

Cockerham, C.C. 1954. An extension of the concept of partitioning 
hereditary variance for analysis of covariances among relatives 
when epistasis is present. Genetics 39:859. 

Dempster, A.P., N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood
from incomplete data via the EM algorithm. J. Royal Statistical
Society, Series B 39:1.

Dickerson, G.E. 1942. Experimental design for testing inbred lines of 
swine. J. Anim. Sci. 1:326. 

Eisen, E.J. 1967. Mating designs for estimating direct and maternal 
genetic variances and direct-maternal genetic covariances. Can. J. 
Genet. Cytol. 9:13. 

Forthofer, R.N. and G.G. Koch. 1974. An extension of the symmetric sum 
approach to the estimation of variance components. Biometrische 
Zeitschrift 16:3. 

Grimes, L.W. and W.R. Harvey. 1980. Estimation of genetic variances and 
covariances using symmetric differences squared. J. Anim. Sci. 
50:634. 

Hartley, H.O. and J.N.K. Rao. 1967. Maximum likelihood estimation for 
the mixed analysis of variance model. Biometrika 54:93.

Hazel, L.N., M.L. Baker, and C.F. Reinmuller. 1943. Genetic and 
environmental correlations between the growth rates of pigs at 
different ages. J. Anim. Sci. 2:118. 

Henderson, C.R. 1973. Sire evaluation and genetic trends. In: Proc. of 
the Animal Breeding and Genetics Symposium in Honor of Dr. Jay L. 
Lush, pp 10-41. ASAS and ADSA, Champaign, IL. 




Henderson, C.R. 1984. Applications of Linear Models in Animal Breeding.
University of Guelph. p. 173. 

Henderson, C.R. 1985a. Best linear unbiased prediction of nonadditive 
genetic merits in noninbred populations. J. Anim. Sci. 60:111-117. 

Henderson, C.R. 1985b. MIVQUE and REML estimation of additive and 
nonadditive genetic variance. J. Anim. Sci. 61:113. 

Henderson, C.R. 1985c. Equivalent linear models to reduce computations.
J. Dairy Sci. 68:2267. 

Henderson, C.R., O. Kempthorne, S.R. Searle, and C.M. Von Krosigk.
1959. The estimation of environmental and genetic trends from 
records subject to culling. Biometrics 15:192. 

Henderson, C.R. and R.L. Quaas. 1976. Multiple trait evaluation using 
relatives' records. J. Anim. Sci. 43:1188. 

Henderson, H.V. and S.R. Searle. 1981. On deriving the inverse of a sum 
of matrices. SIAM Review 23:53. 

Hudson, G.F.S., R.L. Quaas, and L.D. Van Vleck. 1982. Computer 
algorithm for the recursive method of calculating large numerator 
relationship matrices. J. Dairy Sci. 65:2018. 

Hudson, G.F.S. and B.W. Kennedy. 1985. Genetic evaluation of swine for 
growth rate and backfat thickness. J. Anim. Sci. 61:83. 

Hudson, G.F.S. and L.D. Van Vleck. 1982. Estimation of components of 
variance by method 3 and Henderson's new method. J. Dairy Sci. 
65:435. 

Kennedy, B.W., K. Johansson, and G.F.S. Hudson. 1985. Heritabilities 
and genetic correlations for backfat and age at 90 kg in 
performance-tested pigs. J. Anim. Sci. 61:78. 

Koch, G.G. 1967. A general approach to the estimation of variance 
components. Technometrics 9:93-118. 

Koch, G.G. 1968. Some further remarks concerning 'a general approach to 
the estimation of variance components'. Technometrics 10:551. 

Patterson, H.D. and R. Thompson. 1971. Recovery of interblock 
information when block sizes are unequal. Biometrika 58:545.

Quaas, R.L. 1976. Computing the diagonal elements and inverse of a 
large numerator relationship matrix. Biometrics 32:949. 

Quaas, R.L. and E.J. Pollak. 1980. Mixed model methodology for farm and
ranch beef cattle testing programs. J. Anim. Sci. 51:1277.




Rao, C.R. 1971. Minimum variance quadratic unbiased estimation of 
variance components. J. Mult. Anal. 1:445. 

Searle, S.R. 1958. Sampling variances of estimates of components of 
variance. Ann. Math. Stat. 29:167. 

Swallow, W.H. and J.F. Monahan. 1984. Monte Carlo comparison of ANOVA, 
MIVQUE, REML, and ML estimates of variance components. 
Technometrics 26:47.

Thompson, R. 1976. The estimation of maternal genetic variances. 
Biometrics 32:903. 



APPENDIX A

1. Prove [tr(V_i V_j)] = X_1i'X_1j + 2X_2i'X_2j.                      [49]

Since V_i and V_j are symmetric,

[tr(V_i V_j)] = [Σ_k (v_kk)_i (v_kk)_j]
              + 2[Σ_{k=1}^{n-1} Σ_{l=k+1}^{n} (v_kl)_i (v_kl)_j],

which can be written as the sum of two row-column multiplications
as follows:

[tr(V_i V_j)] = [(v_11)_i (v_22)_i ... (v_nn)_i]
                  [(v_11)_j, (v_22)_j, ..., (v_nn)_j]'
              + 2[(v_12)_i (v_13)_i ... (v_1n)_i (v_23)_i ... (v_{n-1,n})_i]
                  [(v_12)_j, (v_13)_j, ..., (v_1n)_j, (v_23)_j, ..., (v_{n-1,n})_j]',

which after substituting in [14] and [16] is

[tr(V_i V_j)] = X_1i'X_1j + 2X_2i'X_2j, which proves [49].

2. Prove [V_j 1_n] = H'X_2j + X_1j.                                   [50]

[V_j 1_n] can be expressed in summation notation as:

[V_j 1_n] = [Σ_k (v_1k)_j, Σ_k (v_2k)_j, ..., Σ_k (v_nk)_j]'

          = [Σ_{k≠1} (v_1k)_j, Σ_{k≠2} (v_2k)_j, ..., Σ_{k≠n} (v_nk)_j]'
          + [(v_11)_j, (v_22)_j, ..., (v_nn)_j]'

          = H'[(v_12)_j, (v_13)_j, ..., (v_{n-1,n})_j]'
          + [(v_11)_j, (v_22)_j, ..., (v_nn)_j]',

which after substituting in [14] and [16] is

[V_j 1_n] = H'X_2j + X_1j, which proves [50].

3. Prove [1_n'V_j 1_n] = 1'X_1j + 2·1'X_2j.                           [51]

The left side of equality [51], after substituting [50] for [V_j 1_n], is

[1_n'V_j 1_n] = 1_n'H'X_2j + 1_n'X_1j,

which after substituting the transpose of [9] for 1_n'H' is

              = 1'X_1j + 2·1'X_2j, which proves [51].

4. Prove [y'V_i y] = X_1i'Y_1 + 2X_2i'Y_2.                            [59]

The left side of [59] is

[y'V_i y] = [Σ_k (v_kk)_i y_k²]
          + 2[Σ_{k=1}^{n-1} Σ_{l=k+1}^{n} (v_kl)_i y_k y_l],

which can be written as the sum of two row-column multiplications as
follows:

[y'V_i y] = [(v_11)_i (v_22)_i ... (v_nn)_i] [y_1², y_2², ..., y_n²]'
          + 2[(v_12)_i (v_13)_i ... (v_1n)_i (v_23)_i ... (v_{n-1,n})_i]
              [y_1y_2, y_1y_3, ..., y_1y_n, y_2y_3, ..., y_{n-1}y_n]',

which after substituting in [5], [6], [14], and [16] is

[y'V_i y] = X_1i'Y_1 + 2X_2i'Y_2, which proves [59].

5. Prove [yy'1_n] = H'Y_2 + Y_1.                                      [60]

The left side of [60] is

[yy'1_n] = [Σ_k y_1y_k, Σ_k y_2y_k, ..., Σ_k y_ny_k]'

         = H'[y_1y_2, y_1y_3, ..., y_1y_n, y_2y_3, ..., y_{n-1}y_n]'
         + [y_1², y_2², ..., y_n²]',

which after substituting in [5] and [6] is

[yy'1_n] = H'Y_2 + Y_1, which proves [60].

6. Prove [1_n'yy'1_n] = 1'Y_1 + 2·1'Y_2.                              [61]

Substitute [60] into [61] for [yy'1_n] to obtain

[1_n'yy'1_n] = 1_n'H'Y_2 + 1_n'Y_1,

which after substituting the transpose of [9] for 1_n'H' is

             = 1'Y_1 + 2·1'Y_2, which proves [61].
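The trace and quadratic-form identities of this appendix can be checked numerically. In the sketch below, X1 stacks the n diagonal elements and X2 the n(n-1)/2 above-diagonal elements of each symmetric V (playing the roles of [14] and [16]), and Y1, Y2 hold the matching squares and cross-products of y; the random test matrices are, of course, ours:

```python
import random

random.seed(1)
n = 5

def sym(n):
    """A random symmetric integer matrix (exact arithmetic)."""
    M = [[0] * n for _ in range(n)]
    for k in range(n):
        for l in range(k, n):
            M[k][l] = M[l][k] = random.randint(-3, 3)
    return M

def diag_vec(V):   # X1: the n diagonal elements
    return [V[k][k] for k in range(n)]

def upper_vec(V):  # X2: the n(n-1)/2 above-diagonal elements
    return [V[k][l] for k in range(n - 1) for l in range(k + 1, n)]

dot = lambda a, b: sum(x * z for x, z in zip(a, b))

Vi, Vj = sym(n), sym(n)
y = [random.randint(-2, 2) for _ in range(n)]

# Identity [49]: tr(Vi Vj) = X1i'X1j + 2 X2i'X2j
trace = sum(Vi[k][l] * Vj[l][k] for k in range(n) for l in range(n))
assert trace == dot(diag_vec(Vi), diag_vec(Vj)) \
              + 2 * dot(upper_vec(Vi), upper_vec(Vj))

# Identity [59]: y'Vi y = X1i'Y1 + 2 X2i'Y2
quad = sum(y[k] * Vi[k][l] * y[l] for k in range(n) for l in range(n))
Y1 = [yk * yk for yk in y]
Y2 = [y[k] * y[l] for k in range(n - 1) for l in range(k + 1, n)]
assert quad == dot(diag_vec(Vi), Y1) + 2 * dot(upper_vec(Vi), Y2)
print("identities [49] and [59] verified")
```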



APPENDIX B

The multiplier coefficients that specify the contribution of σg², σ_ggm,
σgm², and σm² to the covariances among the 18 individuals generated
by a replicate of Eisen design I are defined in matrix form as V_1, V_2,
V_3, and V_4. These matrices are as follows:


[The 18 x 18 symmetric matrices V_1, V_2, V_3, and V_4 for Eisen
design I are illegible in the scanned source and are not reproduced
here.]
The multiplier coefficients that specify the contribution of σg², σ_ggm,
σgm², and σm² to the covariances among the 18 individuals generated by
a replicate of Eisen design II are defined in matrix form as V_1, V_2,
V_3, and V_4. These matrices are as follows:



[The 18 x 18 symmetric matrices V_1, V_2, V_3, and V_4 for Eisen
design II are illegible in the scanned source and are not reproduced
here.]
The multiplier coefficients that specify the contribution of σg², σ_ggm,
σgm², and σm² to the covariances among the 18 individuals generated by
a replicate of Eisen design III are defined in matrix form as V_1, V_2,
V_3, and V_4. These matrices are as follows:
[The four symmetric 18 × 18 matrices V1, V2, V3, and V4 are not legible in this reproduction.]
The multiplier coefficients that specify the contribution of σ²_g, σ_g,gm,
σ²_gm, and σ²_m to the covariances among the eight individuals generated
by a replicate of Thompson design A are defined in matrix form as V1,
V2, V3, and V4. These matrices are as follows:
[The four symmetric 8 × 8 matrices V1, V2, V3, and V4 are not legible in this reproduction.]
The multiplier coefficients that specify the contribution of σ²_g, σ_g,gm,
σ²_gm, and σ²_m to the covariances among the eight individuals
generated by a replicate of Thompson design B are defined in matrix
form as V1, V2, V3, and V4. These matrices are as follows:
[The four symmetric 8 × 8 matrices V1, V2, V3, and V4 are not legible in this reproduction.]

APPENDIX C 
A computational example follows for symmetric differences squared
(SDS) and for SDS weighted by the inverse of the error variance-covariance
matrix among the SDS (WSDS) when relationship categories are considered.

Suppose that the observations generated from one replicate of 
design A of Thompson (1976) are 

y = [25 13 37 11 7 4 9 21]' . 
Assume that Var(y) is the same as assumed in Appendix B. Hence, 

1'y = 127, and
ȳ = 15.875 .
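These two totals can be verified directly; a minimal Python sketch (not part of the original appendix):

```python
# Observations from one replicate of Thompson design A, as given above.
y = [25, 13, 37, 11, 7, 4, 9, 21]

total = sum(y)         # 1'y, the sum of the observations
mean = total / len(y)  # the sample mean of y

print(total, mean)     # 127 15.875
```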
D contains the first 4 columns of table 1 of Grimes and Harvey (1980), 
excluding the 9th row, plus a column of zeros for the fifth column. 
Hence, 
D = (1/16) [8 × 5 matrix; entries not legible in this reproduction] .
Q is obtained from the 5th column of table 1 of Grimes and Harvey 
(1980), excluding the 9th row. Hence, 

Q = [4 4 4 2 2 2 4 2 4]' .
The 4 in the first element of Q means that there are four pairs of
individuals in a set of this design related as paternal half-sibs. N
is obtained from figure 1 and table 1 of Grimes and Harvey (1980).
Hence,

N' = [incidence matrix; entries not legible in this reproduction] .

A 2 in the first row and column of N' means that there are two animals
related to P1 as paternal half-sibs. Therefore,



N'N = [symmetric matrix; entries not legible in this reproduction] .
The vectors of squares (Y1) and crossproducts (Y2) among observations
(y) are

Y1 = [625 169 1369 121 49 16 81 441]' , and
Y2 = [325 925 275 175 100 225 525 481 143 91 52 117 273 407
      259 148 333 777 77 44 99 231 28 63 147 36 84 189]' .
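The squares and crossproducts can be generated directly from y; a minimal Python sketch (not from the original) reproduces Y1 and Y2 in the same order:

```python
from itertools import combinations

y = [25, 13, 37, 11, 7, 4, 9, 21]

# Y1: squares of the observations.
Y1 = [v * v for v in y]

# Y2: crossproducts y_i * y_j for all pairs i < j, in lexicographic order.
Y2 = [a * b for a, b in combinations(y, 2)]

print(Y1)       # [625, 169, 1369, 121, 49, 16, 81, 441]
print(len(Y2))  # 28 pairs among 8 individuals
```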
We can let

B = Diag(Q) .
The incidence matrix that specifies the relationship category to which 
the elements of Y2 belong is 


Z = [28 × 9 incidence matrix of zeros and ones; entries not legible in this reproduction] .

Z does not need to be stored, but is given here for completeness. 
Therefore, 

Z'Y2 = [1824 528 949 330 275 1110 1140 143 330]' .
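Because each row of Z contains a single 1, the elements of Z'Y2 are per-category totals of the crossproducts, so their sum must equal the sum of Y2. A consistency check (not in the original) confirms the transcribed values:

```python
from itertools import combinations

y = [25, 13, 37, 11, 7, 4, 9, 21]
Y2 = [a * b for a, b in combinations(y, 2)]

# Category totals Z'Y2 as given in the text (9 relationship categories).
ZtY2 = [1824, 528, 949, 330, 275, 1110, 1140, 143, 330]

# Every crossproduct falls in exactly one category, so the category
# totals must account for every element of Y2.
print(sum(ZtY2) == sum(Y2))  # True
```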
Applying equations [89] and [90], 



X'W⁻¹X =

   5.23828   6.10156   6.46875   6.375    5.1875
            11.0781   10.1875    8.75     4.375
                      11.25     11        5.5
    symmetric                   12        6
                                          7

and

X'W⁻¹Y = [594.633 641.391 708.313 736.75 854.875]' .



Therefore,

(X'W⁻¹X)⁻¹X'W⁻¹Y = [-195.7 148.6 -142.9 72.3 219.5]' ,
which are the WSDS estimates of the variance due to direct genic
effects, the covariance between direct and maternal genic effects, the
variance due to maternal genic effects, the variance due to permanent
maternal environmental effects, and the variance due to residual
effects. Applying equations [91] and [92],

X'X =

  64.5625   57.6250   70        75       83
            63.2500   70        70       70
                      84        88       88
   symmetric                    96       96
                                        112

and

X'Y = [10131.8 9166.5 10588 11732 13678]' .
Therefore,

(X'X)⁻¹X'Y = [-232.3 538.6 -905.4 503.1 237.8]' ,
which are the SDS estimates of the variance due to direct genic
effects, the covariance between direct and maternal genic effects, the
variance due to maternal genic effects, the variance due to permanent
maternal environmental effects, and the variance due to residual effects.
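The SDS estimates come from solving the normal equations (X'X)b = X'Y. A pure-Python Gaussian-elimination sketch (an illustration, not the author's code); the small matrix A and vector b below are hypothetical stand-ins for X'X and X'Y:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Work on an augmented copy so the inputs are not modified.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Pivot: bring the largest remaining entry in column k to row k.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the pivot row.
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    # Back-substitution on the upper-triangular system.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(M[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (M[k][n] - s) / M[k][k]
    return x

# Hypothetical symmetric system standing in for (X'X) b = X'Y.
A = [[4.0, 2.0], [2.0, 3.0]]
b = [10.0, 8.0]
print(solve(A, b))  # [1.75, 1.5]
```

With the full 5 × 5 X'X and X'Y above, the same routine would return the five SDS component estimates.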

Note for this example that the number of relationship categories
was 9 for only 8 animals. Normally in practice the number of
relationship categories would be quite small (less than 50) and the
number of animals could be in the thousands. Note that two estimates
of variance components are negative. This can happen with high
probability for an example this small.