

PRACTICAL
LEAST SQUARES


BY

ORA MINER LELAND, B.S., C.E.

Dean of the College of Engineering and Architecture and the School of
Chemistry, University of Minnesota. Member of the American
Society of Civil Engineers, American Association of
Engineers, Society for the Promotion of Engineering
Education, American Astronomical Society, etc.


First Edition
Fourth Impression


McGRAW-HILL BOOK COMPANY, Inc.

NEW YORK AND LONDON
1921


Copyright 1921, by the
McGraw-Hill Book Company, Inc.


PRINTED IN THE UNITED STATES OF AMERICA


TO

MY WIFE

whose loyal assistance
has contributed in great measure
to its preparation,
this book is affectionately
dedicated



PREFACE 


This book results from the author's experience in teaching
the subject of Least Squares and the Adjustment of Observations
to classes of civil engineering students at Cornell University.
As the time allotted to this work became more and more limited,
the available textbooks became less adaptable to the scope of
the course. To meet this condition, a series of chapters entitled
"Notes on the Adjustment of Observations" was prepared and
used as a text. With these notes as a basis, this book has been
written.

It is designed particularly for use in short courses of instruction
and by engineers and scientists in connection with their private
practice. It will not replace the more elaborate treatises on the
subject but the author hopes that it will introduce the student
directly to the simpler methods of solving the ordinary problems
in adjustment.

The plan of the work is essentially practical. After a general
introduction devoted to a consideration of the character and
occurrence of errors, the adjustment of direct, indirect, and
conditioned observations is taken up in detail and illustrated
by numerical applications to triangulation, leveling, astronomy,
and the derivation of empirical formulas. Not until after this
practical treatment of the determination of the best values of
the unknown quantities is the precision of observations discussed,
together with the computation of the mean square and probable
errors of the observations and results. Finally, the principles
of probability and the analytical derivation of the Law of Error
are given in appendices.

The utility of this arrangement should be obvious. By far
the greater number of applications of Least Squares do not
require a consideration of the precision of the results nor a
knowledge of the mean square or probable errors. Moreover,
the subject of the precision is usually the most troublesome part
of the work for the student or the beginner to understand.
Therefore, the practical methods of adjustment are explained
directly and fully, without regard to the probable errors or to
the theoretical derivation of the Law of Error. A special effort
has been made to explain the procedure in each case as completely
as necessary for the beginner as well as the practitioner,
even at the risk of criticism for undue length. The usual
difficulties experienced by students seem to justify this effort.

In Appendix D there is given an outline of a short course of
instruction suitable for civil engineers. This plan was carried
out successfully by the author in sixteen lessons. While it is
not at all desirable to restrict the work so severely, if no more
time can be given to it the course is still very much worth while.

The  author  is  indebted  to  many  excellent  works  and  has 
endeavored  to  make  specific  acknowledgments  throughout  the 
book  wherever  due.  In  the  preparation  of  the  original  notes 
and  their  application  to  class  instruction,  his  thanks  are  due 
to  his  former  colleagues,  Professors  P.  H.  Underwood  and  L.  A. 
Lawrence,  for  their  assistance  and  suggestions. 

O. M. Leland.

Minneapolis, Minn.

September, 1921.


CONTENTS


Preface

CHAPTER I

INTRODUCTION
Art.

1. Discrepancies among Observations
3. Necessity for Adjustment
4. Errors of Observation
5. Systematic or Constant Errors
6. Theoretical Errors
7. Instrumental Errors
8. Personal Errors
9. Mistakes or Blunders
10. Accidental Errors of Observation
13. Accidental Errors, only, Considered in Adjustments
14. Assumption of the Arithmetic Mean
15. Residuals (v)
16. Regularity in the Occurrence of Accidental Errors
17. Curve of Error
18. Assumptions as to the Occurrence of Errors
19. Law of Error
20. Tests of the Law of Error
21. Method of Least Squares
22. Number of Observations
23. Two Uses of Least Squares
24. Classification of Problems

CHAPTER II

DIRECT OBSERVATIONS OF ONE QUANTITY

25. Direct Observations: Readings
26. Observations Resulting from a Combination of Readings
27. The Mean
28. Computation of the Mean
29. Control or Check of the Mean
30. Weighted Observations
31. Definition of Weight (w)
32. Sources of Weights
33. The Weighted Mean
34. Principle of Least Squares for Weighted Observations
35. Control or Check of the Weighted Mean
36. Weighted Mean of Two Quantities


CHAPTER III

INDIRECT OBSERVATIONS, OF A FUNCTION OF THE
UNKNOWN QUANTITIES

37. Indirect Observations
38. The General Function
39. The Linear Function
40. Observation Equations
41. Adjustment of Indirect Observations of Unequal Weight
42. Observations of Equal Weight
43. Control or Check in the Formation of the Normal Equations
44. Symmetry of the Normal Equations
45. Formation of the Normal Equations. Aids
47. Example of the Direct Formation of Normal Equations
48. Use of Assumed Approximate Values of the Unknowns
49. Adoption of New Unknowns to Equalize Coefficients
50. Example: Time by Star Transits
51. General Application of the Method


CHAPTER IV

SOLUTION OF NORMAL EQUATIONS

52. Methods of Elimination
53. The Gauss Method of Substitution
54. Requirements of a Good Method
55. Algebraic Elimination by Addition
56. Symmetry among the Derived Equations
57. Omission of Redundant Terms
58. The Series of Derived Equations
59. Control or Check in the Solution of the Normal Equations
60. Elimination by the Abridged Method; Example
62. Notes and Suggestions
63. Values of the Unknowns
64. Final Check of the Unknowns
65. Refinement of the Computations
66. Mechanical Aids in the Solution


CHAPTER V

OBSERVATIONS OF DEPENDENT QUANTITIES:
CONDITIONED OBSERVATIONS

67. Dependent Quantities
68. The Observations
69. The Weights
70. Conditions
71. Number of Conditions
72. Statement of Conditions
73. Adjustment by the Method of Correlates
74. Observations of Equal Weight
75. Controls or Checks upon the Computation
76. Tabular Forms for Computations
77. Example: Adjustment of Levels
78. Arrangement of Equations
79. Example: Local Adjustment of Angles by the Method of Correlates
80. Special Case of One Condition Only
81. Adjustment by the Method of Indirect Observations
82. Example: Local Adjustment of Angles as Independent Quantities
83. Comparison of the Two Methods
84. Adjustments not Rigid


CHAPTER VI

ADJUSTMENT OF TRIANGULATION

85. Triangulation
86. Nature of the Conditions
87. Local Adjustment
88. Figure Adjustment. Notation
89. Classification of Figures
90. Angle Equations
91. Number of Angle Equations in a Figure
92. Side Equations
93. Side Equation of a Quadrilateral
94. Shorter Form of the Side Equation
95. Side Equation for a Central-point Figure
96. Mechanical Statement of Side Equations
97. Number of Side Equations in a Figure
98. Statement of All of the Conditions for a Figure Adjustment
99. Example: Adjustment of a Quadrilateral; Method of Angles
100. Use of Directions Instead of Angles
101. Notation: Method of Directions
102. Lists of Directions
103. Statement of Conditions: Method of Directions
104. Example: Adjustment of a Quadrilateral; Method of Directions
105. Example: Adjustment of a Quadrilateral; Approximate Method
106. Adjustment to Conform to Work Previously Adjusted or Fixed
107. Two Sides and the Included Angle Fixed
108. Quadrilateral with One Fixed Triangle
109. Fixed Triangle or Polygon with Central Point Unoccupied; Example
110. Adjustment of a System between Points of Control
111. Adjustment of Trigonometric Leveling
112. Base Lines

CHAPTER VII

EMPIRICAL FORMULAS

113. Empirical Formulas
114. Their Uses
115. Nature of the Problem
116. The Form of the Equation
117. Straight Lines and Parabolic Arcs
118. Periodic Functions
119. Non-linear Forms
120. Exponential Functions
121. General Case of Reduction to Linear Form
122. Determination of the Constants
123. Test of Empirical Formula
124. Remarks
125. Example: Straight Line
126. Example: Parabola
127. Example: Exponential Curve
128. Example: Periodic Curve
129. References

CHAPTER VIII

PRECISION OF OBSERVATIONS AND RESULTS AND
COMBINATION OF COMPUTED QUANTITIES

131. Precision
132. Precision and Accuracy
133. Index of the Precision
134. The Quantity, h, in the Law of Error
135. The Mean Square Error (ε)
136. The Probable Error (r)
137. The Average Error (η)
138. Comparison of the Indices of Precision
139. Precision of Direct Observations
140. Precision of a Single Observation
141. Precision of the Mean
142. Example: Precision of the Mean
143. Precision of the Weighted Mean
144. Example: Precision of the Weighted Mean
145. Precision of Indirect Observations
146. Weights of the Unknowns
147. Precision of an Observation of Weight Unity
148. Example: Precision of Indirect Observations
149. Precision of Conditioned Observations
150. Examples: Differences of Elevation
151. Precision of Computed Quantities
152. Simple Propagation of Error
153. Example: Precision of the Mean
154. Compound Propagation of Error
155. Examples: Propagation of Error

Combination of Computed Quantities

156. Weights from Mean Square or Probable Errors
157. Limitations
158. Example: Weighted Mean of Computed Quantities
159. Precision of the Adjusted Value
160. Example: Precision of the Adjusted Value

CHAPTER IX

CONCLUSION

161. Rejection of Observations
162. Criteria for Rejection of Observations
163. Methods of Observing
164. Precision Desired and Number of Observations
165. Ultimate Limit of Precision and Accuracy
166. Indication of Systematic Errors
167. Treatment of Discordant Observations
168. Arbitrary Adjustments
169. Use and Abuse of Least Squares
170. Adjustments not Infallible
171. Other Laws of Error
172. Review; Outline of Methods of Adjustment


APPENDICES

A. HISTORY AND BIBLIOGRAPHY OF LEAST SQUARES

173. Historical Sketch
174. Growth of the Literature
175. Bibliography

B. PRINCIPLES OF PROBABILITY

176. Definition
177. Two Sources of Probability
178. Simple Probability
179. Compound Probability. Independent Events
180. Compound Probability. Dependent Events
181. Number of Occurrences

C. DERIVATION OF THE LAW OF ERROR

182. The Law of Error
183. Assumptions. The Error Function
184. Derivation of the Law of Error
185. The Constant, C
186. Expansion of Law of Error in Series
187. Tables of the Law of Error

D. OUTLINE OF A SHORT COURSE OF INSTRUCTION

188. General Plan
189. List of Problems

E. TYPICAL CURVES FOR REFERENCE

Plate I. Straight Lines. Parabola. Hyperbola
II. Parabola
III. Hyperbola. Parabolas
IV. Parabolas
V. Parabolas
VI. Hyperbolas
VII. Exponential and Logarithmic Curves
VIII. Periodic Curves

F. TABLES

Table I. Probability of an Error Less than Δ: Argument is hΔ
II. Probability of an Error Less than Δ: Argument is Δ/ε
III. Probability of an Error Less than Δ: Argument is Δ/r
IV. Factors for Computing Probable Errors from Bessel's Formulas


PRACTICAL  LEAST  SQUARES 


CHAPTER    I 
INTRODUCTION 

1.  Discrepancies  among  Observations.  Measurements  made 
in  the  field,  office,  or  laboratory  directly  depend  upon  readings  of 
scales,  circles,  micrometers,  clocks,  watches,  etc.  The  readings 
may  be  made  to  the  nearest  division,  or  graduation,  or  the  space 
between  two  adjacent  graduations  may  be  subdivided  by  estima- 
tion, thus  carrying  the  observation  to  a  greater  degree  of  refine- 
ment.^ When  successive  settings  or  pointings  of  the  measuring 
apparatus  are  made,  upon  the  same  object,  the  corresponding 
readings  may  be  the  same  as  the  first  if  the  graduations  be  coarse 
and  the  nearest  one,  only,  recorded.  But  if  the  divisions  be  very 
fine  and  the  readings  made  with  the  aid  of  a  magnifier,  or  reading- 
glass,  and  by  estimation,  there  may  be  considerable  variation 
among  them,  especially  in  the  last  figure  which  is  estimated. 

For example, consider the following measurements of a line
made with a steel tape in a drizzling rain, using spring-balance,
hand-level, and plumb-bobs, the tape being graduated to
hundredths.

        899.754 ft.      899.763 ft.
           .761             .756
           .760             .759
           .758             .759
           .762             .760

If the readings had been made to the nearest hundredth, all after

¹ It is customary to estimate to tenths, although an experienced observer
will sometimes record to five one-hundredths when the reading seems to lie
between two adjacent tenths, greater than the one and less than the other.



the first would have been alike and 899.76 ft.; if to the tenth only,
each reading would have been 899.8 ft., indicating that the care in
handling the apparatus would justify the use of a more precise
method of making the readings, or that some of the precautions
were unnecessary.¹ Thus it will be seen that the observations
may be so rough or coarse as to show no variation whatever.
Their very agreement, in such a case, might be misleading, as
indicating a false precision.

Realizing  the  occurrence  of  these  small  discrepancies  among 
observations,  when  made  with  care,  the  observer  makes  a  number 
of  readings,  instead  of  a  single  one,  and  by  some  method  of  adjust- 
ment adopts  a  certain  value  for  the  observed  quantity  as  a  result 
of  his  series  of  observations.  If  they  were  made  with  equal  care 
and  under  the  same  conditions,  he  may  consider  them  to  be  of 
equal  weight  and  that  none  is  entitled  to  preference  over  the  others, 
in  which  case  it  will  be  reasonable  to  adopt  the  simple  mean  or 
average  of  the  set  as  the  best  value  obtainable  from  these  observa- 
tions.    In  fact,  this  adoption  of  the  mean  is  axiomatic. 
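The arithmetic of this example is easily followed in a few lines of Python. The sketch below is not part of the original text; it simply averages the ten tape readings as transcribed above and shows how recording them only to the nearest hundredth or tenth would have hidden the discrepancies.

```python
# A minimal sketch, not part of the original text: the ten tape readings as
# transcribed above, their mean, and the effect of recording them only to
# the nearest hundredth or to the nearest tenth.

readings = [899.754, 899.761, 899.760, 899.758, 899.762,
            899.763, 899.756, 899.759, 899.759, 899.760]

mean = sum(readings) / len(readings)
print("mean of the ten readings:", round(mean, 4), "ft")

# To the nearest hundredth, every reading after the first becomes 899.76:
print("to the nearest hundredth:", sorted({round(r, 2) for r in readings}))

# To the nearest tenth, all ten readings collapse to a single value and the
# discrepancies (and with them any indication of the precision) disappear.
print("to the nearest tenth:    ", sorted({round(r, 1) for r in readings}))
```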

2.  It  will  be  evident  that  absolute  correctness  in  the  observed 
quantity  is  unobtainable  as  a  result  of  the  observations  them- 
selves. In  the  above  example,  it  would  be  impossible  to  de- 
termine the  length  of  a  line  down  to  a  millionth  of  a  foot  (the 
sixth  place  of  decimals),  using  this  method  of  making  the  meas- 
ures. Certainly,  then,  correctness  to  an  infinite  number  of  places 
is  beyond  hope.  Moreover,  it  is  impossible  to  ascertain  the 
correct  value  of  the  next  figure  beyond  the  limit  of  our  observa- 
tions. Whatever  value  may  be  adopted  as  a  result  of  adjust- 
ment, it  should  be  regarded  as  but  an  approximation  to  the  true 
one,  that  is,  as  the  best  available  value  within  our  knowledge. 

The discrepancies among the observations of a quantity, then,
show that these observations are not quite correct, — that the
work is not perfect, in other words, but is attended by errors of
observation. The differences between the readings are not the
errors themselves but serve to indicate the existence of errors.

¹ Adding a zero after the last observed figure, making the reading, in this
example, 899.80 instead of 899.8, is a habit of some beginners which should be
studiously avoided.



If  it  were  possible  to  ascertain  the  correct  value  of  the  observed 
quantity  the  true  error  of  each  observation  would  be  easily  found 
as  the  difference  between  the  observation  and  the  correct  value. ^ 
But  just  as  it  is  never  possible  to  know  the  correct  value,  so  the 
true  errors  must  be  regarded  as  ideal  and  indeterminate. 

3.  Necessity  for  Adjustment.  By  making  several  observations 
upon  a  quantity,  in  succession,  two  objects  are  attained,  namely, 
greater  precision  in  the  resulting  mean  than  in  a  single  observation, 
and  the  check  upon  the  work  afforded  by  the  agreement  of  the 
various  readings  among  themselves,  within  the  limits  of  the  small 
discrepancies  above  described.  The  several  observations  having 
been  made,  however,  for  the  purpose  of  securing  a  better  value  of 
the  observed  quantity  than  any  one  of  the  separate  readings  would 
be likely to be, that is, a value presumably closer to the true or
correct  value,  it  is  necessary  to  arrive  at,  and  adopt,  some  one 
value  of  the  quantity,  for  use  in  any  computations  which  may 
involve  it,  such  use  being  the  probable  reason  for  making  the 
observations  in  the  first  place.  This  necessity  arises  from  the  fact 
that  if  different  values  of  the  same  quantity  be  used  in  the  com- 
putation, the  results  will  fail  to  check. 

Similarly,  if  two  or  more  related  quantities,  resulting  from 
observations,  be  used  in  computations  without  having  been 
adjusted  so  as  to  satisfy  the  relation  between  them,  the  results 
will  be  inconsistent  and  checks  upon  the  computation  will  be  sac- 
rificed. For  example,  suppose  the  three  horizontal  angles  of  a 
triangle  have  been  measured  in  the  field  and  their  sum,  as  usual, 
fails  to  equal  the  theoretical  amount,  namely,  180°  plus  the 
spherical  excess  of  the  triangle.  In  order  that  the  triangle  may 
be  computed  and  the  sides  checked,  the  three  angles  must  be 
adjusted by the application of small corrections so as to satisfy
the theoretical sum. Also, if a series of benchmarks be connected

¹ It is well to adopt the rule of subtracting the incorrect or observed
quantity from the correct or adjusted one, algebraically, taking account of
the signs. The resulting difference, with its sign, is then the correction to be
added algebraically to the observed quantity to obtain the adjusted one.
Strictly, the error has the opposite sign to the correction, but the latter is more
convenient in most cases, and the use of a fixed rule tends to avoid mistakes.
An old expression of this rule is, Subtract the false from the true.



by  lines  of  levels,  some  of  which  are  check-lines  forming  with  the 
others  complete  circuits,  it  is  necessary  to  adjust  the  differences 
of  elevation  so  that  all  of  the  circuits  will  close  exactly,  in  order 
that  the  difference  of  elevation  between  any  two  benchmarks  will 
be  constant  when  computed  through  two  or  more  series  of  lines, 
that  is,  by  two  or  more  different  routes. 
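By way of illustration only (the rigorous methods appear in the later chapters on conditioned observations and triangulation), the Python sketch below applies the simplest possible adjustment to three equally weighted triangle angles: the closure error is distributed equally among them so that their sum satisfies the theoretical amount. The angle values and the spherical excess in the sketch are invented, not taken from the text.

```python
# A minimal sketch, not from the original text: the simplest adjustment of
# the three measured angles of a triangle, distributing the closure error
# equally so that the adjusted sum equals 180 degrees plus the spherical
# excess.  The measured angles and the spherical excess below are invented
# for illustration; the rigorous methods appear in later chapters.

measured = [62.34972, 58.27431, 59.37623]      # observed angles, in degrees
spherical_excess = 0.00020                     # assumed value, in degrees

theoretical_sum = 180.0 + spherical_excess
closure_error = sum(measured) - theoretical_sum      # failure of the sum
correction = -closure_error / len(measured)          # equal share per angle

adjusted = [angle + correction for angle in measured]
print("closure error:", round(closure_error * 3600, 3), "seconds of arc")
print("adjusted sum :", round(sum(adjusted), 6))     # equals 180.0002
```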

Obviously,  any  computation  could  be  carried  out  and  checked 
even  though  the  original  data  were  assumed  and  far  from  the 
truth,  provided  they  were  not  inconsistent.  But  it  is  not  suf- 
ficient that  the  data  be  consistent;  they  must  be  as  near  the  truth 
as  our  knowledge  permits  if  the  results  are  to  be  of  the  greatest 
value.  Observations  are  made  for  the  purpose  of  securing  infor- 
mation with  precision,  and  the  results  serve  as  a  basis  for  accurate 
computations.  Therefore,  it  is  important  to  so  combine  the  obser- 
vations as  to  give  due  consideration  to  each  one  and  to  obtain  for 
each  quantity  the  best  value  which  the  given  observations  can 
yield,  that  is,  the  value  which  they  indicate  to  be  nearest  the 
truth.  However,  the  time  and  labor  involved  should  not  be 
unreasonable  or  excessive  in  view  of  the  objects  to  be  secured. 

The  process  of  combining  the  various  observations  so  as  to 
obtain  the  best  values  of  the  quantities  concerned  is  called  the 
adjustment  of  the  observations.  The  results  are  referred  to  as  the 
adopted,  adjusted,  or  corrected  values.  The  small  quantities  to  be 
added  algebraically  to  the  observations  to  obtain  these  adjusted 
values are known as the corrections.

4. Errors of Observation. Every observation made in the
process of measurement is likely to be in error from various causes,
that is, the actual reading is not the quantity really sought — is
not what it would be if conditions were ideal and perfection
attainable. Some of these causes are beyond the control of the
observer while others depend entirely upon his skill and
personality. For example, the altitude of a star is measured with
a surveyor's transit. The star appears higher than it really is,
owing to atmospheric refraction. The instrument is never in
perfect adjustment, so that when the star is seen on the horizontal
thread the vertical circle does not show the correct altitude
of the line of sight. Moreover, the observer himself may have the



habit  of  noting  the  time  when  a  star  crosses  a  thread  a  fraction  of  a 
second  too  late.  Then  he,  or  his  recorder,  may  make  a  mistake 
of  a  whole  minute  in  taking  the  time  from  his  watch.  And  finally, 
he  reads  the  vertical  circle  vernier  to  the  nearest  half-minute,  per- 
haps, with  a  possible  error,  therefore,  of  one-fourth  of  a  minute. 

It  is  customary  to  include  the  effects  of  all  influences  such  as 
those  illustrated  in  the  above  example  in  the  term,  errors,  and  to 
classify  them  as  Systematic  or  Constant  Errors,  Mistakes  or 
Blunders,  and  Accidental  Errors  of  Observation. 

5.  Systematic  or  Constant  Errors  occur  in  accordance  with 
fixed  laws  or  are  constant  during  a  set  of  observations  made  under 
unvarying  conditions.  Their  effects  are  eliminated  from  observa- 
tions, as  far  as  our  knowledge  permits,  in  two  ways:  first,  by 
the  application  of  corrections  computed  from  the  known  laws  of 
the  occurrence  of  the  errors;  and  second,  by  making  the  observa- 
tions according  to  a  prearranged  plan  so  that  the  conditions  will 
be  reversed  during  half  of  the  set,  changing  the  signs  of  the  cor- 
responding systematic  errors;  these  therefore  neutralize  those  of 
the  other  half-set  when  the  observations  of  the  whole  set  are  com- 
bined.^ Systematic  errors  are  divided  into  three  classes,  namely, 
Theoretical,  Instrumental,  and  Personal  Errors. 

6.  Theoretical  Errors  conform  to  certain  laws  from  which 
their  effect  upon  observations  made  under  given  conditions  may 
be  computed  and  corresponding  corrections  applied,  as  soon  as 
these  laws  are  known.  Refraction  and  aberration  of  light,  expan- 
sion of  metals  with  rise  of  temperature,  and  dip  of  the  horizon  are 
examples.  The  form  of  a  law  is  usually  determined  theoretically 
but  its  constants  may  result  from  observations.  Theoretical  errors 
are not errors in the sense of being accidents or inaccuracies, but

¹ This arrangement of a program for observing, so as to eliminate
systematic errors, is exceedingly important. Observers and computers should
always be on the lookout for new and unforeseen sources of these errors, as
the observations may not reveal them, and the results, apparently good, may
be erroneous to a surprising degree. The experience of the observer is
invaluable in his study of the conditions under which his observing is done,
with this end in view. As our knowledge of the sources of error increases, so
does our ability to bring the results of observations closer to the truth. (See
Wright and Hayford: Adjustment of Observations, Art. 201.)



rather  are  the  effects  of  certain  influences  which  operate  to  prevent 
the  observer's  seeing  or  reading  directly  the  quantity  which  he  seeks 
in  his  observations.  They  are  included  in  the  classification  and 
study  of  errors  merely  as  a  matter  of  convenience  and  as  a  result 
of  custom. 

7.  Instrumental  Errors  may  be  defined  as  imperfections  in 
the  construction  or  adjustment  of  instruments,  or  the  effects  of 
those  imperfections  upon  observations  made  with  the  instruments. 
Among  these  may  be  mentioned  the  graduation  errors  of  scales 
and  circles,  eccentricity  of  circles,  inequality  of  pivots,  collima- 
tion  error,  and  error  of  runs  in  a  micrometer  microscope.  They 
may  be  determined  by  measurement  and  the  corresponding  correc- 
tions applied  to  the  observations,  or  the  observing  plan  may  be 
such  as  to  eliminate  their  effects. 

8.  Personal  Errors  are  generally  referred  to  as  Personal 
Equation.  They  depend  upon  the  habits  of  the  observer  and 
his  physical  condition.  They  result,  frequently,  from  the  habit  of 
always  setting  the  thread  of  a  telescope  slightly  to  one  side  of  the 
object  sighted,  or  of  always  noting  the  time  or  giving  a  signal  too 
early  or  always  too  late.  No  one  can  hope  to  be  free  from  such  a 
tendency,  and  some  of  the  best  observers  the  world  has  ever  known 
have  had  unusually  large  personal  equations.  Good,  steady 
observers  in  normal  physical  condition  will  have  nearly  constant 
personal  equations,  whether  large  or  small,  and  this  steadiness 
of  habit  is  more  important  than  that  the  error  be  small  in  amount. 
If the observations be differential in character, the personal
equation of the observer may have no effect, if it be constant and
if all the readings be made by him. This, for example, would be
the case in leveling, if the rod-target were always placed too high
or always too low and by the same amount. Similarly, it may not
affect the measurement of angles in triangulation. But if
different observers be involved, the results may be affected by the
sum or difference of their personal equations.

The effect of this error may be eliminated, in some cases, by an
exchange of observers, as in telegraphic longitude determinations;
or, its amount may be determined by special experiments or
apparatus, for each observer, then assumed to be constant and



applied  as  a  correction  to  his  subsequent  observations  of  the  same 
kind  when  made  under  the  same  conditions,  especially  as  regards 
his  personal  comfort  and  health.  However,  the  personal  equation 
of  an  observer  must  not  be  assumed  as  constant  for  any  great 
length  of  time,  and  there  is  always  danger  in  assuming  it  constant 
at  all.  It  is  safer  to  determine  it  at  different  times  and  to  inter- 
polate for  its  value  between  these  results.  Depending  upon 
personal  peculiarities,  it  follows  no  law  and  is  often  the  most 
troublesome  source  of  error  to  which  observations  are  subject. 
Fortunately,  it  is  small  in  amount  in  most  cases. 

9. Mistakes or Blunders are irregular in their occurrence,
obeying no law, and are relatively large in size. They result from
haste and carelessness, frequently, on the part of the observer,
during temporary lapses, perhaps, from his customary vigilance.
He may call out to his recorder one number while reading and
thinking another; he may read the wrong division of a circle or
scale; or he may read a clock wrong by a whole minute while he is
estimating tenths of a second. He may turn the wrong
tangent-screw while repeating angles, the rod-clamp may slip
during leveling, or the wrong object may be sighted in triangulation
or azimuth work. The remedy lies in uninterrupted care on the
part of the observer to avoid these blunders, and watchfulness by
the recorder to detect them in any inconsistencies among the
readings. Herein lies one of the chief virtues of a good recorder.

10. Accidental Errors of Observation, or simply Accidental
Errors, is a name given to a specific class of errors in connection
with the adjustment of observations by the Method of Least
Squares. They are purely errors of observation and have no
relation to systematic errors or the large mistakes already described.
They are small, for the most part, and their presence is indicated
by the discrepancies among a series of readings upon a fixed object
which have been made with the utmost care and precision, with an
instrument which can be read to a greater degree of refinement
than the pointings can be made by the observer. These errors
are never known exactly because the true or correct value of the
quantity observed is never known, as has been explained in a
previous article. Thus it is stated that they are indicated by



the  discrepancies,  not  that  they  are  the  discrepancies  them- 
selves. 

11.  For  example,  suppose  readings  are  made  by  a  skillful 
observer  using  a  micrometer  microscope,  upon  a  graduation  line  or 
scratch  of  a  standard  meter  bar,  the  whole  being  enclosed  in  a 
vault  of  constant  temperature  so  that  conditions  are  steady. 
Further,  suppose  the  observer  to  be  able  to  set  the  parallel  threads 
so  as  to  be  equidistant  from  the  scratch  within  10  microns  ^  and 
that  the  micrometer  reads  directly  to  five  microns  and  by  esti- 
mation to  one  half-micron,  that  is  to  0.0005  millimeter.  The 
readings,  then,  might  run  as  follows,  the  unit  being  one  division 
of  the  micrometer  head  (equal  to  0.005  mm.) : 

   d         d

  46.4      45.9
  45.6      45.3
  46.0      46.1
  46.1      45.8
  45.9      45.8
  46.6      45.2
  46.7      46.1
  45.4      46.8
  46.5      45.1
  45.9      46.3

By  assumption,  the  conditions  are  very  favorable  for  precise  work 
and  the  observer  is  skillful  and  is  using  great  care  in  making  the 
readings;  nevertheless,  there  are  discrepancies  and  the  readings 
have  a  total  range  of  1.7  divisions.  These  are  the  discrepancies 
which  indicate  the  presence  of  accidental  errors  of  observation. 
They  are  so  small  as  to  be  beyond  the  control  of  the  observer,  as  he 
is  assumed  to  make  each  separate  pointing  as  carefully  as  he  can. 
It  may  be  noted,  also,  that  most  of  the  discrepancy  is  due  to  the 
errors  of  pointing,  that  is,  setting  the  threads  on  the  mark,  as 
the  estimation  of  tenths  of  a  division  of  the  head  would  seldom 
be  in  error  by  a  whole  tenth. 
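A short Python sketch, not from the original text, works through the twenty readings tabulated above: it computes their mean, the total range of 1.7 divisions mentioned in the text, and the residuals whose scatter reflects the accidental errors of pointing.

```python
# A minimal sketch, not from the original text: the twenty micrometer
# readings tabulated above, their mean, their total range (1.7 divisions,
# as stated in the text), and the residuals, mean minus reading, whose
# scatter indicates the accidental errors of pointing.

readings = [46.4, 45.6, 46.0, 46.1, 45.9, 46.6, 46.7, 45.4, 46.5, 45.9,
            45.9, 45.3, 46.1, 45.8, 45.8, 45.2, 46.1, 46.8, 45.1, 46.3]

mean = sum(readings) / len(readings)
residuals = [round(mean - r, 2) for r in readings]

print("mean reading:", round(mean, 2), "divisions")
print("total range :", round(max(readings) - min(readings), 1), "divisions")
print("residuals   :", residuals)
```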

12.  In  other  examples,  the  discrepancies  might  be  made  up  of 
accidental  errors  of  several  different  kinds,  such  as  pointing  the 

¹ A micron is one one-thousandth of a millimeter or one one-millionth of
a meter. It is the unit used in very precise measurements of length by
means of micrometer microscopes.



telescope  upon  a  signal,  setting  the  threads  of  the  microscope 
upon  a  division  of  the  circle,  and  reading  the  micrometer  head. 
Other  sources  of  errors  which  may  have  the  nature  of  accidental 
errors  are  the  unsteadiness  of  the  atmosphere  and  that  of  instru- 
ment supports,  and  rapid  changes  of  temperature.  However,  the 
foregoing  example  of  simple,  direct  readings  is  a  clear  illustration 
of  the  occurrence  of  accidental  errors  without  complication.  If 
the  micrometer  head  had  been  graduated  directly  into  one  thou- 
sand parts  instead  of  one  hundred,  to  be  read  with  a  magnifier, 
the  error  of  estimation  in  reading  it  would  have  disappeared  and 
the  discrepancies  might  have  been  ascribed  entirely  to  the  acci- 
dental errors  of  setting  the  threads  upon  the  division  on  the 
circle, — the  simplest  kind  of  a  case. 

13. Accidental Errors, only, Considered in Adjustments. It
has been shown that the effects of systematic errors are eliminated
by corrections or by the observing program, as far as they are
known to exist; and that the mistakes, or blunders, are avoided
by the exercise of care and vigilance, as much as possible. Of all
the kinds of errors, then, there remain the accidental ones, still
affecting the observations, and it is to minimize the effects of
these errors that adjustments are made. In all that follows in
this work, therefore, only this special class, the accidental errors,
will be considered, except as others may be specifically mentioned.

14. Assumption of the Arithmetic Mean.¹ When each
observation or reading has been made with the same care and under
the same conditions as all the others of a set made upon a certain
quantity, so that all are of equal value, or weight, there is no
reason for preferring any one to any other; the mean, or average,
of them all must be regarded, then, as the best value of the observed
quantity which can be obtained from the given set of observations.
The soundness of this principle is so evident that it is adopted as
the fundamental assumption in developing the theory of the
adjustment of observations. The mean should be regarded, not
as the true value of the observed quantity, but rather as the

¹ The word mean, in this work, is understood to refer to the arithmetic
mean, or average, in every case. The geometric mean is the square root of
the product of two quantities.



nearest  approximation  to  it  that  the  given  observations  will  yield, 
and  subject  to  improvement  if  other  or  better  observations  should 
become  available. 

15.  Residuals  (v).  The  difference  between  an  observed  value 
of  a  quantity  and  the  adopted  one  is  known  as  the  residual  of  that 
observation.  It  should  be  taken  in  the  sense,  adopted  minus 
observed,  for  consistency  in  sign.  If  the  adopted  value,  the  mean, 
for  example,  be  the  nearest  approximation  to  the  truth,  then  the 
residuals  obtained  with  that  value  would  be  the  nearest  approxi- 
mations to  the  actual  or  true  errors  of  the  observations,  to  the 
extent  of  our  knowledge.  The  occurrence  and  behavior  of  the 
residuals,  then,  will  be  our  best  indication  as  to  the  occurrence  of 
the  true  errors.  In  fact,  we  may  reasonably  assume  that  the 
errors  and  the  residuals  conform  to  the  same  laws.  In  the  inves- 
tigation of  such  laws,  therefore,  it  may  be  convenient,  some- 
times, to  use  the  terms  somewhat  indiscriminately, — to  use  the 
word  error  when  residual  is  intended. 

16. Regularity in the Occurrence of Accidental Errors. At
first thought, it may seem strange that there should be any
method at all in the occurrence of errors which are so small and so
evidently the result of accident or inaccuracy. However, it has
been found from a large number of investigations of observations
of almost every conceivable sort, that these errors occur not only
with regularity but in conformity to a definite law, of which the
general form is the same for all kinds of observations. This law
of the occurrence of errors, or Law of Error, as it is called, is
expressed in the form of an equation which has been completely
derived, and tested, later, in a multitude of cases, with entire
satisfaction. In accordance with this law of error, the Method of
Least Squares has been devised and demonstrated for the
adjustment of observations.

17. Curve of Error. As an example, let us consider a large set
of direct observations, say 500 of them, such as the readings of
the micrometer microscope in the example of Art. 11, page 8.
By taking the mean of the entire series and subtracting from it
each separate reading, the residuals are obtained. Counting the
residuals of each size and sign, we find that there are 36 of +0.1,



35  of  -0.1,  33  of  +0.2,  34  of  -0.2,  etc.,  the  sum  of  all  the  numbers 
being,  of  course,  500.  These  results  are  plotted  as  rectangular 
coordinates,  the  magnitude  of  the  error  on  the  horizontal  axis, 
plus  on  the  right  and  minus  on  the  left  of  the  origin,  and  the 
corresponding  number  of  errors  of  that  size  on  the  vertical  axis, 
upward.  Thus  one  point  is  plotted  for  each  size  of  error,  and  for 
each  sign.  A  smooth  curve  is  then  drawn  so  as  to  follow  the 
points  as  closely  as  possible,  with  the  result  shown  in  Fig.  1 : 

[Fig. 1. Occurrence of Errors or Residuals. The curve is plotted with the
size of the error on the horizontal axis and the number of errors of that
size on the vertical axis.]


The  form  of  the  curve  is  typical  of  all  those  constructed  from 
observations  in  this  manner,  and  it  is  called  the  Curve  of  Error, 
or  the  Curve  of  Probability  of  Error,  since  an  ordinate  to  the  curve 
may  represent  the  probability  ^  of  the  occurrence  of  an  error  as 
well  as  the  number  of  times  that  error  occurs. 
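The tally just described is easy to mechanize. The Python sketch below is mine, not the author's; since the 500 readings themselves are not printed in the book, it generates a synthetic set of normally distributed readings (an assumption made purely for illustration) and counts the residuals of each size and sign, which are the pairs one would plot to obtain a curve of the general form of Fig. 1.

```python
# A minimal sketch, not from the original text: tallying residuals by size
# and sign as described above.  The 500 readings are not reproduced in the
# book, so a synthetic, normally distributed set (an assumption made purely
# for illustration) stands in for them here.

import random
from collections import Counter

random.seed(1)
readings = [46.0 + random.gauss(0.0, 0.25) for _ in range(500)]

mean = sum(readings) / len(readings)
# Residual = mean minus observation, to the nearest tenth of a division.
residuals = [round(mean - r, 1) for r in readings]

tally = Counter(residuals)
for size in sorted(tally):
    # One point per size and sign of error; plotting these pairs gives a
    # curve of the same general form as Fig. 1.
    print(f"{size:+.1f}  {tally[size]:3d}")
```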

18.  Assumptions  as  to  the  Occurrence  of  Errors.  The  error 
curve  has  three  properties  which  are  evident  from  inspection: 
first,  it  is  symmetrical  about  the  vertical  axis;  second,  it  has  a 
maximum  point  where  it  crosses  that  axis;  and  third,  it  approaches 
the  horizontal  axis  so  gradually  as  to  appear  asymptotic.  Gener- 
alizing from  these  properties,  the  following  assumptions,  or  axioms, 
are obtained, as to the occurrence of errors in any large set of
observations : 

1.  Positive   and    negative  errors  of  the  same  magnitude 
occur  with  equal  frequency;  they  are  equally  probable. 

2.  Small  errors  occur  more  frequently,  or  are  more  prob- 
able, than  large  ones. 

¹ The probability of an event is directly proportional to the number of
times it occurs. See Appendix B, for Principles of Probability.



3.  Very  large  errors  seldom  occur;  they  are  likely  to 
belong  in  the  class  of  mistakes  rather  than  that  of  accidental 
errors. 

It  should  be  remembered  that  the  number  of  observations  in  a  set 
is  assumed  to  be  large.  The  smaller  the  set,  the  less  closely  will 
the  residuals  conform  to  the  ideal  conditions,  such  as  that  of  the 
first  of  these  assumptions,  but  even  in  a  small  set  they  will  approx- 
imate to  their  ideal  occurrence.  Obviously,  the  larger  the  number 
of  observations,  the  more  closely  should  the  mean  approach  the 
true  value  of  the  quantity  observed,  in  so  far  as  the  accidental 
errors  are  concerned. 

19. Law of Error. The general equation of the error curve, or
curve of probability, may be derived¹ from the assumptions of the
last article together with the principle of the mean (Art. 14, page 9).
The curve is seen to be continuous, and it is of special importance
to note that the number of errors, or the probability of an error,
is a function of the size of the error. The algebraic principles of
probability, also, are involved in the derivation. The resulting
Law of Probability of Error may be stated thus:

    p = (h/√π) e^(−h²Δ²)                                        (1)

in which p is the probability of an error Δ in a set of observations
for which h is a computed constant; e is the base of natural
logarithms; and π = 3.1416+. The constant h has the value
1/(ε√2), in which ε is a constant for each separate set of observations,²
and serves to change the general equation into a specific one for
the particular set of observations under consideration.
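The law just stated is easy to evaluate numerically. The short Python sketch below is not from the original text; it computes the ordinate of the error curve from the formula as reconstructed above, taking h = 1/(ε√2), where ε (the mean square error of a single observation, defined in Chapter VIII) is given an assumed value purely for illustration.

```python
# A minimal sketch, not from the original text: evaluating the Law of Error
# as reconstructed above, p = (h / sqrt(pi)) * exp(-(h * delta)**2), with
# h = 1 / (epsilon * sqrt(2)).  Here epsilon, the mean square error of a
# single observation (defined in Chapter VIII), is given an assumed value
# purely for illustration.

import math

def law_of_error(delta, epsilon):
    """Ordinate of the error curve for an error of size delta."""
    h = 1.0 / (epsilon * math.sqrt(2.0))
    return (h / math.sqrt(math.pi)) * math.exp(-(h * delta) ** 2)

# The curve is highest at delta = 0 and symmetrical in + and - errors:
for delta in (-0.50, -0.25, 0.00, 0.25, 0.50):
    print(f"{delta:+.2f}  {law_of_error(delta, epsilon=0.25):.4f}")
```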

20. Tests of the Law of Error. The law may be tested by
applying it to many different kinds of observations so as to
ascertain whether the residuals occur in conformity with it.
Conversely, if the law be accepted as applicable to all observations,

¹ This derivation may be found in Appendix C.

² ε will be defined farther on as the mean square error of a single
observation. Its value, for a given set of observations, depends upon their
precision, and is determined from the residuals.



the  quality  of  a  given  set  could  be  tested  by  the  same  method. 
In  general,  then,  it  is  a  process  of  comparing  theoretical  results 
with  observed  ones,  or  theory  with  practice.  The  method  consists 
of  the  comparison  of  the  number  of  residuals,  in  the  given  series, 
which  lie  between  certain  limits,  with  the  number  of  errors  which 
ought  to  lie  between  those  limits  according  to  the  Law  of  Error. 
For example, ε having been computed for the given observations,
the  probability  of  an  error  between  0.00  and  0.30,  say,  is  deter- 
mined by  integration  and  substitution  in  the  equation  (1).^ 
(It  will  always  be  less  than  unity,  from  the  principles  of  prob- 
ability.) Multiplying  the  total  number  of  observations  in  the  set 
by  this  probability  gives  the  number  of  errors  which  ought  to  lie 
between  the  assumed  limits  according  to  the  law.  The  residuals 
which  actually  lie  between  those  limits  may  then  be  counted 
and  their  number  compared  with  that  obtained  from  the  formula. 
Crandall gives an example of a small set of 18 observations of
an angle, with the following results, ε being 1.66″.


                        Number of Errors Less Than

                        0.5″     1″      2″      3″      4″

From theory              4.3     8.1    13.8    16.8    17.8
By actual count          6       8      14      17      17

This agreement is very satisfactory but might be much closer in a
larger set of observations.
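The theoretical row of this table can be checked directly. In the Python sketch below (my own check, not Crandall's or the author's computation), the probability of an error smaller than a given limit is obtained by integrating the Law of Error, which in terms of the standard error function is erf(limit/(ε√2)); multiplying by the 18 observations reproduces the theoretical counts closely, the small remaining differences presumably coming from rounding in the original tables.

```python
# A minimal sketch, not from the original text: checking the "From theory"
# row of the table above.  Integrating the Law of Error from -limit to
# +limit gives the probability of an error numerically smaller than the
# limit; in terms of the standard error function this is
# erf(limit / (epsilon * sqrt(2))).  With epsilon = 1.66" and n = 18, the
# products n * probability come out close to 4.3, 8.1, 13.8, 16.8, 17.8;
# small differences are presumably due to rounding in the original tables.

import math

epsilon = 1.66     # mean square error of one observation, in seconds of arc
n = 18             # number of observations of the angle

for limit in (0.5, 1.0, 2.0, 3.0, 4.0):
    probability = math.erf(limit / (epsilon * math.sqrt(2.0)))
    print(f'errors less than {limit:.1f}": {n * probability:5.1f} expected')
```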

21. Method of Least Squares. The most probable value of the
observed quantity obtainable from a given set of observations will
be the one corresponding to the most probable set of errors or of
residuals. Consider a set of n observations of equal precision, of
which the most probable errors are Δ₁, Δ₂, Δ₃, . . . Δₙ, respectively.
Since the probability of the simultaneous occurrence of several
events in a series is the product of their separate probabilities,²
and the probability of an error, Δ, is, from (1):

¹ The use of the equation is facilitated by its transformation into series
from which tables have been computed. See Appendices C and F.
² See Appendix B.




    p = (h/√π) e^(−h²Δ²)

it follows that the probability of the simultaneous occurrence of the
errors Δ₁, Δ₂, Δ₃, . . . Δₙ, will be

    P = (h/√π) e^(−h²Δ₁²) × (h/√π) e^(−h²Δ₂²) × . . . × (h/√π) e^(−h²Δₙ²)

that is

    P = (h/√π)ⁿ e^(−h²(Δ₁² + Δ₂² + . . . + Δₙ²))                  (2)

But since these errors are to be the most probable ones, P must
have its maximum value. As h, π, n, and e are constant in a given
problem, and the exponent of e is always negative, the expression
will be a maximum when the exponent of e is a maximum,
algebraically, that is, when the sum

    Δ₁² + Δ₂² + Δ₃² + . . . + Δₙ²  is a minimum                    (3)

Thus,  the  most  probable  value  of  the  observed  quantity,  or  the 
best  value,  in  other  words,  obtainable  from  the  given  set  of  obser- 
vations, will  be  the  one  for  which  the  sum  of  the  squares  of  the 
errors,  or  of  the  residuals,  likewise,  is  a  minimum.  This  is  called 
the  Principle  of  Least  Squares  and  the  method  which  is  based 
upon  it,  for  the  adjustment  of  observations,  is  known  as  the 
Method  of  Least  Squares.  It  was  first  published  by  Legendre,  in 
1806,  although  used  by  Gauss  as  early  as  1794. ^  In  the  general 
case,  involving  the  determination  of  several  quantities,  and  obser- 
vations of  unequal  weight,  it  provides  that  the  most  probable 
values  of  the  unknown  quantities  will  be  those  for  which  the  sum 
of  the  weighted  squares  of  the  residuals  is  a  minimum.  This 
form  will  be  discussed  later  (Art.  34). 

22. Number of Observations. In the development of the
Method of Least Squares, it is assumed that the number of
observations is large. The assumptions as to the occurrence of errors
approach the truth more closely as the number of the errors
increases. However, if the method be applied to small sets of

¹ See Appendix A.



observations,  although  the  results  may  be  farther  from  the  correct 
values,  still  they  may  be  regarded  as  the  best  ones  obtainable 
from  the  given  observations,  which  is  sufficient  warrant  for  the 
use  of  the  method  under  such  unfavorable  conditions.  It  is 
unreasonable  to  generalize  too  greatly  from  a  very  small  set  of 
residuals,  as  to  the  precision  of  a  result,  but  it  is  still  permissible 
to  take  the  mean  of  even  two  observations,  if  they  be  the  only 
available  data. 

Regarding  the  number  of  observations,  it  must  be  remembered 
that  no  adjustment  is  possible  unless  there  are  more  observations 
than  unknown  quantities.  If  the  number  be  less,  the  unknowns 
cannot  be  determined  without  additional  information  or  assump- 
tion. If  the  number  be  equal  to  that  of  the  unknowns,  there  is 
only  one  solution,  namely,  the  rigid,  algebraic  one  by  means  of 
simultaneous  equations. 

23. Two Uses of Least Squares. The Method of Least Squares
is essentially a practical subject, being devoted to the solution of
numerical problems. Its applications may be divided into two
classes: first, the determination of the best values of the unknown
quantities obtainable from given observations, that is, the
adjustment of observations; and second, the investigation of the
precision of the observations and the results, and the influence of
errors upon them. Those two uses of the method are quite
independent; most problems require adjustment, but the precision
may not be investigated at all unless the results are to be compared
with those of other observations. In this treatment of the subject,
therefore, immediate attention will be given to the adjustment
of the various kinds of observations, but the determination
of the precision of the results will be postponed to a later chapter.¹

24.  Classification  of  Problems.  In  the  following  chapters, 
the  typical  problems  are  such  as  the  engineer  frequently  encounters 
in  field  work.  A  certain  method  of  solution  is  adopted  for  each 
type.  The  adjustment  of  the  three  great  classes  of  observations 
is taken up in the usual order, namely:

1  Chapter  VIII. 



Direct  Observations  of  One  Quantity, 

Indirect  Observations,  of  a  Function  of  the  Unknown  Quan- 
tities, and 

Observations  of  Conditioned  Quantities. 
Following  these,  the  investigation  of  precision  and  the  propagation 
of  error  will  be  explained.  It  is  important  that  the  student  become 
familiar  with  the  characteristics  of  these  classes  of  problems  and 
with  the  method  of  solution  of  each  type.  The  special  problem 
of  the  derivation  of  empirical  formulas  and  constants  will  be 
treated  in  a  separate  chapter.^ 

'  Chapter  VII. 


CHAPTER   II 
DIRECT  OBSERVATIONS  OF  ONE  QUANTITY 

25.  Direct  Observations:  Readings.  In  their  simplest  form, 
direct  observations  consist  of  single  readings  made  upon  various 
kinds  of  apparatus  used  in  measurements,  such  as  scales,  circles, 
micrometers,  and  timepieces.  The  example  in  Art.  11,  of  microm- 
eter readings  upon  a  fixed  scale,  is  typical  of  this  class.  The 
conditions  under  which  the  readings  are  made  are  assumed  to  be 
constant  or  to  vary  according  to  a  known  law  so  that  the  discrep- 
ancies among the readings may be reduced to the accidental errors
of  pointing  or  setting  the  instrument  and  of  reading. 

Usually,  however,  the  conditions  are  more  complex  and  involve 
several  sources  of  error.  In  the  example  just  cited,  the  tempera- 
ture may  vary,  causing  the  position  of  the  division  line  on  the 
scale  to  change.  Then  if  the  temperature  be  read  from  a  mer- 
curial thermometer  simultaneously  with  the  micrometer  readings, 
two  corresponding  sets  of  direct  readings  are  obtained.  Also, 
when  the  altitude  of  a  star  is  observed  for  time  and  azimuth, 
each  pointing  on  the  star  may  be  attended  by  readings  of  the 
watch  and  the  horizontal  and  vertical  circles,  so  that  three 
simultaneous  sets  of  direct  readings  result. 

26. Observations Resulting from a Combination of
Readings. It frequently happens, on the other hand, that
the so-called observed quantity is the result of two or more
separate readings of the same kind. For example, in the
measurement of angles by repetition, a single observation is
obtained by subtracting the initial reading from the final one
and dividing the difference by the number of repetitions, in
the case of the direct measure of the angle itself and also, of
the reversed measure of its explement, the mean of the two



results  being  taken.^  In  the  measurement  of  a  base  line,  each 
observed  length  is  the  sum  of  several  tape-lengths;  the  elemental 
observations  consisting  of  placing  the  rear  scratch  on  the  tape  in 
contact  with  a  scratch  on  a  marking-plate  and  of  making  a  mark 
on  a  plate  opposite  the  scratch  at  the  forward  end  of  the  tape. 
Similarly,  the  observed  difference  of  elevation  between  two 
benchmarks  consists  of  the  algebraic  sum  of  a  series  of  fore-  and 
back-sight  readings  of  the  rod.  It  is  customary,  in  all  such 
cases,  to  consider  the  result  of  a  single  complete  measurement  to 
be  the  observed  quantity,  even  though  it  consist  of  a  combination 
of  separate  readings.  In  its  general  sense,  therefore,  the  term, 
direct  observation,  may  be  taken  to  mean  a  single  measurement 
of the quantity desired.

27. The Mean. The adjustment of direct observations of a
single quantity consists in taking their mean as the best, or most
probable, value obtainable from the given observations, as
explained in Art. 14. That this is in accordance with the principle
of least squares, may be shown as follows:

Let M₁, M₂, . . . Mₙ represent a series of observed values;
x₀, the best value of the observed quantity; and v₁, v₂, . . . vₙ,
the corresponding residuals, n being the number of observations.
Then, for each observation there results an observation equation,
thus:
                    x₀ − M₁ = v₁
                    x₀ − M₂ = v₂
                    . . . . . . . .                              (4)
                    x₀ − Mₙ = vₙ

Squaring both members of each equation and adding the resulting
equations, we obtain,²

    (x₀ − M₁)² + (x₀ − M₂)² + . . . + (x₀ − Mₙ)² = [v²]          (5)

¹ A simpler method is to subtract from the reading on the right-hand object the mean of the two readings on the left-hand object (first and last readings) and divide the difference by the number of repetitions. The result is the same.

² The square brackets, [ ], indicate the sum of all such terms as are included by them. Thus, [v²] represents the sum of the squares of all the v's, that is, v1² + v2² + v3² + . . . + vn². The symbol, Σ, may be used to indicate summation in the same manner.




According to the principle of least squares, the sum of the squares of the residuals, that is, [v²], is to be a minimum. Therefore, we differentiate the left-hand member and place the first derivative equal to zero; whence, after dividing by two, we have:

    (x0 − M1) + (x0 − M2) + . . . + (x0 − Mn) = 0        (6)

    nx0 − (M1 + M2 + . . . + Mn) = 0,

and

    x0 = [M] / n        (7)

That is, the best value of the observed quantity, the one for which the sum of the squares of the residuals is a minimum, is the mean.
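As a minimal numerical sketch of this principle (in the Python language, with illustrative readings), the mean makes the sum of the residuals vanish and gives a smaller [v²] than any other trial value:

    # Sketch: the arithmetic mean minimizes the sum of the squared residuals.
    observations = [4.2, 4.5, 4.3, 4.6, 4.4]        # illustrative readings
    x0 = sum(observations) / len(observations)      # the mean, equation (7)
    residuals = [x0 - M for M in observations]      # v = x0 - M, equation (4)

    sum_v = sum(residuals)                          # practically zero
    sum_vv = sum(v * v for v in residuals)          # [v^2], equation (5)

    # Any other trial value gives a larger sum of squares.
    trial = x0 + 0.1
    sum_vv_trial = sum((trial - M) ** 2 for M in observations)
    assert sum_vv_trial > sum_vv
    print(x0, sum_v, sum_vv)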

28. Computation of the Mean. Owing to the close agreement of the observations of which the mean is to be taken, it is possible, often, to abridge the numerical work by the assumption of an approximate value of the mean. Suppose the mean of the following 16 quantities to be desired:

            M          +v   −v          M      +v   −v
    1463.49768              4           64      0
         49754        10                58      6
         49763         1                66           2
         49765              1           63      1
         49759         5                65           1
         49771              7           60      4
         49767              3           67           3
         49765              1           70           6
                      ---  ---                 ---  ---
                       16   16                  11   12

    Assumed approx. value, 60;   Sum, +65;   Mean, +4;   Remainder, +1
    Mean, 1463.49764;   [v] = (16 + 11) − (16 + 12) = −1


In the first place, it is evident from inspection that all but the last two figures are the same in all the quantities, so that there is no need of writing them repeatedly. It is sufficient to write the first number in full and thereafter only the last two figures. Sometimes, in a long column of numbers, the constant part will change suddenly to another one, in which case it is well to write the last as well as the first of each series in full.



Also,  by  inspection,  it  will  be  seen  that  the  next  to  the  last 
figure  in  the  mean  will  probably  be  6.  Thus  we  may  take  60  as 
an  approximate  value  of  the  last  two  places,  calling  54,  for  exam- 
ple, —  6,  and  71,  +11.  Then,  adding  mentally  the  figures  in  the 
last  place  with  60  as  a  basis,  we  obtain  the  sum,  +65,  and  the 
mean,  +4,  so  that  the  full  mean  is  1463.49760+4  in  the  last 
place, or 1463.49764. This process may be simplified still more by combining a 5 and a 7 in the next to the last place, as their mean is 6, without modifying their last figures. Thus in the above example, 59 and 71 would be added directly as 10 instead of −1 and +11.¹

29.  Control  or  Check  of  the  Mean.  Substituting  equations 
(4)  in  (6)  of  Art.  27, 

Vi+V2-\-V3+  .  .  .  -\-Vn  =  0         or,         [v]  =  0  (8) 

That  is,  the  sum  of  the  residuals  should  be  zero,  or  the  sum  of  the 
positive  residuals  should  be  equal  to  that  of  the  negative  ones. 
This check is very important and should be used whenever practicable. It will be satisfied rigidly unless there is a remainder when the sum of the observations is divided by their number to
obtain  the  mean.  In  this  case,  the  check  fails  by  just  the  amount 
of  the  remainder  but  with  the  opposite  sign,  so  that  the  mean  is 
verified,  nevertheless.  In  the  example  in  the  preceding  article, 
the  sum  of  the  residuals  is  —  1,  and  the  remainder  in  taking  the 
mean  is  +1,  so  that  the  mean  was  correctly  computed. 
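A minimal Python sketch of this abridged computation and of its check, using the last two figures of the sixteen readings of the example in Art. 28:

    # Sketch: the mean by an assumed approximate value, with the check of Art. 29.
    last_two_figures = [68, 54, 63, 65, 59, 71, 67, 65,
                        64, 58, 66, 63, 65, 60, 67, 70]   # from the example
    assumed = 60                                          # assumed approximate value
    deviations = [m - assumed for m in last_two_figures]

    total = sum(deviations)                  # +65
    n = len(last_two_figures)                # 16
    mean_dev, remainder = divmod(total, n)   # mean +4, remainder +1
    mean = assumed + mean_dev                # 64, that is, 1463.49764

    residuals = [mean - m for m in last_two_figures]
    # The sum of the residuals fails to be zero by the remainder, with opposite sign.
    assert sum(residuals) == -remainder
    print(mean, total, mean_dev, remainder, sum(residuals))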

30. Weighted Observations. Thus far, we have considered only observations of equal quality or precision. In the general case, however, one observation of a series may be better than another, for some reason, and entitled to have a greater influence upon the result. When all of the observations of a set are not of the same quality or worth, they are called weighted observations, or are said to be of unequal weight.

31.  Definition  of  Weight  (w).  By  the  weight  of  an  observation 
is  meant   its   relative   value   among   the   others   of  a  set.     It    is 

¹ The beginner will do well to learn to add mentally by combinations of two or three figures at once, particularly those whose sum is 10, as 6 and 4, 7 and 3, or 5, 2, and 3, even though another figure intervenes.



expressed as a number, and being strictly relative, may be multiplied by any factor so long as all the others in the set are multiplied by the same quantity. Thus, the weights may be integral or fractional. If one observation has a weight of 3 and another a weight of unity, the first may be considered as the mean of three observations of the same size, each of which has the weight unity. The weights could be stated as 6 and 2, as 1 and 1/3, or as 0.153 and 0.051, as well as 3 and 1.

32.  Sources  of  Weights.  Either  the  observer  or  the  com- 
puter may  assign  the  weights  to  the  observations  and  it  is  largely  a 
matter  of  judgment.  If  the  observer  assigns  them,  during  the 
observing,  he  has  the  right  to  do  it  by  estimation  or  arbitrarily. 
For  instance,  in  the  measurement  of  angles  in  triangulation,  the 
atmosphere  may  be  so  unsteady  during  one  observation  that  he 
will  give  to  that  particular  result  a  weight  of  one-half  that  of  the 
others.  Or  he  may  note  in  his  record  the  fact  that  the  atmosphere 
was  very  unsteady  at  that  time,  and  leave  to  the  computer,  in  the 
office or at headquarters, the duty of assigning a low weight to
that  observation,  when  making  the  adjustment.  Of  course,  the 
computer  might  give  it  a  weight  of  0.8  instead  of  0.5,  and  thus 
change  the  result  somewhat.  Or,  an  arbitrary  rule  might  be 
agreed  upon  so  that  both  would  assign  the  same  weight  under  the 
same  circumstances.  Similarly,  two  benchmarks  may  be  con- 
nected by  two  lines  of  levels  giving  discordant  results.  If  one 
run  were  made  during  a  high  wind  or  with  a  careless  rodman,  it 
might  be  given  a  lower  weight  than  the  other. 

In  the  second  place,  weights  may  be  assigned  upon  the  number  of 
observations,  as  a  basis.  If  one  measurement  of  an  angle  be  made 
with  three  repetitions  and  another  with  six,  the  second  may  be 
given  twice  the  weight  of  the  first. 

Finally,¹ the assignment of weights may be governed by theory. In the determination of time by transits of stars across the meridian, the motion of a star near the equator will be more rapid than that of one of greater declination, and the rapidly moving one can be observed more accurately than the other. Therefore, a system

¹ For the determination of weights from mean square errors, see Art. 136, Chapter VIII, Combination of Computed Quantities.




of  weights  has  been  devised  which  depends  upon  the  declinations 
of  the  stars. 

33. The Weighted Mean. The best value of the observed
quantity  which  is  obtainable  from  a  given  series  of  weighted  obser- 
vations is  known  as  the  Weighted  Mean.  To  determine  it,  each 
observation  is  multiplied  by  its  weight  and  the  sum  of  these 
products  is  divided  by  the  sum  of  the  weights.  The  analogy  of 
this  process  to  the  determination  of  the  simple  mean  will  be 
evident  from  an  example. 

Let  it  be  required  to  adjust  the  following  set  of  four  weighted 
observations  of  an  angle,  the  weights,  w,  being  determined  from 
the  number  of  repetitions  and  the  notes  as  to  weather  conditions: 


            M          w    M − 42.00,         +v              −v         w(M−42.00)   +wv    −wv
                            written w times
    73° 18′ 42.16″     3    +.16 (3 times)                  .09 (3 times)     +.48              .27
            41.96      2    −.04 (2 times)     .11 (2 times)                  −.08       .22
            41.70      2    −.30 (2 times)     .37 (2 times)                  −.60       .74
            42.23      4    +.23 (4 times)                  .16 (4 times)     +.92              .64
                     ----                     -----         -----           ------     ----   ----
                      11    Sum, +.72          .96           .91             +.72       .96    .91

    Use 42.00 as approx. value:  +0.72 ÷ 11 gives mean +0.07, that is, 42.07; remainder, −.05.
    Sum of weighted residuals, +.96 − .91 = +.05.
    (Written as 2.16, 1.96, 1.70, and 2.23, that is, less 73° 18′ 40″, the observations give the sum 22.72 and the mean 22.72 ÷ 11 = 2.07, or 42.07 as before.)


By  writing  each  observation  a  number  of  times  equal  to  its 
weight,  and  by  using  42.00"  as  an  assumed  or  approximate  value 
of  the  mean,  the  third  column  is  obtained.  According  to  the 
definition  in  Art.  31,  this  reduces  all  the  quantities  in  these 
columns  to  the  same,  unit  weight,  and  their  number  is  the 
sum  of  the  weights.  Therefore,  their  mean  is  the  best  value, 
and by the methods of Art. 28, this is 42.07″, with residuals shown in the columns headed +v and −v. The mean is checked by its remainder, −5, against the sum of the residuals, +5.

It  is  evident  that  instead  of  writing  the  first  observation  three 
times  in  the  third  and  fourth  columns,  it  will  be  easier  to  multiply 



it  by  three  and  write  the  product,  and  similarly  with  the  other 
observations  and  their  weights;  the  sums  would  be  unchanged. 
Likewise,  the  residuals  may  be  multiplied  by  the  weights  of  the 
corresponding  observations  and  the  products  noted  instead  of 
writing  all  the  separate  residuals.  Thus,  the  last  three  columns 
are  obtained,  giving  the  same  results  as  the  preceding  ones.  The 
following  rule,  therefore,  is  given  for  the  adjustment  of  direct 
observations  of  unequal  weight,  whether  the  weights  be  integral 
or  fractional:  Multiply  each  observation  by  its  weight  and  divide 
the  sum  of  the  products  by  the  sum  of  the  weights,  to  obtain  the 
weighted  mean;  and  multiply  each  residual  by  the  weight  of  the 
corresponding  observation,  adding  the  products  algebraically  to 
obtain  the  sum  of  the  weighted  residuals. 
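The rule, and the check of Art. 35 below, may be put in a minimal Python sketch; the observations and weights are those of the example of the preceding article:

    # Sketch: weighted mean of direct observations, with the check [wv] = -remainder.
    observations = [42.16, 41.96, 41.70, 42.23]    # seconds, from the example
    weights      = [3, 2, 2, 4]

    approx = 42.00                                 # assumed approximate value
    sum_w  = sum(weights)                                                  # [w] = 11
    sum_wd = sum(w * (m - approx) for w, m in zip(weights, observations))  # +0.72

    mean = round(approx + sum_wd / sum_w, 2)       # weighted mean, 42.07
    remainder = sum_wd - sum_w * (mean - approx)   # -0.05

    weighted_residuals = sum(w * (mean - m) for w, m in zip(weights, observations))
    print(mean, round(weighted_residuals, 2), round(remainder, 2))   # 42.07  0.05  -0.05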

34. Principle of Least Squares for Weighted Observations. Let M1, M2, M3, . . . Mn represent a set of n observations having the respective weights, w1, w2, w3, . . . wn, and let x0 be the best value of the observed quantity, with v1, v2, v3, . . . vn as the corresponding residuals.¹ Considering each observation of weight w to be the mean of w equal observations of weight unity, the residual of each of these latter observations would be the same as that of the original one, but there would be w of them. As stated in Art. 21, for the best value of the observed quantity, in the case of equal weights, the sum of the squares of the residuals will be a minimum. Therefore, to express this minimum for weighted observations, each residual must be written a number of times, w, equal to the weight of its observation. Thus,

    (v1² + v1² + v1² + . . . to w1 terms) + (v2² + v2² + v2² + . . . to w2 terms)
        + . . . + (vn² + vn² + vn² + . . . to wn terms) is to be a minimum;

that is,

    w1v1² + w2v2² + . . . + wnvn²  must be a minimum        (9)

or the sum of the weighted squares of the residuals must be a minimum. Substituting for each v in (9) its value, x0 − M, with the corresponding subscripts,

    w1(x0 − M1)² + w2(x0 − M2)² + . . . + wn(x0 − Mn)²  is to be a minimum.

¹ Reference to the numerical example of the preceding article will be of assistance in following these steps.



Differentiating this expression and placing the first derivative equal to zero, for the minimum, we have, after canceling the factor 2:

    w1(x0 − M1) + w2(x0 − M2) + . . . + wn(x0 − Mn) = 0        (10)

Combining terms,

    [w]x0 − (w1M1 + w2M2 + w3M3 + . . . + wnMn) = 0

and

    x0 = [wM] / [w]        (11)

that is, the best value of the observed quantity, for which the sum of the weighted squares of the residuals is a minimum, is the weighted mean, obtained by multiplying each observation by its weight and dividing the sum of the products by the sum of the weights.

35. Control or Check of the Weighted Mean. If in (10), above, v be substituted for x0 − M, we have

    w1v1 + w2v2 + w3v3 + . . . + wnvn = 0        (12)

or, the sum of the weighted residuals should equal zero. As was the case, however, in the control of the simple mean, the actual sum of the weighted residuals should equal the remainder obtained with the weighted mean but with the opposite sign. This is illustrated in the example of Art. 33.

36. Weighted Mean of Two Quantities. The solution of this special case is particularly convenient and instructive. With the usual notation, let M1 and M2 be the two given quantities, whose weights are w1 and w2 respectively, and let x0 be their weighted mean. Then from (11),

    x0 = (w1M1 + w2M2) / (w1 + w2)        (13)

Adding and subtracting w2M1 from the numerator,

    x0 = (w1M1 + w2M2 + w2M1 − w2M1) / (w1 + w2)

       = [M1(w1 + w2) + w2(M2 − M1)] / (w1 + w2)

       = M1 + [w2 / (w1 + w2)] (M2 − M1)        (14)

Also, owing to the symmetry of (13), the subscripts may be interchanged, and therefore,

    x0 = M2 + [w1 / (w1 + w2)] (M1 − M2)

Thus,  the  weighted  mean  may  be  found  by  correcting  one  of  the 
quantities  by  an  amount  equal  to  the  difference  between  the  two 
quantities  multiplied  by  the  weight  of  the  other  and  divided  by 
the  sum  of  the  weights.  Obviously,  the  mean  lies  between  the 
two  quantities,  so  the  sign  of  the  correction  will  be  evident.  The 
weighted  mean  divides  the  interval  between  the  two  quantities 
in  the  inverse  ratio  of  the  weights  of  the  adjacent  quantities. 
For  example,  the  weighted  mean  of 

    6.784  Wt. 7
and 6.743  Wt. 2    is    6.784 − (2/9) × 41 = 6.784 − 9 = 6.775

the unit, for the correction, being conveniently taken in the last decimal place. Similarly, the correction to the second quantity would be +(7/9) × 41, with the same result.
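A short Python sketch of equation (14), applied to the numbers just given:

    # Sketch: weighted mean of two quantities by equation (14).
    def weighted_mean_of_two(m1, w1, m2, w2):
        # Correct m1 by the difference, times the other weight, over the sum of the weights.
        return m1 + w2 / (w1 + w2) * (m2 - m1)

    print(round(weighted_mean_of_two(6.784, 7, 6.743, 2), 3))   # 6.775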


CHAPTER   III 

INDIRECT  OBSERVATIONS,  OF  A  FUNCTION  OF  THE 
UNKNOWN  QUANTITIES 

37.  Indirect  Observations  are  those  in  which  the  observed 
quantity  is  related  to  the  desired  unknown  quantities  through  a 
known  formula  or  function.  The  observed  quantity  is  expressed 
as  an  explicit  function  of  the  unknowns,  which  are  usually  two 
or  more  in  number,  and  is,  therefore,  the  observed  value  of  the 
function.  It  may  be  that  the  unknowns  cannot  be  separated  so 
as  to  be  observed  directly,  and  that  they  can  only  be  determined 
in  combination.  They  are  assumed  to  be  mutually  independent; 
each  may  vary  without  causing  a  corresponding  variation  in  the 
others.  Moreover,  the  number  of  observations  must  be  greater 
than  that  of  the  unknown  quantities,  as  stated  in  Art.  22. 

38.  The  General  Function  may  be  algebraic,  logarithmic, 
exponential,  or  trigonometric,  and  simple  or  complicated.  How- 
ever, it  is  always  possible  to  reduce  such  a  general  function  to  the 
linear  form,  that  is,  to  the  first  degree,  either  by  taking  the  loga- 
rithm of  each  member  or  by  developing  the  function  by  Taylor's 
Theorem  and  neglecting  the  squares,  products,  and  higher  powers 
of  the  small  increments  involved.^  Furthermore,  the  great  ma- 
jority of  problems  are  concerned  with  the  simplest  form  of  func- 
tion, namely,  the  algebraic  one  of  the  first  degree.  Therefore, 
we  shall  here  consider  only  this  linear  form. 

39. The Linear Function between the unknowns, x, y, z, etc., will have the following general form,

    ax + by + cz + . . . + k        (15)

in which a, b, c, etc., are known numerical coefficients or factors and k is the constant term. As usual, the signs represent algebraic addition and the quantities may be positive or negative.

40. Observation Equations are the algebraic statements of the separate observations. Thus, if M1, M2, . . . Mn be the

¹ See Arts. 119-121.


observed values of the function, with the respective weights, w1, w2, . . . wn, the observation equations would be,

    a1x + b1y + . . . + k1 = M1        Wt. w1
    a2x + b2y + . . . + k2 = M2             w2
    .  .  .  .  .  .  .  .  .  .            (16)
    anx + bny + . . . + kn = Mn             wn

Their number is the same as that of the observations, and each subscript indicates its equation and observation.

If X, Y, Z, etc., be the best or most probable values of x, y, z, etc., to be obtained from the given observations, the substitution of these values in the above equations (16) would show a residual, v, for each equation, inasmuch as the observations are subject to error and no set of values of the unknowns would be likely to satisfy exactly any one of the observation equations. The ideal form of these equations, therefore, would be,

    a1X + b1Y + c1Z + . . . + k1 = M1 + v1        Wt. w1
    a2X + b2Y + c2Z + . . . + k2 = M2 + v2             w2
    .  .  .  .  .  .  .  .  .  .                       (17)

Transposing the M of each equation, and representing the difference, k − M, of the two constant quantities by the constant term, l, we have for the observation equations,

    a1X + b1Y + c1Z + . . . + l1 = v1        Wt. w1
    a2X + b2Y + c2Z + . . . + l2 = v2             w2
    .  .  .  .  .  .  .  .  .  .                  (18)
    anX + bnY + cnZ + . . . + ln = vn             wn

which are sometimes called Residual Equations.

41. Adjustment of Indirect Observations of Unequal Weight. For the best values of the unknown quantities, the sum of the weighted squares of the residuals is to be a minimum. That is,

    w1v1² + w2v2² + w3v3² + . . . + wnvn²  must be a minimum        (9)

Since x, y, z, etc., are independent of one another, it follows that the first derivative of the above expression with respect to each of



them must separately equal zero, for the minimum. Differentiating (9), therefore, and canceling the factor, 2, from each equation, we have:

    w1v1 ∂v1/∂X + w2v2 ∂v2/∂X + . . . + wnvn ∂vn/∂X = 0

    w1v1 ∂v1/∂Y + w2v2 ∂v2/∂Y + . . . + wnvn ∂vn/∂Y = 0        (19)

    .  .  .  .  .  .  .  .  .  .

There will be one equation for each of the unknown quantities. The differential coefficients in the first of these equations are the coefficients of X in the successive equations (18), those in the second are the coefficients of Y, etc. Substituting the values of the v's from (18), in the equations (19), then, we obtain

    w1a1(a1X + b1Y + . . . + l1) + w2a2(a2X + b2Y + . . . + l2)
        + . . . + wnan(anX + bnY + . . . + ln) = 0        (20)

    w1b1(a1X + b1Y + . . . + l1) + w2b2(a2X + b2Y + . . . + l2)
        + . . . + wnbn(anX + bnY + . . . + ln) = 0

    .  .  .  .  .  .  .  .  .  .

Whence, carrying out the products indicated, and adding the similar terms,

    [wa²]X + [wab]Y + [wac]Z + . . . + [wal] = 0

    [wba]X + [wb²]Y + [wbc]Z + . . . + [wbl] = 0        (21)

    [wca]X + [wcb]Y + [wc²]Z + . . . + [wcl] = 0

    .  .  .  .  .  .  .  .  .  .


These are called the Normal Equations, as they are the same in number as the unknown quantities, and, therefore, may be solved simultaneously to determine the latter. It will be seen in the equations (20) that the first normal equation is formed by multiplying the left-hand member of each observation equation by its weight and the coefficient of X in that equation, and adding all the resulting products. Likewise, the second normal equation is formed by multiplying each observation equation by its weight and the coefficient of Y, and adding the products, and so on through the series of unknown quantities.



The  adjustment,  then,  consists  in  forming  from  the  given 
observation  equations  a  set  of  normal  equations,  the  same  in  num- 
ber as  the  unknown  quantities,  the  solution  of  which  as  simul- 
taneous equations  will  give  the  best  values  of  those  quantities. 
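As a minimal Python sketch of this process (the names are illustrative, and the numerical data are those of the example of Art. 47, below), the normal equations (21) may be formed from the coefficients, the constant terms l, and the weights w of the observation equations:

    # Sketch: form the weighted normal equations (21) from observation equations (18).
    def normal_equations(coeffs, consts, weights):
        """coeffs[i] holds (a_i, b_i, ...); returns the matrix N and the vector n of
        the normal equations  N [X, Y, Z, ...] + n = 0."""
        u = len(coeffs[0])                      # number of unknowns
        N = [[0.0] * u for _ in range(u)]
        n = [0.0] * u
        for a, l, w in zip(coeffs, consts, weights):
            for j in range(u):
                for k in range(u):
                    N[j][k] += w * a[j] * a[k]  # e.g. [wab] = sum of w*a*b
                n[j] += w * a[j] * l            # e.g. [wal] = sum of w*a*l
        return N, n

    A = [(6, 40), (4, 32), (-5, -56), (-3, -28)]     # example of Art. 47, equal weights
    L = [-58.8, -38.3, 43.3, 27.6]
    N, n = normal_equations(A, L, [1, 1, 1, 1])
    print(N, n)   # [[86, 732], [732, 6544]] and [-805.3, -6775.2], to within rounding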

42. Observations of Equal Weight. This is a special case of the foregoing, in which each weight may be replaced by unity so that the w's disappear from the normal equations (21), resulting, therefore, in the following simpler form:

    [a²]X + [ab]Y + [ac]Z + . . . + [al] = 0

    [ba]X + [b²]Y + [bc]Z + . . . + [bl] = 0        (22)

    [ca]X + [cb]Y + [c²]Z + . . . + [cl] = 0

    .  .  .  .  .  .  .  .  .  .

For purposes of illustration, it will be convenient to use these equations (22) rather than the longer ones in which the weights are included.

43. Control or Check in the Formation of the Normal Equations. Referring to equations (18), let the sum of the numerical coefficients and the constant term in each equation be represented by s; thus,

    a1 + b1 + c1 + . . . + l1 = s1
    a2 + b2 + c2 + . . . + l2 = s2        (23)
    .  .  .  .  .  .  .  .  .  .
    an + bn + cn + . . . + ln = sn

To form the first normal equation, as shown in Art. 41, the terms in the left-hand member of each of these equations are multiplied by its weight and its first term or coefficient, namely, w1a1, etc., and the resulting products are added, as in (21). Performing this operation at the same time on the right-hand members above, in (23), we have, using the first equation, only, as an illustration:

    w1a1² + w1a1b1 + w1a1c1 + . . . + w1a1l1 = w1a1s1        (24)

or, after addition,

    [wa²] + [wab] + [wac] + . . . + [wal] = [was]        (25)

Thus, the second member of this equation should equal the sum


of  the  numerical  coefRcients  and  the  constant  term  of  the  first 
normal  equation,  which  affords  a  check  upon  the  numerical  work 
of  computing  these  quantities  and  forming  the  normal  equations. 
This  second  member  of  (25)  is  therefore  called  the  sum-term. 
For  the  other  normal  equations,  respectively,  it  has  the  form 
[wbs],  [wcs],  etc.  This  check  is  very  important  and  should  always 
be  applied,  except,  perhaps,  in  the  very  shortest  problems.  Hav- 
ing formed  the  sum,  s,  for  each  of  the  observation  equations,  it  is 
treated the same as the other quantities, a, b, c, etc., and when a
normal  equation  is  written,  its  sum-term  should  equal  the  alge- 
braic sum  of  its  other  numerical  quantities.  It  must  be  noted, 
however,  that  the  check  may  not  hold  exactly,  in  the  last  decimal 
place,  owing  to  discarded  remainders,  but  this  discrepancy  will  not 
usually  exceed  one  unit  in  that  place. 
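A short Python sketch of this check, continuing the sketch given under Art. 41 (the data again those of the example of Art. 47):

    # Sketch: the sum-term check of Art. 43 for the first normal equation.
    A = [(6, 40), (4, 32), (-5, -56), (-3, -28)]
    L = [-58.8, -38.3, 43.3, 27.6]
    W = [1, 1, 1, 1]

    s = [sum(a) + l for a, l in zip(A, L)]                  # s = a + b + ... + l, (23)
    was = sum(w * a[0] * si for w, a, si in zip(W, A, s))   # [was], the sum-term (25)

    waa = sum(w * a[0] * a[0] for w, a in zip(W, A))        # [waa]
    wab = sum(w * a[0] * a[1] for w, a in zip(W, A))        # [wab]
    wal = sum(w * a[0] * l for w, a, l in zip(W, A, L))     # [wal]

    # The sum-term equals the sum of the other terms of the normal equation.
    print(round(was, 1), round(waa + wab + wal, 1))         # 12.7  12.7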

44. Symmetry of the Normal Equations. Inspection of the literal forms of the normal equations, in (21) and (22), reveals a symmetry which is useful as an aid to the memory, and which will lessen the labor of computation both in forming the equations and in their solution, as will be shown farther on. This symmetry exists among the coefficients of the unknown quantities with reference to the diagonal line passing downward to the right. On this diagonal will be found those terms which involve the squares of the quantities, a, b, c, etc., as¹ [waa], [wbb], [wcc], etc., or more simply, without weights, [a²], [b²], [c²], etc. These terms being squares, are always positive. Then the coefficients in any vertical column occur in the same order as those in the corresponding horizontal row. Thus, in the third column and row the order is [ac], [bc], [cc], [dc], etc., c being the third of the original coefficients, and the other factors having the original order, a, b, c, d, etc.

45. Formation of the Normal Equations. Aids. The computation of the necessary squares and products for the coefficients in the normal equations is facilitated by the use of special methods as well as tables and mechanical devices. The choice of the method or device will be governed, in general, by the size of the numbers involved and the refinement of the computations.

¹ The squares of a, b, c, etc., are often written as aa, bb, cc, etc., to illustrate this symmetry as well as to avoid the use of exponents.



The  tables  used  contain  the  logarithms,  the  squares,  or  the 
products  of  numbers.  Five-place  logarithms  are  suitable  in  most 
work,  and  four  places  are  often  sufficient.  Hussey's  five-place 
tables  are  recommended  as  very  convenient.  Barlow's  tables  of 
squares,  cubes,  roots  and  reciprocals  of  numbers  up  to  9,999  are 
well  known  and  satisfactory.  Of  the  tables  of  products,  Crelle's 
Rechentafeln,  giving  the  complete  products  of  numbers  of  three 
figures  each,  that  is,  up  to  999  by  999,  is  probably  the  most  useful, 
although  Zimmermann's  and  Peters'  may  more  readily  be  used  to 
obtain  products  of  larger  numbers,  as  they  give  directly  products 
of  numbers  of  four  figures  by  those  of  two  figures.  In  computing 
the  coefficients  for  normal  equations  by  means  of  tables,  the  loga- 
rithmic method  is  slowest,  the  use  of  squares  is  better,  and  the 
tables  of  products  are  usually  most  satisfactory.  In  the  absence  of 
these  last,  however,  tables  of  squares  may  be  used  in  either  of  two 
ways  for  the  computation  of  products,  namely,  by  one  of  the  fol- 
lowing formulas: 

    ab = ½[(a + b)² − a² − b²]        (26)

and

    ab = ¼[(a + b)² − (a − b)²]        (27)

The former requires but one new opening of the tables, as a² and b² are separately necessary as coefficients.

The mechanical aids to computation consist of slide-rules and computing machines. The ordinary 10-in. slide-rule is sufficient for reading products to three significant figures. The Thatcher slide-rule, however, reads directly to four or sometimes five figures and is excellent for solving normal equations as well as forming them. Computing machines are of two types, for addition and for multiplication. We are concerned primarily with the latter, although the former may be used indirectly for multiplication. Of the multiplying machines there are two forms; the Brunsviga type, in which one turn of the crank multiplies by one unit so that to multiply by 43, four turns would be required in one position and three in the next; and the Millionaire machine, in which one turn of the crank multiplies by a whole digit, so that but two turns would be required to multiply by 43, one for each digit. If very large numbers are to be multiplied or divided, a computing



machine  is  almost  indispensable,  but  for  ordinary  work  the  tables 
of  products  and  the  slide-rule  are  convenient  and  sufficient,  espe- 
cially since  large  numbers  are  avoided  as  much  as  possible. 
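These identities are readily verified; a two-line Python check:

    # Sketch: products from a table of squares, equations (26) and (27).
    a, b = 57, 83
    assert a * b == ((a + b) ** 2 - a ** 2 - b ** 2) // 2    # (26)
    assert a * b == ((a + b) ** 2 - (a - b) ** 2) // 4       # (27)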

46.  The  computation  of  the  coefficients  in  the  normal  equations 
is  carried  out  conveniently  in  the  form  of  a  table  in  which  each 
quantity  involved  is  shown,  with  its  proper  sign,  first  the  given 
ones  and  then  the  computed  ones.  Then  it  is  highly  important 
that  the  multiplication  of  several  quantities  by  the  same  factor 
be  performed  in  succession,  as  this  plan  in  particular  is  adapted 
to  the  use  of  slide-rules,  multiplication  tables,  and  computing 
machines.  Thus,  for  each  observation  equation,  the  factor,  wa, 
would  be  multiplied  into  a,  h,  c,  .  .  .  I,  and  s,  in  succession,  and 
the  products  entered  in  the  proper  columns  of  the  table,  so 
that the sums of the columns would be the coefficients, [waa], [wab], [wac], . . . [was], of the first normal equation. Next,
the  factor,  wb,  would  be  multiplied  into  the  same  quantities, 
beginning  with  6,  however,  as  the  wab  products  are  included  in 
the  preceding  series,  and  the  column  totals  would  be  coefficients 
for  the  second  normal  equation,  and  so  on.  As  each  normal 
equation  is  completed,  its  coefficients  should  be  tested  with  the 
sum-term  to  assure  the  computer  that  the  check  is  satisfied. 
This  would  be  indicated  by  a  definite  check-mark  after  the  sum- 
term  if  it  checked  exactly,  or  by  the  cancellation  of  its  last  figure 
with  the  correct  one  written  above  so  as  to  equal  the  sum  of  the 
quantities  in  the  equation. 

In  the  simplest  cases,  when  there  are  but  few  observations  and 
two  unknown  quantities,  and  when  the  coefficients  are  small 
integers,  it  may  not  be  worth  while  to  carry  out  the  tabulation  for 
the  formation  of  the  normal  equations,  but  it  is  generally  safer 
to do so, especially when the computer is subject to interruption
in  his  work.  It  is  well,  also,  to  write  the  algebraic  signs  for  a 
complete  equation  before  forming  and  writing  the  numbers,  always 
writing  all  positive  as  well  as  negative  signs. 

47.  Example  of  the  Direct  Formation  of  Normal  Equations. 
As  an  illustration  of  the  preceding  articles,  the  normal  equations 
will be formed directly from the following set of observation equations. For simplicity, the weights will be assumed equal.


          X          Y         (l)
        +6         +40       −58.8    = 0
        +4         +32       −38.3    = 0        (28)
        −5         −56       +43.3    = 0
        −3         −28       +27.6    = 0

The tabulated squares and products, with the column sums which furnish the coefficients of the normal equations, are:

      a       b        l         s         aa       ab        al        as         bb        bl         bs
     +6     +40     −58.8     −12.8       +36     +240     −352.8     −76.8      +1600    −2352.0    −512.0
     +4     +32     −38.3      −2.3       +16     +128     −153.2      −9.2      +1024    −1225.6     −73.6
     −5     −56     +43.3     −17.7       +25     +280     −216.5     +88.5      +3136    −2424.8    +991.2
     −3     −28     +27.6      −3.4        +9      +84      −82.8     +10.2       +784     −772.8     +95.2
                                           +86     +732     −805.3     +12.7      +6544    −6775.2    +500.8

whence the normal equations, with their sum-terms, are

          X            Y           (l)                (s)
        +86         +732        −805.3    = 0        +12.7
       +732        +6544       −6775.2    = 0       +500.8
                                                               (29)


As  there  are  no  discarded  remainders,  the  checks  are  exactly 
satisfied. 

(The solution of these equations by the methods of algebra gives

    X = +11.52    and    Y = −0.25

as the best values of X and Y obtainable from the four given observations.)
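A minimal Python sketch, solving the two normal equations (29) by determinants; working to full precision it differs from the text's two-decimal figures only in the last place:

    # Sketch: solve the normal equations (29) for X and Y.
    #   86 X +  732 Y -  805.3 = 0
    #  732 X + 6544 Y - 6775.2 = 0
    a11, a12, l1 = 86.0, 732.0, -805.3
    a21, a22, l2 = 732.0, 6544.0, -6775.2

    det = a11 * a22 - a12 * a21
    X = (-l1 * a22 + l2 * a12) / det
    Y = (-l2 * a11 + l1 * a21) / det
    print(round(X, 2), round(Y, 2))   # 11.51  -0.25  (the text's rounded work gives +11.52)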

48.  Use  of  Assumed  Approximate  Values  of  the  Unknowns. 

The  constant  term  of  the  observation  equations  is  sometimes  large 
as  compared  with  the  other  numerical  quantities,  and  to  save  labor 
in  the  formation  of  the  normal  equations,  recourse  may  be  had 
to  a  scheme  similar  to  that  used  in  Arts.  28  and  33  in  the  com- 
putation of  the  mean,  namely,  the  use  of  assumed,  approximate 
values  of  the  unknowns,  by  which  device  the  constant  terms  will 
be  reduced  considerably  in  size.  For  each  unknown  in  the  obser- 
vations, there  is  substituted  its  approximate  value  plus  a  small 
correction,  as, 

    X = X0 + x
    Y = Y0 + y,   etc.        (30)

where X0 and Y0 represent the approximate values and x and y, the small corrections. The approximate values may be obtained by a trial solution of the necessary number of the observation equations, namely, as many as there are unknown quantities.

Thus, in the example of the preceding articles, a solution of the third and fourth of the observation equations results in

    X = +11.9    and    Y = −0.29

whence we may assume the approximate values,

    X0 = +12.0    and    Y0 = −0.3

Substituting for X and Y, therefore, in equations (28), the quantities

    X = x + 12.0    and    Y = y − 0.3

we obtain for the first equation,

    +6(x + 12.0) + 40(y − 0.3) − 58.8 = 0        (31)



and for the entire set of observation equations, after simplification,

    +6x + 40y + 1.2 = 0
    +4x + 32y + 0.1 = 0        (32)
    −5x − 56y + 0.1 = 0
    −3x − 28y       = 0

The  constant  terms  have  thus  been  diminished  to  very  small 
quantities  and  without  the  expenditure  of  much  labor,  so  that  the 
formation  of  the  normal  equations  will  be  considerably  easier, 
but in so far only, be it noted, as the terms involving l are concerned. It is obvious that this scheme leaves the coefficients of
the  unknowns  entirely  unaltered,  the  only  changes  being  in  the 
constant  terms. 

49.  Adoption  of  New  Unknowns  to  Equalize  Coefficients. 
When  the  coefficients  of  any  unknown  in  the  observation  equa- 
tions are  consistently  large,  they  may  be  reduced  in  size  by  an 
artifice  similar  to  that  of  the  preceding  article,  that  is,  by  sub- 
stituting for  the  unknown  a  new  one  obtained  by  multiplying  the 
former  by  a  certain  factor. 

In the equations (32), for example, the coefficients of y are much larger than those of x and would be easier to handle if they were divided by, say, 20. Therefore, assume

    y′ = 20y     or     y = y′/20        (33)

Substituting this value of y in the given equations, and writing the coefficients in columns for simplicity, we have,


(34) 


X 

+6 

y 
+2.0 

+  1.2     = 

=     0 

+4 

+  1.0 

+0.1 

—  5 

-2.8 

+0.1 

-3 

-1.4 

0 


which are much simpler than the original equations (28), both as regards the formation of the normals and their solution. The normal equations are

         x          y′          (l)
       +86        +36.6       +7.1     = 0        (35)
       +36.6      +16.36      +2.28    = 0

and their solution results in

    x = −0.48     and     y′ = +0.94        (36)

whence,

    y = y′/20 = +0.047

Therefore,

    X = +12.0 + x = +12.0 − 0.48 = +11.52
                                                   (37)
    Y = −0.3 + y = −0.3 + 0.05 = −0.25

The advantage of reducing the size of the coefficients and constants before forming and solving the normal equations is less in such a short problem as the one just solved than in the ones which contain more unknown quantities and larger series of observations. It is generally advisable, however, even in the shorter problems, to diminish the coefficients and constants to a size which will be convenient for computation, and to equalize them to some extent, at least, by using whole numbers for the approximate values and the factors.
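A minimal Python sketch of these two devices (assumed approximate values and a scaled unknown), applied to the equations of the example; the names are illustrative and the code is not the text's tabular scheme:

    # Sketch: reduce the constants and equalize the coefficients, Arts. 48-49.
    # Original observation equations (28):  a*X + b*Y + l = 0.
    equations = [(6, 40, -58.8), (4, 32, -38.3), (-5, -56, 43.3), (-3, -28, 27.6)]

    X0, Y0 = 12.0, -0.3      # assumed approximate values, Art. 48
    scale = 20.0             # y' = 20*y, Art. 49

    reduced = []
    for a, b, l in equations:
        # Substitute X = X0 + x and Y = Y0 + y'/scale.
        reduced.append((a, b / scale, round(a * X0 + b * Y0 + l, 1)))
    print(reduced)   # reproduces equations (34), to within rounding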

50.  Example:  Time  by  Star  Transits.  Let  us  consider,  as 
an  example  of  indirect  observations,  the  determination  of  time 
by observed transits of stars on the meridian, using an astronomical transit instrument. The times when each star is seen to cross the successive threads are recorded by the observer, himself, as he carries the beats of the chronometer in his mind. The mean of these times is taken as the time when the star crossed the line of sight of the instrument. It is then corrected for diurnal aberration, the rate of the chronometer, and the inclination of the horizontal axis of the instrument as determined from the readings of the striding level. The resulting time, θ′, is subtracted from the right ascension, α, of the star (which is the correct sidereal time when the star crosses the meridian) and the difference,



α − θ′, according to the usual notation, is therefore made up of the chronometer correction, Δθ, which is the quantity really desired from the observations, and the corrections for azimuth, Aa, and collimation, Cc, according to the formula,

    Aa + Cc + Δθ − (α − θ′) = 0        (38)

in which A and C are the known azimuth and collimation factors, and a, c, and Δθ are respectively the azimuth and collimation constants and the chronometer correction, which are the three unknowns of the problem and which, therefore, will be represented by x, y, and z. Each star thus furnishes an observation equation of the form,

    Ax + Cy + z − (α − θ′) = 0        Wt. w        (39)

the  weight  being  determined  from  the  star's  declination,  as  stated 
in  Art.  32.     The  given  data  for  the  nine  stars  observed  are: 


       A          C         α − θ′         w
     +0.15      +1.22     −9m 06.81s      0.9
     +0.72      +1.00        06.98        1.0
     −1.66      +3.30        07.69        0.2
     −0.17      +1.52        06.49        0.7
     +0.09      −1.28        06.08        0.8        (40)
     −0.18      −1.53        06.32        0.7
     +0.80      −1.02        06.27        1.0
     +4.25      +4.92        07.47        0.1
     +0.60      −1.01        06.00        1.0


As the values of α − θ′ are nearly equal, and the coefficients of z are unity, it is evident that the value of z will approximate to α − θ′. Therefore, let

    z = z′ − 9m 06.00s        (41)

and the form of the typical equation becomes

    Ax + Cy + z′ + l = 0        Wt. w        (42)

in which l = −9m 06.00s − (α − θ′). The modified observation equations, therefore, are

    +0.15x + 1.22y + z′ + 0.81 = 0     (Wt. 0.9),     etc.,



or,  arranged  in  tabular  form, 


        x          y         z′        (l)          (Wt.)
     +0.15      +1.22       +1       +0.81 = 0       0.9
     +0.72      +1.00       +1       +0.98           1.0
     −1.66      +3.30       +1       +1.69           0.2
     −0.17      +1.52       +1       +0.49           0.7
     +0.09      −1.28       +1       +0.08           0.8        (43)
     −0.18      −1.53       +1       +0.32           0.7
     +0.80      −1.02       +1       +0.27           1.0
     +4.25      +4.92       +1       +1.47           0.1
     +0.60      −1.01       +1         0             1.0


The quantities in the successive columns are the a, b, c, l, and w, of the observation equations, so that it is unnecessary to tabulate them again.

Forming the normal equations from the typical ones in (21) by means of a tabulation similar to that of Art. 47, we have,

         x          y           z′         (l)                 (s)
      +3.94      +0.38       +2.18       +1.00    = 0          +7.50
      +0.38     +13.56       +0.19       +3.53    = 0         +17.66        (44)
      +2.18      +0.19       +6.40       +3.09    = 0         +11.86


the solution of which, as algebraic simultaneous equations, gives

    x = +0.043s.,        y = −0.261s.,        z′ = −0.491s.,

whence, from (41),

    z = Δθ = z′ − 9m 06.00s. = −9m 06.49s.,

so that a = +0.043s., c = −0.261s., and Δθ = −9m 06.49s. represent the best or most probable values of the three unknowns which can be obtained from the given observations.

51. General Application of the Method. The process outlined in this chapter for the adjustment of indirect observations may be applied to any set of linear simultaneous equations whose number is greater than that of the unknown quantities, although they may not be observation equations. Sometimes such a series of equa-


tions  may  result  from  computations  or  from  theoretical  assump- 
tions. Inasmuch,  however,  as  the  present  method  of  adjustment 
depends  upon  the  assumptions  as  to  the  occurrence  of  error, 
stated  in  Art.  18,  the  use  of  the  method  for  the  adjustment  of 
quantities  other  than  those  resulting  from  observations  may  be 
justified  only  by  the  absence  of  a  better  scheme. 

However,  any  other  method  is  likely  to  be  more  laborious 
than  this  one,  if  it  takes  into  account  all  the  given  data.  For 
example,  suppose  the  simplest  case  of  three  given  equations  involv- 
ing two  unknowns.  If  ignorant  of  the  adjustment  by  means  of 
Least  Squares,  but  desirous  of  utilizing  all  of  the  given  equations 
because  there  is  no  way  of  telling  which  one  could  be  discarded 
with  least  effect,  the  computer  might  reasonably  select  all  possible 
combinations  of  the  three  equations,  two  at  a  time,  namely,  three, 
and  solve  each  of  the  three  pairs  independently  by  algebraic 
methods,  thus  obtaining  three  different  values  for  each  of  the 
unknowns,  of  which  he  would  probably  take  the  mean  as  the  best 
value  within  his  knowledge.  Certainly,  the  formation  and 
solution  of  two  normal  equations  would  be  much  easier  than 
such  a  process. 


CHAPTER   IV 

SOLUTION  OF  NORMAL  EQUATIONS 

52. Methods of Elimination. As simple, simultaneous equations of the first degree, the normal equations may be solved by
any  of  the  ordinary  algebraic  methods  of  elimination;  by  addition 
or  subtraction,  by  substitution,  or  by  comparison.  In  fact,  these 
methods  are  satisfactory  when  there  are  but  two  equations  to  be 
solved.  But  in  larger  sets,  of  three  or  more,  it  is  possible  to 
shorten  the  numerical  work  by  taking  advantage  of  the  peculiar 
symmetry  which  all  normal  equations  possess,  as  was  pointed  out 
in  Art.  44.  It  is  much  easier  to  solve  a  set  of  normal  equations 
than  a  set  of  ordinary,  simultaneous  equations  of  the  same  number 
which  do  not  have  this  symmetry. 

53.  The  Gauss  Method  of  Substitution  has  been  for  a  century 
the  basis  of  the  special  methods  for  the  solution  of  normal  equa- 
tions. Its  notation  is  convenient,  and  in  its  general,  literal  form 
it  is  given  in  nearly  every  work  on  Least  Squares.  However,  it 
has  been  modified  and  improved  in  various  ways,  particularly  by 
Mr.  M.  H.  Doolittle,  formerly  a  computer  in  the  U.  S.  Coast 
and  Geodetic  Survey,  and,  in  the  effort  to  confine  ourselves  to  a 
single  method  which  shall  be  the  most  generally  useful  one  for  our 
purposes,  we  shall  omit  the  detailed  explanation  of  the  Gauss 
process. 

54.  Requirements  of  a  Good  Method.  It  is  important  that 
the  method  to  be  adopted  be  as  universally  useful  as  possible,  in 
both short and long problems, although modifications may be convenient to adapt it to special or peculiar cases. The various
steps  in  the  elimination  should  be  identical,  so  that  the  work  may 
be  performed  mechanically,  to  a  great  extent,  thereby  avoiding 
mistakes.  The  method  should  be  as  short  as  possible  so  as  to 
avoid  unnecessary  work.     And  finally,  checks  should  be  available 



at  frequent  intervals  throughout  the  computation  in  order  that 
errors  may  be  discovered  and  corrected  without  a  great  deal  of 
recomputation.  All  of  these  qualities  should  be  borne  in  mind 
and utilized as far as possible in every solution. It is believed that the Abridged Method explained below fulfills these requirements and that it will be readily understood.¹

55.  Algebraic  Elimination  by  Addition.  Let  us  undertake 
the  solution  of  a  simple  set  of  normal  equations  in  order  that  the 
steps  we  shall  take  in  the  process  may  be  clearly  understood.  The 
method  of  elimination  by  addition  will  be  used,  although  arranged 
in  a  certain  form  to  illustrate  the  shorter  method  which  is  to  follow. 
The  given  normal  equations,  with  coefficients  arranged  in  col- 
umns, are: 


         x        y        z       (l)
 (I)    +6       −2       +3       +2     = 0
 (2)    −2       +3       −4       −3     = 0        (45)
 (3)    +3       −4       +3       +1     = 0


For  purposes  of  explanation,  the  equations  are  numbered  at  the 
left,  but  for  a  reason  which  will  appear  later,  the  first  is  given  the 
Roman  numeral  (I).  First,  we  eliminate  x  between  the  first  and 
second  equations,  by  multiplying  the  first  by  such  a  quantity  or 
factor,  as  will  make  its  first  term  equal  to  that  of  the  second  equa- 
tion with  the  opposite  sign,  and  then  adding  the  two.  This 
factor  will  be  the  quotient  of  the  first  term  of  the  second  equation 
with  its  sign  changed,  by  the  first  term  of  the  first  equation,  that  is, 
+2/6. Thus, indicating on the right the steps taken, we write down equation (2) and under it the first equation multiplied by +2/6:


(40) 


¹ Published by M. H. Doolittle in C. & G. Survey Report, 1878, App. 8.





This  equation,  resulting  from  the  elimination  of  the  first  unknown 
is  called  a  First  Derived  Equation,  and  is  given  the  Roman  numeral 
(II)  as  marking  the  completion  of  a  whole  step  in  the  process. 

Next,  X  is  eliminated  in  the  same  way  between  the  first  and 
third  equations,  by  multiplying  the  first  by  such  a  factor  as  will 
make  its  first  term  equal  to  that  of  the  third  equation  with  its 
sign  changed,  and  adding  the  two  equations.  As  before,  the 
factor  will  be  the  quotient  of  the  latter  first  term  with  the  reversed 
sign  by  the  former  one,  or  —  3/6.     Writing  (3)  first, 


         x        y        z       (l)
 (3)    +3       −4       +3       +1     = 0
 (5)    −3.0     +1.0     −1.5     −1.0   = 0        (I) × (−3/6)        (47)
 (6)      0      −3.0     +1.5       0    = 0        (3) + (5)


Equation  (6)  is  like  (II)  in  having  no  x-term,  that  is,  it  may 
also  be  called  a  First  Derived  Equation.  Therefore,  y  can  be 
eliminated  from  these  two  equations  in  the  same  manner  as  x 
was  eliminated  above.  Multiplying  (II)  by  the  first  term  of  (6) 
with  its  sign  changed,  divided  by  the  first  term  of  (II),  and  adding 
the  resulting  equation  to  (6),  we  have,  as  before,  writing  the  second 
of  the  two  equations  first : 


,'/ 

z 

(I) 

(iV) 

(7) 

aiii 

-3.0 

+3.0 
0 

+  1.5  1 

-3.9 

-2.4 

0 
-3.0 
-3.0 

o  c    c 

II    II    II 

(II)  X( +3. 0,2.3) 

(i))  +  (7) 

(iS) 


This completes the second step in the process; the second unknown has been eliminated and this last equation is therefore called a Second Derived Equation and given the next Roman numeral (III). The elimination is now complete and the last equation may be written

    −2.4z − 3.0 = 0




giving the value of z directly, as z = +3.0/(−2.4) = −1.25. Substituting this value in (II) gives

    y = (+3.0z + 2.3)/(+2.3) = (−3.75 + 2.3)/(+2.3) = −1.45/(+2.3) = −0.63

Then, from (I),

    x = (+2y − 3z − 2)/(+6) = (−1.26 + 3.75 − 2)/(+6) = +0.49/(+6) = +0.08        (49)
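A minimal Python sketch of the same elimination and back-substitution, applied to the normal equations (45); because the text carries only one decimal place at each step, its figures differ slightly in the last place from the full-precision values printed here:

    # Sketch: elimination by addition and back-substitution, equations (45).
    # Each equation is stored as [coeff. of x, coeff. of y, coeff. of z, constant term].
    eqs = [[6.0, -2.0, 3.0, 2.0],
           [-2.0, 3.0, -4.0, -3.0],
           [3.0, -4.0, 3.0, 1.0]]

    n = 3
    derived = [row[:] for row in eqs]
    for i in range(n):                       # eliminate the i-th unknown
        for j in range(i + 1, n):
            factor = -derived[j][i] / derived[i][i]
            for k in range(i, n + 1):
                derived[j][k] += factor * derived[i][k]

    # Back-substitution through the derived equations (I), (II), (III).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = derived[i][n] + sum(derived[i][k] * x[k] for k in range(i + 1, n))
        x[i] = -s / derived[i][i]
    print([round(v, 2) for v in x])   # about [0.09, -0.64, -1.27]; the text gets 0.08, -0.63, -1.25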


Some  properties  of  this  method  of  solution  will  now  be  con- 
sidered. 

56.  Symmetry  among  the  Derived  Equations.  The  First 
Derived  Equations  resulting  from  the  elimination  of  the  first 
unknown,  x,  between  the  first  normal  equation  and  each  of  the 
others  in  succession,  will  be  one  less,  in  number,  than  the  unknowns; 
therefore,  in  the  above  example  they  are  two,  namely. 


y 

2 

(/) 

(II) 

(6) 

+2.3 
-3.0 

-3.0 
+1.5 

—  2.3 

0 

=  0 
=  0 

(50) 


They  evidently  are  symmetrical  about  the  diagonal  in  the  same 
respect  as  the  normal  equations,  that  is,  the  second  coefficient  in 
the  first  row  (  —  3.0)  is  the  same  as  the  second  coefficient  in  the 
first  column  (  —  3.0).  This  symmetry  exists  likewise  in  all  sets  of 
first derived equations, whatever their number, as may be proved by carrying out a complete solution of the typical normal equations with their literal coefficients. In this example, however, the reason for the equality of these two coefficients may be seen by indicating the operations through which they were obtained, using certain symmetrical terms in the normal equations. Thus,

    Second Coeff. of (II) = −4 − (−2/6 × +3) = −3.0
                                                            (51)
    First Coeff. of (6)   = −4 − (+3/6 × −2) = −3.0

The two −4's are symmetrical in the normal equations, as also




are  the  two  —  2's  and  the  two  +3's,  while  the  +6  is  the  same  in 
both  cases. 

Likewise, the Second Derived Equations, resulting from the elimination of y from the first derived equations, are symmetrical among themselves, and so on with successive sets of derived
equations  in  the  solution  of  a  large  number  of  normal  equations 
by  this  method  of  elimination. 

57.  Omission  of  Redundant  Terms,  (a)  In  each  first  derived 
equation,  x  has  been  eliminated;  that  is,  it  has  the  coefficient  zero. 
Therefore,  it  is  unnecessary  to  write  the  coefficients  in  the  x-col- 
umn  at  all  during  the  elimination  of  x,  as  in  (46)  and  (47),  as  we 
know  that  they  will  add  up  to  zero  if  the  work  is  correct,  and  any- 
way, there  will  be  other  and  better  checks  on  the  correctness  of 
the  work.  Similarly,  the  y-column  may  be  omitted  during  the 
elimination  of  y,  as  in  (48),  and  so  on.  However,  the  sum  of  the 
remaining  terms  in  each  equation  will  not  now  equal  zero,  except 
in  the  derived  equations,  where  the  omitted  coefficients  are  always 
zero.  This  will  deprive  us  of  the  equation-checks  except  in  the 
derived  equations,  but  these  will  still  be  sufficiently  close  together 
to  control  the  computation. 

(b) By transposing all the terms of each equation into one member, as was done in the above example, we may omit the symbols, "= 0," from each equation. As just stated, however, these must be understood, in the cases of the derived equations, as if written.

(c)  Even  the  original  normal  equations  may  be  simplified  by 
the  omission  of  all  the  terms  lying  below  the  diagonal,  these 
being symmetrical to the ones above the diagonal. Thus, in the normal equations, (45),


         x        y        z       (l)
 (1)    +6       −2       +3       +2     = 0
 (2)   (−2)      +3       −4       −3     = 0        (52)
 (3)   (+3)     (−4)      +3       +1     = 0

the canceled terms (here enclosed in parentheses) would be omitted. To read the original equa-




tions, then, the omitted portion of each row must be replaced by the symmetrical quantities in the corresponding column. For example, the second equation is begun in the second column and read downward to the diagonal and then horizontally to the right along its own row, retaining, however, the original order of the unknowns, as −2x + 3y − 4z − 3 = 0. Similarly, the third equation is begun in the third column, read downward to the diagonal and continued along the third row as usual: +3x − 4y + 3z + 1 = 0. The simplified form of the equations, then, would be:


         x        y        z       (l)
 (1)    +6       −2       +3       +2
 (2)             +3       −4       −3        (53)
 (3)                      +3       +1


It is evident that the original order of the equations must never be changed if these abbreviations are to be used, as the symmetry would then be destroyed.

(d) In the second step of the elimination, it is unnecessary to write equation (6) either in (47) or (48), but equation (7) may be written directly below (5) in (47), the x- and y-columns being omitted as above, and the three lines (3), (5), and (7) added, to form (III). The factor for obtaining (7) from (II), namely, −(−3.0/2.3), is the second term of (II) with its sign changed, divided by the first term of (II), owing to the symmetry of the derived equations as shown in Art. 56. Thus, (47) and (48) may be combined into:


'/ 

(3) 
(n) 

+  3 
- 1  . .") 

-1,0 

'r)Xf'-3  Gi 

IT) 

- :? .  0 

-.so 

(Hix'+i'S.o  2. 

5) 

fill) 

-2.4 

~:i  0 

r.])  +  <r))  +  i7'- 

(54) 



58. The Series of Derived Equations. Upon inspection of (46) and (54), it will be seen that (II) is derived from (2) and (I),
and  that  (III)  is  derived  from  (3),  (I),  and  (II).  If  there  were  a 
fourth  unknown  and  four  normals,  the  derived  equation  (IV) 
would  be  derived  from  the  fourth  normal  equation  and  (I),  (II), 
and  (III),  and  so  on.  Here,  then,  lies  the  reason  for  giving  to  the 
first  normal  equation  the  Roman  numeral  (I);  it  is  associated 
with  the  derived  equations  in  each  step  of  the  elimination.  There- 
fore, in  writing  a  list  of  the  derived  equations,  this  equation  is 
written  first,  and  is  referred  to  as  one  of  them.  Such  a  list,  in 
order,  has  the  property  that  each  equation  is  complete  and  begins 
with  the  second  unknown  of  the  preceding  one,  so  that  the  series 
is  used  for  determining  the  successive  unknowns  in  their  reverse 
order  when  the  elimination  has  been  completed,  by  substitution 
back  through  them. 

59.  Control  or  Check  in  the  Solution  of  the  Normal  Equations. 
The  check  on  the  formation  of  the  normal  equations,  explained 
in Arts. 43 and 47, may be continued through the process of elimina-
tion so  as  to  test  the  correctness  of  the  computation  at  frequent 
intervals.  If  the  sum-term  of  each  of  the  normal  equations  be 
subjected  to  the  same  operations  as  its  other  terms,  the  resulting 
modified  sum-term  will  be  equal  to  the  sum  of  the  corresponding 
series of other terms.  Moreover, this relation will persist when
several equations have been added or subtracted, the sum of all
the sum-terms being equal to the sum of all the other terms.
Thus, the sum-terms which were used to check the formation of
the normal equations may be used during their solution to test
the correctness of an equation at any stage of the work.  As was
pointed out in (a) of the last article, however, the omission of
redundant terms leaves the derived equations as the only com-
plete ones in the elimination.  Therefore, this sum-check being
applicable to each derived equation as it is formed, should be
taken advantage of in every case.

Since the check applies only to complete equations, the coeffi-
cients of the normal equations must be read down and to the right
as shown in the last article, when the simplified form is used.




In  the  above  example,  then,  the  complete  statement  of  the  normal 
equations  in  the  simpler  form,  with  their  check-terms,  is: 


          x       y       z      (l)     (s)
(I)      +6      -2      +3      +2      +9
(2)              +3      -4      -3      -6
(3)                      +3      +1      +3
                                                    (55)


60.  Elimination  by  the  Abridged  Method.  This  set  of  equa- 
tions will  now  be  solved  in  accordance  with  the  devices  explained 
in  the  preceding  articles  for  al)ridging  the  various  operations  as 
much  as  possible.  A  comparison  of  this  solution  with  the  direct, 
algebraic  one  given  in  Art.  55,  will  illustrate  the  different  steps 
and  the  saving  of  labor. 


           x       y       z      (l)      (s)
(I)       +6      -2      +3      +2       +9     \
(2)               +3      -4      -3       -6      >  Normal Equations
(3)                       +3      +1       +3     /
(2)               +3      -4      -3       -6
(4)             -0.7    +1.0    +0.7     +3.0         (I)X(+2/6)
(II)            +2.3    -3.0    -2.3     -3.0         (2)+(4)
(3)                       +3      +1       +3
(5)                     -1.5    -1.0     -4.5         (I)X(-3/6)
(6)                     -3.9    -3.0     -3.9         (II)X(+3.0/2.3)
(III)                   -2.4    -3.0     -5.4         (3)+(5)+(6)

The process may be outlined as follows:  Write (2); follow
the left-hand column of (2) up to (I) and find -2; change its
sign and divide by the first term of (I), giving +2/6; multiply
this factor into the terms of (I), beginning with the same left-hand
column of (2), and writing the results in line (4) under the corre-



sponding ones in (2); add (2) and (4) to obtain (II).  Next, write
(3);  follow  its  left-hand  column  up  to  (I)  and  find  +3;  change 
its  sign  and  divide  by  the  first  term  of  (I),  giving  —3/6;  multiply 
this  factor  into  the  terms  of  (I),  beginning  with  the  left-hand  col- 
umn of  (3),  writing  the  products  in  their  proper  columns  in  line 
(5),  under  (3).  Again,  follow  the  same  left-hand  column  of  (3) 
up  to  (II)  and  find  —3.0;  change  its  sign  and  divide  by  the  first 
term  of  (II),  giving  +3.0/2.3;  multiply  this  factor  into  the  terms 
of  (II),  beginning  with  the  left-hand  column  of  (3),  writing  the 
results  in  line  (6),  below  (5),  and  in  their  proper  columns;  add  (3), 
(5),  and  (6),  to  obtain  (III),  as  the  second  and  last  step  in  the 
elimination.  If  there  were  four  normal  equations,  the  fourth 
would  now  be  written,  beginning  with  the  fourth  colunm;  under 
it  would  be  written  the  products  of  the  terms  of  (I),  beginning 
with  the  fourth,  by  a  factor  consisting  of  the  fourth  term  of  (I) 
with  its  sign  changed,  divided  by  its  first  term;  under  this  line 
would  be  written  the  products  of  the  terms  of  (II),  beginning  with 
the  third,  by  a  factor  consisting  of  this  third  term  with  changed 
sign,  divided  by  the  first  term  of  (II) ;  and  finally,  under  this  line 
would  be  written  the  products  of  the  terms  of  (III),  beginning  with 
the  second,  by  a  factor  consisting  of  this  second  term  with  its  sign 
changed,  divided  by  the  first  term  of  (III) ;  whence  the  sum  of  the 
four  lines  thus  obtained  would  be  (IV).  This  procedure  could  be 
continued  through  any  number  of  equations. 
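For readers who prefer an algorithmic statement, the following is a minimal sketch in Python; it is illustrative only and not part of the original computing scheme, the function and variable names being assumptions of the sketch.  Each normal equation is stored as a full symmetric row with its constant and check-term appended, and the factors are formed exactly as in the rule above, the coefficient with its sign changed, divided by the leading term of the derived equation.

```python
def eliminate(normals):
    """normals[i] = [coefficients..., constant, check-term].
    Returns the derived equations (I), (II), (III), ..., each beginning
    with the unknown of the same number."""
    derived = [list(normals[0])]                 # (I) is simply the first normal equation
    for k in range(1, len(normals)):
        line = list(normals[k])
        for j, d in enumerate(derived):
            factor = -line[j] / d[j]             # sign changed, divided by the leading term
            for col in range(j, len(line)):      # the check column is treated like the rest
                line[col] += factor * d[col]
        derived.append(line)
    return derived

# The example above:  6x - 2y + 3z + 2 = 0, etc., with their check-terms.
# (No rounding is done here, unlike the hand computation.)
normals = [[ 6, -2,  3,  2,  9],
           [-2,  3, -4, -3, -6],
           [ 3, -4,  3,  1,  3]]
for eq in eliminate(normals):
    # sum-check of Art. 59: coefficients plus constant equal the check-term
    assert abs(sum(eq[:-1]) - eq[-1]) < 1e-9
```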

61.  The  mechanical  character  of  this  scheme  of  elimination 
is  apparent  from  the  foregoing  explanation.  Each  of  the  main 
steps  accomplishes  the  elimination  of  one  unknown  more  than  the 
preceding  step  did,  and  results  in  the  next  derived  equation.  Each 
step  consists  of  the  sum  of  its  normal  equation  and  as  many  others 
as  there  are  derived  equations  already  formed,  including  (I); 
so  that  the  successive  steps  embrace  the  sums  of  two,  three,  four, 
five,  or  more  lines,  up  to  the  number  of  unknowns  involved  in  the 
problem.  For each step, the numerators of the factors, with
opposite  signs,  are  found  in  one  column,  namely,  the  one  con- 
taining the  first  term  of  the  normal  equation  as  written,  that  is, 
the  one  corresponding  to  that  equation,  as  third  column  for  third 



equation,  etc.;   the  denominators  are  the  first  terms  of  the  corre- 
sponding derived  equations. 

62.  Notes  and  Suggestions.  The  arrangement  of  the  work 
in  columns  is  essential  to  mechanical  efficiency  and  "  Data  Sheets  " 
are  convenient  for  this  purpose.  Ruled  horizontal  lines  including 
each  derived  equation  make  it  prominent  for  quick  reference. 
By  writing  the  algebraic  signs  of  each  line  of  products  before 
writing  the  numbers  themselves,  errors  in  sign  may  be  avoided 
to  a  great  extent.  In  each  line,  all  the  signs  will  be  the  same  as 
those  of  the  corresponding  derived  equation,  or  all  opposite  to 
them.  It will be noted that the first term in each of these lines of
products  is  always  negative  owing  to  changing  the  sign  in  the 
numerator  of  each  factor;  this,  also,  affords  a  check  on  the  signs. 
Unavoidable  discrepancies  in  the  last  figure  of  the  check-term,  due 
to  remainders,  should  be  removed  by  arbitrarily  correcting  the 
check-term  before  proceeding  with  the  next  step  in  the  elimina- 
tion; this  is  best  done  by  drawing  a  line  through  the  erroneous 
figure  and  writing  the  correct  one  just  above  it.  If  the  check  is 
exactly  satisfied,  it  should  be  as  carefully  noted  with  a  check-mark 
in  order  to  avoid  uncertainty. 

63.  Values  of  the  Unknowns.  The  process  of  elimination  hav- 
ing been  completed  and  the  derived  equations  checked  as  formed, 
it  remains  to  determine  the  last  unknown  from  the  last  equation  and 
to  substitute  back  in  the  preceding  derived  equations  in  reverse 
order,  to  obtain  the  other  unknowns;  x  being  finally  determined 
from (I).  If there be many equations, this process may be facil-
itated by tabulating the products instead of indicating the work
as in Art. 60.  A table is begun for each unknown by writing first,
with changed sign, the constant term of the derived equation from
which that unknown is to be obtained.  Below this are placed in
succession the products of the unknowns, as computed, by their
respective coefficients, with signs changed, in that equation.
The sum of these quantities divided by the first coefficient in the
equation gives the value of that unknown.  The advantage lies
in the fact that each unknown, as computed, is multiplied into
all of its coefficients in the preceding derived equations, in suc-




cession,  rather  than  separately  as  needed  for  each  case;  thus,  a 
slide-rule, a multiplication table, or a machine can be used with
profit.  Applying  this  arrangement  to  the  problem  in  Art.  60, 
we  have: 


                 x             y             z
Constants      -2.00         +2.30         +3.00
z-terms        +3.75         -3.75
y-terms        -1.26
Sum            +0.49         -1.45         +3.00
Divisor        +6.0          +2.3          -2.4
               +0.08 = x     -0.63 = y     -1.25 = z

It  is  often  convenient  to  write  these  little  tables  in  the  proper  col- 
umns just  to  the  left  of  the  computation  of  the  derived  equations, 
that  is,  left  of  the  elimination,  as  it  is  not  necessary  to  write  the 
numbers  of  the  various  equations  after  the  method  has  been 
learned. 
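The back-substitution may be sketched in the same illustrative way, again in Python with names assumed for the sketch: each unknown is the constant term with changed sign, less the products of the later unknowns by their coefficients, divided by the leading coefficient, exactly as in the little tables above.

```python
def back_substitute(derived):
    """derived[j] = [coefficients..., constant], with the leading non-zero
    coefficient of equation j standing in column j.  Returns x1, ..., xn."""
    n = len(derived)
    x = [0.0] * n
    for j in reversed(range(n)):
        total = -derived[j][n]                   # constant with changed sign
        for col in range(j + 1, n):
            total -= derived[j][col] * x[col]    # later unknowns times coefficients, signs changed
        x[j] = total / derived[j][j]             # divided by the first coefficient
    return x

# Derived equations (I), (II), (III) of the example, as rounded in the text:
derived = [[6, -2,   3,    2],
           [0, 2.3, -3.0, -2.3],
           [0, 0,   -2.4, -3.0]]
x, y, z = back_substitute(derived)               # about +0.08, -0.63, -1.25
```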

64.  Final Check of the Unknowns.  The use of the check-
terms  ends  with  the  formation  of  the  last  derived  equation.  The 
entire  solution,  however,  including  the  values  of  the  unknowns, 
may  be  checked  by  substituting  those  values  in  the  normal  equa- 
tions other  than  the  first.  If  there  were  no  neglected  remainders 
in  the  computation,  all  these  equations  should  be  exactly  satis- 
fied. Therefore,  in  any  actual  case,  the  discrepancies  should  be 
very  small. 

When several equations are to be tested at once, the method of
tabulation explained in the last article may be used to advantage.
Obviously, the sum of each table should be very nearly zero if
no mistake has been made.  This will be illustrated in a later
chapter. 

65.  Refinement  of  the  Computations.  The  number  of  decimal 
places to be retained throughout the elimination will depend upon
the number desired in the resulting unknowns.  The labor of
solution increases rapidly with the size of the quantities involved



so that it is very important to keep them as small as practicable.
The  discrepancies  due  to  neglected  remainders  during  the  elimina- 
tion will  seldom  amount  to  more  than  one  or  two  in  the  last  place 
and  these  will  be  revealed  by  the  checks.  Similar  ones  will 
occur  in  the  values  of  the  unknowns,  resulting  in  their  failure  to 
check  exactly  when  substituted  in  the  original  normal  equations. 
However,  as  the  final  values  of  the  unknowns  must  be  regarded 
as  but  approximations  to  the  correct  values,  which,  of  course,  are 
unattainable,  it  cannot  be  objectionable  to  alter  the  last  figure 
of  an  unknown  arbitrarily,  to  make  it  check  or  to  make  it  con- 
sistent with  the  others,  and  this  is  sometimes  necessary.  There- 
fore, it  is  unwise  to  carry  the  whole  computation  one  or  two  places 
farther  merely  to  secure  an  exact  check  in  a  certain  place  without 
forcing  it. 

As  a  general  rule,  it  is  well  to  carry  the  observations  two 
places  beyond  the  last  one  which  is  regarded  as  known  with  cer- 
tainty. For  example,  each  reading  will  have  its  last  figure  the 
result  of  estimation,  to  some  extent,  the  preceding  one  being  cer- 
tain; then  the  mean  of  several  readings  would  be  carried  one  place 
farther.  This  should  determine  the  degree  of  refinement  to  which 
the  normal  equations  and  the  elimination  should  be  carried, 
the  coefficients  of  the  observation  equations  being  modified  as 
shown  in  Arts.  48  and  49  so  as  to  be  consistent  in  size.  The 
unknowns  may  then  be  carried  out  one  place  farther,  the  last 
figure  to  be  retained  or  rounded-off  to  the  preceding  one  as  pre- 
ferred. However,  this  is  largely  a  matter  of  judgment  derived 
from experience.  The beginner is too apt to carry his work
farther than is justified by the precision of the observations.
He may be guided by the rule to carry the computations one
place farther than the given data;  this is ample.

66.  Mechanical Aids in the Solution.  We have seen in Art. 45
how the formation of the normal equations may be facilitated by
the use of tables and mechanical devices.  In the solution of the
equations, these articles are even more useful, perhaps, especially
the slide-rules, as they admit of multiplying a series of numbers
by the quotient of two other numbers at one setting of the rule.
The Thatcher, in particular, is very convenient, and is good for



four  significant  figures,  but  the  ordinary  10-in.  rule  is  sufficient 
when  but  three  figures  are  used  and  is  commonly  at  hand.  The 
computing  machines,  while  necessary  for  large  numbers  are  less 
advantageous  for  small  ones,  but  they  have  the  great  advantage 
over  the  slide-rules  of  causing  little  or  no  straining  of  the  eyes. 


CHAPTER  V 

OBSERVATIONS  OF  DEPENDENT  QUANTITIES: 
CONDITIONED  OBSERVATIONS 

67.  Dependent  Quantities.  In  the  preceding  chapters,  the 
quantities  observed  or  determined  from  the  observations  have  been 
independent,  that  is,  any  one  or  more  might  vary  without  causing  a 
corresponding  change  in  the  others.  Thus,  in  the  determination 
of  time  by  star  transits  in  Art.  50,  the  constants  of  the  transit 
instrument  cannot  be  affected  by  any  change  in  the  chronometer 
correction.  Now,  however,  we  shall  consider  a  different  class  of 
quantities,  and  one  which  is  of  particular  importance  to  engineers, 
inasmuch  as  it  includes  their  most  complex,  but  at  the  same  time, 
most  useful,  problems  in  the  adjustment  of  observations.  In 
this  second  division  of  the  subject,  the  observed  quantities  are  not 
independent  of  one  another,  but  are  inter-related  by  certain 
theoretical  requirements,  called  Conditions,  which  their  adjusted 
or  adopted  values  must  rigidly  satisfy.  The  adjustment,  then, 
consists  in  determining  the  best  set  of  values  for  the  observed  quan- 
tities which  shall  exactly  satisfy  the  prescribed  conditions. 

For  example,  if  the  three  angles  of  a  plane  triangle  be  measured 
with  a  protractor,  they  must  be  so  adjusted  that  the  sum  will  be 
exactly  180°.  Or,  if  the  three  angles  of  a  triangle  in  the  field  be 
measured  with  a  transit  or  theodolite,  they  must  be  adjusted  by 
the  application  of  a  small  correction  to  each,  so  that  the  sum  of 
the  adjusted  values  will  be  180°  plus  the  spherical  excess.^     Also, 

^ The earth is approximately spheroidal but the figures in triangulation
are considered as spherical for convenience in computation.  The observed
horizontal angles, then, are those of spherical triangles, since the plumb-lines
at the different stations are convergent and the horizontal planes of the
angles are neither coincident nor parallel.  Very small triangles, however,
may be considered plane, as the spherical excess is but 1" in a triangle con-
taining 75 square miles.


the  horizontal  angles  completing  the  horizon  at  a  station  must  be 
adjusted  so  that  the  sum  will  be  360°;  and  the  differences  of  ele- 
vation in  a  closed  circuit  of  levels  must  be  adjusted  so  that  their 
algebraic  sum  will  be  zero,  when  proceeding  continuously  around 
the  figure,  that  is,  clockwise  or  counterclockwise. 

68.  The  observations  to  be  adjusted  will  have  been  made 
independently,  as  a  rule,  as  in  the  case  of  a  circuit  of  levels  made 
up  of  several  differences  of  elevation  between  successive  bench- 
marks, each  difference  of  elevation  being  determined  independently 
of  the  others.  However,  so-called  "  observations,"  entering  into 
an  adjustment,  may  never  have  been  actually  observed  but  may 
be  the  results  of  computation  or  of  a  previous  adjustment  of  actual 
observations.  For  example,  an  angle  of  a  triangle  may  have  been 
determined  by  the  addition  or  subtraction  of  two  or  more  observed 
angles,  or  from  a  local  adjustment  of  the  angles  at  that  station. ^ 
Also,  as  stated  in  Art.  26,  each  observation  may  be  the  result  of 
several  elemental  observations  or  readings;  in  fact,  this  is  usually 
the  case  with  dependent  quantities.  Generally,  too,  these  obser- 
vations are direct ones.  In any event, however, they will be
adjusted  as  direct  observations  of  dependent  quantities,  as  this  is 
the  most  convenient  and  practical  method. 

69.  The  weights,  in  the  general  case,  will  be  unequal,  of  course. 
They  are  obtained  as  indicated  in  Art.  32,  from  the  number  of 
observations, from theory, or by estimation.  They may be
determined  from  the  nature  of  the  observed  quantities,  inde- 
pendently of  the  observations  themselves,  although  subject  to 
modification  in  every  case  when  the  circumstances  are  unusual. 
The  basis  of  weights  in  observations  of  angles  is  usually  the 
number  of  observations;  in  leveling,  the  lengths  of  the  lines,  the 
number  of  instrument  stations,  etc.,  may  indicate  the  weights. 

70.  Conditions.  The  nature  of  the  conditions  which  are  to  be 
satisfied  by  the  adjusted  values  of  the  observed  quantities  will 
depend  upon  the  character  of  the  problem.     The  only  limitations 

1  It  will  be  seen  later  on  that  it  is  often  convenient  to  make  two  or  more 
small, partial adjustments instead of a single large one, so that it frequently
happens  that  the  given  data  to  be  adjusted  are  the  results  of  a  previous  adjust- 
ment. 



upon  them  are:  (1)  that  their  number  must  be  less  than  that  of  the 
observations,  as  otherwise  a  sufficient  number  of  them  could  be 
solved  as  simultaneous  equations  so  as  to  determine  the  unknowns 
directly,  without  an  adjustment;  and  (2)  that  they  must  be 
independent  of  one  another,  that  is,  no  condition  may  be  included 
twice  in  the  same  series.  Furthermore,  the  correctness  of  the 
conditions  is  not  essential  to  the  adjustment,  itself,  as  this  can  be 
carried  out  so  as  to  force  the  unknowns  to  satisfy  almost  any  arbi- 
trary or  unreasonable  condition;  but  a  correct  adjustment  requires 
that  the  conditions  be  correctly  stated.  If  an  error  be  made  in 
the  statement  of  a  condition,  and  the  proper  method  of  adjust- 
ment be  used,  the  unknowns  would  satisfy  the  erroneous  condi- 
tion, and  the  error  might  not  be  discovered  until,  as  a  final  check, 
the  adjusted  values  were  tested  by  substitution  in  the  original 
conditions.  Therefore,  it  is  well  to  exercise  great  care  in  the 
statement  of  each  condition,  and  to  be  sure  that  all  of  the  neces- 
sary conditions,  but  no  others,  be  included  in  an  adjustment. 

71.  Number  of  Conditions.  It  is  evident,  in  general,  that  a 
certain number of observations would be necessary for the deter-
mination of  a  certain  number  of  quantities,  if  the  observations 
were  strictly  correct, — ideal.  If  extra  observations  are  made, 
beyond this necessary number, each of these would furnish a check
upon  the  work,  that  is,  a  condition  to  be  satisfied.  The  rule,  then, 
could  be  stated  that  the  number  of  independent  conditions  of  a 
certain  kind,  involved  in  a  given  series  of  observations,  would  be 
equal  to  the  number  of  extra  observations  of  the  corresponding  kind, 
that  is,  the  excess  over  the  necessary  number  of  ideal,  correct 
observations. 

Let Fig. 2 represent a system of levels connecting the bench-
marks, A, B, C, D, E, and F, the numbers in parentheses repre-
senting the lines over which the differences of elevation are
observed.  If the observations were absolutely correct, the dif-
ferences of elevation would be completely determined by the lines,
(1), (2), (3), (4), and (5).  Then if (6) were added, it would fur-
nish one check, and the condition that the whole outer circuit
should  close  to  zero,  if  the  signs  of  the  separate  lines  were  so 
changed,  if  necessary,  as  to  indicate  running  continuously  around 




the figure.  By adding the line (7), between C and F, another
check  is  obtained,  with  the  corresponding  condition  that  the 
circuit  A-B-C-F  should  close,  or  that  the  remainder  of  the  figure, 
namely,  C-D-E-F,  should  close.  Having  taken  the  closure  of 
the  whole  figure  as  the  first  condition,  onlv  one  of  the  two  smaller 


Fig.  2.     System  of  Levels 

circuits  may  be  used  as  an  independent  condition,  since  the  other 
small  circuit  would  then  necessarily  close,  being  the  difference  of 
the  other  two  circuits.  Thus,  as  stated  in  the  above  rule,  each 
extra  observation  gives  one  independent  condition.  It  is  obvious 
that  any  five  lines  connecting  the  six  benchmarks  could  be  con- 


Fig. 3.  Horizontal Angles at a Station

sidered as the original, necessary observations, and that only two
of the three circuits could be used for the two conditions.
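The counting is easily mechanized.  The short Python fragment below is illustrative only; the pairing of the lines of Fig. 2 with their benchmarks is assumed here for the example (only line (7), between C and F, is stated explicitly in the text), and the count itself, lines less benchmarks plus one for a connected net, is the rule restated later in Art. 77.

```python
# Lines of a connected level net, given as (from, to) pairs of benchmarks.
# All pairings other than (7) = C-F are assumed purely for illustration.
lines = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
         ("E", "F"), ("F", "A"), ("C", "F")]
benchmarks = {bm for line in lines for bm in line}

conditions = len(lines) - len(benchmarks) + 1    # one condition per extra observation
print(conditions)                                # -> 2, as found above
```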

As another example, let Fig. 3 represent a series of horizontal
angles around a station, connecting the five signals, L, M, N, O, and P,



each  angle  observed  being  indicated  by  a  number  in  parentheses 
and  a  corresponding  arc.  Four  angles  would  be  sufficient  to 
connect  the  five  signals,  so  that  there  are  three  extra  observations 
and,  therefore,  three  independent  conditions.  If  (1),  (2),  (5), 
and (6), be regarded as the necessary angles, (3) would give the
condition that (1)+(2)+(3)-(6) should equal zero;  (4) would
complete the horizon with the requirement that (4)+(5)+(6)
should equal 360°;  and (7) would close the horizon, likewise, with
(6)-(1).  Different combinations could be used, as well, for the
three conditions, such as (1)+(4)+(5)-(7) = 0, etc., and these
would  be  independent  if  each  of  the  three  extra  observations  were 
used  in  one,  and  only  one,  condition.  It  is  easily  seen  that  the 
Number  of  conditions  = 

(Number  of  angles  observed)  —  (Number  of  signals)  +  1. 

72.  Statement  of  Conditions.  Although  the  conditions,  as 
functions  of  the  observed  quantities,  might  be  very  complex  in 
form,  involving  the  higher  powers,  etc.,  still  in  the  problems  with 
which  the  engineer  is  usually  concerned,  they  are  of  the  linear 
form or easily reducible to that form.  Therefore, we shall con-
fine our  attention  to  these  simpler  conditions  and  consider  them 
all  as  in  the  linear  form. 

The  conditions  express  the  relations  which  must  be  rigidly 
satisfied by the final, adjusted values of the observed quantities.
Let these best, adopted values be represented by V1, V2, V3, . . . Vn;
the corresponding observations, by M1, M2, M3, . . . Mn;  and the
small corrections to be added to the observations to obtain the
adjusted values by v1, v2, v3, . . . vn, in which n is the number of
observations, which is also, in this case, the number of observed
quantities.  Then V1 = M1 + v1,  V2 = M2 + v2,  etc.  The original
conditions  will  be  stated  in  the  following  Condition  Equations: 

a1V1 + a2V2 + a3V3 + . . . + anVn + a0 = 0

b1V1 + b2V2 + b3V3 + . . . + bnVn + b0 = 0          (56)

c1V1 + c2V2 + c3V3 + . . . + cnVn + c0 = 0

in which the a's, b's, c's, etc., are known constants.  There will be


58  PRACTICAL  LEAST  SQUARES 

as many of these equations, of course, as there are independent
conditions in the problem, and as many V's as observed quantities.
As  the  observed  values  approximate  closely  to  the  V's  in  all 
observations  which  are  carefully  made,  they  will  nearly  satisfy 
the  conditions  (56).  Therefore,  substituting  M's  for  V's  in  (56) 
will  result  in  a  small  quantity,  q,  instead  of  zero,  as  the  value  of 
each  condition  function,  thus: 

a1M1 + a2M2 + a3M3 + . . . + anMn + a0 = q1

b1M1 + b2M2 + b3M3 + . . . + bnMn + b0 = q2          (57)

c1M1 + c2M2 + c3M3 + . . . + cnMn + c0 = q3


q being the amount by which the observations fail to satisfy a
condition  equation,  that  is,  it  is  the  closure  error  of  each  condition 
equation. 

Now substitute for V, in the equations (56), the value M + v,
and  we  have, 

a1v1 + a2v2 + . . . + anvn + (a1M1 + a2M2 + . . . + anMn + a0) = 0

b1v1 + b2v2 + . . . + bnvn + (b1M1 + b2M2 + . . . + bnMn + b0) = 0          (58)

c1v1 + c2v2 + . . . + cnvn + (c1M1 + c2M2 + . . . + cnMn + c0) = 0

in  which  the  parenthetical  expressions  are  the  values  of  q  in  (57). 
Therefore,  the  equations  (58)  take  the  form, 

a1v1 + a2v2 + . . . + anvn + q1 = 0

b1v1 + b2v2 + . . . + bnvn + q2 = 0          (59)

c1v1 + c2v2 + . . . + cnvn + q3 = 0


These are the Reduced Condition Equations.  They state the
required relation between the corrections to the observations and
the closure errors of the original conditions.  These corrections
are the unknowns which are to be obtained as a result of the adjust-
ment. The  reduced  conditions  thus  involve  much  smaller  quan- 
tities than  the  original  ones,  (56),  and  are  more  convenient  to 
handle. 
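In computational form the closure errors q of (57) are simply the condition functions evaluated at the observed values.  The fragment below is an illustrative Python sketch; the data used are those of the leveling example worked out later in Art. 77.

```python
def closure_errors(coeffs, constants, observed):
    """coeffs[i][j]: coefficient of the j-th observed quantity in condition i;
    constants[i]: the a0, b0, c0, ... of (56), often zero."""
    return [sum(c * m for c, m in zip(row, observed)) + c0
            for row, c0 in zip(coeffs, constants)]

coeffs = [[1, -1, 1,  1, -1, -1, 0,  0],     # the three level circuits of Art. 77
          [1, -1, 0,  0,  0, -1, 1,  0],
          [0,  0, 0, -1,  1,  0, 0, -1]]
observed = [2.18, 5.06, -3.47, 1.32, 4.70, -9.82, -6.86, 3.46]
print(closure_errors(coeffs, [0, 0, 0], observed))   # -> about +0.09, +0.08, -0.08
```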



Comparing the two sets of equations, (56) and (59), we note
that they are alike in form but differ only in the substitution of
the small v's for the V's and the q's for the constant terms, a0,
b0, c0, etc.  It is usually convenient, therefore, to write the con-
ditions in the reduced form in the first place, especially as the
constants, a0, b0, c0, etc., are zero in most of our problems.  How-
ever, if the original equations be omitted, the sign of q should be
determined with great care.  It should be the same as the error
of  closure  of  that  condition  and  opposite  to  that  of  the  correc- 
tions, in  general.  For  example,  if  the  sum  of  the  angles  closing 
the  horizon  at  a  station  be  greater  than  360°,  q  would  be  positive, 
since  the  corrections  to  the  angles,  generally,  would  be  negative 
so  as  to  reduce  their  sum  to  360°.  For  the  beginner,  nevertheless, 
it  is  safer  to  write  the  original  conditions  first,  so  as  to  avoid  this 
difficulty  with  the  signs.  It  should  be  noted  that  if  an  adjust- 
ment were  carried  out  completely  with  the  signs  of  all  the  q's 
incorrect,  it  would  result  in  a  set  of  corrections  having  the  wrong 
signs  throughout,  which  could  be  changed  without  altering  the 
adjustment  computation  in  the  least. 

73.  Adjustment by the Method of Correlates.  The final,
adjusted values of the observed quantities must exactly satisfy
the prescribed conditions of the problem, and must be, moreover,
the best, or most probable, values, according to the Theory of
Least Squares, which will so satisfy them.  Therefore, the sum
of the weighted squares of the corrections, which have the nature
of residuals, must be a minimum, as in Art. 34.  That is,

[wv²] = w1v1² + w2v2² + w3v3² + . . . + wnvn² = a minimum          (9)

which must be satisfied simultaneously with the conditions (59).
Multiplying the condition equations of (59) in succession by
the factors, -2A, -2B, -2C, etc., respectively,

-2a1Av1 - 2a2Av2 - 2a3Av3 - . . . - 2anAvn - 2q1A = 0

-2b1Bv1 - 2b2Bv2 - 2b3Bv3 - . . . - 2bnBvn - 2q2B = 0          (60)

-2c1Cv1 - 2c2Cv2 - 2c3Cv3 - . . . - 2cnCvn - 2q3C = 0




Adding  these  equations  to  (9)  and  collecting  the  coefficients 
of  the  separate  v's,  we  have  the  requirement  that 

w1v1² - 2v1(a1A + b1B + c1C + . . . ) +

+ w2v2² - 2v2(a2A + b2B + c2C + . . . ) +

+ w3v3² - 2v3(a3A + b3B + c3C + . . . ) +          (61)

+ . . . . . . . . . . . . +

+ wnvn² - 2vn(anA + bnB + cnC + . . . )

- 2(q1A + q2B + q3C + . . . ) = a minimum

For  the  minimum,  the  derivative  of  this  expression  with  respect  to 
each  of  the  v's  must  be  placed  equal  to  zero.     Therefore, 

2w1v1 - 2(a1A + b1B + c1C + . . . ) = 0

2w2v2 - 2(a2A + b2B + c2C + . . . ) = 0          (62)

. . . . . . . . . . . . . . . .

2wnvn - 2(anA + bnB + cnC + . . . ) = 0

whence,

v1 = (a1A + b1B + c1C + . . . )/w1

v2 = (a2A + b2B + c2C + . . . )/w2          (63)

. . . . . . . . . . . . . . . .

vn = (anA + bnB + cnC + . . . )/wn


Substituting these values of the v's in the condition equations (59),
and combining the coefficients of A, B, C, etc., we obtain the
Normal  Equations: 


[aa/w]A + [ab/w]B + [ac/w]C + . . . + q1 = 0

[ab/w]A + [bb/w]B + [bc/w]C + . . . + q2 = 0          (64)

[ac/w]A + [bc/w]B + [cc/w]C + . . . + q3 = 0

Upon inspection of these equations, there is seen to exist among
them the same symmetry as shown, in Art. 44, among those for



the  adjustment  of  indirect  observations.  The  diagonal  coefficients, 
down  to  the  right,  are  sums  of  squares  and,  therefore,  always  posi- 
tive, and  the  other  coefficients  are  symmetrical  about  this  diagonal. 
However,  the  weights,  w,  occur  here  in  the  denominators  of  the 
coefficients,  instead  of  in  their  numerators  as  in  the  previous  case, 
and  the  constant  terms,  q,  are  the  original  closing  errors  of  the 
condition  equations,  in  order.  The  number  of  the  normal  equa- 
tions will  always  be  the  same  as  that  of  the  conditions,  so  the 
number of the q's will be the same.

The  factors,  A,  B,  C,  etc.,  are  obviously  the  same  in  number 
as  the  conditions,  and  they  correspond  to  the  various  condition 
equations,  in  order.  They  are  called  Correlates  or  Correlatives, 
and  are  the  unknowns  of  the  normal  equations,  from  which  they 
are  obtained  by  a  solution  according  to  the  methods  of  the  last 
chapter. 

Substituting  the  values  of  the  correlates,  resulting  from  the 
solution  of  the  normal  equations,  in  the  equations  (63),  we  obtain 
the desired corrections, v1, v2, v3, etc., which, applied to the cor-
responding observations, M1, M2, M3, etc., give the best values,
V1, V2, V3, etc., of the observed quantities.

Thus,  the  process  of  adjustment  may  be  stated  in  the  rule: 
Write  the  condition  equations  involving  the  unknown  corrections 
to the observations, and from them an equal number of normal
equations, the solution of which gives the values of the correlates,
from which the desired corrections to the observations are com-
puted to obtain the best values of the observed quantities.  The
conditions (59) are first written, then the normal equations (64)
are formed and solved, and lastly, the substitution of the cor-
relates in (63) gives the desired corrections.
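The whole rule can also be written out compactly in code.  The following Python sketch is illustrative only, with assumed names, and the solver shown is plain elimination without the abridgments of Chapter IV; it forms the normal equations (64), solves them for the correlates, and returns the corrections (63).

```python
def solve(A, b):
    """Plain Gaussian elimination; adequate for the small symmetric systems here."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for col in range(k, n + 1):
                M[i][col] -= f * M[k][col]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][col] * x[col] for col in range(k + 1, n))) / M[k][k]
    return x

def adjust_by_correlates(coeffs, recip_w, q):
    """coeffs[i][j]: coefficient of v_j in condition i;  recip_w[j] = 1/w_j;
    q[i]: closure error of condition i.  Returns the corrections v."""
    m, n = len(coeffs), len(recip_w)
    # coefficients [aa/w], [ab/w], ... of the normal equations (64)
    N = [[sum(coeffs[i][j] * coeffs[k][j] * recip_w[j] for j in range(n))
          for k in range(m)] for i in range(m)]
    correlates = solve(N, [-qi for qi in q])     # (64): N (A, B, C, ...) = -q
    # corrections (63): v_j = (1/w_j)(a_j A + b_j B + c_j C + ...)
    return [recip_w[j] * sum(coeffs[i][j] * correlates[i] for i in range(m))
            for j in range(n)]
```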

74.  Observations  of  Equal  Weight.  By  placing  the  weights 
equal to unity, in equations (63) and (64), we have the simpler
forms:

v1 = a1A + b1B + c1C + . . .

v2 = a2A + b2B + c2C + . . .          (65)

vn = anA + bnB + cnC + . . .




and  the  normal  equations, 

[aa]A + [ab]B + [ac]C + . . . + q1 = 0

[ab]A + [bb]B + [bc]C + . . . + q2 = 0          (66)

[ac]A + [bc]B + [cc]C + . . . + q3 = 0

Here  the  coefficients  are  the  same  and  occur  in  the  same  order  as 
those  of  the  equations  (22)  in  Art.  42,  but  the  constant  terms  are 
simpler  as  they  may  be  taken  directly  from  the  conditions  with- 
out additional  computation  or  combination. 

75.  Controls  or  Checks  upon  the  Computation.  The  forma- 
tion of  the  normal  equations  from  the  conditions  is  conveniently 
checked  by  means  of  sum-terms  similar  to  those  explained  in  Art. 
43  for  indirect  observations.  In  this  case,  however,  the  sum- 
check  does  not  include  the  constant  terms,  q,  which  are  not  formed 
in  the  same  manner  as  the  coefficients.  Therefore,  the  check- 
equations  have  the  form: 

a1 + b1 + c1 + . . . = s1          (67)

which, multiplied by a1/w1, becomes:

a1a1/w1 + a1b1/w1 + a1c1/w1 + . . . = a1s1/w1          (68)

Adding all such equations, we have,

[aa/w] + [ab/w] + [ac/w] + . . . = [as/w]          (69)


of which the left-hand member is the sum of the coefficients of the
first normal equation.  A similar equation can be written for each
of the other normal equations.  Thus, the sum-terms check the
formation of the coefficients of the normal equations.

After the above checks have been verified, the constant term of
each normal equation may be added to its sum-term, so as to form
a check-term which shall include the constant, for use during the
solution of the equations, as described in Art. 59.

The computation of the correlates is checked by substituting
them in the normal equations other than the first.  The correc-
tions are checked by substitution in the condition equations.




And  finally,  the  resulting  values  of  the  observed  quantities  may 
be  substituted  in  the  original  conditions  as  a  test  of  the  correct- 
ness of  the  entire  adjustment.  This  is  the  ultimate  test  of  the 
work  and  should  never  be  neglected.  Beginners,  in  particular, 
should  make  use  of  all  available  checks. 

76.  Tabular  Forms  for  Computations.  By  arranging  the  given 
data  and  the  condition  equations  in  the  form  of  a  table,  the  forma- 
tion of  the  normal  equations  and  the  subsequent  computation 
of  the  unknown  corrections  will  be  greatly  facilitated.  As  the 
weights  occur  in  the  denominators  of  the  coefficients,  it  is  con- 
venient to  use  their  reciprocals  throughout  the  computation. 
The  following  form  is  recommended: 

Form  for  Condition  Equations 


 (v)      v1      v2      v3      v4      v5      v6      v7     Const.
(1/w)    1/w1    1/w2    1/w3    1/w4    1/w5    1/w6    1/w7     (q)
 (a)      a1      a2      a3      a4      a5      a6      a7      q1
 (b)      b1      b2      b3      b4      b5      b6      b7      q2
 (c)      c1      c2      c3      c4      c5      c6      c7      q3
 (s)      s1      s2      s3      s4      s5      s6      s7
                                                                          (70)


The parenthetical number at the left of each row is the symbol
for the quantities in that row.  The reciprocals of the weights are
written in their proper columns just below the corresponding
v's, while the sum of the coefficients is written at the foot of each
column.  It frequently happens that most of these coefficients,
a, b, c, etc., are unity, so that the formation of the normal equation
coefficients can be performed mentally, especially when the weights
are equal.

The solution of the normal equations and the substitution back
through the derived equations to obtain the correlates will be
carried out by the abridged method of the last chapter in the form
there given.




The computation of the corrections, also, may be tabulated
conveniently,  as  follows : 


Computation   of  the   Corrections 


            v1      v2      v3      v4      v5      v6      v7
          1/w1    1/w2    1/w3    1/w4    1/w5    1/w6    1/w7
 (aA)      a1A     a2A     a3A     a4A     a5A     a6A     a7A
 (bB)      b1B     b2B     b3B     b4B     b5B     b6B     b7B
 (cC)      c1C     c2C     c3C     c4C     c5C     c6C     c7C
 (Sum)
 (Sum/w)
 (v)        v1      v2      v3      v4      v5      v6      v7
                                                                          (71)


The first row is obtained from the first row of (70) by multiplying
the a's in succession by the first correlate, A;  the second row, by
multiplying the b's of (70) by the second correlate, B, etc.  Mul-
tiplying each sum by the reciprocal of its weight gives the prelim-
inary values of the v's, carried out, as were the correlates, one
decimal place farther than is required in the final corrections, which
are then written in the last row, taking the nearest figure in the
next to the last place.  These v's would be tested in the condition
equations, and modified slightly, if necessary, so as to satisfy them
exactly, the modifications being shown by canceling the changed
figures and writing the adopted ones just to the right and above.
Finally, the adopted values of the observed quantities would be
tested in the original conditions, which they should rigidly
satisfy.

77.  Example:  Adjustment of Levels.  The following observed
differences of elevation between the benchmarks, A, B, C, etc.,
will now be adjusted to the nearest hundredth, in accordance with
the foregoing method.  In Fig. 4, each line of levels is numbered, in
parentheses, and an arrow shows the direction of running the levels



over  it,  or  rather,  the  direction  of  stating  it,  as  it  may  have  been 
run  in  either  direction  or  both  directions,  but  can  be  stated 
with  only  one  sign  which  must  correspond  to  a  certain  direction, 
plus  or  minus,  according  as  the  final  benchmark  is  higher  or  lower 
than  the  initial  one.  Lines  like  that  between  C  and  G,  which  are 
parts  of  no  complete  circuit,  do  not  enter  into  the  adjustment  in 
any  way.     The  observed  differences  of  elevation  are  as  follows: 

(1)  +2.18         (3)     -3.47         (5)     +4.70         (7)     -6.86 

(2)  +5.06         (4)     +1.32         (6)     -9.82         (8)     +3.46 

Let  the  weights  of  the  lines  (7)  and  (8)  be  2  each,  and  those  of  the 
others,  unity,  or,  for  simplicity  in  the  use  of  reciprocals,  let  the 
former  be  unity  and  the  others,  one-half,  giving  1  and  2  for  the 
reciprocals. 


Fig. 4.  System of Levels


As  shown  in  Art.  71,  each  complete  circuit  furnishes  the  con- 
dition that  the  sum  of  its  adjusted  differences  of  elevation  shall 
equal zero when given the proper signs as if run continuously around
the circuit, clockwise or counter-clockwise.  Also, the number of
independent conditions is the same as the number of extra observa-
tions above those necessary to connect the given benchmarks.
These necessary lines may be drawn, one at a time, starting at one
benchmark, as long as a new benchmark is added for each line
drawn.  Then each line added to the figure, between two bench-
marks already shown, gives one independent condition which
should always be written so as to include that line.  When the
complete figure has been reproduced on paper, in this manner,
omitting no lines, all of the necessary, independent conditions for




the  adjustment  will  be  indicated.  Their  number  may  be  verified 
by  the  rule  that  it  is  the  same  as  the  number  of  extra  observations. 
Also  from  the  above  construction, 

Number  of  conditions  =  (Number  of  lines) 

—  (Number  of  bench-marks) +  1. 

Assuming  the  lines  (1),  (2),  (3),  (4),  and  (5)  to  be  those  neces- 
sary to  connect  all  of  the  benchmarks,  we  write  a  condition  for 
each  of  the  remaining  lines,  namely,  (6),  (7),  and  (8).  It  is 
essential,  of  course,  that  all  of  the  lines  which  form  circuits  should 
appear  in  the  conditions.     Thus  we  obtain  the  original  conditions, 

+V1 - V2 + V3 + V4 - V5 - V6 = 0

+V1 - V2 + V7 - V6 = 0          (72)

+V5 - V4 - V8 = 0

The  minus  signs  result  from  changing  the  directions  of  the  arrows 
so  as  to  be  continuous  around  each  circuit.  It  is  not  necessary 
that  all  the  circuits  be  traversed  in  the  same  direction,  however, 
in  a  given  problem.  Substituting  for  each  F  its  observed  value, 
we  find  the  closure  errors  of  these  three  circuits  to  be,  respectively, 
+  0.09,  +0.08,  and  -0.08.  Therefore,  the  following  table  may 
be  formed  directly,  the  coefficients  being  unity: 

Condition  Equations 


 (v)      v1     v2     v3     v4     v5     v6     v7     v8    Const.
(1/w)      2      2      2      2      2      2      1      1
 (a)      +1     -1     +1     +1     -1     -1                  +0.09
 (b)      +1     -1                          -1     +1           +0.08
 (c)                          -1     +1                   -1     -0.08
 (s)      +2     -2     +1      0      0     -2     +1     -1
                                                                          (73)


The  normal  equations  may  be  written  by  inspection,  owing  to 
the  simplicity  of  the  condition  equations.  Each  square  or  product 
is  formed  in  the  propcM-  column,  above,  and  niultijilied  l)y  its  1,  w; 
the  sum  of  all  siniilai'  i-(>sults  IxMiig  tlu^  coefficicMit  foi-  the  normal 




equation.  Thus, there are six aa's, each being +1, three ab's, each
being +1, and two ac's, each of which is -1.  Where no coeffi-
cient appears in the condition equation for a certain v, that coeffi-
cient is regarded as zero.  Each aa/w will be 2X(+1) = +2, as
also, will be each ab/w.  As there is no column in which both b
and c occur, the products, bc, are zero.  When all the coefficients
in any condition equation are unity, the sum of the squares, each
divided by its weight, is equal to the sum of the reciprocals of the
weights;  thus, [bb/w] = 2+2+2+1 = +7.  Likewise, the product
terms may be written by inspection, but the signs must be care-
fully considered;  thus, [ac/w] = -2-2 = -4, etc.  The sum-
terms  are  treated  in  the  same  way  as  the  coefficients,  to  test 
the  correctness  of  the  computations. 
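The formation of these coefficients can also be checked mechanically.  The following short Python fragment is illustrative only; it evaluates the Gaussian brackets for the table (73) just given.

```python
recip_w = [2, 2, 2, 2, 2, 2, 1, 1]
a = [ 1, -1,  1,  1, -1, -1,  0,  0]
b = [ 1, -1,  0,  0,  0, -1,  1,  0]
c = [ 0,  0,  0, -1,  1,  0,  0, -1]

def bracket(p, q):                   # the bracket [pq/w] of the normal equations
    return sum(pi * qi * w for pi, qi, w in zip(p, q, recip_w))

print(bracket(a, a), bracket(a, b), bracket(a, c))   # -> 12 6 -4
print(bracket(b, b), bracket(b, c), bracket(c, c))   # ->  7 0  5
```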

Normal  Equations 


        A      B      C     (q)       Sum
      +12     +6     -4    +0.09      +14
              +7      0    +0.08      +13
                     +5    -0.08      + 1
                                                (74)


It  must  be  remembered  that  the  sum  includes  all  the  coefficients 
of  an  equation,  whether  written  or  not,  so  that,  when  the  abridged 
form  is  used,  as  above,  the  coefficients  must  be  read  down  and  to 
the  right  as  explained  in  Art.  57  (c). 

Preparatory  to  solving  the  normal  equations,  the  constants  are 
added  to  their  respective  sum-terms  to  form  the  check-terms  for 
use  throughout  the  solution,  in  order  that  the  operations  performed 
upon  the  constants  may  be  included  in  the  checks.  In  their  form 
for solution, therefore, these equations are:

Normal Equations


        A      B      C     Const.     Check
      +12     +6     -4     +0.09     +14.09
              +7      0     +0.08     +13.08
                     +5     -0.08     + 0.92
                                                    (75)



These  equations  will  now  be  solved  by  the  Abridged  Method  as 
explained  in  Art.  60,  the  separate  operations  being  indicated. 


           A      B        C       Const.      Check
(I)      +12     +6       -4       +0.09      +14.09   \
(2)              +7        0       +0.08      +13.08    >  Normal Equations
(3)                       +5       -0.08      + 0.92   /
(2)              +7        0       +0.08      +13.08
(4)              -3       +2       -0.04      - 7.04       (I)X(-6/12)
(II)             +4       +2       +0.04      + 6.04 ✓     (2)+(4)
(3)                       +5       -0.08      + 0.92
(5)                    -1.33       +0.03      + 4.70       (I)X(+4/12)
(6)                    -1.00       -0.02      - 3.02       (II)X(-2/4)
(III)                  +2.67       -0.07      + 2.60 ✓     (3)+(5)+(6)

Correlates 


                 A             B             C
Constants      -0.09         -0.04         +0.07
C-terms        +0.104        -0.052
B-terms        +0.138
Sum            +0.152        -0.092        +0.07
Divisor        +12           +4            +2.67
               +0.013 = A    -0.023 = B    +0.026 = C
                                                            (76)


Tests of Correlates

Equation (2):   +0.078 - 0.161 + 0.08 = -0.003
Equation (3):   -0.052 + 0.130 - 0.08 = -0.002

These discrepancies would be reduced by carrying out the corre-
lates to four places, instead of three.  But these are satisfactory
when the corrections are desired to hundredths, only.


Computation of the Corrections

            v1       v2       v3       v4       v5       v6       v7       v8
(1/w)        2        2        2        2        2        2        1        1
 (aA)     +0.013   -0.013   +0.013   +0.013   -0.013   -0.013
 (bB)     -0.023   +0.023                               +0.023   -0.023
 (cC)                                 -0.026   +0.026                      -0.026
 (Sum)    -0.010   +0.010   +0.013   -0.013   +0.013   +0.010   -0.023   -0.026
 (Sum/w)  -0.020   +0.020   +0.026   -0.026   +0.026   +0.020   -0.023   -0.026
 (v)      -0.02    +0.02    +0.03    -0.03    +0.03    +0.02    -0.02    -0.03*
                                                                                   (77)

* afterward changed to -0.02, as explained below.

Tests of the Corrections in the Condition Equations

(a)   -0.02 - 0.02 + 0.03 - 0.03 - 0.03 - 0.02 + 0.09 =  0.00
(b)   -0.02 - 0.02 - 0.02 - 0.02 + 0.08               =  0.00          (78)
(c)   +0.03 + 0.03 + 0.03 - 0.08                      = +0.01




Upon  testing  the  corrections  by  substituting  them  in  the  condi- 
tion equations  (73),  it  is  found  that  the  third  condition  fails  by 
+0.01.  This  discrepancy  must  be  removed  by  arbitrarily  altering 
one  or  more  of  the  corrections  involved  in  it,  as  it  is  due  to  neglected 
remainders.  At  the  same  time,  the  other  two  conditions,  which 
check  exactly,  must  not  be  disturbed.  Therefore,  it  is  desirable 
to  find  a  correction  which  is  used  in  the  third  condition  only. 
Such a one is v8 which is seen to be too large by 0.004, and which,
moreover, belongs to one of the observations of greater weight so
that  it  would  be  expected  to  have  a  smaller  correction.  There  is 
reason,  therefore,  for  reducing  this  correction  by  the  necessary 
0.01  in  order  to  satisfy  the  condition.  The  change  is  made  as 
indicated  so  that  the  original  figure  remains.  If  there  were  no 
single  correction  which  could  be  modified  without  affecting  other 
conditions,  it  might  be  necessary  to  alter  two  or  three  corrections 
in  order  to  satisfy  all  the  conditions  by  a  given  set  of  corrections. 

The  final  test  of  the  correctness  of  the  adjustment  consists  in 
substituting  the  adjusted  values  of  the  differences  of  elevation  in 
the  original  conditions,  (72),  or  in  the  other  conditions  which  were 
not  used  because  not  independent  of  these.  It  is  well  to  restate 
the  conditions,  using  the  corrected  differences  of  elevation,  in 
order  to  secure  a  check  on  the  condition  equations.  Referring 
to the diagram, Fig. 4, therefore, and applying to each observa-
tion the  corresponding  correction,  we  have: 

Final  Tests  of  the  Adjusted  Values 

Circuit A-B-C-F:     +2.16 - 5.08 - 6.88 + 9.80 = 0.00 ✓

Circuit C-D-F:       -3.44 - 3.44 + 6.88        = 0.00 ✓          (79)

Circuit D-E-F:       +1.29 - 4.73 + 3.44        = 0.00 ✓

These comprise all the elemental circuits, so that any combinations
of these would also be satisfied.
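As a purely illustrative check of this computation, the corrections can be re-evaluated from the correlates of (76) in a few lines of Python; the figures used are those printed above, and the small residual differences are the rounding already discussed.

```python
recip_w = [2, 2, 2, 2, 2, 2, 1, 1]
coeffs  = [[1, -1, 1,  1, -1, -1, 0,  0],    # conditions (a), (b), (c) of (73)
           [1, -1, 0,  0,  0, -1, 1,  0],
           [0,  0, 0, -1,  1,  0, 0, -1]]
A, B, C = 0.013, -0.023, 0.026               # correlates from (76)

v = [recip_w[j] * (coeffs[0][j] * A + coeffs[1][j] * B + coeffs[2][j] * C)
     for j in range(8)]
print([round(vj, 3) for vj in v])
# -> [-0.02, 0.02, 0.026, -0.026, 0.026, 0.02, -0.023, -0.026]; rounded to
#    hundredths these are the corrections above, with v8 then forced to -0.02.
```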

78.  Arrangement of Equations.  The larger the coefficients



of  the  normal  equations,  the  greater  will  be  the  labor  of  solution, 
generally  speaking,  so  it  is  important,  as  was  shown  in  the  case  of 
indirect  observations,  to  make  them  as  small  as  practicable.  The 
methods  of  Arts.  48  and  49  do  not  apply  directly  to  conditioned 
observations,  but  it  is  possible  to  select  the  conditions  and  arrange 
the  condition  equations  in  such  a  manner  as  to  save  some  labor  in 
the  solution  of  the  normal  equations. 

Inspection  of  the  equations  (73)  and  (74)  shows  that  the  shorter 
conditions,  that  is,  those  which  involve  fewer  observations,  will 
produce  smaller  coefficients  for  the  normal  equations.  Therefore, 
it  is  important  to  select  the  shorter  conditions,  as  far  as  practicable. 
In  the  above  example,  for  instance,  the  three  small  circuits  might 
have  been  used  to  advantage,  although,  in  so  short  a  problem  the 
advantage  is  less  evident  than  in  longer  ones. 

It  is  apparent,  also,  that  by  arranging  the  condition  equations 
in  a  certain  order,  with  the  shorter  ones  first,  the  larger  coefficients 
will  occur  later  in  the  normal  equations,  instead  of  earlier,  which  is 
an  advantage  especially  in  the  abridged  method  of  solution. 
Then,  too,  it  is  possible  to  place  those  equations  first  which  have 
no terms in common, so that the product-terms, [ab/w], [ac/w],
etc.,  in  the  first  normal  equation,  may  be  zero  in  some  cases. 
Each  of  such  zero  coefficients  gives  a  zero  elimination-factor  which 
saves  writing  a  whole  line  in  the  elimination.  In  some  problems 
this  is  very  important.  In  the  above  example,  if  the  second  and 
third  equations  had  been  written  as  the  first  and  second,  respect- 
ively, [ab/w] would have been zero, thus saving the second step
in  the  elimination,  since  the  second  normal  equation  would  have 
had  no  A-term  and  so  would  have  been,  itself,  the  first  derived 
equation,  number  (II).  Sometimes,  it  is  possible  to  save  several 
steps  in  the  elimination  in  this  manner. 

79.  Example :  Local  Adjustment  of  Angles  by  the  Method  of 
Correlates.  In triangulation, the methods of measuring the
angles  at  a  station  may  result  in  several  extra  angles  being  observed. 
As  shown  in  the  latter  part  of  Art.  71,  each  of  these  extra 
observations  yields  one  independent  condition.  To  illustrate 
the  method  of  adjusting  the  angles  so  as  to  satisfy  all  these  condi- 




tions,  we  shall  consider  the  case  shown  in  that  article,  Fig.   3, 
assuming  the  weights  to  be  equal. 

Fig. 3.  Horizontal Angles at a Station

Observed Angles

M1 =  85° 14' 24.5"       M5 =  50° 23' 26.7"
M2 =  83  45  32.0        M6 = 210  35  17.5          (80)
M3 =  41  35  24.0        M7 = 234  39  08.2
M4 =  99  01  14.1

Adopting  (1),  (2),  (5),  and  (6),  as  the  necessary  angles,  the  conditions 
may  be  written, 

Condition  Equations 

V1 + V2 + V3 - V6       = 0

V4 + V5 + V6 - 360°     = 0          (81)

-V1 + V6 + V7 - 360°    = 0

from which, by substituting for each V its value, M + v, we have,

Reduced Condition Equations

 (v)      v1      v2      v3      v4      v5      v6      v7     Const.
 (a)      +1      +1      +1                      -1             +3.0
 (b)                              +1      +1      +1             -1.7
 (c)      -1                                      +1      +1     +1.2
 (s)       0      +1      +1      +1      +1      +1      +1
                                                                          (82)

Normal  Equations 




        A      B      C     Const.     Sum      Check
       +4     -1     -2     +3.0       +1 ✓     +4.0
              +3     +1     -1.7       +3 ✓     +1.3
                     +3     +1.2       +2 ✓     +3.2
                                                          (83)


Solving these equations as in Art. 77, we obtain the following
values of the correlates:

A = -1.347          B = +0.619          C = -1.503

whence the corrections to the observed angles are,


    v1         v2         v3         v4         v5         v6         v7
  +0.16"     -1.35"     -1.35"     +0.62"     +0.62"     +0.46"     -1.50"
                                                                              (84)


which  exactly  satisfy  the  given  conditions. 

The  adjusted  values  of  the  angles,  therefore,  are, 

V1 = M1 + v1 =  85° 14' 24.5" + 0.16" =  85° 14' 24.66"
V2 =            83  45  32.0  - 1.35  =  83  45  30.65
V3 =            41  35  24.0  - 1.35  =  41  35  22.65
V4 =            99  01  14.1  + 0.62  =  99  01  14.72          (85)
V5 =            50  23  26.7  + 0.62  =  50  23  27.32
V6 =           210  35  17.5  + 0.46  = 210  35  17.96
V7 =           234  39  08.2  - 1.50  = 234  39  06.70

As a final test of the adjustment, these adjusted values are sub-
stituted in the original conditions, which they are found to satisfy.
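A brief illustrative check, in the same spirit as the one given for the levels, can be made in Python: the correlates printed above reproduce the corrections (84), and the corrections close the three conditions within a few thousandths of a second, the small residue being due to the rounding of the correlates.

```python
a = [ 1, 1, 1, 0, 0, -1, 0]              # condition (a) of (82)
b = [ 0, 0, 0, 1, 1,  1, 0]              # condition (b)
c = [-1, 0, 0, 0, 0,  1, 1]              # condition (c)
A, B, C = -1.347, 0.619, -1.503          # correlates found above

v = [a[j] * A + b[j] * B + c[j] * C for j in range(7)]   # equal weights, (65)
print([round(vj, 2) for vj in v])
# -> [0.16, -1.35, -1.35, 0.62, 0.62, 0.46, -1.5], the corrections of (84)

for row, q in zip((a, b, c), (3.0, -1.7, 1.2)):
    # each condition of (82) is satisfied within a few thousandths of a second
    assert abs(sum(rj * vj for rj, vj in zip(row, v)) + q) < 0.01
```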
80.  Special Case of One Condition Only.  The general con-
dition equation is the first one of (59), namely,

a1v1 + a2v2 + a3v3 + . . . + anvn + q = 0          (86)

whence, the single normal equation is

[aa/w]A + q = 0          (87)




and  the  corrections,  from  (63),  are 

v1 = (a1/w1)A;       v2 = (a2/w2)A;       etc.          (88)

The case of special interest, however, is that in which the
coefficients of the condition equation are unity;  thus,

v1 + v2 + v3 + . . . + vn + q = 0          (89)

The normal equation, then, becomes,

[1/w]A + q = 0          (90)

so that

A = -q/[1/w]          (91)

The corrections, with this value of A, are, therefore,

v1 = -(q/w1)/[1/w];       v2 = -(q/w2)/[1/w];       etc.          (92)

Thus  the  corrections  are  proportional  to  the  reciprocals  of 
the  weights,  and  each  correction  is  equal  to  the  total  closure  cor- 
rection divided  by  the  algebraic  sum  of  the  reciprocals  of  the 
weights  and  multiplied  by  the  reciprocal  of  the  corresponding 
weight. 

For  example,  suppose  we  have  a  single  circuit  of  levels  which 
add  to  +0.24  instead  of  zero,  and  that  the  weights  of  the  nine 
lines  are  2,  3,  1,  2,  3,  1,  1,  3,  and  1.  The  least  common  multiple 
of the weights is 6, and they may be written, 2/6, 3/6, 1/6, 2/6,
3/6, 1/6, 1/6, 3/6, and 1/6, respectively, so that their reciprocals
are  the  following  integers,  in  order,  3,  2,  6,  3,  2,  6,  6,  2,  and  6, 
whose  sum  is  36.  The  corrections,  therefore,  are  obtained  by 
multiplying  each  of  these  reciprocals  into  the  constant,  —0.24/36, 
resulting  thus: 

-0.020,     -0.013,     -0.040,     -0.020,     -0.013,
-0.040,     -0.040,     -0.013,     -0.040

Testing these corrections in (89), their sum is -0.239 instead of
-0.24, so that it is necessary to add -0.001 to one of them, preferably
changing -0.013 to -0.014, in order to rigidly satisfy the
prescribed condition.
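
The same distribution can be written out in a few lines; the sketch below, in Python, applies (91) and (92) to the level circuit just described.

    # Distribution of a closure error over a single level circuit, after (91)-(92).
    weights = [2, 3, 1, 2, 3, 1, 1, 3, 1]   # weights of the nine lines
    q = 0.24                                # error of closure of the circuit

    recip_sum = sum(1.0 / w for w in weights)   # [1/w], which is 6.0 here
    A = -q / recip_sum                          # the single correlate, eq. (91)
    v = [A / w for w in weights]                # corrections, eq. (92)

    print([round(c, 3) for c in v])   # -0.02, -0.013, -0.04, ... as in the text
    print(round(sum(v), 3))           # -0.24

The unrounded corrections close the circuit exactly; the small residual of -0.001 noted above appears only after each correction has been rounded to three decimal places.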

The important point is that the corrections may be written

by  inspection,  in  such  cases,  from  the  fact  that  they  are  propor- 
tional to  the  reciprocals  of  the  weights  and  that  their  sum  must  be 
equal  to  —  q.  If  any  of  the  v's  in  the  condition  equation  be  nega- 
tive, the  signs  of  the  corresponding  corrections  are  changed.  Thus, 
if  the  condition  equation  were 

v1 - v2 + v3 - v4 + v5 - v6 + v7 + v8 + v9 + 0.24 = 0

the  corrections  would  be  numerically  the  same  as  above,  but  the 
signs  of  the  second,  fourth,  and  sixth  would  be  plus  instead  of 
minus.  In  testing  the  corrections  in  the  condition  equation,  then, 
these  three  would  be  multiplied  by  —1,  so  that  the  condition 
would  be  satisfied  as  before. 

This  method  of  distributing  the  error  of  closure  is  somewhat 
similar  to  that  used  in  the  special  case  of  weighted  mean  of  two 
quantities,  given  in  Art.  36. 

81.  Adjustment  by  the  Method  of  Indirect  Observations. 
It  is  possible  to  adjust  conditioned  observations  as  if  the  quanti- 
ties observed  were  independent,  that  is,  by  the  method  used  in 
Chap.  Ill  for  indirect  observations.  Although  this  process  is 
generally  longer  and  less  satisfactory  than  the  solution  by  the 
method  of  correlates,  it  will  be  explained,  briefly,  in  order  that  it 
may  be  used  when  the  circumstances  are  favorable,  and  that  the 
subject  may  be  better  understood. 

In  Art.  71  it  was  shown  that  a  certain  number  of  observations 
would  be  necessary,  in  a  given  problem,  for  the  determination  of 
the  unknown  quantities,  on  the  assumption  that  those  observa- 
tions were  correct,  and  that  the  remaining,  extra,  observations 
would  furnish  one  condition  each,  to  be  satisfied  by  the  adjusted 
quantities.  Let those observations which are selected as the
necessary  ones  be  stated  simply  as  observation  equations,  namely, 

V1 = M1;     V2 = M2;     V3 = M3;     etc.      (93)

Then each condition, selected so as to involve but one new quantity, may be expressed in terms of the other quantities only, so that the total number of unknowns will not exceed those first, necessary ones.  From the entire set of these observation equations, the normal equations are formed, as many as there are necessary (i.e., independent) unknowns, and their solution gives the adjusted values of the quantities.

For  example,  in  the  local  adjustment  of  angles  at  a  station,  in 
Art.  79,  and  Fig.  3,  the  three  conditions  could  be  replaced  by  obser- 
vation equations,  as  follows : 

Conditions                                 Observation Equations

 V1 + V2 + V3 - V6        = 0              -V1 - V2 + V6         = M3
 V4 + V5 + V6 - 360°      = 0              -V5 - V6 + 360°       = M4      (94)
-V1 + V6 + V7 - 360°      = 0              +V1 - V6 + 360°       = M7

The  entire  seven  observation  equations,  therefore,  are, 

                                                              wt.
+V1                                 - M1          = 0         w1
       +V2                          - M2          = 0         w2
-V1    -V2                   +V6    - M3          = 0         w3
                      -V5    -V6    - M4 + 360°   = 0         w4      (95)
                      +V5           - M5          = 0         w5
                             +V6    - M6          = 0         w6
+V1                          -V6    - M7 + 360°   = 0         w7

in  which  there  are  but  the  four  unknowns,  namely,  the  angles, 
(1),  (2),  (5),  and  (6),  and  each  M  represents  an  observed  value  or 
constant term.  These equations correspond to (16).  Forming
and solving the four normal equations by the methods of Chap.
III, the best values of the angles are determined directly.

82.  Example:  Local  Adjustment  of  Angles  as  Independent 
Quantities.  The solution of the above example will be continued, to illustrate the method, but with equal weights, for simplicity.  Let the observed angles be the same as those used in Art. 79, as the comparison of the two methods will be useful.  These angles are given in (80).  Substituting their values in (95), and also for each V, the corresponding M + v, so as to reduce the constant terms, which is equivalent to assuming for these V's the corresponding M's as approximate values, as in Art. 48, we have the simplified observation equations:


=  0 

wt. 

=  Wi 

=  0 

W2 

=  0 

Ws 

=  0 

W4 

=  0 

Wo 

=  0 

Wq 

=  0 

w- 

OBSERVATIONS  OF  DEPENDENT  QUANTITIES 


77 


           v1     v2     v5     v6     Const.             Sum
    (1)   +1                                      = 0     +1
    (2)          +1                               = 0     +1
    (3)   -1     -1            +1     -3.0"       = 0     -4.0
    (4)                 -1     -1     +1.7        = 0     -0.3      (96)
    (5)                 +1                        = 0     +1
    (6)                        +1                 = 0     +1
    (7)   +1                   -1     -1.2        = 0     -1.2


The  normal  equations  are  obtained  by  inspection ; 


t'l 

ih 

Vb                  ?'o 

Const. 

Sum 

+3 

+  1 
+  2 

+2 

-2 
-1 

+  1 
+4 

+  1.8 
+3.0 
-1.7 
-3.5 

II    II    II    II 

o  o  o  o 

+3.8  V 
+5.0  V 
+  1.3  V 
-1.5   V 

(97) 


Solving  them,  we  obtain  directly, 

v1 = +0.17"     v2 = -1.36"     v5 = +0.61"     and     v6 = +0.48"      (98)

whence, 

V1 =  85° 14' 24.67"          V5 =  50° 23' 27.31"
V2 =  83  45  30.64           V6 = 210  35  17.98      (99)

by a combination of which, the angles V3, V4, and V7, are computed.  Thus,

V3 = V6 - V1 - V2   =  41° 35' 22.67"
V4 = 360° - V5 - V6 =  99° 01' 14.71"      (100)
V7 = 360° - V6 + V1 = 234° 39' 06.69"

This completes the adjustment.  A comparison of these results
with those of (85) in Art. 79, shows them to be only slightly
different.
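
The computation just outlined may also be sketched in a few lines of Python; the normal equations (97) are formed and solved directly from the observation equations (96), and the remaining angles follow from the conditions of (94).  The values printed should agree with (98) to (100), and with (85), within a few hundredths of a second:

    import numpy as np

    # Observation equations (96): residual = A.x + c, unknowns x = (v1, v2, v5, v6),
    # all weights equal.  The constants c are in seconds of arc.
    A = np.array([
        [ 1,  0,  0,  0],    # (1)
        [ 0,  1,  0,  0],    # (2)
        [-1, -1,  0,  1],    # (3)
        [ 0,  0, -1, -1],    # (4)
        [ 0,  0,  1,  0],    # (5)
        [ 0,  0,  0,  1],    # (6)
        [ 1,  0,  0, -1],    # (7)
    ], dtype=float)
    c = np.array([0, 0, -3.0, 1.7, 0, 0, -1.2])

    # Normal equations (97):  (A^T A) x + A^T c = 0
    x = np.linalg.solve(A.T @ A, -(A.T @ c))
    v1, v2, v5, v6 = x
    print(np.round(x, 2))        # about +0.16, -1.35, +0.62, +0.46

    # Adjusted angles (99)-(100), in decimal degrees
    M = {1: 85 + 14/60 + 24.5/3600, 2: 83 + 45/60 + 32.0/3600,
         5: 50 + 23/60 + 26.7/3600, 6: 210 + 35/60 + 17.5/3600}
    V1, V2 = M[1] + v1/3600, M[2] + v2/3600
    V5, V6 = M[5] + v5/3600, M[6] + v6/3600
    V3 = V6 - V1 - V2            # from the first condition of (94)
    V4 = 360.0 - V5 - V6
    V7 = 360.0 - V6 + V1
    print([round(V, 6) for V in (V1, V2, V3, V4, V5, V6, V7)])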

83. Comparison of the Two Methods.  The principal points
of difference between the methods lie in the labor involved and in
the checks which are available.  It may have been noted that in
the preceding article there is no ultimate check upon the corrections or the adjusted angles.  The corrections must satisfy
the  normal  equations,  of  course,  as  in  any  other  adjustment,  but 
there  is  no  check  upon  the  observation  equations  (96) .  The  checks 
afforded  by  the  conditions,  in  the  method  of  correlates,  are  for- 
feited in  the  method  of  indirect  observations,  being  used  for  the 
determination  of  some  of  the  unknowns,  as  F3,  V^,  and  V7  in 
the  above  problem.  This  is  an  evident  disadvantage  of  the 
latter  method,  inasmuch  as  the  final  check  is  very  desirable  and 
important.  The  sum-checks  controlling  the  formation  and  solu- 
tion of  the  normal  equations  are  present  in  both  methods. 

In  the  method  of  correlates,  the  number  of  normal  equations  is 
equal  to  the  number  of  conditions,  which  must  be  less  than  that  of 
the  unknown  quantities  or  observations.  In  the  method  of  indi- 
rect observations  on  the  other  hand,  the  number  of  normal 
equations  is  that  of  the  necessary,  independent  unknowns,  and 
therefore  may  be  greater  or  less  than  in  the  former  method.  Usually, 
however,  the  number  of  conditions  is  small  as  compared  with  the 
number  of  independent  unknowns,  so  that  the  method  of  correlates 
is  likely  to  be  the  shorter,  although  the  determination  of  the  cor- 
rections from  the  correlates  is  a  step  which  is  not  required  in  the 
other  method  where  the  unknowns  are  obtained  directly  from  the 
solution  of  the  normal  equations,  or  at  most,  by  a  single  addition 
or  multiplication.  If  the  number  of  conditions  happens  to  be 
nearly  as  great  as  that  of  the  independent  unknowns,  as  in  the 
above  example,  the  disadvantages  of  the  method  of  indirect  obser- 
vations are  less,  and  the  simplicity  of  the  normal  equations,  result- 
ing from  the  considerable  number  of  zeros  in  the  observation 
equations, may give this method the advantage, even, although
this is seldom likely to be the case.  Moreover, the absence of
the final check in the conditions is a serious defect, and gives to
the method of correlates the preference.

84. Adjustments not Rigid.  The final, adjusted values of
the unknown quantities cannot be regarded as the correct ones, of
course, but are approximations to them.  As different methods
may be used in the adjustment, and as different sets of conditions
may be used in the same method, it is obvious that small discrepancies are likely to exist between the final values obtained
from different adjustments of the same data.  Each of these sets
of  results  may  satisfy  all  of  the  conditions  as  required  and  may 
constitute  an  adjustment  which  is  entirely  satisfactory.  Usually, 
the  discrepancies  will  be  so  small  as  to  be  negligible  as  com- 
pared with  the  accidental  errors  of  the  observations. 


CHAPTER  VI 
ADJUSTMENT  OF  TRIANGULATION 

85.  Triangulation.  A  system  or  network  of  triangulation 
consists  of  a  series  of  stations  connected  by  lines  in  such  a  manner 
as  to  form  triangles  having  their  vertices  at  the  stations.  The 
length  of  one  line,  called  the  base-line,  being  determined  by  direct 
measurement,  usually  with  a  tape,  and  the  horizontal  angles 
between  the  lines  at  each  station  being  measured  with  a  transit 
or  theodolite,  the  lengths  of  all  the  lines  become  known  by  com- 
putation from  the  base-line  and  the  angles  through  successive 
triangles.  The  differences  of  elevation  between  the  stations  are 
obtained  from  observed  vertical  angles  which  determine  the 
elevations  of  the  stations  above  sea-level  when  one  of  them  has 
been  connected  to  sea-level  by  a  line  of  precise,  or  geodetic,  leveling. 
The  position  of  the  system  on  the  earth's  surface  is  fixed  by  astro- 
nomical observations  for  the  latitude  of  one  station,  the  longitude 
of  one  station,  and  the  azimuth  of  one  line.  The  size  of  the  system, 
or  net,  depends,  therefore,  upon  the  length  of  the  base;  its  shape, 
or  form,  depends  upon  the  horizontal  angles;  its  position,  upon  the 
astronomical observations; and its elevations, upon the vertical
angles and the initial elevation.  If the triangulation be based
upon, or connected to, two stations of another system which
has been completely determined and adopted in size, position, and
elevation, the line joining the two stations may be used as the base-line
for the new work, and the azimuth and the latitude, longitude,
and elevation of one of the stations will determine the position and
initial elevation of the new net.  However, if the new triangulation
be complete in itself in regard to one or more of these elements,
and in addition be connected to previously adjusted and adopted
work, this connection affords checks upon the corresponding

elements,  and  therefore,  from  one  to  five  conditions  must  be  satis- 
fied if  all  of  the  work  is  to  be  made  consistent  as  to  length,  latitude, 
longitude,  azimuth,  and  elevation.  The  shape  of  the  net,  and 
the  differences  of  elevation,  therefore,  must  be  adjusted  so  as  to 
fulfill  these  requirements.  Moreover,  the  horizontal  angles  must 
be  adjusted  to  conform  to  certain  geometrical  and  trigonometrical 
conditions  which  depend  upon  the  arrangement  of  the  lines  and 
stations  and  the  angles  observed. 

The  vertical  angles  are  independent  of  the  horizontal  ones  and 
are  adjusted  by  themselves  in  any  case.  The  adjustment  of  a 
system  so  as  to  close  upon  fixed,  or  adopted,  work  with  regard  to 
any  of  the  five  elements  of  length,  latitude,  longitude,  azimuth, 
and  elevation  will  be  discussed  farther  on.^  There  remains, 
then,  the  adjustment  of  a  system  which  is  complete  in  itself. 
In  this,  the  length  of  the  base  and  the  initial  latitude,  longitude, 
azimuth,  and  elevation  are  determined  separately  and  inde- 
pendently of  the  horizontal  angles  in  the  net,  and  so  do  not  enter 
into  the  adjustment  as  long  as  there  is  but  one  of  each  of  these 
elements.  The  adjustment  of  the  horizontal  angles,  there- 
fore, will  now  be  considered. 

86.  Nature  of  the  Conditions.  The  horizontal  angles  in 
triangulation  are  subject  to  two  classes  of  geometrical  con- 
ditions, namely,  those  which  involve  the  angles  at  one  station 
only,  and  those  which  define  relations  between  the  angles  at 
two  or  more  stations.  The  former  are  called  local  conditions 
and  the  latter  figure  conditions,  giving  rise  to  local  and  figure 
adjustments. 

The local conditions express the requirement that the adjusted
values of the observed angles at a station shall satisfy the indicated
horizon-closures and algebraic sums.

The figure conditions are of two kinds, known as angle equations
and side equations.  An angle equation requires that the
sum of the angles of a triangle or polygon shall be equal to the
number of right angles prescribed by geometry for a plane figure
plus the spherical excess.  A side equation requires that if the

¹ Art. 106, et seq.



length  of  a  line  in  the  figure  be  computed  from  another  line  through 
two  different  series  of  triangles,  that  is,  by  two  different  routes, 
the  two  results  must  be  equal. 

Since  all  of  these  conditions  must  be  satisfied  simultaneously, 
they  would  enter  into  a  single  adjustment,  ordinarily.  As  will  be 
explained  later,  however,  it  may  be  convenient  to  perform  the  local 
adjustment  separately,  prior  to  the  figure  adjustment,  the  latter 
being  so  arranged  as  not  to  disturb  the  former. 

87.  Local  Adjustment.  In  modern  field  practice,  simplicity  is 
sought  for  the  sake  of  economy.  Accordingly,  observations  are 
arranged,  as  far  as  practicable,  so  as  to  lessen  the  office  work 
necessary  for  their  reduction,  but  without  a  sacrifice  of  precision. 
The  angles  at  a  station,  therefore,  are  observed  in  such  a  manner 
as  to  avoid  combinations  which  introduce  checks  and  conditions 
requiring  extensive  local  adjustment.  It  is  customary  to  measure 
one  angle  for  each  of  the  signals  less  one,  and  then  a  single 
one  to  close  the  horizon,  thus  securing  one  check  which  involves 
all  of  the  observed  angles.  The  local  adjustment  is  thereby 
reduced  to  one  simple  condition,  with  equal  weights,  also,  in  most 
cases,  so  that  it  amounts  to  a  mere  distribution  of  the  error  of 
closure,  as  explained  in  Art.  80.  If  extra  observations  have 
been  made,  however,  so  that  two  or  more  conditions  are  to  be 
satisfied,  the  general  method  of  adjustment  must  be  used.  This 
has  been  demonstrated  in  Arts.  79  and  82,  in  the  last  chapter,  in 
which  the  number  of  the  conditions  was  shown  to  be  equal  to  the 
number  of  extra  observations.  Thus,  if  S  stations  be  observed, 
S - 1 angles would be sufficient to connect them, and if N angles
be measured between them, the number of extra observations, and
therefore, the number of local conditions may be expressed in the
formula,

Number of Local Conditions at a Station = N - S + 1      (101)

88.  Figure  Adjustment.  Notation.  In  order  to  distinguish 
between stations occupied and unoccupied, and lines observed
in both directions or in one direction only, lines shown in diagrams
of triangulation will be broken at the ends from which they are
not observed, full lines indicating observation at both ends.
Stations  which  are  sighted  upon  but  not  occupied  with  the  instru- 
ment will  be  recognized  from  the  fact  that  all  the  lines  at  those 
stations  will  be  broken.     Thus,  in  Fig.  5,  the  station  Pan  was 


Ohrt 


Bon 


Arm 


Dake 


Dart 


Fig.  5.     Unobserved  Lines  and  Unoccupied  Station 


not  occupied,  as  no  full  lines  radiate  from  it.  Dake  was  occupied 
and  Pan  and  Bart  were  observed  from  it,  but  Ohrt  was  not  observed 
from  it,  although  Dake  was  observed  from  Ohrt.  The  other 
stations  were  occupied  completely  as  shown  by  the  lines  being 
unbroken  at  those  ends. 

89. Classification of Figures.  Although the figures in triangulation
may be very complicated and the adjustment very
laborious, the work in such a case loses its economic advantages of
covering a great area or distance at the minimum of cost consistent
with the accuracy desired.  In the best practice, therefore,
simple figures are used, and special attention is given to
measuring each angle with the requisite degree of precision.
These simple figures may be classified as triangles, quadrilaterals,
and central-point figures.  A triangle consists of three stations
connected by three lines.  A quadrilateral has four stations connected
by six lines.  A central-point figure is a polygon with a
station at each vertex and another station in the interior from which
lines radiate to the vertices; the polygon usually has not more than
six sides.  The lines in these figures may be full or partly broken,
as above.  Fig. 5 represents a central-point figure.  A typical
quadrilateral with diagonals is shown in Fig. 6, while Fig. 7 is


the  simplest  form  of  a  central-point  figure,  which  may  be  con- 
sidered, also,  as  a  quadrilateral.  In  Fig.  8  is  shown  a  combina- 
tion of  a  central-point  figure  with  a  polygon  having  diagonals; 


Fig.  6.     Quadrilateral 

this  is  seen  to  increase  the  intricacy  of  the  system,  which  would 
have  been  a  simple  central-point  figure  had  the  diagonals  KM 
and  MO  been  omitted.^ 


Fig. 7.  Central-point Figure          Fig. 8.  Central-point Figure with Extra Diagonals


90. Angle Equations.  The triangle is the unit figure in triangulation.
For each triangle or other polygon of which all the
angles have been observed, an angle equation may be written
expressing the condition that the sum of the adjusted angles shall

¹ In the diagrams representing triangulation, it is assumed that there is
no station at the intersection of diagonals of a figure unless there is an angle
at that point in one of them.  If, in the remote case, a station happened to
fall at this intersection, the diagram would be slightly distorted so as to indicate
the fact without question.



be  the  theoretical  amount,  namely,  a  certain  number  of  right  angles 
plus  the  spherical  excess,  e,  of  the  figure.^  Thus,  referring  to  Fig. 
6,  in  which  the  separate  angles  are  numbered  clockwise  at  each 
station,  and  representing  their  adjusted  values  by  F's,  as  usual, 
the  four  triangles  yield  the  following  angle  equations, 

Triangle  (a) ABC,   V1 + V2 + V3 + V6 - (180° + e_a) = 0
    "     (b) DAB,   V2 + V3 + V4 + V7 - (180° + e_b) = 0      (102)
    "     (c) DBC,   V1 + V5 + V6 + V8 - (180° + e_c) = 0
    "     (d) DAC,   V4 + V5 + V7 + V8 - (180° + e_d) = 0

in  which  a,  b,  c,  and  d  refer  to  the  separate  triangles  as  shown  in 
the  figure.  Since  the  spherical  excess  depends  directly  upon  the 
area  of  a  figure,^  the  excess  for  the  entire  quadrilateral  should  be 
equal  to  the  sum  of  the  two  excesses  for  the  pair  of  triangles 
formed  by  each  diagonal.     Therefore, 

e_a + e_d = e_b + e_c      (103)

which  affords  a  check  upon  their  computation.  By  inspection, 
then,  we  find  that  from  any  three  of  the  above  angle  equations  it  is 
possible  to  derive  the  fourth  by  addition  and  subtraction.  Also, 
from  the  whole  quadrilateral,  we  may  write  the  condition, 

V1 + V2 + V3 + V4 + V5 + V6 + V7 + V8 - (360° + e_a + e_d) = 0      (104)

and this equation is seen to be the sum of the first and the fourth of
(102).  Therefore, any three of the four triangles may be selected
from which to write the three independent conditions or angle
equations.  In other words, if two triangles formed by a single
diagonal satisfy their conditions, the entire figure must satisfy
its condition (104); then if a third triangle condition, also, be
satisfied, the fourth one is sure to be, since the fourth triangle is
equal to the whole figure minus the third one.

If we adopt the first three of the equations (102) as the independent
ones, and write for each V, in the usual manner, its value,

¹ It is seldom that an angle equation has to be written for a figure greater
than a triangle, as an open quadrilateral (without a diagonal) is not rigid and
should be avoided.

² From spherical trigonometry.




M + v, in which M is the observed value of the angle and v is its
correction  to  be  obtained  from  the  adjustment,  we  have  for  this 
quadrilateral  the  following  set  of  final  angle  equations,  corre- 
sponding to  (59): 

v1 + v2 + v3 + v6 + q1 = 0
v2 + v3 + v4 + v7 + q2 = 0      (105)
v1 + v5 + v6 + v8 + q3 = 0

in  which  q  is  the  error  of  closure  of  a  triangle,  positive  when  the 
sum  of  the  observed  angles  is  too  large.  These  closures  may  be 
checked  in  the  same  manner  as  the  spherical  excesses,  that  is,  the 
closure  for  the  whole  figure  must  be  the  sum  of  the  closures  for 
each  pair  of  component  triangles.     Thus, 

q_a + q_d = q_b + q_c      (106)

91.  Number  of  Angle  Equations  in  a  Figure.     To  determine 

the   number   of   independent   angle  equations  in  a  given  figure, 

A-B-C-D-E,  Fig.   11,  we  may  proceed  as  follows:    Start  with 

Fig. 9.               Fig. 10.               Fig. 11.

Determination  of  the  Number  of  Angle  Equations  in  a  Figure 

two  stations,  A  and  B,  connected  by  one  line,  as  in  Fig.  9.  Add 
the  station  C,  with  two  lines  to  A  and  B,  and  one  triangle  is 
obtained.  Add  station  D  and  two  lines,  to  A  and  B,  and  a  second 
triangle  is  formed,  as  in  Fig.  10.  Add  the  third  line,  from  D  to  C, 
and  the  quadrilateral  is  completed,  making  three  independent 
angle  equations  in  all.  If  another  station,  E,  be  added,  with 
three  lines  to  .4,  C,  and  D,  as  in  Fig.  11,  two  of  these  lines  form  a 
new triangle, as before, and the third completes a quadrilateral,
A-C-E-D, in which one triangle, A-C-D, formed a part of the
previous figure, A-B-C-D, and is therefore already included in
the conditions.  The third line from E thus adds but one new



condition,  making  a  total  of  five  angle  equations.  If  the  line 
BE  were  added,  it  would  be  the  second  diagonal  in  the  quad- 
rilateral A-B-C-E,  of  which  two  triangles  are  already  included 
in  the  figure,  so  that  the  third  angle  equation,  only,  for  that  quad- 
rilateral would  be  added  by  this  line. 

We  may  generalize  from  this  procedure  and  write  a  formula 
for  the  number  of  independent  angle  equations  in  any  figure. 
Starting  with  three  lines  and  three  stations  in  the  form  of  a  tri- 
angle, we  have  one  condition.  If  we  add  one  station  and  one  line 
to  it,  no  new  conditions  are  introduced,  but  each  additional  line 
to  that  station  gives  one  new  condition,  and  the  same  is  true  of 
further  additions  of  stations  and  lines  until  the  entire  figure  has 
been  drawn.  Therefore,  each  station  added  to  the  initial  triangle 
adds as many conditions as there are lines, less one, running to
that  station,  so  that  the  total  number  of  conditions  would  be  the 
aggregate  of  these  conditions  together  with  the  one  for  the  initial 
triangle,  that  is,  the  whole  number  of  lines  minus  the  whole 
number  of  stations,  plus  one.  But  if  any  one  of  these  lines  be 
unobserved  at  one  end,  one  angle  of  the  corresponding  triangle 
would  be  missing  and  that  line  would  not  count  for  a  condition. 
Also,  if  one  station  were  entirely  unoccupied,  as  Pan,  in  Fig.  5, 
page  83,  it  could  enter  into  no  complete  triangle  and  would  have 
to  be  omitted  from  the  stations  counted  in  determining  the  number 
of  angle  equations.  Finally,  then,  we  may  write  the  following 
formula  for  the  number  of  independent  angle  equations  in  a  given 
figure : 

Number of Angle Equations = L' - S' + 1      (107)

in which L' is the number of full lines in the figure and S' is the
number of occupied stations.

92. Side Equations.  The triangles of a figure may close
exactly to 180° + e, and the angles still be inconsistent with regard
to the closure of the whole figure when the lengths are computed.
To illustrate this, suppose the triangles of Fig. 6, page 84, to be
plotted in the following order, the angles of each having been
adjusted to a closure and the local conditions satisfied.  Plot
triangle (a) to any convenient scale, as in Fig. 12, using the given



angles.  Upon  the  side  AB,  construct  triangle  (b)  with  vertex 
falling  at  d'.  Upon  BC,  construct  triangle  (c)  and  its  vertex 
might fall at d".  Then if triangle (d) were plotted upon AC as a
base,  its  vertex  might  fall  at  d'",  and  the  angle  at  d'"  would  still 
equal  the  sum  of  d'  and  d"  as  required  by  the  local  condition  at 
station  D.  Thus,  the  lengths  of  the  lines  running  to  this  station 
would  fail  to  check  because  they  did  not  intersect  at  a  single 
point.     The  side  equation  for  this  figure,  therefore,  would  require 


Fig. 12.  Side Equation


that the three points, d', d", and d'", be coincident, which is
equivalent to the requirement that d" and d'" be coincident, or
that the line Cd" in the triangle (c) be equal to the line Cd'" in the
triangle (d).  In other words, the side equation requires that if one
line of a figure be computed from another line through two series
of triangles, or by two different routes, the resulting lengths shall
be equal.  Since the same initial line is used in both cases, it cancels
from the equation, and the angles, only, are concerned in the
condition.

93. Side Equation of a Quadrilateral.  Starting with the line
AB and computing CD through the triangles (a) and (c), as indicated
by the upper dotted arrows, and using the final, adjusted
values of the angles, which make the points d', d", and d'" coincident,
we have,

CD = BC (sin V3 / sin V8) = AB (sin V1 sin V3) / (sin V6 sin V8)      (108)

Likewise, computing through the triangles (b) and (d),

CD = AD (sin V4 / sin V5) = AB (sin V2 sin V4) / (sin V7 sin V5)      (109)

Equating these expressions for CD and canceling the factor AB,

(sin V1 sin V3) / (sin V6 sin V8) = (sin V2 sin V4) / (sin V5 sin V7)      (110)

Multiplying both members of (110) by the reciprocal of the second
member, we obtain a statement of the side equation in the form

(sin V1 sin V3 sin V5 sin V7) / (sin V2 sin V4 sin V6 sin V8) = 1      (111)

in  which  the  numerator  contains  the  odd-numbered  angles  and  the 
denominator,  the  even  ones,  which  happens  as  a  result  of  our  num- 
bering the  two  angles  at  each  station  in  clockwise  order,  and  which 
is  a  useful  check  on  the  formation  of  the  side  equation.  To 
reduce  this  equation  to  the  first  degree,  we  take  the  logarithm  of 
each  member  and  equate  them,  whence, 

logsin V1 + logsin V3 + logsin V5 + logsin V7
    - logsin V2 - logsin V4 - logsin V6 - logsin V8 = 0      (112)

Equations  (111)  and  (112)  are  original  condition  equations  which 
state  the  requirement  which  must  be  satisfied  by  the  adjusted 
values  of  the  angles,  and  correspond  to  the  form  shown  in  (56). 
It  remains  to  derive  the  simpler  reduced  condition  which  expresses 
the  relation  between  the  corrections  to  the  observed  angles,  so  that 
it  may  be  combined  with  the  angle  equations  (105)  into  one  adjust- 
ment having the same unknowns, namely, the v's.  That is, for
each V must be substituted its value, M + v, and the equation
reduced to the linear form to correspond to (59), page 58.

If a given angle, M, be altered by a small correction, v, expressed
in seconds, the logarithmic sine of the angle would be changed
by a corresponding amount, namely, the number of seconds in v
multiplied into the difference for one second in that particular
logarithmic sine, as taken from the logarithmic tables with the
proper algebraic sign, positive if the angle lie in the first quadrant,
in which the sine increases with increasing angle, and negative


if  in  the  second  quadrant,  where  the  sine  decreases  with  increase 
of  angle.     That  is, 

logsin (M + v) = logsin V = logsin M + v (d 1")      (113)

For  example,  if  M  =  76°  15'  14.5"  and  v=-4.1",  logsin  M  = 
9.9873797  with  a  difference  for  1"  of  +5.1  in  the  seventh  decimal 
place.  Then logsin (M + v) = logsin 76° 15' 10.4" = 9.9873797 -
4.1(+5.1) = 9.9873776.  Substituting for logsin V, in (112), its
value given in (113), namely, logsin M + (d 1")v, and collecting
the logarithms into one constant term, we have,

(d1 1")v1 - (d2 1")v2 + (d3 1")v3 - (d4 1")v4 + (d5 1")v5 - (d6 1")v6 + (d7 1")v7 - (d8 1")v8
    + (logsin M1 - logsin M2 + logsin M3 - logsin M4 + logsin M5
       - logsin M6 + logsin M7 - logsin M8) = 0      (114)

in which (d1 1") represents the difference for 1" in the logsine of
angle M1, in the seventh place of decimals, assuming that seven-place
logarithms are used.¹  Each of these differences for 1" is a
numerical coefficient for its v, and corresponds to a, b, c, etc., of
(59), page 58.  Also, since the observations are carefully made,
the observed angles, M, will approximate closely to their adjusted
values, V, so that the algebraic sum of the logsines of the M's in
(114) will be, instead of zero as in (112), a small error of closure, q,
expressed in units of the seventh decimal place, and equal to the
amount by which the sum of the positive logsines exceeds that of
the negative ones, in (114).  The reduced form of the side equation
becomes, therefore, if we assume it to be the fourth of the
condition equations so that its coefficients are d's (a, b, and c
being coefficients of the first three conditions respectively),

d1v1 - d2v2 + d3v3 - d4v4 + d5v5 - d6v6 + d7v7 - d8v8 + q4 = 0      (115)

in which each d is the numerical difference for 1" in the logsine of
the corresponding angle.  Thus we see that the side equation

¹ It will be convenient, generally, to use seven-place logarithms but to
take the sixth-place unit for (d 1") and q, thus moving the decimal point one
place to the left in these quantities.  If the tables show a difference for 10"
to be +51, therefore, (d 1") = +5.1 in the seventh place, or +0.51 in the sixth
place.




states  that  the  corrections  to  the  angles  must  be  such  that  the 
algebraic  sum  of  the  resulting  corrections  to  their  logsines  will 
equal  —q  and  the  original  side  equation  (111)  be  satisfied.  Since 
the  coefficients  and  the  constant  term  of  (115)  are  expressed  in  the 
same  unit,  in  the  seventh  place,  this  equation  is  consistent  with  the 
angle equations (105), as stating a linear relation between the v's.
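
As a minimal sketch of this reduction (assuming only the standard Python math library, and computing the logarithmic sines directly rather than taking them from tables), the difference for 1" and the numerical example above may be checked as follows:

    import math

    ARC1 = math.pi / (180 * 3600)        # one second of arc, in radians

    def logsin(deg, m, s):
        # Logarithmic sine in the tabular form, log10(sin) + 10.
        return math.log10(math.sin(math.radians(deg + m / 60 + s / 3600))) + 10

    def diff_1sec(deg, m, s, place=7):
        # Difference for 1" of the logsine, in units of the given decimal place,
        # as used in (113)-(115); it equals cot(M) * arc 1" / ln 10.
        a = math.radians(deg + m / 60 + s / 3600)
        return 10**place * ARC1 / (math.tan(a) * math.log(10))

    # Check against the numerical example in the text (M = 76 deg 15' 14.5", v = -4.1"):
    print(round(logsin(76, 15, 14.5), 7))        # close to the tabular 9.9873797
    print(round(diff_1sec(76, 15, 14.5), 2))     # about +5.15; the table quoted gives +5.1
    print(round(logsin(76, 15, 14.5) - 4.1 * diff_1sec(76, 15, 14.5) * 1e-7, 7))
    print(round(logsin(76, 15, 10.4), 7))        # both of the last two come out near 9.987378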
94.  A  Shorter  Form  of  the  Side  Equation  for  the  quadrilateral 
may  be  obtained  by  computing  one  of  the  adjacent  sides,  as  BC, 
from  AB,  instead  of  the  opposite  one,  CD,  as  above.     In  Fig.  13, 


Fig   13.     Side   Equation;   Quadrilateral. 

the dotted arrows show the two routes of the computation, from
which the two resulting values for BC must be equated:

BC = AB (sin V3 / sin V6)    and    BC = AB (sin (V3+V4) sin V8) / (sin V7 sin (V5+V6))      (116)

whence,

(sin V3 sin (V5+V6) sin V7) / (sin (V3+V4) sin V6 sin V8) = 1      (117)

and the reduced side equation becomes,

d3v3 - d3+4(v3+v4) + d5+6(v5+v6) - d6v6 + d7v7 - d8v8 + q4 = 0      (118)

or, separating the various unknowns,

(d3 - d3+4)v3 - d3+4 v4 + d5+6 v5 + (d5+6 - d6)v6 + d7v7 - d8v8 + q4 = 0      (119)




in which d3+4 represents the difference for 1" in the logsine of the
sum-angle, M3 + M4, etc.  Although this form is somewhat
shorter  than  (115)  in  that  the  angles  at  the  station,  B,  lying  be- 
tween the  sides  AB  and  BC,  do  not  appear,  it  is  more  troublesome 
for  the  beginner.  However,  the  fact  that  the  angles  at  one  sta- 
tion are  not  concerned  makes  this  the  preferable  form  when  one 
of  the  stations  of  the  figure  was  not  completely  occupied,  in  which 
case  the  equation  is  expressed  between  the  two  exterior  lines 
adjacent  to  this  station.  Thus,  in  the  above  figure,  station  B 
might  have  been  unoccupied  without  affecting  the  form  of  equa- 
tion (117). 

95.  Side  Equation  for  a  Central-point  Figure.     Let  Fig.   14 
Fig. 14.  Side Equation; Central-point Figure.

represent  the  general  form  of  a  central-point  figure,  and  for  the 
sake  of  variety,  suppose  the  central  station  to  have  been  observed 
from  each  of  the  others  but  not  to  have  been  occupied,  as  shown 
by  the  lines  l)eing  })roken  at  that  point,  but  that  otluM'wise  the 
figure  is  (•omplet(\  The  side  equation  will  ho  written  between 
two  of  the  lines  which  meet  at  the  central  point,  and  the  dotted 
arrows  show  the  two  routes  of  computation  from  the  line  AO  to 
DO  through  two  series  of  triangles. 


T^^>      ^/-^^i"  ^1  ^"^  ^'i  ^^^  ^^^      </->^"^  ^2   sin  T'lo  sin  Tio 
IJ(J  =  A(J   7   "T     .     -—  -. — .,   -A(J  -. — ~ . — -. . — ~T- 

sm  I  4  sin  I  G  sm  I  s  sin  \  n  sm  I  (.>   sm  I  7 

whence, 


sin  T'l  sin  T':{  sin  T.-,  sin  V-  sin  1'..  sin  T'n 
sin  V2  sin  1^4  sin  Vq  sin  Vs  sin  V\o  sin  T'l.) 


=  1 


(120) 
(121) 




The reduced side equation follows directly, as in Art. 93:

d1v1 - d2v2 + d3v3 - d4v4 + d5v5 - d6v6 + d7v7
    - d8v8 + d9v9 - d10v10 + d11v11 - d12v12 + q = 0      (122)

It  will  be  seen  that  the  odd-numbered  angles  occur  in  the 
numerator  of  (121)  and  the  even  ones  in  the  denominator,  which 
results,  as  in  Art.  93,  from  the  clockwise  numbering  of  the  two 
angles  at  each  occupied  station.  Since  the  angles  at  the  central 
station  were  not  observed,  and  do  not  occur  in  the  side  equation, 
it  is  not  necessary  to  number  them.  By  comparison,  also,  with 
(111)  of  Art.  93,  it  is  evident  that  the  side  equation  for  a  central- 
point figure having four sides and the stations A, B, C, D, and O,
would  be  identical  with  (111)  written  for  a  complete  quadrilateral 
with  diagonals. 

96.  Mechanical  Statement  of  Side  Equations.  The  similarity 
among  the  side  equations  (111),  (117),  and  (121),  in  the  occurrence 
of  the  odd-numbered  angles  in  the  numerators  and  the  even  ones 
in  the  denominators,  would  indicate  the  possibility  of  writing 
these  equations  by  inspection  instead  of  using  this  property  merely 
as a check.  This may be done by the following mechanical method
for  all  ordinary  figures  of  the  quadrilateral  or  central-point  form. 

Notation,  (a)  The  pole  for  a  side  equation  is  some  station, 
or other point, to which a line runs from every (other) station of
the  figure  for  which  the  equation  is  to  be  written.  It  may  be  at 
the  intersection  of  the  diagonals  of  a  quadrilateral,  although  there 
is  no  station  there.  The  point  selected  as  the  pole  will  be  indicated 
by a small circle drawn around it, as in the following figures:


Fig. 15.               Fig. 16.               Fig. 17.

Location of Pole for Side Equation.


(b) At each station of the figure, there will be three lines, one
of which goes to the pole and may be called the pole line.  Of the




other  two,  one  will  be  the  left-hand  line  and  the  other,  the  right- 
hand  line,  as  we  look  into  the  figure,  from  the  station  towards  the 
pole,  (c)  At  each  station,  the  left-hand  angle  is  the  angle  between 
the  left-hand  line  and  the  pole  line,  and  the  right-hand  angle  is  the 
one  between  the  right-hand  line  and  the  pole  line.  This  nota- 
tion is  illustrated  in  the  following  diagrams  in  which  I  and  r  indi- 
cate the  left-hand  and  right-hand  lines  and  L  and  R,  the  left-hand 
and  right-hand  angles,  respectively: 


Fig. 18.               Fig. 19.               Fig. 20.

Left-hand and Right-hand Angles.


The  side  equation,  then,  is  written  by  placing  the  product  of 
the  sines  of  the  left-hand  angles  equal  to  that  of  the  sines  of  the 
right-hand  angles,  or  by  placing  the  former  in  the  numerator  and 
the  latter  in  the  denominator  of  a  fraction  which  is  placed  equal  to 
unity.  The  reduced  form  of  the  equation,  corresponding  to  (115), 
(119),  and  (122),  may  be  written  as  the  sum  of  the  dv's  for  the 
left-hand  angles  minus  the  dv's  for  the  right-hand  angles  plus  q 
equals  zero,  or, 

[dv] (for left-hand angles) - [dv] (for right-hand angles) + q = 0      (123)

in  which  d  is  the  difference  for  1"  in  the  logsine  and  q  is  the  sum  of 
the  logsines  of  the  left-hand  angles  minus  the  sum  of  the  logsines 
of  the  right-hand  angles. 

It will be noted that the angles at the pole do not enter into
the  side  equation  at  all,  so  that  the  pole  is  situated  at  the  inter- 
section of the two lines between which the equation would be
written  according  to  the  analytical  method  of  the  preceding  articles. 
Thus,  equation  (111)  would  correspond  to  a  pole  at  the  intersection 
of the diagonals of Fig. 12, page 88; in Fig. 13, page 91, the pole
would  be  at  station  B  for  equation  (117);  and  in  Fig.  14,  page  92, 
the  pole  would  be  at  the  central  station,  0. 

It is now apparent, as stated in Arts. 93 and 95, that the odd-



numbered  angles  occurred  on  one  side  of  the  side  equations  and  the 
even  ones  on  the  other  because  the  two  angles  at  each  exterior 
station  of  the  figures  had  been  numbered  clockwise  so  that  the  odd 
ones  were  on  the  left  and  the  even  ones  on  the  right  of  the  pole 
lines. 

The  selection  of  the  pole  may  be  governed  by  the  following 
principles.^  In  a  central-point  figure,  it  must  be  at  the  central 
station.  If  a  station  was  not  occupied,  or  not  completely  occupied, 
the  pole  should  be  at  that  station.  Sum-angles  may  be  avoided 
by  placing  the  pole  of  a  quadrilateral  at  the  intersection  of  the 
diagonals,  which  is  simpler  for  the  beginner  although  it  introduces 
two  additional  angles  and  logsines.  The  pole  should  not  be  placed 
where  the  smallest  angles  occur,  as  these  angles  should  enter  into 
the  side  equations  with  their  larger  coefficients.^ 

97.  Number  of  Side  Equations  in  a  Figure.  In  order  that 
there  may  be  two  routes,  through  two  series  of  triangles,  between 
two  lines  in  a  figure,  there  must  be  at  least  three  triangles  in  the 
figure,  and  therefore,  four  stations  with  three  lines  to  each  station. 
In  other  words,  the  quadrilateral  is  the  simplest  figure  for  which 
a  side  equation  may  be  written.  Similarly,  a  central-point  figure, 
without  diagonals,  can  have  but  one  side  equation  since  there  are 
but two series of triangles through which one side may be com-
puted from  another.  The  quadrilateral  and  the  central-point 
figure,  therefore,  furnish  one  side  equation  each,  and  are  the  ele- 
mental figures  for  these  equations. 

A  complete  central-point  figure  has  as  many  outer  lines  and 
the  same  numl)er  of  inner  ones  as  there  are  exterior  stations,  so 
that  the  total  number  of  lines  will  equal  twice  the  total  number  of 
stations  less  one,  which  is  true,  also,  in  a  quadrilateral.  Since 
each  of  these  figures  yields  one  side  equation,  the  formula  may  be 
written, 

Number of Side Equations = L - 2(S - 1) + 1 = L - 2S + 3      (124)

in which L is the total number of lines, full or broken, and S is the

¹ See Wright and Hayford's Adjustment of Observations.
² The logsine of a small angle varies rapidly with change of angle, so that
the difference for 1" is large.




total  number  of  stations  whether  occupied  or  not.  Adding  to 
either  figure  one  station  with  three  Hues  from  it  to  stations  of  the 
figure,  adds  another  quadrilateral  or  central-point  figure,  and 
therefore,  another  side  equation,  which  corresponds  to  an  increase 
of  one  {L  —  2S)  in  the  formula.  Each  additional  line,  without 
increase  of  stations,  makes  possible  the  writing  of  one  or  more  new 
side  equations,  using  that  line,  of  which,  however,  only  one  can  be 
regarded  as  independent.  Adding  one  station  to  a  figure  thus 
adds  as  many  side  equations  as  there  are  lines  from  that  station 
to  the  figure,  less  two.     For  example,  adding  to  Fig.  21  the  station 


A  A 

Fig.   21.  Fig.   22. 

Number  of  Side   Equations  in   a   Figure. 


G  with  three  lines  to  A,  F,  and  E,  adds  one  side  equation  which 
could  be  written  for  the  quadrilateral  G-A-F-E  or  for  the  central- 
point  figure  G-A-B-C-D-E,  and  if  this  latter  had  been  the 
original  figure,  the  addition  of  the  line  AE  would  have  formed  the 
quadrilateral  A-F-E-G  with  its  side  equation.  In  each  case  the 
new side equation must include the added station or line.

98.  Statement  of  All  of  the  Conditions  for  a  Figure  Adjustment. 
In  the  preceding  articles,  the  three  kinds  of  conditions,  local, 
angle, and side, which enter into the adjustment, have been
explained separately.  The adjustment as a whole will now be con-
sidered. 

Strictly, all of the conditions should be satisfied simultaneously
in one general adjustment.  The labor of computation is greatly
reduced, however, by diminishing the number of conditions, and it



is  especially  convenient  to  perform  the  local  adjustments  sepa- 
rately and  in  advance  of  the  figure  adjustment,  inasmuch  as  it  is 
good  practice  to  arrange  the  observations  so  as  to  have  but  one 
local  condition  at  a  station,  involving  all  of  the  angles,  as  explained 
in  Art.  87.  Therefore,  we  shall  assume  that  the  necessary  local 
adjustments  have  been  made,  as  in  Arts.  79  and  82,  preparatory  to 
the  figure  adjustment.  However,  if  the  angles  at  any  station  of 
the  figure  complete  the  horizon,  it  will  be  necessary  to  include  in 
the  figure  adjustment  a  local  condition  providing  that  the  sum  of 
these  angles  shall  remain  360°,  that  is,  that  the  algebraic  sum  of  the 
corrections  to  these  angles  must  be  zero.  This  is  likely  to  be  the 
case  at  an  interior  station,  such  as  F,  in  Fig.  22.  Also,  if  a  sum- 
angle  should  be  included  among  the  conditions,  as  well  as  its 
component  angles,  and  with  a  separate  number,  a  similar  local 
condition  would  be  necessary  to  insure  that  the  sum-angle  would 
remain  equal  to  the  sum  of  its  components  after  adjustment; 
but  this  may  well  be  avoided  by  designating  the  sum-angle  as  the 
sum  of  its  components,  as  in  Art.  94,  instead  of  using  a  separate 
symbol  for  it.^  In  general,  care  must  be  taken  that  the  prelim- 
inary adjustment  be  not  disturbed  by  the  later  one. 

The  selection  of  the  angle  and  side  equations  for  a  given  figure 
or  system  must  conform  to  the  requirements  that  all  the  necessary 
conditions  be  included,  but  no  more,  and  that  they  be  independent 
of one another, so that no one of them could be obtained by combining
any of the others.  If a dependent condition were included,
by mistake, it would be indicated during the solution of the normal
equations by a derived equation having all of its coefficients zero,
or nearly so, so that the corresponding correlate would be indeterminate.
The necessary number of independent angle and side
equations will be given by formulas (107) and (124), namely,

Number of Angle Equations = L' - S' + 1      (107)

Number of Side Equations  = L - 2S + 3       (124)

in which L and S are the total numbers of lines and stations, and

¹ These local conditions are avoided in the figure adjustment by using
directions instead of angles, as will be shown later on.



L' is the number of full lines and S' is the number of occupied sta-
tions. (For  a  station  to  be  considered  as  occupied,  at  least  two 
lines  must  be  unbroken  at  that  station.)  The  best  method  of 
writing  the  angle  and  side  equations  so  as  to  be  certain  of  their 
independence  as  well  as  their  number,  is  to  draw  a  sketch  of  the 
system  or  figure  to  be  adjusted,  adding  one  station  at  a  time,  with 
its  lines  to  the  previous  stations,  and  writing  the  equations  intro- 
duced by  that  station  and  those  lines.  For  each  station  so  added, 
there  will  be  as  many  angle  equations  as  new  full  lines,  less  one, 
and  as  many  side  equations  as  new  lines,  less  two.  As  has  been 
stated,  small  angles  should  be  used  in  the  side  equations  where 
practicable,  although  it  is  best  to  use  each  but  once.  In  angle 
equations,  on  the  contrary,  they  should  be  avoided. 
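
The counting rules of (101), (107), and (124) lend themselves to a few lines of code; the following Python helpers are a sketch, with the counts for Fig. 22 (worked out in the following paragraph) as a check:

    def n_local_conditions(n_angles, n_stations_sighted):
        """Local conditions at one station, formula (101): N - S + 1."""
        return n_angles - n_stations_sighted + 1

    def n_angle_equations(full_lines, occupied_stations):
        """Independent angle equations in a figure, formula (107): L' - S' + 1."""
        return full_lines - occupied_stations + 1

    def n_side_equations(total_lines, total_stations):
        """Independent side equations in a figure, formula (124): L - 2S + 3."""
        return total_lines - 2 * total_stations + 3

    # Check with Fig. 22:  L = 13, L' = 12, S = S' = 7
    print(n_angle_equations(12, 7))   # 6
    print(n_side_equations(13, 7))    # 2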

For  example,  the  equations  for  Fig.  22,  page  96,  will  be  written. 
In  this  case,  L=13,  L'=12,  S  =  S'  =  7,  and  there  are  six  angle 
and  two  side  equations.  The  complete  horizon  at  F,  moreover, 
requires  a  local  condition.  Beginning  with  the  line  AB,  station 
F,  with  the  two  lines  to  A  and  B,  forms  a  triangle  with  one  angle 
equation (A), as shown below.  Adding C with two lines to B and
F,  gives  one  angle  equation,  (B),  and  similarly,  adding  D  with  two 
lines to C and F gives (C).  Now, with E are added three lines
to A, D, and F, so that two angle equations, (D) and (E), are
formed and one side equation (H), for the whole figure
A-B-C-D-E-F, with pole at F.  With G are added two full lines
and  one  broken  line,  giving  one  angle  equation,  (F),  for  the  tri- 
angle G-A-E,  and  one  side  equation  (7)  which  might  well  be 
written  for  the  quadrilateral  G-A-F-E,  with  pole  at  G  since  the 
line  FG  is  broken  at  G.  Thus  we  have  six  angle  and  two  side 
equations as required by the formulas above.  The local equa-
tion (G)  for  the  station  F  must  be  added.  To  facilitate  the 
formation and solution of the normal equations, these condition
equations are so arranged as to place the simpler ones first and the
more complex ones with larger coefficients, last.  (See Art. 78,
page  71.)  The  angle  equations,  therefore,  will  usually  precede 
the  side  equations.  For  a  central  point  figure,  also,  several  angle 
equations  may  be  written  in  succession  having  no  angles  in  com- 
mon, with  the  result  that  many  of  the  coefficients  in  the  first  normal 



equations  will  be  zero,  thus  materially  reducing  the  labor  of  solu- 
tion.    The  above  conditions  are  arranged  as  follows: 

Angle:  (A)  V2  + V11 + V19 - (180° + e_a) = 0
        (B)  V1  + V4  + V14 - (180° + e_b) = 0
        (C)  V3  + V6  + V15 - (180° + e_c) = 0
        (D)  V5  + V9  + V16 - (180° + e_d) = 0
        (E)  V8  + V12 + V17 - (180° + e_e) = 0
        (F)  V7  + V10 + V13 - (180° + e_f) = 0                    (125)

Local:  (G)  V14 + V15 + V16 + V17 + V18 + V19 - 360° = 0

Side:   (H)  (sin V1 sin V3 sin V5 sin V8 sin V11) / (sin V2 sin V4 sin V6 sin V9 sin V12) = 1

        (I)  (sin V7 sin V17 sin (V12+V13)) / (sin (V7+V8) sin V13 sin V18) = 1

Substituting for each V its M + v, the M's being the observed values
of  the  angles,  and  computing  the  spherical  excesses,  the  angle  and 
local  equations  are  thrown  into  their  reduced  form  as  in  (105),  and 
the  reduced  side  equations  are  formed  as  in  (115)  and  (119), 
respectively.  The formation of the nine normal equations and
the  remainder  of  the  solution  then  follow  as  in  the  last  chapter. 

99.  Adjustment    of    a    Quadrilateral:     Method    of    Angles. 
To illustrate the foregoing principles, the following quadrilateral

Fig. 23.  Adjustment by Method of Angles.

will now be adjusted in full.  The final angles are desired to hundredths
of a second.  Weights are equal.  Seven-place logarithms
will be used but with the unit taken in the sixth place for the side



equation.  The  pole  is  taken  at  the  intersection  of  the  diagonals, 
as  indicated.  The  given  angles  are  shown  in  the  three  triangles 
which  will  be  used  in  succession  for  the  angle  equations. 

Given Angles

(A)
Beckwith ......... (3)   26° 42' 51.8"
North Base ....... (1)   64  43  42.3
North Base ....... (2)   43  44  02.0
South Base ....... (8)   44  49  27.4
                                 03.5
e_a = 0.05"                 q_a = +3.45"

(B)
Walter ........... (6)   28° 17' 12.9"
North Base ....... (1)   64  43  42.3
South Base ....... (7)   42  09  40.3
South Base ....... (8)   44  49  27.4
                                 02.9
e_b = 0.06"                 q_b = +2.84"

(C)
Walter ........... (5)   48° 03' 10.3"
Walter ........... (6)   28  17  12.9
Beckwith ......... (4)   61  29  53.9
South Base ....... (7)   42  09  40.3
                                 57.4
e_c = 0.08"                 q_c = -2.68"
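
Before turning to the tabular solution, the whole adjustment of this quadrilateral may be sketched numerically.  The following Python outline (numpy is used only for the small linear solve) forms the three angle equations and the side equation with the pole at the intersection of the diagonals, unit weights throughout, and solves by the method of correlates; the triangle closures are taken from the figures above, and the results should agree with the tabular computation summarized below to within the rounding of seven-place logarithms.

    import math
    import numpy as np

    ARC1 = math.pi / (180 * 3600)                      # one second of arc in radians

    def rad(d, m, s):
        return math.radians(d + m / 60 + s / 3600)

    # Observed angles (1)-(8) of Fig. 23.
    M = {1: rad(64, 43, 42.3), 2: rad(43, 44,  2.0),   # North Base
         3: rad(26, 42, 51.8), 4: rad(61, 29, 53.9),   # Beckwith
         5: rad(48,  3, 10.3), 6: rad(28, 17, 12.9),   # Walter
         7: rad(42,  9, 40.3), 8: rad(44, 49, 27.4)}   # South Base

    # Differences for 1" of the logsines, in units of the sixth place (Art. 93).
    d = {i: 1e6 * ARC1 / (math.tan(a) * math.log(10)) for i, a in M.items()}

    # Condition equations: three angle equations, with the closures q taken from
    # the triangles above (spherical excess already removed), and one side
    # equation with the pole at the intersection of the diagonals, the
    # odd-numbered angles on one side and the even-numbered on the other.
    B = np.zeros((4, 8))
    for i in (3, 1, 2, 8): B[0, i - 1] = 1             # triangle (A), q = +3.45"
    for i in (6, 1, 7, 8): B[1, i - 1] = 1             # triangle (B), q = +2.84"
    for i in (5, 6, 4, 7): B[2, i - 1] = 1             # triangle (C), q = -2.68"
    for i in range(1, 9):  B[3, i - 1] = d[i] if i % 2 else -d[i]

    q_side = 1e6 * (sum(math.log10(math.sin(M[i])) for i in (1, 3, 5, 7))
                    - sum(math.log10(math.sin(M[i])) for i in (2, 4, 6, 8)))
    q = np.array([3.45, 2.84, -2.68, q_side])

    # Correlate normal equations and corrections, as in Chapter V.
    k = np.linalg.solve(B @ B.T, -q)
    v = B.T @ k
    print("side-equation closure q4 =", round(q_side, 2))   # near -8.8
    print("correlates:", np.round(k, 3))                    # the first two near -0.37 and -1.04
    print("corrections (seconds):", np.round(v, 2))
    print("conditions satisfied:", np.allclose(B @ v + q, 0))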


From the given angles, the three angle equations and the side equation (pole at the intersection of the diagonals, with the odd-numbered angles on one side and the even-numbered on the other) are reduced as in Arts. 90 and 93, the differences for 1" being taken in units of the sixth place of logarithms.  The resulting normal equations in the correlates A, B, C, D are:

          A        B        C        D        Const.       Sum
         +4       +2                +0.86     +3.45      +10.31
                  +4       +2       -2.71     +2.84      + 8.13
                           +4       -0.82     -2.68      + 2.50
                                   +53.50     -8.80      +42.03

Their solution by successive elimination gives the correlates, of which A = -0.370 and B = -1.043, the others following from the back substitution.  [The remaining tables of the original — the detailed elimination, the tests of the correlates in the four condition equations, and the corrections and adjusted angles with the test of the side equation, which leaves a discrepancy of one unit in the last place of logarithms — are not legible in this copy.]

Computation  of  Triangles 

The ultimate test of the adjustment occurs in the computation
of the lengths of the lines or sides of the triangles.  If an error
were made in the original side equation, such as an erroneous
logsine or difference for 1", or an error in adding the angles of a
triangle to obtain its error of closure, q, all of the subsequent
operations might check, to and including the tests of corrections.
The test of the side equation, using the adjusted angles and the
new logsines, checks the original logsines and differences for 1".
It remains to be seen in the computation of triangles whether or
not the adjusted angles "fill" each of the four triangles and at
the same time satisfy the side equation by giving the same results
for lengths which are computed in two triangles.  The above
discrepancy of one in the last place of logarithms in the side equa-
tion test would show, also, in the triangle computations, but is too
small to warrant further investigation.  It would be corrected
arbitrarily so as to leave no inconsistency in the computed results.
(An example of the final triangle computations will be given at
the close of the adjustment of this quadrilateral by the Method of
Directions, which follows.)
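(The two checks just described are easily mechanized.  The following
sketch, in Python, is offered only as an illustration; the function names
and tolerances are the writer's assumptions, not part of the original
computation.)

    import math

    def triangle_closes(adjusted_angles_deg, spherical_excess_sec, tol_sec=0.05):
        # The adjusted angles of a triangle must sum to 180 deg plus the
        # spherical excess, within the tolerance of the computation.
        total_sec = sum(a * 3600.0 for a in adjusted_angles_deg)
        return abs(total_sec - (180.0 * 3600.0 + spherical_excess_sec)) <= tol_sec

    def side_equation_closes(left_angles_deg, right_angles_deg, tol_log=1e-7):
        # The sum of the logsines of the left-hand angles must agree with that
        # of the right-hand angles within about a unit of the last place used.
        left = sum(math.log10(math.sin(math.radians(a))) for a in left_angles_deg)
        right = sum(math.log10(math.sin(math.radians(a))) for a in right_angles_deg)
        return abs(left - right) <= tol_log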

100.  Use  of  Directions  instead  of  Angles.  In  the  measure- 
ment of  angles  with  a  direction  instrument,  as  in  primary  triangu- 
lation,  the  various  signals  are  sighted  independently  and  for  each 
pointing  the  horizontal  circle  is  read,  in  a  clockwise  direction. 
This  is  done  in  the  direct  and  reversed  positions  of  the  instrument 
and  in  various  positions  of  the  circle,  and  the  mean  of  all  of  the 
readings  upon  a  certain  signal  is  adopted  as  the  direction  to  that 
signal.  The  angle  between  any  two  signals  is  the  direction  of  the 
right-hand  signal  minus  that  of  the  left-hand  one,  and  there  is  no 
local  adjustment.  Even  though  the  separate  angles  be  measured 
by  reading  directions  in  pairs,  or  by  the  method  of  repetitions,  the 
directions  may  be  numbered,  instead  of  the  angles,  and  each  angle 
designated by the difference of the two directions which limit it,
the right-hand one minus the left.  In Fig. 24, for example, angle
BAC would be designated by the symbols -(1)+(2), and CAE
would be represented by -(2)+(4).  EAB would be -(4)+(1),



and  so  always  minus  the  left  plus  the  right-hand  direction,  clock- 
wise. 


Fig.   24.   Directions. 

This  method  has  certain  advantages,  especially  in  the  adjust- 
ment of  the  more  complex  systems,  which  render  its  use  very 
desirable,  and  it  is  deservedly  popular  among  computers.  One 
of  its  strongest  features  lies  in  the  fact  that  preliminary  local 
adjustments  are  not  disturbed  by  later  adjustments  in  which  the 
method  of  directions  is  used,  so  that  no  local  condition  for  an 
interior  station  would  be  necessary  in  a  case  such  as  that  in  Art. 
97  and  Fig.  22.  Each  direction  is  regarded  as  observed  inde- 
pendently, and  the  unknowns  of  the  problem  are  the  corrections 
to  the  separate  directions.  The  correction  to  an  angle,  therefore, 
would  be  the  correction  to  the  right-hand  direction  minus  that  of 
the  left  one,  algebraically.  There  will  be  more  directions,  in  a 
given  system,  than  angles,  but  this  is  not  a  serious  objection 
when  the  Method  of  Correlates  is  used.  (In  the  Method  of 
Indirect  Observations,  any  increase  in  the  number  of  unknowns 
produces  a  like  increase  in  the  normal  equations  but  in  the 
Method  of  Correlates  the  number  of  normal  equations  is  equal 
to  that  of  the  conditions.) 

The  weights  of  the  directions  will  be  equal,  in  the  general  case, 
but  different  weights  may  be  assigned  if  certain  signals  were  more 
difficult  to  observe  than  others,  owing,  perhaps,  to  unsteady 
atmospheric  conditions  or  poor  illumination.  If  it  be  desired  to 
use  directions  in  the  adjustment  of  angles  of  different  weights,  care 
should  be  taken  in  giving  weights  to  the  corresponding  directions 
that  the  weights  of  adjacent  angles  be  not  seriously  affected.     If 



two  adjacent  directions  were  assigned  small  weights,  and  thereby 
received  large  corrections,  the  intervening  angle  might  receive  a 
small  correction  (the  difference  of  the  two  large  ones)  and  so  defeat 
the  purpose  of  the  computer.  If  two  adjacent  angles  have  small 
weight,  the  intervening  direction  might  be  given  a  smaller  weight 
and  thus  affect  both  angles.  However,  if  the  angles  have  different 
weights,  rather  than  the  directions,  it  may  be  best  to  adjust  by 
the Method of Angles explained above.  In using directions,
therefore,  we  shall  assume  that  angle  weights  are  equal;  if  separate 
directions  have  different  weights,  they  may  be  treated  exactly  as 
in  the  Method  of  Angles. 

If  directions  be  used  in  local  adjustments,  it  is  advisable  to 
use the Method of Indirect Observations, as in Art. 82, since the
local  conditions  would  be  identities  of  the  form, 

-(1)  +(2)  -(2)  +  (3)  -(3)  +(4)  -(4)  +(1)  -360°  =  0     (126) 

Therefore,  it  will  usually  be  preferable  to  use  the  Methods  of  Angles 
and  Correlates,  illustrated  in  Art.  79,  for  the  local  adjustment. 

101.  Notation:  Method of Directions.  In numbering the
directions of a figure, one side may be regarded as the initial line,
as if fixed by a previous adjustment, perhaps, and its numbers
omitted.  In this case, it is well to place letters on the fixed line,
instead of numbers, to distinguish its directions, when writing the
equations, these lettered directions not to enter into the reduced
conditions, and to receive no corrections.  On the other hand, this
use of letters is not necessary, and numbers may be placed upon
all of the directions, if desired, without altering the method or
increasing the work to any considerable extent.  Directions are
to be numbered clockwise, invariably, at each station, so as to
avoid errors.  Unobserved directions, shown by broken lines,
will not be numbered.

102.  Lists of Directions.  Preparatory to the adjustment, a
list of the directions at each station may be made from the given
data.  The names of the observed signals are arranged in clock-
wise order, usually beginning with a prominent one which is given
the initial direction, 0° 00' 00.0", although any direction may be
taken as the zero line.  If angles were observed, and adjusted



locally,  if  necessary,  the  resulting  angles  are  added  in  the  proper 
order  so  as  to  obtain  the  angle  from  the  assumed  initial  direction 
to  each  of  the  other  signals  or  stations,  which  will  be  its  direction 
in  the  list.  The  angle  from  one  station  to  another,  clockwise, 
will  then  be  the  direction  of  the  latter  minus  that  of  the  former. 
If  only  two  or  three  angles  were  observed  at  each  station,  it  may 
not  be  worth  while  to  form  these  lists  of  directions,  but  each  angle 
may  be  given  the  proper  designation  as  the  difference  between 
two  directions,  and  two  angles  added  or  subtracted,  when  neces- 
sary, to  obtain  a  third. 
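(The bookkeeping of a direction list may be sketched in a few lines of
Python.  The signal names below are invented, and the routine is only an
illustration of the rule that an angle is the direction of the right-hand
signal minus that of the left-hand one.)

    def direction_list(initial_signal, angles):
        # Build the list of directions from locally adjusted angles, given in
        # clockwise order from the initial signal, whose direction is 0 deg.
        directions = {initial_signal: 0.0}
        running = 0.0
        for signal, angle in angles:
            running += angle
            directions[signal] = running
        return directions

    def angle_between(directions, left_signal, right_signal):
        # Clockwise angle = direction of the right-hand signal minus the left.
        return (directions[right_signal] - directions[left_signal]) % 360.0

    # Invented example, three signals observed clockwise from "Alpha":
    dirs = direction_list("Alpha", [("Bravo", 43.5), ("Charlie", 61.2)])
    print(angle_between(dirs, "Bravo", "Charlie"))    # 61.2
    print(angle_between(dirs, "Charlie", "Alpha"))    # 255.3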


Fig. 25.  Adjustment by Method of Directions.

103.  Statement  of  Conditions:    Method  of  Directions.     To 

illustrate  the  use  of  directions,  the  condition  equations  (125)  for 
Fig.  22,  page  96,  will  be  restated.  The  figure  is  reproduced  in 
Fig.  25,  with  the  directions  numbered.  The  line  AB  will  be 
regarded  as  fixed  and  its  directions  will  be  indicated  by  the  letters 
a  and  b,  merely  for  convenient  reference.  Comparison  of  the  fol- 
lowing equations  with  (125)  will  make  the  process  evident.  As 
stated  above,  the  local  condition  is  unnecessary  when  the  Method  of 
Directions  is  used,  as  the  closure  of  the  horizon  at  F  will  not  be 
disturbed  by  applying  corrections  to  the  directions,  since  each 
direction  is  common  to  two  angles  and  its  correction  must  increase 
one  by  the  same  amount  as  it  decreases  the  other,  leaving  the  sum 
unchanged. 


[Equations (127), the six angle conditions and the two side conditions for
Fig. 25 written out in terms of the numbered directions V, are printed as a
full-page display in the original and are not legible in this reproduction;
their reduced forms are given in (129) below.]

To state the reduced conditions, the direction letters, a and b,
are omitted and the angle equations are written by replacing each
V by its v, and -(180°+ε) by the closure error, q.  The side
equations are more complicated owing to the combined subscripts.
For example,

    logsin (-a+V15) = logsin (+V15) = logsin (+M15) + d+15 v15        (128)

and

    logsin (-V3+V4) - logsin (-V4+V5)
        = logsin (-M3+M4) - logsin (-M4+M5)
              + d-3+4(-v3+v4) - d-4+5(-v4+v5)
        = logsin (-M3+M4) - logsin (-M4+M5) - d-3+4 v3
              + (d-3+4 + d-4+5)v4 - d-4+5 v5

in which d-3+4 is the difference for 1" in the logsine of the angle
(-M3+M4), etc.  Applying these principles to the equations
(127), we obtain the reduced equations in the following form:

(A)  -v2 + v15 - v22 + v23 + qa = 0

(B)  -v1 + v2 - v4 + v5 - v23 + v18 + qb = 0

(C)  -v3 + v4 - v7 + v8 - v18 + v19 + qc = 0

(D)  -v6 + v7 - v11 + v12 - v19 + v20 + qd = 0

(E)  -v10 + v11 - v15 + v16 - v20 + v22 + qe = 0

(F)  -v9 + v10 - v13 + v14 - v16 + v17 + qf = 0                      (129)

(G)  -d-1+2 v1 + (d-1+2 + d-2)v2 - d-3+4 v3 + (d-3+4 + d-4+5)v4
         - d-4+5 v5 - d-6+7 v6 + (d-6+7 + d-7+8)v7 - d-7+8 v8
         - d-10+11 v10 + (d-10+11 + d-11+12)v11 - d-11+12 v12
         + (d+15 + d-15+16)v15 - d-15+16 v16 + qg = 0

(H)  (-d-9+10 + d-9+11)v9 + d-9+10 v10 - d-9+11 v11
         - d-20+21 v20 + (d-20+21 + d-21+22)v21 - d-21+22 v22
         - d-15+17 v15 + (d-15+17 - d-16+17)v17 + d-16+17 v16 + qh = 0

Inspection of equations (127) shows that there are more
unknowns than in (125), and what is more important, that adjacent
angle equations have unknowns in common so that less of the




product-coefficients  in  the  normal  equations  would  be  zero  in  this 
case  than  in  the  other.  The  diagonal  coefficients  (squares)  are 
larger,  also,  owing  to  the  greater  number  of  unknowns.  These 
are  disadvantages  which  may  offset  the  omission  of  the  local 
condition,  in  a  central-point  figure,  so  that  the  Method  of  Angles 
might  actually  involve  less  work  than  the  Method  of  Directions, 
in  such  a  case.  A  rearrangement  of  the  above  equations,  how- 
ever, would  simplify  the  normal  equations,  to  some  extent,  by 
collecting  the  zero  coefficients  near  the  beginning.  The  following 
order might be used:  (B), (D), (F), (E), (C), (A), (H), (G).

104.  Adjustment of a Quadrilateral:  Method of Directions.

Fig. 26.  Adjustment of Quadrilateral:  Method of Directions.

As an example of the use of directions, the quadrilateral of Art.
99 will be adjusted.  A comparison of the two methods of solving
the same problem will be instructive.  The figure is shown in
Fig. 26 with the new notation.  The number of angles being small,
it will not be necessary to write lists of directions, but the separate
angles of the triangles will be represented by the proper directions,
instead, and other angles may be obtained from them by addi-
tion or subtraction, the symbols being subjected to the same
operations.  Thus, adding the two angles, -(3)+(4) and
-(4)+(5), we obtain their sum as -(3)+(5).  The pole for
the side equation is taken at South Base.  The right-hand
angles happen to have been written on the left side of the
equation, and vice versa, which is equivalent to changing
all the signs in the equation without affecting the results.
The computation of the triangles is added in order to com-
plete the solution.



(A)
Beckwith    -(3)+(4)       26° 42' 51.8"
N. Base     -(a)+(2)      108  27  44.3
S. Base     -(10)+(b)      44  49  27.4
                                   03.5
εa = 0.05"           qa = +3.45"

(B)
Walter      -(7)+(8)       28° 17' 12.9"
N. Base     -(a)+(1)       64  43  42.3
S. Base     -(9)+(b)       86  59  07.7
                                   02.9
εb = 0.06"           qb = +2.84"

(C)
Walter      -(6)+(8)       76° 20' 23.2"
Beckwith    -(4)+(5)       61  29  53.9
S. Base     -(9)+(10)      42  09  40.3
                                   57.4
εc = 0.08"           qc = -2.68"


[Pages 113-114 of the original: the tabular statement of the reduced
condition equations in the directions, the formation of the four correlate
normal equations (constant terms +3.45, +2.84, -2.68, and -0.90), and their
solution by successive elimination with the factor method.  The figures are
not legible in this reproduction.]

Correlates

(The back-substitution for the correlates D and C is not legible in this
reproduction.)

B = (-1.244 + 0.654 - 2.84)/4 = -3.43/4 = -0.858

A = (0 + 1.244 - 1.056 - 3.45)/4 = -0.815

Tests of Correlates

Substituting the correlates into each of the four normal equations in turn,
together with its constant term q, the sums come out +0.01, -0.01, 0, and 0,
which checks the solution.

[Pages 116-117 of the original: the corrections to the directions computed
from the correlates, the adopted angles, the test of the side equation, and
the computation of the triangles for the Method of Directions.  The figures
are not legible in this reproduction.]

In  this  standard  form  of  computation  of  the  triangle  sides,  the 
given  side  is  written  first,  followed  by  the  opposite  station  and  the 
other  two  in  clockwise  order  around  the  triangle.  The  correc- 
tions are  applied  to  the  given  angles  to  obtain  the  adopted  (spher- 
ical) ones,  from  which  the  spherical  excesses  are  deducted  and  the 
plane  angles  (to  be  used  in  the  logarithmic  computation)  are  found. 
The  sum  of  these  plane  angles,  of  course,  should  be  exactly  180°. 
The  cologsine  of  the  first  angle  is  written  below  the  log  distance, 
followed  by  the  logsines  of  the  other  two  angles.  Covering  the 
fourth  logarithm  with  a  pencil  or  strip  of  paper,  the  first  three  are 
added  to  obtain  the  sixth,  and  the  fifth  is  then  obtained  as  the  sum 
of  the  first,  second,  and  fourth,  by  covering  the  third.  In  order 
that  the  computed  lengths  may  be  consistent  throughout,  a  certain 
value  is  adopted  for  each  distance  and  logarithm,  and  the  neces- 
sary modifications  are  made  by  the  application  of  small  correc- 
tions as  shown.  It  is  a  good  plan  to  arrange  the  triangles  in  the 
above  form  before  beginning  the  adjustment  of  the  figure.  Then 
the  symbols  and  the  observed  angles  (after  local  adjustment,  if 
any)  are  in  convenient  form  for  use,  together  with  the  spherical 
excesses.  After  the  adjustment,  the  corrections  are  inserted  and 
the  form  completed. 
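(The logarithmic scheme above is simply the law of sines carried out by the
addition of tabulated logarithms.  A minimal Python sketch of the same
computation, with illustrative argument names, is given here.)

    import math

    def solve_triangle_sides(given_side, angle_a_deg, angle_b_deg, angle_c_deg):
        # Law of sines: a / sin A = b / sin B = c / sin C, the given side being
        # opposite the first (plane) angle; the plane angles should sum to 180 deg.
        ratio = given_side / math.sin(math.radians(angle_a_deg))
        side_b = ratio * math.sin(math.radians(angle_b_deg))
        side_c = ratio * math.sin(math.radians(angle_c_deg))
        return side_b, side_c

    # In the book's form the same result is reached by adding logarithms:
    # log b = log a + colog sin A + log sin B, and similarly for log c.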

105.  Adjustment of a Quadrilateral:  Approximate Method.¹
The angles of a quadrilateral may be made to satisfy the angle
equations exactly and the side equation very nearly, by an approx-
imate adjustment which, although not rigorous, may be sufficient
for subordinate triangulation or detached figures in which great
precision is not required.  The weights of the angles are assumed
to be equal.

The two triangles formed by one diagonal are first closed by
correcting the four angles of each by one-fourth of the closure-error
for the triangle.  One of the other triangles is then closed by cor-
recting each of its four new angles by one-fourth of its closure-
error, which correction is also applied to the remaining four angles
(of the fourth triangle), with the opposite sign, so that all four
triangles are thus satisfied exactly.  Taking the pole for the side

1  Due  to  Prof.  T.  W.  Wright. 



equation  at  the  intersection  of  the  diagonals,  each  of  the  eight  new 
angles  is  corrected  by  one-eighth  of  the  error  of  closure  of  the  log- 
sines  divided  by  the  algebraic  mean  of  the  eight  differences  for 
1",  the  angles  on  the  right  being  corrected  with  the  opposite  sign 
to  those  on  the  left,  so  as  to  bring  the  sums  of  their  logsines  closer 
together.  If  the  eight  angles  were  equal,  the  side  equation,  also, 
would  be  exactly  satisfied  by  this  method ;  in  this  case  the  figure 
would  be  a  square. 

For  example,  let  us  adjust  the  quadrilateral  in  Fig.  23,  page  99, 
with  the  data  and  notation  there  given.     (See  next  page.) 
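(The steps of this approximate method lend themselves to a short routine.
The following Python sketch is the writer's illustration only: the angle
numbering, the grouping into triangles, and the selection of the left- and
right-hand angles of the side equation are supplied by the computer, and
spherical excess is neglected for brevity.)

    import math

    def approx_adjust_quadrilateral(angles_sec, tri1, tri2, tri3, tri4, left, right):
        # angles_sec : dict, angle number -> observed value in seconds of arc.
        # tri1, tri2 : the two triangles formed by one diagonal (four angle
        #              numbers each); tri3, tri4 : those formed by the other.
        # left, right: angle numbers in the left- and right-hand members of the
        #              side equation, pole at the intersection of the diagonals.
        a = dict(angles_sec)

        def closure(tri):
            # Error of closure of a triangle, in seconds (spherical excess neglected).
            return sum(a[i] for i in tri) - 180.0 * 3600.0

        # 1. Close the two triangles formed by one diagonal.
        for tri in (tri1, tri2):
            c = closure(tri) / 4.0
            for i in tri:
                a[i] -= c

        # 2. Close a third triangle; apply the opposite correction to the fourth,
        #    so that all four triangles remain exactly satisfied.
        c = closure(tri3) / 4.0
        for i in tri3:
            a[i] -= c
        for i in tri4:
            a[i] += c

        # 3. Side equation: distribute the closure of the logsines over the eight
        #    angles, with opposite signs on the two sides.
        def logsin(sec):
            return math.log10(math.sin(math.radians(sec / 3600.0)))

        def diff_1sec(sec):
            # Difference for 1" in the logsine.
            return logsin(sec + 1.0) - logsin(sec)

        q = sum(logsin(a[i]) for i in left) - sum(logsin(a[i]) for i in right)
        d_mean = sum(diff_1sec(a[i]) for i in left + right) / 8.0
        c = q / (8.0 * d_mean)
        for i in left:
            a[i] -= c
        for i in right:
            a[i] += c
        return a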

106.  Adjustment  to  Conform  to  Work  Previously  Adjusted  or 
Fixed.  Triangulation  of  a  subordinate  character  is  frequently 
carried  on  in  connection  with  a  main  scheme  or  net  in  order  that  a 
number  of  points  may  be  located  from  the  main  stations  without 
reoccupying  them  expressly  for  this  purpose.  In  primary  tri- 
angulation, for  instance,  it  is  customary  to  read  directions  from  the 
stations  upon  prominent  objects  such  as  church-spires,  which  may 
be  used  later  by  local  surveyors  for  obtaining  initial  positions  and 
azimuths.  Such  points  do  not  enter  into  the  adjustment  of  the 
main  figures  but  are  adjusted  subsequently  and  usually  separately, 
upon  the  previously  adjusted  work  as  a  basis.  Also,  secondary 
or  tertiary  figures  may  be  connected  to  or  based  upon  primary 
ones  so  as  to  require  separate  adjustment  which  will  not  disturb 
the previous work.  If the connection be to one fixed line only,
that line would be used as a base-line, and no condition would be
introduced.  But if a triangle be fixed, or two sides and the in-
cluded angle, the new conditions must be so written as not to dis-
turb the previous adjustment.  The Method of Directions is
particularly convenient when fixed lines are involved, as the
directions may be omitted from those lines and they will not be
affected by the adjustment.  The angles are assumed to have been
adjusted locally, in advance.  The use of letters on the fixed lines,
instead of numbers, serves to identify them without giving them
the character of unknown directions, although an experienced
computer usually omits the letters as well as the numbers on the
fixed lines.  If the Method of Angles were used, local conditions


[Page 120 of the original: the tabular example of the approximate adjustment
of the quadrilateral of Fig. 23, showing the given angles, the successive
corrections, and the check of the logsines.  The figures are not legible in
this reproduction.]

would  have  to  be  added.  The  following  simple  cases  will  be 
considered.  From  the  condition  equations  the  solution  proceeds 
in  the  usual  manner. 

107.  Two  Sides  and  the  Included  Angle  Fixed.  Fig.  27.  The 
adjacent  sides,  A  and  B,  are  fixed  in  length  and  the  angle  between 
them,  also,  must  not  be  altered.     If  the  missing  diagonal  had  been 


Fig.  27.     Two  Sides  and  Included  Angle  Fixed 

observed,  it  would  have  to  be  considered  as  fixed,  since  the  sides, 
A  and  B,  and  the  included  angle,  determine  the  triangle  completely. 
That  case  will  be  discussed  later.  The  only  new  lines,  then,  are 
the three which run to C, forming the two triangles.  According
to our rules, there are two angle equations, one for each triangle,
and no side equation.  However, the fact that two lines are fixed
in length renders a condition necessary, which shall require the
angles to be so adjusted that when one fixed line is computed
from the other, the result will be equal to its known length.
This condition is called a length equation.  It has the same nature
as a side equation, but involves two known lengths.  The three
equations are:

(A)   -(b)+V1-V2+(d)-V5+V6-(180°+εa) = 0

(B)   -V1+(c)-(a)+V3-V4+V5-(180°+εb) = 0                         (130)

(C)   A sin (-a+V3) sin (-V5+V6) / [B sin (-V4+V5) sin (-V2+d)] = 1

It is obvious that the angle (-a+V3), for example, may be desig-




nated by (+V3) since there is to be no correction to the direction
(a).  The equations (130) may therefore be written,

(A)   +V1-V2-V5+V6-(180°+εa) = 0

(B)   -V1+V3-V4+V5-(180°+εb) = 0                                 (131)

(C)   A sin (+V3) sin (-V5+V6) / [B sin (-V4+V5) sin (-V2)] = 1.

To obtain the error of closure, q, for the length equation, the log-
arithm of the length A must be added and that of B subtracted, in
the series of logsines.  As these lengths are fixed, they do not
appear in the reduced conditions, which have the form,

(A)   +v1-v2-v5+v6+qa = 0

(B)   -v1+v3-v4+v5+qb = 0                                        (132)

(C)   d-2 v2 + d+3 v3 + d-4+5 v4 - (d-4+5 + d-5+6)v5 + d-5+6 v6 + qc = 0

108.  Quadrilateral  with  One  Fixed  Triangle.  Fig.  28.  The 
quadrilateral  being  complete  would  have  one  side  and  three  angle 
equations.  The  angle  equation  for  the  fixed  triangle  is  satisfied 
in  advance,  however,  so  that  there  remain  but  two  angle  equations, 
and  the  side  equation  as  independent  conditions.     Using  the  tri- 


Fig. 28.  One Triangle Fixed


angles D-A-B and D-B-C, and placing the pole at D for the side
equation, we write the three conditions:




(A)   -(a)+V1-V2+(d)-V4+V5-(180°+εa) = 0

(B)   -(c)+V2-V3+(f)-V5+V6-(180°+εb) = 0                         (133)

(C)   sin (-a+V1) sin (-c+V2) sin (-V3+e)
        / [sin (-b+V1) sin (-V2+d) sin (-V3+f)] = 1

After obtaining the constants, q, the lettered directions, a, b, c, d,
e, and f, would be omitted, as usual, in forming the reduced equa-
tions, although it is convenient to use them in the subscripts of
the side equation to distinguish between those angles which differ
only by the fixed angle at A, B, or C.  Thus the reduced side
equation would be,

(C)   (d-a+1 - d-b+1)v1 + (d-c+2 + d-2+d)v2
                        - (d-3+e - d-3+f)v3 + qc = 0             (134)
109.  Fixed  Triangle  or  Polygon  with  Central  Point  Unoccupied. 

Figs. 29 and 30.  In this case there are no new triangles and,
therefore, no angle equations whatever.  The pole for the single
side equation is placed at the central, unoccupied (or concluded)
station.  If this station be outside of the triangle, the side equation
would be the same as (C) in (133) and (134), above, but if it be
inside the figure, the side equation has a characteristic symmetry


Fig. 29.                                Fig. 30.
Fixed Triangle or Polygon with Concluded Station

in that every numbered direction occurs in both numerator and
denominator, and the algebraic signs are positive in the numerator
and negative in the denominator, or vice versa.  Thus, for Fig. 29,
the side equation would be as follows, omitting the lettered direc-
tions which are unnecessary in this very simple case,

    [sin (+V1) sin (+V2) sin (+V3)]
        / [sin (-V1) sin (-V2) sin (-V3)] = 1                    (135)



Similarly, for Fig. 30, the side equation would be,

    [sin (+V1) sin (+V2) sin (+V3) sin (+V4) sin (+V5)]
        / [sin (-V1) sin (-V2) sin (-V3) sin (-V4) sin (-V5)] = 1     (136)

This case, especially Fig. 29, occurs so frequently in the loca-
tion of subordinate stations that it may well receive special atten-
tion here.  In its reduced form, equation (135) may be written,

    (d+1 + d-1)v1 + (d+2 + d-2)v2 + (d+3 + d-3)v3 + q = 0        (137)

in which d+1 is the difference for 1" in the logsine of angle (+M1),
d-1 is that difference for angle (-M1), etc.  This equation has
the same form as (86) of Art. 80, page 73, so that a1 = (d+1 + d-1),
a2 = (d+2 + d-2), etc.  Assuming equal weights, which will usually
be the case, the correlate for the single equation will be, from (87),

    A = -q / [aa]                                                (138)

and the corrections will follow from (88),

    v1 = a1 A = a1 (-q/[aa]);     v2 = a2 A = a2 (-q/[aa]);  etc.     (139)

It  is  easy,  then,  to  arrange  the  logsines  in  positive  and  negative 
columns,  and  to  take  their  algebraic  sum  as  q.  The  algebraic  sum 
of  the  differences  for  1"  corresponding  to  the  directions  (those  in 
the  negative  column  having  their  signs  changed)  will  be  the  a's, 
and  the  sum  of  the  squares  of  these  a's  is  the  denominator  of  the 
factor,  —q/[aa],  in  (139).  Each  v  is  computed  by  multiplying  its 
a  into  this  factor. 
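(The computation of equations (137) to (139) may be put into a few lines.
The following Python sketch is illustrative only; the differences for 1" and
the closure q are supposed to be furnished just as they would be taken from
the logarithmic tables.)

    def adjust_single_side_equation(d_plus, d_minus, q):
        # d_plus[i], d_minus[i] : differences for 1" in the logsines of the
        #                         angles (+M_i) and (-M_i) for direction i.
        # q : error of closure of the logsines (algebraic sum of the columns).
        a = [dp + dm for dp, dm in zip(d_plus, d_minus)]    # a_i = d+i + d-i
        aa = sum(ai * ai for ai in a)                       # [aa]
        correlate = -q / aa                                 # equation (138)
        return [ai * correlate for ai in a]                 # corrections (139)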

For example, let us adjust the following observed angles for
Fig. 29, the weights being equal.  Each angle is followed by its
logsine and the difference for 1" in that logsine, the left-hand
angles, in the left-hand column, being considered positive.  The
spherical excess for the fixed triangle is 0.30".


[Pages 125-126 of the original: the numerical example for Fig. 29, giving
the observed angles with their logsines and differences for 1", the formation
of the a's and of the factor -q/[aa], and the resulting corrections applied
with opposite signs to the left- and right-hand angles.  The figures are not
legible in this reproduction.]

The  side  equation  is  satisfied,  since  the  sums  of  the  positive 
and  negative  logsines  are  equal,  namely,  9.0918108.  Also,  the 
sum  of  all  the  angles  remains  unchanged,  and  each  of  the  three 
angles  of  the  fixed  triangle  is  the  same  as  before  the  adjust- 
ment, since  each  correction  was  applied  both  positively  and 
negatively.  An  ordinary  slide-rule  is  sufficient  for  the  arith- 
metical work,  and  after  the  sum  of  the  aa's  is  obtained,  each  v  is 
found  at  one  setting  of  the  rule.  The  above  illustration  of  the 
process  is  given  in  greater  detail  than  is  necessary  when  the 
method  is  understood. 

110.  Adjustment  of  a  System  between  Points  of  Control. 
Large  systems  of  triangulation,  such  as  the  primary  work  of  the 
U.  S.  Coast  and  Geodetic  Survey,  may  extend  over  strips  of  coun- 
try for  hundreds  of  miles.  In  such  great  distances,  errors  of 
various  kinds  are  likely  to  have  a  cumulative  effect  which  becomes 
too  great  to  be  tolerated.  It  is  necessary,  therefore,  to  control, 
or  check,  the  triangulation  at  intervals  which  will  depend  upon 
the  precision  of  the  observations,  the  points  of  control  being 
farthest  apart  in  first-class,  or  primary  systems.  The  lengths 
are  controlled  by  measured  base-lines,  the  positions,  by  astro- 
nomical observations  for  azimuth,  latitude,  and  longitude,  and 
the  elevations,  by  precise  spirit  leveling,  although  the  astro- 
nomical observations  may  really  control  all  three  elements 
of  a  system,  that  is,  its  size,  shape,  and  position.  (See  Art.  85, 
page  80.) 

In general, the controlling points for these different purposes
will not be coincident.  The observations for azimuth may not
be made at the same stations as those for latitude or longitude,
or at the base-line stations.  To illustrate the character of the
control, however, it will be assumed for example that a given sys-
tem, or net, starts at a certain line, AB, Fig. 31, whose length
and azimuth are known as well as the latitude, longitude, and ele-
vation of one of its ends, and that it extends to another line, CD,
which is fixed in the same manner, in length, direction, position,
and elevation.  This line, CD, may have been fixed by original
observations and measurements, as if it were a detached or isolated
line, or it may be a line in a previously adjusted triangulation




system  which  is  so  precise  or  so  strong  that  it  is  not  subject  to 
modification  by  the  subsequent  work. 


Fig.  31.     Triangulation  System  with  Control 


If  the  separate  elemental  figures,  such  as  quadrilaterals,  are 
adjusted  in  advance,  with  local,  side,  and  angle  equations,  and 
the  lengths  and  positions  are  computed  from  the  initial  side,  AB, 
through  the  system,  the  final  line  might  fall  at  CD'  instead  of  CD. 
If, then, all the lines of the system were flexible except AB, and
CD'  were  picked  up  and  forced  to  coincide  with  CD,  it  is  easily 
seen  that  all  of  the  lines  and  angles  would  probably  be  distorted. 
The  adjustment,  therefore,  affects  all  of  the  angles  in  the  net. 
The Base-line, or Length, Equation provides that the length of CD',
computed from AB, shall be equal to the fixed length, CD.  This
condition is similar to the length equation (C) of Art. 107, page 121,
but must extend through the whole net.  The Azimuth Equation
requires that CD' shall be parallel to CD.  The Latitude Equation
states that the latitude of a point such as C' shall be equal to the
fixed latitude of the corresponding point, C, and the Longitude
Equation  expresses  the  same  requirement  for  their  longitudes. 
It  is  evident  that  these  conditions  are  independent — that  any 
one  or  more  of  them  could  ])e  satisfied  without  forcing  the  others 
to  be  fulfilled.  For  example,  the  line  CD'  might  have  the  same 
l(>n^th  as  CD,  and  C  might  coincide  with  C,  and  still  the  azimuths 
mi.^lit  be  diff(>rent.  Of  course,  the  amount  of  the  discrepancy  is 
exaggerated  in  the  figure. 

In a rigid adjustment, all of these conditions would be com-
bined with the local, side, and angle conditions and satisfied simul-
taneously.  It is usually sufficient, and much more convenient,



however,  to  perform  the  figure  adjustments  and  then  modify 
them so as to effect the closure upon the controlling line through
the  above  four  conditions.  In  primary  triangulation  of  the  highest 
grade,  the  rigid,  complete  adjustment  may  be  required. 

When  an  extensive  system  contains  several  points  of  control, 
such  as  base-lines  or  astronomical  stations,  it  is  customary  to 
regard  the  net  as  subdivided  at  these  points  into  sections,  and  to 
adjust  each  section  independently.  This  method  has  practical 
advantages  which  outweigh  its  divergence  from  the  ideal  adjust- 
ment of an entire system as a single problem.¹

The  special  case  sometimes  occurs  in  which  a  detached  net, 
having  its  own  base-line,  but  only  approximate  astronomical 
position,  is  connected,  after  its  figure  adjustment,  to  a  fixed 
system  through  a  single  figure,  such  as  a  quadrilateral  or  central- 
point  figure.  If  the  discrepancy  in  length  between  the  two  sys- 
tems be  small,  it  may  be  thrown  entirely  into  the  intervening  figure, 
which would have, therefore, two lines fixed in length, as shown in
Figs.  32  and  33.  In  addition  to  the  usual  angle  and  side  equa- 
tions, the  figure  would  have  a  length  equation  such  as  (C)  of  Art. 
107,  page  121.  It  is  assumed  that  the  geographic  positions  for  the 
detached  system  are  to  be  obtained,  through  this  connection,  from 
the fixed one.²


Fig. 32.  Length Equation

1 For a thorough treatment of the adjustment of large systems, and for
special methods applicable to triangulation in general, see Wright and Hay-
ford's Adjustment of Observations; Special Publication No. 28 of the U. S.
Coast and Geodetic Survey, by O. S. Adams; and Jordan's Vermessungs-
kunde, Band I.

2 A method for the adjustment of triangulation by correcting the prelim-
inary latitudes and longitudes of the stations is presented by Mr. Adams in
Special Publication No. 28 of the U. S. Coast and Geodetic Survey.



111.  Adjustment  of  Trigonometric  Leveling.  The  adjust- 
ment of  the  horizontal  angles  in  triangulation  is  generally  inde- 
pendent of  the  vertical  angles,  which  will  be  used  to  compute  the 
difference   of  elevation   between  the  various  stations.     Although 


Fig.  33.     Length  Equation 

these  vertical  angles  may  be  adjusted  directly,  it  is  usually  easier 
and  at  the  same  time  satisfactory,  to  adjust  the  computed  differ- 
ences of  elevation,  and  this  is  done  by  the  method  illustrated  in 
Art.  77,  page  64.  A  long  net  may  be  divided  into  sections  to  facili- 
tate the  adjustment,  and  if  a  control  point  becomes  available  in 
the  form  of  a  station  whose  elevation  has  been  determined  directly 
through  a  line  of  precise  levels,  the  entire  net  intervening  between 
the  initial  elevation  and  this  final  one  may  be  adjusted  to  conform 
to  this  total  difference  of  elevation  by  a  slight  modification  of  the 
partial  adjustments  without  disturbing  their  conditions,  as  the 
proportionate  discrepancies  will  be  very  small  in  carefully  executed 
work. 

112.  Base-lines.  The  measurement  of  a  base-line  is  carried 
out  in  sections,  and  the  total  length  is  the  sum  of  the  sections.  It  is 
customary  to  measure  each  section  two  or  more  times,  in  both 
directions  and  under  different  conditions.  The  separate  measures 
of  a  section  are  then  adjusted  as  direct  observations,  by  taking 
their  mean  or  weighted  mean.  It  is  seldom  that  weights  are 
required,  however,  since  additional  measures  are  made  if  there  is 
too much discrepancy between the first two, and doubtful results
are  subject  to  rejection  in  the  field. 
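(The adjustment of the separate measures of a section as direct observations
is nothing more than a weighted mean; in Python, with invented figures:)

    def weighted_mean(values, weights):
        # Adjusted length of a section from its separate measures.
        return sum(v * w for v, w in zip(values, weights)) / sum(weights)

    # A hypothetical section measured three times with equal weight:
    print(weighted_mean([512.304, 512.309, 512.306], [1, 1, 1]))   # 512.3063...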


CHAPTER   VII 
EMPIRICAL  FORMULAS 

113.  Empirical  Formulas.  Experimental  investigations  fre- 
quently comprise  the  determination  of  the  values  of  a  certain 
function  corresponding  to  known,  assigned,  or  observed  values  of 
its  independent  variable.  It  is  often  desirable  to  express  the 
relation  thus  determined,  between  the  function  and  the  variable, 
in  the  form  of  an  equation.  Should  the  observations  be  the  same 
in  number  as  the  unknown  constants  or  coefficients  of  the  equation, 
a rigid solution of the problem would result, as explained in Art. 22.
But,  as  it  is  customary  to  make  a  larger  number  of  observations  in 
order  to  obtain  increased  precision,  the  problem  becomes  one  of 
determining  the  equation  which  will  best  represent  the  entire 
group  of  observations,  thus  involving  an  adjustment  by  the 
Method  of  Least  Squares.  Such  an  expression,  depending  upon 
experimental  data,  is  known  as  an  Empirical  Formula. 

114.  Their  Uses.  Empirical  formulas  are  sometimes  called 
interpolation  formulas  from  the  fact  that  one  of  their  principal 
uses  is  to  facilitate  the  interpolation  of  values  of  the  function 
among  the  observations.  The  curve  which  represents  the  formula 
is smooth and continuous and avoids the discrepancies among the
various observations, so that interpolation is usually safe and
reasonable.  However, there is generally a tendency to use the
formula beyond the limits of the observations, that is, to extra-
polate along an extension of the curve.  While this yields, in
many cases, very useful and interesting results, care must be taken
that such results be not considered trustworthy except within
reasonable  limits. 

Sometimes it seems impossible to derive a theoretical relation
between two variables, while it is evident from the observations
that some connection does exist.  Here the empirical formula
may be the only one available.

It is not essential that the relation expressed by the function



have  any  foundation  in  theory.  It  may  be  purely  accidental,  as 
is  the  case  in  many  statistical  investigations.  A  formula  may  be 
stated  between  the  death-rate  of  a  city  and  the  time  or  season, 
or  between  the  depth  of  a  pond  and  the  distance  from  the  shore. 
However,  the  existence  of  a  close  relationship,  such  as  cause  and 
effect,  is  sometimes  indicated  by  an  empirical  formula,  resulting 
in  the  subsequent  development  of  the  rigid  formula  by  theoretical 
analysis.  In  this  manner  some  well-known  laws  have  been  dis- 
covered. 

115.  Nature  of  the  Problem.  Equations  may  be  partially  or 
wholly  empirical.  For  example,  the  form  of  the  expression  may  be 
developed  theoretically  and  regarded  as  known,  leaving  only  the 
numerical  constants  and  coefficients  to  be  obtained  empirically. 
Or,  nothing  whatever  may  be  known  concerning  the  formula,  in 
which  case  it  is  necessary  to  assume  a  form  for  the  equation  and 
then  determine  the  constants  by  an  adjustment  of  the  observa- 
tions. In  any  event,  the  problem  is,  to  ascertain  those  constants 
which  will  make  the  given  expression,  whether  of  previously  known 
or  assumed  form,  represent  the  observations  as  nearly  as  possible. 
Should  there  be  uncertainty  as  between  different  forms  which 
could  be  assumed,  or  should  the  residuals  resulting  from  a  solution 
be  unsatisfactorily  large,  one  or  more  other  forms  may  be  assumed 
and  the  constants  be  determined  for  each  of  them,  that  one  being 
finally  adopted  for  which  the  sum  of  the  squares  of  the  residuals 
is  the  least. 

116.  The  Form  of  the  Equation  may  be  known  from  theoretical 
considerations,  as  when  it  is  a  special  case  of  a  group  of  expressions 
the  nature  of  which  is  known.  But  in  the  great  majority  of  cases, 
it  must  be  obtained  from  the  observations  themselves.  This  is 
conveniently  done  by  plotting  them  as  rectangular  coordinates, 
representing  the  values  of  the  function,  y,  as  ordinates,  and  those 
of  the  independent  variable,  x,  as  abscissas,  each  point  thus  plotted 
corresponding  to  one  observation.  A  smooth  curve  is  then 
sketched  so  as  to  follow  the  plotted  points  as  nearly  as  prac- 
ticable. An  inspection  of  this  curve  will  generally  throw  it  into 
one of three classes, namely:  (1) a portion of a conic section,
such  as  a  straight  line  or  a  parabola;  (2)  a  periodic  or  wave-like 



curve; or (3) a curve which is non-linear with respect to the
unknown coefficients, that is, one which involves their products,
squares, or higher powers, or functions.¹

To  assist  in  the  selection  of  a  suitable  form,  a  number  of  curves, 
with  their  equations,  are  shown  in  Appendix  E.  Apparent  prop- 
erties of  the  desired  curve  should  be  carefully  noted,  as  positions 
of  axes,  asymptotes,  points  of  inflection,  points  of  crossing  of 
axes,  maxima  and  minima,  regular  or  irregular  periodicity,  etc., 
so  that  the  equation  selected  may  be  capable  of  representing  these 
features.  In  general,  however,  it  is  convenient  to  utilize  an  expres- 
sion in  the  form  of  a  series  which  can  embrace  all  the  curves  in  a 
certain  group.  This  is  particularly  useful  in  the  first  two  of  the 
above  classes  of  curves,  and  will  now  be  illustrated. 

117.  Straight  Lines  and  Parabolic  Arcs.  The  simpler  curves 
vary  from  the  straight  line,  through  the  forms  which  appear  uni- 
formly curved,  to  those  in  which  the  sharpness  of  curvature 
increases  or  decreases  continuously  in  one  direction.  It  is  possible 
to  represent  any  of  these  by  a  series  of  the  form, 

y = a + bx + cx^2 + dx^3 + ex^4 +  .  .  .                       (140)

The  character  of  the  curve  will  determine  the  number  of  terms  to 
be  used  in  this  equation.  Thus,  if  a  straight  line  be  desired, 
the  first  two  terms  would  suffice,  giving, 

y = a + bx                                                       (141)

If  the  curvature  is  slight,  or  if  the  curve  straightens  towards  one 
end,  the  parabolic  form  may  be  assumed, 

y = a + bx + cx^2                                                (142)

Or, if it be desired to represent the plotted points still more closely,
one or more terms may be added, the principle being that the
greater the number of terms used, the more nearly will the result-
ing formula fit the observations.  If an unnecessarily large num-
ber of terms is used, the coefficients of those which might be omitted

1 It must be remembered that in the derivation of empirical formulas, the
variables, x and y, are not the unknowns as they are in the Adjustment of
Indirect Observations, Chapter III.  Here, the variables are the observed
quantities and the coefficients are the unknowns which are to be determined.
As will appear later, the methods of solution are analogous.



will  come  out  quite  small  or  negligible,  and  a  re-solution  with  the 
simpler  form  may  be  advisable. 
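(Fitting the parabolic form (142) is an ordinary least-squares solution with
the coefficients a, b, c as the unknowns.  The following Python sketch forms
and solves the three normal equations directly; the observations in the
example are invented.)

    def fit_parabola(xs, ys):
        # Least-squares fit of y = a + b*x + c*x^2, equation (142).
        # Power sums [x^k] and right-hand members [x^k y] of the normal equations.
        s = [sum(x ** k for x in xs) for k in range(5)]
        t = [sum((x ** k) * y for x, y in zip(xs, ys)) for k in range(3)]
        m = [[s[0], s[1], s[2], t[0]],
             [s[1], s[2], s[3], t[1]],
             [s[2], s[3], s[4], t[2]]]
        # Gaussian elimination followed by back-substitution.
        for i in range(3):
            for j in range(i + 1, 3):
                f = m[j][i] / m[i][i]
                for k in range(i, 4):
                    m[j][k] -= f * m[i][k]
        coef = [0.0, 0.0, 0.0]
        for i in range(2, -1, -1):
            coef[i] = (m[i][3] - sum(m[i][k] * coef[k] for k in range(i + 1, 3))) / m[i][i]
        return tuple(coef)    # (a, b, c)

    # Invented observations lying near y = 1 + 2x + 0.5x^2:
    print(fit_parabola([0, 1, 2, 3, 4], [1.0, 3.6, 6.9, 11.4, 17.1]))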

It  should  be  noted  that  the  straight  line  is  a  special  case,  and 
that  although  the  plotted  points  seem  to  lie  very  close  to  such  a 
line  it  is  usually  best  to  use  the  formula  for  a  parabola  and  obtain 
a  curve  which  approximates  closely  to  the  straight  line.  This 
parabolic  form  is  of  very  general  application  for  empirical  formulas 
because  of  its  convenience  and  adaptability. 

118.  Periodic  Functions.  If  the  curve  is  composed  of  similar 
elements  which  repeat  themselves  as  x  increases,  the  function  is 
evidently  periodic,  that  is,  the  values  of  y  corresponding  to  increas- 
ing values  of  x  will  pass  through  similar  cycles  or  periods.  The 
curve  in  many  cases  will  have  a  wave-like  form,  and  it  may  be 
simple  or  very  complex.  The  general  formula  to  be  used  is  a 
Fourier  series, 

    y = a + b sin (360°/m)x + c cos (360°/m)x
              + d sin 2(360°/m)x + e cos 2(360°/m)x +  .  .  .       (143)

in which a, b, c, etc., are the constants to be determined.¹  By
using  a  sufficient  number  of  terms,  this  equation  may  represent 
any  curve  whatever,  for  finite  values  of  the  variables,  but  in 
the  case  of  periodic  functions  it  is  particularly  useful.  If  the  ele- 
mentary parts  of  the  curve  are  alike  and  not  complicated,  the 
first  three  terms  will  be  sufficient;  otherwise,  succeeding  pairs 
of terms should be added, involving the multiples of x.  Unless a
complex  formula  is  expected,  it  is  well  to  sketch  each  wave  in  the 
curve so it will be symmetrical about its middle ordinate.  If the
total angle corresponding to a cycle should be 180° instead of
360°, this number should be substituted for the latter in the for-
mula.

The constant quantity, m, is the number of units of x in one
cycle or period, and is assumed from an inspection of the curve
and the observations.  For example, suppose the brightness of a

1 For an interesting application of harmonic analysis to this problem, see
Brunt's Combination of Observations, Chapter XI.



variable  star  to  be  observed  from  day  to  day,  and  when  plotted  as 
a  function  of  the  time  to  seem  to  have  a  period  of  about  nine  days. 
Here,  x  would  be  the  number  of  days  elapsed  since  an  assumed 
epoch  (such  as  the  date  of  the  first  observation)  and  m  would  be 
assumed as 9.  Thus, x/m is an abstract number; 360°x/m is a
number of degrees; and 360°/m is a constant coefficient of x in
any  single  problem.  Different  values  of  m  may  be  assumed,  and 
the  problem  solved  for  each,  if  deemed  worth  while,  that  one  being 
adopted  for  which  the  sum  of  the  squares  of  the  residuals  is  the 
least.  In  determining  the  period,  m,  from  the  curve,  it  is  well  to 
measure  it  at  several  places,  if  possible,  and  take  the  mean. 

AVhen  the  empirical  formula  of  this  periodic  type  has  been 
determined,  it  may  be  transformed  into  a  more  convenient  expres- 
sion in  the  following  manner:  Let  hq,  ni,  n^,  Ni,  N-z,  etc.,  be 
auxiliary  quantities  determined  from  the  assumptions, 

no  =  a ;        /; i  sin  A'l  =  6 ;        ^2  sin  A"^  = '/ ; 

??icosA^i  =  c;       n2CosA^2  =  c;   etc.  (l-l-i) 

Substituting  in  (143),  and  combining,  we  have 

/360°        ^    \  /360°          ^    \  ,       , 

7/  = /(()  +  /( 1  cos     .r  — A  1  )+/;■>  cos 2.r  — A2)+   •   •   •    (145) 

\  m  /  \  m  J 

which  is  shorter  than  (143).     From  (144), 

h  ,,       h 

n\^-: — ^-,         tanAi=-,   etc. 
sm  A  1  c 

119.  Non-linear  Forms.  As  stated  in  Art.  38,  equations  of 
higher  dogrco  can  be  reduced  to  linear  form,  in  general,  by  Taylor's 
Theorem,  and  in  the  case  of  exponential  equations  by  the  use  of 
logarithms.  Thus,  it  is  not  necessary  to  treat  these  higher  degi-ee 
expressions  (^xcept  by  reducing  to  the  linear  form  and  then  applying 
th(>  usual  methods.  These  j^i'ocesses  of  i-(Hlu('tion  will  now  be 
(explained.  They  are  ajjplieable,  of  (■ours(\  to  e(iuatioiis  wliicli 
ai'e  non-linear  as  to  the  independent  variable  as  well  as  to  those 
which  are  non-linear  as  to  the  eoc^ffieients. 

120.  Exponential  Functions.  I'(iuations  in  which  the  unknown 
constant  occurs  as  an  exponent  constitute  a  special  ca^e  tor  I'ecluc- 
tion  to  linear  form  which,  owing  to  its  simplicity-,  will  he  discussed 


136  PRACTICAL  LEAST  SQUARES 

first.  In  brief,  the  method  is  to  throw  the  equation  into  the 
logarithmic  form,  by  taking  the  logarithm  of  each  member,  and 
the  resulting  function  will  be  linear  with  respect  to  the  desired 
coefficient.     Suppose  the  function  to  be  of  the  form 

2/  =  arc^  (145) 

in  which  a  and  b  are  to  be  determined  so  as  to  fit  all  of  the  observa- 
tions as  well  as  possible.     Taking  the  logarithm  of  each  member, 

log  y  =  \og  a-\-h  log  X  (146) 

which  has  the  linear  form 

y'  =  A-]-bx'  (147) 

where  A  and  b  are  the  unknown  constants. 

By  plotting  log  x  and  log  y  as  coordinates,  or  by  using  loga- 
rithmic cross-section  paper  for  plotting  x  and  y,  the  above  exponen- 
tial formula  would  be  represented  by  a  straight  line.  Thus  the 
assumption  of  this  form  of  equation  can  be  easily  checked. 

Special  attention  must  be  given  to  the  weights  in  this  case  of 
exponential  functions,  for  the  weights  of  the  reduced,  linear  equa- 
tions will  not  be  the  same  as  before  reduction  to  the  linear  form, 
even  though  they  were  then  equal.^  If  the  weights  of  the  original 
observations  of  yi,  y2,  ys,  etc.,  are  wi,  W2,  ws,  etc.,  the  correspond- 
ing weights  of  the  functions,  log  yi,  log  y2,  log  yn,  etc.,  will  be  yi^wi, 
y-i^wo,  yii^ws,  etc.-  Or,  if  the  original  weights  are  equal,  the 
reduced  equations  will  be  weighted  directly  as  the  squares  of  the 
corresponding  observed  values  of  y.     If  the  empirical  formula 

1  This  matter  was  first  brought  to  the  attention  of  the  author  several  years 
ago,  bj-  Mr.  C.  K.  \'an  Orstrand. 

2  It  will  be  shown  in  the  next  chapter  (Art.  143)  that  the  weights  are  in- 
versely as  the  squares  of  the  mean  square  errors,  and  that  (Art.  152)  the  mean 
square  error  of  a  function  of  ?/  is  equal  to  the  mean  square  error  of  y  multiplied 
by  the  derivative  of  the  function  with  respect  to  //.     Thus, 

(I  (log  i/)        1 
€iog  !/  =  «;/ 7"~"  =  f2/-  (l-47a) 

dy  y 

and 

"'iogj/  =  "Vy'  (1476) 

the  mean  sfpiare  errors  being  represented  by  e. 


EMPIRICAL  FORMULAS  137 

follows  the  observations  very  closely,  however,  as  is  usually  the 
case,  these  weights  will  not  have  much  effect.  In  fact,  the  errors 
of  observation  may  warrant  neglecting  them  in  most  cases. 

121.  General  Case  of  Reduction  to  Linear  Form.  A  simple 
example  of  an  equation  of  the  non-linear  form  with  respect  to 
the  coefficients  would  be  the  following: 

2y  =  «2+53a;+c.T2+r/'V+   •  •  .  (148) 

Thus,  each  observation  equation  would  be  a  function  of  a,  b,  c,  d, 
etc.,  since  x  and  y  would  be  the  observed  numerical  quantities, 
so  that  if  the  observed  values  of  the  function,  y,  be  represented 
as  usual  by  Mi,  M2,  Mz,  .  .  .  Mn,  the  observation  equations 


would  have  the  form, 

fi(a,  h,  c,  . 

.  .  )=Mi 

hia,  h,  c,  . 

.  .  )-M2 

h{a,  h,  c,  . 

.  .  )=Mz 

(149) 


fn{a,  b,  C,    .    .    .    )=Mn 

The  functions  /i,  /2,  fs,  etc.,  on  the  left-hand  side  of  these  equa- 
tions are  different  owing  to  their  having  cHfforont  nuuKM-ical 
values  of  x.  Now  let  the  ])est  or  most  probable  values  of  a,  b,  c, 
etc.,  namely,  those  which  will  result  from  this  solution,  l)e  A,  B,  C, 
etc.,  and  let  Ao,  Bo,  Co,  etc.,  represent  approximate  values  of 
A,  B,  C,  etc.,  determined  bj-  the  solution  of  some  of  the  ()])sei-va- 
tion  equations  as  simultaneous  equations.     Then  let 

.l=.4o  +  a' 

B^Bo  +  b'  (!.-)()) 

C=Co  +  c' 


in   which   a',    //,    c' ,   .   .   .  are   small    correclions   lo   \\\v   assumed 


138  PRACTICAL  LEAST  SQUARES 

approximate   values,   to   be   determined   by   this   solution.     The 
observation  equations  may  now  be  written, 

/i(Ao+a',     5o+fo',     Co+c',    .    .    .   )=Mi+yi 

f2{Ao+a',     Bo+h',     Co+c',   .    .    .  )=M2+V2      (151) 

/3(Ao+a',     5o  +  &',     Co  +  c',    .    .    .   )=M;+Vi 


the  residuals  being  represented  by  fi,  y2,  vs,  .  .  . 

These  functions  will  now  be  expanded  by  Taylor's  Theorem. 
The  unknown  corrections,  a' ,  h',  c',  .  .  .  being  small,  it  is  per- 
missible to  neglect  the  terms  involving  their  products  and  higher 
powers.  The  constant  terms, /i(Ao,  -Bo,  Co,  .  .  .),f2(Ao,  Bo,  Co,  .  .  .), 
etc.,  will  be  combined  with  the  corresponding  Mi,  M2,  etc.,  and 
represented  hy  l\,  h,  etc.,  thus, 

MAo,Bo,Co,  .  .  .  )-Mi  =  h  (152) 

The  observation  equations  will  then  become, 

cIAq        aBo        clCo 

1    .^-^^  df2         df2   , 

t2  +  7T-a'+;T^6'+-^77-c'+    .    .    .    =V2  (lo3) 

dAo        dBo        dCo 


which  are  linear  with  regard  to  a',  1/ ,  c' ,  .  .  .  The  differential 
coefficients  arc  obtained  by  differentiating  the  left-hand  members 
of  (149)  with  respect  to  a,  b,  c,  etc.,  and  then  substituting  for 
these  quantities  their  approximate  values,  Aq,  Bo,  Co,  etc.  If  now 
we  let  the  differential  coefficients  be  represented  by  ai,  hi,  Ci, 
etc.,  with  the  subscripts  of  the  corresponding  equations,  we  obtain, 

aia'  +  6i6'  +  cic'+    .    .    .    +/i=t'i 

a2a'^b2l/  +  C2c'-\-    .    .    .    +?2  =  ?^'2  (154) 


an(i'  +  hJ)'-\-Cnc'-^    .    .    .    +/„  =  /'„ 

which  arc  similar  to  (18),  page  27.  Normal  ec}uations  having  boon 
formed  as  in  (21)  or  (22),  tlioir  solution  in  tlie  usual  manner  results 
in  the  dc^sired  corrections,  a',  h' ,  c' ,  etc.,  which  applied  to  the 
approximate  values,  yio,  Bo,  Co,  etc.,   as   in  (150),  give   the  most 


EMPIRICAL  FORMULAS  139 

probable  values,  A,  B,  C,  etc.  From  these,  the  desired  non- 
linear coefficients  of  the  original  equation  are  computed  directly, 
giving  finally  the  empirical  formula  sought. 

If  the  observations  are  of  different  weight,  the  general  form  of 
normal  equations,  (21),  would  be  used  as  in  Indirect  Observa- 
tions, Chapter  III. 

122.  Determination  of  the  Constants.  The  plotted  observa- 
tions having  been  investigated  and  a  suitable  form  selected  for  the 
eciuation,  reduced,  if  necessary,  to  the  linear  form  as  just  explained, 
it  remains  to  form  the  observation  equations  and  from  them  the 
normal  equations,  the  solution  of  which  is  to  give  the  desired 
constants  for  the  empirical  formula.  In  general,  it  is  similar  to 
the  case  of  Indirect  Observations,  and  the  methods  of  Chapter  III 
arc  applicable.  The  function  will  be  stated  in  the  explicit  form, 
y^J{x),  although,  of  course,  these  quantities  may  be  reversed,  if 
desired,  to  fit  the  conditions,  into  x—f{y),  which  form  may  some- 
times be  simpler  than  if  fractional  exponents  were  used. 

The  observation  equations  are  formed,  one  for  each  observa- 
tion, by  substituting  for  x  and  y  their  observed  values.  The 
processes  of  Arts.  48  and  49  may  be  utilized  for  the  simplification 
of  the  equations,  and  the  normal  eciuations  will  take  the  form 
of  (22)  or  (21)  according  as  the  weights  are  equal  or  unequal. 
The  solution  of  the  normal  et^uations  will  be  carried  out  by  the; 
usual  methods,  and  the  resulting  values  of  the  unknowns,  modified 
as  necessary,  will  furnish  the  constant  term  and  coefficients  of 
the  empirical  formula. 

123.  Test  of  Empirical  Formula.  There  are  two  methods  of 
determining  how  closely  the  formula  corresponds  to  the  observa- 
tions, namely,  ])y  plotting  the  curve  of  the  formula  and  by  com- 
})uting  the  residuals. 

The  residuals  arc^  f()fin(Ml  by  substituting  the  observed  values 
of  the  varia1)le,  .r,  in  tli(^  euipirical  formula  and  computing  the  cor- 
responding values  of  y.  Subtracting  from  tliesc^  the  observed 
values  of  ?/,  we  obtain  the  residuals  with  the  signs  of  corrections 
to  the  observations.  The  sum  of  the  sc^uares  of  th(>se  i'(\si(luals 
is  the  quantity  which  should  be  a  mininunn  if  the  empirical 
formiila  is  the  most  probable  one. 

Having  ploftcd  the  values  of  y,  coinpu1(Ml  as  above  from  the 


140  PRACTICAL  LEAST  SQUARES 

formula,  the  need  of  other,  intermediate  values  in  order  to  accu- 
rately define  the  curve  may  be  seen  at  once  and  such  values  com- 
puted and  plotted  and  the  curve  drawn  by  means  of  a  French 
curve.  If  this  be  done  on  the  sheet  showing  the  original  observa- 
tions, the  value  of  each  residual  is  shown  to  scale  by  the  vertical 
distance  from  the  corresponding  observation  up  or  down  to  the 
curve,  according  as  the  residual  is  plus  or  minus,  measured  on  its 
ordinate.  Inspection  of  these  graphical  residuals  will  determine 
whether  or  not  another  form  of  curve  should  be  assumed  and  the 
work  repeated  in  order  to  find  a  closer  approximation  to  the  obser- 
vations. If  this  should  be  done,  the  sums  of  the  squares  of  the 
residuals  in  the  two  cases  would  be  compared  and  that  formula 
adopted  for  which  this  sum  is  the  smaller.  In  a  case  of  great 
importance,  especially  one  that  involves  a  large  number  of  observa- 
tions, several  trials  of  this  kind  might  be  made  in  order  to  obtain 
the  best  formula. 

124.  Remarks.  The  above  method  of  deriving  empirical 
formulas  is  evidently  closely  analogous  to  the  adjustment  of 
Indirect  Observations,  that  is,  observations  of  a  function  of  several 
quantities,  and  it  must  be  borne  in  mind  that  in  this  method  the 
errors  of  observation  are  assumed  to  lie  in  the  values  of  the  func- 
tion, y,  and  not  in  those  of  the  variable,  x.  At  least,  the  errors  in  x 
are  assumed  to  be  negligible  in  comparison  with  those  of  tj} 

A  final  word  of  caution  must  be  added  with  regard  to  the  use 
of  the  empirical  formula.  In  general,  it  is  safe  to  use  it  within 
the  range  of  the  observations,  that  is,  in  interpolation;  but 
only  in  very  exceptional  cases  should  it  be  depended  upon  for 
extrapolation,  outside  of  these  limits.  Duncan  ^  cites  the  example 
of  the  stress-strain  curve,  which  is  practically  a  straight  line  until 
the  clastic  limit  is  reached,  but  which,  at  that  point,  suddenly 
breaks  into  a  sharp  curve.  An  extrapolation  from  the  straight 
line  would  be  greatly  in  error. 

Again,  it  nuist  be  emphasizcnl  that  the  form  of  the  empirical 
cfiuation  is  assumed  at  the  outset  and  from  considerations  outside 

'  For  an  investigation  of  the  case  when  x  and  y  arc  equally  subject  to  error, 
see  Report  of  C.  &  G.  Survey,  1890,  page  687,  or  ^\'right's  Adjustment  of 
Observations,  Art.  106. 

2  Practical  Curve  Tracing  l)y  R.  II.  Duncan. 


EMPIRICAL  FORMULAS 


141 


of  the  Method  of  Least  Squares.  From  that  point  as  a  beginning, 
this  method  determines  the  best  values  of  the  coefficients  for 
that  form  of  equation,  but  unless  a  suitable  form  has  been  selected 
the  resulting  empirical  formula  may  be  no  better  than  a  rough 
guess.  Therefore,  great  care  should  be  exercised  in  choosing  the 
form  of  the  equation. 

When  the  observed  data  are  few  and  widely  scattered,  it  is 
scarcely  worth  while  to  go  to  the  trouble  of  a  Least  Squares  adjust- 
ment to  establish  an  empirical  formula.  In  such  a  case,  it  is 
usually  sufficient  to  sketch  a  smooth  curve  through  the  plotted 
observations  and  to  determine  the  constants  of  the  curve  by 
scaling  various  elements  from  it,  in  connection  with  its  known 
properties.  In  particular  is  this  method  applicable  to  straight 
lines  and  to  those  hyperbolic  forms  which  appear  as  straight  lines 
when  plotted  on  logarithmic  paper. 

125.  Example :  Straight  Line.  Let  it  be  required  to  derive 
a  formula  which  shall  fit  the  following  observations  as  nearly  as 
possible,  preference  being  given  to  a  straight  lino,  if  reasonable. 


.r 

y 

X 

y 

-1.0 

+  14.0 

+  14.0 

+5.0 

+  1.0 

13.0 

17.0 

2.9 

5.0 

10.7 

20.0 

1.0 

9.0 

8.0 

LTpon  plotting  these  observations,  as  in  Fig.  34,  it  is  seen  that 
they  fall  nearly  in  a  straight  line,  so  we  shall  assume  the  form 

Y 


0  o  19  JJ  ZO 

Fig.   34.     Straight  Line  and  Parabola 


142 


PRACTICAL  LEAST  SQUARES 


5.0  =  0 
2.9  =  0 
1.0  =  0 


of  the  equation  to  be,  y  =  A-\-Bx,  A  and  B  to  be  determined. 
Substituting  the  observed  data  in  this  form  and  reversing  the  order 
we  obtain  the  equations, 

-     5+^-14.0  =  0 

+     5+^-13.0  =  0 

+  5S+A-10.7  =  0 

+  9B-\-A-  8.0  =  0  (155) 

+  14B+A- 

+  175+A- 

+20jB+^ 

Considering  these  to  have  the  form,  aiA  +  6i5+/i  =yi,  to  cor- 
respond to  equations  (18),  the  normal  equations  take  the  form  of 
(22),  and  become 

+9935+65A -263.8  =  0 

+  65B+  7 A-   54.6  =  0  (156) 

the  solution  of  which  gives  A  =  +13.62,  and  5=  —0.63,  so  that 
the  required  empirical  formula  is, 

^=13. 62 -0.63a:  (157) 

Substituting  the  values  of  A  and  B  in  (155),  with  the  original 
values  of  X,  the  computed  values,  ?/',  are  obtained,  and  subtracting 
from  these  the  corresponding  observed  values,  ij,  we  find  the  resid- 
uals V,  which,  for  further  reference,  will  be  squared  and  added. 


■  -^ "/ 

.1- 

y 

y' 

V 

r- 

-1.0 

+  14.0 

+  14.25 

+  .2o 

.  0()2.-, 

+  10 

1.3 . 0 

12.99 

-.01 

1 

5 . 0 

10.7 

10.47 

-.23 

529 

9.0 

SO 

7.9.-) 

-  .  0.") 

25 

14.0 

Ti.O 

4.80 

-.20 

400 

17.0 

2.9 

2.91 

+  .01 

1 

20.0 

1.0 

1.02 

+  .02 

4 
=  .1.5S5 

0.0 

+  13.62 

+21.0 

0.0 

EMPIRICAL  FORMULAS  143 

The  line  is  easily  plotted  from  the  points  where  it  crosses  the 
axes,  that  is,  where  x  =  0  and  where  y  =  0,  which  have  been  added 
to  the  above  table.  It  is  shown  in  Fig.  34.  The  residuals  are 
indicated  as  the  vertical  distances  of  the  observations  from  the 
plotted  line. 

126.  Example:  Parabola.  From  the  observations  in  the 
preceding  article,  let  us  determine  a  curve  instead  of  a  straight 
line,  using  the  parabolic  form, 

y^A^Bx+Cx^  (158) 

Substituting  the  observed  values  of  x  and  y,  and  reversing  the 
order,  we  obtain  the  observation  equations, 

C-     5+^-14.0  =  0 

C+     5+^-13.0  =  0 

25C+  55+^-10.7  =  0 

81C+  95+ A-   8.0  =  0  (159) 

196C+145++-   5.0  =  0 

289C+175+A-   2.9  =  0 

400C  +  205+.4-    1.0  =  0 

In  order  to  reduce  the  coefficients  of  the  first  two  unknowns,  we 
l(>t  C  =  \QOC,  and  B'  =  105,  as  in  Art.  49.     Then  we  have, 

.OIC-    .15'  +  .4- 14.0  =  0 

.01C'+    .15'+.4-13.() 

.25(:"+    .55'  +  .! -10.  7 

.Sir+    .95'  +  .!-   S.O  (160) 

1.9()r'+I.45'  +  .l-  5.0 
2.S9("  +  1.75'  +  .4-  2.9 
4.()()r"  +  2.()/r  +  .l-    1.0 


144  PRACTICAL  LEAST  SQUARES 

The  resulting  normal  equations  are, 


c 

ir 

A 

+28.91 
+  16.50 
+  9.93 

+  16.50 
+  9.93 
+  6.50 

+9.93 
+6.50 
+7.00 

-31.61 
-26.38 
-54.00 

=  0 

(161) 


and  their  solution  yields  the  values,  .4.  =  +13.52,  B'=— 5.63, 
and  C'=-0.34.  Then  B  =  0.15'= -0.56,  and  C  =  0.01C'  = 
—  0.0034.     The  empirical  formula  is,  therefore. 


y=  13.52-  0.56a;  -  0.0034^2 


(162) 


The  similarity  of  the  first  two  terms  of  the  second  member  to 
those  of  (157),  as  well  as  the  very  small  coefficient  of  x',  indicates 
that  the  curve  approximates  closely  to  the  straight  line  of  the 
previous  article.  However,  we  shall  investigate  the  residuals  to 
see  how  closely  the  observations  are  followed.  Computing  the 
values  of  y  and  designating  them  by  y'  as  before,  we  find: 


X 

y 

y' 

V 

r2 

-    1.0 

+  14.0 

+  14.08 

+0.08 

.0064 

+   1.0 

13.0 

12.96 

-       4 

16 

5.0 

10.7 

10.64 

-        6 

36 

9,0 

8.0 

8.20 

+     20 

400 

14.0 

5.0 

5.01 

+        1 

1 

17.0 

2.9 

3.02 

+      12 

144 

20.0 

1.0 

0.96 

-       4 

16 

,()()77 

Evidently,  this  curve  is  much  closer  to  the  observations  than  is 
the  straight  line.  The  residuals  are  smaller  and  the  sum  of  their 
scjuares  is  smaller  bj"  more  than  half.  The  plotted  curve  is 
shown,  also,  in  Fig.  34,  wIkm'c  its  advantages  are  appai'ent. 


EMPIRICAL  FORMULAS 


145 


127.  Example:  Exponential  Curve.  The  following  observa- 
tions are  plotted  in  Fig.  35,  and  an  exponential  curve  seems  reason- 
able to  assume.     In  order  to  investigate  the  equation  more  in 


X 

y 

log  X      [       c 

[iff. 

log  y 

diff. 

y- 

0,2 

4.6 

9.301 

0.663 

21 

0.6 

5.8 

9.778 

48 

0.763 

.10 

34 

1.2 

7.6 

0.079 

30 

0.881 

.12 

58 

1.6 

9.6 

0.204 

12 

0.982 

.10 

92 

2.0 

11.5 

0.301 

10 

1.061 

.08 

132 

2.4 

14.4 

0.380 

08 

1.158 

.10 

207 

2.8 

17.5 

0.447 

07 

1.243 

.08 

306 

3.0 

20.0 

0.477 

03 

1.301 

.06 

400 

detail,  the  common  logarithms  of  x  and  y  are  tabulated,  also,  with 
their  successive  differences. 


20 

r 

a/ 

i               1 

/ 

1            ' 
! 

1 

/ 

/ 

! 

/! 

1 

10 



/t 

i 

y\     log  y=0.G3-h  o.nx 

>^            1 

1 

^^^ 

0 

!            !             i 

X 

'J 

• 

J 

1, 

J 

Fk;.    35.      l-"-x]K)n(nitial  Function;    Siini)l('  Plotting 


It  i.--  evid('nt  froiu  uu  insi)ection  of  these  (liff(>rciic('.^  tliat  there  is 
no  .-^traight-line  relation  between  log  x  and  log  //,  and  plotting 


146 


PRACTICAL  LEAST  SQUARES 


these  values  as  coordinates  shows  a  distinct  curve,  in  Fig.  36. 
However,  the  differences  in  log  y  are  seen  to  correspond  quite 


Log  X 


Fig.  36.     Exponential  Function;  Logarithmic  Plotting 

closely  with  those  in  x  itself,  and  this  is  verified  b}'  plotting  x 
and  log  y,  in  Fig.  37.     Therefore,  the  desired  equation  will  have 


Log 

1.5 

Y 

J^ 

^ 

^^ 

^ 

^ 

1.0 

^^ 

log  y 

=  0.63 

t-  0.22 

t 

0.5 

0 

1 

0 

^ 

0 

3 

0 

Fig.  37.     Exponential  Function;  Semi-logarithmic  Plotting 

the  form, 

\ogy  =  A+Bx  (163) 

or,  if  A  =log  A', 

?/-^l'(10^^)  (164) 

Writing  the  former  equation  in  the  ustial  order  for  observation 
ec^uations, 

Bx+A-\ugy  =  0  (165) 

Substituiing   the  values  (jf  x  and  log  y  from  the  tabl(>,   abov(>, 


EMPIRICAL  FORMULAS 


147 


and  carrying  the  numerical  work  to  two  places,  only,  we  have, 

Wt. 
0.25+A-0.66  =  0  0.2 
0.65+A-0.76  0.3 

1.25+^-0.88  0.6 

1.6B+A-0.98  0.9  (166) 

2.05+A-1.06  1.3 

2.45+^1-1.16  2.1 

2.85+A-1.24  3.1 

3.05+A-1.30  4.0 

The  weights  of  the  original  observations  are  assumed  equal. 
Those  of  log  y,  and  the  observation  equations,  will  then  be  directly 
as  the  squares  of  the  ?/'s.  In  the  table  these  have  been  divided  by 
100  to  lessen  numerical  labor. 

The  normal  equations,  formed  in  accordance  with  (21),  are, 
+80 .  885+30 .  70A  -  37 .  18  =  0 
+30.705+12.50^-14.63  =  0  (167) 

and  from  their  solution,  5= +0.22  and  A  =  +0.63,  so  that  the 
empirical  formula  is, 

log  y  =  0. 63+0. 22x  (108) 

or, 

a;  =  4. 55  log  ?/- 2. 86  (169) 

or, 

7/ =  4. 27(10"-"^)  (170) 

Computing  the  values  of  log  y'  and  from  them  those  of  y\ 
corresponding  to  the  successive  values  of  x,  and  forming  the 
residuals,  we  obtain  tlie  following  table: 


.(• 

li'S  //' 

1"«  // 

'■i 

//' 

i          // 

V-l 

0.2 

.07 

.  00 

+  .01 

4.7 

4.0 

+0.1 

O.G 

.70 

.70 

0 

") .  8 

.') .  8 

0 

1.2 

.89 

,88 

1 

7.8 

7.0 

2 

l.f) 

.98 

.98 

0 

9.0 

9 . 0 

0 

2.0 

1.07 

1.00 

1 

11.8 

11,.") 

3 

2  4 

1.10 

■     1.10 

0 

14.4 

14  4 

0 

2 .  S 

1 .  25 

1.24 

+  .01 

17,8 

17.5 

+0.3 

:>  0 

1.29 

1  :]() 

-  .  01 

19  .-) 

20 . 0 

-0.5 

148 


PRACTICAL  LEAST  SQUARES 


The  curve  is  plotted  in  Fig.  35,  and  the  straight  Une,  using  log  y, 
in  Fig.  37.  The  residuals  of  log  y,  in  the  column  headed  vi,  are 
practically  negligible;  those  of  y,  called  vo,  are  somewhat  larger, 
and  increase  numerically  with  x.  This  may  seem  surprising  in 
view  of  the  increasing  weight  used,  and  in  order  to  illustrate  this 
effect,  the  normal  equations  were  formed  a  second  time  without 
considering  weights  at  all,  and  solved  with  the  following  equation 
as  a  result : 

log  t/= +0.61+0. 23a;  (171) 

The  residuals  of  log  y  are  about  the  same  as  before,  but  those  of 
y  are,  respectively,  —1,  0,  0,  0,  +.3,  0,  +.7,  and  0,  indicating 
the  diminishing  weights.  However,  the  curve  follows  the  obser- 
vations so  closely  that  the  weights  have  little  effect  upon  the  empir- 
ical formula. 

128.  Example :  Periodic  Curve.  The  following  set  of  observa- 
tions is  given  for  the  purpose  of  determining  the  equation  which 
will  best  represent  them.     They  are  of  equal  weight. 


X 

y 

X 

y 

+0.1 

+8.0 

+  1.8 

+  1.0 

0.5 

+6.8 

2.1 

-2.0 

0.9 

0.0 

2.-4 

+5.0 

1.2 

+0.5 

2.7 

+9.5 

1  ..", 

+4.5 

'.\:2 

+  1.0 

These  data  are  plotted  in  Fig.  38  and  from  a  curve  sketched  through 
the  points  it  is  evident  that  the  function  is  a  periodic  one.     Also 

Y 

lO.C 


y=.3.00-2.2',  siti  ir>n.r^t.:.',  ms  imr°+3..93  .^iti  30nx°+n.0D  rns  SOOx' 
Yic.   :^)S.     Compound  Periodic  Curve 


EM  PI  RICA  L  FORM  ULA  S 


149 


the  waves  occur  in  pairs,  one  large  and  the  next  smaller.  The 
cycle  or  period  is  completed  in  approximately  2.4  units  of  x,  and 
this  value  will  be  assumed  for  m  in  (143).  Owing  to  the  fact  that 
the  waves  in  the  curve  are  not  equal,  the  first  five  terms  of  (143) 
will  be  used,  namely, 

.    360°       ^        360° 
y  =  A-^B  sm  x-\-C  cos x 


.    360°         ^        360° 

-\-D  sm 2x-\-E  cos 2x 

m  m 


(172) 


which  becomes,  upon  inserting  the  above  value  of  m, 

y^A^B  sin  150.t°  +  C  cos  lbOx° 

+D  sin  300.r°+7^  cos  300a; °         (173) 

Substituting  the  various  values  of  x  and  y,  and  looking  up  the 
natural  sines  and  cosines  to  two  places,  we  obtain  the  observation 
equations,  which  will  be  written,  for  convenience,  in  the  reverse 
order: 

+0.87£'+0.50Z)+0.97C+0.26fi+A-8.0  =  0 

-0.87/i'+0.50i)  +  0.26C  +  0.97i?+A-0.8  =  0 

-1.00Z)-0.71C+0.715+A  =0 

+  1.00^  -l.OOC  +.4-0.5  =  0 

+  1.00Z)-0.71C-0.715+A-4.5  =  0 
-l.OOE  -1.00B  +  -4- 1.0  =  0         (174) 

-1.00/:)  +  0.7ir-0.71/i+A+2.0  =  0 
+ 1 .  OOE  + 1 .  OOC  +  .4  -  5 .0  =  0 

+  1.00/)  +  0.71C+0.717?+A-9.5  =  0 
-  0 .  oOE-0 .  87/)-  0 .  .5()f '  +  0 .  S7/i+.4  -1.0  =  0 
The  normal  (xiuatioiis.  forincd  in  tlie  usunl  mainier,  are, 


/•; 

1)        1       <■ 

y.'            .1 

-f-4.77 

+0.44 

+  0.81) 

-0,0.") 

+   0..")0 

-    .").04    = 

-    0 

+  5. 26 

+  1.0.") 

-0   1.") 

+  0.13 

-22..")3    = 

=    0 

+  5.20 

+  0.00 

+   0.73 

-1.")  ()4    = 

=    0 

+  4,77 

+   1.10 
^10,00 

-13  .")1    = 
-31  :;()  : 

17.5) 


150  PRACTICAL   LEAST  SQUARES 

the  sub-diagonal  terms  being  omitted  for  the  abridged  solution. 
Solving  these  equations,  the  following  values  of  the  unknowns  are 
obtained : 

A  =  +3.00,     B=-\-2.24,     C=+1.73,     Z)=+3.93,     ^=+0.09. 

The  empirical  formula,  therefore,  will  be, 

^  =  3.00+2.24  sin  150a;°  +  1.73  cos  loOa;° 

+3.93  sin  300.t°+0.09  cos  300.t°  (17G) 

or,  expressed  in  the  form  mentioned  at  the  close  of  Art.  118, 

t/  =  3.00  +  2.83  cos  (150a;°-o2°  19') 

+3.93  cos  (300a;°-88°  41')  (177) 

The  curve  is  plotted  in  Fig.  38.  For  this  purpose,  a  numljcr  of 
extra  values  of  y  were  computed  so  as  to  determine  the  curve  with 
greater  precision.  It  is  evident  that  a  larger  number  of  obser- 
vations would  be  desirable  in  the  case  of  an  equation  as  complicated 
as  this  one.  The  curve  conforms  to  the  observations  fairly  well, 
and  it  is  doubtful  that  a  recomputation  with  a  different  value  of  m 
for  the  period  would  be  worth  while.  It  is  useful  to  note  in  con- 
nection with  the  plotting  that  the  same  value  of  y  will  correspond 
to  values  of  x  which  differ  by  multiples  of  m.  Thus,  for  .r  =  0.1 
and  2.5,  we  have  the  same  value  of  ij,  namely,  +7.32. 

129.  References.  The  reader  is  referred  to  the  following  works 
in  which  useful  information  and  methods  concerning  empirical 
formulas  will  be  found.  The  collections  of  examples  given  ])y 
Weld  and  Bartlctt  are  worthy  of  note. 

\Vuight:  Adju.stinent  of  Observations. 
Comstock:   Method  of  Least  Squares. 
Meiiuim.\x:   Method  of  Least  Squares. 
AVkld:   Thc'ory  of  Errors  and  Least  Squares. 
B.\utlktt:   Method  of  Least  Squares. 
Helmeiit:   Auspcleic'hungsrechnmig, 
Duxc.w:   Practical  Curve  Tracing. 
BiirxT:   Combination  of  Observations. 
LiPK.\:   Graphical  and  Mechanical  Oomi)utation. 


CHAPTER   VIII 

PRECISION   OF  OBSERVATIONS  AND   RESULTS   AND 
COMBINATION   OF   COMPUTED    QUANTITIES 

130.  Having  considered  the  determination  of  the  best  values 
of  the  unknown  quantities  to  be  obtained  from  given  observations, 
it  remains  to  investigate  the  degree  of  confidence  which  may  be 
placed  in  the  observations  and  the  computed  results,  so  that  they 
may  be  compared  with  the  results  of  other  observations  of  the 
same  quantities. 

131.  Precision.  If  two  sets  of  direct  observations  of  the  same 
kind  be  compared,  and  in  the  first  the  component  quantities  are 
scattered  over  a  wider  range  or  are  more  discordant  than  in  the 
second,  it  is  natural  to  conclude  that  the  observations  of  the 
first  set  were  mads  with  less  care  or  under  less  favorable  cir- 
cumstances than  those  of  the  second  set.  The  latter  are  more 
consistent  and  evidently  more  precise;  their  differences  or  dis- 
ci'cpancios  are  smaller.  Furthermore,  even  though  the  number 
of  oljservations  in  the  two  sets  were  equal,  the  mean  of  the  second 
set  would  be  regarded  as  of  greater  reliability  or  weight  than  the 
mean  of  the  first  set,  merely  l^ecause  of  the  greater  consistency,  i.e., 
smaller  discrepancies,  among  its  original  observations.  Since 
these  smaller  discrepancies  correspond  to  smallci-  I'csiduals  from 
the  mean,  it  is  evident  that  the  precision  of  the  mean  is  indicated 
by  the  size  of  its  ]-esiduals,  being  grc^ater  as  the  residuals  are  smaller, 
and  ^■i(•(■  vci'sa. 

132.  Precision  and  Accuracy.'  This  ])reci>ion  must  not  be 
coufu.-cd  witli  the  accurac\',  that  is.  the  correctness,  of  tlie  results. 
The  latter  is  affected  by  systematic  errors  (\v\.  .")).  Thus,  a 
sei'ies   of   observations   may   be   very   closely   grouped,    showing   a 

'  .'<c('  .John-nn.  I'heory  (if  llrrors  ami  Mctluid  of  Lca>t  .SfjUarcs,  (Jhaj).  \'II, 
for  ail  cxtcndcii  trcatineiiT  of  tlii<  -ulijcct. 

l.jl 


152  PRACTICAL  LEAST  SQUARES 

high  degree  of  precision,  but  each  separate  observation,  and  there- 
fore, the  mean,  may  be  in  error  by  a  large  amount  due  to  some 
influence  which  is  unknown  or  not  taken  into  account.  Precision 
has  reference  to  the  accidental  errors  of  observations  made  under 
constant  conditions  and  indicates  the  care  exercised  by  the  observer, 
the  closeness  with  which  the  instrumental  readings  are  made,  and 
the  suitabilit}"  of  the  method  used.  Discordant  observations  are 
not  precise;  but  precise  determinations  may  or  may  not  be 
accurate. 

133.  Index  of  the  Precision.  It  is  easy  to  obtain  an  idea  as 
to  the  precision  of  the  observations  from  an  inspection  of  them 
or  of  the  residuals  of  their  mean.  But  in  the  comparison  of  the 
results  of  diiTerent  sets  of  observations  of  the  same  quantities, 
it  is  very  convenient  to  have  a  numerical  index  from  which  the 
precision  of  each  set  may  be  determined  without  actually  inspecting 
the  observations  themselves.  Since  this  precision  is  indicated,  in 
general,  by  the  size  of  the  residuals,  it  is  evident  that  the  desired 
index  would  logically  be  some  function  of  these  residuals.  The 
precision  of  a  result  will  depend,  also,  upon  the  numljcr  of  obser- 
vations from  which  it  is  obtained.  Obviously,  the  larger  the 
series  of  observations,  the  greater  should  be  the  precision  of  their 
mean  as  well  as  that  of  the  typical  single  observation.  Thus,  we 
might  use  the  mean  of  the  residuals,  without  regard  to  signs,  or 
the  square  root  of  the  sum  of  their  squares,  and  either  of  these 
would  give  us  some  idea  of  the  consistency  of  the  observations, 
this  hypothetical  I'csidual  being  smaller  in  the  case  of  greater 
precision. 

From  the  very  inception  of  the  ]\Iethod  of  Least  Squares,  the 
investigation  of  tlie  precision  was  regarded  as  of  considei-a])l(' 
iniportajice.  Sevei'al  ([uanlities  have  been  iis(h1  to  indicate  it. 
( lauss  designated  the  (}uantity,  h,  in  the  Law  of  luTor,  as  a  "  meas- 
ure of  prcM'ision.'"  However,  other  indices  have  hcon  more  gen- 
erally used,  namely,  certain  selectcnl  eri'ors,  theoi'etically  defined,  as 
the  Mean  Square  Error,  the  ProJxihle  Error,  and  the  Average  Error. 
These  will  now  be  considered  in  oi'(l(>r. 

134.  The  Quantity,  h.  in  the  Law  of  Error.  If  we  consider  two 
sets  of  observations  of  the  same  (-luantit;',  in;;(l-^  in  th(>  same  man- 


PRECISION  OF  OBSERVATIONS  AND  RESULTS 


153 


ner,  to  be  represented  by  the  curves  in  Fig.  39,  the  area  between 
each  curve  and  the  axis  of  A  will  be  unity,  that  is,  the  probability 
of  an  error  between  the  limits—  oo  and  +00.  Then,  the  taller 
the  curve,  i.e.,  the  greater  the  p-intercept,  the  larger  will  be 
the  portion  of  the  area  immediately  adjacent  to  the  p-axis  and  the 
more  numerous  the  smaller  errors  will  be  in  comparison  with  the 
entire  group;  in  other  words,  the  greater  will  be  the  precision. 
By  inspection  of  the  Law  of  Error, 

h 


V- 


-h2A2 


V71 


h  .      h 

it  is  seen  that  when  A  =  0,  p^  — =,  so  that  the  p-intercept  is  — = 

Vtt  Vtt' 

Therefore,  Vtt  being  a  constant,  h  may  be  regarded  as  an  index  of 
the  precision,  with  which  it  varies  directly. 


Fig.  39.     Curves  of  Probability 


135.  The  Mean  Square  Error  (e)  of  an  observation  is  defmed 
as  the  square  root  of  the  mean  of  the  squares  of  the  errors  in  a 
given  series  of  observations.^  It  will  be  represented  by  e  or 
m.  s.  e.  To  determine  its  relation  to  h  of  the  previous  article,  we 
proceed  as  follows: 

According  to  thc>  Law  of  Error,  the  probability  of  the  occiu'renco 
of  an  error.  A,  in  a  given  set  of  ol^servations,  is, 

V-K^)=^e->>'-'  (178) 

1  The  moan  square  error  is  sometimes  referred  to  as  the  uiean  error.  This 
introduces  an  ambiguity  with  the  average  error,  or  mean  of  the  errors,  and  is 
an  unfortunate  use  of  the  term.  Clerman  writers  call  it  der  mittlere  Feliler 
but  this  involves  no  ambiguity  as  they  designate  the  average  error  as  der 

(hireli.-ielinittliehe  Feldcr. 


154  PRACTICAL  LEAST  SQUARES 

and  the  probability  of  an  error  between  the  hmits  A  and  A-\-dA  is, 

-^e-^'^'dA  (179) 

Vtt 

The  number  of  these  errors  will  be  equal  to  their  probability  times 
the  total  number  of  observations  (or  errors)  in  the  set/  that  is, 

nh      ,,,, 

—^e-^'^^'dA  (180) 

Vx 

and  the  sum  of  their  squares  will  be, 

^e-'''^'A^dA  (181) 

then  the  sum  of  the  squares  of  all  of  the  errors,  between  the  limits 
—  00  and  +  cc ,  ^vill  be, 

nh  r 

and  the  mean  of  their  squares,  equal  to  e-  by  definition,  is 

nh     r+  '^- 
e2  =  --^  e-""-^'A^dA  (183) 

^-^f'\-'"^'A^dA  (184) 

Vtt  J-  '^ 

Substitutmg  in  (184)  the  value  of  the  definite  integral,^ 

1  Sec  Appendix  C. 

2  This  integral  maj*  be  evaluated  in  the  following  manner  (Bartlett): 
The  probability  of  an  error  between  the  limits  —  co  and  +  oo  is  unity  (cer- 
tainty), that  is, 

-^„-.   I         c-"'^'Ma  =  1  (184o) 

V   71 


,-  ,        e-'-^^A2</A  (182) 


or, 

e-"'^>/A=--^  (184&) 

h 


Differentiating  both  members  with  respect  to  h,  we  obtain, 


A-dAdh= '~dh  (lS4c) 

h- 


hence, 


For  another  solution,  sec  .Idi-dan.  Ilamlbuch  dor  VtTmessungskunde,  I,  .ItU. 


PRECISION  OF  OBSERVATIONS  AND  RESULTS  155 

(185) 


'      V^V2/iV"2/i2 


Hence, 


(186) 


cr. 


hV2 

h  =  — ;=,  as  stated  in  Art.  19. 

eV2 

The  geometrical  interpretation  of  the  mean  square  error  is 
that  it  corresponds  to  the  abscissa  of  the  point  of  inflexion  of 
the  Eri'or  Curve.  Differentiating  (178)  and  placing  the  second 
derivative  equal  to  zero,  we  have, 

—  2h^A 
f'{A)=—y-e-''^'  (187) 

Vtt 

/"(A)  =^c-^^^^+    -^e-"=^^  (188) 

Vtt  Vtt 

2h^ 
=  -— e-  "'^\2h^A^  - 1)  =  0  (189) 

Vtt 

Therefore,  for  the  point  of  inflexion, 

2/rA--l  =  0     or,     2/;2a2=1  (190) 

and 

A  = T'-^e,     from  (ISO)  (191) 

hV2  ^       ^ 

which  shows  that  the  point  of  infl(>xion  corresponds  to  tlie  mean 
scjuare  error  of  an  oljservation. 

136.  The  Probable  Error  (r)  of  an  o])servation  in  a  given 
series  is  the  middk^  one  of  all  the  errors  when  they  are  an-anged 
in  numerical  order,  each  Ix^ing  written  as  inany  tinu^s  as  it  occurs. 
As  many  of  the  errors  are  gn^iter  than  it  as  nvv  less,  and  so  the 
])n)l)ability  of  an  error  greater  than  the  j)i-()babl(^  error  is  e([ual  to 
that  of  an  (M-ror  less  than  it ,  namely,  ()..">,  since  the  total  ]ii-()b;ibiiity 
is  unity.  It  is  an  even  chanei^  that  an  ei'ror  taken  at  I'andom 
from  the  series  will  ])e  gi-(>at(>r  or  k'ss  than  the  pi'obable  ei'ror. 

This  is  not  the  most  pmbnhle  error  in  the  series,  for  that  would 
];)e  zero,  to  correspond  to  the  maxinunn  ordinate^  to  the  Error 
Curve,  and  it  is  unforiunate  tliat  the  nanu^  has  com(^  to  be  quite 


156  PRACTICAL  LEAST  SQUARES 

generally  used  in  this  country.  It  is  simply  a  quantity  from  which 
the  precision  of  the  observations  can  be  estimated  or  deter- 
mined, in  comparison  with  similar  quantities  referring  to  other 
observations.  A  better  name  for  it  would  be  the  middle  error. 
It  is  represented  by  the  letter  r. 

The  probability  that  the  error  of  an  observation  will  be  numer- 
ically less  than  the  probable  error  is,  by  definition,  ^.  Then 
from  the  law  of  Error, 

h  r+'  1 

or  changing  the  lower  limit, 

^j'.-«VA  =  i  (193) 

It  is  not  feasible  to  determine  the  value  of  r  in  terms  of  h  directly 
from  this  equation,  so  we  make  use  of  the  following  process : 
In  the  Law  of  Error,  let 

t  =  hA,     whence  dA  =  ~. 
h 


h  r^  2  n 

^\     e-"'''\l\  =  --^\    e-'\lt  (194: 

■Kja  V  TT  Jo 


Then  we  have  for  the  proba])ility  of  an  error  less  than  A, 

Vti 

This  expression  is  evaluated  for  various  values  of  t,  by  expansio:i 
into  a  series,^  and  the  results  are  tabulated  with  t  as  an  argument.- 
By  interpolation  in  this  table  with  the  value  of  the  probability 
0.5,  the  corresponding  value  of  t  is  found  to  be  0.4769,  which  i. 
the  value  of  t  =  li\  when  A  is  the  probable  error,  r.     Thus, 

/ir  =  0.4769  (195,) 

and 

0.4769 
r  =  — (196) 

Since  the  probability  that  an  error  will  lie  between  certain 
limits  is  represented  by  the  area  bounded  by  thc^  Error  Curve, 
the  horizontal  axis,  and  the  ordinates  at  those  limits;    and  since 

1  Sec  Appendix  C,  page  215. 

2  See  Table  I,  page  229. 


PRECISION  OF  OBSERVATIONS  AND  RESULTS  157 

the  entire  area  between  the  curve  and  the  horizontal  axis  repre- 
sents the  probabiHty  of  an  error  between  —  oo  and  +  ^  ,  that  is, 
unity  (certainty);  it  follows  that  the  ordinate  of  the  probable 
error  divides  the  area  on  either  side  of  the  vertical  axis  into  two 
equal  parts  corresponding  to  the  probability,  |. 

137.  The  Average  Error  (7?)  is  the  mean  of  all  the  errors  in  a 
set  without  regard  to  signs.  Since  positive  and  negative  errors 
are  equally  likely  to  occur,  the  probability  of  a  positive  error 
between  A  and  A+dA  will  be  one-half  of  that  of  any  error  between 
those  limits,  that  is,  it  will  be  equal  to 

h      r^"" 
—V  I        e-^'-^V/A  (197) 

2V7rJ-x 

The  number  of  the  positive  errors  will  be  their  probability  times 
the  total  numlx^r  of  errors,  n,  namely, 

^^  I       e-'^^^^rfA  (198) 

2V7rJ-=c 

and  their  sum  is, 

nh     r+" 
-^  e-^^'^''^d^.  (199) 

But  the  sum  of  the  negative  errors  is  numerically  equal  to  that 
of  the  positive  ones,  so  that  the  total  sum  will  he  twice  the  above, 
that  is, 

*'-  ,"".  -^=^=Ar/A,  (200) 

V7 


nh  n 


or, 

^'""   '        -^^'^'AdX  (201) 

V 


and  the  average  of  all  of  the  errors  is  therefore, 


<()  that. 


^f\^-'''^'Adl  (202) 

Vtt  Jo 

-^  r    c'-"'^'(-2/i2A)r/A  (203) 


:— .  J    r---^  (204) 


— -  (205) 


158  PRACTICAL  LEAST  SQUARES 

The  ordinate  of  the  average  error  passes  through  the  center 
of  gravity  of  the  area  between  the  curve  of  error  and  the  horizontal 
axis  on  either  side  of  the  vertical  axis.  For,  if  Ao  represent  the 
abscissa  of  the  center  of  gravity,  by  considering  vertical  strips  of 
width  dA  and  length 

Vtt 
and  taking  moments  about  the  origin,  we  have. 


~  re-'^^'dA^-^  Ce-"'^'AdA 
ttJo  'Vtt.  Jo 


Ao4=  I     e-'"^'dA^-^\     e-'"^'AdA  (206) 

Vti 


But  since  the  total  probability  area  is  equal  to  unity,  the  area 
on  one  side  of  the  vertical  axis  is  1/2,  that  is 


(207) 


Hence, 


2h    C"^ 

Ao  =  -iL(     e-'-'^'Af/A      =77,  from  (202).  (208) 


138.  Comparison  of  the  Indices  of  Precision.  From  (186), 
(196),  and  (205),  we  obtain  directly, 

eV2  '"  tjVtt 

-  =  1 . 4 142  6  =  2 .  09()6r  =  1 .  7726r?  (2 1 0) 

e=1.4826r=1.2533r7  ] 

r  =  0.07456  =  0.84587?  \  (211) 

r?  =  0.7979e=1.1829r  J 

Thus  it  is  seen  that  the  mean  square  error,  the  probable  error, 
and  the  average  error  are  related  b}-  constant  factoi-.'^.  Therefore, 
they  may  be  used  interchangeably  in  various  formulas  and  math- 
ematical investigations  b,y  simply  providing  for  the  numerical 
factors. 

In  Fig.  40,  these  quantities  are  shown  in  their  correct  relative 
positions    and    magnitudes.     The    abscissa'   rcpi'o.sent    the   erroi-s 


PRECISION  OF  OBSERVATIONS  AND  RESULTS 


159 


and  the  ordinates  their  probabihties.     It  will  be  recalled  that  the 

intercept  on  the  vertical  axis  is  -— =. 

Vtt 

The  quantity,  h,  is  directly  proportional  to  the  precision. 
However,  it  is  inconvenient  in  practice  and  is  not  generally  used. 
The  three  representative  errors,  e,  r,  and  rj,  on  the  other  hand, 
are  inversely  proportional  to  the  precision;  the  smaller  these 
errors,  the  more  precise  and  consistent  are  the  observations. 
They  are  sometimes  said  to  indicate  the  uncertainty,  therefore, 
instead  of  the  precision.     Each  of  the  three  errors  occurs  in  a 


Fig.   40.     Relations  between  the  Varioas  Indices  of  Precision 


certain  relativi;  position  when  all  the  errors  in  a  set  of  obsei'vations 
are  arranged  in  the  order  of  their  numerical  magnitude,  as,  for 
example,  the  probable  error  occupies  the  middle  of  the  series. 
This  feature  is  what  one  would  naturally  expect  in  an  index  of 
the  precision  (Art.  b38). 

The  average^  error,  also,  is  not  used  in  practice  as  an  index, 
although  it  would  be  a  satisfactory  one.  It  may  be  used,  how- 
ever, in  the  process  of  determining  e  and  r. 

Tlie  mean  scjuare  eiTor  and  tlic  pr()l)abl(>  error  are  in  common 
use  as  indices  of  the  precision.  The  former  has  l)een  almost  uni- 
v(M'sally  used  by  writers  in  German  and  othei-  foixMgn  languages,  as 
w(>ll  as  by  some  Americans,  notably  in  the  classic  Adjustment  of 
( )t)S('rvati()ns,    by   Pi-ofessoi'   T.    W.    Wi'ight,    and   by   ( 'hauvenet, 


160  PRACTICAL  LEAST  SQUARES 

Newcomb,  and  Crandall.  Its  principal  advantages  lie  in  the  facil- 
ity of  its  theoretical  derivation;  in  its  priority  (it  was  used  by 
Gauss);  in  its  use  by  the  Germans  and  French,  who  have  made 
the  most  numerous  contributions  to  the  subject  of  Least  Squares ;i 
and  in  its  avoidance  of  the  misnomer  of  the  probable  error  which  is 
frequently  a  stumbling-block  to  the  beginner. 

The  probable  error  is  used  by  most  American  and  British 
writers.  Its  name  is  its  greatest  enemy,  but  there  may  be  some 
advantage  in  its  mere  reference  to  probability.  The  person  who 
does-  not  clearly  understand  its  significance  is  apt  to  take  it  at  its 
face  value  and  so  interpret  it.  However,  it  is  hoped  that  such 
persons  will  learn  what  it  means  or  leave  it  alone.  It  should  be 
understood  simply  as  an  index  of  the  precision. 

Whichever  index  is  used,  it  is  written  after  the  quantity  to 
which  it  refers  and  separated  from  it  by  the  sign,  ±.  This  is 
merely  a  convention  and  the  sign  is  never  to  be  used  algebraically. 
There  is  never  any  reason  for  increasing  or  diminishing  a  quantity 
by  the  amount  of  its  mean  square  error  or  probable  error.  A 
better  method  of  designating  it  would  be  to  use  instead  of  the  sign, 
±,  the  symbol  for  the  mean  square  error  (e  or  m.  s.  e.),  or  that 
for  the  probable  error  (r  or  p.  e.),  as  7653.28  (€  =  0.02).  But 
the  use  of  the  ±  sign  is  well  established  as  is  also  the  term  probable 
error. 

139.  Precision  of  Direct  Observations.  We  have  seen  how 
the  precision  in  a  set  of  observations  may  be  indicated  by  the  mean 
square  error,  the  probable  error,  etc.,  and  it  is  evident  that  if 
we  could  know  the  true  value  of  the  observed  quantity,  and 
therefore,  the  true  errors.  A,  we  could  ascertain  the  numerical 
value  of  the  index  of  the  precision  dircctl}^  from  those  ei'rors,  by 
definition,  as 

e^  = ,     r]  =  — ,  and  r  =  the  middle  error.  ^ 

n  n 

But  as  these  true  errors  are  unknown,  it  remains  to  determine  the 
precision  index  from  the  given  observations  or  n^sidiuils.     Know- 

'  Sec  Appendix  A. 

^  The  syuiVjol  fur  the  .sum  without  reg;ir(l  to  signs  is  [ 


PRECISION  OF  OBSERVATIONS  AND  RESULTS  161 

ing  the  relations  between  the  three  indices  as  stated  in  (211),  it 
will  suffice  to  determine  the  mean  square  error  in  each  case  and 
from  it  to  express  the  probable  error  and  the  average  error. 

140.  Precision  of  a  Single  Observation.  Each  observation 
has  its  own  individual  error  and  when  we  refer  to  a  "  single  obser- 
vation "  in  this  connection,  we  mean  an  observation  such  as  those 
in  the  set  which  is  being  discussed,  not  any  single  one  of  them, 
but  a  hypothetical  one  which  is  never  evaluated,  but  which  is 
typical  of  the  entire  set  in  so  far  as  precision  is  concerned. 

Using  the  notation  of  Chapter  I,  let  M  represent  an  observa- 
tion of  a  directly  observed  quantity;  v,  its  residual  from  the 
arithmetic  mean,  xq;  X,  the  true  value  of  the  observed  quantity; 
A,  the  true  error  of  an  observation;  Ao,  the  true  error  of  the 
arithmetic  mean;  and  n,  the  number  of  observations  in  the 
series.  Then,  using  subscripts  to  indicate  the  separate  obser- 
vations, 

X  =  .To+Ao  =  il/i+Ai=ilf2+A2  .  .  .  (212) 

Vi^xo  —  Mi,     V2^xo  —  M2    .    .    .  (213) 

Ai-xo+Ao-i¥i,     A2  =  a:,)+Ao-M2,    .    .    .  (214) 

Ai=i'i+Ao,  A2  =  y2+Ao,    .    .    .  (215) 

Squaring  both  members  and  adding  the  n  resulting  equations, 

[A->[i'2]+2AoH+nAo2  (216) 

From  (8),  [y]  =  0;  and  the  unknown  true  error,  Ao,  of  the  mean  is 
assumed,  for  this  demonstration,  to  be  equal  to  the  mean  square 
error  of  the  moan,  the  value  of  which  is  determined  in  Art.  153 
to  bo  (see  Art.  141), 

6 


(217) 
(218) 
(219) 


60=            . 

Thoroforo, 

divic: 

ling 

(216)  by 
11 

n       n 

whoiico, 

,2_     V-\ 

n-\ 

and 

102  PRACTICAL   LEAST  SQUARES 

Then,  from  (211), 


r  =  0.6745 


7?  =  0.  7979a  P^^  (221) 

\n— 1 

These  three  formulas  are  known  as  Bessel's  Formulas  and  the 
first  two  are  in  general  use.  In  long  series  of  observations, 
however,  it  is  more  convenient  to  use  Peters'  Formulas,  which 
involve  the  sum  of  the  residuals  without  regard  to  signs,  [v, 
instead  of  the  sum  of  their  squares.  They  may  be  derived  as 
follows : 

From  (217)  and  (218), 

[A2]  [,2] 


n       n—1 


(222) 


k1  =  "--[A2]  (223) 

n 


and, 


v,=J''-^Ai,     V2  =  J'^^A2,  .  .  .  (224) 

\     n  y     n 


Adding  these  n  equations,  neglecting  the  signs  of  v  and  A,  wc  have, 
since,  by  definition,  rj  =  [A/n, 


whence, 


!^[A  =  ,J^,  (225) 


and  from  (211), 


,  =  -^^=^  (226) 

vn(n— 1) 


W 


e  =  1 .  2533— =i  z:^  (227) 

V7i(n— 1) 

r  =  0 .  84o3-y  J:L=  (228) 

V  ??,(?!— 1) 

An  ayjiroximate  value  of  the  probable  error  of  a  single  o})serva- 
tion  in  a  series  of  from  20  to  30  observations  may  l)c  determined 
by  taking  one-sixth  of  the  rangc^  of  the  set,  or  one-third  of  the 
largest  residual.     Fi'om  the  table  of  values  of  the  I^aw  of  Error, 


PRECISION  OF  OBSERVATIONS  AND  RESULTS  163 

it  is  found  that  the  probability  of  an  error  three  times  as  great 
as  the  probable  error  is  about  0.04,  or  1  in  25.^  That  is,  in  a 
series  of  25  observations,  the  maximum  error  is  likely  to  be  about 
three  times  the  probable  error  of  a  single  observation.  And, 
since  there  are  as  many  positive  as  negative  errors,  the  total  range 
of  the  observations  in  an  ordinary  set  of,  say,  from  20  to  30  obser- 
vations, is  likely  to  be  about  six  times  the  probable  error,  or  about 
four  times  the  mean  square  error  of  a  single  observation.  Con- 
versely, knowing  the  precision  index,  and  the  approximate  number 
of  observations  in  the  set,  we  can  estimate  the  range.  Frequently 
this  fact  affords  the  most  tangible  idea  as  to  the  consistency  of  the 
observations,  especialh^  to  the  beginner,  since,  by  doubling  the 
mean  square  error  of  a  single  observation  he  obtains  an  approximate 
value  of  the  maximum  residual. 

141.  Precision  of  the  Mean.  The  arithmetic  mean  being  the 
best  value  of  the  observed  quantity  obtainable  from  the  given 
direct  observations  (Arts.  14,  27),  it  is  obvious  that  the  precision 
of  the  mean  wall  be  greater  than  that  of  a  single  observation,  and 
also  that  the  precision  will  increase  with  the  nvmibcr  of  observa- 
tions in  the  set.  In  Art.  153  it  is  shown  that  if  e  be  the  mean 
square  error  of  a  single  observation,  and  eo  that  of  the  mean  of  the 
set  of  n  observations, 

60  =  -^  (229) 

Vn 

which  expresses  the  very  important  relation  that  the  precision  of 
the  mean  increases  directly  as  the  srpiare  root  of  the  number  of  obser- 
vations. In  other  words,  to  double  the  precision,  that  is,  to  divide  eo 
by  two,  it  is  necessary  to  make  four  times  as  many  observations.^ 

'See  Ai)i)en(li\  1'',  pajjo  231. 

The  probability  of  an  error  less  tlian  three  times  the  probable  error  is 
0.957,  eorrespoiidiiifi  to  A  r  =  'A.();  th(>n  the  probability  of  an  error  greater 
than  this  would  l)e  1-0.9.57   =0.01.3. 

-  It  must  not  be  assumed  that  by  increasing  the  number  of  observations 
without  limit,  the  precision  can  be  indefinitely  increased.  There  are  always 
infhuMices  which  make  it  extremely  didicult,  if  not  ((uite  impossible,  to  ap- 
proach c(>rtainty  beyond  a  definite  limit.  In  this  connection,  the  reader  is 
rcf(>rre<l  to  th(>  admirable  tn^itment  of  tliis  matter  in  ^^'right  and  Hayford's 
Adjustment  of  Oliservations.  Arts.  3S  to  4l). 


164 


PRACTICAL  LEAST  SQUARES 


This  principle  is  used  in  determining  the  most  economical  or  advis- 
able number  of  observations  to  make  in  a  certain  program. 

From  (229)  and  the  formulas  of  the  preceding  article,  wc  obtain 
the  following  expressions  for  the  three  precision  indices  of  the 
mean,  by  dividing  by  y/n  in  each  case; 

Bessel's  Formulas: 

(230) 
(231) 

(232) 


1  =  0.7979 


Peters'  Formulas: 


n(n—l) 


V 


Vo 


nv  n— 1 
€0  =  1.2533- 


ro  =  0 .  8453 


nVn—1 
[v 


(233) 
(234) 

(235) 


wvn— 1 

The  values  of  the  factors  of  [v~]  and  [v  in  these  formulas  are 
tabulated  for  various  values  of  n  to  facilitate;  computation.  Such 
a  table  for  (231)  will  be  found  in  Appendix  F,  Table  IV. 

142.  Example:  Precision  of  the  Mean.  Let  us  consider  the 
prol)lein  in  Art.  28,  consisting  of  16  ol)servations.  Here,  w=16, 
[v  =  5r),  and  [r-]  =  305,  the  unit  being  in  the  fifth  place  of  decimals. 
The  results  are  as  follows : 


e 

! 

I'cters 

4.5 
4.5 

1.1 
1.1 

:i .  0 

3.0 

0.8 
0.7 

'I'he  raiig(^  of  tlie  observations  is  16;    one-sixth  of  this  gives  3  as 
the  approximate  value  of  the  probable  error  of  a  single  observa- 
tion.    Tsing  mean  s(}uar(^  errors,  then,  the  best  value  from  the  set 
of  16  observations  would  be  (from  Art.  28), 
1463. 49764  ±0.00001 


PRECISION  OF  OBSERVATIONS  AND  RESULTS  165 

143.  Precision  of  the  Weighted  Mean.  Since  the  weights  are 
merely  relative  quantities,  as  explained  in  Art.  31,  we  shall  consider 
them  as  reduced  to  integers.  The  weight  of  any  observation  will 
then  be  regarded  as  the  number  of  elemental  observations  of  weight 
unity  of  which  that  observation  is  the  mean.  The  mean  square 
error,  ei,  of  an  observation  of  weight  wi,  then,  will  be  that  of  an 
observation  of  unit  weight,  nameW,  e',  divided  by  \/wi,  from  (229) : 


e 


Vwi  Vw2 


(236) 


and 

whence. 


SiVwi=  €2^^^=  e3Vw3=    ...    =  e'  (237) 

.2 


(238) 


er  _W2 

which  states  the  fundamental  principle  that  the  weights  are  inversely 
as  the  squares  of  the  mean  square  (or  probable)  errors.  Also,  since 
the  weight  of  the  weighted  mean  is,  by  the  definition  of  weights, 
[w],  from  (237)  we  have. 


e 


eo  =  — =  (239) 

V[w] 

which  corresponds  to  (229). 

To  find  the  expression  for  e',  the  mean  square  error  of  a  single 
observation  of  weight  unity,  we  proceed  as  in  the  case  of  equal 
weights,  Art.  140.  Beginning  with  equations  (215)  and  using  the 
first  one  only,  to  illustrate  the  process, 

Ai  =  ri+Ao,     A2  =  V2+Ao,  ...  (215) 

Squaring, 

Ai2  =  yi2+2Ao^'i+Ao2  (240) 

]\Iultiplying  each  ecjuation  by  its  weight, 

ivilr=wirr^2\)WiVr}-WiAo^  (241) 


Since  iri  represents  iho  number  of  elemental  observations  of  unit 
weight  which  make  up  th<^  first  actual  observation  of  weight  wi, 
it  will  also  be  the  num])er  of  the  errors  A],  so  that  i/']Ai^'  would  be 
the  sum  of  the  squai-cs  of  these  elemental  errors;  also,  by  defini- 
tion, this  sum  is  equal  to  the  num])er,  ir\,  times  the  corresponding 
mean  square  error  sciuanvl,  and  tlierefore, 

wilr  =  wier=e'-     from  (236)  (242) 


166  PRACTICAL  LEAST  SQUARES 

in  which  ei  is  the  mean  square  error  of  an  observation  of  weight  wi. 

Hence, 

e''^  =  wiVi^+2AoWiVi-\-wiAo^  (243) 


Adding  the  n  equations  of  this  kind,  and  assuming  as  in  Art.  140 

that  Ao  =  eo, 

n  e'2  =  [^^y2j  _^  2Ao[wv]  +  [w]  eo^  (244) 

But,  from  (12),  [wv]  =  0, 
Therefore, 


=  [wv^]  +  [w]eo^ 

(245) 

^[wv^]^e'^     by  (239) 

(246) 

._[wv^] 

r947^ 

Whence, 

^  /2 

and  the  mean  square  error  of  an  observation  of  weight  unity  is, 

(248) 


,        j[wv'] 
Then  from  (239), 


'"^;fe=V[#:fT)  (249) 


and  using  the  relations  stated  in  (211),  wc  have. 


    r′ = 0.6745 √( [wv²]/(n − 1) )                              (250)

    r₀ = 0.6745 √( [wv²]/([w](n − 1)) )                         (251)

    a′ = 0.7979 √( [wv²]/(n − 1) )                              (252)

    a₀ = 0.7979 √( [wv²]/([w](n − 1)) )                         (253)

If the weights are equal, w = 1, [w] = n, and the last six formulas correspond to those of Arts. 140 and 141.



By  analogy,  the  Peters'  Formulas  for  weighted  observations 
may  be  written.     They  are, 


    a′ = [√w·v] / √( n(n − 1) )                                 (254)

    a₀ = [√w·v] / √( [w]·n(n − 1) )                             (255)

    ε′ = 1.2533 [√w·v] / √( n(n − 1) )                          (256)

    ε₀ = 1.2533 [√w·v] / √( [w]·n(n − 1) )                      (257)

    r′ = 0.8453 [√w·v] / √( n(n − 1) )                          (258)

    r₀ = 0.8453 [√w·v] / √( [w]·n(n − 1) )                      (259)

144. Example: Precision of the Weighted Mean. In the problem of Art. 33, [w] = 11, n = 4, [wv²] = 4247, and the mean square error of the weighted mean is, ε₀ = 0.11″. The complete result is,

    x₀ = 73° 18′ 42.07″ ± 0.11″
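A corresponding check for weighted observations can be made with formulas (248) and (249). The Python sketch below is an added illustration; the residuals are assumed here to be reckoned in hundredths of a second, which is consistent with the quoted value [wv²] = 4247 and the result 0.11″:

    from math import sqrt

    sum_w = 11        # [w]
    n = 4             # number of observations in Art. 33
    sum_wv2 = 4247    # [wv^2], in the unit assumed above

    eps_unit = sqrt(sum_wv2 / (n - 1))             # (248): observation of weight 1
    eps_mean = sqrt(sum_wv2 / (sum_w * (n - 1)))   # (249): the weighted mean

    print(round(eps_unit, 1), round(eps_mean, 1))  # 37.6 and 11.3, i.e. about 0.38" and 0.11"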

145. Precision of Indirect Observations. The process of finding the mean square errors of the best values of the unknowns from indirect observations is much more involved than in the case of direct observations. Also, the precision is required in comparatively few cases in which engineers are concerned. The method will be outlined, however, without developing the complete theory, for which the reader is referred to the works by Jordan, and Wright and Hayford.

The determination of the precision of the results from indirect observations is divided into two parts, namely, (a) the computation of the relative weights of the adjusted values of the unknowns, X, Y, Z, etc., and (b) the determination of the mean square error, ε′, of an observation of weight unity. Then




the mean square errors of these unknowns are obtained from the relation (237):

    ε_x√w_x = ε_y√w_y = . . . = ε′

or,

    ε_x²w_x = ε_y²w_y = . . . = ε′²


146. Weights of the Unknowns. There are three methods of determining the weights of the unknowns. We shall use the following one, which utilizes the principle of undetermined coefficients. In the normal equations (22), page 29, to find the weight of X, replace the constant term of the X (i.e., first) equation, [al], by −1, and the other constant terms by zeros. The solution of the equations thus modified will give as the value of X, the reciprocal of its weight, that is, 1/w_x. Similarly, substituting −1 for [bl] in the second equation, and zeros for the other constant terms, and solving the set of equations for Y, we obtain 1/w_y. Thus, −1 is substituted for each constant term in succession, the others being replaced by zeros, and the equations are solved, in each case, for the corresponding unknown, the resulting value of which is its 1/w.

This process is tedious at best, but it can be simplified as follows. It is evident that as the constant terms, only, are altered, the preceding columns of the elimination in the solution of the normal equations will be unchanged. Therefore, referring to the equations (55), page 47, we may add as many columns as there are unknowns, between (l) and (s), designating them as (x₁), (y₂), (z₃), etc., in which to write the new constant terms. These would be included in the check-terms (s′) and carried through the elimination the same as other coefficients or constants. Then the weight of each unknown would be determined by substituting back in the proper column until that unknown was determined, and taking the reciprocal of its value. Of course it would be unnecessary to substitute farther in that particular column, as but one weight is obtained from each column. This arrangement of the equations (55) would be,


    [Equations (260): the normal equations of the illustrative example, written with the columns for the unknowns, the constant column (l), the added columns (x₁), (y₂), (z₃), and the check column (s′). Each added column contains −1 in the row of its own unknown and zeros elsewhere.]


Since the last equation in the elimination will have the quantity −1 as its absolute term in its added column, it follows that the coefficient of the last unknown, in that equation, will always be its own weight. The last added column, (z₃), in the above example, may therefore be omitted.

If  the  original  observations  are  of  unequal  weight,  the  same 
process  is  followed,  using  (21)  instead  of  (22)  as  the  form  for  the 
normal  equations,  and  replacing  the  terms  [wal],  [wbl],  etc.,  by 

—  1,  and  zeros,  as  above. 
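The substitution of −1 for the constant terms, one column at a time, amounts to computing the diagonal elements of the inverse of the matrix of normal-equation coefficients. The sketch below (Python with NumPy, an addition for illustration; the 3×3 coefficients are invented and are not taken from the text) shows the two views of the same computation:

    import numpy as np

    # Hypothetical symmetric normal-equation coefficients [aa], [ab], ...
    N = np.array([[6.0, 2.0, 1.0],
                  [2.0, 8.0, 3.0],
                  [1.0, 3.0, 5.0]])

    # Art. 146: write -1 for the constant term of one equation, zeros for the
    # others, and solve; with the equations in the form N u + const = 0 this
    # is the system N u = e_k, whose k-th component is 1/w of that unknown.
    recip_w = [np.linalg.solve(N, e)[k] for k, e in enumerate(np.eye(3))]

    # The same numbers are the diagonal of the inverse matrix.
    assert np.allclose(recip_w, np.diag(np.linalg.inv(N)))
    print(1.0 / np.array(recip_w))      # the weights w_x, w_y, w_z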

147. Precision of an Observation of Weight Unity. Let the number of unknowns be represented by m, the number of observations being n, as usual. Then the formulas for the mean square and probable errors of an observation of unit weight, are,

    ε′ = √( [vv]/(n − m) )          or    √( [wv²]/(n − m) )

    r′ = 0.6745 √( [vv]/(n − m) )   or    0.6745 √( [wv²]/(n − m) )          (261)

Lüroth's Formulas are,

    ε′ = 1.2533 [v] / √( n(n − m) )   or    1.2533 [√w·v] / √( n(n − m) )

    r′ = 0.8453 [v] / √( n(n − m) )   or    0.8453 [√w·v] / √( n(n − m) )    (262)

If there is but one unknown, m = 1 and these formulas become those of Bessel and Peters for direct observations (Art. 140).

The usual method of determining the residuals is to substitute the adjusted values of the unknowns back into the observation equations and obtain a residual for each equation. However, the sum of the squares of the residuals or of the weighted residuals, that is, [v²] or [wv²], may be obtained more easily, in most cases, in the following manner, along with the solution of the normal equations. Form the term at the foot of the diagonal, namely, [ll] or [wll], and perform a corresponding step in the elimination as if there were more terms following it. The resulting sum in the




l-column will then be [v²] or [wv²], as the case may be. Also, it may be obtained from the relation

    [wv²] = [wal]x + [wbl]y + [wcl]z + . . . + [wll]                         (263)

and from,

    [wv²] = [wvl]                                                            (264)

which latter requires that the separate v's be known.

148. Example: Precision of Indirect Observations. To illustrate the foregoing articles, the modified observation equations (43), page 38, will be solved to determine the best values of the unknowns and their mean square errors. To the normal equations (44) are added the term [wll] and the columns (x₁), (y₂), and (z₃), and the check terms are modified to include all of these.


The normal equations, with the added columns and the new check terms (s′), are:

          x        y        z       (l)     (x₁)   (y₂)   (z₃)     (s′)
    I   +3.94    +0.38    +2.18    +1.00     −1      0      0     + 6.50
    II           +13.56   +0.19    +3.53      0     −1      0     +16.66
    III                   +6.40    +3.09      0      0     −1     +10.86
    IV                             +2.66      0      0      0     +10.28

    [The elimination is then carried through in the form of equations (55), each added column being treated like the others; the reduced second and third equations give the divisors 13.52 and 5.19 used below, and the final line, formed from the [wll] term, gives [wv²] = 0.29 in the (l) column together with the quantities needed for the weights in the added columns.]


Unknowns

    z = −2.55/5.19 = −0.491

    y = (−3.43 − 0.10)/13.52 = −3.53/13.52 = −0.261

    x = (−1.00 + 1.07 + 0.10)/3.94 = +0.17/3.94 = +0.043

Residuals

Substituting in the observation equations (43) and determining the v's, we find directly, [wv²] = 0.28 and [wvl] = 0.26, while the evaluation of (263) gives [wv²] = 0.26. From the above elimination, the first term of IV is [wv²] = 0.29. The average value is therefore 0.27.

Weights

    z₃ = 1/5.19,                         w_z = 5.2

    y₂ = 1/13.52,                        w_y = 13.5

    y₁ = −0.10/13.52 = −0.007

    x₁ = (+1.00 + 0.23)/3.94 = 1/3.2,    w_x = 3.2

Mean Square Errors

Average value of [wv²] = 0.27;   n = 9;   m = 3.

    ε′ = √( [wv²]/(n − m) ) = √0.0475 = 0.22

    ε_x = ε′/√w_x = 0.22/1.8 = 0.12

    ε_y = 0.22/3.7 = 0.06

    ε_z = 0.22/2.3 = 0.10



Results

    x = +0.043 ± 0.12
    y = −0.261 ± 0.06
    z = −0.491 ± 0.10
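The weights and mean square errors of this example can also be checked by machine from the normal-equation coefficients given above and the value ε′ = 0.22 found in the text. The short Python sketch below is an added illustration, not part of the original:

    import numpy as np

    N = np.array([[3.94, 0.38, 2.18],      # coefficients of the normal equations (44)
                  [0.38, 13.56, 0.19],
                  [2.18, 0.19, 6.40]])
    eps_unit = 0.22                        # mean square error of unit weight, from above

    recip_w = np.diag(np.linalg.inv(N))    # 1/w_x, 1/w_y, 1/w_z (Art. 146)
    weights = 1.0 / recip_w

    print(np.round(weights, 1))                       # about 3.2, 13.5, 5.2
    print(np.round(eps_unit / np.sqrt(weights), 2))   # about 0.12, 0.06, 0.10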

149. Precision of Conditioned Observations. In general, it is necessary, as in the preceding case, to determine the precision of an observation of weight unity and also the weight of each unknown, from which the precision of the unknowns is obtained from the usual relation that the mean square errors are inversely proportional to the square roots of the weights, that is, from,

    ε_x²w_x = ε_y²w_y = . . . = ε′²

If the conditioned observations are adjusted as indirect observations by the method stated in Art. 81, the precision of those unknowns which are involved in the normal equations may be determined by the methods just explained in Arts. 145 to 148. Then by a second solution, eliminating a different set of unknowns, the normal equations may be made to involve those which were not included in the previous set, and their precision may be found in the same manner. Obviously, this is a tedious method except in cases of a few observations.

Since the number of unknowns which may thus be made independent is the total number, m, minus the number of conditions, m′, the formula for the mean square error of a single observation of weight unity may be derived directly from (261) and (262) by substituting m − m′ for m. Thus,

    ε′ = √( [vv]/(n − m + m′) )          or    √( [wv²]/(n − m + m′) )

    r′ = 0.6745 √( [vv]/(n − m + m′) )   or    0.6745 √( [wv²]/(n − m + m′) )    (265)

But in most of the cases with which we are concerned each unknown is directly observed, so that n = m, when the above formulas become,

    ε′ = √( [vv]/m′ )                    or    √( [wv²]/m′ )

    r′ = 0.6745 √( [vv]/m′ )             or    0.6745 √( [wv²]/m′ )              (266)


in which m′ represents the number of conditions. Lüroth's Formulas (262) may be similarly modified by substituting m − m′ for m; and when n = m, the denominators become √(nm′).

The residuals, v, are the corrections, v, to the observations, as determined in the adjustment (Art. 73). As a check upon the direct computation of [wv²], however, we may use the formula,

    [wv²] = −Aq₁ − Bq₂ − Cq₃ − . . .                                             (267)

A, B, C, . . . being the correlates, and q₁, q₂, q₃, . . . being the absolute terms of the reduced condition equations (59) or the normal equations (64). Or, in the solution of the normal equations, a step may be taken similar to the one described in Art. 147 for indirect observations. Here, however, zero is written for the last term in the constant (q) column. The elimination process is continued to include this term, and the resulting sum will be −[wv²].

The method of correlates, however, will generally be used in the adjustment. The weights of the adjusted values are not determined directly, in this case, but the weight of a function of these values is determined, and this function may be merely unity times one of them. Examples of the functions of the adjusted values for which the precision may be desired are: a side of a triangle or an unobserved line in a system of triangulation, when computed from the adjusted angles; and a computed difference of elevation in a level net, determined from adjusted values of observed differences. The function must not involve more of the unknowns than can be made independent by elimination with the conditions, that is, not more than m − m′. The method is as follows:¹

¹ See Jordan, Handbuch der Vermessungskunde, Bd. I; or Wright's Adjustment of Observations, page 229.

Since any function can be reduced to the linear form, this one will be assumed to have that form,

    F = f₁V₁ + f₂V₂ + f₃V₃ + . . .                                               (268)

in which V₁, V₂, V₃, . . . are the adjusted values of the unknowns (Art. 72), namely, V₁ = M₁ + v₁, etc. If any of the terms in (268) are missing from the desired function, give to the corresponding




coefficients, f, the value zero. Now, referring to the condition equations (56) or (59) for the notation, and representing the original weights of the observations by w₁, w₂, w₃, . . ., we form the terms, m′ + 1 in number,

    [af/w],   [bf/w],   [cf/w],   . . . ,   [ff/w]                               (269)

Writing these terms in order in an additional column, between the constant and check in the normal equations, or in place of the constant column if the equations have already been solved, the elimination process is carried out for this column and its own new check column, including the last step, for the term [ff/w]. The final sum for the last term will be the reciprocal of the weight of the function,¹ and from this and the mean square error of a single observation of unit weight, the mean square error of the function is obtained by (237),


    ε_F = ε′/√W_F                                                                (270)


It will be seen that even though the weights of the original observations were equal, those of the adjusted values may be unequal. But if the original weights of certain of the observations are equal, and also a, b, c, . . . are the same for all of these observations, then the weights of the corresponding adjusted values will be equal, since the f's in these cases are unity. This will appear in the following examples.

150. Examples: Differences of Elevation. (a) As an illustration of the method, let us apply it to the problem of Art. 77, Adjustment of Levels, and determine the mean square error of the difference of elevation of the benchmarks A and D. The function is, therefore,

    F = V₆ + V₈                                                                  (271)

¹ Expressed in the Gaussian form, this is,

    1/W_F = [ff/w] − [af/w]²/[aa/w] − [bf/w·1]²/[bb/w·1] − [cf/w·2]²/[cc/w·2] − . . .



and the reciprocals of the weights of the observations are, 1/w₆ = 2, and 1/w₈ = 1. Then f₆ = +1, and f₈ = +1, the other f's being zero. Referring to the condition equations (73), we find

    a₆ = −1,   a₈ = 0
    b₆ = −1,   b₈ = 0
    c₆ =  0,   c₈ = −1

Thus we obtain the terms,

    [af/w] = −2,   [bf/w] = −2,   [cf/w] = −1,   [ff/w] = +3                     (272)

Utilizing the solution of the normal equations already made in Art. 77, we write the f-terms in a new column and form a new check column. To illustrate the second method of obtaining [wv²], stated near the middle of page 173, the constant column will be included, with the addition of the zero for the last term. The solution, then, is as follows:


    [Solution of the correlate normal equations of Art. 77, repeated with the (f) column −2, −2, −1, +3 and with zero written as the last term of the constant column; both columns are carried through the elimination with a new check column. The last line gives +1.91 in the (f) column and −0.0027 in the constant column.]

Therefore,

    1/W_F = 1.91                                                                 (273)

and

    −[wv²] = −0.0027

From (76), we evaluate (267) and obtain,

    [wv²] = −0.0012 + 0.0018 + 0.0021 = +0.0027

which agrees with the above and with the value determined directly from the table of corrections, page 69. Then,

    ε′ = √( [wv²]/m′ ) = √( 0.0027/3 ) = 0.03                                    (274)

and from (272) and (273) we have,

    ε_F = ε′/√W = 0.03/√1.91 = 0.02                                              (275)

so that the best value of the difference of elevation from A to D, from the given data, is, with its mean square error,

    (F) = −6.36 ± 0.02                                                           (276)

(b) As a variation of the above, let us determine the precision of the adjusted difference of elevation, A−F. The function is,

    F′ = V₆                                                                      (277)

and from the data above,

    1/w₆ = 2,   f₆ = +1,   a₆ = −1,   b₆ = −1,   and   c₆ = 0,

so that

    [af/w] = −2,   [bf/w] = −2,   [cf/w] = 0,   [ff/w] = +2.

The only changes in the solution, therefore, are in the last two of the f-terms. The result is

    1/W_F′ = 1.41                                                                (278)

whence,

    ε_F′ = ε′/√W = 0.03/√1.41 = 0.02                                             (279)

In view of the statements at the close of Art. 149, it is evident

from an inspection of the tabulated condition equations (73) that the weight and mean square error of V₂ will be the same as those of V₆, since these two columns in the table are alike.

151.  Precision  of  Computed  Quantities.  As  a  result  of  the 
adjustment  of  observations,  the  adopted  values  of  the  unknowns 
are  likely  to  be  used  in  the  computation  of  other  quantities  which 
may  be  expressed  as  functions  of  the  unknowns.  Having  inves- 
tigated the  precision  of  the  unknowns,  it  may  be  desired  to  ascertain 
the  effect  which  the  uncertainties  in  these  values  would  have 
upon  the  quantities  computed  from  them. 

For  example,  suppose  the  diameter  of  a  cylindrical  bar  of 
steel  is  measured  with  micrometer  calipers  at  various  points,  from 
which  the  mean  diameter  and  its  probable  error  are  obtained; 
the  cross-sectional  area  computed  from  this  mean  diameter  would 
have  a  resulting  uncertainty.  Also,  if  the  bar  were  tested  in  a 
tension  machine,  the  breaking  stress  per  square  inch  would  be 
uncertain  to  a  corresponding  degree  as  a  result  of  the  uncertainty 
in  the  measured  diameter  and  computed  area. 

Again,  suppose  one  side  and  the  adjacent  angles  of  a  triangle 
have  been  measured  independently,  resulting  in  an  adopted  mean 
and  a  mean  square  error  for  each.  If  another  side  be  computed 
from  these  data,  it  will  have  an  uncertainty  due  to  the  discrep- 
ancies among  the  original  measures  of  the  given  side  and  angles, 
that  is,  to  the  uncertainties  of  the  given  means. 

It must be emphasized that the determination of the best values of the computed quantities is not involved in this question. Having adjusted the observations, the resulting adopted values are the best ones, as far as our knowledge goes, and quantities computed from them are also the best we can determine from the given data. We are now concerned only with the precision of the computed quantities, not with the determination of the quantities themselves.

Our problem is to determine the mean square (or probable) error of a function of independent, adjusted quantities of which the mean square (or probable) errors are given. It will be convenient to assume that each of these given, adjusted values is the mean of a large number of observations, and that the corresponding indices



of  precision  were  determined  by  the  formulas  of  Art.  141,  although, 
of  course,  they  might  result  from  indirect  observations. 

The  errors  in  a  linear  function  of  independently  observed  quan- 
tities occur  in  accordance  with  the  same  Law  of  Error  as  those 
of  the  quantities  themselves.^  Thus,  the  errors  in  the  mean  of  a 
set  of  observations  occur  in  accordance  with  the  usual  Law  of 
Error.  Such  means,  therefore,  may  be  treated  as  original  obser- 
vations, as  far  as  the  occurrence  of  errors  goes,  as  long  as  they  do 
not  involve  the  same  original  observations,  in  which  case  they 
would  no  longer  be  independent.^ 

This  subject  is  usually  called  the  Propagation  of  Error.  We 
shall  consider  it  as  divided  into  two  parts, — the  simple  influence 
of  errors  of  one  kind  or  character,  and  the  compound  effects  of 
errors  of  different  kinds  or  resulting  from  different  causes. 

152.  Simple  Propagation  of  Error.  Before  attacking  the  gen- 
eral case,  a  few  special  forms  of  functions  will  be  considered  in  order 
to  illustrate  the  process  of  reasoning.  Let  F  represent  the  func- 
tion of  the  independent,  adjusted  quantities,  x,  y,  .  .  .  whose 
mean  square  errors  are  ex,  ey,  .  .  .  Let  the  original  observations 
of  X  be  represented  by  Mi,  M2,  .  .  .,  those  of  y  by  M'l,  M'2,  .  .  ., 
etc.,  and  let  the  true  errors  of  these  observations  be  represented 
respectively  by  Ai,  A2,  .  .  .,  A'l,  A'2,  .  .  .,  etc.  We  may  assume 
an  equal  number  of  observations  for  each  quantity,  for  simplicity. 

(a)  Consider  first  the  sum  or  difference  of  two  quantities. 
Then, 

F  =  x±y  (280) 

Taking  the  separate  observations  in  pairs,  the  first  of  x  with  the 
first  of  y,  the  second  of  x  with  the  second  of  y,  etc.,  each  pair 
gives  a  value  of  F,  say  Fi,  F2,  .  .  .     Thus, 

    F₁ = M₁ ± M′₁
    F₂ = M₂ ± M′₂                                                                (281)
    . . . . . . . .


¹ For proof of this, see Wright and Hayford, Adjustment of Observations, Art. 13.

² See Chauvenet, par. 23, for treatment of the case of a function of functions.



Now, if we add to each M its true error, Δ, the resulting value of F must be corrected by its corresponding error, and,

    F₁ + Δ_F₁ = (M₁ + Δ₁) ± (M′₁ + Δ′₁)
    F₂ + Δ_F₂ = (M₂ + Δ₂) ± (M′₂ + Δ′₂)                                          (282)

Subtracting (281) from (282), one by one, we have,

    Δ_F₁ = Δ₁ ± Δ′₁
    Δ_F₂ = Δ₂ ± Δ′₂                                                              (283)

Squaring each equation, adding, and dividing by their number, n,

    [Δ_F²]/n = [Δ²]/n ± 2[ΔΔ′]/n + [Δ′²]/n                                       (284)

But in a large number of observations, the positive and negative errors occur with equal frequency, so that the sum of the products, [ΔΔ′], would approximate to zero; certainly so in comparison to [Δ²] + [Δ′²], so that,

    [Δ_F²]/n = [Δ²]/n + [Δ′²]/n                                                  (285)

or,

    ε_F² = ε_x² + ε_y²                                                           (286)

Obviously, the above process would apply likewise to a similar function consisting of any number of quantities connected by plus and minus signs, so that for

    F = x ± y ± z ± . . .

we can write,

    ε_F² = ε_x² + ε_y² + ε_z² + . . .                                            (287)

and

    ε_F = √( ε_x² + ε_y² + ε_z² + . . . )                                        (288)

From the constant ratio of the probable error to the mean square error, it follows that

    r_F² = r_x² + r_y² + r_z² + . . .                                            (289)

This principle is very important and often used. Note that the signs in (287) and (289) are all positive. The uncertainty in the sum of two or more quantities is therefore the same as in their difference.

(b) In the next case, let

    F = ax ± by ± . . .                                                          (290)


in which x and y are adjusted values from observations, and a and b are known constants. As in (282), we may write,

    F₁ + Δ_F₁ = a(M₁ + Δ₁) ± b(M′₁ + Δ′₁) + . . .
    F₂ + Δ_F₂ = a(M₂ + Δ₂) ± b(M′₂ + Δ′₂) + . . .                                (291)

whence, as in (283),

    Δ_F₁ = aΔ₁ ± bΔ′₁ + . . .
    Δ_F₂ = aΔ₂ ± bΔ′₂ + . . .                                                    (292)

Squaring, adding, dividing by n, and omitting the products as before, we have,

    [Δ_F²]/n = a²[Δ²]/n + b²[Δ′²]/n + . . .                                      (293)

That is,

    ε_F² = a²ε_x² + b²ε_y² + . . .                                               (294)

or,

    ε_F = √( a²ε_x² + b²ε_y² + . . . )                                           (295)

(c) Now we shall consider the general case in which F is any function of the quantities, x, y, z, etc.,

    F = f(x, y, z, . . .)                                                        (296)

Since x, y, z, . . . are adjusted values, they may be assumed to be nearly correct, so that their errors are very small; let us represent them by differentials. Then, if Δ_F be the true error of F, we have,

    F + Δ_F = f(x + dx, y + dy, z + dz, . . .)                                   (297)

Expanding this function by Taylor's Theorem, and omitting terms which involve squares, products, and higher powers of the differentials, we obtain,

    F + Δ_F = f(x, y, z, . . .) + (∂F/∂x)dx + (∂F/∂y)dy + (∂F/∂z)dz + . . .      (298)

whence, subtracting (296),

    Δ_F = (∂F/∂x)dx + (∂F/∂y)dy + (∂F/∂z)dz + . . .                              (299)

This equation has the same linear form as (292), so that from (294) we have directly,

    ε_F² = (∂F/∂x)²ε_x² + (∂F/∂y)²ε_y² + (∂F/∂z)²ε_z² + . . .                    (300)

in which the partial derivatives of the function correspond to the constants, a, b, c, etc., of (294).

Thus we may state the general Rule: Express the given function in literal form. Differentiate it partially with respect to each quantity for which the mean square error is given. Substitute in these derivatives the given quantities (without reference to their mean square errors, of course). Substitute in (300) and obtain ε_F.
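The Rule lends itself to a short general-purpose routine. The following Python sketch is an added illustration: it forms the partial derivatives numerically, which is often convenient when the function is awkward to differentiate, and then applies (300):

    from math import sqrt

    def propagated_mse(f, values, errors, h=1e-6):
        """Mean square error of f(x, y, ...) by (300), the partial
        derivatives being taken numerically."""
        total = 0.0
        for i, (v, e) in enumerate(zip(values, errors)):
            shifted = list(values)
            shifted[i] = v + h
            dfdx = (f(*shifted) - f(*values)) / h   # partial derivative
            total += (dfdx * e) ** 2
        return sqrt(total)

    # F = x + y with eps_x = 0.03 and eps_y = 0.04 gives 0.05, as (288) requires.
    print(round(propagated_mse(lambda x, y: x + y, [1.0, 2.0], [0.03, 0.04]), 3))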

153. Example: Precision of the Mean. We shall now apply the foregoing principles to determine the mean square error, ε₀, of the mean of n observations, when the mean square error of a single observation is ε.

The expression for the mean is,

    F = M₁/n + M₂/n + . . . + M_n/n                                              (301)

where M₁, M₂, etc., represent independent direct measures or observations of the unknown quantity. This function has the form of (290) and a = b = c = . . . = 1/n. From (294), therefore, we have

    ε₀² = ε²/n² + ε²/n² + ε²/n² + . . . to n terms                               (302)

that is,

    ε₀² = ε²/n                                                                   (303)

and

    ε₀ = ε/√n                                                                    (304)

which states the very important principle that the precision of the mean varies directly as the square root of the number of observations. To double the precision, that is, to reduce the mean square error of the mean to one-half its size, it is necessary to have four times as many observations.

154. Compound Propagation of Error. The uncertainty in a computed quantity may result from several sources which are not of the same nature, and it may be impossible to state the quantity as a single function of all these sources of error. For example, the measurement of a line with a steel tape involves the uncertainty in the length of the tape itself and also the errors in the process of measurement. We cannot express the length of the line as a function of the length of the tape and the "process of measurement"!




From  (283)  to  (287),  we  can  state  the  principle  that  when  the 
error  in  the  computed  quantity  is  the  algebraic  sum  of  independent 
errors  from  different  sources,  the  total  mean  square  error  of  the 
computed  quantity  will  be  the  square  root  of  the  sum  of  the  squares 
of  the  separate  mean  square  errors  of  that  quantity  due  to  the 
various  causes. 

In  any  given  case,  therefore,  we  determine  the  mean  square 
error  of  the  computed  quantity  or  function  resulting  from  each 
source,  separately,  by  the  methods  of  Art.  152,  and  then  take  the 
square  root  of  the  sum  of  their  squares  as  the  total  mean  square  error. 

We  shall  now  illustrate  this  subject  by  a  series  of  typical 
examples. 

155.  Examples:  Propagation  of  Error.  (1)  The  following 
measures  of  the  diameter  of  a  cylindrical  test-piece  of  metal  were 
made  by  means  of  micrometer  calipers.  The  piece  was  then 
broken  in  a  testing-machine  at  a  load  of  20,000  lbs.  Find  the  unit 
breaking  stress  and  its  mean  square  error  due  to  the  uncertainty 
in  the  measured  diameter,  D. 


    Inches    v    v²        Inches    v    v²        Inches    v    v²
    0.6252    2     4        0.6251    1     1        0.6248    2     4
       46     4    16           52     2     4           45     5    25
       42     8    64           54     4    16           52     2     4
       48     2     4           55     5    25           56     6    36

    n = 12;     Mean = 0.6240 + 0.0010 = 0.6250 = D

    [v²] = 203,     ε₀² = [v²]/(n(n − 1)) = 1.54,     ε₀ = 1.24

Diameter (D) = 0.6250 ± 0.00012. The function is,

    Unit breaking stress = 20,000/(Area of section)
                         = 20,000/(3.14 D²/4)
                         = (80,000/3.14)(1/D²)
                         = 65,300 lbs. per sq. in.                               (305)

Differentiating (305) with respect to D,

    (80,000/3.14)(−2/D³) = −160,000/(3.14 × 0.625³) = −160,000/0.766 = −208,900

From (300),

    ε_F = 208,900 × 0.00012 = 25.

Thus, the unit breaking stress = 65,300 ± 25 lbs. per square inch.

The uncertainty due to the variation among the measures of the diameter is therefore negligible when it is remembered that the breaking load is seldom required within a range of a hundred pounds.
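The same result follows from a few lines of Python (a sketch added here for illustration; it uses the full value of π where the text uses 3.14, so the stress comes out a little below the rounded 65,300):

    from math import pi

    D, eps_D = 0.6250, 0.00012      # mean diameter and its mean square error
    load = 20000.0                  # breaking load, lbs.

    stress = load / (pi * D**2 / 4)             # unit breaking stress, as in (305)
    dstress_dD = -2 * load / (pi * D**3 / 4)    # derivative of (305) with respect to D
    eps_stress = abs(dstress_dD) * eps_D        # rule (300), one variable only

    print(round(stress), round(eps_stress))     # about 65,190 and 25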

(2) The length of a 50-meter tape is determined by comparison with a 5-meter standard bar which is surrounded by chipped ice to control its temperature. The length of the bar, as determined from its standardization, is B = 5.000060 ± 0.000006 meters. The following measures are made of the difference between the length of the tape and ten lengths of the bar, the former being the longer. It is required to find the length of the tape and its mean square error due to the uncertainty in the length of the bar and to the errors of measurement. The temperature is assumed constant. The unit is in the sixth place of decimals, that is, a micron.

    Obs. interval (K)        v         v²
        0.002533            16        256
             2666          117      13689
             2529           20        400
             2444          105      11025
             2461           88       7744
             2553            4         16
             2638           89       7921
             2531           18        324
    Mean,    2549                    41375

    ε₀² = 41375/56 = 739.     Interval (K) = 2549 ± 27


The function is,

    Length of tape (L) = 10B + K                                                 (306)
                       = 50.000600 + 0.002549
                       = 50.003149 m.

This function corresponds to (290), so that, from (294),

    ε_L² = 100 × 6² + 739 = 3600 + 739 = 4339,     ε_L = 66

Therefore, Length of tape (L) = 50.003149 ± 0.000066 m.

An important principle is illustrated here. The larger source of error in the length of the tape is that due to the error in the length of the bar, amounting to ten times as much as the other. It would be useless, therefore, to increase the above number of observations with the idea of increasing the precision in the length of the tape, since this part of the total error is almost negligible. On the other hand, the above set of observations might be diminished considerably without seriously affecting the result. For example, suppose there were but one-half as many observations, namely, 4. Dividing the number of observations by 2 increases the square of the mean square error twofold. Thus, we should have ε₀² = 1478, and ε_L² = 3600 + 1478 = 5078. Then, ε_L = 71, which is very little larger than 66. It must be remembered, however, that the number of observations should be sufficiently large to justify the assumption that errors of observation follow the Law of Error.
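A sketch of the same compound computation in Python (an addition for illustration), which also repeats the remark about halving the number of interval measures:

    from math import sqrt

    eps_bar = 6.0        # m.s.e. of one bar length, in microns
    eps_interval = 27.0  # m.s.e. of the mean measured interval, in microns

    # L = 10 B + K, so by (294) the bar error enters with the coefficient 10.
    print(round(sqrt((10 * eps_bar) ** 2 + eps_interval ** 2)))        # 66

    # With half as many interval measures the square of its error doubles.
    print(round(sqrt((10 * eps_bar) ** 2 + 2 * eps_interval ** 2)))    # about 71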

(3)¹ A comparison of the two following cases will be instructive. (a) A line 400 feet long is measured with a 100-foot tape of which the mean square error is 0.004 foot. The resulting mean square error in the length of the line will be 0.016 foot, since L = 4T.

¹ Adapted from Crandall's Geodesy and Least Squares.

(b) The same line is divided into four 100-foot sections and each section is measured with a different 100-foot tape of which the mean square error is 0.004 in each case. The resulting mean square error in the length of the line will be √4 × (0.004) = 0.008 foot, since the function is, L = T₁ + T₂ + T₃ + T₄.

In  the  first  case  (a),  whatever  the  true  error  in  the  tape  may  be, 
it  is  constant  and  its  effect  is  cumulative.  In  (b),  on  the  other 
hand,  the  actual  errors  in  the  different  tapes  are  not  the  same  even 
though  their  mean  square  errors  happen  to  be  equal,  and  in  con- 
sequence they  are  likely  to  be  both  positive  and  negative  so  as 
to  neutralize  to  some  extent.  Therefore,  the  resulting  error  in 
the  length  of  the  line  would  be  smaller  than  in  the  former  case.  It 
is  important  that  this  principle  be  well  understood. 

(4) Let it be required to compute the length and mean square error of the side, b, of the triangle, A-B-C, from the side, a, and the angles, A and B, given with their mean square errors as follows:

    a = 4268.344 ± 0.008 meters,
    A = 56° 37′ 42.4″ ± 0.6″
    B = 70° 26′ 54.3″ ± 0.3″

The function is,

    b = a sin B / sin A                                                          (307)

from which we obtain, using the above data,

    b = 4816.349 m.

Differentiating (307) with respect to a, A, and B, in succession, and reducing by means of (307),

    ∂b/∂a = sin B/sin A = b/a = 1.128

    ∂b/∂A = −a sin B cos A / sin²A = −b cot A = −3172

    ∂b/∂B = a cos B / sin A = b cot B = 1710

Substituting in (300), and noting that it is necessary to multiply ε_A and ε_B by sin 1″ (= 0.000005) in order to reduce them to abstract quantities so that each term may be expressed in the unit of length, we have,

    ε_b² = (b/a)²ε_a² + (b cot A)²(ε_A sin 1″)² + (b cot B)²(ε_B sin 1″)²
         = (1.128 × 0.008)² + (3172 × 0.6 × 0.000005)² + (1710 × 0.3 × 0.000005)²
         = (0.0090)² + (0.0095)² + (0.0026)²
         = 0.00017801

and

    ε_b = 0.013

whence,

    b = 4816.349 ± 0.013 meters.
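For this example the whole chain, the function (307), its partial derivatives, and the substitution in (300), can be verified with the short Python sketch below (an added illustration, not part of the original text):

    from math import sin, tan, radians, sqrt

    a, eps_a = 4268.344, 0.008
    A, eps_A = radians(56 + 37/60 + 42.4/3600), radians(0.6/3600)
    B, eps_B = radians(70 + 26/60 + 54.3/3600), radians(0.3/3600)

    b = a * sin(B) / sin(A)         # the function (307)

    # Partial derivatives, reduced as in the text.
    db_da = b / a                   # sin B / sin A
    db_dA = -b / tan(A)             # -b cot A
    db_dB = b / tan(B)              #  b cot B

    eps_b = sqrt((db_da * eps_a) ** 2 + (db_dA * eps_A) ** 2 + (db_dB * eps_B) ** 2)
    print(round(b, 3), round(eps_b, 3))    # about 4816.35 and 0.013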

(5) Find the mean square error in a single measurement of an angle, direct and reversed, with a direction theodolite having three microscopes. Each reading consists of the mean of the three microscope readings corresponding to a pointing upon one object, and a measure of the angle is the difference between the readings upon the two objects limiting the angle. This process is repeated in the reversed position of the instrument and the mean is taken. Suppose the mean square error of a pointing of the telescope upon an object to be, ε_p = 0.04″; that of a reading of a microscope to be, ε_r = 0.06″; and that of a graduation-mark on the circle to be, ε_g = 0.03″. The error in each microscope reading will be the algebraic sum of the error of setting and reading the microscope itself and that of the graduation, so that the mean square error due to both causes will be √(ε_r² + ε_g²). Then the mean square error of the mean of the readings of the three microscopes will be

    ε_m = √(ε_r² + ε_g²)/√3 = √(0.0045)/√3 = √0.0015

The error in a reading upon one object will be made up of the errors due to all three causes, that is, to the above combined error and the error of a single pointing, or,

    ε_o = √(ε_m² + ε_p²) = √(0.0015 + 0.0016) = √0.0031

Finally, the mean square error of the difference of the readings on the two objects, which is that of the direct measurement of the angle, will be,

    √(ε_o² + ε_o²) = √0.0062

and that of the mean of the direct and reversed results will be

    ε = √0.0062/√2 = √0.0031 = 0.056″
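The successive combinations of this example translate directly into a Python sketch (added for illustration, with the elementary errors assumed above):

    from math import sqrt

    eps_pointing = 0.04      # pointing of the telescope, seconds
    eps_reading = 0.06       # setting and reading one microscope
    eps_graduation = 0.03    # one graduation mark of the circle

    # Mean of the three microscopes, each reading combining two error sources.
    eps_micro_mean_sq = (eps_reading**2 + eps_graduation**2) / 3

    # Reading upon one object adds the pointing error.
    eps_object_sq = eps_micro_mean_sq + eps_pointing**2

    # Difference of the readings on the two objects, then the mean of the
    # direct and reversed measures.
    eps_diff_sq = 2 * eps_object_sq
    eps_angle = sqrt(eps_diff_sq / 2)

    print(round(eps_angle, 3))       # 0.056"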

(6) A line 1000 feet long is measured eight times with a 100-foot tape, and the mean square error of the mean of the eight measures is found to be 0.004 foot. If the mean square error of the length of the tape (resulting from its standardization) is 0.001, what is the mean square error of the line, due both to errors of manipulation and error in the tape length?

The mean square error of the line due to the tape error is 10 × 0.001 = 0.010. Since the total error is the algebraic sum of both kinds, the mean square error due to both causes will be the square root of the sum of the squares of the separate mean square errors, that is,

    ε_L = √(0.004² + 0.010²) = 0.011

COMBINATION   OF   COMPUTED    QUANTITIES 

156. Weights from Mean Square or Probable Errors. In Art. 143, it was demonstrated that weights are inversely as the squares of the corresponding mean square or probable errors. Thus it is possible to combine the results computed from different observations of a certain quantity, using them as weighted observations, when the mean square errors of these results are known so that their relative weights may be determined. For example, a certain angle in a triangulation may have been measured several times, with a resulting mean and mean square error. Subsequently, in another season, perhaps, another series of measures of the same angle may be made, giving a different result and mean square error. By giving to each result a weight equal to the reciprocal of the square of its mean square error, the weighted mean of the two results may be taken as the best value of the angle from all of the available data.



157.  Limitations.  It  is  obvious  that  this  method  assumes 
that  all  of  the  original  observations  in  the  various  groups  are  of  the 
same  character,  so  that  if  they  were  known  their  mean  could 
reasonably  be  taken.  The  conditions  under  which  they  were 
made  should  be  similar,  and  especially  is  it  assumed  that  constant 
or  systematic  errors  affect  all  of  them  in  the  same  way. 

On  the  other  hand,  it  is  seldom  that  these  conditions  are 
fulfilled  with  any  great  degree  of  certainty.  Frequently,  nothing 
at  all  is  known  about  the  observational  methods  or  circum- 
stances, except  what  is  indicated  by  the  mean  square  errors 
as  to  the  consistency  of  the  original  observations.  Even  in  such 
a  case,  however,  it  is  probable  that  the  weighted  mean  will  be 
as  good  as,  or  better  than,  any  of  the  component  results,  so 
that  the  method  should  not  be  discarded  without  careful  con- 
sideration. 

Of  especial  importance  in  this  connection,  is  the  case  in  which 
the  observations  resulting  in  one  of  the  given  values  are  known  to 
be  of  much  greater  precision  than  those  which  resulted  in  the  other 
value,  without  regard  to  their  respective  mean  square  errors. 
For  example,  an  angle  might  be  measured  with  a  direction  theodo- 
lite reading  to  a  single  second,  and  again  by  means  of  a  transit 
reading  to  half-minutes.  Here,  the  judgment  of  the  computer 
may determine what weight, if any, shall be given to the transit
result  in  comparison  with  the  other,  in  spite  of  the  mean  square 
errors,  provided,  of  course,  the  number  of  observations  made, 
with  the  theodolite  is  sufficient  to  reduce  the  effect  of  the 
accidental  errors.  Should  the  two  results  be  close  together,  how- 
ever, the  weights  given  by  the  mean  square  errors  may  still  be 
satisfactory. 

When  the  results  being  compared  are  separated  by  a  consider- 
able interval  in  comparison  with  the  given  mean  square  errors, 
the  presence  of  systematic  error  may  be  indicated  and  should  be 
investigated.  If  the  difference  is  not  too  great  to  be  a  reasonable 
accidental  error  of  observation,  it  may  be  considered  safe  to  accept 
the  weights  given  by  the  mean  square  errors.  But  if  the  differ- 
ence is  too  large  to  be  thus  considered,  and  the  mean  square  errors 
are much smaller, there may be no reason for believing one of the




values to be nearer the truth than the other, so that the arithmetic mean of them may be adopted as the best value. Here, again, the judgment of the computer must determine the method of adjustment.¹

¹ See Johnson's Theory of Errors, Chap. VII, on this subject.

158. Example: Weighted Mean of Computed Quantities. Three independent series of observations give the following results for the value of an angle; what is the best or most probable value of the angle from these data?

    Means (x₀)           ε        ε² (unit 0.01″)    1/ε²      w     w(x₀ − 72° 47′ 40″)
    72° 47′ 43.18″     ±0.06″          36            0.028     14          44.52
            44.01        .10          100             .010      5          20.05
            43.74        .08           64             .016      8          29.92
                                                           [w] = 27        94.49
                                                                    94.49/27 = 3.50

    Weighted mean, 72° 47′ 43.50″

159. Precision of the Adjusted Value. Although the problem has the nature of the determination of the weighted mean, the precision of the resulting value should not be computed as in the case of the weighted mean, owing to the small number of given quantities, in general, and the fact that their individual mean square errors or probable errors are given. On the contrary, the method of propagation of error should be used. The two methods will not usually give the same result. It is to be expected that the final value will be better than the separate given values, and it should generally have a smaller mean square error, although this will not always be the case.

160. Example: Precision of the Adjusted Value. Applying this process to the above example of Art. 158, we have the function,

    X = (1/27)(14x₁ + 5x₂ + 8x₃)

whence, from (300),

    ε_X² = (1/27²)[(14 × 0.06)² + (5 × 0.10)² + (8 × 0.08)²] = 1.3652/27²

    ε_X = 1.17/27 = 0.04

so that the adjusted value is, 72° 47′ 43.50″ ± 0.04″.
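Articles 158 to 160 combine into a few lines of Python (added here as a sketch; the seconds are reckoned from 72° 47′ 40″ as in the table above):

    from math import sqrt

    x = [3.18, 4.01, 3.74]        # the three means, less 72 deg 47' 40"
    eps = [0.06, 0.10, 0.08]      # their mean square errors, in seconds

    # Weights inversely as the squares of the errors (Art. 143), reduced to
    # the convenient integers 14, 5, 8 used in the text.
    w = [round(1 / e**2 / 20) for e in eps]

    x_mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

    # Art. 160: the error of the weighted mean by the propagation rule (300).
    eps_mean = sqrt(sum((wi * ei) ** 2 for wi, ei in zip(w, eps))) / sum(w)

    print(w, round(x_mean, 2), round(eps_mean, 2))   # [14, 5, 8] 3.5 0.04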


CHAPTER   IX 
CONCLUSION 

161.  Rejection  of  Observations.  It  is  generally  conceded 
that  an  observer  has  the  right  to  reject  any  observation,  at  the 
time  of  making  it,  if  he  has  reason  to  believe  that  he  made  a  mis- 
take in  his  setting  or  reading,  or  if  the  conditions  were  temporarily 
so  unfavorable  as  to  indicate  that  the  result  was  quite  unreliable. 
His  attention  may  be  drawn  to  the  questionable  observation 
merely  by  its  being  discordant  among  the  others  of  the  series; 
or  he  may  question  the  observation  as  he  makes  it  and  mentally 
decide  to  reject  it  if  it  proves  to  be  very  discordant.  His  power 
is  absolute  but  he  is  expected  to  exercise  it  with  good  judgment 
and  strict  impartiality. 

On  the  other  hand,  when  the  observations  have  been  approved 
by  the  observer  and  are  turned  over  to  the  computer,  or  when 
sufficient  time  has  elapsed  that  the  observer  ceases  to  recall  the 
particular  conditions  under  which  each  of  the  observations  was 
made,  then  the  record  must  be  regarded  as  inviolable,  and  must 
not  be  changed  without  good  reason,^  and  this  reason  must  be 
evident  from  the  records  themselves. 

If the observer has noted the unfavorable conditions and has not indicated a resulting smaller weight for the corresponding observation, the computer may feel justified in assigning such a weight if the observation is clearly discordant. However, if this is necessary, it should have been done by the observer in the field, and the computer may wisely refrain from thus interfering with the record unless with the consent of the observer himself, on the ground that this would have been his action in the field.

¹ It is a rigid rule that an original record should never be erased or obscured. Changes should be so made as to show clearly that they are changes, with date and initials of the computer, and so as to leave the original data legible. Generally, the original will be in pencil, and notes and computations will be in ink. Red ink may well be used for annotations.




The  assignment  of  weights  to  various  observations  is  closely 
associated  with  the  question  of  the  rejection  of  observations,  since 
a  weight  of  zero  is  equivalent  to  rejection,  and  a  diminished  weight 
means  a  partial  rejection. 

162.  Criteria  for  Rejection  of  Observations.  While  the  author 
is  of  the  opinion  that  weighting  and  rejection  should  be  based 
upon  judgment  rather  than  mere  discrepancies  among  the  ob- 
servations, many  writers  and  experienced  computers  have  advo- 
cated the  rejection  of  all  observations  which  deviate  more  than 
a  certain  amount  from  the  mean  of  the  set.  The  mathemat- 
ical basis  for  determining  this  maximum  deviation  is  known  as  a 
Criterion  for  the  Rejection  of  Observations.  Several  of  these  methods 
have  been  devised,^  but  the  following  has  the  merit  of  simplicity. 

It  being  assumed  that  the  observations  conform  to  the  Law 
of  Error,  the  number  of  errors,  or  residuals,  greater  than  a  certain 
size,  to  be  expected  in  the  given  set,  will  be  found  by  using  Table 
III,  Appendix  F,  as  stated  in  Art.  175.  The  table  shows  that  the 
probability  of  an  error  less  than  four  times  the  probable  error  of  a 
single  observation  is  0.99;  that  is,  99  out  of  100  residuals  should  be 
less  than  that  amount  and  only  one  out  of  100  should  be  greater. 
Therefore,  if  a  greater  residual  occurs  in  a  set  of,  say,  20  to  30 
observations,  it  might  be  rejected  as  indicating  a  mistake.  Having 
computed  the  probable  error,  r,  of  a  single  observation,  for  the 
given  set,  any  individual  observation  whose  residual  from  the 
mean  is  greater  than  or  equal  to  4r,  would  be  rejected,  according 
to  this  assumption. 
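A criterion of this kind is easily mechanized; the Python sketch below (an added illustration with made-up readings) flags any observation whose residual reaches four times the probable error of a single observation, computed from the set itself:

    from math import sqrt

    def reject_by_criterion(observations, k=4.0):
        """Return the observations whose residual from the mean is at least
        k times the probable error of a single observation."""
        n = len(observations)
        mean = sum(observations) / n
        v = [m - mean for m in observations]
        r = 0.6745 * sqrt(sum(vi * vi for vi in v) / (n - 1))
        return [m for m, vi in zip(observations, v) if abs(vi) >= k * r]

    # Twenty made-up readings; only the stray 27.61 is flagged.
    readings = [27.42, 27.43, 27.44] * 6 + [27.43, 27.61]
    print(reject_by_criterion(readings))     # [27.61]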

Evidently, the adoption of a certain criterion is a matter of estimation and preference. The above value, 4r, would be considered conservative by many computers who believe in any kind of a numerical criterion; 3r is sometimes used. Even the novice will immediately suggest that the unusually large error might happen to occur in the small series of observations. If but one very large residual occurs in the set, there may be more reason for rejecting it than if it be accompanied by a correspondingly large one of the opposite sign, since the pair would neutralize each other, to some extent, in the mean.

¹ See Chauvenet, Practical and Spherical Astronomy.



163. Methods of Observing. One of the most important uses of the Method of Least Squares lies in the investigation of methods of observing, with the idea of avoiding or eliminating the effects of constant or systematic influences, of segregating the sources of error which produce the greatest effect so that these effects may be diminished, and of reducing the cost of securing the desired degree of precision.¹

In  Art.  154  it  was  shown  how  various  sources  of  error  combined 
to  affect  the  result;  therefore,  in  arranging  the  observations,  spe- 
cial attention  should  be  given  to  decreasing  the  errors  which  have 
the  greatest  effect,  since  the  final  precision  is  dependent  but  little 
upon  the  small  errors.  In  reducing  the  errors  from  a  certain 
source,  the  design  of  the  instrument  and  its  support  may  require 
study  as  well  as  the  method  of  using  it.  Very  important  improve- 
ments in  instruments  have  resulted  from  the  careful  study  of  the 
occurrence  of  the  errors  of  observation.^ 

Constant  and  systematic  errors  may  be  due  to  the  conditions 
under  which  the  observing  is  carried  on.  When  such  is  the  case, 
it  is  desirable  to  so  arrange  the  observing  program  that  these  con- 
ditions will  vary  during  the  observations  through  a  complete 
cycle  of  changes,  as  far  as  practicable,  in  order  that  their  effects 
may  neutralize  one  another,  at  least  partially. 

Finally,  the  matter  of  cost  must  be  considered.  This  will 
depend  largely  upon  the  number  of  the  observations  and  their  dis- 
tribution during  the  day,  after  the  instrumental  equipment  has 
})een  determined  upon. 

¹ For a more extended treatment of this subject, the reader is referred to Wright and Hayford, Adjustment of Observations, Chapter IX.

² A notable instance of this was the design of the Coast and Geodetic Survey Precise Level in 1900, by Mr. J. F. Hayford, Chief of the Computing Division, and Mr. E. G. Fischer, Chief of the Instrument Division, U. S. C. & G. S.

164. Precision Desired and Number of Observations. In planning the observing program, having a definite end in view, it is advisable to decide upon the degree of precision which is to be sought in the result. This will depend to some extent upon the instruments or apparatus available, but, with a given instrument and an individual observer, the method of observing and the number of the observations become of great importance in deter-



mining  the  precision.  The  observing  program  will  frequently 
take  the  form  of  a  number  of  units,  or  parts,  all  of  which  are 
alike  with  the  exception  of  a  change  in  the  position  of  the  instru- 
ment, as  in  the  case  of  horizontal  angles  measured  with  a  direction 
theodolite. 

To  attain  the  desired  precision,  then,  the  total  number  of  obser- 
vations must  be  considered.  As  a  result  of  experience  or  experi- 
ment,^ the  precision  (indicated  by  the  mean  square  error,  perhaps) 
of  each  elementary  observation  is  ascertained,  and  from  these, 
the  precision  of  a  unit  observation.  Then  the  number  of  observa- 
tions necessary  to  obtain  the  desired  precision  in  the  result  may  be 
computed  from  the  relation  that  the  precision  of  the  mean  varies 
as  the  square  root  of  the  number  of  observations  (Art.  141,  page 
163).  That  is,  to  double  the  precision  (to  divide  the  mean  square 
or probable error by two) four times as many observations must be
made.  But  how  far  can  this  process  be  continued?  Is  it  possible 
to  reach  any  degree  of  precision  by  simply  multiplying  the  observa- 
tions? 

165.  Ultimate  Limit  of  Precision  and  Accuracy.  While  in 
theory  the  precision  of  the  mean  can  be  increased  indefinitely  by 
increasing  the  number  of  observations,  experience  shows  that  a 
limit  is  soon  reached,  beyond  which  it  is  not  worth  while  to  con- 
tinue the  observing;  the  theoretical  increase  in  the  precision  as 
indicated  by  the  smaller  probable  error,  for  example,  would 
become  quite  misleading.  Furthermore,  after  passing  a  certain 
point,  the  number  of  observations  would  have  to  be  enormously 
increased  in  order  to  produce  a  very  small  decrease  in  the  probable 
error,  so  that  this  process  would  be  very  wasteful  of  time  and 
energy,  and  it  is  doubtful  if  the  results  would  be  much  better. 

After all, accuracy is desired rather than precision. The observations are not made for the purpose of enjoying the labor, but in order to ascertain the truth as far as practicable. It is a well-known fact that the mean of a small number of very consistent observations, showing a very small probable error, may be farther

¹ A theoretical discussion of the limitations of the human eye in making observations, and the increased power resulting from properly designed instruments, will be found in Jordan, Handbuch der Vermessungskunde, Band II, §45.



from  the  truth  than  that  of  a  larger  number  of  observations  which 
vary  over  a  considerable  range.  Cases  can  be  cited  in  which  a 
value  adopted  as  a  result  of  many  observations,  by  different 
observers,  extending  over  a  long  period  of  time,  has  been  proved 
to  be  incorrect  by  an  amount  greater  than  many  times  the  prob- 
able error.  Of  course,  the  conclusion  is  that  we  must  not  lose 
sight  of  the  fact  that,  however  consistent  the  observations  may  be, 
large  systematic  errors  may  be  present  and  the  observing  methods 
may  not  be  such  as  to  eliminate  them,  so  that  they  directly  affect 
the  results. 

As  to  the  limiting  number  of  observations,  then,  we  can  safely 
state  that  this  should  be  large  enough  and  so  distributed  as  to 
cover  varying  conditions  as  completely  as  practicable.  Natu- 
rally, it  will  be  different  in  various  kinds  of  work.  However, 
changes  in  the  instrument  and  its  supports  are  likely  to  take  place 
if  the  observations  extend  over  too  much  time,  so  that  it  is  gener- 
ally advisable  to  observe  as  rapidly  as  is  practicable  without  a 
sacrifice  of  precision. 

166.  Indication  of  Systematic  Errors.  In  order  to  discover 
the  presence  of  systematic  errors,  a  careful  study  of  the  residuals 
is  essential.  Unless  the  conditions  causing  these  errors  change 
during  the  course  of  the  observations,  the  errors  fall  into  the  class 
of  constant  errors  and  will  not  be  indicated  at  all  by  the  discrep- 
ancies or  residuals.  In  this  case,  a  different  method  of  observing 
might  reveal  them  when  the  results  of  both  methods  were  com- 
pared. 

By plotting the residuals in chronological order some regularity or law may be noted in their occurrence. Positive and negative residuals may occur in separate groups or a curve drawn through the plotted points may show a periodic character. Again, the numbers of residuals of the various sizes may be plotted as in Art. 17, to form a Curve of Error, and if the resulting curve differs considerably at certain points from the theoretical form, which may be plotted from Table II or III in Appendix F, the presence of systematic errors may be indicated. Having thus investigated the occurrence of the residuals, it remains to seek changes in the observing conditions which correspond to the



variations  in  the  residuals.  The  location  of  such  changes  should 
serve  to  point  out  conditions  responsible  for  part  or  all  of  the 
systematic  errors  so  detected. 
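
Such an examination of the residuals is readily mechanized. The sketch below, in modern Python, merely counts runs of like-signed residuals in chronological order; the residual list is hypothetical, and a very small number of long runs would be taken as a hint of drift or some other systematic error.

    def sign_runs(residuals):
        """Count maximal runs of consecutive residuals having the same sign.
        Very few, very long runs suggest a drift or other systematic error."""
        runs = 0
        prev = 0
        for v in residuals:
            s = (v > 0) - (v < 0)
            if s != 0 and s != prev:
                runs += 1
            if s != 0:
                prev = s
        return runs

    # Hypothetical residuals arranged in order of observation:
    r = [+2, +3, +1, +2, -1, -2, -3, -2, -1, +1, +2]
    print(sign_runs(r))   # 3 runs; a pattern this regular would invite suspicion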

167.  Treatment  of  Discordant  Observations.  When  the  dis- 
crepancies in  a  set  of  direct  observations  are  unusually  large, 
the  lack  of  precision  will  be  indicated  by  a  large  probable  error 
or  mean  square  error,  and  the  mean  remains  as  the  best  value 
obtainable  from  the  given  measures.  It  sometimes  happens, 
however,  that  different  sets  of  observations  of  the  same  quantity 
will  yield  results  which  are  so  discordant  as  to  indicate  the  pres- 
ence of  constant  or  systematic  errors  in  one  or  both  of  the  sets. 
The  problem  may  be  further  complicated  by  the  fact  that  the 
precisions  of  the  results  may  be  considerably  different,  so  that  if 
their  weighted  mean  were  taken,  as  in  Art.  156,  it  would  give  a 
decided  preference  to  one  of  them.  The  question  arises  as  to 
whether  the  results  may  not  be  so  far  apart  as  to  make  it  advisable 
to  neglect  their  relative  weights  altogether  and  to  take  their 
simple  mean  arbitrarily.     This  course  is  sometimes  advocated. 

Obviously,  this  is  a  matter  of  judgment  rather  than  Least 
Squares,  and  such  action  should  be  preceded  by  a  careful  investi- 
gation of  all  the  circumstances.  However,  it  may  be  reasonably 
contended  that  if  such  discordant  results  are  to  be  used  at  all  a 
small  difference  in  the  adopted  value  would  be  of  little  moment 
and  the  regular  Least  Squares  process  may  well  be  followed 
without  considering  the  case  as  an  exceptional  one.  Should 
conditions  or  checks  be  found  which  would  be  satisfied  much 
more  nearly  by  one  of  the  results  than  by  the  other,  the  problem 
is  thereby  altered  and  becomes  one  involving  the  assignment  of 
weights  or  perhaps  the  rejection  of  observations.  The  judgment 
of the computer must be the determining factor.

168. Arbitrary Adjustments. The principles outlined in the foregoing chapters, especially in Chapters V and VI, will be found of assistance in some problems where it may be deemed sufficient to approximate to a rigid adjustment by assigning corrections to the observed quantities arbitrarily. While such a method can hardly be defended in the hands of the computer who is conversant with Least Squares, still it must be admitted that such a



computer  is  the  only  one  who  could  be  expected  to  carry  out  an 
arbitrary  adjustment  consistently  and  reasonably.  The  usual 
difficulty  arises  in  satisfying  all  of  the  necessary  conditions  at  the 
same  time  without  a  distribution  of  the  corrections  which  is  clearly 
unreasonable. 

In  certain  problems,  however,  a  distribution  of  arbitrary  cor- 
rections may  be  of  use  in  preparation  for  a  rigid  adjustment.  The 
method  consists  in  applying  to  the  observations  such  preliminary 
corrections,  resulting  from  a  detailed  study  of  the  condition 
equations,  as  will  reduce  the  amounts  of  the  final  corrections. 
This  advance  study  requires  a  clear  understanding  of  the  field 
conditions  as  well  as  the  methods  of  adjustment,  but  when  care- 
fully carried  out  is  likely  to  diminish  the  labor  of  the  computation 
and  to  improve  the  adjustment  by  reducing  the  numerical  quan- 
tities involved.  The  method  is  analogous  to  the  assumption  of 
approximate  values  for  the  unknowns  in  the  adjustment  of  indirect 
observations,  Chapter  III. 

169.  Use  and  Abuse  of  Least  Squares.  In  view  of  the  crit- 
icisms which  are  sometimes  directed  at  the  use  of  Least  Squares 
for  the  adjustment  of  observations,  a  few  words  on  the  subject 
may  not  be  out  of  place  here.  While  it  is  unquestionably  true 
that  the  method  is  sometimes  used  in  an  unwarranted  manner,  the 
real  difficulty  probably  arises  from  the  placing  of  erroneous  inter- 
pretations upon,  or  the  drawing  of  unreasonable  conclusions 
from,  the  results  of  the  adjustments. 

A  great  deal  of  misunderstanding  in  the  minds  of  persons 
unfamiliar  with  the  fundamental  principles  of  the  method  has 
resulted  from  the  use  of  the  term  "  probable  error,"  and  such  per- 
sons are  too  apt  to  blame  the  method  for  the  fruits  of  their  misuse 
of  it.  It  is  unfortunate  that  this  term  has  come  into  use,  since  its 
meaning  in  Least  Squares  is  a  technical  one  and  not  what  would  be 
expected  from  the  ordinary  use  of  the  word  "  probable."  Some 
of  this  trouble,  to  say  the  least,  would  have  been  avoided  by  using 
the  "  mean  square  error." 

A  common  criticism  relates  to  the  use  of  Least  Squares  in 
connection  with  a  very  small  number  of  observations,  even  as 
small as two. The reply may well be, "Where is a better method?"



The  intelligent  computer  does  not  place  the  same  reliance  upon  a 
very  small  number  of  observations  as  upon  a  larger  one,  but  having 
only the small number he uses them as best he can. However, to
place  great  confidence  in  the  precision  of  the  mean  of  two  observa- 
tions is  certainly  questionable,  although  even  that  precision  may  be 
very  useful,  in  spite  of  its  limitations,  for  purposes  of  comparison. 
While  investigations  of  the  precision  of  observations  and 
results  have  been  thus  criticised,  little  or  no  objection  has  ever 
been  raised  against  the  use  of  Least  Squares  for  determining  the 
best  values  of  the  unknown  quantities.  Its  advantages  for  this 
purpose  are  evident  even  to  those  who  are  not  familiar  with  its 
details.  It  provides  a  method  of  adjustment  which  is  consistent, 
definite,  and  adaptable  to  the  various  kinds  of  problems  and  con- 
ditions, and  which  conforms  to  the  facts  as  to  the  occurrence  of 
errors  of  observation.  Generally,  also,  it  is  simpler  than  an  arbi- 
trary adjustment;   certainly  it  is  more  reliable. 

170.  Adjustments  not  Infallible.  The  beginner  must  not 
make  the  error  of  assuming  that  the  results  of  an  adjustment 
are  correct.  At  the  risk  of  repetition,  this  principle  is  emphasized, 
—that  the  results  are  but  approximations  to  the  true  or  correct 
values,  the  best  obtainable  from  the  given  observations.  Should 
the  observations  be  affected  by  constant  errors,  the  results  will  be 
likewise  affected,  without  regard  to  their  precision,  which  is  deter- 
mined from  the  discrepancies  among  them. 

Also,  as  has  been  pointed  out,  different  adjustments  of  the 
same observations by slightly different methods, perhaps, may yield results which are not exactly the same, owing to the fact that different sets of numerical quantities are used. If the computations are carried out to one decimal place more or less, slight variations in the final values may similarly occur. But it should be kept in mind that any one of these various adjustments will probably satisfy the requirements of the problem within the
uncertainties  among  the  observations,  so  that  any  one  of  them 
can  safely  be  adopted. 

171.  Other  Laws  of  Error.  When  applying  the  method  of 
Least  Squares  to  a  new  class  of  problems,  it  becomes  necessary  to 
investigate  the  occurrence  of  the  errors,  particularly  when  these 



are  not  actual  errors  of  observation.  It  has  been  found  by  experi- 
ment that  the  variations  among  many  natural  occurrences  follow 
the  same  law  as  the  accidental  errors  of  observation.  Thus  the 
law  is  applied  in  studies  of  the  growth  of  vegetables,  and  to  the 
occurrence  of  various  characteristics  among  animals. 

To  illustrate  errors  which  do  not  follow  this  law,  we  may 
consider  the  errors  in  a  table  of  logarithms.  It  is  evident  that 
in  a  seven-place  table,  for  example,  the  decimals  following  the 
seventh  place  have  been  rejected  when  less  than  5  in  the  eighth 
place,  while  if  the  eighth  place  is  greater  than  5,  the  seventh  place  is 
increased  by  unity.  Therefore,  instead  of  the  three  assumptions 
upon  which  Least  Squares  is  based  (Art.  18),  we  have  errors 
occurring  only  between  the  limits  0.0  and  0.5,  the  unit  being  in  the 
last  place  of  the  logarithm,  and  in  equal  numbers  without  regard 
to  magnitude  or  sign.  The  probabilities  of  the  occurrence  of  the 
various  errors  between  these  limits  would  be  equal,  and  the  curve 
of  error  would  be  a  rectangle  upon  the  axis  of  errors  as  a  base  and 
limited  by  the  ordinates  at  +0.5  and  —0.5. 
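
The rectangular law just described is easily exhibited by a small simulation in modern Python; ordinary rounding to seven places is the only assumption, and the counts come out nearly equal in every tenth of the interval from −0.5 to +0.5.

    import math, random

    random.seed(1)
    errors = []
    for _ in range(10000):
        x = random.uniform(1.0, 10.0)
        exact = math.log10(x)
        rounded = round(exact, 7)
        errors.append((rounded - exact) * 1e7)   # error in units of the seventh place

    # Tally by tenths of a unit; a rectangular (uniform) law gives roughly equal counts.
    bins = [0] * 10
    for e in errors:
        k = min(int((e + 0.5) * 10), 9)
        bins[k] += 1
    print(bins)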

172.  Review:  Outline  of  Methods  of  Adjustment.  In  con- 
clusion, a  brief  outline  will  be  given  covering  the  main  classes 
of  problems  which  have  been  considered  and  the  methods  of  solu- 
tion. 

Direct  Observations  of  a  Single  Quantity. 

Adjustment. Take the mean or the weighted mean.

Indirect  Observations. 

Adjustment. Write the observation equations and from them the normal equations; the solution of the latter gives the unknown quantities themselves or the corrections to their assumed approximate values. The number of the observation equations will be the same as that of the observations; the number of the normal equations will equal that of the unknown quantities, which must always be less than that of the observations.
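
In modern notation this routine may be sketched for the simple case of two unknowns and observations of equal weight; the observation equations and their absolute terms below are hypothetical, and the brackets [aa], [ab], etc., denote the usual sums of products.

    def adjust_indirect(A, M):
        """A: list of [a, b] coefficients of observation equations a*X + b*Y = M.
        Forms the normal equations [aa]X + [ab]Y = [aM], [ab]X + [bb]Y = [bM]
        and solves them for the two unknowns."""
        aa = sum(a * a for a, b in A)
        ab = sum(a * b for a, b in A)
        bb = sum(b * b for a, b in A)
        aM = sum(a * m for (a, b), m in zip(A, M))
        bM = sum(b * m for (a, b), m in zip(A, M))
        det = aa * bb - ab * ab
        X = (aM * bb - ab * bM) / det
        Y = (aa * bM - ab * aM) / det
        return X, Y

    # Three hypothetical observation equations in two unknowns:
    A = [[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]]
    M = [3.02, 0.98, 5.03]
    print(adjust_indirect(A, M))   # roughly (2.00, 1.02)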

Conditioned  Observations. 

Adjustment. Write the condition equations in their general form and then in their simple form involving the corrections. From them form the normal equations, the same in number as the conditions. The solution of the normal equations gives a set of factors, called correlates, one for each condition equation, from which the desired corrections to the observed quantities are determined.
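
For the simplest case, a single condition among observations of equal weight, the correlate solution reduces to a few lines. The sketch below, with a hypothetical triangle whose observed angles fail to close by 6", is an illustration only.

    def adjust_one_condition(coeff, w):
        """Corrections for a single condition  sum(coeff[i]*v[i]) + w = 0
        with observations of equal weight: the correlate is k = -w / [bb],
        and each correction is v[i] = coeff[i] * k."""
        bb = sum(c * c for c in coeff)
        k = -w / bb
        return [c * k for c in coeff]

    # Hypothetical triangle whose observed angles sum to 180 deg 00' 06":
    # condition v1 + v2 + v3 + 6" = 0, so each angle receives -2".
    print(adjust_one_condition([1, 1, 1], 6.0))   # [-2.0, -2.0, -2.0]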

Simple  Propagation  of  Error. 

Solution.  Write  the  literal  function  whose  mean  square  error 
is  desired.  Differentiate  it  successively  with  respect  to  each  of  the 
quantities  for  which  mean  square  errors  are  given.  Substitute 
these  partial  derivatives  and  the  given  mean  square  errors  in  the 
general  equation  of  propagation  of  error  to  obtain  the  mean  square 
error  of  the  function. 
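
A short sketch of this substitution, with hypothetical measures: for F = xy the partial derivatives are y and x, and the general equation gives the mean square error of the product.

    import math

    def propagate(partials, errors):
        """Mean square error of a function from its partial derivatives and the
        mean square errors of the measured quantities:
        eps_F = sqrt(sum((dF/dx_i)**2 * eps_i**2))."""
        return math.sqrt(sum((p * e) ** 2 for p, e in zip(partials, errors)))

    # F = x*y with x = 120.0 +/- 0.05 and y = 80.0 +/- 0.04 (hypothetical figures):
    x, y = 120.0, 80.0
    print(propagate([y, x], [0.05, 0.04]))   # about 6.25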

Compound  Propagation  of  Error. 

Solution.  Find  the  mean  square  error  of  the  function  as  above 
for  each  of  the  different  sources  of  error,  and  take  the  square  root 
of  the  sum  of  their  squares. 

Combination  of  Computed  Quantities. 

Adjustment.  Give  to  each  value  a  weight  equal  to  the  recip- 
rocal of  the  square  of  its  mean  square  error  and  take  the  weighted 
mean  as  the  best  value  of  the  quantity. 
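
In modern form, with hypothetical determinations of a length: the weights are the reciprocals of the squares of the mean square errors, and the mean square error of the weighted mean follows from the reciprocal of the sum of the weights.

    import math

    def combine(values, eps):
        """Weighted mean of independent determinations, each weighted by the
        reciprocal of the square of its mean square error; returns the mean
        and its own mean square error."""
        weights = [1.0 / e ** 2 for e in eps]
        mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        eps_mean = math.sqrt(1.0 / sum(weights))
        return mean, eps_mean

    # Two hypothetical determinations of a length, the first the more precise:
    print(combine([423.62, 423.70], [0.03, 0.06]))   # about (423.636, 0.027)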

Empirical  Formulas. 

Solution.  Plot  the  observations  and  sketch  a  smooth  curve 
through  them.  From  this  curve  select  the  form  of  the  desired 
equation.  Write  an  observation  equation  of  the  selected  form  for 
each  of  the  observations,  reducing  to  the  linear  form  if  necessary. 
Write  normal  equations  and  solve  them  as  in  Indirect  Observations, 
for  the  constants  or  coefficients  of  the  formula. 
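
For the simplest selected form, the straight line y = A + Bx of Plate I, the solution may be sketched as follows; the plotted observations are hypothetical.

    def fit_line(xs, ys):
        """Least-squares constants of y = A + B*x from the normal equations
        n*A + [x]B = [y]  and  [x]A + [xx]B = [xy]."""
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        det = n * sxx - sx * sx
        A = (sy * sxx - sx * sxy) / det
        B = (n * sxy - sx * sy) / det
        return A, B

    # Hypothetical observations scattered about a straight line:
    xs = [1, 2, 3, 4, 5]
    ys = [2.6, 3.1, 3.4, 4.1, 4.4]
    print(fit_line(xs, ys))   # roughly (2.14, 0.46)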


APPENDIX  A 
HISTORY  AND  BIBLIOGRAPHY  OF  LEAST  SQUARES 

173.  Historical  Sketch.^  The  principle  of  the  arithmetic 
mean  is  very  old.  But  when  the  first  indirect  observations  were 
made,  probably  in  astronomy,  the  necessity  for  adjustment  became 
apparent.  Observation  equations  were  written  as  early  as  1748, 
by  Euler.  In  1757  Simpson  stated  the  axiom  that  positive  and 
negative  errors  occur  with  equal  frequency,  and  in  1770  Lagrange 
considered  the  occurrence  of  errors  from  the  standpoint  of  the 
theory of probability. Laplace, in 1774, in his "Mécanique Céleste," further investigated the subject and laid the foundation for the development of Least Squares.

It  was  not  until  the  end  of  the  18th  century,  however,  that 
the  Method  of  Least  Squares  was  introduced.  The  first  publi- 
cation of the principle of least squares was by Legendre, in 1805, in his "Nouvelles méthodes pour la détermination des orbites des comètes," and by him the name was given, "Méthode des moindres quarrés." Although there is no question as to the priority of publication, it seems well established that Gauss had actually developed and used the method itself since 1794, when he was a student at the University of Göttingen. His first publication on the subject, however, was not until 1809, in his classic work, "Theoria motus corporum coelestium." But Gauss deserves more credit than anyone else for the further development of the Method of Least Squares, and as Merriman states,² "Few branches of science owe so large a proportion of subject-matter to the labors of one man."

The  first  publication  of  a  theoretical  derivation  of  the  Law  of 
Error  was  made  by  Dr.  R.  Adrian,  of  Reading,  Pa.,  in  1808,  in 

¹ For more detailed information, the reader is referred to Jordan, Handbuch der Vermessungskunde, Band I, Einleitung.
² Merriman: Method of Least Squares.


the  "  Analyst  or  Mathematical  Museum,"  at  Philadelphia. 
Gauss  published  his  in  the  next  year,  and  various  others  have 
followed. 

174.  Growth  of  the  Literature.  The  development  of  the  sub- 
ject is  indicated  by  the  rate  at  which  publications  devoted  to  it 
appeared. In 1877 Professor Merriman published an investigation¹ of the literature of Least Squares, as a result of which he deduced some interesting statistics. The following data are based upon his work.

Prior  to  1805,  22  titles  were  found.  From  that  time  on,  aver- 
aging by  decades,  the  rate  of  publication  increased  steadily  from 
about  two  per  year  in  1810  to  about  ten  per  year  in  1870.  Alto- 
gether, 408 titles were listed up to 1875. Of these, 153 were published in Germany, 78 in France, 56 in Great Britain, and 34 in
the  United  States,  the  remaining  ones  being  scattered  over  eight 
countries.  The  German  language  was  used  in  167  instances, 
French  in  110,  and  English  in  90. 

175.  Bibliography.  In  addition  to  the  paper  by  Merriman, 
referred  to  above,  Gore's  Bibliography  of  Geodesy,  in  the  Report 
of  the  U.  S.  Coast  and  Geodetic  Survey  for  1887,  will  be  found 
very useful in an investigation of the literature of this subject,
although  many  important  works  have  appeared  since  that  time. 

From  the  large  number  of  books  and  parts  of  books  devoted  to 
Least  Squares  and  the  Adjustment  of  Observations,  the  following 
are  selected  for  reference: 

Wright: Adjustment of Observations. Van Nostrand, New York, 1884. This is the classic work in the English language on this subject. The applications are principally geodetic. It has long been out of print, and was succeeded by

Wright and Hayford: Adjustment of Observations. Van Nostrand, 1907. Less comprehensive than the foregoing, but improved in many respects. Mainly geodetic.

Jordan: Handbuch der Vermessungskunde, I. Metzler, Stuttgart, 1910. A very complete treatise, presented in a direct style which is easily read. Most valuable for reference. Geodetic.

Helmert: Ausgleichungsrechnung. Teubner, Leipzig, 1907. Comprehensive and scholarly, but somewhat difficult to read. The notation is unusual. Geodetic and physical.

¹ Merriman: List of Writings Relating to the Method of Least Squares, published in the Transactions of the Connecticut Academy, New Haven, 1877.



Koll: Methode der kleinsten Quadrate. Berlin, 1893. Extensive and practical with many applications.

Czuber: Theorie der Beobachtungsfehler. Leipzig, 1891. Largely theoretical, with applications to life insurance and statistics.

Merriman: Method of Least Squares. Wiley, New York, 1913. Geodetic applications but general in scope.

Comstock: Method of Least Squares. Ginn, Boston, 1895. Astronomical and general.

Bartlett: Method of Least Squares. Boston, 1915. Contains an extensive list of examples for solution.

Weld: Theory of Errors and Least Squares. Macmillan, New York, 1916. General and practical with many exercises for solution.

Johnson: Theory of Errors and Method of Least Squares. Wiley, 1892. General; strong in illuminating explanations.

Brunt: Combination of Observations. Cambridge University Press, 1917. Theoretical.

Chauvenet: Practical and Spherical Astronomy. Lippincott, Philadelphia, 1896.

Crandall: Geodesy and Least Squares. Wiley, New York, 1907.

Adams: Application of Least Squares to the Adjustment of Triangulation. Special Publication No. 28, U. S. C. & G. Survey, 1915. A very important contribution to this subject.


APPENDIX  B 
PRINCIPLES  OF  PROBABILITY 

176. Definition. If an event can occur in a ways, and can fail to occur in b ways, the probability of its occurrence will be a/(a+b), and that of its failure to occur will be b/(a+b), it being assumed that all the ways of occurrence or failure to occur are entirely independent and equally likely. Thus, in one throw of a die, the probability of a certain face lying upward is 1/6, and that of its not being upward, that is, of any other face being upward, is 5/6. The probability of throwing any face upward will be 6/6 = 1, in other words, certainty. Therefore, if the probability of the occurrence of an event be p, then that of the failure of the event to occur will be 1 − p, provided it is certain that the event must either occur or fail.

First  Principle.  The  probability  of  the  occurrence  of  an 
event  is  therefore  a  proper  fraction  between  the  limits  zero  (impos- 
sibility) and unity (certainty), and may be defined as the ratio
between  the  number  of  ways  in  which  the  event  may  occur  and  the 
number  of  ways  in  which  it  may  either  occur  or  fail. 

177.  Two  Sources  of  Probability.  The  probability  of  an 
event  may  be  based  upon  theory  or  experience.  The  above  case 
of throwing a die is an example of the theoretical basis. We know without question how many faces the die has and, therefore, the number of ways in which a certain face can lie upwards. The numbers involved are known absolutely. In the second case, on the other hand, the number of ways in which the event can occur is assumed as a result of experiment or experience. For example, if an event has occurred a times and failed b times out of a large number, a+b, of trials, we may say that the probability of its occurrence, under the same conditions, is a/(a+b), as before. Thus we may also define the probability of an event as the ratio


of  the  number  of  times  it  has  occurred  to  the  total  number  of 
times  it  has  occurred  or  failed;  but  the  total  number  of  cases,  or 
attempts,  should  be  sufficiently  large  to  justify  their  use  as  a  basis 
for  generalization.  To  illustrate,  suppose  that  statistics  show 
that  in  the  long  run  the  number  of  male  children  born  is  to  that  of 
female  children  born  as  21  to  20;  then  the  probability  that  any 
birth  will  be  that  of  a  male  is  21/41. 

178.  Simple  Probability.  The  above  statements,  relating  to 
the  occurrence  of  a  single  event,  illustrate  simple  probability. 
The  principle  will  be  further  amplified.  Suppose  a  box  to  contain 
w white, b black, and r red balls of the same weight and texture, and that a single ball is drawn from the box at random. Then the probability of drawing a ball of a certain color will be as follows:

    White,                  w/(w+b+r)
    Black,                  b/(w+b+r)
    White or black,         (w+b)/(w+b+r)
    Black or red,           (b+r)/(w+b+r)
    White, black, or red,   (w+b+r)/(w+b+r) = 1
    Yellow,                 0/(w+b+r) = 0

Thus we may state the Second Principle. If the ways in which a single event can occur independently can be grouped in different sets or series, and the probability of its occurrence in each series be known, the total probability of its occurrence in any combination of the series will be the sum of the corresponding separate probabilities. In the above example, a single ball can be drawn from the white ones with a probability of w/(w+b+r), or from the black ones with a probability of b/(w+b+r); then the probability of drawing either a white or a black ball will be the sum of the two probabilities, namely, (w+b)/(w+b+r).

179. Compound Probability. Independent Events. Suppose we have, in addition to the above box, a second one containing w′ white, b′ black, and r′ red balls, and that we draw a ball from each box. Each of the w+b+r possible draws from the first box may occur in combination with each of the w′+b′+r′ balls in the second, so that the total number of possible draws of two balls, one from each box, will be (w+b+r)(w′+b′+r′). Also, each of the white balls in the first box may be drawn with each of the white ones in the second box, giving ww′ possible pairs of white balls drawn one from each box. Therefore the probability of drawing simultaneously, two balls of one color, one from each box, will be,

    Two white balls,   ww′/((w+b+r)(w′+b′+r′))
    Two black balls,   bb′/((w+b+r)(w′+b′+r′))
    Two red balls,     rr′/((w+b+r)(w′+b′+r′))

As a result of this reasoning, we can state the Third Principle: If two or more independent events are to occur simultaneously, and the probability of the separate occurrence of each is known, that of the simultaneous occurrence of all of them will be the product of the separate probabilities.

180. Compound Probability. Dependent Events. The probability of drawing a black and white pair, one ball from each of the two boxes, is an example in which the events are dependent. For, if a white ball were drawn from the first box, a black one would necessarily have to be drawn from the second box in order to make the pair, and vice versa, so that the probability of the second event would be different in the case of the failure of the first one than in its occurrence. Then the number of possible black and white pairs, one ball from each box, would be wb′+w′b, and the probability of drawing such a pair would be,

    (wb′+w′b)/((w+b+r)(w′+b′+r′))

Here we have the occurrence of a compound event in two sets or series, so that the total probability is the sum of the separate (compound) probabilities.

Events are dependent when the probability of the occurrence of one of them depends upon the occurrence, or failure to occur, of another. By a careful analysis of each problem, it will usually be easy to so arrange or combine the events as to render them independent. In the foregoing example, the case of drawing a black and white pair, one ball from each box, is clearly one of dependent events, but if we require the probability of drawing a white ball from the first box simultaneously with a black one from the second box, the events are independent and, from the preceding article, the probability would be,

    wb′/((w+b+r)(w′+b′+r′))

Also, the probability of drawing a black ball from the first box and a white one from the second, simultaneously, would be

    w′b/((w+b+r)(w′+b′+r′))

But each of these events, while compound, is independent of the other. They are of the same character and may be considered as a single compound event occurring in two sets or series, so that the total probability of its occurrence in either manner will be, as in Art. 178, the sum of the two separate probabilities, that is,

    (wb′+w′b)/((w+b+r)(w′+b′+r′))

This is evident, also, from the first principle, when we note that the total number of black and white pairs is wb′+w′b, while the total number of possible pairs, of all colors, is (w+b+r)(w′+b′+r′).



181.  Number  of  Occurrences.  It  follows  from  the  definition 
of  probability  (Art.  176),  that  the  number  of  times  an  event  occurs 
may  be  determined  by  multiplying  the  total  number  of  possible 
occurrences  and  failures,  in  other  words,  trials  or  attempts,  by 
the  probability  of  the  occurrence  of  the  event.  In  Least  Squares, 
for  example,  the  number  of  errors  less  than  a  certain  amount  to  be 
expected  in  a  given  series  of  observations  will  be  equal  to  the 
probability of an error less than that amount multiplied into the
total  number  of  observations  in  the  set. 
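
Thus, with the tabular probability 0.3108 of an error less than 0.4 of the mean square error (Table II of Appendix F), a hypothetical set of 60 observations should show about 19 such errors; in modern notation,

    # Expected numbers of errors in a set of n observations, from the tabular
    # probability p of an error less than the chosen limit (Table II); the
    # figures below are hypothetical.
    def expected_counts(n, p):
        """Return the expected numbers of errors below and above the limit."""
        return n * p, n * (1 - p)

    print(expected_counts(60, 0.3108))   # about 18.6 below the limit, 41.4 above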


APPENDIX  C 
DERIVATION    OF    THE    LAW    OF    ERROR 

182.  The  Law  of  Error,  that  is,  the  equation  of  the  Error  Curve, 
(Art.  19),  has  been  derived  in  several  ways  by  different  writers 
since  the  original  demonstration  by  Dr.  Adrian  in  1808,  published 
at Philadelphia in the "Analyst." The most notable of these, however, are the methods of Gauss (1809) and Hagen (1837). The former of these two will now be explained.¹

183.  Assumptions.  The  Error  Function.  Gauss  based  his 
derivation  upon  the  assumption  of  the  arithmetic  mean  as  the  most 
probable  value  of  a  directly  observed  quantity  when  all  of  the 
observations  are  made  with  the  same  care.  Also,  the  occurrence 
of  the  errors  of  observation  is  assumed  to  be  in  accordance  with 
the  three  axioms  of  Art.  18. 

Since  small  errors  are  more  numerous  than  large  ones,  and  since 
the  probability  of  an  error  of  a  certain  size  is  directly  proportional 
to  the  number  of  times  that  that  error  occurs  in  the  given  series  of 
observations,  it  is  evident  that  the  probability  of  an  error  is  a 
function of the error itself. Representing any error by Δ, the probability of the occurrence of this error by P_Δ, and the probability function by φ(Δ), we can write,

    P_Δ = φ(Δ)                                              (308)

Strictly  speaking,  consecutive  errors  will  differ  by  small  finite 
amounts  which  are  the  least  readings  made  with  the  given 
instrument  or  by  the  method  used.  For  example,  the  least  reading 
of  a  vernier  on  a  circle  may  be  10",  so  that  all  the  observations 
might  be  made  only  to  the  nearest  10",  and  the  errors  themselves 

¹ See Brunt's Combination of Observations, for the methods of Hagen, Thomson and Tait, and Eddington. Hagen's proof is given in many works on Least Squares. Comstock frankly assumes the Law of Error to be empirical, which is a reasonable and practical method of attack.


would  differ  by  multiples  of  10".  So  the  ordinates  to  the  error 
curve,  corresponding  to  the  various  errors,  and  the  successive 
points  on  the  curve,  would  be  separated  by  these  intervals.  How- 
ever, as  the  precision  of  the  observations  increases,  these  ele- 
mental differences  decrease  and  so  we  may  reasonably  regard  the 
points  as  being  so  close  together  as  to  make  the  curve  continuous. 
Thus we may consider that the errors, Δ, vary continuously, and that the function, φ(Δ), is a continuous one. The probability of an error, Δ, is therefore equivalent to the probability of an error between the limits, Δ and Δ + dΔ.

The probability of the occurrence of an error between two limits is the sum of the separate probabilities of all the possible errors lying between those limits.¹ If we regard each probability as the ordinate to the error curve, corresponding to its particular error, the sum of these successive ordinates, when the curve is a continuous one, will constitute the area between the limiting ordinates, the curve, and the axis of Δ. Then, the probability of an error between Δ and Δ + dΔ would be represented by the area of the infinitesimal vertical strip of length φ(Δ) and of width dΔ, that is, by the area φ(Δ)dΔ. Therefore, the probability of the occurrence of an error between the limits a and b would be

    ∫_a^b φ(Δ) dΔ

If the limits be extended so as to include all possible errors, namely, between −∞ and +∞, the probability of the occurrence of any error between these limits would be unity, that is, certainty, and this can be stated,

    ∫_{−∞}^{+∞} φ(Δ) dΔ = 1                                 (309)

or, since the area is symmetrical about the axis of probability,

    ∫_0^{+∞} φ(Δ) dΔ = 1/2                                  (310)

¹ See Appendix B, Principles of Probability.



184. Derivation of the Law of Error. We shall consider the general case of indirect observations, since direct observations form but a special case under it. The observed quantity is a function of the unknown quantities. Let there be n observations and m unknowns, n being greater than m (Art. 22). The observation equations may be written,

    f₁(X, Y, Z, . . .) = M₁
    f₂(X, Y, Z, . . .) = M₂                                 (311)
    . . . . . . . . . . . .
    fₙ(X, Y, Z, . . .) = Mₙ

Let Δ₁, Δ₂, Δ₃, . . . Δₙ be the respective errors of M₁, M₂, M₃, . . . Mₙ, and let the probability of the occurrence of Δ₁ be φ(Δ₁), that of Δ₂ be φ(Δ₂), etc. Then the probability of the simultaneous occurrence of this series of errors will be the product of their separate probabilities, or,

    P = φ(Δ₁) φ(Δ₂) φ(Δ₃) . . . φ(Δₙ)                        (312)

Taking the logarithm of each member, this becomes,

    log P = log φ(Δ₁) + log φ(Δ₂) + . . . + log φ(Δₙ)        (313)

The most probable series of errors will be those for which the above probability is a maximum, which also will be the case when log P is a maximum. This is the condition for the best or most probable values of the unknowns. Therefore, the first derivative of (313) must equal zero, and since the unknowns X, Y, Z, . . ., in the case of indirect observations are independent, it follows that the separate partial derivatives of log P with respect to these unknowns must equal zero. Thus we obtain,

    dφ(Δ₁)/(φ(Δ₁)dX) + dφ(Δ₂)/(φ(Δ₂)dX) + . . . + dφ(Δₙ)/(φ(Δₙ)dX) = 0
    dφ(Δ₁)/(φ(Δ₁)dY) + dφ(Δ₂)/(φ(Δ₂)dY) + . . . + dφ(Δₙ)/(φ(Δₙ)dY) = 0             (314)
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Multiplying and dividing each fraction by the corresponding dΔ,

    (dφ(Δ₁)/(φ(Δ₁)dΔ₁))(dΔ₁/dX) + (dφ(Δ₂)/(φ(Δ₂)dΔ₂))(dΔ₂/dX) + . . . + (dφ(Δₙ)/(φ(Δₙ)dΔₙ))(dΔₙ/dX) = 0
    (dφ(Δ₁)/(φ(Δ₁)dΔ₁))(dΔ₁/dY) + (dφ(Δ₂)/(φ(Δ₂)dΔ₂))(dΔ₂/dY) + . . . + (dφ(Δₙ)/(φ(Δₙ)dΔₙ))(dΔₙ/dY) = 0     (315)

Since the function, φ(Δ)dΔ, must be applicable to any number of unknowns, we shall make use of the case of one unknown, directly observed, from which to determine the nature of the function. Letting X represent the true value of the unknown, and Δ the true error of the observation, M, we may write,

    X − M₁ = Δ₁
    X − M₂ = Δ₂                                             (316)
    . . . . . . .
    X − Mₙ = Δₙ

Differentiating,

    dΔ₁/dX = dΔ₂/dX = dΔ₃/dX = . . . = dΔₙ/dX = 1           (317)

Substituting in (315),

    dφ(Δ₁)/(φ(Δ₁)dΔ₁) + dφ(Δ₂)/(φ(Δ₂)dΔ₂) + . . . + dφ(Δₙ)/(φ(Δₙ)dΔₙ) = 0          (318)

Multiplying and dividing each fraction by the corresponding Δ,

    (dφ(Δ₁)/(Δ₁φ(Δ₁)dΔ₁))Δ₁ + (dφ(Δ₂)/(Δ₂φ(Δ₂)dΔ₂))Δ₂ + . . . + (dφ(Δₙ)/(Δₙφ(Δₙ)dΔₙ))Δₙ = 0     (319)

But it is assumed in direct observations that the mean is the best or most probable value of the observed quantity, and that, as the number of observations increases indefinitely, the mean approaches the true value as a limit. So we may write,

    X = (M₁ + M₂ + . . . + Mₙ)/n                            (320)

or,

    (X − M₁) + (X − M₂) + . . . + (X − Mₙ) = 0              (321)

whence, from (316),

    Δ₁ + Δ₂ + . . . + Δₙ = 0                                (322)

Both (319) and (322) hold good as the number of observations is increased one by one. But in order that this condition may exist, it is necessary that the coefficients of the Δ's in (319) be equal and constant, so that they may be cancelled from that equation.

Therefore, we may write, in general,

    dφ(Δ)/(Δφ(Δ)dΔ) = a constant, say k                     (323)

or,

    dφ(Δ)/(φ(Δ)dΔ) = kΔ                                     (324)

Integrating,

    log φ(Δ) = ½kΔ² + k′                                    (325)

whence,¹

    φ(Δ) = e^(½kΔ² + k′)                                    (326)

But one of the original assumptions was that small errors are more numerous and more probable than large ones. Thus, as Δ decreases, φ(Δ) must increase, which requires that k must always be negative. To effect this, we replace k/2 by the new constant, −h². Then, replacing the constant factor, e^(k′), by the constant, C, we obtain the expression for the probability of an error, Δ,

    φ(Δ) = C e^(−h²Δ²)                                      (327)

185. The Constant, C. It remains to determine the value of the constant, C. Substituting (327) in (310),

    ∫_0^∞ C e^(−h²Δ²) dΔ = 1/2                              (328)

Let t = hΔ; then dt = h dΔ. Also, when Δ = 0, t = 0, and when Δ = ∞, t = ∞. Therefore we may write (328) as follows,

    (C/h) ∫_0^∞ e^(−t²) dt = 1/2                            (329)

¹ e is the base of Napierian logarithms.


But,¹

    ∫_0^∞ e^(−t²) dt = √π/2                                 (330)

so that, from (329) and (330),

    h/(2C) = √π/2                                           (331)

whence,

    C = h/√π                                                (332)

Therefore, we have from (327) the final expression for the Law of Error,

    φ(Δ) = (h/√π) e^(−h²Δ²)                                 (333)

¹ This definite integral may be evaluated in various ways. The following method is given by Bartlett. From the assumption, t = hΔ, we have,

    ∫_0^∞ e^(−t²) dt = ∫_0^∞ e^(−h²Δ²) h dΔ                 (334)

But when definite integrals, only, are used, the value is independent of the letter chosen for the variable, so that also,

    ∫_0^∞ e^(−t²) dt = ∫_0^∞ e^(−h²) dh                     (335)

Multiplying (334) and (335),

    (∫_0^∞ e^(−t²) dt)² = ∫_0^∞ ∫_0^∞ e^(−h²(1+Δ²)) h dΔ dh                        (336)

Integrating first with respect to h,

    ∫_0^∞ e^(−h²(1+Δ²)) h dh = [−e^(−h²(1+Δ²))/(2(1+Δ²))]_0^∞ = 1/(2(1+Δ²))        (337)

so that

    (∫_0^∞ e^(−t²) dt)² = (1/2) ∫_0^∞ dΔ/(1+Δ²)                                    (338)
        = (1/2) [tan⁻¹Δ]_0^∞                                                       (339)
        = π/4                                                                      (340)

Therefore,

    ∫_0^∞ e^(−t²) dt = √π/2                                 (341)



186. Expansion of Law of Error in Series. The Law of Error may be expressed in the form of series for convenience in evaluating it for various values of Δ.¹

Using the quantity, t = hΔ, as an auxiliary variable, we can state (333) as follows:

    φ(Δ)dΔ = (1/√π) e^(−h²Δ²) h dΔ = (1/√π) e^(−t²) dt      (342)

which is the probability of an error between Δ and Δ + dΔ. The probability of the occurrence of an error less than Δ will be that of an error between the limits, −Δ and +Δ, that is, since t = hΔ,

    p = ∫_{−Δ}^{+Δ} φ(Δ) dΔ = (1/√π) ∫_{−t}^{+t} e^(−t²) dt = (2/√π) ∫_0^{t} e^(−t²) dt     (343)

    = (2t/√π) (1 − t²/(3·1!) + t⁴/(5·2!) − t⁶/(7·3!) + . . .),                              (344)

for use with small values of t, or,

    = 1 − (e^(−t²)/(t√π)) (1 − 1/(2t²) + 1·3/(2t²)² − 1·3·5/(2t²)³ + . . .),                (345)

for use with large values of t.
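
The convergent series is easily verified numerically. In the sketch below the exact integral is supplied by the modern library function math.erf; the twenty-five-term truncation is an arbitrary choice for illustration, and the printed values agree with Table I of Appendix F.

    import math

    def p_series(t, terms=25):
        """Convergent series p = (2/sqrt(pi)) * (t - t**3/(3*1!) + t**5/(5*2!) - ...)."""
        total = 0.0
        for k in range(terms):
            total += (-1) ** k * t ** (2 * k + 1) / ((2 * k + 1) * math.factorial(k))
        return 2.0 / math.sqrt(math.pi) * total

    for t in (0.25, 0.50, 1.00, 2.00):
        print(t, round(p_series(t), 4), round(math.erf(t), 4))
    # 0.25 -> 0.2763, 0.50 -> 0.5205, 1.00 -> 0.8427, 2.00 -> 0.9953, as in Table I.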

187. Tables of the Law of Error. From the above formulas, tables have been computed with the argument t, giving the probability of an error less than Δ, in a given set of observations. Table I, in Appendix F, has been formed in this manner. To use such a table, the mean square error of a single observation, ε, is computed from the residuals of the mean. Then t is obtained from the assumed error, Δ, by means of the relation,

    t = hΔ = Δ/(ε√2)                                        (346)

since, from (209), h = 1/(ε√2). Finally, with t as an argument, the tabular probability is obtained.

¹ See Wright and Hayford; Crandall, Art. 114; Chauvenet, Vol. I.



However, it is more convenient in many cases to express the function in terms of Δ/ε or Δ/r directly, and this has been done in Tables II and III, respectively, in Appendix F. The table gives the probability of an error less than a certain fraction (Δ/ε) of the mean square error, or (Δ/r), of the probable error, of a single observation. Thus, from Table II, the probability of an error less than 0.4 of the mean square error is 0.3108, and the number of such errors should be approximately 0.3108 times the number of observations, n, in the given series. Similarly, the number of errors greater than 0.4ε would theoretically be n(1 − 0.3108). By comparing these theoretical numbers of errors with those actually counted in the given set, it is possible to ascertain how closely the observations conform to the theory (Art. 20).
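
Such a comparison may be carried out directly from the residuals. The sketch below uses a hypothetical list, computes ε from Σv²/(n − 1), and sets the observed proportions beside the tabular values 0.3108, 0.6827, and 0.9545 of Table II.

    import math

    residuals = [+1.2, -0.4, +0.7, -1.9, +0.2, -0.8, +1.5, -0.3, +0.6, -1.1,
                 +0.9, -0.5, +0.1, -1.4, +0.8]                  # hypothetical
    n = len(residuals)
    eps = math.sqrt(sum(v * v for v in residuals) / (n - 1))    # mean square error

    for frac, theory in ((0.4, 0.3108), (1.0, 0.6827), (2.0, 0.9545)):
        observed = sum(1 for v in residuals if abs(v) < frac * eps) / n
        print(frac, round(observed, 3), theory)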


APPENDIX  D 
OUTLINE  OF  A  SHORT  COURSE  OF  INSTRUCTION 

188.  General  Plan.  While  it  is  desirable  to  devote  a  three- 
hour  course  for  one  semester  to  the  study  and  practice  of  Least 
Squares  and  the  Adjustment  of  Observations,  with  civil  engineer- 
ing students,  the  author  presents  the  following  outline  of  a  one- 
hour  course  which  he  conducted  at  Cornell  University  when,  owing 
to  the  demands  of  other  courses,  this  was  all  the  time  which  the 
student  could  devote  to  the  subject.  He  regards  such  a  course 
as  very  much  worth  while  and  believes  that  the  students  obtained 
a  good  general  knowledge  of  the  methods  of  adjusting  observa- 
tions together  with  considerable  practice  in  the  solution  of 
problems. 

The  course  was  given  in  16  lessons,  and  in  addition  to  the 
fifty-minute  lecture,  the  student  was  expected  to  work  two  hours 
at  home  upon  the  text  and  the  assigned  problem.  The  problems 
were  handed  in  at  the  next  lecture,  with  a  penalty  for  failing  to  do 
so.  It  was  considered  essential  that  the  problem  be  solved  while 
the  topic  was  fresh  in  the  student's  mind.  The  problems  were 
carefully  examined  by  comparison  with  standards  and  returned 
for  correction,  if  necessary,  or  retained  until  the  end  of  the  term. 
The  work  was  required  to  be  neatly  done  with  the  idea  that  the 
set of examples would be kept for reference.¹

The lectures had to be limited to the essential parts of the subject, owing to the limited time, and especial attention was given to the solution of the problem at hand. Sometimes two lectures intervened between problems, and in the case of the double problem of the adjustment of a quadrilateral a lecture was omitted in order to give the student more time for the solution. The

¹ The paper known as "Data Sheets" was used. It is 8 × 10½ inches and ruled with blue lines one-fourth inch apart parallel to the shorter edge and with perpendicular red lines forming ten equal columns. A blank margin is left at the top and left-hand edges of the sheet.


first  lectures  were  devoted  to  a  very  careful  consideration  of  the 
occurrence  of  errors.  Thence  the  order  is  indicated  by  the  prob- 
lems in  the  following  list. 

189.  List  of  Problems.  It  was  intended  that  each  of  the 
ordinary  problems  would  be  of  such  length  that  the  average 
student  could  solve  it  in  two  hours,  in  connection  with  the  accom- 
panying text.  The  order  here  given  may  be  varied,  if  desired,  and 
Nos.  9  and  10  may  be  combined.  The  inclusion  of  the  topic  of 
index  of  precision  and  mean  square  error  in  the  introductory  lec- 
tures will  depend  upon  the  preference  of  the  instructor;  it  is  not 
necessary  to  introduce  it  until  the  propagation  of  error  is  to  be 
studied. 

1.  Simple  and  weighted  means;  precision  and  mean  square 
error. 

2.  Indirect  observations;  observation  equations  given;  direct 
solution  for  the  unknowns. 

3.  Indirect  observations;  observation  equations  given;  solu- 
tion with  approximate  values  of  unknowns  to  find  corrections  to 
those  values. 

4.  Local  adjustment  of  angles  at  a  station. 

5.  Local  adjustment;  method  of  directions. 

6.  Adjustment  of  a  level  net. 

7.  Adjustment  of  a  quadrilateral;  method  of  angles. 

8.  Adjustment  of  a  quadrilateral;  method  of  directions. 
(Problems  7  and  8  may  be  combined,  using  same  data.) 

9.  Simple  propagation  of  error. 

10.  Compound  propagation  of  error. 

11.  Combination  of  results;   weights  from  mean  square  errors. 


APPENDIX  E 
TYPICAL  CURVES  FOR  REFERENCE 


PLATE I. Straight Lines. y = A + Bx; A = intercept on Y-axis; B = tangent of slope to X-axis. (See the following plates for these curves plotted by x and y.)

PLATE II. Parabola.

PLATE III.

PLATE IV. Parabolas.

PLATE V. Parabolas.

PLATE VI.

PLATE VII.

PLATE VIII.

[The plates are graphical and only their titles and captions are reproduced here; they illustrate the straight lines, parabolas, hyperbolas, exponential and logarithmic curves, and periodic curves referred to in the chapter on empirical formulas.]


APPENDIX    F 
TABLES 

TABLE I

Values of p = (2/√π) ∫_0^t e^(−t²) dt                (Arts. 136 and 187)

Argument is t = hΔ.

      t        p      |      t        p
    0.00    0.0000    |    1.00    0.8427
    0.05     .0564    |    1.05     .8624
    0.10     .1125    |    1.10     .8802
    0.15     .1680    |    1.15     .8961
    0.20     .2227    |    1.20     .9103
    0.25    0.2763    |    1.25    0.9229
    0.30     .3286    |    1.30     .9340
    0.35     .3794    |    1.35     .9438
    0.40     .4284    |    1.40     .9523
    0.45     .4755    |    1.45     .9597
    0.50    0.5205    |    1.50    0.9661
    0.55     .5633    |    1.60     .9763
    0.60     .6039    |    1.70     .9838
    0.65     .6420    |    1.80     .9891
    0.70     .6778    |    1.90     .9928
    0.75    0.7112    |    2.00    0.9953
    0.80     .7421    |    2.10     .9970
    0.85     .7707    |    2.20     .9981
    0.90     .7969    |    2.30     .9989
    0.95     .8209    |    2.40     .9993
    1.00    0.8427    |    2.50    0.9996



TABLE II

Values of p = (2h/√π) ∫_0^Δ e^(−h²Δ²) dΔ in terms of Δ/ε

Probability of the occurrence of an error less than Δ.
ε = √([vv]/(n − 1)), that is, the mean square error of a single observation.
(Art. 187, p. 215.)

     Δ/ε       p      |     Δ/ε       p
     0.0    0.0000    |     2.0    0.9545
     0.1     .0797    |     2.1     .9643
     0.2     .1585    |     2.2     .9722
     0.3     .2358    |     2.3     .9785
     0.4     .3108    |     2.4     .9836
     0.5    0.3829    |     2.5    0.9876
     0.6     .4515    |     2.6     .9907
     0.7     .5161    |     2.7     .9931
     0.8     .5763    |     2.8     .9949
     0.9     .6319    |     2.9     .9963
     1.0    0.6827    |     3.0    0.9973
     1.1     .7287    |     3.1     .9981
     1.2     .7699    |     3.2     .9986
     1.3     .8064    |     3.3     .9990
     1.4     .8385    |     3.4     .9993
     1.5    0.8664    |     3.5    0.9995
     1.6     .8904    |     3.6     .9997
     1.7     .9109    |     3.7     .9998
     1.8     .9281    |     3.8     .9999
     1.9     .9426    |     3.9     .9999
     2.0    0.9545    |     4.0    0.9999
                      |     4.1    1.0000



TABLE III

Values of p = (2h/√π) ∫_0^Δ e^(−h²Δ²) dΔ in terms of Δ/r

Probability of the occurrence of an error less than Δ.
r = 0.6745 √([vv]/(n − 1)) = the probable error of a single observation.
(Art. 187, p. 215.)

     Δ/r       p      |     Δ/r       p
     0.0    0.0000    |     2.5    0.9082
     0.1     .0538    |     2.6     .9205
     0.2     .1073    |     2.7     .9314
     0.3     .1603    |     2.8     .9410
     0.4     .2127    |     2.9     .9495
     0.5    0.2641    |     3.0    0.9570
     0.6     .3143    |     3.1     .9635
     0.7     .3632    |     3.2     .9691
     0.8     .4105    |     3.3     .9740
     0.9     .4562    |     3.4     .9782
     1.0    0.5000    |     3.5    0.9818
     1.1     .5419    |     3.6     .9848
     1.2     .5817    |     3.7     .9874
     1.3     .6194    |     3.8     .9896
     1.4     .6550    |     3.9     .9915
     1.5    0.6883    |     4.0    0.9930
     1.6     .7195    |     4.1     .9943
     1.7     .7485    |     4.2     .9954
     1.8     .7753    |     4.3     .9963
     1.9     .8000    |     4.4     .9970
     2.0    0.8227    |     4.5    0.9976
     2.1     .8433    |     4.6     .9981
     2.2     .8622    |     4.7     .9985
     2.3     .8792    |     4.8     .9988
     2.4     .8945    |     4.9     .9991
     2.5    0.9082    |     5.0    0.9993


TABLE IV

Factors for Computing Probable Errors from Bessel's Formulas.        (Arts. 140 and 141)

      n    0.6745/√(n−1)   0.6745/√(n(n−1))  |    n    0.6745/√(n−1)   0.6745/√(n(n−1))
      2       0.6745           0.4769        |   31        .1231            .0221
      3        .4769            .2754        |   32        .1211            .0214
      4        .3894            .1947        |   33        .1192            .0208
      5       0.3372           0.1508        |   34        .1174            .0201
      6        .3016            .1231        |   35       0.1157           0.0196
      7        .2754            .1041        |   36        .1140            .0190
      8        .2549            .0901        |   37        .1124            .0185
      9        .2385            .0795        |   38        .1109            .0180
     10       0.2248           0.0711        |   39        .1094            .0175
     11        .2133            .0643        |   40       0.1080           0.0171
     12        .2029            .0587        |   41        .1066            .0167
     13        .1947            .0540        |   42        .1053            .0163
     14        .1871            .0500        |   43        .1041            .0159
     15       0.1803           0.0465        |   44        .1029            .0155
     16        .1742            .0435        |   45       0.1017           0.0152
     17        .1686            .0409        |   46        .1005            .0148
     18        .1636            .0386        |   47        .0994            .0145
     19        .1590            .0365        |   48        .0984            .0142
     20       0.1547           0.0346        |   49        .0974            .0139
     21        .1508            .0329        |   50       0.0964           0.0136
     22        .1472            .0314        |   51        .0954            .0134
     23        .1438            .0300        |   52        .0944            .0131
     24        .1406            .0287        |   53        .0935            .0128
     25       0.1377           0.0275        |   54        .0926            .0126
     26        .1349            .0265        |   55       0.0918           0.0124
     27        .1323            .0255        |   56        .0909            .0122
     28        .1298            .0245        |   57        .0901            .0119
     29        .1275            .0237        |   58        .0893            .0117
     30       0.1252           0.0229        |   59        .0886            .0115
                                             |   60       0.0878           0.0113
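
The factors of this table follow at once from Bessel's formulas; a short sketch in modern Python reproduces a few rows.

    import math

    def bessel_factors(n):
        """Factors 0.6745/sqrt(n-1) and 0.6745/sqrt(n*(n-1)) used with sqrt([vv])
        to give the probable error of a single observation and of the mean."""
        return 0.6745 / math.sqrt(n - 1), 0.6745 / math.sqrt(n * (n - 1))

    for n in (2, 10, 30, 60):
        f1, f2 = bessel_factors(n)
        print(n, round(f1, 4), round(f2, 4))
    # 2 -> 0.6745, 0.4769;  10 -> 0.2248, 0.0711;  30 -> 0.1252, 0.0229;  60 -> 0.0878, 0.0113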

INDEX 


(Numbers  refer  to  pages) 


Abridged  method  of  solving  normal 

equations,  41,  47 
Abuse  of  least  squares,  197 
Accidental  errors,  7,  9,  10 
Accuracy  and  precision,  151,  194 
Adjustment, angles (see triangulation)
arbitrary,  196 
base  lines,  130 
by  parts,  54,  97,  130 
levels,  55,  64,  74,  174 
necessity  for,  3 

trigonometric  leveling,  81,  130 
triangulation,  80,  127 

between  base  lines  or  points  of 

control,  119,  127 
figure,  81,  96,  107,  108 
local,  56,  71,  76 

quadrilateral,  method  of  angles, 
99 
method  of  directions.  111 
approximate  method,  118 
Adjustments  not  infallible,  78,  198 
Aids  in  computation,  30,  51 
Angle  equations,  84,  86 
Angle measurement with theodolite, example, 186
Approximate method of adjusting quadrilateral, 118
Approximate values of unknowns, use of, 19, 34
Arbitrary adjustments, 196
Arithmetic mean, 9 (see Mean)
Average error, 152, 157, 158, 163, 166
Axioms or assumptions, 11, 209
Azimuth equation, 128


B 

Base  lines,  adjustment  of,  130 
Bessel's  formulas  for  mean  square  and 

probable    errors,    162,    164, 

166,  169 
Best  values  of  the  unknowns,  15 
Bibliography  of  least  squares,  202 
Blunders,  7 


Central-point  figure,  83 
side  equation  for,  92 
central  point  unoccupied,  123 
Coefficients,  equalization  of,  35 
Comparison  of  indices  of  precision, 

158 
Comparison     of     observations     and 

theory,  11 
Compound  propagation  of  error,  181 
Computation  tables  and  machines,  30 
Computed  quantities,  combination  of, 
187 
precision  of,  177 
weighted  mean  of,  189 
Conditioned  observations,  53 

adjustment by method of independent unknowns, 75
precision  of,  172 

special  case  of  one  condition  only, 
73 
Conditions,  53 

angle, 81, 84, 97, 108
arrangement of, 70
equations, 57, 63
independent, 55, 65, 85, 97
latitude,    longitude,   and  azimuth, 
128 




Conditions,  length,  121,  128 

local,  56,  81,  97 

number  of,  55,  57,  65,  82,  86 

side,  81,  97,  108 
Constant  errors,  5,  193 
Control  or  check,  arithmetic  mean,  20 

weighted  mean,  24 

correlates,  62 

formation  of  normal  equations,  29, 
62 

solution   of  normal  equations,   46, 
62 

final  values  of  the  unknowns,  50,  63 
Corrections,  3 

computation  of,  64 

used  instead  of  errors,  3 
Correctness  unattainable,  2 
Correlates,  61 

method  of,  59,  71,  106 
Course  of  instruction,  217 
Criteria  for  rejection  of  observations, 

192 
Curve  of  error,  10 
Curves  of  empirical  formulas,  219 

D 

Dependent  quantities,  53 
Derived  equations,  43,  46 
Direct  observations  of  one  unknown, 
17 

adjustment  of,  18 

precision  of,  160 
Directions,  105 

list  of,  107 

method  of,  97,  107 
Discordant  observations,  196 
Discrepancies  among  observations,  1, 
8 

indicate  errors,  2,  8 
Doolittle,  method  of  elimination.  40 


Elimination,  methods  of ,  40 

Empirical formulas, 131
straight lines, 133, 141, 221
parabolas, 133, 143, 221-225
hyperbolas, 141, 221, 223, 226
periodic functions, 134, 148, 228


Empirical  formulas,  non-linear  forms, 
135 

exponential  functions,  135,  145,  227 

logarithmic  functions,  227 

reduction  to  linear  form,  137 

test  of,  139 

use  of,  140 
Equations,  angle,  84,  86 

azimuth,  latitude,  and  longitude,  128 

base  line,  128 

condition,  57 

derived,  43 

length,  121,  128 

normal,  28 

observation,  27 

residual,  27 

side,  87 

mechanical  statement  of,  93 

simultaneous,  general,  38 
Error,    average,    152,    157,    158,   162, 
166 

curve  of,  10 

law  of,  12,  209 
tables,  229-231 

mean   square,   152,   153,   158,   161, 
164,  166 

probable,  152,  155.  162,  164,  166 

propagation of, 177
Errors,  accidental,  7 

constant,  5,  193 

instrumental,  6 

occurrence,  4,  10,  11 

personal,  6 

systematic,  5,  193,  195 

theoretical,  5 
Excess, spherical, 53, 85
Exponential functions, 135, 145

F 

Factors, Bessel's, for probable errors, 232
correlates, 61
Figure adjustment, 82, 96
between fixed points of control, 127
method of angles, 82, 96
method  of  directions,  107,  111 
to    conform   to   fixed   or   adjusted 
work,  119 


INDEX 


235 


Figures,  classification  of,  83 
Fixed,  or  controlling,  data,  119,  127 
Formulas,  Bessel's,  162,  164,  166,  169 

empirical,  131 

Lüroth's, 169
Peters', 162, 164, 167, 169
Function,  general,  26 

linear,  26,  137 

observations  of  a,  26 

of  the  unknowns,  26 

reduction  to  linear  form,  26,  89,  137 

G 

Gauss,  method  of  elimination,  40 
Geometric  mean,  9 

H 

History  of  least  squares,  201 
Hyperbolas,  141,  221,  223,  226 

I 

Independent  conditions,  55,  65,  85,  97 
Independent  observations,  54 
Independent  unknowns,   method  of, 

75,  106 
Index  of  precision,  152 
Indirect  observations,  26 

method  of,  75,  106 
Instruction,  short  course  of,  217 
Instrumental  errors,  6 
Interpolation  formulas,   131 

L

Latitude, longitude, and azimuth equations, 128
Law of error, 12, 209
expansion in series, 215
others than that of least squares, 198
tables of, 13, 162, 215, 229-231
test of, 12
Law of propagation of error, 177
Least squares, 13
axioms or assumptions of, 11, 209
classification of problems, 15
principle of, 14
two uses of, 15
use and abuse of, 197
Length equation, 121, 128
Levels, adjustment of, 55, 64, 74
precision of, 174
Limit of accuracy and precision, 163, 194
Linear function, reduction to, 26, 89, 137
Literature of least squares, 202
Local adjustment, 56
method of correlates, 71
method of independent unknowns, 76
Logarithmic curves, 227
Logarithmic plotting, 146
Lüroth's formulas for mean square and probable errors, 169

M 

Machines  for  computation,  30,  51 
Mean,  18 

arithmetic,  9,  19 
control,  20 

assumed  as  best  value,  9,  18 

geometric,  9 

weighted,  22 
control,  24 
Mean  square  error,  152,  153 

compared  with  probable  error,  158 

of  a  single  observation,  161 

of  arithmetic  mean,  164,  181 

of  weighted  mean,  166 

of  a  function,  177 
Mechanical aids in computation, 30, 31
Mechanical statement of side equations, 93
Methods of observing, 5, 193
Micron, 8
Mistakes, 7, 12

N

Non-linear functions and curves, 135
Normal equations, 28

formation of, 28, 30, 32
control,  29 

number of, 28

redundant terms, 44

solution of, 40




Normal equations, solution of, abridged method, 41, 47
control,  46 
Number  of  angle  equations,  86 
conditions,  55 
local  conditions,  82 
observations,  14,  163,  193 
occurrences, determined from probability, 208
side  equations,  95 

O 

Observation  equations,  27 
Observations,  conditioned,  53 

direct,  17 

discordant,  196 

indirect,  26 

number  of,  14,  163,  193 

precision  of,  151 

superfluous,  55 

weighted,  20 

weighting  of,  21 
Observing,  methods  of,  5,  193 
Occurrence  of  errors,  4,  10 
One  condition  only,  73 
Outline  of  methods  of  adjustment,  199 


P

Parabolas, 133, 143, 221-225
Partial  adjustments,  54,  97,  130 
Periodic  curves,  134,  148,  228 
Personal  errors,  6 

Peters' formulas for mean square and probable errors, 162, 164, 167, 169
Pole of the side equation, 93
Polygon fixed, with central point unoccupied, 123
Precision,  151 

and  accuracy,  151,  194 
increased by additional observations, 3, 193
index  of,  152 

of  direct  observations,  160 
of  a  single  observation,  161 
of  arithmetic  mean,  163,  181 
of  weighted  mean,  165 
of  indirect  observations,  167 


Precision of an observation of weight unity, 169

of  conditioned  observations,  172 

of  a  difference  of  elevation,  174 

of  a  function,  177 

of  computed  quantities,  177,  189 
Principle  of  least  squares,  14,  23 
Probability,  principles  of,  205 

simple, 205

compound,  206 
Probable  error,  152,  155 

approximate  value  of,  162 

compared  with  mean  square  error, 
158 

of  a  single  observation,  162 

of  arithmetic  mean,  164 

of  weighted  mean,  166 

table of Bessel's factors, 232
Problems,  classification  of,  15 

list  of,  for  a  short  course,   218 
Propagation  of  error,  177 

simple,  178 

compound,  181 


Q

Quadrilateral, 83

adjustment,  method  of  angles,  99 
method  of  directions,  111 
approximate  method,  118 
defined,  83 

one  triangle  fixed,  122 
side  equation  for,  88,  91 
two sides and included angle fixed, 121

R 

Readings,  1,  17 

combination  of,  17 
Reduced  condition  equations,  58 
Reduction  to  linear  form,  26,  89,  137 
Redundant terms in normal equations, 44
Refinement  of  computations,  50 
Rejection  of  observations,  191 
Relation between mean square, probable, and average errors, 158
Residual  equations,  27 




Residuals,  10,  195 
from  the  mean,  20 
not  the  same  as  errors,  10 
sum of squares is a minimum, 14, 23

S 
Side  equations,  81,  87 
formation  of,  88 
mechanical  formation  of,  93 
number  of,  95 
reduction  to  linear  form,  89 
Simple  propagation  of  error,  178 
Simultaneous equations, solution by means of normal equations, 38
Single  observation,  precision  of,  161 
Spherical  excess,  53,  85 
Straight  lines,  133,  141,  221 
Systematic  errors,  5,  193,  195 

T 

Tables,  Bessel's  factors,  232 
probability  of  error,  229-231 
for  computation,  30 
Tape comparison, example, 183
Tape  measurements,  examples,  184 
Test  of  empirical  formulas,  139 


Test  of  the  law  of  error,  12 
Test-piece,  example,  182 
Time  by  star  transits,  example,  36 
Triangle  errors,  example,  185 
Triangles,  computation  of,  118 
Triangulation, adjustment of, 80 (see Adjustment)
Trigonometric leveling, 81
True  errors,  3 

U 

Unknowns, approximate values of, 19, 34
final  check  of,  50 
Use  and  abuse  of  least  squares,  197 
Use  of  empirical  formulas,  140 

W 

Weighted  mean,  22 

of  computed  quantities,  189 

of  two  quantities,  24 
Weighted  observations,  20 
Weights,  20 

basis  for,  21,  54,  187 

determination  of,  21 

from mean square or probable errors, 187

of  the  unknowns,  168 

