
A TREATISE
ON THE
NATURE AND PROPERTIES
OF
ALGEBRAIC EQUATIONS,

BY R. STEVENSON, B.A.
TRINITY COLLEGE, CAMBRIDGE.

SECOND EDITION.

CAMBRIDGE:
PRINTED BY W. METCALFE, ST. MARY'S STREET,
FOR J. & J. J. DEIGHTON;
AND WHITTAKER & CO., AVE-MARIA-LANE, LONDON.
1835.




CONTENTS.


CHAP. I.

On the Structure of Equations.


CHAP. II.

On the Transformation of Equations; or the Determination of Equations by means of the Relations existing between their Roots and the Roots of given Equations.


CHAP. III.

On the Theory of the Limits of the Roots, as far as it was known previous to Fourier.


CHAP. IV.

On the Separation of the Roots by the Method of Fourier.


CHAP. V.

On the Method of Divisors.


CHAP. VI.

On the Method of Newton for obtaining approximately the Real Roots of any Equation, so far as it had been developed previous to its completion by Fourier.


CHAP. VII.

On the Completion of Newton's Method of Approximation by Fourier.


CHAP. VIII.

On the Method of Approximation given by Lagrange, as simplified by the Theorems of Fourier.


CHAP. IX.

On the Indirect Rules for the Solution of Equations of Low Degrees, which have been accidentally discovered: with the true Theory connecting these Methods, namely, the Application of the Method of Symmetrical Functions of the Roots to the Solution of the Equation itself: and, lastly, the Reason why this Method cannot be extended beyond the Fourth Degree.

A TREATISE

ON THE

THEORY OF EQUATIONS.


CHAP.  I. 


ON    THE    STRUCTURE    OF    EQUATIONS. 

1 .  By  the  term  Equation  is  meant,  in  general,  the  algebraic 
expression  of  the  equality  existing  between  two  quantities, 
without  reference  to  the  form  in  which  that  equality  is 
presented,  or  to  the  distinction  between  the  known  and 
unknown  quantities  which  are  involved  in  it.  But  in  the 
following  pages  the  general  term  will  be  restricted  to  that 
class  of  equations  only,  which  contains  but  one  unknown 
quantity.  This  is  to  be  understood  in  all  cases,  unless  the 
contrary  be  stated.  The  indices  of  the  unknown  quantity 
will  in  all  cases  be  supposed  positive  integers :  and  the 
coefficients  will,  in  general,  be  supposed  real ;  that  is,  either 
numbers  or  symbols  which  do  not  involve  the  imaginary  sign. 
If  the  coefficients  are  imaginary  in  particular  cases,  they  will 
be  expressed  in  that  form,  or  it  will  be  stated  that  such  a  form 
is  to  be  understood. 


2. The general type of such equations, according to the usual arrangement of their terms, will be

$$x^n + p_1 x^{n-1} + p_2 x^{n-2} + \cdots + p_n = 0.$$

The degree, or dimensions, of the equation will be determined by the index of the highest power of the unknown quantity. Thus the preceding equation is of the nth degree.

3. As it will frequently be necessary to speak of the quantity, or expression,

$$x^n + p_1 x^{n-1} + p_2 x^{n-2} + \cdots + p_n,$$

without reference to its equality to zero, but merely considering it as a function of the symbol x, it will be convenient to designate it as the polynomial composing the equation. For the sake of brevity in our notation, as well as for the convenience of exhibiting the particular values of this polynomial, it will be denoted by $f(x)$. And similar advantages will attend the adoption of Lagrange's notation for its differential coefficients taken with respect to x successively; which will accordingly be represented by

$$f'(x),\ f''(x),\ f'''(x),\ \ldots,\ f^{(n)}(x).$$

So that we can immediately express the result of substituting for x any known value a in this series of polynomials, by writing the series

$$f(a),\ f'(a),\ f''(a),\ \ldots,\ f^{(n)}(a).$$


4. Now it is evident, that by giving different values to x, the value of $f(x)$ will vary; and it may so happen that when x receives the particular value a, the particular value of $f(x)$ shall be zero; or that we have identically $f(a) = 0$.

In this case, the particular value a is said to satisfy, or to be a root of, the equation

$$f(x) = 0.$$

And it is to the properties of these roots, as connected with the formation of the polynomial $f(x)$, that our attention is now to be directed.

5. The first object of inquiry which here presents itself is, whether under all forms of the polynomial $f(x)$ we shall have a right to conclude that there exists any such particular value, or root, expressible in a known form, possible or impossible? For we are immediately obliged to relinquish the hope of finding in all cases a real root of the equation, from our experience in the case of quadratic equations; as, for instance,

$$x^2 + 1 = 0,$$

$$x = \pm\sqrt{-1}.$$

The question then is, does there in every case exist a root which is expressible by algebraic symbols? The following demonstration by M. Cauchy of the existence of such a root appears to be free from objection.

6. There will always exist some real values of $\rho$ and $\theta$ which shall render

$$\rho(\cos\theta + \sqrt{-1}\,\sin\theta)$$

a root of the equation $f(x) = 0$.

For the sake of brevity, denote by a the quantity $\rho(\cos\theta + \sqrt{-1}\,\sin\theta)$; and let a receive an increment h of the same nature, namely

$$h = \sigma(\cos\varphi + \sqrt{-1}\,\sin\varphi).$$

Then, expanding by Taylor's theorem, after writing $a + h$ for x, we shall find

$$f(a + h) = f(a) + h\,f'(a) + \cdots,$$

observing that the theorem cannot fail for a polynomial of this description. Here $f(a)$ is supposed not zero, or a would be a root; and $f'(a)$ is also supposed not zero, for if it be zero, then we may take the first term of the series which is not zero, and the reasoning will still be the same.

Now, for the convenience of the calculation, let the quantities involved in the above series be put under the same form as a and h, which can always be effected; so that we may assume

$$f(a) = A(\cos\alpha + \sqrt{-1}\,\sin\alpha),$$

$$f'(a) = A'(\cos\alpha' + \sqrt{-1}\,\sin\alpha'),$$

$$f(a + h) = R(\cos\omega + \sqrt{-1}\,\sin\omega).$$

Then, after the substitution, and reduction by the aid of Demoivre's theorem, we shall have

$$R(\cos\omega + \sqrt{-1}\,\sin\omega) = A(\cos\alpha + \sqrt{-1}\,\sin\alpha) + A'\sigma\{\cos(\alpha' + \varphi) + \sqrt{-1}\,\sin(\alpha' + \varphi)\} + \cdots,$$

from which, by the separate equality of the possible and impossible parts, we obtain the two equations

$$R\cos\omega = A\cos\alpha + A'\sigma\cos(\alpha' + \varphi) + \cdots,$$

$$R\sin\omega = A\sin\alpha + A'\sigma\sin(\alpha' + \varphi) + \cdots;$$

and, by adding the squares of these equations,

$$R^2 = A^2 + 2AA'\sigma\cos(\alpha' + \varphi - \alpha) + A'^2\sigma^2 + \cdots.$$


Now the sign of $R^2 - A^2$ can be made to depend on its first term by diminishing $\sigma$. Hence we can in every case render $R^2 < A^2$, because $\varphi$ is an arbitrary angle.

But on the other hand, since there must be some minimum amongst the quantities $A^2$, $R^2$, and all other similar quantities, we may suppose that $\rho$, $\theta$ have been so chosen as to give $A^2$ this minimum value. Hence, after this choice of $\rho$ and $\theta$, we cannot make $R^2 < A^2$.

It follows then, that this minimum value must be zero; and that $\rho$, $\theta$ are then so chosen that

$$\rho(\cos\theta + \sqrt{-1}\,\sin\theta)$$

is a root of the equation. For if A be not zero, we can in all cases make $R^2 < A^2$.

We may remark that the above proof applies when the coefficients are impossible quantities.

Hence, in every case, we have a right to conclude that there exists a value of x of the form

$$\rho(\cos\theta + \sqrt{-1}\,\sin\theta),$$

which shall render $f(x) = 0$; or, in other words, that every equation has a root expressible by means of algebraical symbols.
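
To illustrate this conclusion numerically, here is a short Python sketch (an addition of mine, not part of the 1835 text). It finds all the roots of a sample polynomial used later in the treatise and verifies that each one can be written in the modulus-and-angle form $\rho(\cos\theta + \sqrt{-1}\sin\theta)$ employed above.

```python
import numpy as np

# Coefficients of x^4 - 27x^2 + 14x + 120, highest power first
# (a sample equation that appears later in the treatise).
coeffs = [1, 0, -27, 14, 120]

for r in np.roots(coeffs):
    r = complex(r)                          # plain Python complex for printing
    rho, theta = abs(r), np.angle(r)        # modulus and angle of the root
    rebuilt = rho * (np.cos(theta) + 1j * np.sin(theta))
    residual = np.polyval(coeffs, rebuilt)  # f evaluated at the rebuilt root
    print(f"root {r:.4f}  rho={rho:.4f}  theta={theta:.4f}  |f|={abs(residual):.2e}")
```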


7. The investigation of the roots of the equation $f(x) = 0$ is identical with the decomposition of the polynomial $f(x)$ into its factors. For, whenever any root a of the equation is found, the corresponding factor $x - a$ of the polynomial is determined: and vice versa. The proof of this is at once evident, if we observe that Taylor's series can never fail for such a polynomial.

Now we have

$$f(x) = f(a + (x - a))$$

$$= f(a) + (x - a)f'(a) + \cdots + (x - a)^n.$$

Hence, if $f(a) = 0$, that is, if a is a root, $x - a$ is a factor. And conversely, if $x - a$ is a factor, we must have $f(a) = 0$, and a will be a root.

8. Impossible roots enter equations by pairs, and each pair corresponds to a real quadratic factor of the polynomial.

For if one root of the equation be

$$\rho(\cos\theta + \sqrt{-1}\,\sin\theta),$$

then, by substitution and reduction,

$$\rho^n(\cos n\theta + \sqrt{-1}\,\sin n\theta) + p_1\rho^{n-1}\{\cos(n-1)\theta + \sqrt{-1}\,\sin(n-1)\theta\} + p_2\rho^{n-2}\{\cos(n-2)\theta + \sqrt{-1}\,\sin(n-2)\theta\} + \cdots = 0;$$

and since the equality can be separated, we must also have

$$\rho^n(\cos n\theta - \sqrt{-1}\,\sin n\theta) + p_1\rho^{n-1}\{\cos(n-1)\theta - \sqrt{-1}\,\sin(n-1)\theta\} + p_2\rho^{n-2}\{\cos(n-2)\theta - \sqrt{-1}\,\sin(n-2)\theta\} + \cdots = 0;$$

from which we conclude that

$$\rho(\cos\theta - \sqrt{-1}\,\sin\theta)$$

is also a root of the equation.

Now the two corresponding factors of the polynomial will be

$$x - \rho\cos\theta - \sqrt{-1}\,\rho\sin\theta,$$

$$x - \rho\cos\theta + \sqrt{-1}\,\rho\sin\theta;$$

and their product gives the quadratic factor

$$(x - \rho\cos\theta)^2 + \rho^2\sin^2\theta,$$

or

$$x^2 - 2x\rho\cos\theta + \rho^2.$$
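
The pairing of impossible roots can be checked in a few lines of NumPy; the following sketch is an added illustration only, using an arbitrary modulus and angle of my own choosing.

```python
import numpy as np

# A conjugate pair rho*(cos t +/- sqrt(-1) sin t) multiplies out to the
# real quadratic x^2 - 2*rho*cos(t)*x + rho^2.
rho, t = 2.0, 0.7
r1 = rho * (np.cos(t) + 1j * np.sin(t))
r2 = np.conj(r1)

quad = np.poly([r1, r2])                      # coefficients of (x - r1)(x - r2)
expected = [1.0, -2 * rho * np.cos(t), rho**2]
print(np.real_if_close(quad))                 # ~ [1, -2*rho*cos(t), rho^2]
print(expected)
```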


9. Every equation has as many roots as it has dimensions, and no more.

For it has been shown that $f(x) = 0$ has always one root, and therefore $f(x)$ has always some factor of the form $x - a_1$, which exactly divides it. Now the quotient $\dfrac{f(x)}{x - a_1}$ is a quantity perfectly similar to $f(x)$ in form, but of one dimension lower. It must then have a factor $x - a_2$, and the quotient $\dfrac{f(x)}{(x - a_1)(x - a_2)}$ will be a polynomial of $n - 2$ dimensions. Proceeding in this manner, we shall at length find a quotient of no dimensions, which will be unity; thus

$$\frac{f(x)}{(x - a_1)(x - a_2)\cdots(x - a_n)} = 1;$$

that is,

$$f(x) = (x - a_1)(x - a_2)\cdots(x - a_n).$$

It is evident that the quantities $a_1, a_2, \ldots, a_n$ are roots of the equation $f(x) = 0$; and that no other quantity can be a root of it.

For any other quantity Q, when substituted for x, will give the result

$$f(Q) = (Q - a_1)(Q - a_2)\cdots(Q - a_n),$$

which is not zero; and consequently Q cannot be a root.

10. Connexion of the coefficients of an equation with its roots.

Let the roots be denoted by $a, b, c, \ldots, l$; and let the equation be of n dimensions with the usual coefficients. Then, by the decomposition of the polynomial, which has been effected,

$$x^n + p_1 x^{n-1} + p_2 x^{n-2} + \cdots + p_n$$

$$= (x - a)(x - b)(x - c)\cdots(x - l)$$

$$= x^n - x^{n-1}(a + b + c + d + \cdots)$$

$$\quad + x^{n-2}(ab + ac + bc + \cdots)$$

$$\quad - x^{n-3}(abc + abd + \cdots)$$

$$\quad \cdots + (-1)^n\,abcd\cdots l$$

$$= x^n - x^{n-1}\,\Sigma(a) + x^{n-2}\,\Sigma(ab) - \cdots + (-1)^n\,abc\cdots l,$$

denoting by $\Sigma$ the sum of all quantities similar to the one to which it is prefixed.

Hence, generally, we shall have

$$(-1)^r p_r = \text{sum of the products of every } r \text{ roots}.$$
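
As a numerical illustration (an addition, not the author's), this relation can be verified for a quartic whose roots are given later in the treatise.

```python
import numpy as np
from itertools import combinations

# Check (-1)^r * p_r = sum of the products of every r roots,
# for x^4 - 27x^2 + 14x + 120 = 0 with roots 4, 3, -2, -5.
roots = [4, 3, -2, -5]
coeffs = np.poly(roots)                      # [1, p1, p2, ..., pn]

n = len(roots)
for r in range(1, n + 1):
    elementary = sum(np.prod(c) for c in combinations(roots, r))
    p_r = coeffs[r]
    print(r, (-1) ** r * p_r, elementary)    # the two columns agree
```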

11. By means of the relations just established between the coefficients and the roots of an equation, we can determine the values of any symmetrical functions of the roots, without knowing the roots themselves; as in the following examples:

Ex. 1. To find the sum of the squares of the roots of an equation.

It is manifest, that in multiplying $\Sigma(a)$ by itself, terms of two kinds will be produced; namely, $a^2$ and $ab$; and while the term $a^2$ can be formed in one way only, the term $ab$ can be formed by the a of either factor and the b of the other, and will consequently appear twice. Wherefore

$$\Sigma(a)\cdot\Sigma(a) = \Sigma(a^2) + 2\,\Sigma(ab),$$

or

$$p_1^2 = \Sigma(a^2) + 2p_2,$$

$$\Sigma(a^2) = p_1^2 - 2p_2.$$


Ex. 2. To estimate $\Sigma(a^2bc)$.

Commence with forming the product

$$\Sigma(a)\cdot\Sigma(abc);$$

and observe that its terms are of two kinds, $a^2bc$ and $abcd$; also that $a^2bc$ can enter but once, whilst $abcd$ can be produced in four different ways.

Hence

$$\Sigma(a)\cdot\Sigma(abc) = \Sigma(a^2bc) + 4\,\Sigma(abcd),$$

or

$$p_1 p_3 = \Sigma(a^2bc) + 4p_4,$$

$$\Sigma(a^2bc) = p_1 p_3 - 4p_4.$$

Ex. 3. To find the value of $\Sigma(a^2b^2)$.

Adopting the same method as above,

$$\Sigma(ab)\cdot\Sigma(ab) = \Sigma(a^2b^2) + 2\,\Sigma(a^2bc) + 6\,\Sigma(abcd),$$

or

$$p_2^2 = \Sigma(a^2b^2) + 2(p_1p_3 - 4p_4) + 6p_4,$$

$$\Sigma(a^2b^2) = p_2^2 - 2p_1p_3 + 2p_4.$$


Ex. 4. To find the value of $\Sigma\!\left(\dfrac{1}{a}\right)$.

Here

$$\Sigma\!\left(\frac{1}{a}\right) = \frac{\text{sum of the products of every } n - 1 \text{ roots}}{abc\cdots l} = -\frac{p_{n-1}}{p_n}.$$
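
These symmetric-function identities are easy to confirm symbolically. The following SymPy sketch is an added check under the definitions of Art. 10, written for a quartic with roots a, b, c, d.

```python
from functools import reduce
from itertools import combinations
from operator import mul

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
roots = [a, b, c, d]

def sum_of_products(r):
    """Sum of the products of every r roots."""
    return sum(reduce(mul, combo) for combo in combinations(roots, r))

# p_r = (-1)^r * (sum of products of every r roots), as in Art. 10.
p1, p2, p3, p4 = [(-1) ** r * sum_of_products(r) for r in range(1, 5)]

sigma_a2 = sum(r ** 2 for r in roots)                                 # Sigma(a^2)
sigma_a2b2 = sum(u ** 2 * v ** 2 for u, v in combinations(roots, 2))  # Sigma(a^2 b^2)

print(sp.expand(sigma_a2 - (p1 ** 2 - 2 * p2)))                  # 0, Ex. 1
print(sp.expand(sigma_a2b2 - (p2 ** 2 - 2 * p1 * p3 + 2 * p4)))  # 0, Ex. 3
```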


12. The same process would of course conduct us to the expressions for the sums of the powers of the roots. But these can be found in a more direct manner by the following methods:

Since we have the identity

$$f(x) = (x - a)(x - b)(x - c)\cdots(x - l);$$

$$\therefore\ \frac{f(x)}{x^n} = \left(1 - \frac{a}{x}\right)\left(1 - \frac{b}{x}\right)\cdots\left(1 - \frac{l}{x}\right);$$

and by taking the Naperian logarithms,

$$\log\frac{f(x)}{x^n} = -\frac{\Sigma(a)}{x} - \frac{1}{2}\,\frac{\Sigma(a^2)}{x^2} - \frac{1}{3}\,\frac{\Sigma(a^3)}{x^3} - \cdots.$$

But since we have

$$\frac{f(x)}{x^n} = 1 + \frac{p_1}{x} + \frac{p_2}{x^2} + \cdots + \frac{p_n}{x^n},$$

the first side of the above equation is capable of expansion in negative powers of x. After expanding, the equation of coefficients will give the sums of the powers of the roots.

A similar process will apply for finding the sums of the negative powers of the roots. For, as above,

$$f(x) = (x - a)(x - b)(x - c)\cdots(x - l);$$

and therefore, observing that $p_n = (-1)^n\,abc\cdots l$,

$$\frac{f(x)}{p_n} = \left(1 - \frac{x}{a}\right)\left(1 - \frac{x}{b}\right)\cdots\left(1 - \frac{x}{l}\right),$$

$$\log\frac{f(x)}{p_n} = -\Sigma\!\left(\frac{1}{a}\right)x - \frac{1}{2}\,\Sigma\!\left(\frac{1}{a^2}\right)x^2 - \cdots.$$

But since we have

$$\frac{f(x)}{p_n} = 1 + \frac{p_{n-1}x + p_{n-2}x^2 + \cdots + x^n}{p_n},$$

by expansion and equation of coefficients, we have the sums of the negative powers of the roots.


13. The above method is, however, only applicable with ease to the cases where n is not large, or at least where the number of terms of the equation is not large. In other cases it is best to calculate the sums of powers from each other in succession.

Since

$$f(x) = (x - a)(x - b)(x - c)\cdots(x - l),$$

by taking Naperian logarithms, and differentiating the equation, we shall have

$$\frac{f'(x)}{f(x)} = \frac{1}{x - a} + \frac{1}{x - b} + \cdots + \frac{1}{x - l}$$

$$= \left(\frac{1}{x} + \frac{a}{x^2} + \frac{a^2}{x^3} + \cdots\right) + \left(\frac{1}{x} + \frac{b}{x^2} + \frac{b^2}{x^3} + \cdots\right) + \cdots$$

$$= \frac{n}{x} + \frac{\Sigma(a)}{x^2} + \frac{\Sigma(a^2)}{x^3} + \cdots.$$

But

$$f(x) = x^n + p_1 x^{n-1} + \cdots + p_n,$$

$$f'(x) = n x^{n-1} + (n - 1)p_1 x^{n-2} + \cdots + p_{n-1};$$

and after the substitution of these values, the above equation will contain only negative powers, and we can compare the coefficients of like powers of x.

The comparison of the coefficients of $x^{n-r-1}$ will give the equation

$$(n - r)p_r = \Sigma(a^r) + p_1\,\Sigma(a^{r-1}) + \cdots + p_{r-1}\,\Sigma(a) + n p_r,$$

or,

$$0 = \Sigma(a^r) + p_1\,\Sigma(a^{r-1}) + \cdots + p_{r-1}\,\Sigma(a) + r p_r.$$


By making $r = 1, 2, 3, \ldots$ successively, we can find $\Sigma(a),\ \Sigma(a^2),\ \Sigma(a^3),\ \ldots$ in order.

We may remark that the formula still holds when r is greater than n, for in that case all the coefficients after $p_n$ are to be considered as zeros.
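
The recurrence just obtained (Newton's identities, in modern terms) computes the power sums directly from the coefficients. The Python sketch below is an added restatement of that rule, not the author's notation, and it checks the results against the roots of a sample equation.

```python
import numpy as np

def power_sums(p, k_max):
    """Sums of powers of the roots of x^n + p[0]*x^(n-1) + ... + p[n-1] = 0,
    from 0 = Sigma(a^r) + p1*Sigma(a^(r-1)) + ... + p_(r-1)*Sigma(a) + r*p_r,
    coefficients beyond p_n being treated as zero."""
    n = len(p)

    def coeff(j):
        return p[j - 1] if 1 <= j <= n else 0

    sums = []                                   # sums[r-1] = Sigma(a^r)
    for r in range(1, k_max + 1):
        acc = sum(coeff(i) * sums[r - i - 1] for i in range(1, r))
        sums.append(-acc - r * coeff(r))
    return sums

p = [0, -27, 14, 120]                           # x^4 - 27x^2 + 14x + 120
print(power_sums(p, 5))                         # [0, 54, -42, 978, -1890]
roots = np.roots([1] + p)
print([round(complex(sum(roots ** k)).real, 6) for k in range(1, 6)])
```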

A very slight modification of the preceding equation will give the values of the sums of the negative powers of the roots.

We obtained, by logarithmic differentiation, the equation

$$\frac{f'(x)}{f(x)} = \frac{1}{x - a} + \frac{1}{x - b} + \cdots + \frac{1}{x - l};$$

and if the second side be expanded in ascending powers of x, instead of descending powers, we shall have

$$\frac{f'(x)}{f(x)} = -\Sigma\!\left(\frac{1}{a}\right) - x\,\Sigma\!\left(\frac{1}{a^2}\right) - x^2\,\Sigma\!\left(\frac{1}{a^3}\right) - \cdots;$$

from which we obtain

$$-x\,f'(x) = f(x)\left\{x\,\Sigma\!\left(\frac{1}{a}\right) + x^2\,\Sigma\!\left(\frac{1}{a^2}\right) + \cdots\right\};$$

and after supplying the values of f(x) and f'(x), the comparison of the coefficients of $x^r$ will give us

$$-r\,p_{n-r} = p_n\,\Sigma\!\left(\frac{1}{a^r}\right) + p_{n-1}\,\Sigma\!\left(\frac{1}{a^{r-1}}\right) + \cdots + p_{n-r+1}\,\Sigma\!\left(\frac{1}{a}\right),$$

or,

$$\Sigma\!\left(\frac{1}{a^r}\right) = -\frac{1}{p_n}\left\{r\,p_{n-r} + p_{n-1}\,\Sigma\!\left(\frac{1}{a^{r-1}}\right) + \cdots + p_{n-r+1}\,\Sigma\!\left(\frac{1}{a}\right)\right\}.$$


14. We have hitherto considered the roots $a, b, c, \ldots, l$ of the equation, without reference to any particular relation that may exist amongst them. There is, however, one very important case, that of equal roots, in which the equation may be reduced to a lower degree.

Let us suppose, then, that r roots of the equation become alike, and consequently that $(x - a)^r$ becomes a factor of $f(x)$. Then, since

$$f(x) = f(a) + (x - a)f'(a) + \cdots,$$

it follows that we must have, besides the equation $f(a) = 0$, the $r - 1$ additional equations

$$f'(a) = 0,$$
$$f''(a) = 0,$$
$$\cdots$$
$$f^{(r-1)}(a) = 0.$$

In fact, since we have

$$f(x) = (x - a)^r\,\varphi(x),$$

it is manifest that $(x - a)^{r-1}$ will be a factor of $f'(x)$, or

$$f'(x) = (x - a)^{r-1}\,\chi(x);$$

similarly $f''(x) = (x - a)^{r-2}\,\chi_1(x)$, and so on; and the first $r - 1$ differential coefficients will vanish when $x = a$. And the same conditions must hold for any other repeated factor, as $(x - b)^s$; so that if we have

$$f(x) = (x - a)^r (x - b)^s\,\varphi(x),$$

we must also have

$$f'(x) = (x - a)^{r-1}(x - b)^{s-1}\,\psi(x),$$

and so on.

15. By means of the conditions which have been determined for the case of equal roots, we can reduce the degree of the equation containing them.

Thus, let $f(x)$ have some factors entering once, others entering twice, and so on: and let $X_m$ denote the product of all the factors which enter m times into $f(x)$; so that $X_m$ enters m times into $f(x)$. Then we shall have

$$f(x) = X_1\,X_2^2\,X_3^3\cdots.$$

And from what has preceded, it is manifest that the greatest common measure of $f(x)$ and $f'(x)$ will be

$$X_2\,X_3^2\,X_4^3\cdots,$$

which we shall call $f_1(x)$. Then treating the polynomial

$$f_1(x) = X_2\,X_3^2\,X_4^3\cdots$$

in the same manner as $f(x)$ was treated, we shall find the greatest common measure of $f_1(x)$ and $f_1'(x)$ to be

$$f_2(x) = X_3\,X_4^2\,X_5^3\cdots,$$

and so on, as long as the operation can be executed.

Now if we form from these successive polynomials $f(x), f_1(x), f_2(x), \ldots$ the successive quotients $\varphi_1(x), \varphi_2(x), \varphi_3(x), \ldots$ such that

$$\varphi_1(x) = \frac{f(x)}{f_1(x)} = X_1\,X_2\,X_3\cdots,$$

$$\varphi_2(x) = \frac{f_1(x)}{f_2(x)} = X_2\,X_3\,X_4\cdots,$$

and so on, as far as possible; it will be evident that $X_1, X_2, X_3, \ldots$ can be determined from these quotients by a second operation of division, in the same order; for we have

$$X_1 = \frac{\varphi_1(x)}{\varphi_2(x)},\qquad X_2 = \frac{\varphi_2(x)}{\varphi_3(x)},\qquad X_3 = \frac{\varphi_3(x)}{\varphi_4(x)},$$

and so on, as far as the degree of repetition of factors admits.


The solution of the original equation $f(x) = 0$ is thus reduced to the equations

$$X_1 = 0,\qquad X_2 = 0,\qquad X_3 = 0,\qquad \ldots,$$

all of which are of less dimensions.

This method has also the advantage of pointing out which of the roots enter once, twice, or oftener.

The following is an example of the above process:

Ex. Required to solve the equation

$$x^5 - x^4 + 4x^3 - 4x^2 + 4x - 4 = 0,$$

which has equal roots.

In the first place,

$$f(x) = x^5 - x^4 + 4x^3 - 4x^2 + 4x - 4,$$

$$f'(x) = 5x^4 - 4x^3 + 12x^2 - 8x + 4;$$

and by the rule for finding the greatest common divisor of two algebraic quantities,

$$f_1(x) = x^2 + 2;$$

and, $f_1(x)$ and $f_1'(x)$ having no common divisor,

$$f_2(x) = 1.$$

Secondly, to form the primary quotients,

$$\varphi_1(x) = \frac{f(x)}{f_1(x)} = x^3 - x^2 + 2x - 2,$$

$$\varphi_2(x) = \frac{f_1(x)}{f_2(x)} = x^2 + 2,$$

$$\varphi_3(x) = 1,\qquad \varphi_4(x) = 1.$$

Lastly, the final quotients are

$$X_1 = \frac{\varphi_1(x)}{\varphi_2(x)} = x - 1,$$

$$X_2 = \frac{\varphi_2(x)}{\varphi_3(x)} = x^2 + 2,$$

$$X_3 = 1.$$

So that we have

$$f(x) = (x - 1)(x^2 + 2)^2.$$

And the five roots are

one root $= 1$,

two roots $= +\sqrt{-2}$,

two roots $= -\sqrt{-2}$.

In general, the operations are to be carried on for each set of polynomials, until we arrive at one of no dimensions, or unity; after which every other will be unity.
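
The whole process of Arts. 14-15 can be mirrored with a computer-algebra greatest-common-measure routine. The sketch below (my illustration in SymPy, not the author's procedure verbatim) reproduces the worked example.

```python
import sympy as sp

x = sp.symbols('x')
f = x**5 - x**4 + 4*x**3 - 4*x**2 + 4*x - 4

# Chain of greatest common measures: f, f1 = gcd(f, f'), f2 = gcd(f1, f1'), ...
chain = [f]
while sp.degree(chain[-1], x) > 0:
    chain.append(sp.gcd(chain[-1], sp.diff(chain[-1], x)))

# Primary quotients phi_k = f_(k-1) / f_k, then X_m = phi_m / phi_(m+1).
phi = [sp.cancel(chain[k] / chain[k + 1]) for k in range(len(chain) - 1)]
phi.append(sp.Integer(1))
X = [sp.cancel(phi[k] / phi[k + 1]) for k in range(len(phi) - 1)]

print(chain)   # [f, x**2 + 2, 1]
print(X)       # [x - 1, x**2 + 2]  ->  f = (x - 1) * (x**2 + 2)**2
```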

16.  When  it  is  required  to  resolve  any  polynomial  into 
its  factors,  the  only  difficulty  is  that  of  discovering  all  the 
roots  of  the  equation  formed  by  equating  that  polynomial  to 
zero.     We  shall  give  some  examples  of  this  problem. 

Ex. 1. To resolve $x^n - 1$ into its real component factors, n being odd.

Put $x^n - 1 = 0$,

$$x^n = 1 = \cos 2\lambda\pi + \sqrt{-1}\,\sin 2\lambda\pi,$$


where $\lambda$ is any integer whatever; and we obtain the values

$$x = \cos\frac{2\lambda\pi}{n} + \sqrt{-1}\,\sin\frac{2\lambda\pi}{n};$$

and by giving $\lambda$ the values $1, 2, 3, \ldots, n$, we obtain n different values for x; but these values recur when $\lambda$ is made greater than n, or less than 1.

Now, observing that we have

$$1 = \cos\frac{2n\pi}{n} + \sqrt{-1}\,\sin\frac{2n\pi}{n},$$

$$\cos\frac{2\pi}{n} - \sqrt{-1}\,\sin\frac{2\pi}{n} = \cos\frac{2(n-1)\pi}{n} + \sqrt{-1}\,\sin\frac{2(n-1)\pi}{n},$$

$$\cos\frac{4\pi}{n} - \sqrt{-1}\,\sin\frac{4\pi}{n} = \cos\frac{2(n-2)\pi}{n} + \sqrt{-1}\,\sin\frac{2(n-2)\pi}{n},$$

$$\cdots$$

and that n = 1 + an even number, we may write the roots thus:

First, the single real root $= 1$.

Secondly, the $\dfrac{n-1}{2}$ pairs of impossible roots, which are

$$\cos\frac{2\pi}{n} \pm \sqrt{-1}\,\sin\frac{2\pi}{n},$$

$$\cos\frac{4\pi}{n} \pm \sqrt{-1}\,\sin\frac{4\pi}{n},$$

$$\cdots$$

$$\cos\frac{(n-1)\pi}{n} \pm \sqrt{-1}\,\sin\frac{(n-1)\pi}{n}.$$

And the corresponding real factors of $x^n - 1$ will be

First, the single factor $x - 1$.

Secondly, the $\dfrac{n-1}{2}$ quadratic factors,

$$x^2 - 2x\cos\frac{2\pi}{n} + 1,$$

$$x^2 - 2x\cos\frac{4\pi}{n} + 1,$$

$$\cdots$$

$$x^2 - 2x\cos\frac{(n-1)\pi}{n} + 1.$$


Ex. 2. To resolve $x^n - 1$, n being even.

The general form of the roots is the same as before; and taking away the last root, which $= 1$, we must unite the rest in pairs as before; but after taking away that last root, there will be an odd number left, which can be paired as before, only that the middle one corresponding to $\lambda = \dfrac{n}{2}$ is left single. This single root is

$$\cos\pi + \sqrt{-1}\,\sin\pi = -1.$$

Hence, for the corresponding real factors of $x^n - 1$, we have:

First, the single factors $(x - 1)$, $(x + 1)$.

Secondly, the $\dfrac{n}{2} - 1$ quadratic factors,

$$x^2 - 2x\cos\frac{2\pi}{n} + 1,$$

$$x^2 - 2x\cos\frac{4\pi}{n} + 1,$$

$$\cdots$$

$$x^2 - 2x\cos\frac{(n-2)\pi}{n} + 1.$$


Ex. 3. To resolve $x^n + 1$, when n is odd.

By changing x into $-x$ in Ex. 1, and changing the signs of the result, the factors of $x^n + 1$ will be,

First, the single factor $x + 1$.

Secondly, the $\dfrac{n-1}{2}$ quadratic factors,

$$x^2 + 2x\cos\frac{2\pi}{n} + 1,$$

$$x^2 + 2x\cos\frac{4\pi}{n} + 1,$$

$$\cdots$$

$$x^2 + 2x\cos\frac{(n-1)\pi}{n} + 1.$$


Ex. 4. To resolve $x^n + 1$, n being even.

Since $x^n + 1 = 0$ gives

$$x^n = -1 = \cos(2\lambda + 1)\pi + \sqrt{-1}\,\sin(2\lambda + 1)\pi,$$

$$x = \cos\frac{(2\lambda + 1)\pi}{n} + \sqrt{-1}\,\sin\frac{(2\lambda + 1)\pi}{n},$$

$\lambda$ being any integer whatever; and by giving $\lambda$ the values $0, 1, 2, \ldots, n - 1$, we obtain n different values of x, all of which are impossible, and all different; but if $\lambda$ be made greater than $n - 1$, or less than 0, the values recur.

Now observing that we have

$$\cos\frac{(2n - 1)\pi}{n} + \sqrt{-1}\,\sin\frac{(2n - 1)\pi}{n} = \cos\frac{\pi}{n} - \sqrt{-1}\,\sin\frac{\pi}{n},$$

$$\cos\frac{(2n - 3)\pi}{n} + \sqrt{-1}\,\sin\frac{(2n - 3)\pi}{n} = \cos\frac{3\pi}{n} - \sqrt{-1}\,\sin\frac{3\pi}{n},$$

$$\cdots$$

we shall be able to class the n impossible values of x into $\dfrac{n}{2}$ pairs,

$$\cos\frac{\pi}{n} \pm \sqrt{-1}\,\sin\frac{\pi}{n},$$

$$\cos\frac{3\pi}{n} \pm \sqrt{-1}\,\sin\frac{3\pi}{n},$$

$$\cdots$$

$$\cos\frac{(n-1)\pi}{n} \pm \sqrt{-1}\,\sin\frac{(n-1)\pi}{n}.$$

And the corresponding $\dfrac{n}{2}$ real quadratic factors will be

$$x^2 - 2x\cos\frac{\pi}{n} + 1,$$

$$x^2 - 2x\cos\frac{3\pi}{n} + 1,$$

$$\cdots$$

$$x^2 - 2x\cos\frac{(n-1)\pi}{n} + 1.$$

In the preceding examples there has been no difficulty in the decomposition into factors, because we were at once certain of having obtained all the roots of the equation. This is also the case in the more general problem.


Ex. 5. To find the real factors of

$$x^{2n} - 2x^n\cos\theta + 1.$$

For the polynomial above given is evidently the product of

$$x^n - \cos\theta + \sqrt{-1}\,\sin\theta,$$

$$x^n - \cos\theta - \sqrt{-1}\,\sin\theta;$$

and by equating these separately to zero, we find the roots of

$$x^{2n} - 2x^n\cos\theta + 1 = 0.$$

These roots are evidently given by the single formula

$$x = \cos\frac{2\lambda\pi + \theta}{n} \pm \sqrt{-1}\,\sin\frac{2\lambda\pi + \theta}{n}.$$

And the required factors are

$$x^2 - 2x\cos\frac{2\lambda\pi + \theta}{n} + 1.$$

Here $\lambda$ is to receive the n values $1, 2, 3, \ldots, n$.
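
The real quadratic factors found in Ex. 1 can be multiplied back together numerically; the following sketch is an added check for one odd value of n of my own choosing.

```python
import numpy as np

# Rebuild x^n - 1 for odd n from the single factor (x - 1) and the (n-1)/2
# quadratic factors x^2 - 2x cos(2*k*pi/n) + 1, k = 1, ..., (n-1)/2.
n = 7
poly = np.array([1.0, -1.0])                      # x - 1
for k in range(1, (n - 1) // 2 + 1):
    quad = [1.0, -2.0 * np.cos(2 * np.pi * k / n), 1.0]
    poly = np.polymul(poly, quad)

print(np.round(poly, 10))                         # ~ coefficients of x^7 - 1
```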


17. We shall now treat of the inverse problem, namely, that of finding the polynomial, when the roots of the equation, formed by equating that polynomial to zero, are known.

Thus, to find the polynomial of the nth degree, which shall become zero for the n values of x,

$$a_1,\ a_2,\ a_3,\ \ldots,\ a_n,$$

we have the factors of the polynomial

$$(x - a_1),\ (x - a_2),\ \ldots,\ (x - a_n);$$

and observing that a constant factor will not affect the roots of any equation, we conclude that the required polynomial is

$$f(x) = C(x - a_1)(x - a_2)\cdots(x - a_n),$$

denoting by C any constant quantity, which is indeterminate, until some other condition is proposed for $f(x)$ to satisfy, and for particularizing C.

Thus, if any value of $f(x)$ be known for a value of x not causing $f(x)$ to vanish, we can determine C in terms of the known value.


Let $f(a_0)$ be supposed known,

$$f(a_0) = C(a_0 - a_1)(a_0 - a_2)\cdots(a_0 - a_n);$$

and by dividing the former equation by this, we obtain

$$\frac{f(x)}{f(a_0)} = \frac{x - a_1}{a_0 - a_1}\cdot\frac{x - a_2}{a_0 - a_2}\cdots\frac{x - a_n}{a_0 - a_n}.$$

In the particular case in which it is required to find a polynomial $f(x)$, which shall be zero for the values of x

$$a_1,\ a_2,\ a_3,\ \ldots,\ a_n,$$

and shall become $= 1$ when $x = a_0$, or $f(a_0) = 1$, the required polynomial is

$$f(x) = \frac{x - a_1}{a_0 - a_1}\cdot\frac{x - a_2}{a_0 - a_2}\cdots\frac{x - a_n}{a_0 - a_n}.$$

18. There is another problem of more generality, which can be reduced to the preceding. It is to find a polynomial of n dimensions, when the $n + 1$ values of the polynomial due to $n + 1$ values of the variable are known. That is, to find $f(x)$, having given the $n + 1$ values

$$f(a_0),\ f(a_1),\ f(a_2),\ \ldots,\ f(a_n),$$

which are due to the values of x,

$$a_0,\ a_1,\ a_2,\ \ldots,\ a_n.$$

Here we must assume

$$f(x) = \varphi_0(x)f(a_0) + \varphi_1(x)f(a_1) + \cdots + \varphi_n(x)f(a_n),$$

where $\varphi_0(x), \varphi_1(x), \ldots, \varphi_n(x)$ are all polynomials of the nth degree; and, consequently, $f(x)$ is of the same degree.

Now it is manifest that we shall have $\varphi_0(x) = $ zero for the values

$$x = a_1,\ x = a_2,\ \ldots,\ x = a_n,$$

but that $\varphi_0(a_0)$ must $= 1$.


Hence

$$\varphi_0(x) = \frac{x - a_1}{a_0 - a_1}\cdot\frac{x - a_2}{a_0 - a_2}\cdots\frac{x - a_n}{a_0 - a_n}.$$

And similar reasoning will hold for the determination of $\varphi_1(x), \varphi_2(x), \ldots, \varphi_n(x)$.

Hence we obtain, finally,

$$f(x) = f(a_0)\,\frac{(x - a_1)(x - a_2)\cdots(x - a_n)}{(a_0 - a_1)(a_0 - a_2)\cdots(a_0 - a_n)} + f(a_1)\,\frac{(x - a_0)(x - a_2)\cdots(x - a_n)}{(a_1 - a_0)(a_1 - a_2)\cdots(a_1 - a_n)} + \cdots + f(a_n)\,\frac{(x - a_0)(x - a_1)\cdots(x - a_{n-1})}{(a_n - a_0)(a_n - a_1)\cdots(a_n - a_{n-1})}.$$

We may here remark, that if a be any other constant, then the polynomial $f(x) - f(a)$ will take the $n + 1$ values

$$f(a_0) - f(a),\ f(a_1) - f(a),\ \ldots,\ f(a_n) - f(a).$$

Hence we shall have the equation

$$f(x) - f(a) = \{f(a_0) - f(a)\}\,\frac{(x - a_1)\cdots(x - a_n)}{(a_0 - a_1)\cdots(a_0 - a_n)} + \{f(a_1) - f(a)\}\,\frac{(x - a_0)(x - a_2)\cdots(x - a_n)}{(a_1 - a_0)(a_1 - a_2)\cdots(a_1 - a_n)} + \cdots + \{f(a_n) - f(a)\}\,\frac{(x - a_0)\cdots(x - a_{n-1})}{(a_n - a_0)\cdots(a_n - a_{n-1})}.$$

And as this holds for all values of a, we can profit by the introduction of this arbitrary constant to expel one of the terms on the second side; which will be effected by taking a = one of the $n + 1$ values of x,

$$a_0,\ a_1,\ a_2,\ \ldots,\ a_n.$$


As an example, take $n = 1$, and put $a = a_0$; we shall then have

$$f(x) - f(a_0) = \{f(a_1) - f(a_0)\}\,\frac{x - a_0}{a_1 - a_0};$$

which is the equation for a straight line passing through two given points, supposing x to denote the abscissa, and $f(x)$ the corresponding ordinate.

The general problem is one of considerable utility in interpolating the series

$$f(a_0),\ f(a_1),\ f(a_2),\ \ldots,\ f(a_n),$$

corresponding to the values

$$a_0,\ a_1,\ a_2,\ \ldots,\ a_n,$$

so as to obtain the term $f(x)$ corresponding to any intermediate value x.
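
The interpolation formula of this article is what is now called Lagrange interpolation, and it translates directly into code. The sketch below is an added illustration, not the author's algorithm; the sample data are my own.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the degree-n polynomial through the n+1 points (xs[i], ys[i])
    at x, using the basis polynomials phi_i of Art. 18."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        phi = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                phi *= (x - xj) / (xi - xj)
        total += yi * phi
    return total

# Interpolate f(x) = x^3 from four samples and evaluate between them.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [v**3 for v in xs]
print(lagrange_interpolate(xs, ys, 1.5))   # 3.375: a cubic is recovered exactly
```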


CHAP.  II. 


ON THE TRANSFORMATION OF EQUATIONS; OR THE DETERMINATION OF EQUATIONS BY MEANS OF THE RELATIONS EXISTING BETWEEN THEIR ROOTS AND THE ROOTS OF GIVEN EQUATIONS.


19. The general problem is, having given the equation $f(x) = 0$, whose roots are $a_1, a_2, a_3, \ldots, a_n$, to find the equation $F(y) = 0$, whose roots are to be all the combinations of the roots $a_1, a_2, a_3, \ldots, a_n$, of which the general type is $\varphi(a_1, a_2, \ldots, a_r)$.

Here one value of y will be

$$y = \varphi(a_1, a_2, \ldots, a_r);$$

and since $a_1, a_2, \ldots, a_r$ belong to the n roots of the equation $f(x) = 0$, we shall have the r equations of condition

$$f(a_1) = 0,\qquad f(a_2) = 0,\qquad \ldots,\qquad f(a_r) = 0;$$

from which, and the equation

$$y = \varphi(a_1, a_2, \ldots, a_r),$$

we shall be enabled to eliminate the quantities $a_1, a_2, \ldots, a_r$; and the resulting equation $\Phi(y) = 0$ will have for its roots all the combinations similar to $\varphi(a_1, a_2, \ldots, a_r)$. This is evident from the consideration that the same resulting equation $\Phi(y) = 0$ would have been obtained by the assumption of any other combination, as

$$y = \varphi(a_2, a_3, \ldots, a_{r+1}).$$

But it must be remarked, that the final equation so found will also contain the roots arising from combinations in which the same root is repeated, such as

$$\varphi(a_1, a_1, \ldots, a_1),\qquad \varphi(a_2, a_1, \ldots, a_1),\qquad \ldots,$$

with several other classes, all of which are equally foreign to the problem, if combinations without repetitions be required.

If, however, $r = 1$, or the values of y are dependent upon the values of x singly, there will not be any such difficulty. We then have

$$f(x) = 0,\qquad y = \varphi(x);$$

whence, by elimination, we have the correct final equation $F(y) = 0$.

It will be seen in the sequel, that, in particular cases of r greater than 1, there will always be found a method of excluding the obnoxious roots from $\Phi(y) = 0$, and reducing it to the correct equation $F(y) = 0$; and this will be effected by some artifice peculiarly adapted to each case.


20. There is also another method of finding the equation $F(y) = 0$; for we can find its degree by means of the number of the combinations of the form

$$\varphi(a_1, a_2, \ldots, a_r);$$

and, assuming indeterminate coefficients, we can write down the equation $F(y) = 0$.

These coefficients will now be symmetrical functions of all the roots, and can therefore be estimated in terms of the coefficients of $f(x)$.

21. The following are examples of the case where y depends on one root only of the equation $f(x) = 0$.

Ex. 1. To form the equation whose roots shall differ from those of $f(x) = 0$ in sign only.

Assume for $F(y) = 0$ the equation

$$y^n + q_1 y^{n-1} + q_2 y^{n-2} + \cdots + q_n = 0,$$

since $F(y)$ is evidently of the same degree as $f(x)$, that is, of the nth degree.

Then, if $a, b, c, \ldots, l$ are the roots of $f(x) = 0$, we have

$$q_1 = -\Sigma(-a) = \Sigma(a) = -p_1,$$
$$q_2 = \Sigma(ab) = p_2,$$
$$q_3 = \Sigma(abc) = -p_3,$$
$$\cdots$$
$$q_n = (-1)^n p_n.$$

And the required equation is

$$y^n - p_1 y^{n-1} + p_2 y^{n-2} - \cdots + (-1)^n p_n = 0.$$

The rule in this case is to change the signs of the alternate terms of the equation, beginning with the second.

Thus, the roots of the equation

$$x^4 - 27x^2 + 14x + 120 = 0,$$

or

$$x^4 + 0\cdot x^3 - 27x^2 + 14x + 120 = 0,$$

are 4, 3, $-2$, and $-5$.

But when the signs have been changed according to the above rule, as

$$x^4 - 0\cdot x^3 - 27x^2 - 14x + 120 = 0,$$

or

$$x^4 - 27x^2 - 14x + 120 = 0,$$

the roots will have been changed to $-4$, $-3$, 2, and 5.

This transformation can also be effected by eliminating x; thus,

$$f(x) = 0,\qquad y = -x,$$

and the resulting equation is

$$f(-y) = 0.$$

Ex. 2. To increase or diminish the roots of an equation by a given quantity.

Put

$$y = x - \delta,\qquad x = \delta + y;$$

then the roots will be diminished by $\delta$, if $\delta$ be positive; or will be increased by $-\delta$, if $\delta$ be negative.

Hence, by substitution in $f(x) = 0$, we obtain the resulting equation in y,

$$f(\delta) + y\,f'(\delta) + \frac{y^2}{1\cdot2}\,f''(\delta) + \cdots + y^n = 0.$$

Thus, to diminish by 3 the roots 5, 2, $-3$, $-4$ of the equation before mentioned,

$$x^4 - 27x^2 - 14x + 120 = 0.$$

Here

$$f(\delta) = \delta^4 - 27\delta^2 - 14\delta + 120 = -84,$$
$$f'(\delta) = 4\delta^3 - 54\delta - 14 = -68,$$
$$f''(\delta) = 12\delta^2 - 54 = 54,$$
$$f'''(\delta) = 24\delta = 72;$$

and substituting these values, we obtain the final equation in y,

$$y^4 + 12y^3 + 27y^2 - 68y - 84 = 0;$$

whose four roots are

2, $-1$, $-6$, and $-7$.

By means of this transformation we can take away any term of an equation. Thus, if we wish that the new equation shall want its second term, we should assume that the coefficient of $y^{n-1}$ = zero, or

$$p_1 + n\delta = 0,\qquad \delta = -\frac{p_1}{n};$$

and we must increase the roots of the equation, each by $\dfrac{p_1}{n}$.

Thus, for the equation

$$x^3 - 6x^2 + 5 = 0,$$

assume

$$x = y + \delta = y + 2,$$

and the transformed equation is

$$y^3 - 12y - 11 = 0,$$

which wants its second term.
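
The substitution $x = y + \delta$ can be carried out mechanically by Horner's scheme. The following sketch (an added illustration, not the author's arithmetic) reproduces the worked example of diminishing the roots by 3.

```python
import numpy as np

# Diminish the roots of x^4 - 27x^2 - 14x + 120 = 0 by 3, i.e. set x = y + 3.
coeffs = [1, 0, -27, -14, 120]
y_plus_3 = np.poly1d([1, 3])

shifted = np.poly1d([coeffs[0]])
for c in coeffs[1:]:
    shifted = shifted * y_plus_3 + c        # Horner's scheme in y + 3

print(shifted.coeffs)                        # [1 12 27 -68 -84]
print(np.roots(shifted.coeffs))              # ~ 2, -1, -6, -7 in some order
```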


Ex. 3. To increase or diminish all the roots in a given ratio.

Assume the equation, whose roots are

$$ma,\ mb,\ \ldots,\ ml,$$

and which is of the nth degree, to be

$$y^n + q_1 y^{n-1} + q_2 y^{n-2} + \cdots + q_n = 0.$$

Then

$$q_1 = -\Sigma(ma) = -m\,\Sigma(a) = m p_1,$$
$$q_2 = \Sigma(m^2 ab) = m^2\,\Sigma(ab) = m^2 p_2,$$
$$\cdots$$
$$q_n = m^n p_n.$$

And the required equation is

$$y^n + m p_1 y^{n-1} + m^2 p_2 y^{n-2} + \cdots + m^n p_n = 0.$$

One chief advantage of the preceding transformation is, that it enables us to render the coefficients integers, when they are vulgar fractions, or can be reduced to that form; that is, when they are commensurable.

Thus, to render the coefficients of the equation

$$x^3 + 2x^2 + \tfrac{1}{4}x + \tfrac{1}{9} = 0$$

integers, we must assume

$$m = 2\cdot3 = 6,$$

and the resulting equation will be

$$y^3 + 12y^2 + 9y + 24 = 0.$$

This transformation might also be effected by elimination, as follows:

$$f(x) = 0,\qquad y = mx.$$

Hence

$$x = \frac{y}{m},\qquad f\!\left(\frac{y}{m}\right) = 0.$$


Ex. 4. To find the equation whose roots are the squares of the roots of the given equation.

Here we have to eliminate x from the equations

$$f(x) = 0,\qquad y = x^2;$$

and the result is to be an equation in y, not containing fractional powers of y; which it would do if the elimination were performed by substituting for x its values $\pm\sqrt{y}$.

Instead, then, of this elimination, we shall form a polynomial of the 2nth degree, containing only even powers of x; the form of the polynomial being the product of the factors

$$(x^2 - a^2)(x^2 - b^2)(x^2 - c^2)\cdots(x^2 - l^2);$$

and then write y for $x^2$, and equate to zero.

Now, observing that we have

$$f(x) = (x - a)(x - b)\cdots(x - l),$$

$$f(-x) = (-1)^n\,(x + a)(x + b)\cdots(x + l),$$

we shall obtain the required polynomial

$$F(x^2) = (x^2 - a^2)(x^2 - b^2)\cdots(x^2 - l^2)$$

by forming the product

$$(-1)^n f(x)\,f(-x).$$

And if we put y in the place of $x^2$ in $F(x^2)$, and equate to zero, we shall have the required equation,

$$F(y) = 0;$$

which is evidently the same equation with

$$f(\sqrt{y})\,f(-\sqrt{y}) = 0,$$

omitting the factor $(-1)^n = \pm1$.

Thus, in the example

$$x^3 - 6x^2 + 8x - 10 = 0,$$

we shall form the product of

$$y\sqrt{y} - 6y + 8\sqrt{y} - 10,$$

$$y\sqrt{y} + 6y + 8\sqrt{y} + 10,$$

and equate to zero the product; which process gives the equation

$$y^3 - 20y^2 - 56y - 100 = 0.$$

The proof of the advantage of adopting the method here given is, that an analogous method will apply for any other powers of the roots, provided the index be an integer.

Ex. 5. To find the equation whose roots are the rth powers of the roots of the given equation.

We have to form and equate to zero the polynomial of the nth degree,

$$(y - a^r)(y - b^r)\cdots(y - l^r);$$

or, we may proceed to form the function

$$F(x^r) = (x^r - a^r)(x^r - b^r)\cdots(x^r - l^r),$$

and then write y for $x^r$, and equate to zero; the only datum for the formation of this function being

$$f(x) = (x - a)(x - b)\cdots(x - l).$$

Now if we designate the roots of

$$x^r - 1 = 0$$

by $\lambda_1, \lambda_2, \ldots, \lambda_r$, we shall have

$$x^r - 1 = (x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_r);$$

or

$$a^r - x^r = (a - \lambda_1 x)(a - \lambda_2 x)\cdots(a - \lambda_r x);$$

similarly,

$$b^r - x^r = (b - \lambda_1 x)(b - \lambda_2 x)\cdots(b - \lambda_r x),$$

$$\cdots$$

And forming the product, we have, calling

$$F(y) = (y - a^r)(y - b^r)\cdots(y - l^r),$$

$$(-1)^n F(x^r) = (-1)^{nr} f(\lambda_1 x)\,f(\lambda_2 x)\cdots f(\lambda_r x);$$

and the equation $F(y) = 0$ is the same as the equation

$$f(\lambda_1 y^{1/r})\,f(\lambda_2 y^{1/r})\cdots f(\lambda_r y^{1/r}) = 0,$$

in which only integer powers of y will appear.

Ex. 6. To find the equation whose roots are the reciprocals of the roots of the given equation.

Here

$$f(x) = 0,\qquad y = \frac{1}{x};$$

from which equations we find

$$f\!\left(\frac{1}{y}\right) = 0;$$

or

$$\frac{1}{y^n} + \frac{p_1}{y^{n-1}} + \cdots + p_n = 0,$$

or

$$y^n + \frac{p_{n-1}}{p_n}\,y^{n-1} + \cdots + \frac{1}{p_n} = 0;$$

and the rule is to reverse the order of the coefficients of the equation; and the equation may then be reduced to the usual form, by dividing by the coefficient of the highest power of y, if necessary.

We may remark here, that if the transformed equation be the same as the original one; that is, if the coefficients are the same when inverted, or differ only in the sign of the whole series; the roots of the transformed equation are the same as those of the given equation. Hence the roots of the given equation must consist of either $+1$ or $-1$, repeated any number of times, (these being the only quantities which are identical with their reciprocals,) and of pairs of roots of the form $a, \dfrac{1}{a}, b, \dfrac{1}{b}, \ldots$, every pair of which will give a similar pair in the transformed equation. Such equations are called Reciprocal Equations, and their solution is very much simplified by this property of their roots.

22. We shall now proceed to give examples of the case where the roots of the transformed equation are connected with the roots of the given equation, in such a manner that each one of the former depends upon more than one of the latter.

Ex. 1. To find the equation to the differences of the roots.

Here

$$y = a - b,$$

and

$$f(a) = 0,\qquad f(b) = 0,$$

are the equations which, by the elimination of a, b, will give the final equation $F(y) = 0$.

Eliminating a, we have

$$f(b + y) = 0,\qquad f(b) = 0,$$

from which b is to be eliminated.

If we had eliminated b first, we should have obtained the equations

$$f(a) = 0,\qquad f(a - y) = 0,$$

from which a is to be eliminated.

Hence we see that the final equation $F(y) = 0$ is to be obtained from the elimination of x between the equations

$$f(x) = 0,\qquad f(x + y) = 0;$$

or between the equations

$$f(x) = 0,\qquad f(x - y) = 0.$$

This shows that $F(y)$ will not alter its form when y is changed into $-y$, and therefore $F(y)$ contains only even powers of y; a fact which we may point out a priori. For if $\alpha, \beta, \gamma, \ldots$ be roots of the equation $F(y) = 0$, and we suppose that

$$\alpha = a - b,\qquad \beta = b - c,\qquad \gamma = c - d,\qquad \ldots,$$

then

$$-\alpha = b - a,\qquad -\beta = c - b,\qquad -\gamma = d - c,\qquad \ldots,$$

which are still roots of the equation;

consequently, we shall have

$$F(y) = (y - \alpha)(y + \alpha)(y - \beta)(y + \beta)\cdots$$

$$= (y^2 - \alpha^2)(y^2 - \beta^2)\cdots$$

$$= \Phi(y^2),\ \text{suppose}.$$

If we now put $y^2 = z$, we have

$$\Phi(z) = (z - \alpha^2)(z - \beta^2)\cdots,$$

which is the equation to the squares of the differences of the roots; and therefore is of the $n\cdot\dfrac{n-1}{2}$th degree.

We have yet to perform the elimination of x between

$$f(x) = 0,\qquad f(x + y) = 0.$$

If the elimination be performed directly, the resulting equation $F(y) = 0$ will contain n roots = zero, and we shall have to suppress the factor $y^n$. These roots correspond to the differences

$$a - a,\qquad b - b,\qquad c - c,\qquad \ldots,$$

or the combinations with repetitions.

These roots may, however, be suppressed in the commencement of the calculation, since we may take, instead of the two equations above, another pair of equations formed from them;

$$f(x) = 0,\qquad f(x + y) - f(x) = 0.$$


Now the second equation is satisfied by $y = 0$, and, therefore, is of the form

$$y\,\psi(x, y) = 0;$$

and, by suppressing this factor y, we have the correct result, by obtaining the final equation in y from

$$f(x) = 0,\qquad \psi(x, y) = 0.$$

In fact we have, by Taylor's series, after expanding the second equation, and suppressing the factor y,

$$\psi(x, y) = f'(x) + \frac{y}{1\cdot2}\,f''(x) + \cdots + y^{n-1};$$

from which equation, and the equation

$$f(x) = 0,$$

we obtain the required equation

$$F(y) = 0.$$

As this transformation, or rather the second transformation, by putting $y^2 = z$, and forming the equation whose roots are the squares of the differences of the roots of the original equation, is one of considerable importance; and because the process of elimination, though more beautiful in theory, is less commodious in practice; we shall proceed to form the equation in z, whose degree is known, by calculating its coefficients. This is the method adopted by Lagrange.

Assume the equation to be

$$z^m + q_1 z^{m-1} + \cdots + q_m = 0,\qquad\text{where } m = \frac{n(n-1)}{2}.$$

Let $\alpha, \beta, \gamma, \ldots, \lambda$ be its roots; which are the squares of the differences of the roots $a, b, c, \ldots, l$.

Now we have, for any integer k,

$$(x - a)^{2k} = x^{2k} - 2k\,x^{2k-1}a + \cdots + a^{2k},$$

$$(x - b)^{2k} = x^{2k} - 2k\,x^{2k-1}b + \cdots + b^{2k},$$

$$\cdots$$

whence, by addition, we have

$$\Sigma\{(x - a)^{2k}\} = n\,x^{2k} - 2k\,\Sigma(a)\,x^{2k-1} + \cdots + \Sigma(a^{2k}).$$

And from this equation, writing for x successively $a, b, c, \ldots, l$, we have, after adding all such equations, and observing that every member of the first side, as $(b - a)^{2k}$, will be repeated under the form $(a - b)^{2k}$,

$$2\,\Sigma(\alpha^k) = n\,\Sigma(a^{2k}) - 2k\,\Sigma(a)\,\Sigma(a^{2k-1}) + \cdots + n\,\Sigma(a^{2k});$$

and collecting the terms of the second side in pairs, we obtain

$$\Sigma(\alpha^k) = n\,\Sigma(a^{2k}) - 2k\,\Sigma(a)\,\Sigma(a^{2k-1}) + \cdots + (-1)^k\,\frac{2k(2k-1)\cdots(k+1)}{1\cdot2\cdots k}\cdot\frac{\{\Sigma(a^k)\}^2}{2},$$

observing that the middle term is left single.

Now we can find the sums of the powers of the roots,

$$\Sigma(a),\ \Sigma(a^2),\ \ldots,\ \Sigma(a^{2k}),$$

in terms of the coefficients

$$p_1,\ p_2,\ p_3,\ \ldots,\ p_n;$$

hence $\Sigma(\alpha^k)$ can be found in terms of the same quantities. But $\Sigma(\alpha^k)$ can also be determined in terms of the new coefficients

$$q_1,\ q_2,\ q_3,\ \ldots,\ q_m;$$

hence, by equating the two values, we have an equation connecting the new coefficients with the old ones; and, by putting $k = 1, 2, 3, \ldots, m$ successively, we shall successively determine the coefficients of the required equation.


Even this process, however, becomes too complicated for practice when n is $> 4$.
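
For small degrees the elimination route of this article can be carried out with a resultant. The sketch below is an added illustration using SymPy's resultant, not Lagrange's coefficient method described above; the sample cubic is one used earlier in the chapter.

```python
import sympy as sp

# Eliminate x between f(x) = 0 and {f(x + y) - f(x)}/y = 0, then put z = y^2.
x, y, z = sp.symbols('x y z')
f = x**3 - 6*x**2 + 5

psi = sp.expand(sp.cancel((f.subs(x, x + y) - f) / y))
F = sp.expand(sp.resultant(f, psi, x))     # polynomial in y, even powers only
print(F)

Phi = sp.expand(F.subs(y**2, z))           # the equation of squared differences
print(Phi)
```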

Ex. 2. To find the equation whose roots shall be the sums of every two roots of the given equation.

Here

$$y = a + b,\qquad f(a) = 0,\qquad f(b) = 0,$$

are the equations, from which we are to eliminate a and b, in order to obtain the required equation

$$F(y) = 0.$$

First, let b be eliminated, and we have

$$f(a) = 0,\qquad f(y - a) = 0.$$

Secondly, we have to eliminate a from these two equations; but if that operation be performed immediately, the result would contain the roots $2a, 2b, \ldots, 2l$, which are foreign to the question.

To avoid this, we must form the equations

$$f(a) = 0,\qquad f(y - a) - f(a) = 0;$$

the second of which is satisfied by $y = 2a$, and therefore contains the factor $y - 2a$, which is foreign to the question, and must be expunged.

In fact, expanding the second equation by the series of Taylor, we have

$$f(a + (y - 2a)) - f(a) = 0,$$

or

$$f'(a) + \frac{y - 2a}{1\cdot2}\,f''(a) + \cdots + (y - 2a)^{n-1} = 0,$$

observing that we must suppress the factor $y - 2a$; and we shall now obtain the correct equation by eliminating a from this equation and $f(a) = 0$. We shall then have to take the square root of $F(y) = 0$, since each root enters twice.

Ex. 3. To find the equation of which the roots shall express the ratios of every two roots of the given equation.

Here

$$y = \frac{a}{b},\qquad f(a) = 0,\qquad f(b) = 0.$$

Proceeding as usual, we have

$$f(a) = 0,\qquad f\!\left(\frac{a}{y}\right) = 0;$$

or

$$f(b) = 0,\qquad f(by) = 0;$$

so that the final equation in y results equally from eliminating x between

$$f(x) = 0,\qquad f\!\left(\frac{x}{y}\right) = 0,$$

or between

$$f(x) = 0,\qquad f(xy) = 0.$$

Hence we may remark, that the required equation will not change when we write $\dfrac{1}{y}$ for y; in other words, it is a reciprocal equation. This is also evident from the consideration that every root $\dfrac{a}{b}$ has for a companion the inverse $\dfrac{b}{a}$.

The final equation resulting from the elimination will contain n roots equal to unity, being the ratios

$$\frac{a}{a},\ \frac{b}{b},\ \ldots,\ \frac{l}{l};$$

to avoid this, take the equations

$$f(x) = 0,\qquad f(xy) - f(x) = 0;$$

and for the second equation write

$$f\{x + x(y - 1)\} - f(x) = 0,$$

or, expanding and suppressing the factor $(y - 1)x$,

$$f'(x) + \frac{x(y - 1)}{1\cdot2}\,f''(x) + \cdots + x^{n-1}(y - 1)^{n-1} = 0.$$

We give the following examples for practice:

Ex. 4. To find the equation whose roots shall be the products of every two of the roots of the given equation.

Ex. 5. Form the equation whose roots shall be all combinations of the form $p(a + b) + q\,ab$, a and b being two roots of $f(x) = 0$, and p, q constant quantities.


23. In particular cases of the general equation, particular artifices are preferable; thus, for the cubic

$$x^3 + px^2 + qx + r = 0,$$

whose roots are a, b, c, we can find the equation whose roots are

$$\frac{b + c}{a},\qquad \frac{a + c}{b},\qquad \frac{a + b}{c},$$

or

$$\frac{a + b + c}{a} - 1,\qquad \frac{a + b + c}{b} - 1,\qquad \frac{a + b + c}{c} - 1,$$

by putting

$$y = \frac{-p}{x} - 1,\qquad\text{or}\qquad x = \frac{-p}{1 + y},$$

and substituting in the given equation.


24. It will not be improper, in this place, to treat of a class of problems which can be most easily solved by the method of transformation; they are those in which there is a given relation between some of the roots of the given equation, and those roots are required. The general problem is, having given that one of the roots g depends on a certain number of the others, as for instance upon three by the relation

$$g = \varphi(a, b, c),$$

required to find this root g of the equation.

Form the equation whose roots are all combinations similar to $\varphi(a, b, c)$.

Let this be $F(x) = 0$; then one root of $F(x) = 0$ is g, which is also a root of $f(x) = 0$; and, consequently, $F(x)$, $f(x)$ have a common factor $x - g$, which can be found by the usual method.

If any other root h is in similar circumstances, they will have a common factor $(x - g)(x - h)$.

Ex. 1. Suppose that we knew that there were two roots of $f(x) = 0$ of the form $\pm a$; to find these roots.

Since $\pm a$ are two roots of

$$x^n + p_1 x^{n-1} + p_2 x^{n-2} + \cdots + p_n = 0,$$

it follows that, on changing the signs of the roots of the equation, we shall have $\mp a$ still roots of

$$x^n - p_1 x^{n-1} + p_2 x^{n-2} - \cdots + p_n = 0.$$


TRANSFORMATION    OF   EQUATIONS.  43 

We  suppose  n  to  be  an  even  number ;  and  it  will  be  seen 
that  a  similar  proof  will  apply  when  n  is  odd. 

Now  since  ±  a  satisfy  the  two  preceding  equations,  they 
will  also  satisfy  their  sum  and  difference ;  or,  omitting  the 
factor  x  of  the  difference,  they  will  still  be  roots  of  the  equa- 
tions 

Xn  +  p2Xn~2  + +  Pn   =   0\ 

PxXn'2  +  P3Vn~4  + +Pn-l   =    °J 

Hence  the  two  polynomials  forming  these  equations  must 
have  a  common  factor  x2  —  a2  at  least.  And  if  there  be  any 
other  pair  of  roots  ±  b  in  similar  circumstances,  x2  —  b2  is 
also  a  factor. 

Thus,  for  the  equation 

#3  _  2x2  —  x  +  2  =  0, 
which  has  two  such  roots,  we  form  the  equation 
x*  +  2a;2  —  x  —  2  =  0; 

and   forming  the  sum  and  difference,  after  suppressing  the 
factor  x, 

x2-  1  =  0^ 
and  the  common  factor  is  evidently 

x2-l; 
so  that  these  two  roots  are  ±  1. 

.  j        .       x3  —  2x2  —  x  +  2  0     t. 

Also,  since  ^ i =  x  —  z,    the  remaining 

x  —  i 

root  is  2. 
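
Ex. 1 translates naturally into a greatest-common-divisor computation. The sketch below is an added illustration, phrased through the even and odd parts of f, which is equivalent to the sum-and-difference step used above; it recovers the pair $\pm 1$.

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x**2 - x + 2

even_part = sp.expand((f + f.subs(x, -x)) / 2)      # -2x^2 + 2
odd_part = sp.expand((f - f.subs(x, -x)) / (2*x))   # x^2 - 1
common = sp.gcd(even_part, odd_part)

print(common)                      # x**2 - 1, so the pair is +/-1
print(sp.div(f, common, x)[0])     # x - 2, the remaining factor
```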

Ex. 2. The difference of two roots of the equation $f(x) = 0$ is $\delta$; to find these roots.

Let the less of the two roots be a, and therefore the other be $a + \delta$. Increase the roots of the equation by $\delta$; these two roots become $a + \delta$, $a + 2\delta$; so that $a + \delta$ is still a root of the transformed equation. In other words, $x - a - \delta$ is a factor of the two polynomials

$$f(x)\qquad\text{and}\qquad f(x - \delta);$$

observing that the roots of the equation

$$f(x - \delta) = 0$$

are those of $f(x) = 0$ increased by $\delta$.

Thus, for example, knowing that the difference of two roots is 2 in the equation

$$x^3 - 2x^2 - x + 2 = 0,$$

and searching for the common factor as above, we find

$$a + \delta = 1,\qquad a = 1 - \delta = -1.$$

Ex. 3. Having given the sum of two roots of $f(x) = 0$, to find the roots.

Ex. 4. Given the product of two roots, or their quotient, to find these roots.

Ex. 5. The sides and hypothenuse of a right-angled triangle are roots of $f(x) = 0$; find the hypothenuse.

These examples are given for the sake of rendering the subject familiar, by exciting individual practice in problems of this description. We shall add two other examples.

Ex. 6. The roots of $f(x) = 0$ are all in arithmetical progression; find them.

Ex. 7. The roots of $f(x) = 0$ are all in geometrical progression; find them.

These two examples are best solved by means of the coefficients.


25. There is, however, one class of equations whose roots are connected by certain relations, to which the ordinary process of transformation will not be applicable. For the method becomes illusory, when the transformation reproduces the original equation.

The most important case is that of reciprocal, or recurring, equations. It was stated, that the roots of these equations consisted of single roots, either $+1$ or $-1$, and of pairs of the form $a, \dfrac{1}{a}, b, \dfrac{1}{b}, \ldots$. We shall suppose that all the roots $\pm 1$ have been expelled, which can always be done by trial, so that the equation is no longer satisfied by $\pm 1$. In this state, which is the simplest of all recurring equations, the polynomial is of even dimensions, and its last term will be

$$= (-a)\left(-\frac{1}{a}\right)(-b)\left(-\frac{1}{b}\right)\cdots = 1.$$

Hence we shall write the equation

$$x^{2m} + p_1 x^{2m-1} + \cdots + p_1 x + 1 = 0.$$

And the polynomial

$$x^{2m} + p_1 x^{2m-1} + \cdots + p_1 x + 1$$

$$= (x - a)\left(x - \frac{1}{a}\right)(x - b)\left(x - \frac{1}{b}\right)\cdots;$$


and, consequently, dividing by $x^m$, we have, after collecting the terms of the polynomial in pairs equidistant from the ends,

$$\left(x^m + \frac{1}{x^m}\right) + p_1\left(x^{m-1} + \frac{1}{x^{m-1}}\right) + \cdots + p_m = (\xi - \alpha)(\xi - \beta)(\xi - \gamma)\cdots,$$

putting

$$x + \frac{1}{x} = \xi,\qquad a + \frac{1}{a} = \alpha,\qquad b + \frac{1}{b} = \beta,\qquad \ldots.$$

It follows, therefore, that the polynomial can be put under the form

$$\xi^m + q_1\,\xi^{m-1} + \cdots + q_m;$$

and that on equating it to zero in this form, its roots, or the values of $\xi$, will be $\alpha, \beta, \gamma, \ldots$; so that the original equation will then be reduced to the m quadratics

$$x^2 - \alpha x + 1 = 0,$$
$$x^2 - \beta x + 1 = 0,$$
$$\cdots$$
$$x^2 - \gamma x + 1 = 0,$$

where $\alpha, \beta, \gamma, \ldots$ depend on an equation of the mth degree only, since they are the roots of

$$\xi^m + q_1\,\xi^{m-1} + \cdots + q_m = 0.$$


We have yet to perform the transformation of the polynomial

(x^m + 1/x^m) + p_1 (x^(m−1) + 1/x^(m−1)) + ⋯⋯ + p_m

into ξ^m + q_1 ξ^(m−1) + ⋯⋯ + q_m,

where ξ = x + 1/x.

Now we have identically

(1 − xt)(1 − t/x) = 1 − ξt + t^2;

and taking the Naperian logarithms, expanding, and equating the coefficients of t^n in the equivalent series, we have

(1/n)(x^n + 1/x^n) = coefficient of t^n in the series

(1/n)(ξt − t^2)^n + (1/(n−1))(ξt − t^2)^(n−1) + ⋯⋯ ;

or x^n + 1/x^n = ξ^n − n ξ^(n−2) + (n(n−3)/1.2) ξ^(n−4) − ⋯⋯

+ (−1)^s (n(n−s−1)(n−s−2) ⋯ (n−2s+1)/1.2.3 ⋯ s) ξ^(n−2s) + ⋯⋯

Hence, by the application of this theorem to the cases n = m, n = m − 1, ⋯⋯ successively, every term of

(x^m + 1/x^m) + p_1 (x^(m−1) + 1/x^(m−1)) + ⋯⋯ + p_m

will be expanded in powers of ξ; and the whole will thus be reduced to

ξ^m + q_1 ξ^(m−1) + ⋯⋯ + q_m.
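The expansion of x^n + 1/x^n in powers of ξ may also be obtained, term by term, from the identity x^n + 1/x^n = ξ(x^(n−1) + 1/x^(n−1)) − (x^(n−2) + 1/x^(n−2)). The following minimal sketch, assuming Python with the sympy library, uses that recurrence (rather than the logarithmic series of the text) to reduce a reciprocal equation to its equation in ξ.

    from sympy import symbols, expand, Poly

    xi = symbols('xi')

    def power_sums(m):
        # [x^0 + x^-0, x^1 + 1/x, ..., x^m + 1/x^m] as polynomials in xi = x + 1/x
        P = [2, xi]
        for n in range(2, m + 1):
            P.append(expand(xi * P[n - 1] - P[n - 2]))
        return P

    def reduce_reciprocal(p):
        # p = [1, p1, ..., pm]: leading coefficients of a reciprocal equation of degree 2m
        m = len(p) - 1
        P = power_sums(m)
        return Poly(expand(p[m] + sum(p[m - n] * P[n] for n in range(1, m + 1))), xi)

    # The example of the text: y^4 - 2y^3 + 3y^2 - 2y + 1 = 0 gives eta^2 - 2*eta + 1 = 0.
    print(reduce_reciprocal([1, -2, 3]))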


By way of exemplifying the process, we shall apply it to the equation

x^10 − 3x^8 + 5x^6 − 5x^4 + 3x^2 − 1 = 0.

Here the first remark to be made is, that there are no odd powers, and therefore the roots are of the form ± √y, where the values of y are given by the equation

y^5 − 3y^4 + 5y^3 − 5y^2 + 3y − 1 = 0.

Secondly,  this  equation  is  satisfied  by  y  =  1,  and  hence 
the  factor  y  —  1  is  to  be  expunged  before  treating  it  as 
a  reciprocal  equation.     This  gives  the  equation 

y^4 − 2y^3 + 3y^2 − 2y + 1 = 0,

which  is  no  longer  satisfied  by  ±  1 . 

Assume η = y + 1/y; and since

(y^2 + 1/y^2) − 2(y + 1/y) + 3 = 0,

we shall have, by the expansion,

(η^2 − 2) − 2η + 3 = 0,

or η^2 − 2η + 1 = 0,

and  there  are  two  roots  =  1 . 

Hence  the  equation  in  y  is  reduced  to  two  equal  quadratics, 
y^2 − y + 1 = 0;
whence  we  have,  for  the  four  values  of  y , 

two roots = (1 + √−3)/2 = cos 60° + √−1 sin 60°,
and two roots = (1 − √−3)/2 = cos 60° − √−1 sin 60°.


Hence the ten values of x will be

two roots, ± 1,

two equal pairs, ± (cos 30° + √−1 sin 30°),

two equal pairs, ± (cos 30° − √−1 sin 30°).

We  may  remark,  that  the  general  equation 

x^n ± 1 = 0

is  of  the  same  kind,   and  when  n  is  not  too  large,  may  be 
solved  in  a  similar  manner. 

But, in every case, we must recollect that it will be extremely advantageous to get rid of the roots ±1 at the very commencement of the operation; for it is of the greatest importance that the equation in ξ should be of as low a degree as possible, on account of the troublesome expansion necessary in forming it.




CHAP.  III. 


ON   THE    THEORY  OF  THE    LIMITS  OF    THE    ROOTS,    AS    FAR 
AS  IT  WAS  KNOWN  PREVIOUS  TO  FOURIER. 


26.  It  very  early  became  an  object  of  inquiry  to  assign 
the intervals in the series of all possible magnitudes from −∞ up to +∞, within which each of the roots of the equation
ought  to  be  sought.  And  although  this  question  was  never 
fully  answered  in  a  form  adapted  to  practice,  until  the  recent 
publication  of  Fourier's  Treatise,  yet  various  interesting  as 
well  as  important  theorems  were  the  result  of  the  researches 
of  former  analysts  in  this  direction.  The  important  deter- 
mination of  the  number  of  impossible  roots  of  the  equation 
was  also  practically  unaccomplished  previous  to  Fourier's 
theory  of  the  separation  of  the  roots.  But  the  investigation 
of  a  limit  to  the  number  of  the  positive  and  negative  roots 
had  been  previously  effected  by  the  theorem,  known  under 
the  name  of  the  Rule  of  Signs,  which  was  first  given  by 
Descartes.  The  properties  of  the  limiting  equation  were 
also  discovered,  and  the  method  of  finding  a  superior  and 
inferior limit. On this account, as well as for the sake of
giving  the  method  of  Fourier  in  a  connected  form,  we  shall 
first  demonstrate  the  principal  theorems  which  had  been 
brought  to  a  complete  state  by  previous  analysts. 



27.  We  shall  commence  with  the  rule  of  signs  of  Descartes, 
which  is  to  be  enunciated  in  the  following  manner : 

There  can  be  no  more  positive  roots  in  any  equation  than 
there  are  changes  of  sign  in  the  series  of  signs  of  the  terms 
composing  the  equation  ;  and  no  more  negative  roots  than 
there  are  continuations  of  sign. 

Let the signs of the equation f(x) = 0, that is, of the polynomial f(x), be

+ − + + ⋯⋯ + + − +,

and  let  a  new  real  root  be  introduced  into  the  equation. 

First, let this be a positive root m; then f(x) must be multiplied by the factor x − m; and setting down only the signs of the multiplication, we have

+ − + + ⋯⋯ + + − +
  − + − − ⋯⋯ − − + −
_____________________________
+ − + ± ⋯⋯ ± − + −


Secondly, let the root be negative, as − m; then multiplying f(x) by x + m, and retaining the signs only of the operation, we have

+ − + + ⋯⋯ + + − +
  + − + + ⋯⋯ + + − +
_____________________________
+ ± ± + ⋯⋯ + ± ± +

Now  it  will  easily  be  seen,  that  we  have  added  at  least 
one  change  of  sign  in  the  first  case,  and  at  least  one  con- 
tinuation in  the  second,  whatever  be  the  signs  of  those  terms 
where it is doubtful whether the sign is to be + or −.
So   that   every   additional   positive    root   introduces   at  least 


one additional change, and every additional negative root at least one additional continuation; and, consequently, there cannot be more positive roots than there are changes, or more negative roots than there are continuations of sign.

If  all  the  roots  are  real,  their  number  is  equal  to  the 
number  of  changes  together  with  the  number  of  continuations. 
Hence  it  follows,  that  the  number  of  positive  roots  is  equal  to 
the  number  of  changes,  and  the  number  of  negative  roots 
equal  to  the  number  of  continuations. 
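The counting of changes and continuations of sign in the rule of Descartes is readily mechanised; the following minimal sketch, assuming Python, counts both for a given list of coefficients (supposed free of zero terms).

    def changes_and_continuations(coeffs):
        """Changes and continuations of sign in a list of nonzero coefficients,
        ordered from the highest power of x down to the constant term."""
        changes = continuations = 0
        for prev, curr in zip(coeffs, coeffs[1:]):
            if prev * curr < 0:
                changes += 1
            else:
                continuations += 1
        return changes, continuations

    # x^5 - 3x^4 - 24x^3 + 95x^2 - 46x - 101: at most 3 positive and 2 negative roots.
    print(changes_and_continuations([1, -3, -24, 95, -46, -101]))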

28. An even number of roots, or none, will lie between α and β, when the quantities f(α) and f(β) have the same sign; and an odd number, when they have a different sign.

For if the roots of f(x) = 0 are

a, b, c, ⋯⋯ ρ(cos θ ± √−1 sin θ), ⋯⋯

then we have the identity

f(x) = (x − a)(x − b) ⋯⋯ {(x − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

and, consequently,

f(α)/f(β) = (α − a)/(β − a) · (α − b)/(β − b) ⋯⋯ {(α − ρ cos θ)^2 + ρ^2 sin^2 θ}/{(β − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

Now if no root lie between α and β, every factor of f(α)/f(β) will be positive; but every root which does lie between those limits will give one corresponding negative factor; since (α − a)/(β − a) is negative only when a lies between α and β.

Hence, if m roots lie between α and β, the sign of f(α)/f(β) depends on (−1)^m; that is, f(α) and f(β) will have the same sign, if m = 0, or if m be even; but they will have different signs, if m be odd.


29. To find a limit greater than the greatest root.

Let a be this greatest root, and suppose α to be greater than a; then, taking the same equation, we have

f(α) = (α − a)(α − b) ⋯⋯ {(α − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

which is positive, since all its factors are so. And the result is still positive for quantities greater than a.

Hence, if we can find a quantity α, such that x = α, and every greater value of x, shall render f(x) positive, then α shall be a limit superior to any root.

But  the  following  method  of  finding  a  superior  limit  is  pre- 
ferable to  the  preceding,  and  is  better  adapted  to  practice : 
Decrease  the  roots  of  the  equation 
f(x) = 0,

each by the quantity λ; which is done by putting

y = x − λ,    or    x = y + λ;

and the final equation in y is

f(λ + y) = 0,

or f(λ) + y f′(λ) + (y^2/1.2) f″(λ) + ⋯⋯ + y^n = 0.

Now, by taking λ large enough, the signs of f(λ), f′(λ), f″(λ), ⋯⋯ can be rendered all positive; and when this is the case there is no positive value of y, as there is no change of sign.

Consequently, there can be no positive value of x − λ; or, in other words, λ is a superior limit to the roots of the equation f(x) = 0.

Hence, the rule is to find a value λ for x, such as shall render positive the series of polynomials

f(x), f′(x), f″(x), ⋯⋯ f^n(x);

and that value is a superior limit for the roots of the equation

f(x) = 0.

We may remark, that the same quantity λ is also a superior limit to the roots of any of the equations

f^n(x) = 0,  f^(n−1)(x) = 0,  ⋯⋯  f′(x) = 0.

30.  To  find  an  inferior  limit  to  the  roots  of  the  same 
equation,  we  must  change  the  signs  of  the  roots,  and  find  a 
superior  limit  to  the  new  roots :  this  limit,  with  its  sign 
changed,  will  be  an  inferior  limit  to  the  original  roots. 

We  can  also  find  a  superior  limit  to  the  negative  roots,  and 
an  inferior  limit  to  the  positive  roots.  For  if  we  change  the 
roots  to  their  reciprocals,  and  find  the  superior  and  inferior 
limits  of  the  new  roots,  then  the  reciprocals  of  these  limits 
will  be  respectively  the  inferior  limit  to  the  positive  roots, 
and  the  superior  limit  to  the  negative  roots  of  the  original 
equation. 

31.  It  was  implied  in  the  statement  of  Art.  26,  that  the 
theoretical  separation  of  the  real  roots,  or  the  assignment  of 
the  interval  containing  each  root,  had  been  accomplished 
previous  to  the  method  of  Fourier.  This  was  done  by 
Lagrange,  in  the  following  manner: 

First,  let  the  equation  be  cleared  of  equal  roots.  Secondly, 
form  the  equation  of  the  squares  of  the  differences  of  the 
roots, and find its inferior limit δ^2; the quantity δ will be inferior to any difference of the roots, taking always the less from the greater. Lastly, find the superior and inferior limits of the roots, l and l′, and it is evident that any interval of the series

l′, l′ + δ, l′ + 2δ, ⋯⋯ up to l,

will  contain  no  root,  or  one  only ;  and  thus,  by  trial,  we  can 
find  the  intervals  of  the  real  roots,  and  consequently  their 
number. 

The  method  of  trying  the  intervals  of  the  series 
l′, l′ + δ, l′ + 2δ, ⋯⋯ up to l,

is  to  substitute  the  terms  of  that  series  successively  in  f(x), 
and  to  write  the  signs  of  the  results  in  order.  Every  change 
of  sign  in  that  row  of  signs,  points  out  the  interval  containing 
one  of  the  roots;  and  the  number  of  such  changes  of  sign 
gives the number of real roots of the equation f(x) = 0.

Hence  also,  by  subtracting  this  number  from  n,  we  obtain 
the  number  of  impossible  roots. 

But  although  perfect  in  theory,  yet  in  practice  this  method 
is  useless,  except  for  equations  below  the  fifth  degree  ;  since 
the  equation  of  the  squares  of  the  differences  cannot  be 
formed.  Besides,  the  process  has  the  disadvantage  of  trying 
a  large  number  of  intervals,  many  of  which  the  method  of 
Fourier  will  at  once  exclude. 


32. The real roots of the equation f′(x) = 0 lie between those of f(x) = 0; so that an odd number of the roots of f′(x) = 0 will be found between every two of the roots of f(x) = 0, when the real roots of both equations are written in one series in the order of magnitude.

For let a, b, c, ⋯⋯ be the real roots of the equation f(x) = 0; then we have

f(x) = (x − a)(x − b)(x − c) ⋯⋯ {(x − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯


and if we take the logarithms of these expressions, and differentiate,

f′(x)/f(x) = 1/(x − a) + 1/(x − b) + ⋯⋯ + 2(x − ρ cos θ)/{(x − ρ cos θ)^2 + ρ^2 sin^2 θ} + ⋯⋯ ;

so that, on multiplying by f(x), every fraction on the second side will become a polynomial; and we may remark that the factor x − a will enter into every polynomial except f(x)/(x − a), and the factor x − b into every one but f(x)/(x − b), and so on for all the real factors.

Hence, upon substituting for x the value a, these polynomials will all vanish, except f(x)/(x − a); and on putting x = b, the only one which does not vanish will be f(x)/(x − b); and so on for all the real roots of f(x) = 0. And, consequently,

f′(a) = value of f(x)/(x − a) due to x = a

= (a − b)(a − c) ⋯⋯ {(a − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

and, by a similar process,

f′(b) = (b − a)(b − c) ⋯⋯ {(b − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

f′(c) = (c − a)(c − b) ⋯⋯ {(c − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

And, by forming the successive quotients

f′(a)/f′(b) = − (a − c)/(b − c) · (a − d)/(b − d) ⋯⋯ {(a − ρ cos θ)^2 + ρ^2 sin^2 θ}/{(b − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯

f′(b)/f′(c) = − (b − a)/(c − a) · (b − d)/(c − d) ⋯⋯ {(b − ρ cos θ)^2 + ρ^2 sin^2 θ}/{(c − ρ cos θ)^2 + ρ^2 sin^2 θ} ⋯⋯


from which we conclude that all these quotients are negative, since no factor of any one of them is negative, except the factor −1; for we cannot have (a − c)/(b − c) negative, unless c lies between a and b, which is not the case. Consequently the series of quantities

f′(a), f′(b), f′(c), ⋯⋯

are alternately positive and negative; and an odd number of roots of f′(x) = 0 lie between every two of the roots a, b, c, ⋯⋯ of the equation f(x) = 0.
of  the  equation  f(x)  =  0. 

It is manifest that if r be the number of real roots of f(x) = 0, then the least number of real roots that f′(x) = 0 can have is r − 1, so that one root may lie between every two of the r roots of f(x) = 0.

The  case  of  equal  roots  might  be  considered  in  a  similar 
manner.  But  it  is  better  to  clear  the  equation  of  equal  roots 
by  the  method  previously  given,  and  apply  the  present  rule  to 
the  reduced  equations  given  by  that  method. 

We may remark, that when the limiting equation f′(x) = 0 can be solved, and we have its real roots α, β, γ, δ, ⋯⋯ in the order of magnitude, then the roots a, b, c, d, ⋯⋯ of the primitive equation f(x) = 0, lie singly between the terms of the series

∞, α, β, γ, δ, ⋯⋯ , −∞.

Hence, if we form the series of results

f(∞), f(α), f(β), f(γ), ⋯⋯ f(−∞),

the number of changes of sign in the series of signs of these results will be the number of the real roots a, b, c, d, ⋯⋯
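When the roots of the limiting equation can be found numerically, the counting just described may be illustrated as follows; the sketch assumes Python with the numpy library, and supposes the equation already cleared of equal roots.

    import numpy as np

    def count_real_roots(coeffs):
        """Count the real roots of f(x) = 0 by the signs of f at +infinity, at the
        real roots of the limiting equation f'(x) = 0 taken in decreasing order,
        and at -infinity."""
        f = np.poly1d(coeffs)
        crit = sorted(r.real for r in f.deriv().roots if abs(r.imag) < 1e-9)
        lead = np.sign(coeffs[0])
        degree = len(coeffs) - 1
        signs = [lead]                                   # sign of f(+infinity)
        signs += [np.sign(f(c)) for c in reversed(crit)]
        signs += [lead * (-1) ** degree]                 # sign of f(-infinity)
        return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

    # x^3 + 2x^2 - 3x + 2 = 0 has one real root.
    print(count_real_roots([1, 2, -3, 2]))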


CHAP.  IV. 


ON  THE  SEPARATION  OF  THE    ROOTS   BY  THE    METHOD   OF 

FOURIER. 

33. In order to investigate the intervals between which each root of the equation

f(x) = 0

is to be found, it will be necessary for us to consider, at one view, the series of polynomials

f(x), f′(x), f″(x), ⋯⋯ f^n(x),

which it will be more convenient to write in the inverse order,

f^n(x), f^(n−1)(x), ⋯⋯ f″(x), f′(x), f(x).

Let  us  now  give  to  x  a  determinate  value  a,  positive  or 
negative.  And  let  the  signs  of  the  above  series  of  quantities 
be  set  down,  instead  of  the  quantities  themselves.  A  series  of 
signs  will  thus  be  formed  which  will  always  commence  with  +  ; 

because f^n(x) = 1.2.3 ⋯⋯ n is always positive.    But

the  signs  after  the  first  will  vary,  as  a  varies  in  magnitude. 
If now we begin with a = −∞, the series will consist of + and −, alternately; since the dimensions of the polynomials are, respectively,

0, 1, 2, 3, ⋯⋯ , n,


and on substituting −∞ for x, the signs will depend upon their first terms, that is, upon

(−1)^0, (−1)^1, (−1)^2, ⋯⋯ (−1)^n.

And when a has gone through all stages of magnitude, up to +∞, the series of signs will evidently contain no −. Hence the n changes of sign due to −∞ have become continuations, when a has become +∞.

For  the  sake  of  brevity  we  shall  denote  the  series 

f^n(a), f^(n−1)(a), ⋯⋯ f′(a), f(a),

by  the  symbol,  result  (a). 

And  the  series  of  signs  of  the  terms  composing  that  series, 
will  be  denoted  by  the  symbol,  sign  (a). 

Also,  the  number  of  changes  of  sign  in  the  series  of  signs, 
will  be  denoted  by  the  symbol,  change  (a). 

34. We shall now proceed to show, that as a continually increases from −∞ to +∞, change (a) will from time to time diminish by one or more units, but that it will never increase. And that change (a) will lose one unit every time that a becomes equal to a root of the equation f(x) = 0.

It  is  evident  that  sign  (a)  can  receive  no  change  of  any  of 
its  terms,  so  long  as  the  gradual  increase  of  a  does  not  at  some 
instant render zero one or more of the terms of result (a).
It  becomes  necessary,  therefore,  to  examine  what  takes  place 
for  values  of  a,  just  before  and  just  after  that  value,  which  has 
produced  such  zero  or  zeros. 

In the first place, then, we shall suppose that there is only one such zero, and that this is the last term of result (a); in which case, f(a) = 0, and a is a root. Write for a successively a ∓ da; then, since no difference can exist between


sign (a ∓ da) and sign (a) if we omit their last terms, we have only to compare the last two terms of

result (a − da) and result (a + da),

to estimate the difference between

sign (a − da) and sign (a + da).

These two terms will be

f′(a − da), f(a − da);
f′(a + da), f(a + da).

And since we are concerned with the signs of these quantities only, and not with their magnitudes, we may write these terms

f′(a), − da f′(a);
f′(a), + da f′(a);

and so far as concerns the change or continuation of sign between these two terms, we may omit the factor f′(a), so that we have

1, − da;
1, + da.

And it is manifest that we shall have change (a + da) less by unity than change (a − da). This loss of one change corresponds to the root a.

Secondly, let there still be but one zero in result (a), but not in its last term; as for instance, f^r(a) = 0. Here it is manifest that sign (a ∓ da) and sign (a) will be perfectly alike, except so far as regards the term corresponding to f^r(a). And to judge of the effect of this term upon

sign (a − da) and sign (a + da),

we need only consider that term and its neighbours on either side, in the two series of numbers

result (a − da) and result (a + da).


These three terms will be

f^(r+1)(a − da), f^r(a − da), f^(r−1)(a − da);
f^(r+1)(a + da), f^r(a + da), f^(r−1)(a + da);

or so far as concerns the changes or continuations of sign of the corresponding terms of

sign (a − da) and sign (a + da),

we may write these terms

f^(r+1)(a), − da f^(r+1)(a), f^(r−1)(a);
f^(r+1)(a), + da f^(r+1)(a), f^(r−1)(a);

or, dividing the terms by f^(r+1)(a), and setting down q for the quotient of f^(r−1)(a) divided by f^(r+1)(a), we have

1, − da, q;
1, + da, q.

And it is manifest that change (a − da) − change (a + da) will be equal to 0 or 2, according as q is negative or positive; that is, according as f^(r+1)(a) and f^(r−1)(a) are of opposite or of similar signs.

Thirdly, let us suppose that there are r successive zeros, and that these are the r last terms of result (a); so that there are no other zeros than

f^(r−1)(a) = 0, f^(r−2)(a) = 0, ⋯⋯ f′(a) = 0, f(a) = 0.

It will be necessary here to consider the effect of the substitution of (a ∓ δ), where δ is very small, but not infinitely small; otherwise we should still have r − 1 zeros. As in the previous cases, we shall have to consider only the terms

f^r(a ∓ δ), f^(r−1)(a ∓ δ), ⋯⋯ f′(a ∓ δ), f(a ∓ δ);

or, expanding by Taylor's series, and observing the r zeros given above, as well as omitting positive numerical coefficients, inasmuch as the signs only are the object of investigation, we may write

f^r(a), − δ f^r(a), + δ^2 f^r(a), ⋯⋯ (− δ)^r f^r(a);
f^r(a), + δ f^r(a), + δ^2 f^r(a), ⋯⋯ + δ^r f^r(a);

or, suppressing the common factor f^r(a),

1, − δ, + δ^2, − δ^3, ⋯⋯ (− δ)^r;
1, + δ, + δ^2, + δ^3, ⋯⋯ + δ^r;

and, consequently, in passing from sign (a − δ) to sign (a + δ), there are lost exactly r changes of sign; which correspond to the r equal roots (a) of the equation, pointed out by the circumstance of having at once

f(a) = 0, f′(a) = 0, ⋯⋯ f^(r−1)(a) = 0.

Fourthly, let there be r successive zeros, not the last r terms of result (a); as, for instance, if we have

f^(m−1)(a) = 0, f^(m−2)(a) = 0, ⋯⋯ f^(m−r)(a) = 0;

and let there be no other zeros.

Here, by proceeding on the same plan, we shall have to consider the terms

f^m(a ∓ δ), ⋯⋯ f^(m−r−1)(a ∓ δ);

and by expanding and simplifying, as in the preceding cases, and setting down q for the quotient of f^(m−r−1)(a) divided by f^m(a), we shall at last reduce these terms to

1, − δ, + δ^2, ⋯⋯ (− δ)^r, q;
1, + δ, + δ^2, ⋯⋯ + δ^r, q;

so that there will be a loss of exactly r changes in passing from sign (a − δ) to sign (a + δ), when r is even; and when r is odd, we shall have

change (a − δ) − change (a + δ) = r ∓ 1,

according as q is negative or positive, that is, according as f^m(a) and f^(m−r−1)(a) have opposite or similar signs.



But in none of these cases can any change of sign be gained in passing from sign (a − δ) to sign (a + δ).

Finally,  if  there  be  several  such  zeros,  or  sets  of  successive 
zeros,  in  different  parts  of  result  (a),  the  above  reasoning  will 
apply  to  every  such  zero,  or  set  of  successive  zeros.  So  that 
we  are  now  able  to  draw  the  following  conclusions  : 

1st. There is a continual diminution of change (a), by one or more units at a time, during the increase of a from −∞ to +∞; and change (a) never increases during that increase of a.

The limits of change (a) are

change (−∞) = n,
change (+∞) = 0.

2ndly. Every time that a becomes equal to a root of the equation, or equal to each of a set of equal roots, change (a) will lose, in the passage of a through that value, one unit for every such root. And change (a) must thus lose, during the increase of a from −∞ to +∞, as many units as there are real roots.

3rdly.  If  a  has  such  a  value  as  shall  render  zeros  any 
number  of  terms  of  result  (a),  though  a  is  not  a  root  of  the 
equation,  then  change  (a)  may  lose,  in  the  passage  of  a 
through  that  value,  an  even  number  of  units,  or  may  not  lose 
any.  And  from  the  preceding  conclusion,  with  respect  to 
the  real  roots,  it  is  evident  that  every  loss  of  a  pair  of  changes 
in  this  manner,  must  correspond  to  a  pair  of  impossible  roots 
of  the  given  equation. 

35.  The  preceding  conclusions  at  once  give  us  a  rule  for 
avoiding  those  intervals,  in  our  investigation  of  the  roots, 
which  cannot  contain  any  roots  of  the  equation   under  con- 


sideration. For change (a) must lose one of its units for every root that a passes in its increase; and it may lose besides these units, an even number more corresponding to
imaginary  roots.  Hence  there  cannot  be  more  real  roots 
between  a  and  b,  than  there  are  units  in  the  number 

change  (a)  —  change  (b); 

supposing  b  to  be  greater  than  a. 

If,  therefore,  we  find  that 

change  (a)  =  change  (b), 

we  may  rest  assured  that  no  real  root  lies  between  a  and  b. 

If  we  have 

change  (a)  =  change  (b)  -f  1, 

there  will  evidently  be  one  real  root  of  the  equation  between 
a  and  b,  and  only  one. 

And,  generally,  if  we  have 

change (a) = change (b) + an odd number λ,

there is at least one root between a and b, and there may be 3, or 5, ⋯⋯ or λ, roots between those limits.

But  if,  on  the  contrary,  we  have 

change (a) = change (b) + an even number μ,

we  cannot  affirm  that  there  is  any  root  of  the  equation  between 

a and b, though there may be 2, or 4, or 6, ⋯⋯ or μ,

roots  between  those  limits. 
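The number change (a), and the limit which change (a) − change (b) places upon the roots in the interval (a, b), may be computed as in the following minimal sketch, assuming Python with the sympy library; it supposes that no term of the series vanishes at a or at b, the case of zeros being treated by the rule of the double sign in Art. 37.

    from sympy import symbols

    x = symbols('x')

    def derivative_series(f):
        """[f, f', f'', ..., f^(n)] for a polynomial f of degree n."""
        series = [f]
        while series[-1].diff(x) != 0:
            series.append(series[-1].diff(x))
        return series

    def change(series, a):
        """Changes of sign in f^n(a), ..., f'(a), f(a); no term may vanish at a."""
        values = [d.subs(x, a) for d in reversed(series)]
        return sum(1 for u, v in zip(values, values[1:]) if u * v < 0)

    f = x**5 - 3*x**4 - 24*x**3 + 95*x**2 - 46*x - 101
    s = derivative_series(f)
    # change(1) - change(10) = 3: not more than three real roots lie between 1 and 10.
    print(change(s, 1), change(s, 10))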


36. We may remark, that by taking the limits −∞ and 0, or 0 and +∞, we find the limits which the numbers of negative or positive roots cannot exceed; and, in fact, we
obtain  the  rule  of  signs  of  Descartes. 


For we shall have the limit of the number of positive roots of the equation, or of the roots between 0 and ∞, equal to

change (0) − change (∞)

= change (0).

Now  if  we  consider  the  series,  sign  (0),  we  shall  find  that 
its  signs  are  the  same  as  those  of  the  coefficients  of  the 
original  equation,  and,  consequently, 

change (0) = number of changes in f(x);

so that there are not more positive roots than changes of sign in f(x).

And  the  limit  of  the  number  of  negative  roots  will  evidently 
be  equal  to 

change (−∞) − change (0)

= n − number of changes in f(x)

= number of continuations in f(x),
which  accords  with  the  rule  of  signs. 

37.  There  is  one  important  remark  to  be  made  before 
proceeding  to  the  application  of  the  theorem  with  respect  to 
the  number,  change  (a). 

If it happens that there are any zeros in the series, sign (a), then it is not possible to estimate properly the number, change (a). In this case we must have recourse to the two series, sign (a ∓ δ), and estimate the two numbers, change (a ∓ δ). And when it is required to compare the numbers, change (a − b), change (a), change (a + b), we must compare change (a − b) with change (a − δ), and change (a + b) with change (a + δ); b being any positive quantity, and δ a very small positive quantity.

The rule for deducing sign (a ± δ) from sign (a), which
contains  zeros,  may  be  collected  from  the  preceding  theory. 


It  may  be  enunciated  thus  : 

To form the series sign (a − δ) from sign (a), we must commence copying the signs of sign (a) from left to right; but when we arrive at a zero in sign (a), we are to set down the sign opposite to the one just written, for the corresponding term of sign (a − δ).

To form the series for sign (a + δ), we must proceed as before; only that for the zero we are to set down the same sign as the one just written.

The following example of the rule of the double sign will best explain its meaning:

sign (a − δ) = + + − + − + + − + − −
sign (a)      = + + − 0 0 0 + − + 0 −
sign (a + δ) = + + − − − − + − + + −
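The rule of the double sign may be expressed as in the following minimal sketch, assuming Python; the example is the row of signs given above.

    def double_sign(signs):
        """Given sign(a) as a list of +1, -1 and 0 entries (highest derivative first),
        return the pair sign(a - delta), sign(a + delta) by the rule of the double sign."""
        before, after = [], []
        for s in signs:
            if s != 0:
                before.append(s)
                after.append(s)
            else:
                before.append(-before[-1])   # opposite of the sign just written
                after.append(after[-1])      # same as the sign just written
        return before, after

    # sign(a) = + + - 0 0 0 + - + 0 -
    print(double_sign([1, 1, -1, 0, 0, 0, 1, -1, 1, 0, -1]))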


38.   The  following  are  examples  of  the  application  of  the 
preceding  method  to  particular  equations  : 

Ex. 1. x^5 − 3x^4 − 24x^3 + 95x^2 − 46x − 101 = 0.

Here we have the polynomials

f(x) = x^5 − 3x^4 − 24x^3 + 95x^2 − 46x − 101,
f′(x) = 5x^4 − 12x^3 − 72x^2 + 190x − 46,
f″(x) = 20x^3 − 36x^2 − 144x + 190,
f‴(x) = 60x^2 − 72x − 144,
f^iv(x) = 120x − 72,
f^v(x) = 120.

And if we substitute for x, successively,

−10, −1, 0, 1, 10,

we shall find

sign (−10) = + − + − + −
sign (−1)  = + − − + − +
sign (0)   = + − − + − −
sign (1)   = + + − + + −
sign (10)  = + + + + + +;

from which we obtain the numbers

change (−10) = 5
change (−1)  = 4
change (0)   = 3
change (1)   = 3
change (10)  = 0.

Hence  we  conclude,  that  one  root  of  the  equation  lies 
between  —  10  and  —  1;  a  second  lies  between  —  1  and  0; 
a  third  between  1  and  10;  and  the  remaining  pair,  if  they 
are  real,  are  also  comprised  between  the  limits  1  and  10. 
At  present  we  cannot  determine  the  question  concerning  these 
roots,  whether  they  are  real  or  not;  but  we  shall  hereafter 
give  a  rule  applicable  to  such  cases. 


Ex. 2. Let the proposed equation be

x^4 − 4x^3 − 3x + 23 = 0.

The series of polynomials will be

f(x) = x^4 − 4x^3 − 3x + 23,
f′(x) = 4x^3 − 12x^2 − 3,
f″(x) = 12x^2 − 24x,
f‴(x) = 24x − 24,
f^iv(x) = 24.

And if we substitute for x, successively,

0, 1, 10,

we shall find

sign (0)   = + − 0 − +
sign (1)   = + 0 − − +
sign (10)  = + + + + +;

and applying the rule of the double sign, we shall write these series

sign (∓ δ)   = + − ± − +
sign (1 ∓ δ) = + ∓ − − +
sign (10)    = + + + + +,

δ representing a very small fraction.

Hence we obtain the numbers

change (− δ)   = 4
change (+ δ)   = 2
change (1 − δ) = 2
change (1 + δ) = 2
change (10)    = 0,

from which we conclude, that there are two impossible roots, because two changes are lost at once between the substitution of − δ and of + δ, although 0 is not a root; and that the remaining two roots, if they are real, must lie in the interval from 1 to 10.


Ex. 3. The cubic equation

x^3 + 2x^2 − 3x + 2 = 0

gives the following table:

f(x) = x^3 + 2x^2 − 3x + 2,
f′(x) = 3x^2 + 4x − 3,
f″(x) = 6x + 4,
f‴(x) = 6.

sign (−10) = + − + −
sign (−1)  = + − − +
sign (0)   = + + − +
sign (1)   = + + + +;

change (−10) = 3
change (−1)  = 2
change (0)   = 2
change (1)   = 0;

one root lies between −10 and −1; and the other two lie, if they are real, between 0 and 1.


Ex. 4. When the rule is applied to binomial equations, of the form

x^n ± 1 = 0,

we shall immediately find the number of impossible roots. Thus, if we have the equation

x^6 − 1 = 0,

we shall form the table

f(x) = x^6 − 1,
f′(x) = 6x^5,
f″(x) = 6.5.x^4,
f‴(x) = 6.5.4.x^3,
f^iv(x) = 6.5.4.3.x^2,
f^v(x) = 6.5.4.3.2.x,
f^vi(x) = 6.5.4.3.2.1;

and the substitution of −1, 0, +1, successively, will give

sign (−1) = + − + − + − 0
sign (0)  = + 0 0 0 0 0 −
sign (1)  = + + + + + + 0;

and upon applying the rule of the double sign, we obtain

sign (−1 − δ) = + − + − + − +
sign (−1 + δ) = + − + − + − −
sign (− δ)     = + − + − + − −
sign (+ δ)     = + + + + + + −
sign (1 − δ)   = + + + + + + −
sign (1 + δ)   = + + + + + + +.

Here we conclude, from the corresponding numbers,

change (−1 − δ) = 6
change (−1 + δ) = 5
change (− δ)     = 5
change (+ δ)     = 1
change (1 − δ)   = 1
change (1 + δ)   = 0,

that there are only two real roots, −1 and +1; and that the remaining four impossible roots correspond to the simultaneous loss of four changes in passing through zero.

Ex. 5. Let the equation be

x^5 + 3x^4 + 2x^3 − 3x^2 − 2x − 2 = 0.

f(x) = x^5 + 3x^4 + 2x^3 − 3x^2 − 2x − 2,
f′(x) = 5x^4 + 12x^3 + 6x^2 − 6x − 2,
f″(x) = 20x^3 + 36x^2 + 12x − 6,
f‴(x) = 60x^2 + 72x + 12,
f^iv(x) = 120x + 72,
f^v(x) = 120.

We shall therefore have the following table, writing the double sign for a zero according to the rule:

              V   IV  III  II   I   0
sign (−1)     +   −   ±    −    +   −
sign (0)      +   +   +    −    −   −
sign (1)      +   +   +    +    +   −
sign (10)     +   +   +    +    +   +.

Here two roots are impossible; two may lie between −1 and 0; and the remaining root lies between 1 and 10.

The above is the abbreviated form of the table best adapted for practice.


Ex. 6. Let the equation be

x^5 − 10x^3 + 6x + 1 = 0.

f(x) = x^5 − 10x^3 + 6x + 1,
f′(x) = 5x^4 − 30x^2 + 6,
f″(x) = 20x^3 − 60x,
f‴(x) = 60x^2 − 60,
f^iv(x) = 120x,
f^v(x) = 120.

              V   IV  III  II   I   0
sign (−10)    +   −   +    −    +   −
sign (−1)     +   −   ±    +    −   +
sign (0)      +   ∓   −    ±    +   +
sign (1)      +   +   ∓    −    −   −
sign (10)     +   +   +    +    +   +.

Here one root lies between −10 and −1; two roots, if they are real, lie between −1 and 0; one root lies between 0 and 1; and the remaining root lies between 1 and 10.



39.  We  have  now  completely  determined  the  intervals 
in  which  the  roots  of  the  equation  must  be  sought ;  and  have 
excluded  from  the  investigation  all  those  intervals  which  by 
the  rule  of  signs  cannot  contain  any  root.  These  latter  inter- 
vals are  of  much  greater  extent  than  the  former ;  and  it  is  in 
the  exclusion  of  these  that  the  method  of  Fourier  possesses 
one  of  its  chief  advantages  over  that  of  Lagrange. 

But  there  is  still  another,  and  a  very  important  question, 
which  here  presents  itself,  in  those  cases  where  any  interval 
contains  more  than  one  root :  are  those  roots  all  real  ? 

The first example offers an instance of this inquiry. It was found that between 1 and 10, three roots of the equation were to be sought. One of these must be real; but the
remaining  pair,  so  far  as  we  know  at  present,  may  be 
imaginary.  We  might  subdivide  the  interval  into  several 
smaller  intervals,  and  by  continuing  such  a  process  we  should 
at  last  separate  the  three  roots,  if  they  are  real,  and  we  should 
have  each  of  them  comprised  within  a  separate  interval.  But 
if  two  of  the  roots  are  imaginary,  the  process  of  subdivision 
would  not  come  to  a  conclusion ;  and  we  should  still  be  igno- 
rant, whether  the  separation  was  impossible  because  the  roots 
were  imaginary,  or  was  only  very  much  delayed  because  their 
difference  was  extremely  small. 

This  difficulty  is  theoretically  surmounted  by  the  method  of 
Lagrange;  for  if  we  can  find  the  inferior  limit  of  the  differences 
of  the  roots,  we  shall  know  when  to  stop  in  our  subdivision  of 
intervals.  But  the  practical  determination  of  that  limit  is 
impossible for equations whose degree is greater than 3 or 4;
as  the  calculation  of  it  becomes  complicated  more  and  more 
for  each  additional  unit  in  the  degree  of  the  equation,  and  the 
difficulty  increases  at  a  rapid  rate.  Hence  we  must  seek  some 
other  method,  as  a  criterion  of  the  reality  of  the  included 


roots. Such a criterion, in a form readily applicable to practice, has been given by Fourier.

40. It will be convenient, before proceeding to the demonstration of the criterion of Fourier, to adopt an abbreviated and expressive notation for the number change (a) − change (b), which indicates the number of roots of the equation f(x) = 0 to be sought in the interval from a to b. This will be effected by writing for the number change (a) − change (b), the expression, root-index (a, b). The method of finding this root-index for any equation, and for any interval, is obvious from what has preceded. We need only refer to Example (1) of Art. (38). For that equation we find the following root-indices:

root-index (−10, −1) = 1,
root-index (−1, 0) = 1,
root-index (1, 10) = 3.

But further, in considering the interval from a to b, it will not be sufficient to determine for the polynomial f(x), the quantities change (a), change (b), and their difference root-index (a, b). We must apply the same process to every one of the derivatives

f′(x), f″(x), ⋯⋯ f^n(x).

The three series of numbers so obtained are to be written in reverse order, that is, in the order of

f^n(x), f^(n−1)(x), ⋯⋯ f′(x), f(x),

and will form what we shall term change-series (a), change-series (b), and index-series (a, b).

Referring again to Art. (38), and to Example (1), for illustration, we shall have the following table for the interval (1, 10).

                        V   IV  III  II   I   0
sign (1)                +   +   −    +    +   −
sign (10)               +   +   +    +    +   +
change-series (1)       0   0   1    2    2   3
change-series (10)      0   0   0    0    0   0
index-series (1, 10)    0   0   1    2    2   3;

the last series is formed by subtracting the terms of the two preceding series vertically.

In practice it is usual to write only the index-series, and its terms are set down between the rows of signs, thus:

              V   IV  III  II   I   0
sign (1)      +   +   −    +    +   −
              0   0   1    2    2   3
sign (10)     +   +   +    +    +   +.
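The change-series and index-series may be formed as in the following minimal sketch, assuming Python with the sympy library; the example reproduces the table just given, and supposes that no term of the series vanishes at the limits.

    from sympy import symbols

    x = symbols('x')

    def derivative_series(f):
        series = [f]
        while series[-1].diff(x) != 0:
            series.append(series[-1].diff(x))
        return series

    def change_series(series, a):
        """Prefix counts: the k-th entry is the number of changes of sign in the
        series f^n(a), f^(n-1)(a), ..., down to its k-th term."""
        values = [d.subs(x, a) for d in reversed(series)]
        counts, c = [0], 0
        for u, v in zip(values, values[1:]):
            if u * v < 0:
                c += 1
            counts.append(c)
        return counts

    def index_series(series, a, b):
        """Term-by-term difference of the two change-series for the interval (a, b)."""
        return [i - j for i, j in zip(change_series(series, a), change_series(series, b))]

    f = x**5 - 3*x**4 - 24*x**3 + 95*x**2 - 46*x - 101
    print(index_series(derivative_series(f), 1, 10))    # [0, 0, 1, 2, 2, 3]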

As a complete acquaintance with the preceding notation will
be  necessary,  in  order  to  follow  the  train  of  reasoning,  we  add 
the  following  examples,  in  which  the  process  of  forming  the 
index-series  is  shown  : 

Ex. 1. x^7 − 2x^5 − 3x^3 + 4x^2 − 5x + 6 = 0.

f(x) = x^7 − 2x^5 − 3x^3 + 4x^2 − 5x + 6,
f′(x) = 7x^6 − 10x^4 − 9x^2 + 8x − 5,
f″(x) = 42x^5 − 40x^3 − 18x + 8,
f‴(x) = 210x^4 − 120x^2 − 18,
f^iv(x) = 840x^3 − 240x,
f^v(x) = 2520x^2 − 240,
f^vi(x) = 5040x,
f^vii(x) = 5040,


from which we obtain the following table of signs, and can thence form the index-series for each interval, as follows:

              VII  VI  V   IV  III  II   I   0
sign (−10)     +   −   +   −   +    −    +   −
               0   0   0   0   0    1    1   1
sign (−1)      +   −   +   −   +    +    −   +
               0   0   1   1   1    0    0   0
sign (0)       +   ∓   −   ±   −    +    −   +
               0   0   1   1   1    1    2   2
sign (1)       +   +   +   +   +    −    −   +
               0   0   0   0   0    1    1   2
sign (10)      +   +   +   +   +    +    +   +.
We have here placed the indices between the rows of signs corresponding to the interval; the method of finding any index, as, for instance, root-index (−1, 0), corresponding to f‴(x), is to consider the signs as far as that term only for the required interval,

sign (−1)   + − + − +
sign (0)    + − − + −

of which the first line counts four changes, and the second three; their difference 1 is the required root-index (−1, 0) for f‴(x).

Ex. 2. x^3 + 2x^2 − 3x + 2 = 0.

f(x) = x^3 + 2x^2 − 3x + 2,
f′(x) = 3x^2 + 4x − 3,
f″(x) = 6x + 4,
f‴(x) = 6,

from which we have the table

              III  II   I   0
sign (−10)     +   −    +   −
               0   0    1   1
sign (−1)      +   −    −   +
               0   1    0   0
sign (0)       +   +    −   +
               0   0    1   2
sign (1)       +   +    +   +.

We  may  remark,  in  general,  that  the  difference  of  two 
successive  terms  of  any  index-series  is  always  either  0,  or  ±1; 
this  is  a  consequence  of  the  formation  of  these  indices. 

41.  After  having  thus  defined  at  length  the  quantities  which 
we  are  about  to  consider,  and  expressed  them  by  a  notation  con- 
venient at  once  by  its  brevity  and  perspicuity,  we  shall  be  the 
better  able  to  bring  under  one  view  the  rule  given  by  Fourier, 
for  distinguishing  the  nature  of  the  roots  indicated  in  any 
interval,  supposing  more  than  one  to  be  so  indicated. 

When the index-series is formed, we are to choose that term of it in which the index 1 appears for the last time in the series. The following index must be 2; for it is necessarily one of the three numbers 0, 1, 2. If it was 0, then there must be some index 1 later in the series, as the last index is not 0. If it was 1, then the index 1 first chosen was not the latest index 1, as it ought to have been. Both these cases are therefore excluded, and the index 1 first chosen is followed by an index 2. We shall suppose that the preceding index is p; and that these three indices correspond to the three polynomials

f^(r+1)(x),   f^r(x),   f^(r−1)(x)
    p,           1,         2.



The index p is either 0, 1, or 2. If it be not 0, we can reduce it to 0 in the following manner. Since f^r(x) = 0 has only one root lying in the interval (a, b), f^(r+1)(x) = 0 cannot have a root equal to that root; for then f^r(x) = 0 must have equal roots lying in the interval (a, b), which is not the case. Hence there can always be found a new interval (a′, b′) included within the larger one under consideration, for which new interval the index p is zero, and the succeeding index is still unity. The larger interval is thus divided into the three intervals

(a, a′), (a′, b′), (b′, b),

which give the corresponding series of indices as far as these three terms are concerned,

            f^(r+1)(x)   f^r(x)   f^(r−1)(x)
(a, a′)         p           0         q
(a′, b′)        0           1         t
(b′, b)         p           0         q,

where the indices p, q may be either 0 or 1; and t may be either 0, 1, or 2.

We may now discard the extreme intervals (a, a′), (b′, b), since for these there will be some index 1 at a later part of the series; and for those intervals, therefore, the separation of roots has been carried towards the end of the series, past the term f^r(x). The object of the whole process is evidently to reduce the last index to 0 or 1; and this is effected by choosing, as we have done, the latest index 1, and moving it, if possible, nearer to the end of the series. The remaining interval (a′, b′) is the only one for which this removal of the latest index 1 may not have been effected; and we shall now proceed to investigate the possibility of such a removal in this case, that is, when t = 2.



42. The only case now left for our consideration, is that presented by the table

          f^(r+1)(x)   f^r(x)   f^(r−1)(x)
(a, b)        0           1          2,

and the object is, if possible, to separate the two roots of the equation

f^(r−1)(x) = 0,

indicated by the above table.

For this purpose, we must seek a value c intermediate to a and b, such that f^(r−1)(c) shall have a different sign from that of f^(r−1)(a) and f^(r−1)(b); or such that

f^(r−1)(c)/f^(r−1)(a) = neg.,        f^(r−1)(c)/f^(r−1)(b) = neg.

Let us now put

c = a + h,
c = b − k,

whence b − a = h + k,

and expand the preceding conditions by Taylor's series, so that the term of the second order shall include the remainder of the series; that is, let us make the substitutions

f^(r−1)(c) = f^(r−1)(a) + h f^r(a) + (h^2/2) f^(r+1)(λ),

f^(r−1)(c) = f^(r−1)(b) − k f^r(b) + (k^2/2) f^(r+1)(μ),

where λ is some value intermediate to a and c; and μ lies between c and b.


The preceding conditions will then take the following form:

1 + h f^r(a)/f^(r−1)(a) + (h^2/2) f^(r+1)(λ)/f^(r−1)(a) = neg.,

1 − k f^r(b)/f^(r−1)(b) + (k^2/2) f^(r+1)(μ)/f^(r−1)(b) = neg.

Now, observing the table

          f^(r+1)(x)   f^r(x)   f^(r−1)(x)
(a, b)        0           1          2,

it will be evident that we shall always have a positive quotient for the two fractions

f^(r+1)(λ)/f^(r−1)(a)   and   f^(r+1)(μ)/f^(r−1)(b).

Hence, transposing these terms, the second sides of the equations may still be written as before; thus we shall have

1 + h f^r(a)/f^(r−1)(a) = neg.,

1 − k f^r(b)/f^(r−1)(b) = neg.

Again, by inspection of the above table, we shall find

f^(r−1)(a)/f^r(a) = neg.,        f^(r−1)(b)/f^r(b) = pos.;

and multiplying the preceding equations respectively by these latter, we find the conditions reduced to

f^(r−1)(a)/f^r(a) + h = pos.,

f^(r−1)(b)/f^r(b) − k = neg.,


which give, by subtraction,

(h + k) + f^(r−1)(a)/f^r(a) − f^(r−1)(b)/f^r(b) = pos.

If we call Q the sum of the quotients for the fractions

f^(r−1)(a)/f^r(a)   and   f^(r−1)(b)/f^r(b),

neglecting the signs of those quotients, and also call D the difference of the interval (a, b),

or D = b − a = h + k;

then the preceding condition becomes

D − Q = pos.,

or D > Q.

It follows at once, from this condition, that when we find

Q = or > D,

no such value of c can exist.

If, on the contrary, we find that

D > Q,

then such a value of c may exist; but it does not follow that such a value must necessarily exist.
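The comparison of Q with D may be carried out as in the following minimal sketch, assuming Python with the sympy library; the numbers are those of the cubic x^3 + 2x^2 − 3x + 2 = 0 on the interval (0, 1), which is examined in Art. 45.

    from sympy import symbols, Rational

    x = symbols('x')

    def criterion(f, r, a, b):
        """Fourier's test for the index pattern 0, 1, 2 ending at f^(r-1):
        Q = |f^(r-1)(a)/f^r(a)| + |f^(r-1)(b)/f^r(b)|, D = b - a;
        when Q >= D, no intermediate value c can exist."""
        derivs = [f]
        for _ in range(r):
            derivs.append(derivs[-1].diff(x))
        fr, fr1 = derivs[r], derivs[r - 1]
        Q = abs(fr1.subs(x, a) / fr.subs(x, a)) + abs(fr1.subs(x, b) / fr.subs(x, b))
        D = b - a
        return Q, D, ('no value c can exist' if Q >= D else 'a value c may exist')

    f = x**3 + 2*x**2 - 3*x + 2
    # Q = 2/3 + 2/4 = 7/6 > 1, so the two indicated roots are impossible
    # (there being no equal roots).
    print(criterion(f, 1, Rational(0), Rational(1)))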

43.    We  shall  now  proceed  to  the  discussion  of  the   first 
case,  when 

Q  =  or  >  D. 

Since no value c can lie between the two roots of the equation

f^(r−1)(x) = 0,

which may lie in the interval (a, b), it follows, that if these


roots  exist  they  are  equal.  Suppose  then  that  they  do  exist, 
and that each = γ. Then f^(r−1)(x) and f^r(x) must have a common divisor φ(x), of which there is one factor x − γ, and only one; and the other factors, if there are any, are also factors entering into f^r(x); consequently, φ(x) can only have the root γ lying in the interval (a, b). And we shall then have φ(a)/φ(b) negative. If then we find that such a divisor φ(x) exists, and that φ(a)/φ(b) is negative, we shall know that the two roots of

f^(r−1)(x) = 0

are equal; and if γ be not a root of f(x) = 0, entering r + 1 times, the series of signs denoted by sign (γ) will give at least two consecutive zeros, included by terms which are not zero; and the rule of the double sign points out immediately two impossible roots of the equation

f(x) = 0.

If therefore we apply the method of equal roots to f(x), we shall discover whether any such root γ enters r + 1 times, or we are to conclude that there are two impossible roots at least of the equation f(x) = 0. If the latter case happens, we can diminish by 2 all the succeeding root-indices. For in the succeeding root-indices, a part of every term is formed by the two changes which are lost by the series of signs in passing through γ.

Again, if the two roots of the equation

f^(r−1)(x) = 0

are impossible, then there will be a loss of two changes in sign (y) when y becomes equal to the single root of

f^r(x) = 0,

lying in the interval (a, b). For there will be one zero


between  like  signs.  Hence  we  may,  as  before,  subtract  2  from 
the  succeeding  root-indices. 

If  therefore  we  find  Q  =  or  >  D,  and  there  are  not  r  +  1 
equal  roots  of 

f(x) = 0

in  the  interval  («,  b),  we  are  in  all  cases  to  subtract  2  from 
the  succeeding  root-indices. 

44.  But  there  remains  the  second  case  for  consideration, 
when 

D  >  Q. 

We  must  choose  at  pleasure  in  the  interval  (a,  b)  a  new 
value c. If we find that f^(r−1)(c) differs in sign from f^(r−1)(a) and f^(r−1)(b), the two roots are separated. But if the contrary
take  place,  we  are  to  choose  of  the  two  intervals  (a,  c),  (c,  b), 
that  one  which  still  gives  the  table 

0,  1,  2, 

and  form  the  criterion  anew,  with  a  less  value  of  D.  We 
must  however  first  enquire,  whether  the  two  roots  are  equal ; 
for  if  they  are  so,  we  need  proceed  no  further. 

By  this  process  we  shall  at  last  be  able  to  separate  all  the 
real  roots  of  the  equation 

/GO  =  o 

into  their  respective  intervals;  and,  of  course,  know  their 
number. 

45.  We  shall  now  give  some  examples  of  the  application  of 
the  criterion,  in  order  that  the  process  may  become  more 
familiarized  than  it  can  possibly  be  under  the  shape  of  a 
general  theorem. 


Ex. 1. x^3 + 2x^2 − 3x + 2 = 0.

f(x) = x^3 + 2x^2 − 3x + 2,
f′(x) = 3x^2 + 4x − 3,
f″(x) = 6x + 4,
f‴(x) = 6.

The interval (0, 1) gives the table

         III  II   I   0
(0)       +   +    −   +
          0   0    1   2
(1)       +   +    +   +.

Hence, observing the series of root-indices, we shall have to consider whether the interval 1 is greater or less than Q, the sum of the quotients f(0)/f′(0) and f(1)/f′(1), neglecting their signs. Now we have

f(0) = 2,
f′(0) = − 3,
f(1) = 2,
f′(1) = 4,

and we find that

Q = 2/3 + 2/4 = 7/6;

consequently Q is greater than the interval unity; and we conclude that the two roots indicated by the figure 2 for the interval (0, 1) are impossible, as there are not equal roots.

The abbreviated form of the table for the whole process, is

         III  II   I   0
(0)       +   +    −   +
                   3   2
          0   0    1   2
(1)       +   +    +   +.
                   4   2

Here 2/3 + 2/4 > 1, and the two roots are impossible, as there are not equal roots.


Ex. 2. x^5 − 3x^4 − 24x^3 + 95x^2 − 46x − 101 = 0.

For the interval (1, 10), we have

         V   IV   III    II      I   0
(1)      +   +    −      +       +   −
                  156    30
         0   0    1      2       2   3
(10)     +   +    +      +       +   +.
                  5136   15150

Here

30/156 + 15150/5136 < 9;

and we must divide the interval into two parts by forming the series of signs for some intermediate number, as 3 for instance. But previously we must find whether there is a divisor common to f″(x), f‴(x). There being no such common divisor, we proceed to form the new table as follows:

         V   IV  III  II   I   0
(1)      +   +   −    +    +   −
         0   0   1    1    1   2
(3)      +   +   +    −    −   −
         0   0   0    1    1   1
(10)     +   +   +    +    +   +.

Hence, we conclude that there is one root of the equation only in the interval (3, 10), and that there may be two in the interval (1, 3). This last interval then is to be examined; and the first remark is, that the latest index 1 is not preceded by 0, so that the interval must yet be subdivided, as follows:

         V   IV  III  II   I    0
(1)      +   +   −    +    +    −
         0   0   0    1    0    0
(2)      +   +   −    −    +    −
                      30   21
         0   0   1    0    1    2
(3)      +   +   +    −    −    −.
                      43   32

It is, therefore, to the last of these intervals only that we are to attend; and the criterion gives

21/30 + 32/43 > 1,

whence we conclude that the roots are impossible.


Ex. 3. x^5 + x^4 + x^3 − 2x^2 + 2x − 1 = 0.

The interval (0, 1) gives the table

         V   IV  III  II    I    0
(0)      +   +   +    −     +    −
                      4     2
         0   0   0    1     2    3
(1)      +   +   +    +     +    +.
                      34    10

Here 2/4 + 10/34 < 1;

and f′(x) and f″(x) have no common divisor. The interval is to be separated.

         V   IV  III  II    I    0
(0)      +   +   +    −     +    −
                      4     2
         0   0   0    1     2    2
(1/2)    +   +   +    +     +    −
         0   0   0    0     0    1
(1)      +   +   +    +     +    +.

Here the single quotient

2/4 = 1/2;

and, consequently, the roots due to the interval (0, 1/2) are impossible.


46.  Instead  of  giving  any  more  examples  of  the  application 
of  the  criterion,  we  shall  give  several  examples  of  the  complete 
separation  of  the  roots,  combining  both  processes,  the  rule  of 
signs,  and  the  criterion.  These  will  be  best  exhibited  in  the 
tabular  form  adapted  for  practice.  The  operations  of  sub- 
dividing the  intervals,  when  necessary,  are  here  expressed  at 
once.  To  become  perfectly  acquainted  with  the  method,  so  as 
to  perceive  at  one  view  its  several  parts  and  their  relations,  it 
cannot  be  too  much  recommended  that  every  one  should  go 
through  the  working  of  these  examples  for  himself. 

Ex. 1. x^5 − 3x^4 − 24x^3 + 95x^2 − 46x − 101 = 0.

f(x) = x^5 − 3x^4 − 24x^3 + 95x^2 − 46x − 101,
f′(x) = 5x^4 − 12x^3 − 72x^2 + 190x − 46,
f″(x) = 20x^3 − 36x^2 − 144x + 190,
f‴(x) = 60x^2 − 72x − 144,
f^iv(x) = 120x − 72,
f^v(x) = 120.

          V   IV  III  II    I    0
(−10)     +   −   +    −     +    −
                                  1
(−1)      +   −   −    +     −    +
                                  1
(0)       +   −   −    +     −    −
                                  0
(1)       +   +   −    +     +    −
                                  0
(2)       +   +   −    −     +    −
                            30    21
          0   0   1    0     1    2
(3)       +   +   +    −     −    −
                            43    32
                                  1
(10)      +   +   +    +     +    +.

Here one root lies between −10 and −1; a second between −1 and 0; a third between 3 and 10: but the two remaining roots due to the interval from 2 to 3 are impossible, because we have 21/30 + 32/43 > 1, and there are not equal roots.


Ex. 2. x^4 − 4x^3 − 3x + 23 = 0.

f(x) = x^4 − 4x^3 − 3x + 23,
f′(x) = 4x^3 − 12x^2 − 3,
f″(x) = 12x^2 − 24x,
f‴(x) = 24x − 24,
f^iv(x) = 24.

          IV  III  II   I   0
(0)        +   −   ±    −   +
                            0
(1)        +   ∓   −    −   +
                            0
(2)        +   +   ∓    −   +
                            1
(3)        +   +   +    −   −
                            1
(10)       +   +   +    +   +.

Hence two roots are impossible, by the rule of the double sign: and one root lies between 2 and 3, and the remaining one between 3 and 10.

Ex. 3.   x^3 + 2x^2 - 3x + 2 = 0.

        f(x)    = x^3 + 2x^2 - 3x + 2,
        f'(x)   = 3x^2 + 4x - 3,
        f''(x)  = 6x + 4,
        f'''(x) = 6.



            III    II     I     0
 (-10)       +     -      +     -
                                      1
  (-1)       +     -      -     +
                                      0
   (0)       +     +      -     +
                          3     2
                                      0   1   2
   (1)       +     +      +     +
                          4     2

Here  one  root  lies  between  —  10  and  —  1  ;   and  the  two 

roots  due  to  the  interval  from  0  to  1  are  imaginary,  because 

2/3 + 2/4 > 1, and there are not equal roots.

Ex. 4.   x^4 - x^3 + 4x^2 + x - 4 = 0.

        f(x)     = x^4 - x^3 + 4x^2 + x - 4,
        f'(x)    = 4x^3 - 3x^2 + 8x + 1,
        f''(x)   = 12x^2 - 6x + 8,
        f'''(x)  = 24x - 6,
        f''''(x) = 24.

[Table of signs for the values -10, -1, 0, 1, with the index-series for the interval (0, 1); the entries are not fully legible in this copy.]

Here  the  only  real  root  lies  between  —  10  and  —  1  ;    and 
the  two  due  to  the  interval  0  and  1  are  impossible,  since  we 

have 7/6 > 1, and there are not equal roots.



Ex. 5.   x^6 - 12x^5 + 60x^4 + 123x^2 + 4567x - 89012 = 0.

        f(x)      = x^6 - 12x^5 + 60x^4 + 123x^2 + 4567x - 89012,
        f'(x)     = 6x^5 - 60x^4 + 240x^3 + 246x + 4567,
        f''(x)    = 30x^4 - 240x^3 + 720x^2 + 246,
        f'''(x)   = 120x^3 - 720x^2 + 1440x,
        f''''(x)  = 360x^2 - 1440x + 1440,
        f'''''(x) = 720x - 1440,
        f''''''(x)= 720.

            VI     V     IV    III    II     I     0
 (-10)       +     -      +     -      +     -     +
                                                         1
  (-1)       +     -      +     -      +     +     -
                                                         2
   (0)       +     -      +     0      +     +     -
                                                         0
   (1)       +     -      +     +      +     +     -
                                                         0   1   2   2   2   2   3
  (10)       +     +      +     +      +     +     +

Here f''''(x) = 0 has equal roots in the last of the intervals, and these are not roots of f(x) = 0; hence there is only one root between 1 and 10; there is also one root between -10 and -1: the other four are imaginary, by the rule of the double sign, and the criterion.




Ex. 6.   x^5 + x^4 + x^2 - 25x - 36 = 0.

        f(x)      = x^5 + x^4 + x^2 - 25x - 36,
        f'(x)     = 5x^4 + 4x^3 + 2x - 25,
        f''(x)    = 20x^3 + 12x^2 + 2,
        f'''(x)   = 60x^2 + 24x,
        f''''(x)  = 120x + 24,
        f'''''(x) = 120.


             V     IV    III    II     I     0
 (-10)       +     -      +     -      +     -
                                                   1
  (-2)       +     -      +     -      +     +
                                                   1
  (-1)       +     -      +     -      -     -
                                                   2
   (0)       +     +      0     +      -     -
                                                   0
   (1)       +     +      +     +      -     -
                                                   1
  (10)       +     +      +     +      +     +

The  three  real  roots  lie  in  the  intervals  from  —  10  to  —  2, 
from  —  2  to  —  1,  and  from  1  to  10;  the  remaining  two  are 
imaginary,  by  the  rule  of  the  double  sign. 

Ex. 7.   x^5 + x^4 + x^3 - 2x^2 + 2x - 1 = 0.

        f(x)      = x^5 + x^4 + x^3 - 2x^2 + 2x - 1,
        f'(x)     = 5x^4 + 4x^3 + 3x^2 - 4x + 2,
        f''(x)    = 20x^3 + 12x^2 + 6x - 4,
        f'''(x)   = 60x^2 + 24x + 6,
        f''''(x)  = 120x + 24,
        f'''''(x) = 120.


              V     IV    III    II     I     0
  (-1)        +     -      +     -      +     -
                                                     0
(-1/2)        +     -      +     -      +     -
                    36      9
                                                     0   1   2   2   2   2
   (0)        +     +      +     -      +     -
                    24      6     4      2
                                                     0   0   0   1   2   2
 (1/2)        +     +      +     +      +     -
                                                     1
   (1)        +     +      +     +      +     +

Here there is but one real root, which lies between 1/2 and 1; the four due to the intervals (-1/2, 0) and (0, 1/2) being impossible, since we have

        9/36 + 6/24 ≥ 1/2,        2/4 ≥ 1/2.


47.  It  appears  from  what  has  preceded,  that  there  can  be 
no  imaginary  roots  of  the  equation,  unless  for  some  value  of  a 
we  find  that  sign  (a)  presents  a  single  zero  included  between 
like  signs,  or  a  set  of  consecutive  included  zeros.  We  shall 
proceed  to  apply  this  rule  to  the  equations 

        sin x = 0,

cos  x  =  0. 

It  is  at  once  seen,  that  for  these  equations  no  such  single 
zero  or  set  of  zeros  can  exist.  Hence  they  can  have  no  im- 
possible roots.  And  neither  of  them  contains  equal  roots, 
since  we  can  never  satisfy  both  equations  together  by  any 
real  value  of  x. 


92 


THEORY    OF    EQUATIONS. 


We  can  now  proceed  accurately  in  the  resolution  of  sin  x 
and  cos  x  into  their  factors,  all  of  which  are  of  the  first  degree, 
and  enter  only  once. 

First,  to  find  the  factors  of  sin  x. 

The  roots  of  the  equation 

sin  x  =  0 

are 0, ±π, ±2π, ±3π, . . . . to ∞.

We  may  therefore  assume 

        sin x = k . x (x + π)(x - π)(x + 2π)(x - 2π) . . . .
              = k . x (x^2 - π^2)(x^2 - 2^2.π^2) . . . .,

where k denotes some constant.

The preceding equation may be put into the form

        sin x = k'. x (1 - x^2/π^2)(1 - x^2/2^2.π^2) . . . .,

where k' is still some constant.

For the determination of k', we may observe that, when x = 0, then sin x = x; or, in other words, that the limit of (sin x)/x, when x becomes zero, will be unity.  But this will not be the case in the above equation, unless k' = 1.
Hence, we have finally,

        sin x = x (1 - x^2/π^2)(1 - x^2/2^2.π^2) . . . .,

the series of factors being continued to infinity.

By a process of exactly the same nature, we find that

        cos x = (1 - 2^2.x^2/π^2)(1 - 2^2.x^2/3^2.π^2) . . . .,

the factors, as before, being continued to infinity.
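A brief numerical check of the preceding product (a modern sketch, not part of the treatise) shows the partial products approaching sin x as more factors are taken; the number of terms used is of course arbitrary.

```python
# A modern check (not in the original) of sin x = x (1 - x^2/pi^2)(1 - x^2/2^2 pi^2) ...
import math

def sin_product(x, terms):
    p = x
    for k in range(1, terms + 1):
        p *= 1 - x * x / (k * k * math.pi * math.pi)
    return p

x = 1.3
for terms in (5, 50, 500):
    print(terms, sin_product(x, terms), math.sin(x))
```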



48.  We  shall  conclude  the  present  chapter  by  a  recapitu- 
lation of  the  rules  to  be  observed  in  the  process  of  the  separa- 
tion of the roots.  This process consists of two parts: the rule
of  signs  for  any  interval,  and  the  criterion  of  the  existence  of 
the  roots  indicated  for  an  interval  which  may  contain  more 
than  one  root. 

When  the  equation  /  (x)  z=0  is  proposed  for  investigation, 
the  first  step  is  to  form,  by  successive  differentiation,  the  poly- 
nomials 

        f'(x),  f''(x),  . . . .  f^(n)(x).

These  polynomials,  with  the  given  polynomial  f(x),  are  to 
be  written  in  the  inverse  order 

        f^(n)(x),  f^(n-1)(x),  . . . .  f'(x),  f(x).

We  are  then  to  substitute  for  x,  in  the  series  so  formed,  the 
successive  terms  of  the  series 

-  100,  -  10,  -  1,  0,   1,   10,  100, 

and  write  down  the  series  of  signs  corresponding  to  these 
substitutions ;  the  number  of  such  substitutions  is  to  be  limited 
by  those  two,  of  which  one  gives  only  changes,  and  the  other 
only  continuations  of  sign.  In  case  of  a  zero  occurring  in  the 
series  of  signs,  it  is  to  be  replaced  according  to  the  rule  of  the 
double  sign  ;  or,  in  other  words,  the  series  of  signs  is  to  be 
taken  on  each  side  of  the  transition  through  that  zero,  instead 
of  the  series  at  the  point  of  transition. 

All  intervals  in  which  no  change  of  sign  is  lost,  are  imme- 
diately to  be  rejected,  as  containing  no  root  of  the  equation. 

All  intervals  in  which  only  one  change  of  sign  is  lost,  are 
to  be  set  down  as  containing  a  single  real  root  of  the  equation. 

Any interval in which more than one change of sign is lost,
may  contain  as  many  roots  as  there  are  lost  changes  of  sign. 
But  to  discover  whether  this  is  the  case  or  not,  we  must 



apply  the  criterion.  The  first  step  is  to  form  the  index-series 
for  that  interval,  according  to  the  rules  already  laid  down. 

The index corresponding to any term f^r(x) of the series of polynomials denotes the number of changes of sign lost in that interval by the partial series of polynomials

        f^(n)(x),  f^(n-1)(x),  . . . .  f^r(x);

or, in other words, that index expresses the number of roots that f^r(x) = 0 may have in the interval under consideration.
By  this  means  a  series  of  indices  will  be  formed  beginning 
with  0,  and  whose  last  term  is  greater  than  1.  The  difference 
between  two  successive  terms  of  the  index-series  will  in  all 
cases  be  0,  or  ±  1. 

Considering  now  any  one  such  series,  we  must  fix  upon 
that  term  in  which  the  index  1  occurs  for  the  last  time  in  the 
series.  This  last  index  1  is  necessarily  followed  by  2.  If  it 
be  not  preceded  by  0,  then  we  are  to  subdivide  the  interval ; 
and  by  this  subdivision,  the  last  index  1  will  in  all  cases 
occur  later  in  the  series,  unless  it  happen  that  we  find  for  any 
interval  the  three  successive  indices 

0,  1,  2. 

When  this  condition  presents  itself,  we  cannot  be  certain  of 
moving  the  index  1  towards  the  end  of  the  series,  by  the 
process  of  subdividing  the  interval.      So  that  we   are   now 
warned  to  apply  the  criterion. 
Denoting by

        f^(r+1)(x),    f^r(x),    f^(r-1)(x)
            0,            1,           2

the part of the table to which the criterion is to be applied, we are to form the sum of the two quotients

        f^(r-1)(a)/f^r(a)   and   f^(r-1)(b)/f^r(b),


neglecting  the  algebraic  signs ;  and  we  must  examine  whether 
that  sum  be  equal  to  the  difference  b  —  a  of  the  limits ;  if  it  is 
equal  to  that  difference,  or  greater  than  it,  then  the  two  roots 
indicated for f^(r-1)(x) = 0 are either equal or imaginary.

If they are equal, there ought to be a common factor φ(x) of the two polynomials f^(r-1)(x) and f^r(x), such that φ(a)/φ(b) is a negative quantity.

Whenever  this  is  the  case,  we  must  examine  whether  these 
two  roots  are  also  roots  of  the  equations  formed  by  equating 
to  zero  each  of  the  succeeding  polynomials 

        f^(r-2)(x),  f^(r-3)(x),  . . . .  f(x);

that  is,  we  must  seek,  by  the  method  of  equal  roots,  whether 
the  equation  f(x)  =  0  has  a  set  of  r  +  1  equal  roots,  which 
lie  in  the  interval  (a,  b).  We  may  remark  that  this  operation 
will  be  unnecessary,  when  the  last  index  of  the  series  is  less 
than  r  -f  1 .  If  there  is  not  such  a  set  of  equal  roots  of  the 
equation  f(x)  =  0,  we  conclude  that  this  equation  has  two 
impossible  roots. 

If the two roots of the equation f^(r-1)(x) = 0 are impossible, then we also conclude that the equation f(x) = 0 contains two impossible roots.

In  both  cases,  therefore,  the  index-series  may  have  all  its 
terms diminished by 2, after the term corresponding to f^r(x).
And  the  last  index  1  will  have  been  removed  towards  the  end 
of  the  series. 

Lastly,  when  the  difference  b  —  a  is  greater  than  the  sum 
of  the  quotients  above  mentioned,  we  must  reduce  this  differ- 
ence by  subdividing  the  interval,  in  order  either  to  separate 
the two roots of f^(r-1)(x) = 0, or to obtain an interval proving
the  impossibility  of  that  separation. 
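The criterion itself may be stated as a small test (a modern sketch, not part of the original), here applied to the interval (2, 3) of Ex. 1 above, where the relevant polynomials are f and f':

```python
# A modern sketch (not in the original) of the criterion: compare the sum of
# the quotients with the length of the interval (a, b).
def criterion(f_r_minus_1, f_r, a, b):
    """True when the two indicated roots of f_r_minus_1 = 0 must be equal or imaginary."""
    s = abs(f_r_minus_1(a) / f_r(a)) + abs(f_r_minus_1(b) / f_r(b))
    return s >= b - a

f  = lambda x: x**5 - 3*x**4 - 24*x**3 + 95*x**2 - 46*x - 101
f1 = lambda x: 5*x**4 - 12*x**3 - 72*x**2 + 190*x - 46
# f(2) = -21, f'(2) = 30, f(3) = -32, f'(3) = -43, so 21/30 + 32/43 > 1
print(criterion(f, f1, 2, 3))    # True
```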



The  whole  operation  of  the  separation  of  the  roots  will  come 
to  a  conclusion  when  the  index-series  all  end  with  0  or  1. 
The number of such final indices 1 will give the number
of  real  roots  of  the  equation  f(x)  —  0  ;  and  their  positions 
will  point  out  where  each  of  those  real  roots  is  to  be  sought. 

We  may  remark  that,  whenever  a  single  zero  occurs  between 
like  signs,  or  whenever  several  successive  included  zeros  occur, 
in  the  series  of  signs  corresponding  to  any  definite  value  of  the 
variable,  there  is  an  indication  of  impossible  roots.  These 
values  then  ought  to  be  as  much  the  object  of  our  search  as 
the  roots  of  the  equation,  at  least  as  far  as  concerns  their 
existence.  Now  it  is  the  precise  object  of  the  criterion  to 
point  out  the  existence  of  these  values  indicative  of  impossible 
roots,  or  to  demonstrate  the  possibility  of  the  roots,  by  sepa- 
ration. 


CHAP.  V. 


ON   THE    METHOD   OF    DIVISORS. 


49.  The  method  of  Divisors  was  proposed  by  Newton  for 
the  discovery  of  the  integral  roots  of  any  equation,  if  such 
roots  existed.  It  will  be  shown  that  this  process  can  be 
applied  to  the  discovery  of  all  the  roots  of  the  equation,  which 
are  commensurable,  when  the  coefficients  of  the  equation  are 
commensurable  quantities;  that  is,  all  the  roots  which  are 
either  integers,  or  can  be  expressed  in  the  form  of  vulgar 
fractions.  Although  the  method  of  Fourier  will  determine 
such  roots  exactly,  yet,  as  the  method  of  divisors  forms  a 
complete  theory  for  roots  of  this  kind,  and  is  also  practically 
applicable,  it  deserves  still  to  hold  a  place  in  the  Theory  of 
Equations. 

50.  We  shall  first  show,  that  by  transformation  all  the 
commensurable  roots  of  any  equation,  whose  coefficients  are 
commensurable,  can  be  rendered  integers.  For  we  can,  by 
transformation,  render  the  coefficients  of  the  equation  integers. 
Now  in  this  state  of  the  equation  there  cannot  exist  any 
root  expressible  in  the  form  of  a  vulgar  fraction.  For  if  we 
should  suppose  that  one  root  of  the  equation 

        x^n + p_1 x^(n-1) + p_2 x^(n-2) + . . . . + p_n = 0,



whose coefficients are all integers, could be expressed by the fraction a/b in its lowest terms, that is, when the integers a and b are prime to each other, we should have

        a^n/b + p_1 a^(n-1) + p_2 a^(n-2) b + . . . . + p_n b^(n-1) = 0;

that is, we should arrive at the absurd equation,

        fraction + integer = 0.

Hence  the  commensurable  roots,  if  any  exist,  must  now 
be  integers. 

51.  In order to determine the integer roots of any equation f(x) = 0, let us suppose that one of its roots is the integer a; then we have

        f(a) = 0,

or      a^n + p_1 a^(n-1) + p_2 a^(n-2) + . . . . + p_n = 0,

in which the coefficients of a are all integers.  We shall write this equation in the inverse order, thus

        p_n + p_(n-1) a + p_(n-2) a^2 + . . . . + p_1 a^(n-1) + a^n = 0.

Dividing this equation by a, we have

        p_n/a + p_(n-1) + p_(n-2) a + . . . . + p_1 a^(n-2) + a^(n-1) = 0;

so that the quantity p_n/a cannot but be an integer; let this integer be denoted by q_1, and we obtain

        q_1 + p_(n-1) + p_(n-2) a + . . . . + p_1 a^(n-2) + a^(n-1) = 0.

Again, dividing by a, we obtain

        (q_1 + p_(n-1))/a + p_(n-2) + p_(n-3) a + . . . . + p_1 a^(n-3) + a^(n-2) = 0;

so that the quantity (q_1 + p_(n-1))/a cannot but be an integer; and we may denote this integer by q_2.

Proceeding to form the new equations

        q_2 + p_(n-2) + p_(n-3) a + . . . . + p_1 a^(n-3) + a^(n-2) = 0,

        (q_2 + p_(n-2))/a + p_(n-3) + . . . . + p_1 a^(n-4) + a^(n-3) = 0,

we find that we must have

        (q_2 + p_(n-2))/a = an integer q_3.

And in this manner we shall continue to find a series of conditions which a must satisfy, the last of which will be obtained by the equation

        q_(n-1) + p_1 + a = 0;

that is, we must have the quantity

        (q_(n-1) + p_1)/a = -1.

Hence the conditions, in order that a may be an integer root of the equation f(x) = 0, are

        p_n/a = q_1 = integer,
        (q_1 + p_(n-1))/a = q_2 = integer,
        (q_2 + p_(n-2))/a = q_3 = integer,
        . . . . . . . .
        (q_(n-2) + p_2)/a = q_(n-1) = integer,
        (q_(n-1) + p_1)/a = -1.



We  shall  now  show  the  practical  method  of  applying  the 
above  conditions  to  the  determination  of  the  integer  roots  of 
the  equation.  But  it  will  be  better  in  the  first  instance  to 
find  by  immediate  trial,  whether  there  are  any  roots  =  ±  1, 
and  if  there  are  such  roots  to  expel  them  from  the  equation ; 
so  that  the  reduced  equation  is  no  longer  satisfied  by  ±  1 . 

We  shall  now  form  a  table  of  the  divisors  of  pn;  to  each 
of  which  the  double  sign  ±  is  to  be  affixed.  For  the  first 
condition  shows  that  in  this  table  all  the  values  of  a  are  to 
be sought, since we must have a dividing p_n exactly.

We  shall  next  form  a  corresponding  table  of  the  values 
of q_1.  And we can now commence forming the table of
values  of  q2;  setting  down  only  those  values  of  q2  which 
prove  to  be  integers. 

We  may  now  erase  from  the  first  table  of  divisors,  or 
values  of  a,  all  those  which  give  no  term  in  the  table  of 
values  of  q2;  for  such  values  of  a  cannot  be  roots,  by  the 
second  condition. 

Proceeding  now  to  form  the  table  for  q3  on  the  same  plan, 
that  is,  setting  down  only  integer  values  of  q3,  and  whenever 
q3  is  not  an  integer,  erasing  the  corresponding  value  of  a  from 
the  first  table,  we  shall  have  preserved  only  those  values  of 
a  which  satisfy  the  condition  q3  =  integer. 

In  this  manner  the  table  of  values  of  a  will  at  each  step 
contain  only  such  values  as  may  still  be  roots  of  the  equation. 
And  when  we  arrive  at  the  table  for  qn,  we  are  to  erase  from 
the  first  table  all  values  which  do  not  give  qn  =  —  1.  The 
table  then  remaining  for  the  values  of  a  will  contain  the 
integer  roots  of  the  equation. 

We  may  remark,  that  at  the  outset  of  the  operation,  the 
table  for  a  need  not  contain  any  values  which  do  not  lie 
between  the  inferior  and  superior  limits  of  the  roots. 



We  may  also  remark,  that  by  increasing  or  diminishing  all 
the  roots  by  any  integer,  the  integer  roots  will  still  remain 
integers.  Now, if p_n is such a quantity as to give a large table of divisors, the equation may be transformed in the above manner, so that the last term shall give a smaller number of divisors.  For if we put

        y = x - m,
or      x = y + m,

the last term of the new equation is f(m); it will therefore be
our  object  to  find  some  positive  or  negative  integer  value  for 
m,  such  that/(ra)  shall  have  a  small  number  of  divisors,  and 
then  decrease  the  roots  by  that  integer  value. 

But  instead  of  effecting  such  a  transformation,  it  will  be 
sufficient  to  remark,  that  if  a  be  an  integer  root  of  the  original 
equation,  then  a  —  m  is  a  root  of  the  transformed  equation, 
and,  consequently,  we  shall  have 

        f(m)/(a - m) = integer.

Now,  by  giving  m  several  integer  values,  we  can  form 
several  new  criteria,  to  be  satisfied  by  the  table  of  values  of 
a,  previous  to  the  trial  of  the  conditions  above  given.  So 
that  we  can,  at  the  commencement  of  the  operation,  reduce 
the  table  of  values  of  a  to  a  considerable  extent.  The  best 
criteria  of  this  kind  are  those  most  easily  calculated,  namely, 

        f(1)/(a - 1) = integer,

        f(-1)/(a + 1) = integer,

and  so  on,  to  any  extent  that  may  be  required  in  reducing 
the  table  for  a. 



52.     The    following   is   an    example    of   the    method   of 
divisors : 

        x^4 - x^3 - 13x^2 + 16x - 48 = 0.

Here  the  superior  limit  is  14,  and  the  inferior  limit  is  —  8; 
hence  the  table  of  divisors  is,  at  first, 

(a)         12,  8,  6,  4,  3,  2,-2,-3,-4,-6. 

We  have  omitted  the  divisors  ±  1,  as  they  are  found  by 
trial  not  to  be  roots. 

Now, applying the criteria,

        f(1)/(a - 1) = integer,
        f(-1)/(a + 1) = integer,

the corrected table of divisors is

        (a)          4,    2,   -2,   -4.

We may now proceed to form the table for q_1, or -48/a:

        (q_1)      -12,  -24,   24,   12.

Again, forming the table for q_2, or (q_1 + 16)/a, we obtain

        (q_2)        1,   -4,  -20,   -7.

By a similar process for q_3 = (q_2 - 13)/a, we obtain the table

        (q_3)       -3,    f,    f,    5,

denoting by f any fractional quantity.

Hence we now obtain the corrected table of divisors

        (a)          4,   -4,

and the corresponding table for q_3 is

        (q_3)       -3,    5.




Lastly, forming the table for q_4, or (q_3 - 1)/a, we obtain

        (q_4)       -1,   -1.

Hence  ±  4  are  the  only  integer  roots  of  the  proposed 
equation. 

The  best  form  of  the  table  for  practice  is  the  following : 

         (a)          4,    2,   -2,   -4
  -48   (q_1)       -12,  -24,   24,   12
  +16   (q_2)         1,   -4,  -20,   -7
  -13   (q_3)        -3,                5
   -1   (q_4)        -1,               -1.

In  the  above  table  the  coefficients  of  the  equation  are 
placed  in  order,  as  they  are  wanted  in  the  operation,  by  the 
side of the quantities q_1, q_2, q_3, and q_4.  And whenever a
fractional  value  occurs  in  the  table,  or  any  integer  but 
—  1  in  the  last  line,  it  is  not  marked.  So  that  we  may 
neglect  all  the  divisors  standing  at  the  head  of  those  columns 
which  do  not  reach  to  the  last  line. 
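The successive divisibility conditions are easily carried out by a machine.  The following is a modern sketch, not part of the original text, which applies the conditions of Art. 51 to every divisor of the last term; it does not apply the preliminary criteria of Art. 51, nor the test for repeated roots of Art. 53.

```python
# A modern sketch (not in the original) of the method of divisors, tried on
# the example of Art. 52.
def integer_roots(coeffs):
    """coeffs: [1, p_1, ..., p_n] of a monic equation with integer coefficients."""
    n = len(coeffs) - 1
    p = coeffs[1:]                                 # p_1 ... p_n
    p_n = p[-1]
    candidates = [a for a in range(-abs(p_n), abs(p_n) + 1)
                  if a not in (0, 1, -1) and p_n % a == 0]
    roots = []
    for a in candidates:
        q, ok = 0, True
        # q_1 = p_n/a, then q_(k+1) = (q_k + p_(n-k))/a, ending with the value -1
        for k in range(n):
            num = q + p[n - 1 - k]
            if num % a != 0:
                ok = False
                break
            q = num // a
        if ok and q == -1:
            roots.append(a)
    return roots

print(integer_roots([1, -1, -13, 16, -48]))        # [-4, 4]
```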


53.  The  method  of  divisors  is,  however,  still  defective  in 
one  point,  to  which  we  must  now  give  our  attention.  Al- 
though we  have  found  the  values  of  all  the  integer  roots  of 
the  proposed  equation,  yet  some  of  these  roots  may  enter 
into  that  equation  more  than  once.  We  must,  therefore, 
either  apply  directly  the  method  of  equal  roots  to  the  pro- 
posed equation ;  or  we  must  again  apply  the  method  of 
divisors  to  the  limiting  equation,  commencing  with  the  table 
remaining  for  the  integer  values  of  a}  from  the  operations  of 
the  first  application  of  the  method ;  so  that  we  may  find 
whether  any  of  the  integer  roots  so  determined,  are  also  roots 



of  the  limiting  equation.  By  continuing  this  process  as  far 
as  is  necessary,  with  regard  to  the  successive  derived  equa- 
tions, the  solution  will  be  rendered  complete. 

The  following  are  examples  to  which  it  will  be  necessary 
to  apply  the  last  process,  in  order  to  complete  the  solution 
given  by  the  method  of  divisors : 

Ex.  1.    To  find  the  integer  roots  of  the  equation 

        x^4 - 8x^3 + 16x^2 - 16 = 0.

Ex.  2.    To  find  the  commensurable  roots  of  the  equation 

        x^4 - 9x^3 + (45/4) x^2 + (27/2) x - 81/4 = 0.

Ex.  3.    Solve,  by  the  method  of  divisors,  the  equation 
        x^3 - 2x^2 - 4x + 8 = 0.


CHAP.  VI. 


ON  THE  METHOD  OF  NEWTON  FOR  OBTAINING  APPROXI- 
MATELY THE  REAL  ROOTS  OF  ANY  EQUATION,  SO  FAR 
AS  IT  HAD  BEEN  DEVELOPED  PREVIOUS  TO  ITS  COM- 
PLETION BY  FOURIER. 


54.  The  method  of  approximation  given  by  Newton  sup- 
posed that  two  limits  a  and  b  had  been  found,  between  which 
a  root  of  the  equation  must  lie.  It  was  then  easy  to  reduce 
by  trial  the  difference  b  —  a  of  these  limits,  until  it  became 
a  fraction,  whose  square  might  be  neglected  in  the  process  of 
approximation. 

Let us suppose, then, that an approximate value of the root in question is a; so that the difference between the correct value α of that root and the approximate value a must be a small fraction δ.  Then we have

        f(α) = 0,

since α is supposed to be the exact value of the root: and by substituting for α its value a + δ, we obtain

        f(a + δ) = 0;

from  which  equation,  by  expanding  in  a  series  of  powers  of 



δ, and neglecting all its powers beyond the first, we obtain the approximate equation

        f(a) + δ . f'(a) = 0.

From this latter equation we obtain an approximate value of δ, which we may denote by δ_1, namely,

        δ_1 = - f(a) / f'(a).

And we have thus arrived at a new approximate value for the root in question, which we may denote by a_1, so that

        a_1 = a + δ_1;

and in general we may suppose that a_1 will be a nearer approximation to the root α than the original approximate value a.

We may now repeat the process, commencing with the approximate value a_1, in order to arrive at a nearer approximation a_2.  For this purpose we must put

        α = a_1 + δ,

and we obtain the equation

        f(a_1 + δ) = 0,

which gives the approximate equation

        f(a_1) + δ . f'(a_1) = 0.

If we now call δ_2 the approximate value of δ derived from this last equation, we obtain

        δ_2 = - f(a_1) / f'(a_1),

and the new approximate value will be known, since we shall have

        a_2 = a_1 + δ_2.

In  this  manner  the  approximation  can  be  carried  on  to  any 
required  degree  of  accuracy. 
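The repetition of the substitution δ = -f(a)/f'(a), a = a + δ, is easily expressed as a short computation.  The following is a modern sketch, not part of the original text; the example used is Newton's own, treated at length in the next Article.

```python
# A modern sketch (not in the original) of the repeated Newtonian substitution.
def newton(f, fprime, a, steps):
    approximations = [a]
    for _ in range(steps):
        a = a - f(a) / fprime(a)        # a + delta, with delta = -f(a)/f'(a)
        approximations.append(a)
    return approximations

f  = lambda x: x**3 - 2*x - 5           # Newton's example, Art. 55
f1 = lambda x: 3*x**2 - 2
print(newton(f, f1, 2, 3))              # 2, 2.1, 2.0945..., 2.0945514...
```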



55.  The  following  is  the  example  to  which  Newton  applied 
the  process  just  described,  for  the  sake  of  showing  the  prac- 
tical advantages  of  the  method  of  approximation  he  proposed. 

Ex.   x^3 - 2x - 5 = 0.

Here one root lies between 2 and 3, so that we may assume

        α = 2 + δ.

And we find that the first approximation will give us

        a_1 = 2 + δ_1,    where    δ_1 = - f(2) / f'(2).

Now we have

        f(x)  = x^3 - 2x - 5,
        f'(x) = 3x^2 - 2,

whence          f(2) = -1,      f'(2) = 10;

so that we obtain

        δ_1 = 1/10 = 0.1,     and     a_1 = 2.1.

Again, we shall find, after a second application of the process,

        δ_2 = - f(2.1) / f'(2.1)
            = - 0.061 / 11.23
            = - 0.0054 . . . nearly;

so that we obtain

        a_2 = a_1 + δ_2
            = 2.1 - 0.0054
            = 2.0946.



Continuing the approximation, we find that

        δ_3 = - f(2.0946) / f'(2.0946)
            = - 0.000541708 / 11.16196
            = - 0.00004853 nearly;

so that we obtain

        a_3 = 2.0946 - 0.00004853
            = 2.09455147.

And the approximation might be continued if required.

56.  It was stated that, in general, the approximate values a_1, a_2, a_3, . . . . would converge towards the true value of the root α.  This however depends on the possibility of rejecting the succeeding terms of the equation in comparison with the second term, in the approximation to the quantity δ, which is the error of the preceding approximate value of the root.  Now if we denote the root by a + δ, the two first terms of the equation will be f(a) + δ f'(a).  But the succeeding terms will contain higher powers of δ, so that in general they will be much smaller than the second term.  If, however, we happen to have f'(a) very small, the second term may be of the same order as the third, or even of an inferior order: so that our mode of approximation is in this case totally incorrect.  Now this will take place whenever the original equation f(x) = 0 has another root nearly equal to α; for then f'(x) = 0 has a root nearly equal to α, and therefore not differing much from a: so that f'(a) will be a very small quantity.  For if we call that root a + μ, where μ is very small, we obtain f'(a + μ) = 0, or

        f'(a) + terms involving μ = 0,



so that f'(a) is of the order of μ.  There are also other cases in which f'(a) may be very small.  It is therefore necessary
to  find  some  criterion  with  respect  to  the  applicability  of 
Newton's  method.  The  following  is  the  reasoning  of  La- 
grange on  this  subject;  but  no  certain  criterion  was  given, 
until  the  aspect  of  this  branch  of  analysis  was  so  completely 
changed  by  the  discoveries  of  Fourier. 

57.  In order that the method of approximation may be applied with safety, it is necessary that the succeeding approximation shall give a result a + δ which is nearer the truth than the preceding result a: so that the condition of applying the method with certainty is, that we shall have a + δ differing less from the root α than a does, the algebraical sign of that difference being immaterial.  In other words, we must have

        (a + δ - α)^2 < (a - α)^2,
or      2δ (a - α) + δ^2 = neg.,
or      1/2 - (α - a)/δ = neg.

Now, if α, a', a'', . . . . denote the roots of the equation f(x) = 0, we have the identity

        f(x) = (x - α)(x - a')(x - a'') . . . .,

and by differentiating this equation, after taking the logarithms of both sides, we find

        f'(x)/f(x) = 1/(x - α) + 1/(x - a') + . . . .,



and     f'(a)/f(a) = 1/(a - α) + 1/(a - a') + . . . .
                   = 1/(a - α) + R, suppose.

And our condition will now take the form

        1/2 - 1 - R (a - α) = neg.,
or      1/2 + R (a - α) = pos.

Hence we must either have the two quantities

        R,  and  a - α,

of the same sign; or, if they have different signs, their product must be less than 1/2, neglecting its sign.

It is easy to form a posteriori equations where that condition will not hold; we have only to suppose that the difference between α and a' is very small, and that a lies between them, but very near to a'; the rest of the roots being such as to give small terms in the value of R.  Now in this case the signs of R, and a - α, will be different; for the sign of R will depend on that of its first term 1/(a - a'), which differs in sign from a - α: also we may suppose R so large, that the product R (a - α) shall be numerically greater than 1/2.

Again, if there are two impossible roots of the form ρ (cos θ ± √-1 sin θ), we shall have two corresponding terms of R,

        1/(a - ρ cos θ - √-1 ρ sin θ) + 1/(a - ρ cos θ + √-1 ρ sin θ)
            = 2 (a - ρ cos θ) / {(a - ρ cos θ)^2 + ρ^2 sin^2 θ};



and if we have sin θ very small, the two terms will be nearly

        2 / (a - ρ cos θ);

so that if a lies between α and ρ cos θ, and is very nearly equal to the latter, the value of R is of the same kind as before, and the condition will not be satisfied.

It  seems  impossible,  then,  to  establish  a  criterion,  without 
a  previous  knowledge  of  the  roots  of  the  equation.  But 
there  is  one  case  in  which  it  may  be  shown  that  the  method 
will  necessarily  be  applicable.  If  a  be  greater  than  all  the 
real roots α, a', a'' . . . . and also greater than the possible
parts  of  the  imaginary  roots,  then  every  term  of  R  corre- 
sponding to  a  real  root,  and  the  sum  of  every  pair  corresponding 
to  a  pair  of  impossible  roots,  will  be  positive ;  that  is,  R  will 
be  of  the  same  sign  as  a  —  a.  And,  on  the  contrary,  if  a  is 
less  than  all  the  real  roots,  and  the  possible  parts  of  the  ima- 
ginary roots,  every  term  of  R  will  be  negative ;  but  a  —  a  is 
also  negative,  so  that  the  condition  is  still  satisfied. 


CHAP.  VII. 


ON  THE  COMPLETION  OF    NEWTON'S   METHOD   OF   APPROXI- 
MATION BY  FOURIER. 


58.     Before  entering  upon  the  method  of  Fourier,  it  will  be 
expedient  to  remark,  that  the  separation  of  the  roots  has  now 
been  completely  effected  ;   so  that  by  following  his  plan  we 
can   now   be   assured   that  only   one   root  of   the    equation 
f  (x)  =  0  lies  within  any  interval  (a,  b)  proposed  for  examina- 
tion.  The object of the approximation is then to determine
values  nearer  and  nearer  to   the   root   lying   between  those 
limits,  by  a  method  at  once  regular  in  its  plan  and  rapid  in 
its  effects.     In  this  manner  all  the  digits  of  the  root  will  at 
last  be  found,  if  the  number  of  those  digits  be  limited  ;    or 
the  approximation  may  be  pushed  as  far  as  may  be  thought 
fit,  in  case  the  number  of  digits  of  the  root  be  infinite.      The 
process   of  approximation   is  that  given    by  Newton,  so  far 
as  regards   the   nature   of   the    operations.      But  it  is  only 
under  certain  limitations  that  this  method  can  be  employed 
with  confidence  of  success.      The  question  which  must  first 
be  solved  as   a  preliminary  to  the  application    of  Newton's 
method,  involves  one  of  these  limitations.     It  is  thus  stated 
by  Fourier. 



59.  Though  the  limits  (a,  b)  do  not  comprise  more  than 
one  root  of  the  given  equation,  yet  a  criterion  is  still  wanted 
to  point  out  whether  the  interval  is  sufficiently  small  to  permit 
the  commencement  of  the  method  of  approximation  at  either 
of  the  limits  (a,  b).    What  then  is  the  nature  of  this  criterion? 

60.  We  shall  first  remark,  that  it  will  always  be  possible 
to  reduce  the  last  three  terms  of  the  index-series  for  the  in- 
terval (a,  b)  to  the  three  indices 

0,     0,     1. 

For  since  f(x)  =  0  has  only  one  root  between  the  limits 
(a,  b),  it  follows  that  f'(x)  —  0  cannot  have  a  root  equal  to 
that  root  of  the  proposed  equation  f(x)  =  0.  Hence  we  can 
always  obtain  an  interval,  which  shall  include  the  root  of 
f(x)  =  0,  and  not  include  any  root  of  f'(x)  =  0.  In  other 
words,  the  last  two  terms  of  the  index-series  can  always  be 
rendered  0  and  1. 

But  we  are  not  able  to  state  the  same  proposition  with 
respect to the equation f''(x) = 0; since in some particular
cases  it  may  happen  that  the  root  of  f(x)  ==  0,  comprised 
within  the  interval  (a,  b),  is  also  a  root  of  f"(x)  =  0.  We 
must  inquire  therefore,  whether  there  be  any  common  divisor 
φ(x) of the polynomials f(x) and f''(x); and if so, whether the equation φ(x) = 0 has any root within the interval (a, b).  Now if φ(x) does not exist, or if it exists, but φ(x) = 0 has no root within the interval (a, b), then f''(x) = 0 can have no root equal to the root of f(x) = 0 which we are investigating: and it will be possible to include this root of f(x) = 0 within some interval which excludes all the roots of f''(x) = 0.
In  this  case  then  we  can  reduce  the  three  last  terms  of  the 
index-series  to  the  indices 

0,     0,     1, 




But if the common divisor φ(x) is found to exist, and also φ(x) = 0 to contain a root between the limits (a, b), then there can be no other such root than the very root of f(x) = 0, of which we are in search.

For all the roots of φ(x) = 0 are roots of f(x) = 0, which has only one root within the interval (a, b).  We may, therefore, discover this root by commencing anew with the equation φ(x) = 0, which is of lower dimensions than f(x) = 0.  By this means we can discard such a particular case, inasmuch as it can be reduced to the general case of an inferior equation.

In all cases therefore we shall consider it possible to reduce the three last indices to

        0,     0,     1.

61.  We  shall  now  proceed  to  show,  that  when  this  reduc- 
tion of  the  index-series  has  been  made,  we  may  apply  with 
confidence  the  method  of  approximation  given  by  Newton, 
commencing  the  operation  from  one  of  the  limits  (a,  b). 

Suppose then that for the interval (a, b) we find that the last three terms of the index-series are

        f''(x),    f'(x),    f(x),
          0,         0,        1.

And let the root comprised in this interval be γ; of which c is an approximate value, not lying beyond that interval, but whose extreme values are the limits a and b.  We shall seek the condition that c must satisfy, in order that the approximation may be carried on with safety, commencing with c.

Now if we assume

        γ = c + h,

then    0 = f(γ) = f(c + h);



and by the Newtonian approximation we find the new quantity

        c_1 = c + h_1,

by using the approximate equation

        0 = f(c) + h_1 f'(c)

to obtain the value of h_1, which is supposed to be nearly the correct value of h.

But the correct equation would have been, according to the rule of the remainder of Taylor's series,

        0 = f(c) + h f'(λ),

where λ lies between c and γ.

Now, in order that the approximation may proceed with success, we must have the error of c_1 less than that of c, neglecting signs.  Or, in other words, we must have

        (γ - c)^2 > (γ - c_1)^2,
or      h^2 > (h - h_1)^2,
or      2h h_1 - h_1^2 > 0,
or      h/h_1 - 1/2 = pos.;

that is, we must have

        f'(c)/f'(λ) - 1/2 = pos.

The first remark to be made here is, that it will not be safe to employ an interval (a, b) in which f'(x) changes its sign.  For as λ is unknown, we could not say a priori that f'(c)/f'(λ) was itself a positive quantity, unless we knew that f'(x) did not
change  its  sign  during  that  interval.  Hence  it  becomes 
necessary  to  have  the  last  index  but  one  reduced  to  0. 



62.  But this condition is not sufficient, for we must also have

        f'(c)/f'(λ) - 1/2 = pos.

Now the true equation

        0 = f(c + h)

may be written either in the form

        0 = f(c) + h f'(λ),

or, expanding to one more term, in the form

        0 = f(c) + h f'(c) + (h^2/2) f''(μ),

where μ is also a quantity lying between c and γ.  By the elimination of h from these two equations, we obtain

        0 = 1 - f'(c)/f'(λ) + (1/2) f(c) f''(μ) / {f'(λ)}^2,

so that we have

        f'(c)/f'(λ) - 1/2 = 1/2 + (1/2) f(c) f''(μ) / {f'(λ)}^2;

and our condition, that

        f'(c)/f'(λ) - 1/2 = pos.,

will manifestly be satisfied, if we have

        {f'(λ)}^2 + f(c) f''(μ) = pos.

The second remark to be made here is, that we cannot with safety use any interval in which f''(x) changes its sign.  For as μ is unknown, we cannot say a priori whether f''(μ) will satisfy any condition relating to signs, or not; unless we knew the sign of f''(μ), by knowing that f''(x) keeps its sign for
the  whole  of  the  interval.  Hence  the  last  index  but  two  must 
be  reduced  to  zero. 



63.  But we have still to satisfy the condition

        {f'(λ)}^2 + f(c) f''(μ) = pos.

Now as the values of λ, μ are unknown, the only mode of satisfying with certainty the preceding condition is to make

        f(c) f''(μ) = pos.;

observing that f''(x) preserves its sign during the interval (a, b).

But we have, by the table of signs,

        f''(a) . f''(b) = pos.,
        f(a) . f(b) = neg.;

whence

        f(a) f''(a) / {f(b) f''(b)} = neg.,

and  either  the  numerator  or  denominator  of  this  fraction  must 
be  positive,  whilst  the  other  is  negative. 

The  third  remark  now  to  be  made  is,  that  the  method  of 
Newton  cannot  be  safely  applied  to  both  limits  alike  ;  but  that 
only  one  of  the  limits  can  be  employed  with  certainty  of  success. 
And  the  criterion  for  choosing  that  one  of  the  two  limits, 
suppose  c,  is  that  in  the  series  denoted  by  sign  (c)  the  last 
term  and  the  last  but  two  are  alike. 
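The rule of this Article is easily carried out in practice.  The following is a modern sketch, not part of the original text: it begins the Newtonian approximation at that limit of the interval for which the last term and the last but two of the series of signs agree, that is, at the limit c for which f(c) and f''(c) have the same sign, f'(x) and f''(x) being supposed to keep their signs throughout the interval.

```python
# A modern sketch (not in the original) of Newton's approximation begun at the
# limit chosen by Fourier's third remark.
def fourier_newton(f, f1, f2, a, b, steps):
    c = a if f(a) * f2(a) > 0 else b     # the safe limit: f and f'' of like sign
    for _ in range(steps):
        c = c - f(c) / f1(c)
    return c

f  = lambda x: x**3 - 2*x - 5
f1 = lambda x: 3*x**2 - 2
f2 = lambda x: 6*x
print(fourier_newton(f, f1, f2, 2, 3, 5))   # approaches the root 2.0945514...
```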

64.  We  have  now  shown  that  there  can  always  be  found 
a  limit  c,  such  that  the  Newtonian  approximation  can  be 
commenced  with  certainty  of  success.  In  other  words,  we 
shall  always  be  able  to  deduce  from  the  approximate  value  c 
a second approximation c_1, whose error shall be less than the
error  of  c.      But  there  remains  still  the  question,  whether  we 



can  proceed  with  certainty  to  a  third  approximation  c2,  com- 
mencing at the second approximation c_1.  For this purpose it
will  be  necessary  to  inquire  into  the  nature  of  the  approximation 
cv  which  has  been  obtained  by  the  application  of  Newton's 
method  under  the  limitations  given  by  Fourier.  Recurring 
then  to  the  expanded  forms  of  the  true  equation 

0=/(e  +  h), 
we  find,  on  the  elimination  of  h, 

or,  if  we  take  into  consideration  the  choice  of  the  limit  c,  we 
can  write  this  equation  in  the  form 

that  is,  T 1  =  pos., 

h-hx 

or-AT=pos-; 

and  expressing  this  condition  in  terms  of  c,  c,,  and  y,  we  find 
that 

1_J  =  pos.; 

hence  c,  lies  between  c  and  y. 

The  fourth  remark,  then,  to  be  made  on  the  Newtonian 
process  is,  that  the  approximate  value  given  by  that  method, 
under  the  limitations  of  Fourier,  is  always  on  the  same  side  of 
the  root  as  the  limit  from  which  the  approximation  commenced. 
Hence we have only to write the new limit c_1 for the old
limit  c,  and  the  whole  of  the  preceding  limitations  will  of  course 
be  satisfied  by  the  new  interval ;  so  that  the  process  may  be 
renewed  with  confidence,  commencing  at  the  last  approxima- 
tion c_1.  And in this manner the process can be continued to
any  extent  with  certainty  of  success. 



65.  There  is  yet  one  question  which  must  be  solved  before 
the  Newtonian  approximation  can  be  considered  complete. 
This  relates  to  the  measure  of  the  degree  of  approximation  at 
any  stage  of  the  operation.  We  must  therefore  seek  some  con- 
venient expression  for  the  limit,  which  the  error  cannot  exceed. 
This  limit  will,  in  the  first  instance,  be  the  difference  b  —  a  of 
the  interval,  so  that  we  have  h<b-a;  or  rather  h2 < (b—a)2, 
since  we  are  speaking  of  the  magnitude  of  the  error,  and  not 
of  its  sign.  The  error  of  the  next  approximation  will  be 
h - h_1.  Now we have

        0 = f(c) + h_1 f'(c),
        0 = f(c) + h f'(c) + (h^2/2) f''(μ),

where μ lies between c and c + h, and therefore of course between a and b.  But by subtracting the above equations we obtain

        0 = (h - h_1) f'(c) + (h^2/2) f''(μ),

whence

        (h - h_1)^2 = (h^4/4) {f''(μ)/f'(c)}^2;

that is, the error h - h_1 is of the order h^2, since we shall not, except in particular cases, find f''(μ)/f'(c) to be very large.

The best method for calculating a limit of the error h - h_1 is to take a coefficient K not far from the true value of f''(μ)/f'(c), yet always greater than that true value numerically, and then to consider K^2 h^4/4 as the limit of the square of the error h - h_1.

The necessity of taking notice of the limit of the error is, that we may not perform any useless arithmetical operation, by finding the successive approximations to too many places of decimals.  For in finding the value of h_1, which is - f(c)/f'(c),



we  ought  to  carry  on  the  division  only  to  so  many  decimal 
places  as  must  of  necessity  be  correct ;  and  we  must  conse- 
quently stop  before  the  digit  which  is  of  the  same  order  as 
the  probable  error.  Now,  inasmuch  as  every  error  is  of  the 
same  order  as  the  square  of  the  preceding  one,  it  follows 
that  at  every  new  approximation  the  number  of  correct 
decimals  given  by  the  process  will  become  doubled.  Thus  the 
process  is  not  only  perfectly  certain  to  succeed,  but  the 
rapidity  of  approximation  is  accelerated  continually  ;  and  it  is 
on  this  account  that  it  is  preferable  to  any  other  method  of 
approximation  whatever. 
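The doubling of the number of correct decimals is readily seen in the example of Art. 55.  The following brief check, a modern sketch and no part of the original, compares the successive approximations with the value of the root taken to many places.

```python
# A modern check (not in the original): each error is of the order of the
# square of the preceding one, for x^3 - 2x - 5 = 0.
root = 2.0945514815423265
for a in (2.1, 2.0946, 2.09455147):
    print(abs(a - root))      # about 5e-3, then 5e-5, then 1e-8
```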


CHAP.  VIII. 


ON  THE  METHOD  OF  APPROXIMATION  GIVEN  BY  LAGRANGE, 
AS  SIMPLIFIED  BY  THE  THEOREMS  OF  FOURIER. 


66.  We shall suppose in general that the interval, whose limits are the successive integers c and c + 1, contains at least one root γ of the equation under consideration; in some cases it may contain other roots besides γ, and the method of Fourier will always discover how many.  The object then of this approximation is to discover immediately values nearer and nearer to each of the roots γ, δ, . . . . so contained in the interval (c, c + 1), commencing with the value c.

For this purpose, then, assume

        x = c + 1/x',

and substitute in the proposed equation

        f(x) = 0;

the transformed equation will be

        0 = f(c + 1/x')
          = f(c) + (1/x') f'(c) + (1/x'^2) f''(c)/2 + . . . . + (1/x'^n) f^(n)(c)/(1.2...n);



or, when reduced to the ordinary form,

        x'^n + (f'(c)/f(c)) x'^(n-1) + . . . . + 1/f(c) = 0.

And the roots of this new equation correspond to the roots of the given equation, being connected with them respectively by the equation

        x = c + 1/x'.

Now  if  there  are  k  values  of  x  lying  between  c  and  c  +  1, 

namely γ, δ, . . . . , it follows that there will be exactly

k  values  of  x' ,  which  are  greater  than  unity,  and  which  we 

shall denote by γ', δ', . . . . as corresponding to γ, δ, . . . .

These  values  are  roots  of  the  equation  in  x ,  and  the  integers 
c' }  d' ,  .  .  .  .  next  below  them  may  be  found  by  the  method  of 
Fourier.  If  these  integers  are  all  different,  the  process  now 
becomes  separated  for  each  of  the  roots ;  if  all  or  any  part  of 
them  are  alike,  the  process  is  not  yet  separated  for  the  corre- 
sponding roots.  In  all  cases  we  are  to  proceed  separately  with 
each  one  of  the  different  integers  amongst  the  series  c',  d' 9  .  .  . 
The  following  is  the  process  for  c'. 

Assume

        x' = c' + 1/x'',

and transform the equation as before.  Then as many values as there are of x' lying between c' and c' + 1, so many values
will  there  be  of  x"  greater  than  unity.  Hence,  if  the  method 
of  Fourier  points  out  k'  roots  of  the  equation  in  x' ,  we  shall 
have  to  seek  for  k'  integers  c" ,  d", ....  next  below  the  roots 
of  the  equation  in  x". 

If  there  are  not  k'  different  integers,  we  cannot  completely 
separate the process at present, but we are to proceed sepa-
rately with  each  one  of  the  different  integers  of  the  series 



c" ,  d", It  is  manifest  that  this  process  can  be 

continued  to  any  extent. 

We shall then have, for the root γ,

        γ = c + 1/(c' + 1/(c'' + 1/(c''' + . . . .))).

Now if any of the integers c, c', c'', c''', . . . . be an exact

root  of  the  corresponding  equation,  instead  of  being  merely  a 
limit  of  the  root,  the  continued  fraction  will  close  with  that 
integer,  and  give  the  exact  value  of  the  root.  In  all  other 
cases  we  must  conclude  that  the  root  in  question  is  incom- 
mensurable ;  and  we  may  carry  the  approximation  as  far  as  we 
choose. 

67.  The successive convergents will be

        (1.)   c,
        (2.)   c + 1/c',
        (3.)   c + 1/(c' + 1/c''),
        . . . . . . . .

or, if we assume

        p_1 = c,                q_1 = 1,
        p_2 = c' p_1 + 1,       q_2 = c' q_1,
        p_3 = c'' p_2 + p_1,    q_3 = c'' q_2 + q_1,
        p_4 = c''' p_3 + p_2,   q_4 = c''' q_3 + q_2,
        . . . . . . . .

the successive convergents will be denoted by the vulgar fractions

        p_1/q_1,   p_2/q_2,   p_3/q_3,   p_4/q_4,   . . . .



Now, by eliminating c''' from the equations

        p_4 = c''' p_3 + p_2,
        q_4 = c''' q_3 + q_2,

we have

        p_4/p_3 - q_4/q_3 = p_2/p_3 - q_2/q_3,

or      q_3 p_4 - p_3 q_4 = p_2 q_3 - q_2 p_3.

But, by a similar elimination of c'',

        q_2 p_3 - p_2 q_3 = p_1 q_2 - q_1 p_2.

And lastly, by the elimination of c', we find that

        q_1 p_2 - p_1 q_2 = q_1 = 1.

Hence we have

        p_2/q_2 - p_1/q_1 =   1/(q_1 q_2),
        p_3/q_3 - p_2/q_2 = - 1/(q_2 q_3),
        p_4/q_4 - p_3/q_3 =   1/(q_3 q_4),
        . . . . . . . .

so that we may infer generally

        p_(n+1)/q_(n+1) - p_n/q_n = ± 1/(q_n q_(n+1)).

Hence the difference of the two convergents p_(n+1)/q_(n+1), p_n/q_n, neglecting its sign, will be 1/(q_n q_(n+1)); and of course less than 1/q_n^2.



But by observing that

        γ = c + 1/(c' + 1/(c'' + . . . .)),

and     p_1/q_1 = c,
        p_2/q_2 = c + 1/c',
        . . . . . . . .

we perceive that the convergents are alternately less and greater than the true value γ.  Hence γ will always lie between p_(n+1)/q_(n+1) and p_n/q_n: and consequently the error of p_n/q_n will be less than 1/q_n^2.


68.  The  example  to  which  Lagrange  applies  his  method 
is  the  one  Newton  had  chosen  for  the  illustration  of  his  own 
process  of  approximation. 

The  equation  is 

xz  —  2x  —  5  =  0. 

An  approximate  value,  namely  the  integer  next  below  one 
of  the  roots,  is  2.  And  the  method  of  Fourier  shows  that  the 
other  two  roots  are  imaginary.     Putting  then 

        x = 2 + 1/x',

the transformed equation in x' will be

        x'^3 - 10x'^2 - 6x' - 1 = 0.
Here  the  integer  next  below  the  required  value  of  x'  is  10. 



Putting therefore

        x' = 10 + 1/x'',

we find

        61x''^3 - 94x''^2 - 20x'' - 1 = 0.

And the integer to be taken for the next below x'' is 1.

Assume therefore

        x'' = 1 + 1/x''',

and  proceed  as  before. 

We find for the value of the required root of the proposed equation

        2 + 1/(10 + 1/(1 + 1/(1 + . . . .))).

So that the convergents are

        2/1,   21/10,   23/11,   44/21,   . . . .

The tenth convergent is . . . . ; and of course the error is less than . . . . , that is, less than 0.00000001.

Hence the value of that convergent is correct to seven places of decimals: this value is

        2.0945514,

which  agrees  with  Newton's  method. 
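The recurrence of Art. 67 supplies the convergents at once.  The following is a modern sketch, not part of the original text, applied to the first partial quotients found above.

```python
# A modern sketch (not in the original) of the recurrence p_n = c p_(n-1) + p_(n-2),
# q_n = c q_(n-1) + q_(n-2), for the convergents of a continued fraction.
def convergents(partial_quotients):
    p0, q0 = 1, 0
    p1, q1 = partial_quotients[0], 1
    out = [(p1, q1)]
    for c in partial_quotients[1:]:
        p0, p1 = p1, c * p1 + p0
        q0, q1 = q1, c * q1 + q0
        out.append((p1, q1))
    return out

# the first partial quotients of the root of x^3 - 2x - 5 = 0 are 2, 10, 1, 1, ...
for p, q in convergents([2, 10, 1, 1]):
    print(p, q, p / q)          # 2/1, 21/10, 23/11, 44/21
```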

It  is  however  easily  seen  that  this  method  is  far  inferior  in 
brevity,  in  facility,  and  in  regularity,  to  the  process  of  Newton. 


CHAP.  IX. 


ON  THE  INDIRECT  RULES  FOR  THE  SOLUTION  OF  EQUATIONS 
OF  LOW  DEGREES,  WHICH  HAVE  BEEN  ACCIDENTALLY 
DISCOVERED:  WITH  THE  TRUE  THEORY  CONNECTING 
THESE  METHODS,  NAMELY  THE  APPLICATION  OF  THE 
METHOD  OF  SYMMETRICAL  FUNCTIONS  OF  THE  ROOTS 
TO  THE  SOLUTION  OF  THE  EQUATION  ITSELF:  AND, 
LASTLY,  THE  REASON  WHY  THIS  METHOD  CANNOT  BE 
EXTENDED  BEYOND  THE  FOURTH  DEGREE. 


69.     On  the  solution  of  quadratic  equations  by  the  method 
of  completing  the  square. 

Suppose  that  the  given  equation  is 

x2  +  px  4-  q  =  0. 

The rule directs us to transpose q, and add p^2/4 to both

sides,  in  order  to  render  the  first  side  a  complete  square: 
after  this,  the  extraction  of  the  square  root  of  both  members 
of  the  equation  will  reduce  the  quadratic  to  two  simple  equa- 
tions, owing  to  the  ambiguity  of  sign  on  either  side,  after 
extracting  the  square  root. 



The process is thus indicated:

        x^2 + px + p^2/4 = p^2/4 - q,

        (x + p/2)^2 = p^2/4 - q;

which last equation is equivalent to the two equations

        x + p/2 = + √(p^2/4 - q),
        x + p/2 = - √(p^2/4 - q);

and the two roots obtained are expressed by

        x = - p/2 ± √(p^2/4 - q).


Now this is obviously identical with the process of taking away the second term of the equation, by the method of transformation.

For let     x = y - p/2;

then the transformed equation is

        y^2 = p^2/4 - q,

whence      y = ± √(p^2/4 - q).


70.     On  the  solution  of  cubic  equations  by  the  method 
of  Cardan. 

Let  the  second  term  of  the  proposed  cubic  be  taken  away 



by transforming the equation, if necessary.  The equation is then of the form

        x^3 + qx + r = 0.

Suppose now that for the single symbol x we substitute the sum of two symbols, as α + β.  Then we shall have the equation

        (α + β)^3 + q (α + β) + r = 0,

or,     α^3 + β^3 + 3αβ (α + β) + q (α + β) + r = 0.

Now as x can be divided into two parts in an infinite number of ways, we may make a second assumption concerning these parts; and we shall suppose that they satisfy the condition

        3αβ + q = 0.

The equation between α and β is thus divided into the two others

        3αβ + q = 0,
        α^3 + β^3 + r = 0.

Eliminating β, we have

        α^3 - q^3/(27α^3) + r = 0,
        α^6 + rα^3 = q^3/27,
        α^3 = - r/2 ± √(r^2/4 + q^3/27).

Now  since  a  and  j3  enter  into  the  two  equations  symmetri- 
cally, it  follows  that  we  should  find  the  same  values  for  /33; 
so   that   the  two    roots   may   be    considered  as  representing 



indifferently the two quantities α^3 and β^3.  And we may write

        α^3 = - r/2 + √(r^2/4 + q^3/27),
        β^3 = - r/2 - √(r^2/4 + q^3/27),

and the value of x will be

        x = {- r/2 + √(r^2/4 + q^3/27)}^(1/3) + {- r/2 - √(r^2/4 + q^3/27)}^(1/3).

In finding the value of α, we must recollect that every cube root of any quantity will have three values, of which two are always imaginary; and the third will also be imaginary, if the quantity whose cube root is to be extracted be itself imaginary.  Again, β will have three values, which will correspond respectively to the three values of α.  Hence x will have three values; so that the three roots of the cubic are found by one operation.

Suppose α', β', to be any one pair of values of α and β.  Then, denoting by ρ the quantity

        cos(2π/3) + √-1 sin(2π/3),

we shall have

        ρ^2 = cos(4π/3) + √-1 sin(4π/3).

And the cube roots of 1 are

        1,    ρ,    ρ^2.

Hence the three values of x are

        α' + β',
        ρα' + ρ^2 β',
        ρ^2 α' + ρβ'.



71.     We  shall  in  all  cases  be  able  to  find  by  the  processes 

of arithmetic, this one pair of values α', β', except the quantity

        r^2/4 + q^3/27

be negative.

Putting the sign of q in evidence, we are now to consider the solution of

        x^3 - qx + r = 0,

in the case where we have

        r^2/4 - q^3/27 = neg.,

or      (q/3)^3 > (r/2)^2,

neglecting the sign of r.

Now, in this case, we have

        α^3 = - r/2 + √-1 . √(q^3/27 - r^2/4).

Assume now cos θ = - (r/2) / (q/3)^(3/2); and we shall obtain

        α^3 = (q/3)^(3/2) (cos θ + √-1 sin θ),

obtaining one value by Demoivre's theorem.




And the corresponding value for β will be

        β = (q/3) . (1/α)
          = √(q/3) (cos θ/3 - √-1 sin θ/3).

Hence the three values of x are

        2√(q/3) . cos(θ/3),
        2√(q/3) . cos((2π + θ)/3),
        2√(q/3) . cos((4π + θ)/3).

This  case  was  termed  by  the  old  algebraists  the  irreducible 
case  of  Cardan's  rule.  We  may  remark  that  this  happens 
when  the  three  roots  are  all  real. 
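The whole rule, including the trigonometric treatment of the irreducible case, may be gathered into a short computation.  The following is a modern sketch, not part of the original text; the trial equations are chosen merely for illustration.

```python
# A modern sketch (not in the original) of Cardan's rule for x^3 + qx + r = 0,
# using the trigonometric form of Art. 71 in the irreducible case.
import cmath
import math

def cardan(q, r):
    disc = r * r / 4 + q ** 3 / 27
    if disc >= 0:
        s = -r / 2 + math.sqrt(disc)
        alpha = math.copysign(abs(s) ** (1 / 3), s)            # one real cube root
        if alpha:
            beta = -q / (3 * alpha)
        else:                                                   # only when q = 0
            beta = math.copysign(abs(r) ** (1 / 3), -r)
        rho = cmath.exp(2j * math.pi / 3)                       # a cube root of unity
        return [alpha + beta, rho * alpha + rho**2 * beta, rho**2 * alpha + rho * beta]
    # irreducible case (three real roots): here q is negative, say q = -q'
    qp = -q
    theta = math.acos(-(r / 2) / (qp / 3) ** 1.5)
    m = 2 * math.sqrt(qp / 3)
    return [m * math.cos((theta + 2 * math.pi * k) / 3) for k in range(3)]

print(cardan(-7, 6))       # x^3 - 7x + 6 = 0: roots 2, -3, 1 (irreducible case)
print(cardan(6, -20))      # x^3 + 6x - 20 = 0: real root 2
```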


72.     The  solution  of  a  biquadratic  equation  by  the  method 
of  Ferrari. 

Let  the  second  term  of  the  given  equation  be  taken  away, 
if necessary: the equation is then of the form

        x^4 + qx^2 + rx + s = 0.

Transpose all the terms but the first, and then add to both sides the quantity

        2nx^2 + n^2.

The equation will then take the form

        (x^2 + n)^2 = (2n - q) x^2 - rx + n^2 - s;

and since n is arbitrary, it will now be our object to determine n, so that the second side shall be an exact square; and then, by extracting the square root, the biquadratic is reduced to the two quadratic equations

        x^2 + n = ± {(2n - q)^(1/2) x - (n^2 - s)^(1/2)}.



Now  the  condition  for  the  quantity 

(2n - q)x^2 - rx + n^2 - s,
to  be  a  perfect  square,  is  that 

(2n - q)(n^2 - s) = r^2/4,

or (n - q/2)(n^2 - s) = r^2/8;

that is, n must be a root of the cubic equation

n^3 - (q/2)n^2 - sn + qs/2 - r^2/8 = 0.

There  will  always  be  one  real  value  of  n,  and  this  can  be 
found  by  Cardan's  rule.  And,  consequently,  the  biquadratic 
can  always  be  reduced  to  two  quadratics,  and  its  four  roots 
determined. 

This  method  has  sometimes  been  ascribed  to  Waring, 
instead  of  its  real  author,  who  was  Cardan's  pupil. 
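A brief Python sketch (an illustration of mine, not the author's) may
make Ferrari's reduction concrete: the auxiliary cubic in n is solved
numerically (numpy.roots stands in for Cardan's rule), and the two
quadratics are then solved in turn.

    import cmath
    import numpy as np

    def ferrari_roots(q, r, s):
        """Four roots of x^4 + q*x^2 + r*x + s = 0 by Ferrari's reduction (a sketch)."""
        # n must satisfy n^3 - (q/2) n^2 - s n + (q*s/2 - r^2/8) = 0
        n = min(np.roots([1, -q / 2, -s, q * s / 2 - r ** 2 / 8]),
                key=lambda t: abs(t.imag)).real   # a real root always exists
        A = cmath.sqrt(2 * n - q)                 # so that A^2 = 2n - q
        B = r / (2 * A) if abs(A) > 1e-12 else cmath.sqrt(n * n - s)
        # then (A*x - B)^2 = (2n - q) x^2 - r x + n^2 - s, and x^2 + n = +/-(A*x - B)
        roots = []
        for sign in (+1, -1):
            b1, c1 = -sign * A, n + sign * B      # x^2 + b1*x + c1 = 0
            d1 = cmath.sqrt(b1 * b1 - 4 * c1)
            roots += [(-b1 + d1) / 2, (-b1 - d1) / 2]
        return roots

    # for example, x^4 - 5x^2 + 4 = 0 has the roots +1, -1, +2 and -2
    print(ferrari_roots(-5.0, 0.0, 4.0))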

73.  We  shall  now  point  out  why  the  equation  for  the 
determination  of  n  is  a  cubic. 

If  we  examine  the  two  quadratics  to  which  the  biquadratic 
has  been  reduced,  we  find  for  the  last  term  of  one  of  them, 
the  quantity 

n - √(n^2 - s),

and,  consequently,  this  quantity  is  the  product  of  the  two 
roots  of  that  quadratic,  that  is,  of  two  of  the  four  roots  of  the 
given  biquadratic.  Hence,  if  we  denote  these  four  roots  by 
a,  b,  c,  d,  we  shall  have 

n - √(n^2 - s) = ab.

Also,  the  last  term  of  the  other  quadratic  will  be  equal  to 
the  product  of  the  remaining  two  roots ;  that  is, 

n + √(n^2 - s) = cd.



Hence,  by  addition,  we  find  that 

2n  =  ab  +  cd. 

But  since  there  was  no  reason  for  choosing  the  particular 
pair  a,  b  of  the  four  roots  for  one  of  the  quadratics,  and 
leaving  the  pair  c,  d  for  the  other,  it  is  evident  that  n  must 
have  the  three  values 

½(ab + cd),   ½(ac + bd),   ½(ad + bc);
and,  consequently,  the  equation  for  determining  n  is  a  cubic. 
These  three  values  correspond  to  the  three  different  pairs  of 
quadratics  to  which  the  biquadratic   can   be  reduced;    and 
which  are  thus  indicated  by  writing  their  roots 

(ab, cd),   (ac, bd),   (ad, bc).
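The following Python check (mine, purely for illustration) verifies
numerically, for one particular biquadratic, that the three quantities
½(ab + cd), ½(ac + bd), ½(ad + bc) are indeed the roots of the cubic
in n of the last article.

    import numpy as np

    # a depressed biquadratic x^4 + q x^2 + r x + s chosen with known roots a, b, c, d
    a, b, c, d = 1.0, 2.0, 3.0, -6.0              # note a + b + c + d = 0
    q = a*b + a*c + a*d + b*c + b*d + c*d
    r = -(a*b*c + a*b*d + a*c*d + b*c*d)
    s = a*b*c*d

    n_values = sorted([(a*b + c*d) / 2, (a*c + b*d) / 2, (a*d + b*c) / 2])
    cubic_roots = sorted(np.roots([1, -q/2, -s, q*s/2 - r**2/8]).real)

    print(n_values)                               # [-8.0, -4.5, 0.0]
    print(np.allclose(n_values, cubic_roots))     # True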

74.     The  solution  of  a  biquadratic  by  the  method  of  Des 

Cartes. 

Let  the  proposed  biquadratic,  deprived  of  its  second  term, 

be 

x^4 + bx^2 + cx + d = 0.

Now,  since  the  first  side  of  this  equation  may  always  be 
considered  as  the  product  of  two  quadratic  factors  with  real 
coefficients,  and  since  the  coefficient  of  the  second  term  will 
be  the  sums  of  the  coefficients  of  the  second  terms  of  the 
quadratic  factors;  it  follows  that,  because  the  second  term 
has  been  taken  away  by  transformation,  we  may  assume 

x^4 + bx^2 + cx + d = (x^2 + √y.x + z)(x^2 - √y.x + z'),
where  y  is  some  positive  quantity,  and  z,  z'  are  either  positive 
or  negative.     Hence,  performing  the  multiplication  and  equa- 
ting the  coefficients,  we  must  have 

z  +  z'  —  y  =  b, 

(z' - z)√y = c,
zz' = d:



which three equations are sufficient for determining y, z,
and  z'. 

For,  by  the  first  equation, 

(z + z')√y = (y + b)√y;

and,  by  the  second, 

(z' - z)√y = c;

whence, 2z'√y = (y + b)√y + c,
and 2z√y = (y + b)√y - c;

and, by multiplication,

4zz'y = (y + b)^2 y - c^2,

or, observing that zz' = d,

y^3 + 2by^2 + (b^2 - 4d)y - c^2 = 0.

Now  as  the  last  term  of  this  equation  is  essentially  negative, 
there  must  be  at  least  one  real  root  which  is  positive;  which 
may be found.  Also, when y is found, z', z are known by
the  preceding  equations  :  and  the  solution  is  reduced  to  that 
of  the  two  quadratics 

x^2 + √y.x + z = 0,

x^2 - √y.x + z' = 0;
which  give  the  four  roots  required. 

We  observe  that  +  */y  is  the  sum  of  two  of  the  roots  of 

the proposed equation, and as there are (4·3)/(1·2), or six, ways of

combining  the  roots  two  and  two,  and  since  every  such  com- 
bination leaves  a  supplementary  combination  of  the  other  two 
roots,  there  are  three  ways  of  dividing  the  four  roots  into  two 
pairs;  that  is,  there  are  three  ways  of  dividing  the  first  side 
of  the  equation  into  two  quadratic  factors.  It  is  on  this 
account  that  the  equation  for  finding  y  rises  to  the  third 
degree.* 

*  On  this  subject,  see  the  excellent  remarks  in  the  note  upon  Art.  768  of 
Peacock's  Treatise  on  Algebra. 
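As an illustration of my own (with numpy.roots standing in for the
extraction of the positive root of the cubic), Des Cartes' process runs
as follows in Python: find a positive y, recover z and z' from the
relations above, and solve the two quadratic factors.

    import cmath
    import numpy as np

    def descartes_roots(b, c, d):
        """Four roots of x^4 + b*x^2 + c*x + d = 0 by Des Cartes' factorisation."""
        # y^3 + 2b y^2 + (b^2 - 4d) y - c^2 = 0 has a real positive root (c != 0 assumed)
        y = next(t.real for t in np.roots([1, 2 * b, b * b - 4 * d, -c * c])
                 if abs(t.imag) < 1e-9 and t.real > 0)
        ry = y ** 0.5
        z  = ((y + b) * ry - c) / (2 * ry)        # 2 z  sqrt(y) = (y + b) sqrt(y) - c
        zp = ((y + b) * ry + c) / (2 * ry)        # 2 z' sqrt(y) = (y + b) sqrt(y) + c
        out = []
        for p, last in ((ry, z), (-ry, zp)):      # x^2 + sqrt(y) x + z and x^2 - sqrt(y) x + z'
            disc = cmath.sqrt(p * p - 4 * last)
            out += [(-p + disc) / 2, (-p - disc) / 2]
        return out

    # for example, x^4 - 25x^2 + 60x - 36 = 0 has the roots 1, 2, 3 and -6
    print(descartes_roots(-25.0, 60.0, -36.0))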



75.     The  solution  of  a  biquadratic  by  the  method  of  Euler. 
As  before,  let  the  equation,  deprived  of  its  second  term,  be 
x^4 + bx^2 + cx + d = 0,

and suppose x to consist of three parts α, β, γ;

∴ x = α + β + γ,

∴ x^2 = α^2 + β^2 + γ^2 + 2(βγ + αγ + αβ)
      = P + 2(βγ + αγ + αβ), for brevity.

Transposing P, and squaring, and writing Q for β^2γ^2
+ α^2γ^2 + α^2β^2, we have

x^4 - 2Px^2 + P^2 = 4Q + 8αβγ(α + β + γ)
                  = 4Q + 8αβγ.x;

∴ x^4 - 2Px^2 - 8αβγ.x + P^2 - 4Q = 0,

which, compared with the proposed equation, gives

α^2 + β^2 + γ^2 = P = -b/2,

β^2γ^2 + α^2γ^2 + α^2β^2 = Q = ¼(P^2 - d) = (1/16)(b^2 - 4d),

and αβγ = -c/8.

From these equations it is evident that α^2, β^2, γ^2, are the
roots of the auxiliary equation

y^3 + (b/2)y^2 + ((b^2 - 4d)/16)y - c^2/64 = 0;

from   which   equation   they  may  be  found,  and   thence  the 
values  of  x  determined. 

Since the last term of the cubic is negative, there is at
least one real positive root, l suppose; and the other two are
either both positive, both negative, or both impossible; and
they may be denoted, in the three cases, by m, n; by -m,
-n; and by ρ(cos θ ± √-1 sin θ), where ρ is a positive
number.



Hence we shall have α = ±√l, and β, γ, respectively
equal to

±√m, ±√n in the first case,

±√-m, ±√-n in the second;

and, in the third case,

β = ±√ρ (cos (θ/2) + √-1 sin (θ/2)),

γ = ±√ρ (cos (θ/2) - √-1 sin (θ/2)).

Now x = α + β + γ; and the only restriction upon the
values of α, β, γ, is given by the equation αβγ = -c/8, which
shows that the product αβγ must have a different sign from
that of c.

When c is negative, the product αβγ must be positive; and
applying this restriction to each of the three cases, we have,
in the first case,

x = √l ± (√m + √n),  or  -√l ± (√m - √n);

in the second case,

x = √l ± √-1 (√m - √n),  or  -√l ± √-1 (√m + √n);

and in the third case,

x = √l ± 2√ρ cos (θ/2),  or  -√l ± 2√-1 √ρ sin (θ/2).

And, on the contrary, when c is positive, it will be found
that the values of x are the same as the above, changing only
the sign of √l in each of them.

Hence  we  have  only  four  roots  in  each  case,  although  there 
were  apparently  eight  roots  if  there  had  been  no  restriction  to 
the values of α, β, γ.
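A Python sketch (mine, not the author's) of Euler's procedure: the
auxiliary cubic is solved numerically, square roots of its three roots
are taken, and the signs are fixed so that αβγ = -c/8 before forming
the four combinations α + β + γ, α - β - γ, -α + β - γ, -α - β + γ.

    import cmath
    import numpy as np

    def euler_roots(b, c, d):
        """Four roots of x^4 + b*x^2 + c*x + d = 0 by Euler's method (a sketch)."""
        # alpha^2, beta^2, gamma^2 are the roots of the auxiliary cubic of Art. 75
        y1, y2, y3 = np.roots([1, b / 2, (b * b - 4 * d) / 16, -c * c / 64])
        alpha, beta, gamma = (cmath.sqrt(y) for y in (y1, y2, y3))
        # the only restriction: the product alpha*beta*gamma must equal -c/8
        if ((alpha * beta * gamma) * (-c / 8)).real < 0:
            gamma = -gamma
        return [alpha + beta + gamma,
                alpha - beta - gamma,
                -alpha + beta - gamma,
                -alpha - beta + gamma]

    # for example, x^4 - 25x^2 + 60x - 36 = 0 has the roots 1, 2, 3 and -6
    print(euler_roots(-25.0, 60.0, -36.0))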




76.  We  shall  now  proceed  to  show  the  general  principle 
upon  which  these  methods  proceed.  It  is  that  of  symmetrical 
functions,  as  before  stated. 

In  general,  we  have  one  simple  equation  between  the  roots 
of  the  given  equation,  namely,  that  their  sum  is  equal  to  the 
coefficient  of  the  second  term  with  its  sign  changed.  Now  the 
object  of  the  general  method  of  solution,  by  means  of  symme- 
trical functions,  is  to  form  n  —  1  other  simple  equations 
between  the  roots.  This  is  done  by  assuming  z  equal  to  some 
linear  function  of  the  roots,  and  forming  the  equation  in  z, 
which  must  be  one  of  n  —  1  dimensions,  if  possible,  in  order 
that  we  may  have  n  —  1  values  of  z. 

By  substituting  these  n  —  1  values  of  z,  and  permuting  the 
roots  of  the  given  equation  in  the  equation 

z  =  linear  function  of  the  roots, 

we  shall  obtain  n  —  1  simple  equations  between  the  roots, 
and  thus  be  enabled,  with  the  addition  of  the  equation 

-p_1 = sum of the roots,

to  find  all  the  roots  of  the  given  equation.  The  process  will 
be  best  exemplified  in  the  equations  of  low  degrees  ;  and  it 
will  be  shown  that  it  fails  to  be  of  any  practical  utility  for 
equations  beyond  the  fourth  degree. 

The  above  method  is  due  to  Lagrange,  who  also  demon- 
strated its  failure  for  higher  orders. 

77.     Solution  of  a  cubic  by  the  method  of  Lagrange. 
Let  the  three  roots  of  the  cubic  equation  be  a,  b,  c ;    the 
equation  itself  being 

x^3 + qx + r = 0.

Then we have a + b + c = 0; and if we can form two



other simple equations involving a, b, c, we shall be able to
find  the  roots  by  the  known  method  of  elimination  between 
simple  equations. 

Let  one  of  these  simple  equations  be 

⅓(a + bρ + cρ^2) = z;

then, since there is nothing to distinguish one root from
another, it follows that the elimination, which gives the final
equation in z, must also give z the values

⅓(c + aρ + bρ^2),
⅓(b + cρ + aρ^2),
⅓(a + cρ + bρ^2),
⅓(b + aρ + cρ^2),
⅓(c + bρ + aρ^2);

and the final equation will therefore be of six dimensions.

But if we assume

ρ = cos (2π/3) + √-1 sin (2π/3), (whence ρ^3 = 1),

⅓(a + bρ + cρ^2) = z_1,
⅓(a + cρ + bρ^2) = z_2,

it will be evident, by inspection, that the six values of z are

z_1,  z_1 ρ,  z_1 ρ^2,
z_2,  z_2 ρ,  z_2 ρ^2;

and that u (= z^3) has only the two values z_1^3, z_2^3.  Also we
shall have

⅓(a + b + c) = 0,
⅓(a + bρ + cρ^2) = z_1,
⅓(a + cρ + bρ^2) = z_2;



and if we observe that ρ is an impossible root of ρ^3 - 1 = 0,
and therefore satisfies (ρ^3 - 1)/(ρ - 1) = ρ^2 + ρ + 1 = 0, we shall have

∴ a = z_1 + z_2,
  b = ρ^2 z_1 + ρ z_2,
  c = ρ z_1 + ρ^2 z_2.

Now let u^2 + Pu + Q = 0 be the equation in which u has
the values z_1^3, z_2^3; then

-P = z_1^3 + z_2^3
   = (1/27){2Σ(a^3) + 12abc + 3(ρ + ρ^2)Σ(a^2 b)},

and, since abc = -r, Σ(a^3) = -3r,
ρ + ρ^2 = -1, Σ(a^2 b) = 3r,

∴ P = + r.

Again, Q = z_1^3 z_2^3;

but we have z_1 z_2 = (1/9){Σ(a^2) + (ρ + ρ^2)Σ(ab)},

and, since Σ(a^2) = -2q, Σ(ab) = q,

z_1 z_2 = -q/3, and Q = -q^3/27;

and the equation determining u is

u^2 + ru - q^3/27 = 0;

whence u = -r/2 ± √(r^2/4 + q^3/27).


Also, the three roots are

a = z_1 + z_2,   b = ρ^2 z_1 + ρ z_2,   c = ρ z_1 + ρ^2 z_2,

where z_1, z_2 are cube roots of the two values of u, so taken
that z_1 z_2 = -q/3.

It is evident that the three roots cannot be possible, unless
z_1, z_2 be both impossible.  But in this case they will be so.
For we may represent z_1, z_2, by R(cos θ ± √-1 sin θ); and
comparing u^2 + ru - q^3/27 = 0 with (u - z_1^3)(u - z_2^3), we have

2R^3 cos 3θ = -r,

R^6 = -q^3/27;

whence R and θ can be found, since q is, in this case, negative.
And the values of x will be 2R cos θ, 2R cos (2π/3 + θ), and
2R cos (4π/3 + θ).
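As a small numerical check (mine, not in the treatise), the following
Python lines verify for one cubic that z_1^3 + z_2^3 = -r and
z_1^3 z_2^3 = -q^3/27, so that z_1^3, z_2^3 are the roots of
u^2 + ru - q^3/27 = 0, and that the roots of the cubic are recovered
as z_1 + z_2, ρ^2 z_1 + ρ z_2, ρ z_1 + ρ^2 z_2.

    import cmath

    rho = cmath.exp(2j * cmath.pi / 3)            # cube root of unity

    # roots of x^3 - 7x + 6 = 0, for which q = -7, r = 6 and a + b + c = 0
    a, b, c = 1.0, 2.0, -3.0
    q, r = -7.0, 6.0

    z1 = (a + b * rho + c * rho ** 2) / 3
    z2 = (a + c * rho + b * rho ** 2) / 3

    print(z1 ** 3 + z2 ** 3)                      # ~ -6      = -r
    print(z1 ** 3 * z2 ** 3)                      # ~ 12.7037 = -q^3/27
    print(z1 + z2, rho ** 2 * z1 + rho * z2, rho * z1 + rho ** 2 * z2)   # ~ 1, 2, -3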

78.  The solution of a biquadratic by the method of Lagrange.

Let the given biquadratic equation be

x^4 + qx^2 + rx + s = 0;
and, as before, assume one of the simple equations to be

¼(a + bρ + cρ^2 + dρ^3) = z;

and there will in general be 24 values of z, by interchanging
a, b, c, d.

But if we assume that ρ = -1, and that

z_1 = ¼{(a + b) - (c + d)},
z_2 = ¼{(a + c) - (b + d)},
z_3 = ¼{(a + d) - (b + c)},

then it is evident, from inspection, that z has only the six
values

± z_1,  ± z_2,  ± z_3;

and, therefore, u (= z^2) has only the three values z_1^2, z_2^2, z_3^2.

Let u^3 + Pu^2 + Qu + R = 0 be the equation determining
u; then, since a + b + c + d = 0,



z_1 = ½(a + b),   z_2 = ½(a + c),   z_3 = ½(a + d);

whence z_1^2 + z_2^2 + z_3^2 = -P
     = ¼{3a^2 + b^2 + c^2 + d^2 + 2(ab + ac + ad)};

and, as it must involve the roots symmetrically, we may interchange
a with b, c, d, successively; whence, adding the four
identical values of P, and taking the fourth part,

-P = (1/16){6Σ(a^2) + 4Σ(ab)} = -q/2;
∴ P = + q/2;

again, by a similar process, we have

Q = (q^2 - 4s)/16;

also z_1 z_2 z_3 = ⅛(a + b)(a + c)(a + d)

    = ⅛{a^3 + a^2(b + c + d) + Σ(abc)}

    = -r/8, since b + c + d = -a;

whence R = -r^2/64.
And the equation in u is

u^3 + (q/2)u^2 + ((q^2 - 4s)/16)u - r^2/64 = 0.

The roots of this equation can be found, and we then have

0   = ¼(a + b + c + d),
z_1 = ¼(a + b - c - d),
z_2 = ¼(a + c - b - d),
z_3 = ¼(a + d - b - c);

whence a, b, c, d will be known.



The reducing cubic, in this case, is the same with that
of Euler; and we must apply the same restriction to the
double values of z_1, z_2, z_3, as before.  For we have z_1 z_2 z_3
= -r/8, and therefore we must have such values of z_1, z_2, z_3,
as shall make their product of a different sign from that of the
coefficient r.
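A small Python check of the counting that underlies the method (my own
illustration): under the 24 permutations of the roots, z = ¼(a - b + c - d)
takes only six values, u = z^2 only three, and the product
½(a+b) · ½(a+c) · ½(a+d) has the sign opposite to that of r, as required above.

    from itertools import permutations

    # roots of x^4 - 25x^2 + 60x - 36 = 0, for which r = 60
    roots = (1.0, 2.0, 3.0, -6.0)
    r = 60.0

    # z = (a - b + c - d)/4, i.e. rho = -1 in (a + b*rho + c*rho^2 + d*rho^3)/4
    zs = {round((a - b + c - d) / 4, 9) for a, b, c, d in permutations(roots)}
    us = {round(z * z, 9) for z in zs}
    print(len(zs), len(us))                       # 6 values of z, but only 3 of u

    z1, z2, z3 = [(roots[0] + x) / 2 for x in roots[1:]]     # (a+b)/2, (a+c)/2, (a+d)/2
    print(z1 * z2 * z3, -r / 8)                   # both -7.5: opposite in sign to r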

The  theory  of  symmetrical  combinations,  which  is  found 
successful  in  resolving  equations  of  the  third  and  fourth  degree, 
can  be  applied  to  those  of  higher  orders.  But,  to  use  the 
words of Lagrange, "passé le quatrième degré, la méthode,
quoiqu'applicable en général, ne conduit plus qu'à des
équations résolvantes de degrés supérieurs à celui de la
proposée" (that is, beyond the fourth degree the method,
though applicable in general, leads only to resolvent equations
of degrees higher than that of the proposed one).  Thus, in
the case of equations of the fifth degree,
the  theory  leads  to  a  reducing  biquadratic ;  but  to  obtain  its 
coefficients  we  must  solve  an  equation  of  the  sixth  degree. 
So  that  the  method  is  useless  in  practice. 

There  is,  however,  no  doubt  that  the  doctrine  of  permu- 
tations, and  of  symmetrical  combinations  of  the  roots,  contains 
the  principles  from  which  we  are  to  expect  the  resolution  of 
equations  of  the  higher  orders,  if  that  problem  be  possible. 

In  the  12th  volume  of  the  Italian  Society,  and  in  a  work 
published  at  Modena  in  1813,  M.  Paolo  Ruffini  has  proved, 
that  no  function  of  five  letters  is  susceptible  of  only  three  or 
four  different  values  by  the  interchange  of  the  letters.  And 
M. Cauchy, in the 16th volume of the Journal de l'École
Polytechnique, has shown, that if a function of n letters has
more than two values, it has at least k values, k being the
prime number next below n.
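These statements about the number of values of a function under
interchange of its letters are easy to test experimentally; the little
Python count below (my own illustration, with arbitrarily chosen sample
functions of five letters) finds one value for a symmetric function, two
for the alternating product of differences, and ten for an unsymmetrical
example, but never three or four.

    from itertools import combinations, permutations

    letters = (2.0, 3.0, 5.0, 7.0, 11.0)          # five distinct sample values

    def count_values(f):
        """Distinct values taken by f under all 120 interchanges of the letters."""
        return len({round(f(*p), 6) for p in permutations(letters)})

    def symmetric(a, b, c, d, e):                 # symmetric: a single value
        return a + b + c + d + e

    def alternating(*xs):                         # product of differences: two values
        v = 1.0
        for x, y in combinations(xs, 2):
            v *= x - y
        return v

    def unsymmetric(a, b, c, d, e):               # an arbitrary unsymmetrical function
        return a * b + c * d * e

    print(count_values(symmetric), count_values(alternating), count_values(unsymmetric))
    # prints 1 2 10: never three or four values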

On  these  grounds  it  has  been  inferred,  that  the  resolution 
of  equations  of  the  fifth  degree,  and  consequently  of  the 
higher  orders,  is  in  reality  an  impossible  problem.     (Lacroix, 



Compl. des Élém. d'Algèbre, p. 61).  But it must be ob-
served, that  it  is  here  assumed  that  the  coefficients  of  the 
reducing  equation  are  symmetrical  functions  of  all  the  roots. 
It  may,  however,  be  possible  that  the  resolution  might  be 
effected  by  means  of  equations,  whose  coefficients  are  only 
partial  expressions  susceptible  of  several  values.  On  this 
supposition  the  problem  may  perhaps  be  considered  as  not 
altogether  impossible. 


THE  END. 


METCALFE, PRINTER, CAMBRIDGE.

