Cambridge Tracts in Mathematics
and Mathematical Physics

GENERAL EDITORS
J. G. LEATHEM, M.A.
E. T. WHITTAKER, M.A., F.R.S.

No. 10

AN INTRODUCTION
TO THE STUDY OF
INTEGRAL EQUATIONS

by
MAXIME BÔCHER, B.A., Ph.D.
Professor of Mathematics in Harvard University

Cambridge University Press Warehouse
C. F. CLAY, Manager
London: Fetter Lane, E.C.
Edinburgh: 100, Princes Street

1909

Price 2s. 6d. net

Cambridge Tracts in Mathematics
and Mathematical Physics

GENERAL EDITORS
J. G. LEATHEM, M.A.
E. T. WHITTAKER, M.A., F.R.S.

No. 10

An Introduction to
the Study of Integral Equations

CAMBRIDGE UNIVERSITY PRESS WAREHOUSE,
C. F. CLAY, MANAGER.
London: FETTER LANE, E.C.
Edinburgh: 100, PRINCES STREET.

Berlin: A. ASHER AND CO.
Leipzig: F. A. BROCKHAUS.
New York: G. P. PUTNAM'S SONS.
Bombay and Calcutta: MACMILLAN AND CO., LTD.

[All Rights reserved.]

AN    INTRODUCTION 

TO   THE    STUDY    OF 

INTEGRAL    EQUATIONS 


by 
MAXIME BÔCHER, B.A., Ph.D.

Professor  of  Mathematics  in  Harvard  University 


CAMBRIDGE: 

at  the  University   Press 
1909 


Cambridge:
PRINTED BY JOHN CLAY, M.A.
AT THE UNIVERSITY PRESS.


PREFACE 

IN this tract I have tried to present the main portions of the theory of integral equations in a readable and, at the same time, accurate form, following roughly the lines of historical development. I hope that it will be found to furnish the careful student with a firm foundation which will serve adequately as a point of departure for further work in this subject and its applications. At the same time it is believed that the legitimate demands of the more superficial reader, who seeks results rather than proofs, will be satisfied by the precise statement of these results as italicized, and therefore easily recognized, theorems. The index has been added to facilitate the use of the booklet as a work of reference.

In  these  days  of  rapidly  multiplying  voluminous  treatises,  I  hope 
that  the  brevity  of  this  treatment  may  prove  attractive  in  spite  of  the 
lack  of  exhaustiveness  which  such  brevity  necessarily  entails  if  the 
treatment,  so  far  as  it  goes,  is  to  be  adequate. 

I  wish  to  thank  Professor  Max  Mason  of  the  University  of  Wisconsin 
who  has  helped  me  with  some  valuable  criticisms  ;  and  I  shall  be 
grateful  to  any  readers  who  may  point  out  to  me  such  errors  as  still 
remain. 

MAXIME BÔCHER.

HARVARD  UNIVERSITY, 

CAMBRIDGE,  MASS. 

November,  1908. 


CONTENTS

                                                                        PAGE
INTRODUCTION                                                               1
 1. Some preliminary propositions and definitions                         2
 2. Abel's mechanical problem                                              6
 3. Solution of Abel's equation                                            8
 4. Liouville's introduction of integral equations of the second kind    11
 5. The method of successive substitutions                               14
 6. Volterra's treatment of equations of the second kind.  Iterated
      and reciprocal functions                                           19
 7. Linear algebraic equations with an infinite number of variables      24
 8. Fredholm's solution                                                  29
 9. The integral equation with a parameter                               38
10. The fundamental theorem concerning homogeneous integral equations,
      with some applications                                             43
11. Symmetric kernels                                                    46
12. Orthogonal functions                                                 52
13. The integral equation of the first kind whose kernel is finite       60
14. Equations of the first kind whose kernel or whose interval is not
      finite                                                             65
INDEX                                                                    72


AN INTRODUCTION TO THE STUDY OF
INTEGRAL EQUATIONS

Introduction.  The  theory  and  applications  of  integral  equations, 
or,  as  it  is  often  called,  of  the  inversion  of  definite  integrals,  have  come 
suddenly  into  prominence  and  have  held  during  the  last  half  dozen 
years  a  central  place  in  the  attention  of  mathematicians.  By  an 
integral  equation*  is  understood  an  equation  in  which  the  unknown 
function  occurs  under  one  or  more  signs  of  definite  integration. 
Mathematicians  have  so  far  devoted  their  attention  mainly  to  two 
peculiarly  simple  types  of  integral  equations, — the  linear  equations  of 
the  first  and  second  kinds, — and  we  shall  not  in  this  tract  attempt  to 
go  beyond  these  cases.  We  shall  also  restrict  ourselves  to  equations 
in  which  only  simple  (as  distinguished  from  multiple)  integrals  occur. 
This  restriction,  however,  is  quite  an  unessential  one  made  solely  to 
avoid  unprofitable  complications  at  the  start,  since  the  results  we  shall 
obtain  usually  admit  of  an  obvious  extension  to  the  case  of  multiple 
integrals without the introduction of any new difficulties†. In this respect integral equations are in striking contrast to the closely related differential equations, where the passage from ordinary to partial differential equations is attended with very serious complications.

The  theory  of  integral  equations  may  be  regarded  as  dating  back 
at  least  as  far  as  the  discovery  by  Fourier  of  the  theorem  concerning 
integrals  which  bears  his  name ;  for,  though  this  was  not  the  point  of 
view  of  Fourier,  this  theorem  may  be  regarded  as  a  statement  of  the 
solution of a certain integral equation of the first kind‡. Abel and

* The term Integral Equation was suggested by du Bois-Reymond. Cf. Crelle, vol. 103 (1888), p. 228.

† Another extension, in which serious complications do not usually arise, is to systems of integral equations. We do not consider such systems in this tract.

‡ Cf. the closing page of this Tract.



Liouville,  however,  and  after  them  others  began  the  treatment  of 
special  integral  equations  in  a  perfectly  conscious  way,  and  many  of 
them  perceived  clearly  what  an  important  place  the  theory  was  destined 
to  fill*. 

As  we  shall  not,  except  in  one  relatively  unimportant  case,  take  up 
any  of  the  applications  of  the  subject,  it  may  be  well  to  say  explicitly 
that  like  so  many  other  branches  of  analysis  the  theory  was  called  into 
being  by  specific  problems  in  mechanics  and  mathematical  physics. 
This  was  true  not  merely  in  the  early  days  of  Abel  and  Liouville,  but 
also  more  recently  in  the  cases  of  Volterra  and  Fredholm.  Such 
applications  of  the  theory,  together  with  its  relations  to  other  branches 
of analysis†, are what give the subject its great importance.

1.     Some    Preliminary  Propositions   and  Definitions.     In 

order  to  avoid  interruptions  in  later  sections,  we  collect  here  certain 
propositions  of  the  integral  calculus  for  future  reference. 

We shall have to deal with functions of one and of two variables. The independent variables, which we will for the present denote by x and (x, y) respectively, are in all cases real. In fact, in order to avoid unnecessary complications we will assume that, unless the contrary is explicitly stated, all quantities we have to deal with are real.

The range of values of the single argument x is usually

    I        a ≤ x ≤ b.

We shall speak of this in future simply as the interval I.

In the case of functions of two variables, two cases have to be considered. Interpreting (x, y) as rectangular coordinates in a plane, we sometimes consider the square

    S        a ≤ x ≤ b,   a ≤ y ≤ b,

and sometimes the triangle

    T        a ≤ y ≤ x ≤ b.

It should be noticed that the three regions we have just defined, I, S, T, are closed regions, that is they include the points of their boundaries.

In  order  to  avoid  long  circumlocutions  we  lay  down  the  following 

* Cf., besides the article of du Bois-Reymond already cited, some remarks by Rouché, Paris C. R. vol. 51 (1860), p. 126.
† Cf., for instance, much of Hilbert's work.



DEFINITION. We say that the discontinuities of a function of (x, y) are regularly distributed in S or in T if they all lie on a finite number of curves with continuously turning tangents no one of which is met by a line parallel to the axis of x or of y in more than a finite number of points.

In order to make the enunciation of some of our results simpler, we will assume once for all that the functions we deal with are defined even at the points of discontinuity, at least in the cases where they remain finite in the neighbourhood of such points.

The following theorem will be important for us. We state it first for the case of the region S.

THEOREM 1. If the two functions φ(x, y) and ψ(x, y) are finite in S and their discontinuities, if they have any, are regularly distributed, the function

    F(x, y) = ∫_a^b φ(x, ξ) ψ(ξ, y) dξ

is continuous throughout S.

The truth of this theorem becomes evident if we interpret (x, y, ξ) as rectangular coordinates in space. It is then clear that the function under the integral sign is finite throughout the cube

    a ≤ x ≤ b,   a ≤ y ≤ b,   a ≤ ξ ≤ b,

and becomes discontinuous in this cube only at points on two sets of cylinders whose generators are parallel respectively to the axes of x and y. Moreover these cylinders are so shaped that any line x = x_0, y = y_0 in this cube meets them at only a finite number of points. The formal proof, based on these or similar considerations, presents no difficulty, and we leave it for the reader.

COROLLARY. If φ(x, y) and ψ(x, y) are finite in T and their discontinuities, if they have any, are regularly distributed, the function

    H(x, y) = ∫_y^x φ(x, ξ) ψ(ξ, y) dξ

is continuous throughout T.

This is merely a special case of Theorem 1. For if we define φ and ψ to have the value zero everywhere outside of T, it is clear that they satisfy the conditions of Theorem 1 throughout S and that the function F(x, y) reduces to H(x, y).




If φ(x, y) satisfies the conditions of Theorem 1, the double integral of φ extended over S may be evaluated in either one of two ways as an iterated integral*, and we thus get the formula

    ∫_a^b ∫_a^b φ(x, y) dy dx = ∫_a^b ∫_a^b φ(x, y) dx dy.

If, in particular, φ vanishes everywhere outside of T, we get

DIRICHLET'S FORMULA†. If φ is finite in T and its discontinuities, if it has any, are regularly distributed, then

    ∫_a^b ∫_a^x φ(x, y) dy dx = ∫_a^b ∫_y^b φ(x, y) dx dy.

This formula admits of extension to certain cases in which the integrand does not remain finite in T. The most general case of this sort which we shall have occasion to use is contained in the following statement, for a simple proof of which we refer to the first part of a paper by W. A. Hurwitz‡:

DIRICHLET'S EXTENDED FORMULA. If φ(x, y) is finite in T and its discontinuities, if it has any, are regularly distributed, and if λ, μ, ν are constants such that

    0 ≤ λ < 1,   0 ≤ μ < 1,   0 ≤ ν < 1,

then

    ∫_a^b ∫_a^x [φ(x, y) / ((x − y)^λ (b − x)^μ (y − a)^ν)] dy dx
        = ∫_a^b ∫_y^b [φ(x, y) / ((x − y)^λ (b − x)^μ (y − a)^ν)] dx dy.
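The interchange of the two orders of integration over the triangle T is easy to confirm numerically. The following minimal sketch (assuming Python with scipy is available) checks the plain form of Dirichlet's formula for one arbitrarily chosen integrand φ; it is an illustration only, not part of the original argument.

    # Numerical check of Dirichlet's formula on the triangle a <= y <= x <= b.
    # The integrand phi is an arbitrary illustrative choice.
    from scipy import integrate

    a, b = 0.0, 1.0
    phi = lambda x, y: x**2 + 3.0*x*y

    # left member: y runs from a to x, then x from a to b
    left, _ = integrate.dblquad(lambda y, x: phi(x, y), a, b,
                                lambda x: a, lambda x: x)
    # right member: x runs from y to b, then y from a to b
    right, _ = integrate.dblquad(lambda x, y: phi(x, y), a, b,
                                 lambda y: y, lambda y: b)

    print(left, right)   # both evaluate to 0.625 for this phi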

Finally we turn to some theorems concerning functions of a single variable.

THEOREM 2. If φ(x) is finite and has only a finite number of discontinuities in I, the function

    Φ(x) = ∫_a^x φ(ξ) dξ / (x − ξ)^λ        (0 ≤ λ < 1)

is continuous throughout I, including the point a, where it vanishes§.

* By a double integral we understand the limit of a sum obtained by dividing up the region in question into pieces both of whose dimensions are small. By an iterated integral, the integral of an integral.

† Cf. Crelle's Journal, vol. 17 (1837), p. 45.

‡ Annals of Mathematics, vol. 9 (1908), p. 183. This result may also be deduced from a general theorem of de la Vallée Poussin. Cf. the Cours d'Analyse of this author, vol. 2, pp. 89-95.

§ We define the symbol ∫_a^a ψ(x) dx to mean zero, whatever the nature of the function ψ may be.



To prove this we introduce the new variable of integration

    s = (ξ − a)/(x − a).

Then

    Φ(x) = (x − a)^{1−λ} Ψ(x),   where   Ψ(x) = ∫_0^1 φ(a + s(x − a)) ds / (1 − s)^λ.

By replacing φ by the upper limit of its absolute value, we see that Ψ(x) remains finite, and hence that Φ approaches zero as x approaches a. Consequently Φ is continuous at a. On the other hand the same substitution shows that the integral Ψ converges uniformly when a < x ≤ b. For any fixed positive δ < 1 the function

    Ψ_1(x) = ∫_0^{1−δ} φ(a + s(x − a)) ds / (1 − s)^λ

is continuous throughout the interval a < x ≤ b, since the integrand in Ψ_1 is finite in the rectangle

    a < x ≤ b,   0 ≤ s ≤ 1 − δ,

and is discontinuous only along a finite number of curves in this rectangle each of which is met by a line x = x_0 in at most one point. Since, as we have just seen, Ψ_1(x) approaches Ψ(x) uniformly as δ approaches zero, it follows from a fundamental theorem in uniform convergence that Ψ(x) is continuous when a < x ≤ b, and hence the same is true of Φ(x), and our theorem is proved.

THEOREM 3. If, in I, φ(x) is continuous and has a derivative which is finite and which has at most a finite number of discontinuities in I, and if φ(a) = 0, the function

    Φ(x) = ∫_a^x φ(ξ) dξ / (x − ξ)^λ        (0 ≤ λ < 1)

has a derivative continuous throughout I and given by the formula

    Φ'(x) = ∫_a^x φ'(ξ) dξ / (x − ξ)^λ.

For if we integrate the expression for Φ(x) by parts, we have, when we remember that φ(a) = 0,

    Φ(x) = (1/(1 − λ)) ∫_a^x (x − ξ)^{1−λ} φ'(ξ) dξ.

Applying here the rule for differentiating an integral whose limits are variable, we get the desired expression for Φ'(x). Hence from



Theorem 2 it is evident that Φ'(x) is continuous. It should be noticed
that  when  X  >  0  the  integrals  with  which  we  have  to  deal  are  infinite 
integrals  (i.e.  integrals  in  which  the  integrand  does  not  remain  finite) 
so  that  the  application  to  them  of  the  ordinary  rules  of  the  calculus 
requires  careful  justification. 

An alternative form of proof for this theorem consists in applying Dirichlet's Extended Formula* as follows:

    Φ(x) = ∫_a^x (1/(x − ξ)^λ) [∫_a^ξ φ'(η) dη] dξ
         = ∫_a^x φ'(η) [∫_η^x dξ / (x − ξ)^λ] dη
         = (1/(1 − λ)) ∫_a^x (x − η)^{1−λ} φ'(η) dη.

The differentiation of this last formula gives us the result we wish to establish†.
to  establish  t. 

In conclusion we point out by means of the following two examples that if we replace the condition of finiteness for φ' by the condition of integrability, or even of absolute integrability, Φ will not always have a continuous derivative:

    (i)   φ(x) = (x − a)^λ,        Φ(x) = k(x − a);

    (ii)  φ(x) = 0  (a ≤ x ≤ a'),   φ(x) = (x − a')^λ  (a' ≤ x ≤ b),
          Φ(x) = 0  (a ≤ x ≤ a'),   Φ(x) = k(x − a')  (a' ≤ x ≤ b),

where a < a' < b. In both cases k is a positive constant, and if 0 < λ < 1, φ is continuous in I and has a derivative which is continuous except at one point and absolutely integrable but not finite. In the first case Φ' is continuous, in the second discontinuous.

2. Abel's Mechanical Problem. In one of his earliest published papers‡ Abel showed how a certain mechanical problem,

* It should be noticed that we use this formula here under slightly different restrictions on the function φ(x, y), since φ is now a function of y alone, and therefore, if it is discontinuous at all, is discontinuous along lines parallel to the axis of x.

† This method of reasoning admits of immediate extension to the proof of the more general formula

    (d/dx) ∫_a^x ψ(ξ) dξ / (x − ξ)^λ = ψ(a)/(x − a)^λ + ∫_a^x ψ'(ξ) dξ / (x − ξ)^λ,

which holds under suitable restrictions on ψ.

‡ See Collected Works, p. 11. This paper was first published in Christiania in 1823. Cf. also a second paper beginning on p. 97 of the Collected Works, and originally published in Crelle, vol. 1 (1826), p. 153.



which includes the problem of the tautochrone as a special case, leads to what has since come to be called an integral equation, on whose solution the solution of the problem depends. On account of its great historical interest, we take up this problem in this section.

A particle starting from rest at a point P on a smooth curve which lies in a vertical plane slides down the curve to its lowest point O. The velocity acquired at O will be independent of the shape of the curve. The time of descent T will however depend on this shape. We take O as origin, the axis of x vertically upward, and the axis of y horizontal and in the plane of the curve. Let the coordinates of the point of departure P be (x, y), and the coordinates of the point Q reached by the particle at the time t be (ξ, η), g the gravitational constant, and s the arc OQ. The velocity of the particle at Q is

    −ds/dt = √(2g(x − ξ)).

Hence

    √(2g) T = −∫_{ξ=x}^{ξ=0} ds / √(x − ξ).

If we express s in terms of ξ,

    s = v(ξ),

the whole time of descent is then

    T = (1/√(2g)) ∫_0^x v'(ξ) dξ / √(x − ξ).

If the shape of the curve is given, the function v may be computed, and the whole time of descent is given to us as a function of x by the last formula.

The problem which Abel set himself is the converse of this, namely to determine the curve for which the time of descent is a given function of x. If we write

    f(x) = √(2g) T,

our problem is to determine the function v from the equation

    f(x) = ∫_0^x v'(ξ) dξ / √(x − ξ)        (1).


The  formula  for  the  solution  of  this  integral  equation  was  obtained 
by  Abel  by  two  different  methods.  The  first  depends  on  the  use  of 
series  proceeding  according  to  powers,  not  necessarily  integral,  of  the 
argument ;  while  the  second,  of  a  more  general  character,  is  closely 
related  to  the  one  we  are  about  to  give  in  the  next  section.  Neither 
of  Abel's  methods  can  be  regarded  as  satisfactory  although  they  lead 



to  the  correct  result.  Among  other  objections  it  may  be  said  that  both 
methods  omit  the  essential  step  of  proving  that  the  equation  (1)  has 
a  solution. 

3. Solution of Abel's Equation*. Instead of the equation (1) of §2, Abel set himself the problem of solving a more general equation which we will write in the form

    f(x) = ∫_a^x u(ξ) dξ / (x − ξ)^λ        (0 < λ < 1)        (1),

where f is a known function, u the function to be determined.

In order to solve (1) we begin by establishing the general formula (2) below. We start from the well known formula

    ∫_ξ^z dx / [(z − x)^{1−λ} (x − ξ)^λ] = π / sin λπ.

Let φ(ξ) be any function which is continuous and has a continuous derivative throughout I. Multiply this equation by φ'(ξ) dξ and integrate from a to z, which we suppose to be any point of I. This gives

    (π / sin λπ) [φ(z) − φ(a)] = ∫_a^z ∫_ξ^z φ'(ξ) dx dξ / [(z − x)^{1−λ} (x − ξ)^λ].

If we apply Dirichlet's Extended Formula to the second member of this equation, we get the desired result

    φ(z) − φ(a) = (sin λπ / π) ∫_a^z (1/(z − x)^{1−λ}) [∫_a^x φ'(ξ) dξ / (x − ξ)^λ] dx        (2),

a formula which holds under the sole restrictions that z be in I, that φ be continuous and have a continuous derivative in I, and that

    0 < λ < 1.

Theorem 2, §1, shows us at once that a necessary condition that (1) have a solution continuous throughout I is that f(x) be continuous throughout I and that f(a) = 0.

Let us suppose that these conditions are fulfilled and that u(x) is a continuous solution of (1). Multiply (1) by (z − x)^{λ−1} dx, where z is a point of I, and integrate from a to z, thus getting

    ∫_a^z f(x) dx / (z − x)^{1−λ} = ∫_a^z (1/(z − x)^{1−λ}) [∫_a^x u(ξ) dξ / (x − ξ)^λ] dx.

* Except for the method of deducing formula (2), the method we use is, barring notation, that of Liouville in Liouville's Journal, vol. 4 (1839), p. 233. Liouville, who seems not to have been aware of Abel's work, had already published on this subject in the Journal de l'École Polytechnique, Cahier 21 (1832), p. 1.



If in (2) we let

    φ(x) = ∫_a^x u(ξ) dξ,

it will be seen that the preceding equation reduces to

    ∫_a^z f(x) dx / (z − x)^{1−λ} = (π / sin λπ) ∫_a^z u(ξ) dξ        (3).

Since the second member of (3) has a continuous derivative with regard to z, the same must be true of the first member, and this gives us a further necessary condition for (1) having a continuous solution. By differentiating (3), we get as the value of this solution

    u(z) = (sin λπ / π) (d/dz) ∫_a^z f(x) dx / (z − x)^{1−λ}        (4).

We thus see that u is completely determined, that is that (1) does not have more than one continuous solution. That the formula (4) really does give a solution of (1) may be seen by substituting it in (1). The second member of (1) thus becomes

    (sin λπ / π) ∫_a^x (1/(x − ξ)^λ) [(d/dξ) ∫_a^ξ f(y) dy / (ξ − y)^{1−λ}] dξ,

which reduces by means of Theorem 3, §1, to

    (sin λπ / π) (d/dx) ∫_a^x (1/(x − ξ)^λ) [∫_a^ξ f(y) dy / (ξ − y)^{1−λ}] dξ,

and this in turn reduces by means of (2), when we let

    φ(x) = ∫_a^x f(ξ) dξ,

to f(x).
Thus  we  see  that  (4)  is  a  solution  of  (1),  and  we  have  proved 

THEOREM 1. A necessary and sufficient condition that (1) have a solution continuous in I is that f(x) be continuous in I, that f(a) = 0, and that

    ∫_a^x f(ξ) dξ / (x − ξ)^{1−λ}

have a continuous derivative throughout I. If these conditions are fulfilled, (1) has only one continuous solution, given by formula (4).

An important case in which these conditions are fulfilled is that in which f is continuous and has a derivative which is finite and has at most a finite number of discontinuities in I, and f(a) = 0. This we



see from Theorem 3, §1, from which we also see that in this case (4) may be written

    u(z) = (sin λπ / π) ∫_a^z f'(x) dx / (z − x)^{1−λ}        (5).

Hence

THEOREM 2. If f(x) is continuous and has a derivative finite in I and with only a finite number of discontinuities there, and f(a) = 0, equation (1) has one and only one continuous solution, and this is given by formula (5)*.
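As a concrete illustration of Theorem 2, consider the toy case a = 0, λ = 1/2, f(x) = x (an assumption made here for the example; it is not a case worked in the text). Formula (5) gives u(z) = (2/π)√z, and substituting this u back into (1) reproduces f. A minimal numerical sketch in Python, assuming numpy and scipy:

    # Abel's equation with lam = 1/2, a = 0, f(x) = x: compute u from formula (5)
    # and check that it satisfies equation (1).
    import numpy as np
    from scipy.integrate import quad

    lam, a = 0.5, 0.0
    df = lambda x: 1.0                       # f(x) = x, so f'(x) = 1 and f(a) = 0

    def u(z):                                # formula (5)
        val, _ = quad(df, a, z, weight='alg', wvar=(0.0, lam - 1.0))
        return np.sin(lam*np.pi)/np.pi * val

    def first_member(x):                     # the integral in equation (1) with this u
        val, _ = quad(u, a, x, weight='alg', wvar=(0.0, -lam))
        return val

    for x in (0.3, 1.0, 2.5):
        print(u(x), 2.0/np.pi*np.sqrt(x))    # u from (5) vs. the closed form
        print(first_member(x), x)            # should reproduce f(x) = x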

While this is essentially Abel's result, that mathematician did not consider the integral equation (1) but rather the differentio-integral equation

    f(x) = ∫_a^x v'(ξ) dξ / (x − ξ)^λ        (6).
By  means  of  the  theorems  just  established  and  Theorems  2,  3  of 
§  1  we  readily  deduce  the  result 

THEOREM 3. A necessary and sufficient condition that (6) have a solution which together with its derivative is continuous throughout I is that f(x) be continuous in I, that f(a) = 0, and that

    ∫_a^x f(ξ) dξ / (x − ξ)^{1−λ}

have a continuous derivative throughout I. If these conditions are fulfilled, the general solution of (6) is

    v(z) = k + (sin λπ / π) ∫_a^z f(x) dx / (z − x)^{1−λ},

where  k  is  an  arbitrary  constant. 

* Goursat, in Acta Math. vol. 27 (1903), pp. 131-133, has shown that equation (1) still has a solution, though not a continuous one, if we drop the requirement that f(a) = 0. This may be readily seen by bringing in, in place of u, the function

    v(x) = u(x) − (sin λπ / π) · f(a) / (x − a)^{1−λ}.

Making this substitution, we find that equation (1) reduces to

    ∫_a^x v(ξ) dξ / (x − ξ)^λ = f(x) − f(a).

Conversely, we see that a solution of this last equation corresponds to a solution of (1). Consequently a solution of (1) is

    u(z) = (sin λπ / π) · f(a) / (z − a)^{1−λ} + (sin λπ / π) ∫_a^z f'(x) dx / (z − x)^{1−λ}.



By letting λ = ½ we get the solution of the mechanical problem of §2. If in particular we let f(x) = const., we get Abel's solution of the problem of the tautochrone.

An easy extension of the results we have found is to the case in which λ is negative, cf. Liouville, loc. cit. Let us for instance suppose that −1 < λ < 0. We see by differentiation that any continuous solution of (1) also satisfies the equation

    f'(x) = −λ ∫_a^x u(ξ) dξ / (x − ξ)^{λ+1}        (7),

and conversely, if f(x) is continuous in I and f(a) = 0, we see by integrating that every continuous solution of (7) is also a solution of (1). Hence

THEOREM 4. If 0 > λ > −1, a necessary and sufficient condition that (1) have a continuous solution is that f(x) be continuous together with its first derivative throughout I, that f(a) = f'(a) = 0, and that

    ∫_a^x f'(ξ) dξ / (x − ξ)^{−λ}

have a continuous derivative throughout I. If these conditions are fulfilled, (1) has one and only one continuous solution, namely

    u(x) = (sin λπ / λπ) (d/dx) ∫_a^x f'(ξ) dξ / (x − ξ)^{−λ}.

In particular, the above conditions are fulfilled if f(x), f'(x), f''(x) are continuous in I and f(a) = f'(a) = 0. In this case the continuous solution may be written

    u(x) = (sin λπ / λπ) ∫_a^x f''(ξ) dξ / (x − ξ)^{−λ}.
The extension to other cases in which λ is negative may readily be made by further differentiation. We leave to the reader the enunciation and proof of the general theorem to be obtained here, as well as the consideration of the special cases in which λ is an integer, negative or zero.

4.  Liouville's  Introduction  of  Integral  Equations  of  the 
Second  Kind.  After  Abel,  the  next  to  use  an  integral  equation  was 
Liouville  who,  in  the  year  1837*,  showed  that  a  particular  solution  of 

*  Liouville's  Journal,  vol.  2,  p.  24.  This  reference  has  no  connection  with 
Liouville's  work  cited  in  the  preceding  section. 



a certain linear differential equation can be obtained by solving an integral equation of a type somewhat different from, though closely resembling, Abel's equation.

To explain this we begin with the non-homogeneous equation

    d²y/dx² + ρ²y = φ(x)        (1),

where ρ is a parameter, and φ(x) a function continuous throughout I. The general solution of the reduced equation

    d²y/dx² + ρ²y = 0

is α sin ρ(x − a) + β cos ρ(x − a), so that by a well known formula* the general solution of (1) is

    y(x) = α sin ρ(x − a) + β cos ρ(x − a) + (1/ρ) ∫_a^x φ(ξ) sin ρ(x − ξ) dξ        (2).

Now consider the homogeneous equation

    d²y/dx² + [ρ² − σ(x)] y = 0        (3),

where σ(x) is continuous in I. We will denote by u(x) the solution of (3) which satisfies the auxiliary conditions

    u(a) = 1,   u'(a) = 0        (4).

This function will also be a solution of the non-homogeneous equation

    d²u/dx² + ρ²u = σ(x) u(x).

Consequently by using (2) and (4) we get

    u(x) = cos ρ(x − a) + (1/ρ) ∫_a^x σ(ξ) sin ρ(x − ξ) u(ξ) dξ        (5).

Conversely if u(x) is any continuous solution of this integral equation, we see, by applying to (5) the rule for differentiating an integral whose limits are functions of x, that u(x) has continuous first and second derivatives given by the formulae

    u'(x) = −ρ sin ρ(x − a) + ∫_a^x σ(ξ) cos ρ(x − ξ) u(ξ) dξ        (6),

    u''(x) = −ρ² cos ρ(x − a) − ρ ∫_a^x σ(ξ) sin ρ(x − ξ) u(ξ) dξ + σ(x) u(x)        (7).

* Cf. Forsyth's Treatise on Differential Equations, § 66.


Formulae (5) and (7) show that u satisfies (3), while from (5) and (6) we see that conditions (4) are satisfied. Thus, since there is only one function satisfying (3) and (4), we have proved the

THEOREM.  The  solution  of  the  differential  equation  (3)  which 
satisfies  the  auxiliary  conditions  (4)  is  a  solution  of  the  integral 
equation  (5);  and  conversely  this  integral  equation  (5)  has  only  one 
continuous  solution. 
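The theorem lends itself to a direct numerical check. In the sketch below (Python with numpy and scipy assumed; the particular ρ and σ are arbitrary illustrative choices, not taken from the text) the differential equation (3) with the conditions (4) is solved by a standard ODE routine, and the result is then tested against the integral equation (5).

    # Solve (3) with conditions (4), then verify that the solution satisfies (5).
    import numpy as np
    from scipy.integrate import solve_ivp, quad

    a, b, rho = 0.0, 3.0, 2.0
    sigma = lambda x: 0.5 * np.cos(x)            # any continuous sigma(x)

    sol = solve_ivp(lambda x, y: [y[1], (sigma(x) - rho**2) * y[0]],
                    (a, b), [1.0, 0.0], dense_output=True, rtol=1e-9, atol=1e-11)
    u = lambda x: sol.sol(x)[0]

    def second_member(x):                        # right-hand side of equation (5)
        integrand = lambda xi: sigma(xi) * np.sin(rho*(x - xi)) * u(xi)
        val, _ = quad(integrand, a, x, limit=200)
        return np.cos(rho*(x - a)) + val / rho

    for x in (0.7, 1.8, 3.0):
        print(u(x), second_member(x))            # the two values agree closely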

The  possibility  here  illustrated  of  replacing  a  differential  equation 
together  with  certain  auxiliary  conditions  by  a  single  integral  equation 
is  characteristic  for  many  important  applications  of  integral  equations. 

Liouville's  object  in  introducing  the  integral  equation  (5)  was  to 
enable  him  to  obtain  a  development  of  u  (x)  in  a  series  which  converges 
rapidly  for  large  values  of  p.  Such  a  series  he  obtained  by  solving  (5) 
by  a  method  which  will  be  explained  in  the  next  section. 

Comparing Abel's equation (formula (1), §3) with Liouville's equation (5), we see that they come respectively under the following types:

    f(x) = ∫_a^x K(x, ξ) u(ξ) dξ        (8),

    u(x) = f(x) + ∫_a^x K(x, ξ) u(ξ) dξ        (9),

in which f(x) and K(x, ξ) are to be regarded as known functions and u(x) is the function to be determined. Equations (8) and (9) are spoken of as linear integral equations of the first and second kinds respectively. K is called the kernel of these equations*.

In place of the equations (8) and (9), in which the upper limit of the integrals is the variable x, we often have to deal with equations of exactly the same form in which the upper limit is the constant b. These are also called linear integral equations of the first and second kind respectively. It will be seen that equations (8) and (9) are merely the special cases of the equations just mentioned in which the kernel K(x, ξ) vanishes when ξ > x, since it then makes no difference whether x or b be used as the upper limit of integration.

* These terms were first employed by Hilbert, Göttinger Nachrichten, 1904. An equation of the form

    φ(x) u(x) = f(x) + ∫_a^b K(x, ξ) u(ξ) dξ,

which includes the equations of the first and second kinds as special cases, has been called by Hilbert an equation of the third kind, but only a very special case of this equation has so far been treated.



The special case of (9), or of the more general equation in which the upper limit of integration is b, in which f(x) vanishes identically may be called a homogeneous integral equation of the second kind. It should not be confounded with the equation of the first kind.

5. The Method of Successive Substitutions*. The method which Liouville used for solving equation (5) of the last section applies with equal simplicity to the more general equation (9). In fact we will consider at once the still more general equation referred to near the end of the last section

    u(x) = f(x) + ∫_a^b K(x, ξ) u(ξ) dξ        (1).

We assume that the kernel K is finite in S and that its discontinuities, if it has any, are regularly distributed there†.

If (1) is to have a solution continuous in I, it is clear (cf. Theorem 1, §1) that f(x) must be continuous in I. Let us assume that this condition is fulfilled.

Assuming that (1) has a continuous solution, we now proceed to find it. Substitute in the second member for u(ξ) the value given by the equation itself, thus getting

    u(x) = f(x) + ∫_a^b K(x, ξ) f(ξ) dξ + ∫_a^b K(x, ξ) ∫_a^b K(ξ, ξ_1) u(ξ_1) dξ_1 dξ.

Here we again substitute for u(ξ_1) its value as given by (1), and thus get a four-term expression for u(x). Proceeding in this way, we get the general formula

    u(x) = S_n(x) + R_n(x)        (2),

where

    S_n(x) = f(x) + ∫_a^b K(x, ξ) f(ξ) dξ + ⋯
             + ∫_a^b K(x, ξ) ∫_a^b K(ξ, ξ_1) ⋯ ∫_a^b K(ξ_{n−2}, ξ_{n−1}) f(ξ_{n−1}) dξ_{n−1} ⋯ dξ_1 dξ,

    R_n(x) = ∫_a^b K(x, ξ) ∫_a^b K(ξ, ξ_1) ⋯ ∫_a^b K(ξ_{n−1}, ξ_n) u(ξ_n) dξ_n ⋯ dξ_1 dξ.


* This method of solving an integral equation of the second kind is usually attributed to C. Neumann, whose work, however, is more than thirty years later than Liouville's. The connection of Liouville's work with the theory of integral equations of the second kind has been generally overlooked. For a formulation of the method of successive substitutions which is convenient even in far more complicated cases, cf. Mason, Math. Ann. vol. 65 (1908), p. 570.

† In the work of this section and the next it is immaterial whether the functions f and K are assumed to be real or are allowed to be complex.



We may regard this expression for u(x) as a finite series of n + 1 terms plus a remainder, this remainder, however, involving the very function u(x) which we are developing*. This suggests to us the consideration of the infinite series

    f(x) + ∫_a^b K(x, ξ) f(ξ) dξ + ∫_a^b K(x, ξ) ∫_a^b K(ξ, ξ_1) f(ξ_1) dξ_1 dξ + ⋯        (3).

From Theorem 1, §1, we see that the terms of this series are continuous in I. The series therefore represents a function continuous throughout I provided it converges uniformly there. We will prove that whenever this is the case, the function u(x) represented by (3) is a solution of (1). For this purpose multiply the series for u(ξ) by K(x, ξ) and integrate the resulting series with regard to ξ from a to b term by term, as we have a right to do on account of its uniform convergence. This gives us precisely series (3) without the first term; that is

    u(x) − f(x) = ∫_a^b K(x, ξ) u(ξ) dξ,

and this is the integral equation of which we wished to prove that u(x) is a solution.

We will now obtain two different sufficient conditions for the uniform convergence of (3).

Let us first consider the case in which

    K(x, ξ) = 0  when  ξ > x        (4).

This, as we have seen, is the case in which the upper limit of integration in (1) may be taken as x. Here the general term of the series (3) may be written

    F_n(x) = ∫_a^x K(x, ξ) ∫_a^ξ K(ξ, ξ_1) ⋯ ∫_a^{ξ_{n−1}} K(ξ_{n−1}, ξ_n) f(ξ_n) dξ_n ⋯ dξ_1 dξ.

Let M and N be constants such that

    |K(x, ξ)| < M,   |f(x)| < N.

Then

    |F_n(x)| ≤ N M^{n+1} ∫_a^x ∫_a^ξ ⋯ ∫_a^{ξ_{n−1}} dξ_n ⋯ dξ_1 dξ = N M^{n+1} (x − a)^{n+1} / (n + 1)!.


*  Cf.  the  various  forms  of  Taylor's  series  with  a  remainder. 



The series of which this last written positive constant is the general term is convergent. Consequently the series (3) is absolutely and uniformly convergent throughout I.

Without the special restriction we just made on K, the inequality which we should get in the same way for the general term F_n(x) of (3) would be

    |F_n(x)| ≤ N M^{n+1} ∫_a^b ∫_a^b ⋯ ∫_a^b dξ_n ⋯ dξ_1 dξ = N M^{n+1} (b − a)^{n+1},

and the series of which this is the general term converges only when

    M(b − a) < 1        (5).

Thus we see that (3) converges absolutely and uniformly when either one of the two conditions (4) or (5) is fulfilled.

We will now prove that in either of these cases the equation (1) cannot have more than one continuous solution. For this purpose we turn to formula (2) by which any continuous solution of (1) is expressed. Referring to the formula for R_n(x), we see that, if we denote by N' the maximum of |u(x)| in I, the same reasoning which led us before to the inequalities for |F_n(x)| now gives

    |R_n(x)| ≤ N' M^{n+1} (x − a)^{n+1} / (n + 1)!   in the case (4),

    |R_n(x)| ≤ N' [M(b − a)]^{n+1}   if M(b − a) < 1.

Thus we see in either case that

    lim_{n→∞} R_n(x) = 0.

On the other hand, S_n(x) is simply the sum of the first n + 1 terms of (3). Consequently the function u given by (2) is precisely the value of (3), and we have the two theorems:

THEOREM 1. If K(x, ξ) is finite in T and its discontinuities, if it has any, are regularly distributed, a necessary and sufficient condition that the equation

    u(x) = f(x) + ∫_a^x K(x, ξ) u(ξ) dξ        (6)

have a solution continuous throughout I is that f(x) be continuous throughout I; and if this condition is fulfilled, (6) has only one continuous solution, which is given by the absolutely and uniformly convergent series (3).



THEOREM 2. If K(x, ξ) is finite in S and its discontinuities, if it has any, are regularly distributed, and f(x) is continuous in I, then provided that

    M(b − a) < 1,

where M denotes the upper limit of |K| in S, equation (1) has one and only one solution continuous in I, and this solution is given by the absolutely and uniformly convergent series (3).
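The condition M(b − a) < 1 of Theorem 2 is exactly what makes the successive substitutions converge, and the process is easy to imitate on a grid. A minimal sketch follows, assuming Python with numpy; the kernel, the function f, and the grid size are illustrative assumptions, not data from the text.

    # Successive substitutions u <- f + integral of K(x,xi) u(xi) dxi on a grid,
    # for a kernel with M(b-a) < 1, compared against a direct linear solve.
    import numpy as np

    a, b, n = 0.0, 1.0, 400
    x = np.linspace(a, b, n)
    w = (b - a) / (n - 1)                                   # crude quadrature weight
    K = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))      # |K| <= 0.5, so M(b-a) < 1
    f = np.sin(np.pi * x)

    u = f.copy()
    for _ in range(60):
        u = f + w * (K @ u)                                 # one substitution step

    u_direct = np.linalg.solve(np.eye(n) - w * K, f)        # discretized equation solved at once
    print(np.max(np.abs(u - u_direct)))                     # the difference is negligible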

Besides the continuous solution whose existence has just been established, the integral equation of the second kind may also have discontinuous solutions. In order to show this, we will consider the special case in which

    a = 0,   f(x) = 0,

and the kernel K(x, ξ) is suitably chosen. The integral equation (6) then reduces to

    u(x) = ∫_0^x K(x, ξ) u(ξ) dξ        (7).

If we take b as any positive constant, the conditions of Theorem 1 are fulfilled. Equation (7) has therefore one and only one continuous solution, and this solution is readily seen to be u = 0; as, indeed, is the case for every homogeneous integral equation of the second kind. By direct substitution, we verify that it also has an infinite number of discontinuous solutions, multiples of a fixed one by an arbitrary constant k different from zero.

It should be noticed that these solutions become not merely infinite when x = 0, but become infinite so strongly that ∫_0^b u(x) dx diverges.

It may readily be seen that if the integral equation satisfies all the conditions of Theorem 1 or 2, any discontinuous solution which it may have will necessarily be non-integrable*.

These non-integrable solutions have not as yet proved to be of any importance and we shall not be concerned with them in this tract.

There are various extensions of the considerations of this section which naturally suggest themselves. In the first place we may ask under what conditions it is possible to assert that the continuous solution of (1) or (6) has a continuous derivative. Let us assume that both K(x, ξ) and ∂K/∂x are finite in S and that any discontinuities which these functions may have are regularly distributed. Also that

* We note in passing that an integral equation of the first kind may, under similar conditions, have a discontinuous solution which is integrable. Cf. the footnote to Theorem 2, §3.




the curves on which the discontinuities of K lie have continuously turning tangents which are nowhere parallel to the axis of x or of ξ. It may readily be inferred from this that, on the curves where it is discontinuous, the function K joins on continuously to continuous boundary values,—different values, of course, on the two sides of the curve. We may then, proceeding with some caution, differentiate equation (1) and thus arrive at the conclusion that under the conditions just stated a necessary and sufficient condition that the continuous solution u of (1) have a continuous derivative is that f(x) have a continuous derivative. It would of course be possible to arrive at a similar result with much less drastic restrictions on K.

A second question which may be raised is as to the existence of continuous solutions of (1) or (6) when the kernel is not finite. We will consider here only a special case which may serve as a sample of others.

In the equation (6), let K be defined throughout T so that

    K(x, ξ) = G(x, ξ) / (x − ξ)^λ        (0 < λ < 1),

where G is finite in T and where its discontinuities, if any exist, are regularly distributed. It may now be readily proved that if ψ(x) is any function continuous in I, the function

    ∫_a^x K(x, ξ) ψ(ξ) dξ

is also continuous throughout I.

If then f(x) is continuous in I, we see that the same is true of each term of the series (3), in which it must be remembered that the upper limits of integration are now variable.

For the general term F_n(x) of this series we readily get the inequality

    |F_n(x)| ≤ N M^{n+1} ∫_a^x (1/(x − ξ)^λ) ∫_a^ξ (1/(ξ − ξ_1)^λ) ⋯ ∫_a^{ξ_{n−1}} (1/(ξ_{n−1} − ξ_n)^λ) dξ_n ⋯ dξ_1 dξ,

where |f(x)| ≤ N, |G(x, ξ)| ≤ M.

In terms of the quantities

    k_n = ∫_0^1 t^{n(1−λ)} (1 − t)^{−λ} dt        (n = 0, 1, 2, ...),

the (n + 1)-fold integral last written may be readily evaluated, and we thus find

    |F_n(x)| ≤ N M^{n+1} k_0 k_1 ⋯ k_n (x − a)^{(n+1)(1−λ)}.

If, then, we can prove that the series whose general term is

    N M^{n+1} k_0 k_1 ⋯ k_n (b − a)^{(n+1)(1−λ)}        (8)



converges, the absolute and uniform convergence of (3) follows. The ratio of two successive terms of the form (8) is

    M k_{n+1} (b − a)^{1−λ}.

The convergence therefore follows if we can show that

    lim_{n→∞} k_n = 0,

and this property of the k's may be established with ease from their definition.

Having thus established the uniform convergence of (3), the proof that this series represents a continuous solution of the integral equation, and the further proof that this is the only continuous solution of this equation, follow almost as before, and we thus get

THEOREM 3. The equation

    u(x) = f(x) + ∫_a^x [G(x, ξ) / (x − ξ)^λ] u(ξ) dξ        (0 < λ < 1),

in which f is continuous in I, and G is finite in T and such discontinuities as it may have are regularly distributed, has one and only one continuous solution, and this solution is given by the method of successive substitutions in the form of an absolutely and uniformly convergent series.

6. Volterra's Treatment of Equations of the Second Kind. Iterated and Reciprocal Functions. We will now start afresh and approach the theory of integral equations of the second kind from a new point of view due to Volterra*.

Let us assume that K(x, y) is finite in S and that any discontinuities which it may have are regularly distributed. From K we form the iterated functions K_1, K_2, ... by means of the formulae

    K_1(x, y) = K(x, y),     K_{i+1}(x, y) = ∫_a^b K(x, ξ) K_i(ξ, y) dξ        (1).

* Six papers come primarily into consideration here and in the following sections,—four in the Atti of the Turin Academy, vol. 31, 1896 (Jan. 12-April 26), and two in the Rendiconti of the Accademia dei Lincei, series 5, vol. 5, 1896 (March, April). These articles will be referred to as Torino I, II, III, IV and Lincei I, II, respectively. There is also a paper by the same author in the Annali di Matematica, ser. 2, vol. 25 (1897), p. 139, which should be cited here. It may be mentioned that Volterra's work on integral equations commenced in 1884 with a note "Sopra un Problema di Elettrostatica," Atti d. R. Accad. dei Lincei, ser. 3, Transunti, vol. 8, p. 315, in which a relation to the calculus of variations was pointed out. For the present section, cf. Lincei I.




A reference to Theorem 1, §1, shows us that all of these functions, except possibly K_1, are continuous throughout S.

By successive applications of (1) we get

    K_i(x, y) = ∫_a^b ⋯ ∫_a^b K(x, ξ_1) K(ξ_1, ξ_2) ⋯ K(ξ_{i−1}, y) dξ_1 ⋯ dξ_{i−1}        (2).

If, by means of (2), we express K_i(x, ξ) and K_j(ξ, y) as (i − 1)- and (j − 1)-fold integrals respectively, we get for ∫_a^b K_i(x, ξ) K_j(ξ, y) dξ an (i + j − 1)-fold integral, which, when we change the order of integration, becomes, except for notation, precisely the value of K_{i+j}(x, y) given by (2). We have thus established the important formula

    K_{i+j}(x, y) = ∫_a^b K_i(x, ξ) K_j(ξ, y) dξ        (i, j = 1, 2, ...)        (3),

which includes definition (1) as a special case.
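On a grid the iterated functions are simply repeated matrix products, and the composition formula (3) can be observed directly. A minimal sketch follows, assuming Python with numpy; the kernel is an arbitrary illustrative choice, not one from the text.

    # Build K_1, K_2, ... from definition (1) on a grid and check formula (3)
    # for i = j = 2, i.e. K_4(x,y) = integral of K_2(x,xi) K_2(xi,y) dxi.
    import numpy as np

    a, b, n = 0.0, 1.0, 300
    x = np.linspace(a, b, n)
    w = (b - a) / (n - 1)
    K = np.cos(x[:, None] * x[None, :])        # K(x, y), finite in S

    compose = lambda A, B: w * (A @ B)         # integral of A(x,xi) B(xi,y) dxi

    K2 = compose(K, K)
    K3 = compose(K, K2)
    K4 = compose(K, K3)

    print(np.max(np.abs(K4 - compose(K2, K2))))   # essentially zero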

If K(x, y) vanishes when y > x, we see from (1) that K_i(x, y) also vanishes when y > x for all values of i. Consequently in this case (3) may be written

    K_{i+j}(x, y) = ∫_y^x K_i(x, ξ) K_j(ξ, y) dξ        (i, j = 1, 2, ...)        (4).

The case just mentioned is the only one considered by Volterra, and the formula used by him is formula (4).

Returning to the general case, let us consider the series

    K_1(x, y) + K_2(x, y) + K_3(x, y) + ⋯        (5).

It will be shown presently that in certain important cases this series converges uniformly throughout S. Let us assume that this is the case, and denote the value of (5) by −k(x, y). Since every term in (5), except possibly the first, is continuous in S, it follows from the assumed uniform convergence of (5) that k(x, y) is finite in S and is discontinuous only where K(x, y) is discontinuous.

Let us denote the sum of the first n terms of (5) by S_n(x, y) and the remainder after this point by R_n(x, y), so that

    R_n(x, y) = Σ_{i=n+1}^∞ ∫_a^b K_{i−j}(x, ξ) K_j(ξ, y) dξ        (6).

In this formula the integer j may, if we please, vary from term to term, being however always less than i.

Let us first assign to j the value n in all the terms. Then the series in (6) is what we should get by starting from the series

    −k(x, ξ) = Σ_{i=n+1}^∞ K_{i−n}(x, ξ),

multiplying it by K_n(ξ, y) and integrating it term by term. Since this integration is allowable, we get

    R_n(x, y) = −∫_a^b k(x, ξ) K_n(ξ, y) dξ        (7).

On the other hand, let us give to j in (6) the value i − n. Then the series in (6) may be obtained by starting from the series

    −k(ξ, y) = Σ_{i=n+1}^∞ K_{i−n}(ξ, y),

multiplying it by K_n(x, ξ) and integrating it term by term. We thus get as a second form for the remainder

    R_n(x, y) = −∫_a^b K_n(x, ξ) k(ξ, y) dξ        (8).

Since we have

    R_1(x, y) = −k(x, y) − K(x, y),

it follows that in the case n = 1 the two formulae (7), (8) may be combined into

    K(x, y) + k(x, y) = ∫_a^b k(x, ξ) K(ξ, y) dξ = ∫_a^b K(x, ξ) k(ξ, y) dξ        (9).

This is one of the most fundamental formulae in the theory of integral equations. We base upon it the following

DEFINITION. Two functions K(x, y) and k(x, y) are said to be reciprocal if they are finite in S, if any discontinuities they may have are regularly distributed, and if they satisfy (9)*.

The relation here defined between two functions is, as the very name "reciprocal" implies, independent of the order in which these functions are taken, since (9) is not changed by an interchange of K and k. On the other hand, we see from (9) (cf. Theorem 1, §1) that the sum of two reciprocal functions is continuous.

We are now in a position to give Volterra's elegant solution of the integral equation of the second kind

    u(x) = f(x) + ∫_a^b K(x, ξ) u(ξ) dξ        (10).

* We note, for later use, the following application of this definition, which was given by Goursat (C. R. Feb. 17, 1908): If K and k are reciprocal, and if r(x) is continuous in I and does not vanish there, then

    (r(x)/r(y)) K(x, y)     and     (r(x)/r(y)) k(x, y)

are also reciprocal.



Here we assume as before that K is finite in S and that any discontinuities it may have are regularly distributed. We also assume that there exists a function k(x, y) reciprocal to K(x, y).

If (10) has a continuous solution u(x), we may write

    u(ξ) = f(ξ) + ∫_a^b K(ξ, ξ_1) u(ξ_1) dξ_1.

Multiplying this equation by k(x, ξ) and integrating, we get

    ∫_a^b k(x, ξ) u(ξ) dξ = ∫_a^b k(x, ξ) f(ξ) dξ + ∫_a^b k(x, ξ) ∫_a^b K(ξ, ξ_1) u(ξ_1) dξ_1 dξ        (11).

If we reverse the order of integration, and then apply (9), the second term on the right becomes

    ∫_a^b [K(x, ξ_1) + k(x, ξ_1)] u(ξ_1) dξ_1 = ∫_a^b K(x, ξ_1) u(ξ_1) dξ_1 + ∫_a^b k(x, ξ_1) u(ξ_1) dξ_1.

The second part of this cancels against the first member of (11), while the first part may be evaluated by means of (10). Thus (11) reduces to

    u(x) = f(x) − ∫_a^b k(x, ξ) f(ξ) dξ        (12).

We see then that, under the conditions we have imposed, (10) cannot have more than one continuous solution, and if it has one, this will be given by formula (12).

In order to prove that the continuous function u(x) defined by (12) really is a solution of (10), let us write (12) in the form

    f(x) = u(x) + ∫_a^b k(x, ξ) f(ξ) dξ.

This may be regarded as an integral equation for determining f(x). Since K is a function reciprocal to k, we see that this equation satisfies all the conditions which we previously imposed on (10), so that, by what we have just proved, its continuous solution f(x) is given by the formula

    f(x) = u(x) − ∫_a^b K(x, ξ) u(ξ) dξ.

This, however, is precisely the equation (10), which we thus see is satisfied by u(x), and we have proved the

THEOREM 1. If K(x, ξ) is finite in S and any discontinuities it may have are regularly distributed, and f(x) is continuous in I, then the equation (10) has one and only one continuous solution provided there exists a function k(x, y) reciprocal to K(x, y); and in this case this solution is given by (12).

We saw above that a function reciprocal to K will exist provided the series (5) converges uniformly. Although this is, as we shall see in §9 (Theorems 5, 6), by no means a necessary condition for the existence of a reciprocal function, it will nevertheless be of interest to determine certain cases in which this condition is fulfilled.

THEOREM 2*. If K(x, y) is finite in S and any discontinuities it may have are regularly distributed, there will exist a reciprocal function given by series (5) provided that

    M(b − a) < 1,

where M is the upper limit of |K(x, y)| in S.

For under these conditions, as we see from (2),

    |K_n(x, y)| ≤ M^n (b − a)^{n−1},

and from this the absolute and uniform convergence of (5) follows at once.

THEOREM 3. If K(x, y) is finite in S and any discontinuities it may have are regularly distributed, there will exist a reciprocal function given by series (5) provided that

    K(x, y) = 0  when  y > x.

In this case we can use formula (4), by means of which we easily establish the inequality

    |K_n(x, y)| ≤ M^n (x − y)^{n−1} / (n − 1)!.

Consequently the general term of (5) does not exceed in absolute value the quantity

    M^n (b − a)^{n−1} / (n − 1)!,

and since this is the general term of a convergent series of positive constant terms, the absolute and uniform convergence of (5) follows at once.

In either of these two cases, or indeed in any case in which the series (5) converges uniformly, the solution (12) may be written

    u(x) = f(x) + Σ_{n=1}^∞ ∫_a^b K_n(x, ξ) f(ξ) dξ.


*  For  a  theorem  which  in  some  cases   is   more   far-reaching   than  this,  see 
Theorem  5,  §  12. 



If  here  we  replace  the  iterated  function  Kn  by  its  value  (2),  we  get  by  a 
mere  change  in  the  order  of  integration  precisely  the  series  which  we 
found  in  §  5  by  the  method  of  successive  substitutions.  Thus  we  get 
a  new  proof  of  Theorems  1,  2,  §  5.  Conversely,  from  those  theorems  all 
the  results  of  this  section  can  readily  be  deduced  so  far  as  they  relate  to 
the  cases  covered  by  those  theorems. 
Finally  we  prove 

THEOREM 4. There cannot exist two different functions k_1(x, y) and k_2(x, y) both reciprocal to the same function K(x, y).

For, as we have seen, K + k_1 and K + k_2 would both be continuous; accordingly the same would be true of the difference of these two functions

    σ(x, y) = k_1(x, y) − k_2(x, y).

By substituting in (9) first k = k_1, then k = k_2, and subtracting the resulting equations from each other, we find

    σ(x, y) = ∫_a^b K(x, ξ) σ(ξ, y) dξ        (13).

This may be regarded as a homogeneous integral equation of the second kind for determining σ, y being regarded as a parameter. Since, by hypothesis, K has a reciprocal function, all the conditions of Theorem 1 are fulfilled. Consequently (13) has only one continuous solution, and this is, by inspection, σ = 0. We thus have k_1 = k_2, and the assumption of two different functions reciprocal to K is impossible.

7.  Linear  Algebraic  Equations  with  an  Infinite  Number 
of  Variables.  We  come  now  to  a  very  remarkable  and  important 
relation  between  the  theory  of  integral  and  of  algebraic  equations. 
This  relation  seems  to  have  been  first  noticed  by  Volterra,  who  pointed 
out*  that  an  integral  equation  of  the  first  kind  may  be  regarded  as  the 
limiting  form  of  a  system  of  n  linear  algebraic  equations  in  n  variables 
as  n  becomes  infinite.  Though  not  explicitly  mentioned  by  Volterra, 
his  remarks  make  it  at  once  clear  that  the  same  is  true  for  equations  of 
the  second  kind. 

It  is  Fredholm's  great  achievement  to  have  seen  how  this  observa- 
tion could  be  utilized  to  pass  from  the  solution  of  the  system  of  linear 
equations  to  the  solution  of  the  integral  equation  of  the  second  kind, 
which  he  was  thus  enabled  to  treat  in  a  far  more  general  manner  than 

*  Torino  I. 



it  had  ever  been  treated  before  *.  This  method  was  used  by  Fredholm 
as  a  heuristic  method  for  discovering  first  the  facts,  and  secondly 
methods by which they may be proved. Later Hilbert† showed how
the  theory  can  be  rigorously  established  by  following  out  in  detail  the 
limiting  process  of  Volterra  and  Fredholm.  We  follow  Fredholm, 
giving  in  this  section  merely  the  heuristic  part  of  the  work,  and 
reserving  the  proofs  for  the  next  section.  The  whole  of  the  present 
section  is  therefore,  from  a  strictly  logical  point  of  view,  superfluous 
and  may  be  omitted. 

Let  us  divide  the  interval  ab  into  n  equal  parts  by  the  points 

.TJ  =  a  +  8,     .r,  =  a  +  28,   ...  ,?•„  =  a  +n&  =  b     f  8=  —    —  )  . 
If  we  replace  the  definite  integral  in  the  equation 


ii  (.r)  =/(.r)  +          feQ*(&tt  (1) 

.'a 

by  the  sum  of  which  it  is  the  limit,  we  get  the  equation 

n  (JT)  =/(*)  +  2  K  (a-,  a-j)  u  (q)  8  (2). 

j=i 

We  shall  see  in  a  moment  that  this  equation  has  in  general  one 
and  only  one  solution,  and  we  may  expect  that  this  solution,  which  we 
denote  by  un  (.r),  will  have  as  its  limit  for  n  =  oc  the  desired  solution 


Since  equation  (2)  is  to  hold  for  all  values  of  .r  in  ab,  it  must  in 
particular  hold  when  >r  =  .r1}  .?•.,,  ....r,,.  This  gives  us  the  system  of 
n  equations 

-  2  K(xit  *j)  un  (.r,)  8  +  Utt  (.r,-)  =/(.r,-)     (/  =  1,  2,  ...  »)     (3). 

J=i 

These  may  be  regarded  as  n  non-homogeneous  linear  equations  for 
determining  the  n  unknowns  un(.t\),  ...  «„(.?„).  When  these  have 
been  determined,  the  value  of  «•»(.?•)  may  be  found  by  substituting 

*  Sur  une  nonvelle  methode  pour  la  resolution  du  probleme  de  Dirichlet, 
Ofversigt  af  Kongl.  Vetenskaps-Akaderniens  Forhandlingnr,  vol.  57  (1900),  p.  39. 
This  is  the  Swedish  academy  at  Stockholm.  A  second  paper,  complete  in  itself, 
and  much  more  extensive,  appeared  in  Acta  Math.  vol.  27  (1903),  p.  365. 

t  Gottinyer  Xachrichten,  1904,  p.  49.  This  is  the  first  of  a  series  of  papers  in 
which  important  contributions  were  made  to  the  theory.  The  plan  of  deducing 
the  results  as  limiting  cases  of  algebraic  propositions  as  the  number  of  variables 
becomes  infinite  is  consistently  carried  through. 


26 


LINEAR   EQUATIONS   WITH   AN 


[7 


these  values  in  the  second  member  of  (2).      Consequently,  if  the 
determinant 


-8  K  (#2, 


of  the  system  (3)  does  not  vanish,  we  see  that,  as  was  asserted  above, 
there  exists  one  and  only  one  solution  of  (2).  In  fact,  if  we  desire 
not  un  (x)  itself  but  merely  its  limit  for  n  =  »  ,  we  need  not  consider 
(2)  at  all,  for  this  limit,  which  we  assume  to  be  continuous,  is 
completely  determined  by  the  values  of  un(x^)  obtained  from  (3). 
The  determinant  Dn  when  expanded  takes  the  form 


-        2 


K(xi,x^)K(xi,xJ}K(xi, 


K(xk,  Xi}  K(xk,  $j)  K(xk,  xk) 
K(x^,xl)  ... 


(xn,xl}  ...  K(xn,xn] 
If  we  allow  n  to  become  infinite,  we  get  in  this  expression  a  larger 
and   larger   number   of  terms,   and   the  terms   themselves  vary  and 
approach  definite  limits.     We  are  thus  led  to'  the  consideration  of  the 
infinite  series 


r 

=1- 

Ja 


!/•&/& 

if-,  -     / 

A].  J  a   Ja 


We  shall  prove  in  the  next  section  that  this  series  converges.  Its 
value  is  called  by  Fredholm  the  determinant  of  the  integral  equation 
(1)  or  of  the  kernel  K.  We  shall  not  stop  to  prove  that  it  is  the 
limit  of  Dn  for  n=  oo,  although  the  method  of  deriving  it  makes  this 
extremely  plausible  +. 

*  The  factorials  in  the  denominators  of  all  but  the  last  term  are  extremely 
important.  They  are  due  to  the  fact  that  in  the  following  2  every  term  is  repeated 
the  number  of  times  indicated  by  the  factorial  in  question.  For  the  sake  of 
simplicity,  the  notation  has  been  changed  in  the  last  term,  so  that  the  factorial 
does  not  appear. 

t  The  proof  here  required  is  very  similar  to  the  one  which  we  meet  in  the 
elements  of  analysis  when  we  prove  that  the  expansion  of  (1  +  !/;«)"'  by  the 


INFINITE    NUMBER   OF   VARIABLES 


27 


Let  us  further  consider  the  cofactor  of  the  determinant  Dn  which 
corresponds  to  the  vth  row  and  the  nth  column.  We  readily  see  that, 
when  /x  =£  v,  this  determinant  may  be  written 


.*',)  -  8 


-28 

i=l 


K(xj,xr)K(xj,xi)K(xj,xj) 


(-D" 


2S"-2 


(w-l)-rowed 
determinants 


If  here  we  let  n  become  infinite,  and  at  the  same  time  allow  p.  and  v 
to  vary  in  such  a  way  that  lim  (^>,  xv)  =  (x,  y),  we  get*  as  the  limit  of 
8-1  Dn  (XP,  xv}  the  series 

K(x,y)  K(x,^] 


1  f  f 
21  Ja  L 


»  y)  Kfr,  ft)  JT(ft  ,  ft)     rfftrfft  -  .-. 


This  series,  the  proof  of  whose  convergence  is  reserved  for  the  next 
section,  we  shall  call  the  adjoint  of  the  kernel  K+. 

It  should  be  noticed  that  we  have  not  here  considered  the  cofactors 
of  Dn  which  correspond  to  the  elements  of  its  principal  diagonal. 
These  cofactors  differ  in  form  only  slightly  from  the  determinant  Dn 
itself,  and  it  is  clear  that  we  shall  get  as  their  limits  when  n  -  x  pre- 
cisely the  determinant  D  of  the  integral  equation. 

Let  us  now  proceed  to  solve  the  system  (3)  of  equations  on  the 
supposition  that  Dn  4=  0.  We  have  by  Cramer's  formulae 

,     .  _/(.rt)  Dn  fa,  .r,) 


Da 


0*=  1,2, ...»). 


binomial  theorem  (HI  a  positive  integer)  approaches  as  its  limit  for  m  =  x  the 
familiar  series  for  e.  Both  of  these  proofs,  and  many  others  of  a  similar  character, 
may  be  easily  carried  through  by  means  of  a  general  theorem  of  Osgood,  Annals 
of  Math.  ser.  2,  vol.  3  (1902),  p.  138,  Ex.  8. 

*  This  statement  of  course  requires  proof.     Cf.  the  preceding  foot-note. 

t  Fredholm  uses  the  termyir*^  minor. 


28  LINEAR   EQUATIONS  [7 

If  we  let  n  become  infinite,  and  allow  at  the  same  time  /*  to  vary 
in  such  a  way  that  x^  approaches  x  as  a  limit,  we  obtain  as  the 
limiting  form  of  this  formula,  when  we  remember  that 


(\          JV      \  x        I         f  /  f-\     T\  /  l-\      7  f- 

M/*»  I   f  I  rv*  1      I      I     I  >  I      lilt''  1~\    rt  f- 

jjj  — J  V")  '    /-j  /    y  \€y  •*•'  V1*)  ^/  "'£• 
•*/  Ja 

This  is  actually  the  solution  of  the  integral  equation  (3)  as  found 
by  Fredholm. 

Instead  of  placing  this  solution  on  a  firm  foundation  by  justifying 
all  the  steps  we  have  taken,  we  prefer  to  establish  it  with  Fredholm 
ab  initio  in  the  next  section.  The  method  to  be  used  will  be  suggested 
to  us  if  we  recall  how  Cramer's  formulae  for  the  solution  of  (3)  are 
established ;  namely  by  multiplying  the  equations  (3)  by  the  cofactors 
Dn  (#M,  Xi)  and  adding  them  together.  The  essential  point  here  is 
that  this  has  the  effect  of  eliminating  all  the  unknowns  except  un  (&v). 
This  is  due  to  the  theorem  which  says  that  if  we  take  the  elements  of 
the  /tth  column  of  Dn  and  multiply  them  respectively  into  the  cofactors 
of  the  elements  of  another  column,  the  sum  of  the  results  thus  obtained 
will  be  zero.  This  may  be  expressed  by  the  formula 

n 

X   •<     If  I  ™        ff    \     T)      //».          rn   \  _l_     T)      (  r          V    \  —  O  /A    =b   ll\ 

^  •**-  \*^i')  *^M/        ft  \    A  J      if       -*^n  \**  A  >  ^M/  —  \     ^  r/* 

If  we  here  divide  by  -  8  and  then  let  n  become  infinite,  at  the 
same  time  allowing  x^  and  x^  to  approach  respectively  the  values 
x  and  y,  we  get  as  the  limiting  form  of  this  formula 

I  K(£t  y)  D  (x,  £)  d£  +  K(x,  y)D-D(x,y)  =  Q  (4). 

Ja 

This  suggests  that  (4)  is  the  essential  instrument  by  which  we  shall 
solve  the  integral  equation  (1)  in  the  next  section.  This  formula  must 
of  course  first  be  established  more  firmly  than  has  yet  been  done. 

By  the  side  of  this  formula  is  another  similar  one*  at  which  we 
arrive  by  starting  from  the  fact  that  if  in  the  determinant  Dn  we 
multiply  the  elements  of  the  vth  row  by  the  cofactors  of  the  cor- 

*  This  formula,  or  something  equivalent  to  it,  is  necessary  in  showing  that 
Cramer's  formulae  really  satisfy  the  system  of  linear  equations. 


7,  8]  FREDHOLM'S  SOLUTION  29 

responding  elements  of  another  row  and  add  the  results  together,  the 
sum  is  zero.     This  may  be  expressed  by  the  formula 

-82  K(xv,  xt)  DH  (xh  #x)  +  J)n  (av,  *A)  =  0. 

i=l 

By  taking  limits,  as  above,  we  are  led  to  the  second  fundamental 
formula 

(*,  I)  D  (t,  y)  dt  +  K(x,y)D-D  (*,  y)  =  0  (5). 


8.  Predholm's  Solution.  We  begin  by  establishing  a  theorem 
concerning  determinants  due  to  Hadamard*.  For  this  purpose  consider 
the  following  simple  geometrical  fact. 

If  a  parallelepiped  has  one  vertex  at  the  origin  and  the  three 
adjacent  vertices  at  the  points  (-TJ,  yi,  zj,  (x2)  y^  z?),  (x3,  yz,  z3),  it  is 
well  known  that  its  volume  will  be 


A=     x.2     y.2 

.•TS    ys 

Let  us  suppose  that  the  three  edges  issuing  from  the  origin  are  of 
unit  length 

*f+9?  +  *f-l         (/=!,  2,  3)         (1). 

Then  it  is  clear  geometrically  that  the  volume  of  the  parallelepiped 
will  be  a  maximum  when  the  three  edges  in  question  are  mutually 
perpendicular,  in  which  case  the  volume  is  1.  Hence  under  the 
conditions  (1)  we  see  that  i  A  |  ^  1.  This  suggests  at  once  the  following 
generalization  : 

LEMMA.     If  the  elements  a^  of  the  determinant 

au  ...  «!„ 


are  real  and  satisfy  the  conditions 

«rt2  +  «fc2+-..+a,v=l     (i=l,  2,  ...»)      (2), 

then  I  A  I  =  L 

*  Bull,  des  sciences  math,  et  astr.  2nd  ser.  vol.  17  (1893),  p.  240.  Hadamard's 
method  is  purely  algebraic.  "We  follow  a  method  of  Wirtinger,  MonatshefU  f. 
Math.  u.  Physik.  vol.  18  (1907),  p.  158.  Both  authors  consider  also  more  general 
questions,  in  particular  they  consider  the  case  in  which  the  elements  of  the 
determinant  are  imaginary.  Cf.  also  Fischer,  Archiv  d.  Math.  u.  Phys.  3  ser., 
vol.  13  (1908),  p.  32. 


30  FREDHOLM'S  SOLUTION  [8 

In  order  to  prove  this,  let  us  attempt  to  find  the  maxima  and 
minima  of  A  under  the  conditions  (2). 

In  the  first  place  it  is  clear  that  A  has  both  a  finite  maximum  and 
a  finite  minimum  under  the  conditions  (2),  since  these  conditions 
restrict  the  point  (an,  a12,  ...  «„„)  in  space  of  w2  dimensions  to  a  finite 
closed  region,  and  A  is  a  continuous  function  of  these  arguments. 
Moreover  since  A  has  continuous  first  partial  derivatives  with  regard 
to  these  arguments,  we  may  apply  the  ordinary  method  of  the  differ- 
ential calculus  for  finding  maxima  and  minima.  Let  us  allow 

#il>  ai2>    •••  ain 

to  vary,  leaving  the  other  a's  constant.     We  thus  get  from  (2) 

a-adan  +  a-itflata  +  ...  +  aindain  =  0  (3) ; 

and  when  we  remember  that  3A/3«y.  is  the  cofactor  Ay  of  ay  in  A,  we 
see  that  a  necessary  condition  for  a  maximum  or  minimum  is 

A  a  datl  +  Ai2  dai2  +  ...  +  Ain  dain  =  0  (4). 

Moreover  since,  when  A  is  a  maximum  or  minimum,  every  set  of  values 
of  the  differentials  which  satisfy  (3)  also  satisfy  (4),  the  first  members 
of  (3)  and  (4)  are  proportional  to  each  other,  and  we  have 

Atj  =  kitty  («',./ =1.  2>  •••  »)• 

Multiplying  these  equations  by  ay  and  adding,  we  get,  on  referring 

to  (2), 

A  =  A,., 

so  that*  Ay  =  Aay        (/,  j=l,  2,  ...  ri)        (5). 

The  determinant  of  the  nth  order  of  which  Ay  is  the  general  element 
has,  as  is  well  known,  the  value  A""1.  Forming  this  determinant  from 
the  values  of  Ay  in  (5),  we  thus  get 

A""1  =  A"+1. 

Consequently,  when  A  is  a  maximum  or  minimum, 

A  =  ±l. 

The  maximum  value  of  A  is  therefore  +  1  and  the  minimum  —  1,  and 
our  lemma  is  proved. 

We  can  now  readily  pass  to  a  more  general  result.  Let  us  suppose 
that  the  elements  of  A  are  still  real  but  that  conditions  (2)  are  not 
fulfilled.  Let  us  denote  the  value  of  the  left-hand  side  of  (2)  by  o-,-. 
If,  for  the  moment,  we  rule  out  the  possibility  of  all  the  elements  of 

*  Notice  that,  if  A=f=0,  this  is  precisely  the  condition  that  A  be  an  orthogonal 
determinant. 


FREDHOLMS   SOLUTION 


31 


one  row  of  A  vanishing,  these  cr/s  are  all  positive,  and  we  may  consider 
the  determinant 

U  ll  <?!» 


V  O'lO-.j  •  •  • 


a  HI          a™ 


This  determinant  satisfies  all  the  conditions  of  our  lemma,  and 
hence  we  have  the  inequality 

A    ^ 


If  now  we  let  M  be  a  positive  constant  at  least  as  great  as  any  of  the 

quantities   a^  ,  we  see  that 

<r,  ^  )i  J/J 

and  thus  we  obtain  the  formula 

A    5iVw"J/". 

This  inequality  obviously  also  holds  in  the  case  we  have  excluded 
in  which  all  the  elements  in  some  row  of  A  are  zero  ;  and  we  have 
proved 

HADAMARD'S  THEOREM.     If  the  elements  «<,  of  the  determinant 


A  = 


On  ...  alr, 


«»!•••  ««» 

are  real  and  satisfy  the  inequality 


then  [  A  j  ^  xV  J/". 

We  proceed  now  to  Fredholm's  solution  of  the  integral  equation 

(6) 

in  which  we  assume*  that  K(x,  £)  is  finite  in  8  and  that  any  dis- 
continuities it  may  have  are  regularly  distributed,  while  f(f)  is 
assumed  to  be  continuous  in  /.  Furthermore  we  will  assume  that 
K(x,  x)  is  integrable  throughout  the  interval  /. 

*  We  shall  not  consider  the  case  in  which  K  fails  to  remain  finite.  This  case 
is  of  considerable  practical  importance,  and  we  refer  the  reader  to  Fredholm's, 
Hilbert:s,  and  E.  Schmidt's  papers  for  various  treatments  of  it. 


82 


FREDHOLM'S  SOLUTION 


We  form  the  two  series 

"b  \        7C  1          f1       [b         '       -^(£l>     £l 

-i)    -1  +  ^]Ja  Ja    |  K(£.2,^ 


ft 

=  l- 

Ja 

_  1    fb    fb    [ 
O\  Ja   Ja   Ja 


(7), 


fad&-...       (8). 


All  of  the  multiple  integrals  which  occur  here  obviously  converge 
and  may  be  evaluated  as  iterated  integrals  taken  in  any  order. 

We  will  now  prove  that  the  first  of  these  series  converges  absolutely, 
and  that  the  second  converges  absolutely  and  uniformly  in  8.  As  was 
already  stated  in  the  last  section,  we  shall  call  the  constant  D  the 
determinant  of  K,  and  the  function  D  (x,  y)  the  adjoint  of  K*. 

Let  M  be  the  upper  limit  of  K(x,  y]  in  &  Then  each  of  the 
determinants  which  occur  in  (7)  satisfies  the  conditions  of  Hadamard's 
Theorem,  and  the  general  term  of  (7) 


-i)Y'...f 

W!     Ja         Ja 


<&... 


does  not  exceed  in  absolute  value 


The  series  of  which  CH  is  the  general  term  converges,  since 


a  quantity  which  obviously  approaches  zero  as  its  limit  when  n  =  oc . 
Consequently  the  series  (7)  converges  absolutely. 

Similarly  the  general  term  of  (8)  does  not  exceed  in  absolute  value 
Jnn  Mn  (h  —  a}n~1 


O-l)! 


1  JJn  +  -l  V»+    1  //^  1\"     ,r//  x 

and  -^j~-  =  -        -  v  /    1  +  -     M  (u  —a). 

Bn  n       V   \       ?2/ 

*  Strictly  speaking,  the  determinant  and  adjoint  toj'tfe  regard  to  the  region  S. 


FREDllOLM'S   SOLUTION 


33 


Since  this  last  quantity  approaches  zero  as  its  limit,  the  series  of  which 
Bn  is  the  general  term  converges ;  and,  this  being  a  series  of  constant 
positive  terms,  the  series  (8)  converges  absolutely  and  uniformly 
in  S. 

It  will,  of  course,  be  essential  for  us  to  discuss  the  question  of  the 
continuity  or  discontinuity  of  the  adjoint  function  D  (x,  y)*.  For 
this  purpose  let  us  write  the  general  term  of  (8)  in  the  form 


'b 


t 
(x,y}     ... 

Ja        Ja 


(9), 


where 


rb         r 

L(4jr)«f  .../ 

Ja        Ja 


(*,  60 


(10). 

Since  the  first  term  in  (9)  is  the  product  of  the  general  term  of  (7)  by 
K  (x,  y},  we  may  write 

00   (—  l)* 
D  (x,  y)  =  D .  K  (x,  y)  +  2  -  — ^-  Qn  (x,  y)  (11). 

If  we  expand  the  determinant  in  (10)  according  to  the  elements  of 

its  first  row,  we  get 


where 


Let  us  change  the  notation  for  the  variables  of  integration  by 
introducing  in  the  /th  integral  in  (12)  £  in  place  of  6,  f,  in  place  of 
6-+i,  li+i  in  place  of  gu.2,  etc.,  but  leaving  the  variables  6,  •••  6-1 
unchanged  in  notation.  If  then  in  this  /th  integral  we  bring  the  ith 

*  If  we  wished  to  assume  that  K  (x,  y)  is  continuous  throughout  S,  we  could 
easily  infer  from  the  continuity  of  the  terms  in  (8),  and  from  the  uniform 
convergence  of  this  series,  that  D  (x,  y)  is  continuous  in  S. 

B.  3 


FREDHOLMS   SOLUTION 


row  of  the  determinant  into  the  first  place,  all  the  terms  of  (12)  are 
seen  to  be  equal,  and  we  may  write 


n(x,y}=-nt\..  f 

J  a         J 


(13), 


where 


p  = 

•*-   n 


K (**-»  ?)*-(&_„  £,) ...  K(tn-lt  4-0 

We  are  now  in  a  position  to  examine  the  question  of  the  continuity 
of  D  (x,  y}.  For  this  purpose  let  us  first  show  that  the  functions 
Qn  (%,  y)  as  defined  by  (10)  are  continuous  throughout  8.  From  either 
(10)  or  (13)  it  is  clear  by  a  reference  to  Theorem  1,  §  1,  that  $x  (x,  y}  is 
continuous  throughout  S.  Let  us  assume  that  Q^,  ...  Qre_x  are  con- 
tinuous there.  If  from  this  assumption  we  can  infer  that  Qn  is 
continuous  there,  we  shall  have  completed  the  proof  that  all  Q's  are 
continuous  by  the  method  of  mathematical  induction.  We  may  write 
the  second  member  of  (13)  in  the  form 


rb         rb 

-n      ...  f  K(x,t) 

Ja         Ja 


The  first  of  these  integrals  is  merely  a  constant  multiple  of 

Ja 

and  is  therefore  continuous  by  Theorem  1,  §  1.     The  second  may  be 
written 


-n 


and  is  therefore  continuous  for  the  same  reason. 

Having  thus  seen  that  all  the  $'s  are  continuous,  we  infer  that  the 
function  represented  by  the  series  in  (11)  is  also  continuous,  since  this 


8]  FREDHOLM'S  SOLUTION  35 

series  is  uniformly  convergent  in  S,  being  the  difference  between  the 
uniformly  convergent  series  (8)  and  the  product  of  the  series  (7)  by  the 
finite  function  K  (x,  y).  We  can  now  read  off  from  (11)  the  finiteness 
and  the  nature  of  the  discontinuities  of  D  (x,  y). 

The  fundamental  identity  which  connects  K(x,  y),  D(x,  y),  and  D 
can  now  be  established.  If  we  substitute  the  value  of  Qn  from  (13) 
in  the  series  in  (11),  we  get  the  same  series  we  should  obtain  by 
multiplying  the  series  for  D  (£,  y)  by  K  (x,  £)  and  integrating  term 
by  term.  Since  this  integration  is  permissible  on  account  of  the 
uniform  convergence  of  (8)  and  of  the  integrability  of  the  value  of  this 
series  and  of  its  terms,  we  have  thus  established  the  formula 


In  precisely  the  same  way,  if  we  expand  the  determinant  in  (10) 
according  to  the  elements  of  its  first  column,  we  find 


n=l 


2  Qn  (*,  y)=D  (*,  0  K(t,  y)  d(  (15). 

- 


By  combining  (14)  and  (15)  with  (11),  we  thus  get  the  fundamental 
formula 

D  (x,  y}-D.K(x,y)  =  !  *  K(x,  fl  D  (*,  y)  d£ 

Ja 

(16). 


This  is  the  same  as  formulae  (4)  and  (5)  of  the  preceding  section, 
which  were  there  deduced  without  any  attempt  at  accuracy. 

We  have  thus  proved 

THEOREM  1.  Every  function  K  (x,  y)  which  is  finite  in  S  and  whose 
fKaomfsfMttfMB  are  regularly  distributed  and  for  which  K  (x,  x)  is 
integrable  in  I  has  a  determinant  D  given  by  the  absolutely  convergent 
series  (7)  and  an  adjoint  D  (x,  y)  given  by  the  absolutely  and  uniformly 
convergent  series  (8).  This  adjoint  function  is  finite  in  S  and  is 
continuous  at  every  jmnt  where  K  is  continuous*.  The  function  K,  its 
determinant  and  its  adjoint  satisfy  the  identity  (16). 

*  If  D  =  0,  D  (x,  y)  is  continuous  throughout  S.  If  D=4=0,  D  (x,  y)  is  discon- 
tinuous wherever  A'(x,  y)  is  discontinuous,  and  is  discontinuous  in  such  a  way  that 
D  (.T.  y)  -  D  .  K  (x,  y)  is  continuous. 

3—2 


36  FREDHOLM'S  SOLUTION  [8 

If  D  4=  0,  we  may  divide  (16)  by  -  D,  and  if  we  let 

*<*»)  =  -^  (17), 

the  identity  (16)  reduces  to  formula  (9),  §  6.     Hence 

THEOREM  2.  Every  function  K  (x,  y)  finite  in  S,  whose  discon- 
tinuities are  regularly  distributed,  for  which  K  (x,  x)  is  integrable  in 
I,  and  whose  determinant  is  different  from  zero  has  a  reciprocal,  which 
is  given  by  formula  (17). 

By  a  reference  to  Theorem  1,  §  6,  we  get  at  once  the  further  result 
THEOREM  3.  If  the  kernel  K  (x,  y}  of  the  equation  (6)  is  finite  in 
S,  and  any  discontinuities  it  may  have  are  regularly  distributed,  and 
K  (x,  x)  is  integrable  in  I,  and  if  its  determinant  D  is  not  zero,  and 
f(x)  is  continuous  in  I,  the  equation  (6)  has  one  and  only  one  continuous 
solution,  and  this  solution  is  given  by  the  formula 


In  order  to  secure  the  existence  of  a  determinant  and  an  adjoint 
we  have  been  obliged  to  assume  that  K  (x,  x)  is  integrable,  since 
otherwise  even  the  terms  of  the  series  (7),  (8)  would  be  meaningless. 
Such  a  restriction  on  the  kernel  of  equation  (6)  is,  however,  obviously 
immaterial  since  a  change  of  definition  of  K  on  the  line  x  =  y  will 
clearly  have  no  effect  on  the  solutions  of  this  equation.  In  order  to 
cover  all  cases  conveniently,  we  will  lay  down  the  following 

DEFINITION*.  Let  K  (x,  y)  be  any  function  finite  in  S  and  whose 
discontinuities  are  regularly  distributed,  and  let 


K  (x  ?/)  =  ' 

-  o  V  ,  J)     |      Q         when  x-y  ; 

then  the  determinant  D0  and  the  adjoint  D0  (x,  y)  of  K0  are  called  the 
modified  determinant  and  the  modi/led  adjoint  of  K  respectively. 

Using  this  definition,  we  obtain  at  once 

THEOREM  4.  If  we  drop  the  requirement  that  K  (x,  x)  be  integrable 
in  7,  Theorem  3  remains  true  provided  we  use  in  place  of  D  and 
D  (x,  y)  the  modified  determinant  and  the  modified  adjoint  of  K 
respectively. 

It  is  to  be  noticed  that  the  modified  determinant  and  the  modified 
adjoint  may  exist  when  the  determinant  and  adjoint  do  not.  If, 
however,  the  conditions  of  Theorem  1  are  fulfilled,  so  that  all  four  of 
these  quantities  exist,  it  is  a  matter  of  some  interest  to  know  the 

*  The  idea  involved  in  this  definition  was  used  by  Hilbert  (Gottinger  Nachrich- 
ten,  1904,  p.  82)  for  the  purpose  of  treating  a  case  in  which  K  is  not  necessarily 
finite. 


8]  FREDHOLM'S  SOLUTION  37 

relation  between  the  determinant  and  the  modified  determinant,  and 
also  between  the  adjoint  and  the  modified  adjoint.  These  relations 
are  given  by  the  formulae 

(18), 

n  / ,.   ,A  _  >  when  x  *y 

0  I*'  y)  -  V  [D  (x,  y)  -  DK  (x,  y)]  when  x  =  y 

where  D  and  D  (x,  y)  are  the  determinant  and  adjoint  of  K,  and  Z),, 
and  D0  (x,  y)  the  modified  determinant  and  modified  adjoint,  and 

(20). 

The  proof  of  (18)  and  the  first  half  of  (19)  consists  simply  in 
multiplying  the  series  for  D0  or  Dn  (x,  y)  by  the  series 


while  the  second  half  of  (19)  may  readily  be  deduced  either  in  the 
same  way  or  from  (16)  in  combination  with  (18)  and  the  first  half  of  (19). 
The  details  of  these  proofs  we  leave  to  the  reader*. 

The  case  considered  by  Liouville  and  Volterra  in  which  the  upper 
limits  of  integration  are  variable  may  readily  be  seen  to  come  under 
the  last  theorem.  We  get  this  case,  as  we  have  seen,  by  supposing 
that  K  (x,  y)  -  0  when  y  >  x.  It  is  not  hard  to  show  that  D0  =  1,  and 
that  the  series  (8)  for  D0  (x,  y)  reduces  to  the  series  (5)  of  §  6,  so  that 
the  solution  just  given  reduces  in  this  case  precisely  to  Volterra's 
solution. 

In  conclusion,  we  turn  very  briefly  to  the  case  D  =  Q.  If  (6)  has 
a  continuous  solution  u  (x),  we  shall  have 


Multiply  this  equation  by  D  (x,  £)  and  integrate  with  regard  to  £  from 
a  to  b.     By  reducing  the  result  obtained  by  means  of  (16)  we  deduce 

*  The  reader  may  also  show  that  if  K  (x,  y)  is  finite  in  S,  and  any  discon- 
tinuities it  may  have  are  regularly  distributed,  and  its  modified  determinant  D0*0, 
it  has  a  reciprocal  which  is  given  by  the  formula 


k(x,y)  = 


38  THE    INTEGRAL    EQUATION  [8,    9 

THEOREM  5*.  If  the  determinant  of  the  integral  equation  (6) 
vanishes,  while  the  other  conditions  of  Theorem  3  are  fulfilled,  a  necessary 
condition  for  the  equation  to  have  a  solution  continuous  throughout 
I  is 


/: 


Apart  from  the  special  case  in  which  D  (x,  y]  =  0,  it  will  be  seen 
that  when  D  =  0  it  is  only  for  special  functions  f  (x)  that  the  equation 
{6)  can  have  a  continuous  solution. 

Filially  we  note  that  if  we  drop  the  restriction  that  K  (x,  x)  be 
integrable,  Theorem  5  remains  true  if  we  replace  the  determinant  and 
the  adjoint  by  the  modified  determinant  and  the  modified  adjoint. 

9.  The  Integral  Equation  with  a  Parameter.  For  many 
purposes  it  is  important  to  consider  integral  equations  of  the  second 
kind  whose  kernel  contains  a  parameter  A.  In  particular,  the  case  in 
which  this  parameter  comes  in  only  as  a  factor  is  of  prime  importance. 
In  this  case  the  equation  may  be  written 


tb 

Ja 


(1). 


It  is  customary  to  speak  of  K  as  the  kernel  of  this  equation.  It 
seems  more  consistent,  however,  to  call  \K  the  kernel,  and  this  we 
shall  do. 

It  will  be  assumed  throughout  this  section  that  /  is  continuous 
throughout  /,  that  K  is  finite  in  $,  that  any  discontinuities  K  may 
have  are  regularly  distributed,  and  that  K  (x,  x)  is  integrable  through- 
out /+.  While  we  still  suppose  the  functions  /and  K  to  be  real§,  we 
shall  allow  the  parameter  A.  to  take  on  complex  as  well  as  real  values. 
This  makes  the  kernel  ^K  no  longer  necessarily  real,  but  this,  as  was 
remarked  in  the  second  footnote  to  §  5  will  not  in  any  way  affect  the 

*  This   theorem   obviously  corresponds   to   the   fact   that  if  the   determinant 
of  a   system   of  linear  equations   vanishes,  the   equations  have,  in   general,  no 
solution. 

t  It  may  be  noticed  that  if  we  let  \  =  l/p,  Liouville's  original  equation  (5),  §  4, 
has  precisely  this  form. 

*  This  last  restriction  need  not  be  made  for  Theorems  1  and  2  below,  while 
in  the  later  theorems  it  might  easily  be  avoided  by  using  the  modified  determinant 
and  adjoint. 

§  It  may  readily  be  seen  that  the  reality  of  K  is  not  necessary  for  the  proofs  of 
Theorems  1  and  2.  That  it  is  not  necessary  in  the  case  of  the  later  theorems 
becomes  evident  if  we  make  use  of  the  more  general  form  of  Hadamard's  Theorem 
which  refers  to  determinants  with  complex  elements. 


9]  WITH    A    PARAMETER  39 

developments  of  that  section  and  the  next,  the  main  results  of  which 
read  as  follows  when  we  replace  Kby  ^K: 

THEOREM  1.     If  K^  K»,  ...  are  the  functions  obtained  by  iteration 
from  K,  then  for  any  value  of  A  for  which  the  series 

*(*,y,  *)  =  -lKi(*,y)-KK*(*,y)--  (2) 

converges  uniformly  in  (x,  y)  throughout  S,  the  function  k  (x,  y  ;  A)  is  the 
i  -K-iprocal  of  \K  (x,  y};  and,  for  any  such  value  of\,  the  equation  (1)  has 
one  and  only  one  continuous  solution,  namely 

u  (x)  =f(x)  -    bk  (x,  i  •  A)/(£)  dt  (3), 


a  solution  which  is  obviously  real  if\  is  real. 

THEOREM  2.     If  M  denotes  the  upper  limit  of  |  K(x,y)  \  in  S,  the 
hypothesis  of  Theorem  1  <"•///  1  >t'  fulfilled  when 


This  hypothesis  will  be  fulfilled  for  all  values  of  \  if 

K(x,  y)  =  0         when  y  >  x  (5). 

Turning  now  to  §  8,  we  first  establish 
THEOREM  3.     The  two  power-series 


brb 


+ 


[I 

*>-*;. 

*(*,y) 


-. ..(7), 


r»n ft- rye  for  all  values  <>f  A,  and-  the  second  converges  uniformly  in 
(x,  y  ;  X)  for  (x,  y)  in  S  and  X  restricted  to  any  finite  range  of  values. 
For  this  range  of  values  of(.r,y;  A),  the  function  D  (x,y;  A)  represented 
by  the  second  series  is  finite  and  has  no  discontinuities  except  where  K 
is  discontinuous. 

These  functions  satisfy  the  identity 


(8). 


40  THE    INTEGRAL   EQUATION  [9 

If  we  restricted  ourselves  to  real  values  of  X,  this  theorem  would  be 
hardly  more  than  a  special  case  of  the  results  of  §  8  obtained  by 
replacing  K  by  A7T,  the  one  new  point  being  that  the  convergence  of 
(7)  is  uniform  in  A  as  well  as  in  (x,  y).  This  point,  however,  becomes 
at  once  obvious  when  we  replace  A.  by  a  positive  constant  p.  such  that 
for  the  range  of  values  of  A  considered  |  A.  j  ^  p.. 

In  the  case  where  A  is  complex*  the  theorem  cannot  be  obtained  as 
a  special  case  of  §  8,  since  in  that  section  we  assumed  that  K  was  real, 
and.  \K,  which  now  takes  the  place  of  K,  is  not  real.  The  proof  of  the 
theorem  is,  however,  immediate  if  we  run  through  the  developments  of 
§  8  again  replacing  K  by  ^K  at  every  point.  We  leave  it  for  the 
reader  to  do  this. 

We  add,  also  without  proof,  the  following  theorem  which  the 
reader  may  establish  by  reasoning  closely  analogous  to  that  used 
in  §8: 

THEOREM  4.  The  series  obtained  by  differentiating  (7)  p  times  term 
by  term  with  regard  to  A  converges  uniformly  in  (x,  y  ;  A)  for  (x,  y)  in 
S  and  A  restricted  to  any  finite  range  of  values.  The  function 


represented  by  this  series  is  finite  for  this  range  of  values  and  is  discon- 
tinuous only  where  K  is  discontinuous. 

We  next  establish  a  lemma  concerning  power-series  which  we  shall 
find  useful. 

LEMMA  t.     If  in  the  power-series 

P(x,y\  X)  =  a«(x,y}  +  al(x,y}\  +  az(x,y)X2+  ...  (9) 
each  of  the  coefficients  a,-  (x,  y)  is  finite  in  S,  and  if  there  exists  a  positive 
constant  R  such  that  when  A  |  <  R  the  series  (9)  converges  throughout  S  ; 
and:  if  the  function  P  represented  by  the  series  is  a  finite  function  of 
(x,  y  ;  A)  when  (x,  y)  varies  throughout  S  and  A  varies  on  the  circum- 
ference \\\=-r,  where  r  is  a  positive  constant  less  than  R;  tk#n,  if  p  is  a 
positive  constant  less  than  r,  the  series  (9)  converges  uniformly  in  (x,y  ;  A) 
for  (x,  y)  in  S  and  A  ^  p. 

For  denote  by  M  the  upper  limit  of  P  when  (x,  y)  is  in  S  nnd 
|  A  |  =  r.  By  a  well-known  theorem  on  power-series  (cf.  Forsyth,  Theory 

of  Functions,  2nd  ed.,  p.  34) 

M^\an\  rn. 

*  No  special  consideration  of  this  case  would  be  necessary  if  Hadamard's 
Theorem  had  been  established  for  determinants  whose  elements  are  complex. 

t  This  lemma  may  be  extended  at  once  to  the  case  in  which  the  a's  are 
functions  of  any  number  of  variables,  and  S  is  replaced  by  any  finite  or  infinite 
region. 


9]  WITH    A    PARAMETER  41 

Consequently,  when  (a;  y)  is  in  S  and  A  |  ^  p,  the  terms  of  (9)  do  not 
exceed  in  absolute  value  the  corresponding  terms  of  the  series 

M+M? 

r 

and,  this  being  a  convergent  series  of  positive  constant  terms,  the 
uniform  convergence  of  (9)  is  established. 

The  exceptional  character  of  the  case  in  which  D  =  0  was  evident  in 
§  8.  We  lay  down  the  following 

DEFINITION.  The  roots  of  the  Integral  analytic  function  D  (A)  shall 
be  called  simply  the  roots  for  the  function  K  (x,  y)*. 

We  are  already  familiar  with  one  case  in  which  D  (A)  has  no  roots, 
namely  the  case  in  which  K(x,  y)  is  zero  when  y>x.  A  simple  case 
in  which  D  (A)  does  have  a  root  is  when  K(x,  y)  =  l.  Here 

D  (A)  =  1  -  A  (b  -  a), 

so  that  D  (A)  has  one  and  only  one  root,  namely  A  =  l/(b  -  a).  Other 
important  cases  in  which  D  (A)  has  at  least  one  root  will  be  considered 
in§  11. 

By  the  proof  used  for  Theorem  2,  §  8,  we  obtain 

THEOREM  5.  Except  when  A  is  a  root  for  K  (x,  y),  the  function 
\K(x,  y)  has  a  reciprocal  k(x,  y  ;  A)  given  by  the  formula 


This  reciprocal  is  therefore  an  analytic  function  of  A  which  has  no 
singularities  in  the  finite  part  of  the  \-plane  except  perhaps  poles  at  the 
roots  of  D  (A).  We  will  now  show  that  at  any  such  point  £  does  have  a 
pole,  at  least  for  some  points  (.r,  y)  in  8. 

For  this  purpose  we  first  note  the  formula 


whose  truth  follows  at  once  from  formulae  (6),  (7)t. 

Let  Aj  be  the  root  we  wish  to  consider,  and  suppose  that  it  is  a  root 
of  the  mih  order  of  D  (A),  so  that  we  may  write 

#(A)  =  cm(A-A1)'»  +  Cm+1(A-A1)m+1+....    (w>0,<;m*0)        (12), 
this  equation  being  valid  for  all  values  of  A. 

*  We  note  that  the  roots  of  the  modified  determinant  Z>0  (\)  are  the  same  as 
those  of  D  (X),  cf.  formula  (18),  §  8.  We  may  therefore  compute  the  roots  for 
K(x,  y)  as  the  roots  of  D0(X).  We  should  so  define  them  if  K(x,  x)  were  not 
integrable. 

t  That  D  (£,  £  ;  A)  is  integrable  in  I  is  clear,  since  by  (11),  §  8,  it  differs  by  a 
continuous  function  from  a  constant  multiple  of  K  (f  ,  £).  This  same  formula  shows 
us  that  the  series  for  D  (|,  £  ;  X)  may  be  integrated  term  by  term. 


42  EQUATION   WITH    PARAMETER  [9 

On  the  other  hand  D(x,y;  A),  being  analytic  in  A.  for  all  finite  points 
in  the  A-plane,  may  be  developed  about  A.  =  Aj  in  a  power-series  whose 
coefficients  are  functions  of  (x,  y}.  Some  of  the  first  of  these  co- 
efficients may  be  identically  zero  throughout  S.  They  cannot,  however, 
all  be  identically  zero,  for  then,  as  we  see  from  (7),  we  should  have 
K(x,  y)  =  0,  and  consequently  Z>(A)  =  1,  and  this  is  inconsistent  with 
the  hypothesis  that  D  (A)  has  a  root  Aj .  We  may  therefore  write 

D(x,y,*)=  gk  (m,  y}  (A.  -  ^)k  +  gk+l  (a,  y)  (A  -  A,)**1  +  . . .      (k  ^0,  gh$0) 

(13). 

The  coefficients  gk,  gk+1,  ...  differ  merely  by  constant  factors  from 
successive  derivatives  of  D  (x,  y  •  A)  with  regard  to  A  when  A  =  Ax . 
From  Theorem  4  we  therefore  see  that  all  these  g's  are  finite  through- 
out S  and  are  discontinuous  only  where  K  is  discontinuous. 

A  reference  to  our  Lemma  now  shows  us  that  the  series  (13)  con- 
verges uniformly  in  (x,  y ;  A)  when  (x,  y)  varies  throughout  S  and 
A  varies  over  any  finite  region  in  the  A-plane.  We  may  then  substitute 
the  series  (12)  and  (13)  in  (11),  and  integrate  term  by  term  on  the  left, 
thus  getting 

S  (A  -  A,)*  (bgt  (£  t)  dt  =  A  5  ict  (A  -  A,)'-'. 

i=k  Ja  i=m 

If  here  we  write  A  =  Ax  +  (A  -  Aj),  we  see  that  the  second  member  of  this 
equation  may  be  developed  in  a  series  proceeding  according  to  positive 
integral  powers  of  A  —  A1;  and  beginuing  with  the  term  in  (A  — Aj)'" 
whose  coefficient  is  not  zero,  since  Aj  ^  0,  cm  *  0.  Since  the  left-hand 
side  is  also  a  power-series  in  A-  A1;  it  follows  that  it  must  also  begin 
with  the  same  term,  so  that  k  ^  m  —  1,  that  is 

k<m  (14). 

Since  /;  (x,  y;  A)  is  the  negative  of  the  ratio  of  (13)  to  (12),  we  thus  get 

THEOREM  6.  If  Ax  is  a  root  for  K(x,  y),  and  k  (x,  y  ;  A)  the  recipro- 
cal of  \K(x,  y),  then,  at  least  for  some  points  in  S,  k(x,  y ;  A)  has  a 
pole  at  the  point  A  =  Aj . 

Instead  of  considering  the  development  of  k  about  a  root  of  D  (A), 
let  us  now  consider  its  development  about  the  point  A  -  0.  This  is 
certainly  not  a  root  of  D  (A),  as  we  see  from  the  form  of  the  series  for 
D  (A).  Consequently  the  function  k  (x,  y ;  A)  is  analytic  in  A  at  the 
point  A  =  0,  and  can  be  developed  about  this  point  in  a  power-series 
which  converges  in  a  circle  described  about  the  origin  as  centre  and 
reaching  out  to  the  point  in  the  A-plane  nearest  to  the  origin  which 
represents  a  root  of  D(\).  It  is  possible  that  for  some  points  (,r,  //) 
this  series  may  converge  outside  of  this  circle,  but  it  cannot  do  so  for 


9,  10]        HOMOGENEOUS  INTEGRAL  EQUATIONS  43 

all  points  of  S.  This  development  must  be  precisely  the  series  (2), 
since  we  have  seen  that  that  series  represents  /•  (.r,  y\  A)  when  A.  is 
sufficiently  small. 

By  means  of  our  Lemma  we  can  go  a  step  further.  For,  if  we 
denote  by  /•  a  positive  constant  smaller  than  the  absolute  value  of 
any  of  the  roots  of  D  (A),  the  function  D  (A)  is  continuous  and  different 
from  zero  when  A  -r.  We  know  also  from  Theorem  3  that  D(x,y\  A) 
is  finite  when  i  A  -  r  and  (x,  y)  is  in  S.  Consequently,  by  (10),  the 
same  will  be  true  of  /•  (.r,  y  ;  A).  We  may  then  infer  by  means  of  our 
Lemma  the  uniform  convergence  of  (2),  and  we  thus  get 

THEOREM  7.  7/Aj  is  the  nx/tfor  K(.r,y)  of  smallest  absolute  calue, 
the  series  (2)  converges  uniformly  in  (x,  y;  A)  when  A  =  p  <  Aj  ,  and 
(.r,  y)  is  in  S,  to  the  function  i-  (,r,  y;  A)  reciprocal  to  \K(x,  y). 

From  Theorem  5  and  Theorem  1  ,  §  6,  we  infer  at  once 

THEOREM  8.     If  A  is  not  one  of  the  roots  for  K,  equation  (1)  has  one 

and  only  one  solution  continuous  in  j",  namely 

u(x,  A)  =/(,) 

It  will  readily  be  seen  that  u  (x,  A)  is  a  function  analytic  at  all  finite 
points  of  the  A-plane  except  at  the  roots  of  D  (A)  where  it  can  have  no 
other  singularities  than  poles. 

10.  The  Fundamental  Theorem  concerning  Homo- 
geneous Integral  Equations,  with  some  Applications.  We 
consider  in  this  section  the  equation 


where,  as  before,  we  assume  that  K  is  finite  in  S,  that  any  discon- 
tinuities it  may  have  are  regularly  distributed,  and  that  K(x,  x)  is 
integrable  in  /*. 

From  Theorem  8,  §  9,  we  see  that,  when  A  is  not  a  root  for  K,  the  only 
continuous  solution  of  (1)  is  the  obvious  solution  u  =  0.  It  remains 
merely  to  consider  the  case  in  which  A  is  a  root  of  D(\).  We  will 
prove  that  in  this  case  (1)  always  has  a  continuous  solution  which  does 
not  vanish  identically  T. 

*  As  in  §  9,  this  last  restriction  might  be  avoided  by  using  the  modified 
determinant  and  adjoint. 

t  This  fundamental  theorem  was  first  established  by  Fredholm.  We  follow 
Kneser,  Rendiconti  of  Palermo,  vol.  22  (1906),  p.  233. 


44  HOMOGENEOUS    INTEGRAL    EQUATIONS  [10 

Let  Aj  be  the  root  in  question,  and  develop  the  determinant  and  the 
adjoint  of  \K  into  the  series  (12)  and  (13)  of  §  9.  These  series  we 
substitute  in  the  identity  (8),  §  9,  and,  since  the  second  converges 
uniformly,  we  obtain,  after  dividing  by  (A  -  A^*,  the  formula 


=  x2(x-A1y-fc  Jr<«,0fl($jr)<«    (2). 

i=fc  Jo. 

If  here  we  let  A  =  A, ,  we  see,  on  referring  to  the  inequality  k  <  m 
(cf.  (14)  §  9),  that  all  the  terms  reduce  to  zero  except  the  first  term  in 
the  first  series  and  the  first  term  in  the  last  series.  We  thus  get 


9*  (#,  y)  =  *i  (  f(*,  £)  g*  (6  y)  dt  (3). 

Ja 


Thus  we  see  that  whatever  constant  value  in  the  interval  /  we  may 
assign  to  y,  the  function  <fe(#»  y)  is  a  solution  of  (1)  when  X  =  Xa. 
Moreover,  since  we  saw  in  §  9  that  gk  is  finite  in  8  and  that  any 
discontinuities  it  has  are  regularly  distributed,  we  see,  by  referring  to 
Theorem  1,  §  1,  that  the  second,  member  of  (2)  is  a  continuous  function 
of  (x,  y)  throughout  S.  Hence  gk  is  a  continuous  solution  of  (1). 
Finally,  we  know  that  gk  does  not  vanish  identically.  Thus  we  have 
proved 

THEOREM  1.  A  necessary  and  sufficient  condition  that  equation  (1) 
have  a  continuous  solution  which  is  not  identically  zero  is  that  X  60  «  root 
forK. 

DEFINITION  1.  By  a  principal  solution  J  "or  K(x,  if)  is  understood  a 
continuous  solution  of  t/ie  homogeneous  equation  (1)  which  is  not 
identically  zero*. 

The  last  theorem  established  shows  that  to  every  root  of  D  (X) 
correspond  one  or  more  principal  solutions.  In  fact  it  is  easily  seen 
that  to  a  root  of  D  (A)  will  always  correspond  an  infinite  number  of  such 
solutions.  For  if  u^  (x)  is  a  first  principal  solution  corresponding  to 
the  root  X15  then  clu1(x),  where  cl  is  any  constant  different  from  zero, 
will  clearly  also  be  a  principal  solution  corresponding  to  Xjt. 

*  A  more  complete  term  here  would  be  principal  solution  in  x  (Ntilllosung 
in  x)  ;  by  a  principal  solution  in  y  being  understood  a  continuous  solution  of 


fb 

u(y)  =  \  I    K(x,  y)u  (.>•),!.,• 
J  « 


which  does  not  vanish  identically. 

+  Notice  that  the  principal  solution  gk  (x,  y)  obtained  in  the  proof  of  Theorem  1 
also  contains  a  parameter,  y. 


10]  HOMOGENEOUS   INTEGRAL   EQUATIONS  45 

If,  besides  the  solutions  c1ul  (.r),  the  equation  (1)  has,  when  A  -  X1} 
other  continuous  solutions,  let  M,  (x)  be  such  a  one.     Then 


will  clearly  be  a  continuous  solution  whatever  the  values  of  the  constant 
Ci  and  c»  may  be.  Proceeding  in  this  way,  we  see  that  unless  the 
equation  (1)  has  when  X  =  X1  an  infinite  number  of  linearly  independent 
continuous  solutions  the  general  continuous  solution  of  (1)  for  X  =  AX 
may  be  written 

Ci  K!  (a?)  +  Ca  M2  (a-)  +  .  .  .  +  c»  «»  (a-) 

where  uly  ...  «„  are  linearly  independent  continuous  solutions  of  (1)  for 
X  =  XX  and  c1}  ...  cn  are  arbitrary  constants.  The  functions  «1?  ...  um 
may  then  be  called  a  fundamental  system  of  solutions  of  (1)  when 
X  =  Xj*  ;  and  the  number  72  may  be  called  the  index  of  the  root  Xj  ,  a 
term  not  to  be  confounded  with  the  multiplicity  of  this  root.  We  give 
the  formal  definition  of  both  of  these  terms  : 

DEFINITION  •>.  The  number  of  linearly  independent  principal 
solutions  for  K(x,  y)  which  correspond  to  a  root  Xx  for  K  is  called 
the  index  <//'X1  ;  the  number  of  times  ^  occurs  as  a  root  of  the  analytic 
function  Z>(X)  (the  determinant  of\K)  is  called  the  multiplicity  o/Xa. 

From  analogy  with  the  theory  of  linear  algebraic  equations  we 
should  expect  that  the  determination  of  the  index  of  a  root  for  K 
would  require  the  introduction  of  series  which  correspond  to  the 
second  minors,  third  minors,  etc.,  of  the  determinant  of  the  system 
of  linear  equations  just  as  the  series  for  D(x,y\  X)  corresponds  to  the 
first  minors  of  this  determinant.  Such  series  were,  in  fact,  introduced 
by  Fredholm,  and  form  an  essential  part  of  the  article  already  cited. 
By  means  of  them  he  was  able  to  determine  completely  the  index  of  any 
root  of  D  (X),  and  among  other  things  he  established  the  interesting 
fact  that  the  index  of  a  root  of  D(\)  can  never  exceed  its  multiplicity. 
In  particular,  since  the  multiplicity  is  necessarily  finite,  it  follows  that 
the  index  is  also  finite.  For  the  discussion  of  these  questions,  we  refer 
the  reader  to  Fredholm's  article.  In  §  12  we  shall  give  a  different 
proof,  due  to  E.  Schmidt,  that  the  index  of  a  real  root  of  D(X)  is 
finite. 

We  conclude  this  section  with  two  fairly  obvious  applications  of  the 
theory  of  the  homogeneous  equation,  first  to  the  theory  of  reciprocal 

*  AH  this  is  in  perfect  analogy  with  the  theory  of  homogeneous  linear  algebraic 
equations.  Cf.,  for  instance,  the  author's  Introduction  to  Higher  Algebra  (Mac- 
millan,  1907),  p.  49. 


46  SYMMETRIC   KERNELS  [10,    11 

functions,    and    secondly    to    the   theory    of   the    non-homogeneous 
equation. 

THEOREM  2.  If  K(x,  y)  is  finite  in  8  and  any  discontinuities  it 
may  have  are  regularly  distributed,  and  K(x,  x)  is  integrable  in  /,  a 
necessary  and  sufficient  condition  that  K  have  a  reciprocal  is  that  t/w 
determinant  of  K  do  not  vanish. 

That  this  is  a  sufficient  condition  was  already  proved  in  Theorem  2, 
§  8.  To  prove  it  necessary,  suppose  the  determinant  were  zero.  Then 
the  equation  (1)  has  an  infinite  number  of  continuous  solutions,  and 
this  is  seen  by  Theorem  1,  §  6,  to  be  impossible  if  Kha,s  a  reciprocal. 

THEOREM  3.     If  A.  is  a  root  f 01-  K(x,  y),  the  equation* 

u  (x}  =f(x)  +  A  (bK(x,  i)  u  (()  d€  (4) 

Ja 

has  either  no  continuous  solution  or  an  infinite  number  of  continuous 
solutions. 

For  it  is  clear  that  by  adding  to  a  first  continuous  solution  of  (4) 
any  continuous  solution  of  the  homogeneous  equation  (1),  we  get 
another  continuous  solution  of  (4). 

In  fact  it  is  readily  seen  that  the  general  solution  of  (4)  will  be 
obtained  by  adding  to  a  particular  solution  of  (4)  the  general  solution 
of  (1).  Cf.  the  corresponding  well-known  fact  for  non-homogeneous 
linear  differential  equations. 

11.  Symmetric  Kernels.  A  function  K(x,  y)  is  said  to  be 
symmetric  if  K(x,  y)  =  K(y,  x).  Integral  equations  whose  kernels  are 
symmetric  not  only  play,  as  has  been  shown  by  Hilbert,  a  very 
important  part  in  the  applications,  but  their  theory  may  be  used 
(cf.  the  papers  of  Hilbert  and  Schmidt  t)  as  a  foundation  for  the  theory 
of  integral  equations  whose  kernel  is  not  symmetric.  We  assume 
throughout  this  section  that  K  is  finite  in  8,  and  that  its  discon- 
tinuities are  regularly  distributed. 

*  We  assume,  of  course,  that  f(x)  is  continuous  in  7,  and  that  K  satisfies  the 
conditions  stated  at  the  beginning  of  this  section. 

t  A  series  of  papers  by  Hilbert  in  the  Gottinger  Nachrichten,  beginning  in  1904, 
entitled  "Gruiidziige  einer  allgemeinen  Theorie  der  linearen  Integralgleichungen  "  ; 
and  E.  Schmidt,  Math.  Ann.  vol.  63  (1907),  p.  433.  The  greater  part  of  this  last 
paper  originally  appeared  in  1905  as  a  Dr.  dissertation.  A  continuation,  referring 
to  non-linear  integral  equations,  appeared  in  Math.  Ann.  vol.  05  (1908),  p.  370. 


11]  SYMMETRIC    KERNELS  47 

THEOREM  1.  If  K(x,  y)  is  symmetric  and  does  not  vanish  at  nil 
point*  <>f  .s'  whrre  it  is  continuous,  then  all  of  the  iterated  functions 
K.2  (.r,  y),  K-A(x,  y),  ...  are  symmetric  and  none  of  them  is  identically 
ten. 

For  let  Kn  (x,  y)  be  the  first  of  these  functions  which  is  not 
symmetric.  Then 

Kn  (x,  y}  =  i  V.-!  (x,  *)  JCi  (£,  y)  ft  (I). 

.'a 

This,  by  the  symmetry  of  Kl  and  Kn_i,  reduces  to 


which  is  precisely  Kn  (y,  a:}.     Thus  Kn  is  symmetric. 

On  the  other  hand,  if  some  of  the  iterated  functions  vanish  identically, 
let  KH-i  (x,  y)  be  the  first  one  to  do  so.  We  see  by  (1)  that  Kn(x,  y) 
also  vanishes  identically.  One  of  the  two  integers  n  —  1  and  n  is  even. 
Calling  this  even  integer  2m,  we  have 

0  =  K,m  (x,  y)  =  \bKm  (x,  *)  Km  ((,  y}  d(. 

Ja 

Accordingly,  owing  to  the  symmetry  of  Km, 


This,  however,  is  possible  only  if  Km  vanishes  at  all  points  of  S  where 
it  is  continuous.  Since,  as  we  saw  in  §  6,  K2,  K3,  ...  are  continuous 
throughout  S,  we  are  thus  led  to  a  contradiction,  and  our  theorem  is 
proved. 

"We  now  come  to  a  fundamental  existence  theorem  first  established  by 
Schmidt,  and  which  we  shall  prove  by  a  method  due  to  Kneser*. 

THEOREM  2.  For  every  symmetric  function  K(x,  y}  which  does  not 
vanish  at  t  eery  point  irkere  it  is  continuous,  there  is  at  least  one  root. 

This  theorem  will  be  established  (cf.  Theorem  7,  §  9)  if  we  can  show 
that  the  series  (2)  of  §  9  is  not  uniformly  convergent  in  (x,  y)  for  every 
value  of  A.  Let  us  then  assume  that  it  is  uniformly  convergent 
throughout  S  for  every  value  of  A.  The  series 


is  then,  for  every  value  of  A,  uniformly  convergent  throughout  /.     It 

*  Loc.  cit.  p.  236. 


48  SYMMETRIC    KERNELS  [11 

can  therefore,  since  its  terms  are  continuous,  be  integrated  term  by 
term,  and,  if  we  let 

rb 

Un=  \  Kn (x,  x} dx  (2), 

Ja 

the  series  Z72A2  +  £73A3+  ... 

is  convergent  for  all  values  of  A.  Since  this  is  a  power-series,  it  is 
necessarily  absolutely  convergent,  and  hence  a  series  formed  from  part 
of  its  terms  is  convergent.  Thus 

Z72A2+  U4\4+  £76A6+...  (3) 

converges  for  all  values  of  A. 

If,  now,  we  remember  that  K  is  symmetric,  we  get,  by  formula 

Un+m  =  f  IK* (x,  £) Km (x,  I) d£dx, 

Ja  Ja 

and  in  particular 

Ja  Ja 

Hence  by  expanding  the  obvious  inequality 

r*>  rb 

I    I  [pKn+l  (x,  £)  +  qKn-i  (•?,  £)]2  d^dx  =  0, 

Ja  Ja 

where  p  and  q  are  arbitrary  real  parameters,  we  get 

that  is,  the  first  member  of  this  inequality  is  a  positive  definite  quadratic 
form  in  (p,  q).  Consequently 

or,  since  by  (4),  in  combination  with  Theorem  1,  the  V's  with  even 
subscripts  are  positive  (not  zero), 

2n+2  >  2n  /c\ 

TT       ~  TT  W- 


Now  in  the  series  (3),  the  ratio  of  two  successive  terms  is 

^at+2  >,2 

~TT~     ' 

i^2n 

which,  by  a  successive  application  of  (5),  we  see  is  not  less  than 


U.' 


If  then  we  let  A=  \ff72/l74,  we  see  that  the  terms  of  (2)  do  not 
decrease  as  we  go  out  in  the  series,  and  consequently  the  series  cannot 
converge.  Since  this  is  a  contradiction,  our  theorem  is  proved. 


11]  SYMMETRIC   KERNELS  49 

Our  proof  establishes  at  once  the 

COROLLARY.  Under  the  hypothesis  of  Theorem  2,  there  is  at  least 
one  root  for  K(x,  y)  which  in  absolute  value  does  not  exceed  >JU^Ut) 
where  the  U's  are  defined  by  (2). 

Let  us  now  suppose  that  for  the  symmetric  function  K(x,  y)  there 
are  two  roots,  Ax  and  A^,  and  let  us  denote  by  Ui  (x)  and  u2  (x)  principal 
solutions  for  K  which  correspond  to  \  and  A^  respectively.  Then 


Let  us  multiply  the  first  of  these  equations  by  ^u^(x),  the  second  by 
AjMj  (x)  and  subtract.  This  gives 

rb 

(\2  -  A.J)  Ul  (x)  u2  (x)  =  A!  A,    K  (x,  £)  [ttj  (£)  u»  (x)  -  w2  (£)  ^  (x)]  d£, 

Ja 

and  from  this  follows 

rb 

(A.J  -  Aj)  /  MJ  (x)  uz  (x)  dx 

Ja 

=  A,  A,  f  &  /*[JT  (ar,  0  u,  (*)  w2  (ar)  -  ^(f  ,  a?)  a.  (ar)  «,  (0]  dtdx. 

Ja  Ja 

By  interchanging  the  variables  of  integration  in  the  second  half  of  this 
double  integral,  we  see  that  the  second  member  of  this  equation  reduces 
to  zero,  and,  since  by  hypothesis  A2  4=  Aj  ,  we  get  the  result 

THEOREM  3.  If  ^(x)  and  u^(x)  are  principal  solutions  for  a 
symmetric  function  K  (x,  y)  which  correspond  to  two  distinct  roots, 
then 


.'a 


=  0. 


The  method  we  have  just  used  is  essentially  that  invented  by 
Poisson  in  the  analogous  case  of  linear  differential  equations  of  the 
second  order,  and  we  will  follow  out  the  analogy  still  further  by 
proving,  also  by  Poisson's  method,  as  Schmidt  has  done, 

THEOREM  4*.  For  the  symmetric  function  K  (x,  y),  there  can  be 
only  real  roots. 

Since  the  coefficients  of  the  power-series  D  (A)  are  real,  imaginary 

*  This  theorem  was  first  proved  by  a  wholly  different  method  by  Hilbert  as 
an  extension  of  a  well-known  theorem  concerning  symmetrical  determinants. 
Cf.  Weber,  Algebra,  2nd  Ed.  vol.  1,  p.  309. 

B.  4 


50  SYMMETRIC   KERNELS  [11 

roots  necessarily  occur  in  conjugate  imaginary  pairs.  If  possible  let 
/x.  +  vi  and  /A  -  vi  be  two  imaginary  roots.  Let  v  (x)  +  iw  (x}  be  a 
principal  solution  corresponding  to  /A  +  vi,  so  that 

rb 

v  (x)  +  iw  (x}  =  (//,  +  vi)  I  K(x,  £)  [v  (£)  +  iw  (£)]  dg. 

Ja 

If  in  this  equation  we  separate  real  and  imaginary  parts,  and  then 
recombine  the  resulting  equations,  we  see  that  v  (x}  -  iw  (x}  is  a 
principal  solution  corresponding  to  /A  -  vi.  Consequently  by  Theorem  3 


/' 

Ja 


This,  however,  is  impossible,  since  v  and  w  cannot  both  vanish 
identically,  as  otherwise  v  +  iw  would  not  be  a  principal  solution. 
Thus  our  theorem  is  proved. 

We  saw  in  §  9  that  the  roots  for  K(x,  y)  are  the  poles  of  the 
function  Jc(x,  y  \  X)  reciprocal  to  XK(x,  y).  In  the  case  we  are  now 
considering,  in  which  K(x,  y)  is  symmetric,  we  will  show  that  these 
poles  are  of  the  first  order  ;  or,  to  use  the  notation  of  §  9,  that 
for  every  root  X1}  m  =  k+\.  Suppose  this  were  not  the  case,  so  that 
m>k+l.  Then  from  formula  (2)  of  §  10  we  should  not  only  get, 
by  letting  X  =  Xx  , 

ff*  &  y}  =  AI  f  V(ar,  £)  gk  (t,  y}  <ti  (6), 

Ja 

as  in  §  10,  but  also,  by  first  differentiating  with  regard  to  X  and  then 
letting  X  =  Xj, 

rb  rb 

gk+i  (*?,  y)  =  I  K(%,  I)  0t  (£,  y)  dt  +  *il  K(x,  |)  gk+1  (£,  y)  d£. 

Ja  Ja 

This  may  be  reduced  by  (6)  to  the  form 

(x,  ()  gk+l  (t,  y)  dt          (7). 


We now multiply (7) by $g_k(x, y)$, (6) by $g_{k+1}(x, y)$, subtract (6) from (7), and integrate the resulting equation with regard to $x$ from $a$ to $b$.  This gives

$$0 = \frac{1}{\lambda_1} \int_a^b [g_k(x, y)]^2\,dx + \lambda_1 \int_a^b\!\!\int_a^b K(x,\xi)\,[g_k(x, y)\,g_{k+1}(\xi, y) - g_{k+1}(x, y)\,g_k(\xi, y)]\,d\xi\,dx.$$

The double integral is seen, precisely as in a similar case in the proof of Theorem 3 above, to have the value zero, and we thus get the formula

$$\int_a^b [g_k(x, y)]^2\,dx = 0,$$

this equality holding for all values of $y$.  This, however, is impossible, since $g_k(x, y)$ is not identically zero.  Thus we have proved

THEOREM 5*.  If $K(x, y)$ is symmetric, and $k(x, y;\lambda)$ is the reciprocal of $\lambda K(x, y)$, then $k$ has no singularities in the finite part of the $\lambda$-plane other than poles of the first order.

It should be noticed that this theorem does not by any means assert that $D(\lambda)$ has only simple roots.

The results we have obtained in this section for symmetric kernels may be extended to certain more general cases by the following device due to Goursat†:

Form the function

$$K'(x, y) = \sqrt{p(x)\,p(y)\,q(x)\,q(y)}\;\overline{K}(x, y)$$

where $\overline{K}$ is symmetric and does not vanish at every point where it is continuous, while $p$ and $q$ are continuous and positive (not zero) throughout $I$.  The function $K'$ then satisfies all the conditions imposed on $K$ in Theorems 2, 4, 5.  Now let

$$r(x) = \sqrt{\frac{p(x)}{q(x)}}$$

and form the function

$$K(x, y) = \frac{r(x)}{r(y)}\,K'(x, y) = p(x)\,q(y)\,\overline{K}(x, y).$$

Let us denote by $k'(x, y;\lambda)$ the reciprocal of $\lambda K'(x, y)$.  Then the reciprocal of $\lambda K(x, y)$ will, by the footnote to the definition of reciprocal functions in § 6, be

$$\frac{r(x)}{r(y)}\,k'(x, y;\lambda).$$

* Cf. the proof of this theorem in a somewhat more general case by Boggio, Paris C. R. Oct. 14, 1907.  The case considered by Boggio is less general than the result of Goursat to be given in a moment.  We mention in passing that the present theorem is the extension to the case of an infinite number of variables of part of the well-known algebraic theorem which says that the elementary divisors of the determinant $D_n$ of § 7, when this determinant is symmetric and $K$ is replaced by $\lambda K$, are all of the first degree.  Cf., for instance, the writer's Introduction to Higher Algebra, p. 305.

† Paris C. R. Feb. 17, 1908.


Now the roots for a function are the poles of the reciprocal of $\lambda$ times this function.  Hence the roots for $K(x, y)$ are the poles of the function last written, and since these are the same as the poles of $k'(x, y;\lambda)$, which in turn are the roots for $K'(x, y)$, we see, by Theorem 2, that there is necessarily at least one root for $K(x, y)$, and, by Theorem 4, that these roots are necessarily real.  Finally, by Theorem 5, the reciprocal of $\lambda K(x, y)$ has no finite poles of order higher than the first.  Thus we get the result, which includes Theorems 2, 4, 5 as special cases,

THEOREM 6.  If $\overline{K}(x, y)$ is symmetric and does not vanish at all points where it is continuous, and if $p(x)$ and $q(x)$ are continuous throughout $I$ and do not vanish there*, then for the function

$$K(x, y) = p(x)\,q(y)\,\overline{K}(x, y)$$

there is at least one root, all such roots are real, and the function reciprocal to $\lambda K(x, y)$ has no finite poles of order higher than the first.

12.  Orthogonal  Functions.  We  begin  by  laying  down  the 
following 

DEFINITION 1.  The functions $u_1(x)$, $u_2(x)$, ..., finite or infinite in number, are said to be orthogonal to each other in the interval $I$ if

(a) they are real and continuous in $I$;

(b) no one of them is identically zero in $I$;

(c) every pair of them, $u_i(x)$ and $u_j(x)$, satisfy the relation

$$\int_a^b u_i(x)\,u_j(x)\,dx = 0.$$

The  connection  between  the  subject  of  orthogonal  functions  and  the 
theory  of  homogeneous  integral  equations  with  a  symmetric  kernel  is 
evident  from  Theorem  3,  §  11.  On  the  other  hand  we  recall  that  in 
other  branches  of  mathematics,  in  particular  in  the  developments  of 
arbitrary  functions  which  occur  in  mathematical  physics,  systems  of 
orthogonal  functions  play  an  important  part.  Thus,  for  instance, 
in  the  subject  of  Fourier's  Series  we  have  to  deal  with  the  system 
of  functions 

$$1,\quad \cos x,\quad \cos 2x,\quad \cos 3x,\ \ldots$$
$$\sin x,\quad \sin 2x,\quad \sin 3x,\ \ldots$$

orthogonal in the interval $0 \le x \le 2\pi$.  Or again, the Legendre's Polynomials

$$P_0(x),\quad P_1(x),\quad P_2(x),\ \ldots,$$

* We are clearly justified in dropping the restriction that $p$ and $q$ be positive since a change of sign of $\overline{K}$ does not affect our result.



in terms of which arbitrary functions may be developed in the interval $-1 \le x \le +1$, are orthogonal in this interval.  On the other hand the Bessel's Functions

$$J_0(a_1 x),\quad J_0(a_2 x),\ \ldots,$$

in terms of which arbitrary functions may be developed in the interval $0 \le x \le 1$, where $a_1, a_2, \ldots$ are the roots of the transcendental equation $J_0(a) = 0$, are not orthogonal in this interval, but become so if they are all multiplied by the factor $\sqrt{x}$.
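
This may be checked numerically; in the following Python sketch the first two roots of $J_0$ and a routine quadrature are used, purely for illustration.  The integral of $x J_0(a_1 x)J_0(a_2 x)$ over the interval vanishes, while the corresponding integral without the factor $x$ is in general different from zero.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, jn_zeros

    a1, a2 = jn_zeros(0, 2)            # the first two positive roots of J0(a) = 0
    # inner product of sqrt(x)J0(a1 x) and sqrt(x)J0(a2 x) on 0 <= x <= 1
    weighted, _ = quad(lambda x: x * j0(a1 * x) * j0(a2 * x), 0.0, 1.0)
    # the same product without the factor sqrt(x)
    plain, _ = quad(lambda x: j0(a1 * x) * j0(a2 * x), 0.0, 1.0)
    print(weighted, plain)             # the first is essentially zero, the second is not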

This  subject  of  orthogonal  functions  is  therefore  one  which  has  long 
been  of  importance  for  its  own  sake*.  We  shall  first  develop  this 
subject  independently,  and  then  make  some  applications  of  the  results 
obtained  to  the  theory  of  integral  equations  t. 

THEOREM 1.  If $u_1(x), \ldots, u_n(x)$ are orthogonal in $I$, they are necessarily linearly independent there.

For, if possible, let there be a relation

$$c_1 u_1(x) + \cdots + c_n u_n(x) = 0$$

where at least one of the constants $c$, say $c_\nu$, is not zero.  Multiplying this equation by $u_\nu(x)$ and integrating from $a$ to $b$, we get

$$c_\nu \int_a^b [u_\nu(x)]^2\,dx = 0,$$

which  is  impossible. 

THEOREM 2.  If $u_1(x), \ldots, u_n(x)$ are real and continuous throughout $I$, and are linearly independent, there exist $n$ linear combinations with constant real coefficients of these functions which are orthogonal in $I$.

Let us begin by assuming that this theorem is true in the case of $n - 1$ functions, so that we can get linear combinations with real constant coefficients $v_1(x), \ldots, v_{n-1}(x)$ of the functions $u_1(x), \ldots, u_{n-1}(x)$ which are orthogonal in $I$.  It remains merely to show that the real constants $c$ may be so determined that the function

$$v_n(x) = c_1 v_1(x) + \cdots + c_{n-1} v_{n-1}(x) + u_n(x)$$

is orthogonal to $v_1(x), \ldots, v_{n-1}(x)$.  Let $k$ be any one of the integers $1, 2, \ldots, n - 1$.  If we multiply the last equation by $v_k(x)$ and integrate from $a$ to $b$, we get

$$\int_a^b v_n(x)\,v_k(x)\,dx = c_k \int_a^b [v_k(x)]^2\,dx + \int_a^b u_n(x)\,v_k(x)\,dx.$$

*  So  far  as  the  writer  knows,  the  term  orthogonal  was  first  used  in  this  sense 
by  F.  Klein  in  a  course  of  lectures  on  the  differential  equations  of  mathematical 
physics, delivered in Göttingen in the summer of 1889.

† Cf. Gram, Crelle, vol. 94 (1883), p. 41; and E. Schmidt, Math. Ann. vol. 63 (1907), p. 443.



If, then, we let

$$c_k = -\,\frac{\displaystyle\int_a^b u_n(x)\,v_k(x)\,dx}{\displaystyle\int_a^b [v_k(x)]^2\,dx},$$

the function $v_n(x)$ will be orthogonal to $v_1(x), \ldots, v_{n-1}(x)$.

In  order  that  the  proof  of  our  theorem  by  mathematical  induction 
be  complete,  it  is  merely  necessary  to  notice  that  the  theorem  is  true 
(though  trivial)  in  the  case  of  a  single  function. 
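
The construction used in this proof is easily carried out numerically.  In the following Python sketch the functions $1$, $x$, $x^2$ on $-1 \le x \le 1$ are chosen merely as an example; the combinations $v_1, v_2, \ldots$ are formed with the coefficients $c_k$ of the text and, as is to be expected, constant multiples of the Legendre polynomials appear.

    import numpy as np
    from scipy.integrate import quad

    a, b = -1.0, 1.0                                   # the interval I

    def inner(f, g):                                   # integral of f*g over I
        return quad(lambda x: f(x) * g(x), a, b)[0]

    def orthogonalize(us):
        # v_n = c_1 v_1 + ... + c_{n-1} v_{n-1} + u_n, with the c_k of the text
        vs = []
        for u in us:
            cs = [-inner(u, v) / inner(v, v) for v in vs]
            vs.append(lambda x, u=u, cs=cs, vs=list(vs):
                      u(x) + sum(c * v(x) for c, v in zip(cs, vs)))
        return vs

    v1, v2, v3 = orthogonalize([lambda x: 1.0, lambda x: x, lambda x: x * x])
    print(inner(v1, v2), inner(v1, v3), inner(v2, v3))   # all essentially zero
    print(v3(0.5))                     # -1/12, i.e. (2/3)*P2(0.5): a Legendre multiple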

We saw above that when we have an infinite set of orthogonal functions $u_1(x)$, $u_2(x)$, ... the problem of developing an arbitrary function $f(x)$ in a series of the form

$$f(x) = c_1 u_1(x) + c_2 u_2(x) + \cdots \qquad (1)$$

frequently presents itself, the series to hold in the interval $I$ in which
the  functions  are  orthogonal.  Without  attempting  to  consider  this 
question,  we  merely  give  the  familiar  formal  determination  of  the 
coefficients,  a  determination  which  is  completely  justifiable  if  we  know 
that  f(x)  can  be  developed  into  a  series  of  this  form  which  is  uniformly 
convergent,  or,  more  generally,  which  after  multiplication  by  any  con- 
tinuous function  can  be  integrated  term  by  term. 

This formal determination of the coefficients consists in multiplying (1) by $u_i(x)$ and integrating term by term from $a$ to $b$.  All the terms of the series but one then drop out on account of the orthogonality of the functions, and we get

$$c_i = \frac{\displaystyle\int_a^b f(x)\,u_i(x)\,dx}{\displaystyle\int_a^b [u_i(x)]^2\,dx} \qquad (2).$$

The  ordinary  formulae  for  the  coefficients  of  a  Fourier's  Series  or  of  a 
series  in  terms  of  Legendre's  Polynomials  or  of  Bessel's  Functions  are 
of  course  only  special  cases  of  (2). 
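
Formula (2) lends itself at once to computation.  In the following Python sketch the function $f(x) = e^x$ and Legendre's polynomials on $-1 \le x \le 1$ are chosen merely for illustration; the first few coefficients are computed by (2) and the partial sum is compared with $f$.

    import numpy as np
    from numpy.polynomial.legendre import Legendre
    from scipy.integrate import quad

    f = np.exp                                          # the function to be developed
    coeffs = []
    for i in range(6):
        P = Legendre.basis(i)                           # Legendre polynomial P_i
        num = quad(lambda x: f(x) * P(x), -1.0, 1.0)[0]
        den = quad(lambda x: P(x) ** 2, -1.0, 1.0)[0]   # equals 2/(2i + 1)
        coeffs.append(num / den)                        # formula (2)

    partial_sum = lambda x: sum(c * Legendre.basis(i)(x) for i, c in enumerate(coeffs))
    print(partial_sum(0.3), np.exp(0.3))                # the two values nearly agree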

By multiplying the functions $u_1, u_2, \ldots$ by suitable real constants we can obviously make the denominators in (2) take on any positive values we please.  If, in particular, we make them all take on the value 1, we say that the functions are normalized.

DEFINITION  2.  A  set  of  functions  orthogonal  in  I  are  said  to  be 
normalized  if  the  integral  of  the  square  of  each  one  extended  over  I  is  1  . 

A  much  more  elementary  problem  than  the  problem  of  development 
we  have  just  touched  upon  is  the  problem  of  getting  an  approximate 
representation  of  a  function  f(x)  in  the  interval  /  by  means  of  a  given 



finite set of orthogonal functions $u_1(x), \ldots, u_k(x)$.  We will suppose $f(x)$ to be finite in $I$ and to have at most a finite number of discontinuities there.  We wish to determine the constants $c_1, \ldots, c_k$ in such a way that the function

$$F(x) = c_1 u_1(x) + \cdots + c_k u_k(x)$$

gives the best approximate representation of $f(x)$ in $I$ in the sense of the method of least squares.  That is, we wish to determine the constants $c_1, \ldots, c_k$ so that

$$J = \int_a^b [f(x) - F(x)]^2\,dx$$

shall be a minimum.  We have

$$\frac{\partial J}{\partial c_i} = -2\,\frac{\partial}{\partial c_i} \int_a^b f(x)\,F(x)\,dx + \frac{\partial}{\partial c_i} \int_a^b [F(x)]^2\,dx.$$

Now since

$$\int_a^b f(x)\,F(x)\,dx = \sum_{j=1}^k c_j \int_a^b f(x)\,u_j(x)\,dx, \qquad \int_a^b [F(x)]^2\,dx = \sum_{j=1}^k c_j^2 \int_a^b [u_j(x)]^2\,dx,$$

we get

$$\frac{\partial J}{\partial c_i} = -2 \int_a^b f(x)\,u_i(x)\,dx + 2 c_i \int_a^b [u_i(x)]^2\,dx.$$

Equating this to zero, we get as the desired values of $c_1, \ldots, c_k$ precisely the quantities (2).

Since $\partial^2 J/\partial c_i^2 > 0$ and $\partial^2 J/\partial c_i\,\partial c_j = 0$ $(i \ne j)$, we see that formulae (2) really correspond to a minimum of $J$, and we have thus proved

THEOREM 3.  If $u_1(x), \ldots, u_k(x)$ are orthogonal in $I$ and $f(x)$ is finite in $I$ and has at most a finite number of discontinuities, a necessary and sufficient condition that the finite series

$$c_1 u_1(x) + \cdots + c_k u_k(x)$$

give the best representation of $f(x)$ in $I$ in the sense of the method of least squares is that the $c$'s be determined by (2).

For the sake of getting simpler formulae, let us suppose that the functions $u_i(x)$ are normalized.  Then the minimum value of the integral $J$ is seen, by making use of the values (2), to be

$$\int_a^b [f(\xi)]^2\,d\xi - \sum_{i=1}^k \left(\int_a^b f(\xi)\,u_i(\xi)\,d\xi\right)^2 \qquad (3).$$

Since the left-hand side of this equation is positive or zero, the same is true of the right-hand side, and we thus get the important inequality*

$$\sum_{i=1}^k \left(\int_a^b f(x)\,u_i(x)\,dx\right)^2 \le \int_a^b [f(\xi)]^2\,d\xi \qquad (4).$$
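
Both the identity (3) and the inequality (4) are easily verified numerically.  In the following Python sketch the first three normalized trigonometric functions on $0 \le x \le 2\pi$ and the function $f(x) = x$ are chosen merely for illustration.

    import numpy as np
    from scipy.integrate import quad

    a, b = 0.0, 2.0 * np.pi
    us = [lambda x: 1.0 / np.sqrt(2.0 * np.pi),         # normalized orthogonal functions
          lambda x: np.cos(x) / np.sqrt(np.pi),
          lambda x: np.sin(x) / np.sqrt(np.pi)]
    f = lambda x: x

    cs = [quad(lambda x: f(x) * u(x), a, b)[0] for u in us]     # coefficients (2)
    lhs = sum(c * c for c in cs)                                # left member of (4)
    rhs = quad(lambda x: f(x) ** 2, a, b)[0]                    # right member of (4)
    J = quad(lambda x: (f(x) - sum(c * u(x) for c, u in zip(cs, us))) ** 2, a, b)[0]
    print(lhs, "<=", rhs)                      # Bessel's inequality (4)
    print(J - (rhs - lhs))                     # essentially zero: Bessel's identity (3)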

As  an  application  of  this  inequality,  we  will  establish  the  result, 
already  stated  without  proof  in  §  10, 

THEOREM  4.  If  K(x,  y)  is  finite  in  S  and  its  discontinuities  are 
regularly  distributed,  then  every  real  root  for  K  (x,  y)  has  a  finite 
index. 

For suppose that to the real root $\lambda_1$ there correspond $n$ linearly independent real principal solutions for $K(x, y)$†.  From these by means of Theorem 2 we can form $n$ orthogonal principal solutions.  Let us call these functions, after they have been normalized, $u_1(x), \ldots, u_n(x)$.  We will take these for the functions $u$ of (4), and for $f(\xi)$ we will take $K(x,\xi)$, which, for any fixed value of $x$, is finite in $I$ and has only a finite number of discontinuities there.  When we remember that

$$u_i(x) = \lambda_1 \int_a^b K(x,\xi)\,u_i(\xi)\,d\xi \qquad (i = 1, 2, \ldots, n),$$

the inequality (4) reduces to

$$\frac{1}{\lambda_1^2} \sum_{i=1}^n [u_i(x)]^2 \le \int_a^b [K(x,\xi)]^2\,d\xi.$$

If we integrate this with regard to $x$ from $a$ to $b$, we get on the left, when we remember that the $u$'s are normalized, $n/\lambda_1^2$; and thus finally

$$n \le \lambda_1^2 \int_a^b\!\!\int_a^b [K(x,\xi)]^2\,d\xi\,dx \qquad (5).$$

This inequality gives us an upper limit for the number $n$ of linearly independent real principal solutions corresponding to $\lambda_1$.  It is easily seen, however, that the number of linearly independent complex principal solutions cannot be greater than the number of linearly independent real principal solutions.  Thus our theorem is proved.

* Formula (3) is called by E. Schmidt Bessel's Identity, and (4) Bessel's Inequality, because in the Astr. Nachr. vol. 6 (1828), p. 333, Bessel had in the special case in which the $u$'s are trigonometric functions considered a problem analogous to the one here treated, in which, however, no integrals but only finite sums present themselves.  The first to consider precisely the problem here treated both for trigonometric functions and for Legendre's polynomials seems to have been Plarr, Paris C. R. vol. 44 (1857), p. 985.

† We do not here exclude the possibility of still other linearly independent principal solutions.



The inequality (5) may also be regarded as giving us a lower limit for the absolute value of the real root $\lambda_1$.  By letting $n = 1$, we thus get a lower limit for the absolute value of any real root.  Hence we see that in the case of a symmetrical kernel, or in the more general case of Theorem 6, § 11, no root can be in absolute value less than

$$\frac{1}{\sqrt{\displaystyle\int_a^b\!\!\int_a^b [K(x,\xi)]^2\,d\xi\,dx}}.$$

In these cases we may therefore replace Theorem 2, § 6, by the following more far-reaching result:

THEOREM 5.  If the finite function $K$ whose discontinuities are regularly distributed satisfies the conditions of Theorem 6, § 11, and

$$|\lambda| < \frac{1}{\sqrt{\displaystyle\int_a^b\!\!\int_a^b [K(x,\xi)]^2\,d\xi\,dx}},$$

the series (5) of § 6 converges absolutely and uniformly.
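
The lower limit just stated is easily tested on a discretized kernel.  In the following Python sketch the kernel $e^{-|x-y|}$ and the quadrature are chosen merely for illustration; the characteristic numbers are approximated by the reciprocals of the matrix eigenvalues and compared with the bound.

    import numpy as np

    n = 400
    x = np.linspace(0.0, 1.0, n)
    h = 1.0 / n
    K = np.exp(-np.abs(np.subtract.outer(x, x)))        # a symmetric kernel on S
    mu = np.linalg.eigvalsh(K * h)                      # eigenvalues of the discretized operator
    roots = 1.0 / mu[np.abs(mu) > 1e-12]                # characteristic numbers lambda = 1/mu
    bound = 1.0 / np.sqrt(np.sum(K ** 2) * h * h)       # 1/sqrt(double integral of K^2)
    print(np.abs(roots).min(), ">=", bound)             # the stated lower limit holds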

In  the  special  case  in  which  K(x,  y)  is  symmetric,  the  roots  and 
the  real  principal  solutions  for  K  are  also  known  as  the  characteristic 
numbers  and  the  characteristic  functions*  of  K  respectively.  We  saw 
in§  11  that  these  characteristic  numbers  are  real,  and  that  two  charac- 
teristic functions  corresponding  to  different  characteristic  numbers  are 
orthogonal  to  each  other.  Moreover,  by  Theorem  4,  we  see  that  each 
characteristic  number  has  a  finite  index.  For  each  characteristic 
number  whose  index  n  is  greater  than  1,  we  can  pick  out  a  set  of 
n  orthogonal  characteristic  functions  on  which  every  other  character- 
istic function  corresponding  to  this  characteristic  number  will  then  be 
linearly dependent.  We thus get characteristic functions $u_1(x)$, $u_2(x)$, ..., finite or infinite in number as the case may be, orthogonal to each other in $I$, and such that every characteristic function of $K$ is linearly dependent upon a finite number of them.  Such a
system  we  speak  of  as  a  complete  orthogonal  system  of  characteristic 
functions.  We  may,  of  course,  normalize  it  if  we  choose. 

A problem of fundamental importance is: under what conditions can a function $f(x)$ be developed in a series whose terms are constant multiples of the elements of this complete orthogonal system of characteristic functions, the coefficients of the series being determined by formula (2)?  For such treatment as has, up to the
present  time,  been  given  of  this  problem  we  refer  to  the  papers 

* Eigenwerte and Eigenfunktionen.  It is only when $K$ is symmetric that the definition here given of these terms is correct.  For the case of an unsymmetric kernel $K$, cf. the definition given by E. Schmidt, loc. cit. p. 461.



already  cited  by  Hilbert  and  Schmidt*.  We  confine  ourselves  to  a 
single  important,  though  very  special,  case,  namely,  that  in  which 
the  function  f(x)  to  be  developed  is  the  symmetric  kernel  K(x,  y} 
itself,  and  even  here  we  do  not  attempt  to  treat  the  problem  completely. 
Let us suppose that the complete orthogonal system of characteristic functions $u_1(x)$, $u_2(x)$, ... has been normalized, and denote by $\lambda_1, \lambda_2, \ldots$ the characteristic numbers corresponding to them†.  Formula (2), when we let $f(x) = K(x, y)$, now reduces to

$$c_\nu = \int_a^b K(x, y)\,u_\nu(x)\,dx = \frac{u_\nu(y)}{\lambda_\nu}.$$

We are thus led, by this purely formal work, to inquire whether the series

$$\frac{u_1(x)\,u_1(y)}{\lambda_1} + \frac{u_2(x)\,u_2(y)}{\lambda_2} + \cdots \qquad (6)$$

really  converges  and  represents  the  function  K(x,  y).     We  prove  here, 
following  the  method  of  Schmidt,  the  theorem  due  to  Hilbert, 

THEOREM 6.  If the symmetric function $K(x, y)$ is finite in $S$ and its discontinuities are regularly distributed, and if $u_1(x), u_2(x), \ldots$ is a complete normalized orthogonal system of characteristic functions of $K$, and $\lambda_1, \lambda_2, \ldots$ are the corresponding characteristic numbers; then if the series (6) converges uniformly throughout $S$, it represents the function $K(x, y)$ at every point where this function is continuous.

Since the series (6) converges uniformly and its terms are continuous, the function represented by (6) is continuous throughout $S$.  It is also obviously symmetric.  Consequently the function

$$Q(x, y) = K(x, y) - \sum_i \frac{u_i(x)\,u_i(y)}{\lambda_i} \qquad (7)$$

is symmetric and finite in $S$, and its discontinuities are regularly distributed.  Let us assume that the theorem is false.  Then $Q$ does not vanish at every point where it is continuous; so that, by Theorem 2, § 11, it has at least one characteristic number $c$.  Let $\psi(x)$ be a characteristic function of $Q$ corresponding to $c$, so that

$$\psi(x) = c \int_a^b Q(x,\xi)\,\psi(\xi)\,d\xi \qquad (8).$$

*  Cf.  also  Kneser,  Math.  Ann.  vol.  63  (1907),  p.  477,  where  application  is  made 
to  the  problem  of  developing  an  arbitrary  function  according  to  the  solutions  of 
a  linear  differential  equation  of  the  second  order.  This  application  had  already 
been  made  by  Hilbert,  though  much  less  successfully. 

† It should be noticed that the λ's are not necessarily all distinct.



If we multiply (7) by $u_i(\xi)\,d\xi$ and integrate, we get, when we remember that the $u$'s are normalized,

$$\int_a^b Q(x,\xi)\,u_i(\xi)\,d\xi = \int_a^b K(x,\xi)\,u_i(\xi)\,d\xi - \frac{u_i(x)}{\lambda_i} = 0 \qquad (9).$$

From (8) and (9) we deduce

$$\int_a^b \psi(\xi)\,u_i(\xi)\,d\xi = c \int_a^b\!\!\int_a^b Q(\xi,\xi_1)\,u_i(\xi)\,\psi(\xi_1)\,d\xi_1\,d\xi = 0 \qquad (10).$$

Multiplying (7) by $\psi(\xi)\,d\xi$ and integrating, and reducing by means of (10), we find

$$\int_a^b Q(x,\xi)\,\psi(\xi)\,d\xi = \int_a^b K(x,\xi)\,\psi(\xi)\,d\xi.$$

The first member of this equation is, by (8), simply $\psi(x)/c$.  Hence

$$\psi(x) = c \int_a^b K(x,\xi)\,\psi(\xi)\,d\xi.$$

Since $\psi(x)$ is continuous and does not vanish identically in $I$, this shows that $\psi(x)$ is a characteristic function of $K$, so that it must be linearly dependent on a finite number of the $u$'s.  This, however, is impossible, by Theorem 1, since, by (10), $\psi$ is orthogonal to all the $u$'s.  Thus the theorem is proved, since the assumption that it is not true has led us to a contradiction.
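
The development (6) may also be observed numerically by discretizing a kernel.  In the following Python sketch the kernel $\min(x, y)$, the quadrature, and the number of terms retained are chosen merely for illustration; the kernel is rebuilt from the first terms of (6).

    import numpy as np

    n = 300
    x = np.linspace(0.0, 1.0, n)
    h = 1.0 / n
    K = np.minimum.outer(x, x)                     # the symmetric kernel K(x, y) = min(x, y)
    mu, v = np.linalg.eigh(K * h)                  # mu = 1/lambda; columns of v orthonormal
    u = v / np.sqrt(h)                             # normalized for the inner product h*sum

    partial = np.zeros_like(K)
    for i in range(n - 1, n - 31, -1):             # the thirty smallest characteristic numbers
        partial += mu[i] * np.outer(u[:, i], u[:, i])    # term u_i(x)u_i(y)/lambda_i of (6)
    print(np.max(np.abs(K - partial)))             # small: the partial sum nearly reproduces K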

The condition of uniform convergence of (6) is, of course, fulfilled if the series has only a finite number of terms, that is if $K$ has only a finite number of characteristic numbers.  In this case our theorem tells us that $K$ is equal, wherever it is continuous, to the finite series (6), which is continuous throughout $S$.  Any discontinuities which $K$ has must therefore be removable discontinuities, that is, such that $K$ becomes continuous throughout $S$ if its definition at its points of discontinuity be suitably changed.

Conversely, if we can find a finite sum of the form

$$f_1(x)\,\phi_1(y) + f_2(x)\,\phi_2(y) + \cdots + f_n(x)\,\phi_n(y) \qquad (11)$$

where the $f$'s and $\phi$'s are continuous in $I$, such that this sum is equal to $K$ at all points of $S$ where $K$ is continuous, it is readily proved that $K$ can have only a finite number of characteristic numbers.  For, in the first place, in computing these characteristic numbers we may obviously use (11) in our kernel in place of $K(x, y)$.  Now if we substitute (11) for $K$ in the series for $D(\lambda)$ (formula (6), § 9), it will be seen that all the determinants of order higher than $n$ in this series have the value zero.  Consequently $D(\lambda)$ is a polynomial of at most the $n$th



degree in $\lambda$, and cannot, therefore, have more than a finite number of roots.  Thus we have proved

THEOREM 7.  If the symmetric function $K(x, y)$ is finite in $S$ and its discontinuities are regularly distributed, a necessary and sufficient condition that $K$ have only a finite number of characteristic numbers is that there exist a sum of the form (11), where the $f$'s and $\phi$'s are continuous in $I$, which is equal to $K$ at all points of $S$ where $K$ is continuous.

In particular, if the symmetric function $K(x, y)$ is finite in $S$ and has discontinuities which are not all removable and which are regularly distributed, it will surely have an infinite number of characteristic numbers.
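
The first half of Theorem 7 can be seen at once in a discretized form.  In the following Python sketch a particular symmetric sum of the form (11) with $n = 2$ is chosen merely for illustration; the matrix obtained from it has exactly two eigenvalues different from zero, so only two characteristic numbers.

    import numpy as np

    m = 200
    x = np.linspace(0.0, 1.0, m)
    h = 1.0 / m
    # a symmetric sum of the form (11) with n = 2:  x*y + cos(x)*cos(y)
    K = np.outer(x, x) + np.outer(np.cos(x), np.cos(x))
    mu = np.linalg.eigvalsh(K * h)
    print(np.sum(np.abs(mu) > 1e-10))              # prints 2: only two characteristic numbers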

13.  The Integral Equation of the First Kind whose Kernel is Finite.  In this section and the next we take up briefly the subject of integral equations of the first kind.  We shall be concerned, in the main, with the equation

$$f(x) = \int_a^x K(x,\xi)\,u(\xi)\,d\xi \qquad (1),$$

which has been so extensively treated by Volterra* that Picard has proposed to call it Volterra's Equation.

Let us assume that $K$ is continuous throughout the triangle $T$, and that it has a derivative

$$K_1(x,\xi) = \frac{\partial K}{\partial x}$$

finite in $T$ and whose discontinuities are regularly distributed.  In order that (1) have a solution $u(x)$ continuous in $I$, it is evidently necessary that $f(x)$ be continuous in $I$ and that $f(a) = 0$.  Differentiating (1), we get

$$f'(x) = K(x, x)\,u(x) + \int_a^x K_1(x,\xi)\,u(\xi)\,d\xi \qquad (2).$$

From this we deduce at once, as a further necessary condition, that $f(x)$ have a derivative $f'(x)$ continuous throughout $I$.  In order that (2) be an equation of the second kind satisfying the restrictions we have imposed in §§ 5, 6, it is now sufficient to impose on $K$ the further restriction that $K(x, x)$ do not vanish at any point of $I$.  Making this assumption, we can infer at once that (1) cannot have more than one continuous solution, and, if it has any such solution, this will be the continuous solution of (2).

* See the papers cited in § 6.  Cf. also Holmgren, Atti of the Turin Academy, vol. 35 (1900), p. 570; and T. Lalesco, Journal de Mathématiques, 6th ser. vol. 4 (1908), p. 125.



We may also, if we impose the conditions just stated, work back from (2) to (1).  For equation (2) may be written

$$f'(x) = \frac{d}{dx} \int_a^x K(x,\xi)\,u(\xi)\,d\xi.$$

Accordingly the continuous solution of (2) also satisfies the equation

$$f(x) - \int_a^x K(x,\xi)\,u(\xi)\,d\xi = \text{const.},$$

and by taking the limit as $x$ approaches $a$, we see that this constant has the value zero.  Thus we have proved

THEOREM 1*.  If $K(x,\xi)$ is continuous in $T$ and has a derivative $K_1 = \partial K/\partial x$ finite in $T$ and whose discontinuities are regularly distributed, and if $K(x, x)$ does not vanish at any point of $I$, a necessary and sufficient condition that (1) have a continuous solution is that $f(x)$ and its derivative $f'(x)$ be continuous in $I$ and $f(a) = 0$.  If these conditions are fulfilled, (1) has only one continuous solution, namely the continuous solution of the equation of the second kind (2)†.
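
The reduction of (1) to the equation of the second kind (2) also furnishes a practical method of computation.  In the following Python sketch the kernel $2 + x - \xi$ and the data, which make the exact solution $u(\xi) = \xi$, are chosen merely for illustration; equation (2) is solved step by step by the trapezoidal rule and $u$ is recovered.

    import numpy as np

    # Test data (chosen for illustration): K(x, xi) = 2 + x - xi, so K(x, x) = 2 and
    # K_1 = dK/dx = 1; with the exact solution u(xi) = xi one has f(x) = x^2 + x^3/6,
    # hence f'(x) = 2x + x^2/2.  Equation (2) is solved step by step.
    n = 200
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    fprime = 2.0 * x + 0.5 * x ** 2
    Kdiag = 2.0                                    # K(x, x)

    u = np.zeros(n + 1)
    u[0] = fprime[0] / Kdiag                       # at x = a the integral in (2) vanishes
    for i in range(1, n + 1):
        w = np.full(i + 1, h)                      # trapezoidal weights on [a, x_i]
        w[0] = w[-1] = h / 2.0
        known = np.dot(w[:i], u[:i])               # K_1 = 1, so this is the known part of the integral
        u[i] = (fprime[i] - known) / (Kdiag + w[i])    # w[i] multiplies K_1(x_i, x_i) = 1
    print(np.max(np.abs(u - x)))                   # essentially zero: recovers u(x) = x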

Without  considering  exhaustively  the  case  in  which  K  (x,  x) 
vanishes  at  one  or  more  points  in  /,  we  wish  to  examine  it  sufficiently 
to  show  that  it  is  really  an  exceptional  case. 

Let us first suppose that $K(x, x)$ vanishes identically in $I$.  In this case (2) reduces to

$$f'(x) = \int_a^x K_1(x,\xi)\,u(\xi)\,d\xi \qquad (3).$$

Assuming that not only $K$ but also $K_1$ is continuous in $T$, and that $K_1(x, x)$ does not vanish at any point of $I$, we see that equation (3) comes under the case covered by Theorem 1 provided that $K_1$ has a finite derivative whose discontinuities are regularly distributed.  More-

* Cf. Lincei I.  See also Torino I, and for a slightly earlier but less complete discussion, Le Roux, Annales de l'École normale supérieure, 3rd ser. vol. 12 (1895), p. 243.

† Volterra has also indicated a second method for reducing the equation (1) to an equation of the second kind.  This consists in performing an integration by parts.  If we let

$$K_2(x, y) = \frac{\partial K(x, y)}{\partial y}, \qquad U(x) = \int_a^x u(\xi)\,d\xi,$$

we get from (1) by an integration by parts

$$f(x) = K(x, x)\,U(x) - \int_a^x K_2(x,\xi)\,U(\xi)\,d\xi \qquad (2').$$

We leave it for the reader to formulate conditions under which we can get the general continuous solution of (1) by differentiating the continuous solution of (2').



over  we  see  as  before  that  any  continuous  solution  of  (3)  is  also  a 
solution  of  (1);  and  we  get 

THEOREM 2.  If $K(x,\xi)$ and $K_1(x,\xi) = \partial K/\partial x$ are continuous in $T$, and $K_{11}(x,\xi) = \partial^2 K/\partial x^2$ is finite in $T$ and its discontinuities are regularly distributed, and $K(x, x) \equiv 0$ while $K_1(x, x)$ does not vanish at any point of $I$, then a necessary and sufficient condition that (1) have a continuous solution is that $f(x)$, $f'(x)$, $f''(x)$ be continuous in $I$ and that $f(a) = f'(a) = 0$.  If these conditions are fulfilled, the equation (1) has only one continuous solution, which may be found by solving (3).

We leave it for the reader to push the results of this theorem farther by replacing the condition that $K_1(x, x)$ shall not vanish at any point of $I$ by the condition $K_1(x, x) \equiv 0$.

The case in which $K(x, x)$ vanishes at a finite number of points in $I$ has been discussed at length by Volterra*.  We content ourselves with treating a simple example in which the results are fairly typical of the general case.

Consider the equation

$$0 = \int_0^x (\alpha\xi + \beta x)\,u(\xi)\,d\xi \qquad (\alpha + \beta \ne 0) \qquad (4).$$

Here the kernel $\alpha\xi + \beta x$ reduces when $\xi = x$ to $(\alpha + \beta)x$, and thus vanishes when and only when $x = 0$.  By differentiating (4) twice we see that any continuous solution which it may have has, except perhaps when $x = 0$, a continuous derivative $u'(x)$; and we obtain for $u(x)$ in this way the differential equation

$$(\alpha + \beta)\,x\,u'(x) + (\alpha + 2\beta)\,u(x) = 0 \qquad (5).$$

The general solution of this equation is

$$u(x) = c\,x^{-\frac{\alpha + 2\beta}{\alpha + \beta}} \qquad (6).$$

If $c \ne 0$, this is continuous when and only when $\beta/(\alpha + \beta) \le -1$; that is, when

$$\frac{\alpha + 2\beta}{\alpha + \beta} \le 0,$$

or, more simply, when

$$-1 > \frac{\alpha}{\beta} \ge -2 \qquad (7).$$

If  this  condition  is  satisfied,  the  function  (6)  is  readily  seen  to  be 
a  solution  of  (4),  and,  since  (6)  contains  an  arbitrary  constant,  we  have 
proved 

* Torino III, IV.  See also the papers by Holmgren and Lalesco cited at the beginning of this section.



THEOREM 3.  Equation (4) has one and only one continuous solution (namely $u = 0$) except when condition (7) is fulfilled, in which case it has an infinite number of such solutions.
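
Theorem 3 is readily checked numerically.  In the following Python sketch the values $\alpha = -3$, $\beta = 2$, which satisfy (7), are chosen merely for illustration, and the corresponding function (6) is verified to satisfy (4).

    import numpy as np
    from scipy.integrate import quad

    alpha, beta, c = -3.0, 2.0, 5.0                # alpha/beta = -1.5 satisfies (7)
    p = -(alpha + 2.0 * beta) / (alpha + beta)     # the exponent in (6); here p = 1
    u = lambda xi: c * xi ** p
    for X in (0.3, 0.7, 1.0):
        val, _ = quad(lambda xi: (alpha * xi + beta * X) * u(xi), 0.0, X)
        print(val)                                 # all essentially zero: (6) satisfies (4)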

We leave it for the reader to extend this result to the more general equation

$$f(x) = \int_0^x (\alpha\xi + \beta x)\,u(\xi)\,d\xi \qquad (8),$$

where $f(x)$, $f'(x)$, $f''(x)$ are continuous in $I$, and $f(0) = f'(0) = 0$.

We may add that the method we have just used enables us to find not merely the continuous solutions of (4) or (8) but also, in some cases in which (7) is not fulfilled, discontinuous but integrable solutions.  Thus, when $\alpha/\beta < -2$, the function (6) is integrable and is a solution of (4).

We turn now to the more general integral equation of the first kind

$$f(x) = \int_a^b K(x,\xi)\,u(\xi)\,d\xi \qquad (9).$$

Volterra's  equation  is  the  special  case  of  this  in  which  K(x,  y)  vanishes 
when  y  >  x.  If  we  look  more  closely,  we  see  that  in  the  case  covered 
by  Theorem  1  the  kernel  K  (x,  y)  has  a  finite  jump  at  every  point  of 
the  line  x  =  y,  passing  from  the  value  K  (x,  x)  to  the  value  0  as  we 
cross  this  line.  The  cases  which  presented  more  difficulty  were  those 
in  which  there  is  either  no  discontinuity  along  this  line,  or  those  in 
•which  there  are  a  finite  number  of  points  where  there  is  no  dis- 
continuity. 

This suggests that we consider in place of the line $y = x$ the curve $y = \phi(x)$ (where we will assume $\phi(x)$ to be continuous and to have a continuous derivative $\phi'(x)$ in $I$), which we will suppose crosses the square $S$ from left to right dividing it into an upper and a lower portion; i.e. we assume that when

$$a \le x \le b, \qquad a \le \phi(x) \le b.$$

Along this curve we suppose the function $K(x, y)$ to have a finite jump; that is, if $(x_0, y_0)$ is any point of this curve at which

$$y_0 \ne a, \qquad y_0 \ne b,$$

we assume that the two limits

$$K(x_0, y_0 - 0) \quad\text{and}\quad K(x_0, y_0 + 0)$$

exist.  The difference of these limits is what we call the magnitude of the jump at $(x_0, y_0)$, and we write

$$J(x_0) = K(x_0, y_0 - 0) - K(x_0, y_0 + 0).$$



We will suppose this function $J(x)$ to be continuous in $I$; and also that $K$ is finite in $S$ and continuous except along the curve $y = \phi(x)$, and that it has a derivative

$$K_1(x, y) = \frac{\partial K}{\partial x}$$

finite in $S$ and whose discontinuities are regularly distributed.

Let us now write (9) in the form

$$f(x) = \int_a^{\phi(x)} K(x,\xi)\,u(\xi)\,d\xi + \int_{\phi(x)}^b K(x,\xi)\,u(\xi)\,d\xi.$$

By differentiating we get

$$f'(x) = J(x)\,\phi'(x)\,u(\phi(x)) + \int_a^b K_1(x,\xi)\,u(\xi)\,d\xi \qquad (10).$$

In order that this be an integral equation of the second kind of the type we have considered, it is sufficient to make the further assumption that neither $J$ nor $\phi'$ vanishes in $I$.  We then have an integral equation whose kernel is

$$\frac{K_1(x, y)}{J(x)\,\phi'(x)}.$$
If the determinant (or modified determinant) of this function is not zero, equation (10) has a continuous solution provided that $f'(x)$ be continuous.  We cannot, however, infer from this that (9) also has a continuous solution.  The continuous solution of (10) satisfies, as we readily see, an equation of the form

$$f(x) + \lambda = \int_a^b K(x,\xi)\,u(\xi)\,d\xi \qquad (11),$$

and satisfies it, of course, only for one value of $\lambda$.  Thus we get the result:

THEOREM 4.  If the following conditions of continuity are satisfied

(1) $K(x, y)$ is finite in $S$, and, except on the curve $y = \phi(x)$, continuous there;

(2) $\phi(x)$ satisfies the inequality

$$a \le \phi(x) \le b,$$

is continuous and has a continuous derivative in $I$, and $\phi'(x)$ does not vanish in $I$;

(3) At every point of $I$, except where

$$\phi(x) = a \quad\text{or}\quad \phi(x) = b,$$

the two limits

$$K(x, \phi(x) - 0) \quad\text{and}\quad K(x, \phi(x) + 0)$$


exist, and their difference

$$J(x) = K(x, \phi(x) - 0) - K(x, \phi(x) + 0)$$

is continuous and does not vanish in $I$;

(4) $K(x, y)$ has a derivative

$$K_1(x, y) = \frac{\partial K}{\partial x}$$

finite in $S$ and whose discontinuities are regularly distributed;

(5) $f(x)$ is continuous and has a continuous derivative in $I$;

then if the determinant, or modified determinant, of the function

$$\frac{K_1(x, y)}{J(x)\,\phi'(x)}$$

is not zero, there will be one and only one value of the parameter $\lambda$ for which the equation (11) has a continuous solution, and this solution will be the continuous solution of (10).

The  reader  will  find  no  difficulty  in  extending  this  theorem  so  as  to 
cover  cases  in  which  J(x)  =  0. 

14.  Equations of the First Kind whose Kernel or whose Interval is not Finite.  We pass now with Volterra* to the case in which the kernel $K$ of the equation

$$f(x) = \int_a^x K(x,\xi)\,u(\xi)\,d\xi \qquad (1),$$

has the form

$$K(x,\xi) = \frac{G(x,\xi)}{(x-\xi)^\lambda} \qquad (0 < \lambda < 1),$$

where $G$ is continuous in $T$.  Precisely as in the case of Abel's equation, of which (1) is an immediate generalization, we see that (1) cannot have a continuous solution unless $f(x)$ is continuous throughout $I$ and $f(a) = 0$.

Assuming that (1) has a continuous solution, we may obtain it as follows:  Multiply (1) by $(z-x)^{\lambda-1}\,dx$, where

$$a \le z \le b,$$

and integrate the resulting equation, getting

$$\int_a^z \frac{f(x)}{(z-x)^{1-\lambda}}\,dx = \int_a^z \frac{1}{(z-x)^{1-\lambda}} \int_a^x \frac{G(x,\xi)}{(x-\xi)^\lambda}\,u(\xi)\,d\xi\,dx \qquad (2).$$

The second member of this formula reduces by Dirichlet's Extended Formula (cf. § 1) to

$$\int_a^z \left[\int_\xi^z \frac{G(x,\xi)\,dx}{(z-x)^{1-\lambda}\,(x-\xi)^\lambda}\right] u(\xi)\,d\xi.$$
* Cf. Torino II.



Accordingly if we write

$$L(z,\xi) = \int_\xi^z \frac{G(x,\xi)\,dx}{(z-x)^{1-\lambda}\,(x-\xi)^\lambda} \qquad (3),$$

equation (2) takes the form

$$F(z) \equiv \int_a^z \frac{f(x)}{(z-x)^{1-\lambda}}\,dx = \int_a^z L(z,\xi)\,u(\xi)\,d\xi \qquad (4).$$


We will now show that the kernel $L$ of this equation is continuous in $T$, except on the line $z = \xi$ where it is not yet defined.  For this purpose introduce $y$ as variable of integration in place of $x$ by means of the formula

$$x = \xi + y\,(z - \xi).$$

We thus get, when $z > \xi$,

$$L(z,\xi) = \int_0^1 \frac{G(\xi + y(z-\xi),\,\xi)}{(1-y)^{1-\lambda}\,y^\lambda}\,dy \qquad (5).$$

Since this integral remains convergent when we replace $G$ by the upper limit of its absolute value, it follows that it is uniformly convergent in $T$, and, since $G$ is continuous, $L$ is also continuous wherever it is defined.

In order next to see whether $L$ approaches a limit as the point $(z,\xi)$ approaches a point $(c, c)$ on the hypotenuse of $T$, we apply the law of the mean for integrals to the original definition of $L$, getting

$$L(z,\xi) = G(x',\xi) \int_\xi^z \frac{dx}{(z-x)^{1-\lambda}\,(x-\xi)^\lambda} = \frac{\pi}{\sin\lambda\pi}\,G(x',\xi),$$

where $x'$ is a suitably chosen point between $\xi$ and $z$.  Consequently

$$\lim_{(z,\xi)\to(c,c)} L(z,\xi) = \frac{\pi}{\sin\lambda\pi}\,G(c, c).$$

Taking this limiting value as the definition of $L(c, c)$, we see that $L$ is continuous throughout $T$, and that if we demand that $G(x, x)$ shall not vanish at any point of $I$, it follows that $L(x, x)$ does not vanish at any point of $I$.

In order then that the equation (4) come under the case covered by Theorem 1, § 13, it remains merely to impose on $G$ restrictions which will make $L(z,\xi)$ have a partial derivative with regard to $z$ finite in $S$ and whose discontinuities are regularly distributed.  In order to avoid all complications, we will assume that $G(z,\xi)$ has a partial derivative

$$G_1(z,\xi) = \frac{\partial G}{\partial z}$$

which  is  continuous  in  T.     If,  then,  we  differentiate  (5)  with  regard  to 
z  under  the  integral  sign,  we  get 

$$\int_0^1 \left(\frac{y}{1-y}\right)^{1-\lambda} G_1(\xi + y(z-\xi),\,\xi)\,dy.$$

Since this integral is obviously uniformly convergent, we see, by the ordinary test for differentiating an infinite integral under the sign of integration, that it represents

$$L_1(z,\xi) = \frac{\partial L}{\partial z},$$

except perhaps on the hypotenuse $z = \xi$ of $T$ where formula (5) was not valid.  From the expression just obtained for $L_1$ we see that $L_1$ is finite in $T$, and from the uniform convergence of this integral, that $L_1$ is continuous in $T$ except perhaps on the hypotenuse $z = \xi$.

It is now possible to apply Theorem 1 of § 13 to the equation (4).  In order that this equation have a continuous solution it is therefore necessary and sufficient that $F(z)$ be continuous and have a continuous derivative in $I$, and that $F(a) = 0$.  The first and last of these conditions will be fulfilled if $f(x)$ is continuous in $I$, as we see by a reference to Theorem 2, § 1.  We have thus obtained as an additional necessary condition for (1) to have a continuous solution that $F(z)$ have a continuous derivative.

It remains to show that, if all the conditions we have mentioned are fulfilled, the continuous solution of (4) satisfies (1).  For this purpose differentiate (4) with regard to $z$ and then, after multiplying by $(x-z)^{-\lambda}\,dz$, where

$$a \le x \le b,$$

integrate from $a$ to $x$, getting

$$\int_a^x \frac{1}{(x-z)^\lambda}\,\frac{d}{dz}\int_a^z \frac{f(t)}{(z-t)^{1-\lambda}}\,dt\,dz = \int_a^x \frac{1}{(x-z)^\lambda}\,\frac{d}{dz}\int_a^z L(z,\xi)\,u(\xi)\,d\xi\,dz \qquad (7).$$

A reference to Theorem 3, § 1, shows that the first member of this equation is equal to

$$\frac{d}{dx}\int_a^x \frac{1}{(x-z)^\lambda}\int_a^z \frac{f(t)}{(z-t)^{1-\lambda}}\,dt\,dz.$$

By an application of Dirichlet's Extended Formula, this reduces to

$$\frac{\pi}{\sin\lambda\pi}\,f(x).$$

The second member of (7) may be written, as we see again by a reference to Theorem 3, § 1,

$$\frac{d}{dx}\int_a^x \frac{1}{(x-z)^\lambda}\int_a^z L(z,\xi)\,u(\xi)\,d\xi\,dz.$$

By a three-fold application of Dirichlet's Extended Formula this becomes

$$\frac{\pi}{\sin\lambda\pi}\,\frac{d}{dx}\int_a^x\!\left[\int_a^s \frac{G(s,\xi)}{(s-\xi)^\lambda}\,u(\xi)\,d\xi\right] ds = \frac{\pi}{\sin\lambda\pi}\int_a^x \frac{G(x,\xi)}{(x-\xi)^\lambda}\,u(\xi)\,d\xi.$$
Substituting  the  values  we  have  just  found  on  the  two  sides  of  (7), 
we  see  that  this  equation  reduces  to  (1).     We  have  thus  proved  the 

THEOREM.  If $G(x,\xi)$ and its derivative

$$G_1(x,\xi) = \frac{\partial G}{\partial x}$$

are continuous in $T$, and $G(x, x)$ does not vanish in $I$, a necessary and sufficient condition that the equation

$$f(x) = \int_a^x \frac{G(x,\xi)}{(x-\xi)^\lambda}\,u(\xi)\,d\xi \qquad (0 < \lambda < 1)$$

have a continuous solution is that $f(x)$ be continuous in $I$, that

$$f(a) = 0,$$

and that the function

$$F(z) = \int_a^z \frac{f(x)}{(z-x)^{1-\lambda}}\,dx$$

have a continuous derivative throughout $I$.  If these conditions are fulfilled, the equation has only one continuous solution, namely the continuous solution of (4) where $L$ is defined by (3).

By  a  reference  to  Theorem  3,  §  1,  it  will  be  seen  that  a  sufficient 
though  not  a  necessary  condition  that  F  have  a  continuous  derivative 
is  that  f  be  continuous  and  have  a  finite  derivative  with  only  a  finite 
number  of  discontinuities. 

If $G(x,\xi) = 1$, equation (1) reduces to equation (1) of § 3.  In this case

$$L = \frac{\pi}{\sin\lambda\pi},$$

so that (4) becomes

$$\int_a^z \frac{f(x)}{(z-x)^{1-\lambda}}\,dx = \frac{\pi}{\sin\lambda\pi}\int_a^z u(\xi)\,d\xi,$$

from which, by differentiation, we get, under the assumptions there made, precisely the solutions (4) and (5) of § 3.
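
The Abel case just mentioned is easily verified numerically.  In the following Python sketch $\lambda = 1/2$ and $u(\xi) = 1$, hence $f(x) = 2\sqrt{x}$, are chosen merely for illustration; the solution is recovered from $f$ by the inversion just described, i.e. by differentiating $(1/\pi)\int_0^x f(z)(x-z)^{-1/2}\,dz$.

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: 2.0 * np.sqrt(x)                 # f corresponding to u(xi) = 1 when lambda = 1/2

    def F(x):
        # integral_0^x f(z) / sqrt(x - z) dz, after the substitution z = x - s**2
        return quad(lambda s: 2.0 * f(x - s * s), 0.0, np.sqrt(x))[0]

    x0, eps = 0.5, 1e-4
    u_rec = (F(x0 + eps) - F(x0 - eps)) / (2.0 * eps * np.pi)   # (1/pi) dF/dx at x0
    print(u_rec)                                   # close to 1, the solution u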

The  theorem  just  proved  may  be  readily  modified  so  as  to  apply  to 
cases  in  which 

We  leave  it  for  the  reader  to  carry  through  the  discussion  here. 

A case which may be made to depend on the theorem of this section is that in which in equation (1), § 13, the function $K(x, x)$ vanishes identically while the derivative $K_1(x, y)$ has the form

$$K_1(x, y) = \frac{G(x, y)}{(x-y)^\lambda} \qquad (0 < \lambda < 1),$$

where $G$ is continuous in $T$.  Here too we leave the discussion and the precise formulation of the result to the reader.  Theorem 4, § 3, is a special case of this.

An equation of some importance, which bears a certain resemblance to the equation of this section, is

$$f(x) = \int_0^1 \{\cot\pi(\xi - x) + G(x,\xi)\}\,u(\xi)\,d\xi,$$

treated by Hilbert and Kellogg.  Here $G(x,\xi)$ is supposed to be finite in $S$, and, in order that the integral should have a meaning, it is in general necessary that we interpret it to mean the principal value according to Cauchy.  For a treatment of this equation we refer to pages 17-27 of Kellogg's dissertation: "Zur Theorie der Integralgleichungen und des Dirichletschen Princips."  Göttingen, 1902.

We close by a brief treatment of what was perhaps the first integral equation to be successfully treated.

One form of Fourier's Integral Theorem is contained in the formula

$$f(x) = \frac{2}{\pi}\int_0^\infty\!\!\int_0^\infty \cos(xt)\,\cos(t\xi)\,f(\xi)\,d\xi\,dt \qquad (8).$$

This formula is valid when $0 \le x$ provided the function $f(x)$ satisfies certain conditions; for instance, it is sufficient to demand that, when $0 \le x$, $f(x)$ be continuous and do not have an infinite number of maxima and minima in the neighbourhood of any point, and that

$$\int_0^\infty |f(x)|\,dx$$


converge.  Formula (8) shows us that, under the conditions just stated, the integral equation of the first kind

$$f(x) = \sqrt{\frac{2}{\pi}}\int_0^\infty \cos(x\xi)\,u(\xi)\,d\xi \qquad (9)$$

has as a continuous solution

$$u(x) = \sqrt{\frac{2}{\pi}}\int_0^\infty \cos(x\xi)\,f(\xi)\,d\xi \qquad (10).$$

Conversely, we see in the same way that (10) is the only continuous solution of (9) which can be expressed by means of Fourier's Integral Theorem.

The equation (9) has as its kernel the continuous symmetric function

$$\sqrt{\frac{2}{\pi}}\,\cos(x\xi) \qquad (11),$$

and the only peculiarity is that the interval $I$ is now infinite.  This change from a finite to an infinite interval makes a very essential difference in the whole theory of integral equations*, as we see by considering the characteristic numbers of the kernel (11).  If the interval were finite, there would, as we know, be an infinite number of these numbers, each with a finite index†.  Here, however, (at least if we restrict the conception of characteristic functions to functions for which Fourier's Integral Theorem (8) holds) there are only two characteristic numbers, namely $\pm 1$, each with an infinite index.  To prove this, let us consider the homogeneous equation of the second kind,

$$u(x) = \lambda\sqrt{\frac{2}{\pi}}\int_0^\infty \cos(x\xi)\,u(\xi)\,d\xi \qquad (12).$$

If $\lambda$ is a characteristic number and $u(x)$ a corresponding characteristic function, not only will this equation (12) be fulfilled, but also, as we see by taking in (9) and (10) for $f(x)$ the function $u(x)/\lambda$,

$$u(x) = \frac{1}{\lambda}\sqrt{\frac{2}{\pi}}\int_0^\infty \cos(x\xi)\,u(\xi)\,d\xi \qquad (13).$$
* By introducing new variables, equation (9) with an infinite interval and a finite kernel may be reduced to an equation with a finite interval and an infinite kernel.

† We are here assuming the easily established fact that $\cos(x\xi)$ cannot be expressed as the sum of a finite number of terms each of which is the product of a function of $x$ by a function of $\xi$.



By combining (12) and (13), we see that

$$u(x) = \lambda^2 u(x),$$

so that, since $u(x)$ does not vanish identically, it follows that $\lambda = \pm 1$ are the only possible characteristic numbers.

To prove that these are really characteristic numbers and that they have an infinite index, we need merely notice that, when $c$ has any positive value, the functions

$$e^{-cx} + \sqrt{\frac{2}{\pi}}\,\frac{c}{c^2 + x^2} \qquad\text{and}\qquad e^{-cx} - \sqrt{\frac{2}{\pi}}\,\frac{c}{c^2 + x^2}$$

are solutions of (12) for $\lambda = +1$ and $\lambda = -1$ respectively.
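
That such functions really reproduce themselves under the kernel (11) is easily checked numerically.  In the following Python sketch the value $c = 1$ is chosen merely for illustration; the transform of the first of these functions is computed and compared with the function itself.

    import numpy as np
    from scipy.integrate import quad

    c = 1.0
    phi = lambda t: np.exp(-c * t) + np.sqrt(2.0 / np.pi) * c / (c * c + t * t)

    def transform(x):
        # sqrt(2/pi) * integral_0^infinity phi(xi) cos(x xi) dxi
        val, _ = quad(phi, 0.0, np.inf, weight='cos', wvar=x)
        return np.sqrt(2.0 / np.pi) * val

    for x in (0.2, 1.0, 3.0):
        print(transform(x), phi(x))                # the two columns agree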

The kernel (11) is only one of an important class of kernels having similar properties.  For a theory of such kernels we refer to the dissertation by H. Weyl: "Singuläre Integralgleichungen mit besonderer Berücksichtigung des Fourierschen Integraltheorems," Göttingen, 1908.


INDEX 


The  numbers  refer  to  pages. 


,,  linear,  24 


Bessel's  functions,  53,  54 

identity  and  inequality,  56 
Boggio,  51 

Characteristic  functions,  57 

numbers,  57 
Complete  orthogonal  system,  57 

Determinant,  26,  32 

modified,  36 

Differential  equations,  1,  12,  46,  58 
Dirichlet's  formula,  4 
Discontinuous  solutions,  10,  17,  63 
Du Bois-Reymond, 1

Eigenfunktionen,  57 
Eigenwerte,  57 

Fischer,  29 

Fourier's  integral,  1,  69-71 
Fourier's  series,  52,  54 
Fredholm,  2,  24-29,  31,  43,  45 

Goursat,  10,  21,  51 
Gram,  53 

Hadamard,  29-31 

Hilbert,  2,  13,  25,  31,  36,  46,  49,  58,  69 

Holmgren,  60,  62 

Hurwitz,  4 

I, 2

Index  of  a  root  for  K,  45 

Infinite  kernels,  7-11,  18,  31,  36,  65 

solutions,  10,  17,  63 
Integral  equations,  homogeneous,  14,  43 

non-linear,  46 

of  first  kind,  1,  13,  60-71 

of second kind, 1, 13 ff.

of  third  kind,  13 

singular,  71 

with  parameter,  38,  64 
Inversion  of  definite  integrals,  1 
Iterated  functions,  19 

Kellogg,  69 


Kernel,  13.     (See  also  infinite.) 
Klein,  53 
Kneser,  43,  47,  58 

Lalesco, 60, 62

La  Vallee  Poussin,  4 

Legendre's  polynomials,  52,  54,  56 

Le Roux, 61

Linear  algebraic  equations,  24 

Liouville,  2,  8,  11,  14,  37,  38 

Mason,  14 

Minors,  27,  45 

Modified  determinant  and  adjoint,  36 

Multiple  integrals,  1 

Multiplicity  of  a  root  for  K,  45 

Neumann,  14 

Non-linear  integral  equations,  46 
Normalized  functions,  54 
Nullösung, 44

Orthogonal  functions,  52 
Osgood,  27 

Parameter  in  integral  equation,  38,  64 

Picard,  60 

Plarr,  56 

Poisson,  49 

Principal  solution  for  K,  44 

Reciprocal functions, 21
Regularly distributed, 3
Root for K, 41
Rouché, 2

S, 2

Schmidt,  31,  46,  47,  49,  53,  56,  57,  58 

Solutions,  discontinuous,  10,  17,  63 

principal,  44 

Successive  substitutions,  14 
Symmetric  kernel,  46 
Systems  of  integral  equations,  1 

T,  2 

Volterra,  2,  19,  24,  37,  60-63,  65-69 
Volterra's  equation,  60 

Weyl,  71 
Wirtinger,  29 




