
Mathematical Biophysics Monograph Series, No. 1

MATHEMATICAL BIOPHYSICS OF THE CENTRAL NERVOUS SYSTEM

By ALSTON S. HOUSEHOLDER

AND

HERBERT D. LANDAHL


The Principia Press, Inc., Bloomington, Indiana

COPYRIGHT 1945 BY THE PRINCIPIA PRESS

COMPOSED AND PRINTED BY THE DENTAN PRINTING COMPANY COLORADO SPRINGS, COLORADO

PREFACE

Since the proposal of the two-factor dynamical model of neural activity by Professor Rashevsky in his book, Mathematical Biophysics, a great deal of work has been done in the formal development of the theory as well as in the applications to specific psychological problems. It is only natural that these developments by various authors over a period of time will be lacking somewhat in coherence and continuity.

With this in view, it seems appropriate to pause at the present time to review the work which has been done so far, to explain systematically the techniques, to summarize and describe such structures as have been devised and the applications made, to suggest promising directions for future development, to indicate types of experimental data needed for adequate checks, and also to present new material not published elsewhere. It is the hope that this perspective of past achievements and, still more, of future prospects, will be of benefit to those theorists and experimenters alike who are interested in contributing to the understanding of some of the mechanisms which underlie psychological processes.

While perhaps the names which most commonly run throughout the monograph are those of the authors themselves, this is largely only a reflection of their dominating interests in its preparation; and neither the monograph itself nor the papers of the authors and of many others herein referred to would ever have seen the light of day had not the general procedures and fundamental postulates been previously developed by Professor Rashevsky. For this, and for other reasons too abundant to enumerate, the authors owe to him their foremost debt of gratitude.

Their thanks are due also to Dr. Warren S. McCulloch, Dr. Gerhardt von Bonin, and Dr. Ralph E. Williamson for many helpful suggestions made during the preparation of the manuscript; to Mr. Clarence Pontius for preparing all the original drawings; to Mrs. Gordon Ferguson and Miss Helen De Young for typing of the manuscript; and to Miss Gloria Robinson for final preparation of the manuscript, proofreading, and preparation of the index. For help with the latter thanks are also due to Mr. Richard Runge.

The authors also wish to thank the Editor of The Bulletin of Mathematical Biophysics for permission to reproduce Figures 1 and 2 of chapter vi; Figure 2 of chapter vii; Figures 3, 4, 9, and 10 of chapter ix; Figures 2, 3, and 4 of chapter xi; Figure 1 of chapter xiii; and Figure 1 of chapter xiv. To the Editors of Psychometrika, their thanks are due for permission to reproduce Figures 1, 6, 7, and 8 of chapter ix, and to The University of Chicago Press for permission to


reproduce Figures 2 and 5 of chapter ix, from Rashevsky's Advances and Applications of Mathematical Biology, and Figure 1 of chapter xi from his Mathematical Biophysics.

Finally, the authors wish to express their gratitude to the Principia Press and to the Dentan Printing Company for their unfailing efforts involved in publishing the book.

Alston S. Householder
Herbert D. Landahl

Chicago, Illinois

October, 1944


TABLE OF CONTENTS

Introduction

PART ONE

CHAPTER

I. Trans-synaptic Dynamics

II. Chains of Neurons in Steady-State Activity

III. Parallel, Interconnected Neurons

IV. The Dynamics of Simple Circuits

V. The General Neural Net

PART TWO

VI. Single Synapse: Two Neurons

VII. Single Synapse: Several Neurons

VIII. Fluctuations of the Threshold

IX. Psychological Discrimination

X. Multidimensional Psychophysical Analysis

XI. Conditioning

XII. A Theory of Color-Vision

XIII. Some Aspects of Stereopsis

PART THREE

XIV. The Boolean Algebra of Neural Nets

XV. A Statistical Interpretation

Conclusion

Literature

Index


INTRODUCTION

This monograph is directed toward the explanation of behavior by means of testable hypotheses concerning the neural structures which mediate this behavior. We use the word behavior, for lack of a better term, in a very broad sense to cover any form of response to the environment, internal or external, whether it is acting or only perceiving, and whether the response occurs immediately or after long delay, providing only that the response is governed by nervous activity initiated by occurrences in the environment. We are seeking to develop a theory of the nervous system as the determiner of behavior.

Data of anatomy and physiology are altogether inadequate — unless in the case of a simple spinal reflex — for tracing in detail the progress of a nervous impulse from its inception at a receptor, through its ramified course in the nervous system, to its termination at the effector. We know pretty well where the fibers lead from the retina; we know even in some detail where the different retinal areas are mapped on the cortex; we know a good deal about the interaction of one region of the cortex upon another; and we know many details about the functioning of neural units. But how the neural units are combined in the visual area to enable the organism to locate an object seen and to act accordingly, how the nervous discharges from the two retinas are shunted this way and that to combine and emerge at the appropriate effectors, is not explained by existing experimental and observational technique. For solving such problems it is necessary to create testable hypotheses, to be revised, replaced, or expanded according to the outcome of the tests.

The task of developing this theory is three-fold. First, an idealized model of the elementary units must be constructed in terms of postulates governing their individual behavior and their interactions. The model must be simple enough to permit conceptual manipulation. The units we have designated neurons. Our neurons are defined by the hypotheses we impose upon them, and it may turn out that not the single neuron of the physiologist and anatomist, but some recurring complex of these, is most properly to be regarded as its prototype. The junction of neurons we refer to as a synapse, and where the impulses from two or more neurons are able to summate in producing a response in one efferent neuron, or in each of several, we have also, briefly, referred to the set of these junctions as constituting a single synapse. Such usage, in harmony with that of Rashevsky



(1938, 1942), seems to simplify the terminology, since for us the junction is primarily of dynamical, not anatomical, significance. We devote chapter i to the development of the postulates of our system, and to the elaboration of a few elementary consequences.

The second stage in the development of the theory consists in the investigation of the properties of complexes of specified structure, these properties being deduced from the postulated properties of the units and from their interrelations in the structure. The bulk of Part I deals in a purely abstract manner with a number of different structures which are in some sense typical of those required by the applications, and considers the general problem of the reciprocal determinations of the structural form and the dynamics.

The final stage is the comparison of prediction with fact. A neural complex of particular structure is assumed to link the stimulus to the response in a given class of cases. From this assumption, a certain quantitative functional relation between stimulus and response can be deduced. To the extent to which experience verifies the prediction, we have confidence in our initial assumption and are justified in extending the range of our predictions. In general the functional relations involve variables and parameters, each capable of assuming values over a certain range, so that any such relation yields predictions well beyond the actual range of verification.

Whether or not verification occurs over some range, there must somewhere occur a failure. This is because both our units and our structures are of necessity over-simplified. But the failure is itself instructive, for the trend of the deviations can yield insight into the nature of the complications required for increasing the realism and extending the range of applicability of our model. This is the theme of Part II, in which deductions made on the basis of special structures are compared with data, where data are available. Unfortunately, even when data of a kind are available, these are not always well adapted to our special purpose. The test of a specific theory generally requires the imposition of specific conditions upon the conduct of the experiment, and when the theory is not available to the experimenter it is largely chance if these conditions are satisfied. Hence some comparisons can be made only in the light of special assumptions, and too often no quantitative comparison at all is possible. It is our hope in publishing this monograph that more experiments will be planned to make these tests.

In Part III we present the basis for an alternative development, as laid down quite recently by McCulloch and Pitts (1943). The neuronal dynamics as postulated by these authors is much more realistic, but the deductions from them of laws of learning and conditioning,



of response-times, and of discrimination, remain largely a program for the future. Their laws are temporally microscopic, as opposed to those of Part I, which are temporally macroscopic. It is therefore to be hoped that the macroscopic laws can be deduced from the microscopic ones as approximations valid at least for certain commonly occurring neural complexes, and some steps in this direction are outlined in the concluding chapter.

In general we have sought, within the available space, to summarize and systematize the most important methods and results to date. We have passed lightly over most of the results already reported in Rashevsky's "Advances and Applications of Mathematical Biology," and we have omitted all reference to Rashevsky's aesthetic theory. On the other hand, many pages of Part I have been devoted to formal discussions making no immediate contact with experience. While those whose interest lies only in the applications may wish to skip this material, the theoretically minded will recognize in these pages the groundwork for the further elaboration of what we hope will become a comprehensive and unified theory of the operation of the central nervous system.


PART ONE

I

TRANS-SYNAPTIC DYNAMICS

The performance of any overt act, by any but the most primitive of organisms, is accomplished by the contraction and relaxation of groups of specialized cells called muscles. Normally these contractions and relaxations, by whatever mechanisms they may be effected, are at least initiated by prior events occurring at the junctions with these muscles of certain other specialized cells called neurons. Whatever the nature of these prior events, and by whatever mechanism they are effected, they are themselves initiated by a sequence of still prior events in the neurons themselves, and these in turn by yet earlier events at the junctions of other neurons with these. Thus regressing, step by step, we conclude that apart from possible cycles, pools of perpetual activity, the whole sequence was started by an initial set of events at the points of origin of an initial set of neurons. And finally this ultimate set of initial events — the set, or any member of the set, according to convenience, being called a stimulus — was brought about by or consisted in some physical or physiological occurrence in the environment or within the organism.

Doubtless there are often and perhaps always countless other accompanying events occurring within the organism and interacting to a greater or lesser degree with those events here mentioned, but no scientific theory can account for everything, and still less for everything all at once. We wish, therefore, to define our schematic reacting organism as one consisting solely of receptors (sense-organs), effectors (muscles), and a connecting set of neurons, the whole and the parts being affected by the physical or physiological environment only insofar as this acts as a stimulus via the receptors. We wish to consider to what extent behavior can be accounted for in terms of such a model. In undertaking such an inquiry, we freely and expressly acknowledge that much is left out, and we emphatically refuse to make any claim in advance as to the range of the behavior that can be so accounted for. This is an empirical question to be experimentally decided. But a hypothesis cannot even be refuted until it is clearly formulated.

The structure of a neuron is fairly complicated and its behavior is hardly less so. Consequently, to make progress the neurons, too, must be schematized. Structurally there is a cell body and two or more


threadlike processes, but the terminations of these processes are of two sorts. A termination of the one sort we shall call an origin, one of the other sort, a terminus. When appropriate stimulation of sufficiently high degree is applied at an origin, there is conducted along the neuron a "nervous impulse" all the way to the various termini. This nervous impulse, arriving at a terminus, may contribute to the stimulation of any neuron which has an origin at the same place.

Doubtless in all strictness the impulse does not simply jump from neuron to neuron but passes by way of some intermediary process set up in the synapse which is the junction between the two neurons. From our point of view, it is largely a matter of convenience whether we postulate such an additional process or not.

The nervous impulse manifests itself as a localized change in electric potential; its duration at any point is about half a millisecond, and it is transmitted at a rate that, though low in some neurons, in others may equal or exceed 10⁴ cm sec⁻¹. Moreover, in physiological stimulation, if the stimulation is maintained, the impulses are repeated and may reach a frequency which is of the order of 10² sec⁻¹. The more intense the stimulation, the more frequent the impulses, but there is an upper limit for any given neuron which varies somewhat from neuron to neuron. When we have occasion to take account of it, we shall suppose this upper limit to be a fixed characteristic of the neuron.

McCulloch and Pitts (1943) have developed a theory of the "quantized" dynamics of the neuron which takes account of the individual impulses, and we shall return to this at the end of this monograph. For the present, however, we shall schematize further by doing some statistical averaging and by fixing our attention upon the synapse rather than upon the neuron itself. We shall choose the alternative of supposing that the impulses of the afferent neurons are not the immediate stimuli for the efferent neuron, but that these impulses start or maintain at the synapse an intermediate process which is the immediate stimulus. To have a concrete picture, one may imagine that some chemical substance is released by the impulses and dissipated or destroyed as a monomolecular breakdown. However, it is by no means implied that this is the case, and furthermore, we shall not speak in such terms but shall speak merely of an "excitatory state," and denote the state or its intensity by ε. More briefly we shall speak of the excitation ε.

The amount by which the impulses increase ε in unit time is presumably proportional to the frequency of these impulses, and the factor of proportionality is taken to be a characteristic of the fiber. We make the simplest assumption as to the rate of dissipation of ε and


assume it to be representable by the term aε. If we then take aφ, proportional to the frequency, to represent this rate of increase of ε by the impulses, we obtain the equation (Rashevsky, 1938)

dε/dt = a(φ − ε),   (1)

which we assume to describe the development of ε. We take ε to be a measure of the stimulus acting upon any neuron which originates at the synapse in question. Note that by equation (1) we pass, in a sense, directly from origin to terminus of the neuron, compressing into the function φ our only reference to the intra-neuronal dynamics. When φ = 0 the impulses have zero frequency, i.e., do not occur, and we shall say the neuron is at rest. Nevertheless, ε is not necessarily zero; in fact, after the neuron has been active, ε vanishes only asymptotically, according to equation (1) with φ = 0.

Now φ is proportional to the frequency of the generating impulses, and this is, as implied, an increasing function of the applied stimulus with, however, a finite asymptotic value. Hence we may write

φ = φ(S),   (2)

where S is a measure of the applied stimulus. However, in order for the impulses to occur, S must exceed a certain minimal value, called the threshold, which is characteristic of the neuron in question and which we shall denote by h. Hence φ(S) is zero for S ≤ h, and for S > h, φ(S) is positive, is monotonically increasing but with a decreasing slope, and approaches a finite asymptotic value for large values of S.

Relatively simple analytic functions possessing these properties for S > h are the following (Rashevsky, 1938; in this connection cf. Hartline and Graham, 1932, and Matthews, 1933):

φ = φ₀[1 − e^(−α(S−h))],   (3)

φ = (φ₀/log δ) log [((S − h)δ + h)/S],   (4)

where δ is small and φ₀ is the asymptotic value of φ. For not too large values of S either function may be approximated by an expression of the form

φ = a(S − h),   (5)

and the second by

φ = β log (S/h).   (6)

In any case, for S ≤ h, φ = 0.
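These threshold functions are easy to examine numerically. The sketch below is ours, not the authors' (Python is a modern convenience, and all parameter values are illustrative); it assumes the forms φ = φ₀[1 − e^(−α(S−h))] for (3) and φ = (φ₀/log δ) log[((S−h)δ + h)/S] for (4):

```python
import math

# Illustrative parameters (not from the text): asymptote phi0, slope
# constant alpha, threshold h, and the small delta of equation (4).
phi0, alpha, h, delta = 1.0, 0.5, 1.0, 0.01

def phi_exp(S):
    """Equation (3): zero up to the threshold h, then
    phi0*(1 - exp(-alpha*(S - h)))."""
    if S <= h:
        return 0.0
    return phi0 * (1.0 - math.exp(-alpha * (S - h)))

def phi_log(S):
    """Equation (4): (phi0/log delta) * log(((S - h)*delta + h)/S)."""
    if S <= h:
        return 0.0
    return phi0 / math.log(delta) * math.log(((S - h) * delta + h) / S)

# Both vanish at threshold, increase monotonically, and approach phi0.
assert phi_exp(h) == 0.0 and phi_log(h) == 0.0
assert phi_exp(3.0) > phi_exp(2.0) > 0.0
assert phi_log(3.0) > phi_log(2.0) > 0.0
assert abs(phi_exp(1e6) - phi0) < 1e-3 and abs(phi_log(1e6) - phi0) < 1e-2
```

Near the threshold, the first function reduces approximately to the linear form (5) and the second to the logarithmic form (6).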


Now φ is a function of S, and S may well be a function of t, in which case φ is a function of t. The complete solution of equation (1) is then given by

ε = e^(−at) [ε₀ + a ∫₀ᵗ e^(aτ) φ(τ) dτ],   (7)

where ε₀ is the initial value of ε. When, in particular, S, and therefore also φ, is constant with respect to time, this becomes

ε = e^(−at) ε₀ + φ(1 − e^(−at)).   (8)

Thus ε approaches the value φ asymptotically, the approach being in all cases monotonic, and either increasing or decreasing according to whether φ exceeds or is exceeded by ε₀.
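The constant-stimulus solution (8) can be checked against a direct numerical integration of equation (1). The following Python sketch is ours, with illustrative parameter values, using a simple forward-Euler step:

```python
import math

# Illustrative parameters (ours): rate constant a, constant phi,
# and initial excitatory state eps0.
a, phi, eps0 = 2.0, 1.5, 0.2
dt, T = 1e-4, 3.0

# Forward-Euler integration of equation (1): d(eps)/dt = a*(phi - eps).
eps, t = eps0, 0.0
while t < T:
    eps += dt * a * (phi - eps)
    t += dt

# Closed form (8): eps = exp(-a*t)*eps0 + phi*(1 - exp(-a*t)).
exact = math.exp(-a * T) * eps0 + phi * (1.0 - math.exp(-a * T))
assert abs(eps - exact) < 1e-3
# The approach to phi is monotonic: eps has risen from eps0 toward phi.
assert eps0 < eps < phi
```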

If we were now to introduce an assumption to relate the muscular contraction with the applied ε, we should have a system of formulae to be evaluated sequentially along any neural pathway from receptor to effector, for relating the time and the intensity of the response to the temporal form of the stimulus. But this would obviously provide only a very incomplete picture. A given stimulus not only leads to the contraction of one set of muscles; it leads also to the relaxation of the antagonistic muscles. Any effective movement involves both components, of contraction and the inhibition of contraction. Thus we are inevitably led to extend our picture to include the phenomenon of inhibition.

There are many ways in which such a phenomenon could be introduced into our schematic picture, but the simplest way seems to be to suppose that at least some neurons have the property of creating, as the result of their activity, an inhibitory state of intensity j (briefly, an inhibition j), antagonistic to the excitatory state ε, and to suppose that the production of j follows the same formal law as that for ε:

dj/dt = b(ψ − j).   (9)

The function ψ is of the same type as φ, and it is only as a matter of convenience that we introduce a separate symbol.

Rashevsky (1938, 1940) commonly assumes that in general the activity of any neuron leads to the production of both ε and j, although for particular neurons, the one or the other may be negligible in amount. Evidently we may always replace a single neuron developing both ε and j by a pair, one developing ε alone and one j alone. It is useful, however, to consider some of the characteristics of a "mixed" neuron of the Rashevsky type.

Since ε and j are antagonistic, we are now supposing that

σ = ε − j   (10)


is the measure of the effective stimulus acting upon any neuron originating at the given neuron's terminus. If at any moment the σ due to the activity of a particular neuron is positive, we shall speak of the neuron as "exciting" or having an "exciting effect" at that moment, without, however, meaning to imply thereby that it then excites any neuron. It will do this provided only that σ exceeds the threshold of a neuron suitably placed. Neither do we imply that the neuron which produced the σ is at this moment acting, though it must have been acting in the very recent past if σ is still appreciable. Likewise, we shall speak of the neuron as "inhibiting" or having an "inhibiting effect" whenever its σ is negative. Consider the case of a constant S, so that φ and ψ are themselves constant. Asymptotically ε and j approach φ and ψ, respectively, so that the neuron is asymptotically exciting or asymptotically inhibiting according to the relative magnitudes of φ and ψ. However, the initial rates of increase of ε and j are equal to aφ and to bψ, respectively, so that an asymptotically exciting neuron — for which φ > ψ — would be momentarily inhibiting in case bψ > aφ, and vice versa. Thus the transient and the asymptotic effects of such a neuron would be quite opposite.

Furthermore, suppose, for definiteness, that the neuron is asymptotically inhibiting, ψ > φ, and consider the effect following the cessation of its own stimulus, when the neuron, as a result, comes to rest. We suppose for simplicity that the constant stimulus is maintained long enough for the asymptotic state to be reached. Then, on removal of the stimulus, φ and ψ both drop to zero so that ε and j decline exponentially to zero. If b > a, the decline of j is more rapid than that of ε and a transient exciting effect may, and in fact always does, ensue while the neuron is thus at rest.

To summarize all possible cases of this sort: A neuron is

a) Asymptotically exciting whenever

φ > ψ.

Figure 1


Furthermore, when

i) a < b, aφ > bψ, it is always exciting in activity and transiently exciting at rest;

ii) a < b, aφ < bψ, it is transiently inhibiting in activity, transiently exciting at rest (Figure 1);

Figure 2

iii) a > b, aφ > bψ, it is always exciting in activity, transiently inhibiting at rest (Figure 2).

The case a > b, aφ < bψ is inconsistent with φ > ψ.

b) Asymptotically inhibiting whenever

φ < ψ.

Furthermore, when

i) a > b, aφ < bψ, it is always inhibiting in activity and transiently inhibiting at rest;

ii) a > b, aφ > bψ, it is transiently exciting in activity, transiently inhibiting at rest;

iii) a < b, aφ < bψ, it is always inhibiting in activity, transiently exciting at rest.

The case a < b, aφ > bψ is inconsistent with φ < ψ.
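The opposition between transient and asymptotic effects is easy to exhibit numerically. The sketch below is ours (parameter values illustrative): an asymptotically exciting neuron, φ > ψ, with a < b and aφ < bψ, so that, by equations (8), (9), and (10), σ is negative shortly after the onset of a constant stimulus but approaches the positive value φ − ψ:

```python
import math

# Illustrative parameters (ours): phi > psi but b*psi > a*phi, a < b.
a, b = 1.0, 4.0        # rate constants of equations (1) and (9)
phi, psi = 1.0, 0.8    # asymptotic values of eps and j

def sigma_onset(t):
    """sigma = eps - j during constant stimulation, starting from rest."""
    eps = phi * (1.0 - math.exp(-a * t))   # equation (8) with eps0 = 0
    j = psi * (1.0 - math.exp(-b * t))     # the same form for j
    return eps - j

assert sigma_onset(0.1) < 0.0                        # transiently inhibiting
assert abs(sigma_onset(20.0) - (phi - psi)) < 1e-6   # asymptotically exciting
```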

We have tacitly assumed that the σ at any synapse is affected only by neurons terminating, not at all by those originating, at this synapse. We further suppose that if several neurons terminate on the same neuron, the σ's of all of them combine linearly. It is now apparent that with these simple assumptions, results of considerable complexity are possible. We shall attempt first of all to explore some of these complexities in the abstract, and next to relate a few of them to concrete psychological processes.

II

CHAINS OF NEURONS IN STEADY-STATE ACTIVITY

From our point of view, a receptor provides only the mediation between certain non-neural events and the occurrence of a stimulus S for one or more neurons, and an effector provides the mediation between the occurrence of an excitation σ produced by neurons and certain other non-neural events. We shall not investigate these mediations, but shall consider only the problem of relating σ to S for any given neural structure. Some of the neural structures actually to be found in the higher forms are of bewildering complexity, so that merely to describe them from observation is the task of a lifetime. We can hope to progress with our problem only if we start with very simple, hypothetical structures.

The simplest possible connected structure of more than one neuron is a chain of neurons. In a chain, every neuron but one has an origin which coincides with the terminus of some other. Call this one neuron N₀, its origin s₀ and its terminus s₁. Let N₁ be the neuron whose origin is at s₁, let s₂ be its terminus, and so sequentially, the terminus of the last neuron of a chain of n neurons being sₙ. If a stimulus S is applied to N₀ at s₀, it may come from a receptor or from a neuron or neurons not in the chain. If Nₙ₋₁ develops σ at sₙ, this may act upon an effector or upon a neuron not in the chain. That is immaterial. Suppose that the neurons of our chain are all of the simple excitatory type. Suppose, further, that only a negligible time is required for the σ(= ε) produced by any neuron to reach its asymptotic value φ when a constant stimulus S is applied. In other words, we are now considering the chain only in its asymptotic state after stimulation by a constant stimulus. Then if

σ₀ = S

is the total stimulus acting at s₀ upon N₀, it follows that

σ₁ = φ₀(σ₀)

is the σ produced by N₀ at s₁, where φ₀ is the φ-function of N₀. If no receptor or neuron outside the chain introduces any S or σ at s₁, then

σ₂ = φ₁(σ₁) = φ₁[φ₀(σ₀)]

is the σ produced by N₁ at s₂. Thus we can calculate sequentially all the σᵢ.


Define the functions φᵢ,ᵥ(σ) by the recursion formulas

φᵢ,ᵥ₊₁(σ) = φᵢ₊ᵥ₊₁[φᵢ,ᵥ(σ)].   (1)

Then

σᵢ₊ᵥ₊₁ = φᵢ,ᵥ(σᵢ),   (2)

so that each function φᵢ,ᵥ gives the σ produced at sᵢ₊ᵥ₊₁ in terms of that present at sᵢ. If the derivatives exist, then

φ′ᵢ,ᵥ₊₁(σ) = φ′ᵢ₊ᵥ₊₁[φᵢ,ᵥ(σ)] φ′ᵢ₊ᵥ[φᵢ,ᵥ₋₁(σ)] ⋯ φ′ᵢ(σ),   (3)

where the primes denote the derivatives. Hence if, as we suppose, each φᵢ(σ) is monotonic, so is each φᵢ,ᵥ(σ); and if each φ′ᵢ(σ) is decreasing, so is each φ′ᵢ,ᵥ(σ). Further, if each φ′ᵢ vanishes asymptotically, so does each φ′ᵢ,ᵥ. Hence these "higher order" excitation-functions are functions of the same type as the ordinary ones, and we may always replace any such chain of neurons by a single one, at least if we are interested in the asymptotic behavior alone. If every φ′ᵢ(σ) ≤ 1 for all σ, then the sequence

σ₀, σ₁, σ₂, ⋯, σₙ

is decreasing. In fact, if hᵢ is the threshold of Nᵢ, then

σᵢ₊₁ ≤ σᵢ − hᵢ,

and if h is the lowest threshold in the chain,

σᵢ ≤ σ₀ − ih,

so that the number of neurons in the chain that can be excited is not greater than the greatest integer in σ₀/h.

If the φ′ᵢ(σ) > 1 for small values of σ, this is not necessarily the case. It has been shown (Householder, 1938a) that when all the neurons are identical, and the chain is long, the σᵢ will then either diminish to zero, or approach a certain positive limit characteristic of the chain, according to whether σ₀ lies below or above a certain critical value. The limit and the critical value are the two roots of the equation

φ(σ) = σ.

Note that it is legitimate to drop the subscript when all the φᵢ are identical.
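This behavior of a long chain of identical neurons can be illustrated by direct iteration. The Python sketch below is ours; it uses the exponential form (3) of chapter i, with illustrative parameters chosen so that φ′(h) > 1:

```python
import math

# Illustrative parameters (ours), with phi'(h) = phi0*alpha = 5 > 1.
phi0, alpha, h = 5.0, 1.0, 1.0

def phi(s):
    """Form (3) of chapter i: zero at or below the threshold h."""
    return phi0 * (1.0 - math.exp(-alpha * (s - h))) if s > h else 0.0

def run_chain(sigma0, n=60):
    """Iterate sigma_{i+1} = phi(sigma_i) down a chain of n identical neurons."""
    s = sigma0
    for _ in range(n):
        s = phi(s)
    return s

low = run_chain(1.2)     # sigma_0 below the critical value
high = run_chain(2.0)    # sigma_0 above it
assert low == 0.0                     # activity dies out along the chain
assert high > 4.0                     # activity approaches the positive limit
assert abs(phi(high) - high) < 1e-9   # the limit is a root of phi(sigma) = sigma
```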

If all of the σ's are within the range that permits of a linear approximation to the φᵢ, it is easy to obtain an analytic expression for these σᵢ in terms of σ₀ and the subscript i (Landahl, 1938b). We have, in fact [chap. i, equation (5)]


σ₁ = a(σ₀ − h),

σ₂ = a(σ₁ − h) = a²σ₀ − ah(a + 1),

σᵢ = a(σᵢ₋₁ − h) = aⁱσ₀ − ah(a^(i−1) + a^(i−2) + ⋯ + 1), or

σᵢ = aⁱ[σ₀ + ah/(1 − a)] − ah/(1 − a),   a ≠ 1,   (4)

σᵢ = σ₀ − ih,   a = 1.   (5)

Hence if a = 1, the σᵢ decrease in arithmetic progression until, for some i, σᵢ ≤ h, after which all succeeding σᵢ₊ᵥ are zero. Otherwise the sequence

σᵢ + ah/(1 − a)

forms a geometric progression. The progression, and hence the sequence σᵢ, increases when

a > 1,   σ₀ > ah/(a − 1).

When the second inequality is reversed but the first holds, the progression consists of negative terms which increase numerically until some σᵢ falls below threshold. If the first inequality is reversed, the progression is decreasing. Finally, if the second inequality becomes an equality, then every σᵢ has the same value.

So far we have been supposing that the only excitation introduced from the outside — from receptors, that is, or from neurons which are not members of the chain — was introduced at s₀ alone. We have further supposed all the neurons in the chain to be excitatory, that is, asymptotically exciting while acting, since evidently, in such a situation, no activity could occur beyond an inhibiting member of the chain. We turn now to a somewhat more general situation in which the chain may contain neurons which are asymptotically inhibiting while acting, and in which outside excitation may be present at any or all the sᵢ. Following a suggestion made by Pitts (1942b), we now represent the function φ as linear until the stimulus reaches a certain maximal value, and constant at the upper limit thereafter. This representation, though no doubt less accurate than the functions (3) and (4) of chapter i, is at any rate a fair first approximation, and is much more easily handled. Let Sᵢ be the applied stimulus at sᵢ, and define the quantities

ξᵢ = Sᵢ − hᵢ,   ηᵢ = ξᵢ + σᵢ.   (6)

Then ηᵢ represents the excess over the threshold of the total stimulus


acting upon neuron Nᵢ. In accordance with our description of the φ's, therefore, we have

σᵢ₊₁ = φᵢ(ηᵢ),   (7)

where

φᵢ(ηᵢ) = 0 when ηᵢ ≤ 0,

φᵢ(ηᵢ) = aᵢηᵢ when 0 < ηᵢ < Hᵢ,   (8)

φᵢ(ηᵢ) = aᵢHᵢ when ηᵢ ≥ Hᵢ.

The coefficient aᵢ we shall call the activity-parameter of Nᵢ, and it may be positive, for an excitatory, or negative, for an inhibitory neuron, but is not zero. Our problem is the following: supposing S₁, S₂, ⋯, Sₙ fixed, to express ηₙ as a function of η₀ = S₀ − h₀, and, more generally, to express ηᵢ₊ᵥ as a function of ηᵢ when Sᵢ₊₁, ⋯, Sᵢ₊ᵥ remain fixed.
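In modern terms the piecewise-linear response (8) may be sketched as follows (the code and its numbers are ours, purely for illustration):

```python
def phi_piecewise(eta, a_i, H_i):
    """Equation (8): zero below threshold, slope a_i in the linear
    range, saturated at a_i*H_i in maximal activity."""
    if eta <= 0.0:
        return 0.0
    if eta < H_i:
        return a_i * eta
    return a_i * H_i

# An excitatory neuron, a_i > 0 ...
assert phi_piecewise(-1.0, 2.0, 3.0) == 0.0   # below threshold
assert phi_piecewise(1.5, 2.0, 3.0) == 3.0    # linear range
assert phi_piecewise(10.0, 2.0, 3.0) == 6.0   # maximal activity
# ... and an inhibitory one, a_i < 0.
assert phi_piecewise(1.0, -2.0, 3.0) == -2.0
```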

Since each ηᵢ₊₁ varies linearly with ηᵢ when the latter occupies a certain restricted range, and is otherwise constant, it is at once apparent that the same is true of the variation of any ηᵢ₊ᵥ with ηᵢ. Again, since any Sᵢ may be so large that ηᵢ always exceeds Hᵢ, or so small that ηᵢ is always negative, it is evident that ηᵢ₊ᵥ may remain constant for all values of ηᵢ. In such a case sᵢ₊ᵥ is said to be inaccessible to sᵢ (Pitts, 1942a); otherwise it is accessible. More explicitly, if, as ηᵢ varies over all values from −∞ to +∞ while Sᵢ₊₁, ⋯, Sᵢ₊ᵥ remain fixed, the value of ηᵢ₊ᵥ remains constant, then sᵢ₊ᵥ is inaccessible to sᵢ. Clearly sᵢ₊₁ is always accessible to sᵢ. If sᵢ₊ᵥ is inaccessible to sᵢ, then sᵢ₊ᵥ is also inaccessible to any sᵢ₋ᵥ′, and further, any sᵢ₊ᵥ₊ᵥ′ is inaccessible to sᵢ. Finally, it is clear that if sᵢ₊ᵥ is accessible to sᵢ, then where ηᵢ₊ᵥ varies with ηᵢ, it decreases if there is an odd number, increases if an even number, of inhibitory neurons (with negative a's) between sᵢ and sᵢ₊ᵥ.

The above conditions for inaccessibility may be phrased thus:

If any one of the four following conditions holds:

α_i > 0 ,  ξ_{i+1} + α_i H_i ≤ 0 ;
α_i > 0 ,  ξ_{i+1} ≥ H_{i+1} ;
α_i < 0 ,  ξ_{i+1} ≤ 0 ;
α_i < 0 ,  ξ_{i+1} + α_i H_i ≥ H_{i+1} ;

then s_{i+2} is inaccessible to s_i. Otherwise s_{i+2} is accessible to s_i.

These conditions are also sufficient for the inaccessibility of any s_{i+2+v} to any s_{i−v′}, where v and v′ are non-negative integers, but the conditions are not necessary.
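The four conditions can be checked mechanically against the definition. The sketch below (our notation, illustrative values) tests each condition for s_{i+2} and compares the verdict with a brute-force sweep of η_i.

```python
def phi(eta, alpha, H):
    """Piecewise-linear activity function of equation (8); works for
    either sign of alpha."""
    return 0.0 if eta <= 0.0 else alpha * min(eta, H)

def inaccessible_ip2(alpha_i, H_i, xi_ip1, H_ip1):
    """The four sufficient-and-necessary conditions for s_{i+2} to be
    inaccessible to s_i."""
    if alpha_i > 0 and xi_ip1 + alpha_i * H_i <= 0:
        return True
    if alpha_i > 0 and xi_ip1 >= H_ip1:
        return True
    if alpha_i < 0 and xi_ip1 <= 0:
        return True
    if alpha_i < 0 and xi_ip1 + alpha_i * H_i >= H_ip1:
        return True
    return False

def varies(alpha_i, H_i, xi_ip1, alpha_ip1, H_ip1):
    """Brute force: does sigma_{i+2} vary at all as eta_i sweeps the line?"""
    out = {round(phi(xi_ip1 + phi(e / 10.0, alpha_i, H_i), alpha_ip1, H_ip1), 9)
           for e in range(-500, 501)}
    return len(out) > 1
```

In each condition the point is the same: the whole range of η_{i+1}, namely the interval with endpoints ξ_{i+1} and ξ_{i+1} + α_i H_i, falls where φ_{i+1} is constant.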

CHAINS OF NEURONS IN STEADY-STATE ACTIVITY

The following conditions for accessibility are somewhat less obvious. Let us say of a neuron N_i whose stimulus exceeds its threshold by more than H_i that N_i is in a state of maximal activity. Then (cf. Pitts, 1942a):

A. Let there be an odd number of inhibitory neurons between s_i and s_j. Then if, for given S_{i+1}, ···, S_j, it can occur that both N_i and N_j are inactive, or that both are in maximal activity, s_{j+1} is inaccessible to s_i, and a fortiori, s_n to s_0.

B. Let there be an even number of inhibitory neurons, or none, between s_i and s_j. Then if, for given S_{i+1}, ···, S_j, it can occur that N_j is at rest while N_i is in maximal activity, or that N_j is in maximal activity while N_i is at rest, s_{j+1} is inaccessible to s_i, and a fortiori, s_n to s_0.

Consider case A, the first alternative. Making N_i active could decrease, but could not increase, the activity of N_j, and hence could not initiate activity in N_j. Similarly in the second alternative, diminishing the activity of N_i could increase, but could not decrease, that of N_j, and since N_j is already in maximal activity, the resulting σ can not be further increased. Analogous considerations apply to case B.

Accessibility is defined only with reference to a given distribution of the stimulation, supposedly fixed, applied at all synapses of the chain except at the origin. However, if, the distribution being fixed, s_n is accessible to s_0, then, still with the same applied distribution, η_n is a linear function of η_0 when η_0 lies between certain limits, and elsewhere η_n is constant. Now each η_i defines the excess of the total stimulation at s_i over the threshold h_i of N_i. Hence if

y_i = η_i + h_i

is the total stimulation, y_i is a linear function of y_0 when y_0 lies between suitable limits. This is the property postulated of single neurons for this discussion. To get an explicit representation, let

Ξ = α_{n−1} (S_{n−1} − h_{n−1}) + ··· + α_{n−1} α_{n−2} ··· α_1 (S_1 − h_1) − α_{n−1} ··· α_0 h_0 , (9)

A = α_{n−1} α_{n−2} ··· α_0 .

Then for suitable y′ and y″ we have

y_n = S_n + Ξ + A y′ when y_0 ≤ y′ ,
y_n = S_n + Ξ + A y_0 when y′ < y_0 < y″ , (10)
y_n = S_n + Ξ + A y″ when y_0 ≥ y″ .

The quantities Ξ, y′ and y″ depend upon the applied stimulation. The A does not.
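Equation (9) can be verified numerically: in the linear range, step-by-step propagation through the chain must reproduce S_n + Ξ + A y_0 exactly. A sketch, with Ξ and A as in (9) and all names our own:

```python
from math import prod

def xi_and_A(S, h, alpha):
    """Xi and A of equation (9) for a chain N_0, ..., N_{n-1}; S[k], h[k]
    are the stimulus and threshold at synapse s_k, alpha[k] the
    activity-parameter of N_k."""
    n = len(alpha)
    Xi = sum(prod(alpha[k:]) * (S[k] - h[k]) for k in range(1, n))
    Xi -= prod(alpha) * h[0]
    return Xi, prod(alpha)

def y_n_linear(S, h, alpha):
    """Total stimulation y_n by direct propagation, every neuron assumed
    to stay in the linear range of (8)."""
    eta = S[0] - h[0]
    for i in range(len(alpha)):
        eta = S[i + 1] - h[i + 1] + alpha[i] * eta
    return eta + h[len(alpha)]
```

Outside the interval (y′, y″) some neuron saturates or falls silent, and the linear formula gives way to the constant branches of (10).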

With reference to the steady-state dependence of σ upon S, the properties of a chain of neurons are seen to be very similar to those of a single neuron. This is true especially in the case when a constant S, coming, perhaps, from self-exciting circuits (chap. iv), is applied to each of the intermediate neurons of the chain, and if the linear representation is adequate the properties of the chain can be made exactly the same as those of an individual neuron. Thus it is legitimate to speak of two centers as connected by a single neuron even when it would be more plausible on anatomical grounds to suppose that an entire chain is required. On the other hand, chains can exhibit properties quite different from those of a single neuron, since, in particular, more variables enter, as is clear from equations (9) and (10).

The utility of the notion of accessibility will become more apparent when the general net is discussed in chapter v, but it is perhaps sufficiently evident already that the properties of very complicated nets might in special cases turn out to be very simple because of the inaccessibility of certain centers to certain others. It is evident, too, that whereas we have discussed accessibility only in connection with the linear representation of φ(S), very similar results must hold in general.

III

PARALLEL, INTERCONNECTED NEURONS

Color-contrast and visual illusions of shape provide well-known examples of the almost universal interdependence of perceptions. Physiologically no stimulus occurs in the absence of all others, and the response to any stimulus depends in part upon the nature of the background against which it is presented. One may wish to say, indeed, that the true stimulus to the organism is the whole situation, but since we cannot discuss any whole situation, and since the whole situation is never duplicated, such terminology does not seem to serve any useful scientific purpose.

If two stimuli differ only in degree, it may be true that the responses which they evoke differ only in degree, the stronger stimulus evoking the stronger response. But in many instances there is a complete change in the form of the response, and in others it is the weaker stimulus, and not the stronger, which brings forth the stronger response.

In our schematic reacting organism, such phenomena are easily understood in terms of suitable interconnections between parallel neurons. We are reserving Part II for the precise formulations necessary to make quantitative predictions, so that we content ourselves here with a few qualitative results to indicate in general how this comes about.

In the barest terms, if two stimuli which differ only in degree lead to responses which differ in form, then there are pathways (neural chains from receptor to effector) which can be traversed when the impulses are initiated by a stimulus within a given range of intensities but not when these are initiated by a stimulus lying outside this range on the scale of intensities. The simplest neural mechanism having this property consists merely of two neurons, N_e excitatory and N_i inhibitory, having a common origin and a common terminus (Landahl, 1939a). Let h_e and h_i be the thresholds of N_e and N_i, respectively, and let h be the threshold of some neuron N originating at the common terminus of the two neurons. Suppose

h_i > h_e ,  ψ(∞) > φ(∞) ,  φ(h_i) > h .

Then if S is sufficiently near to h_i in value (Figure 1), asymptotically

σ = φ(S) − ψ(S) > h

and N will become excited, whereas a somewhat larger S will result in a negative σ at the origin of N, and a somewhat smaller one will yield a sub-threshold, though positive, σ. There will be some range h′, h″, therefore, which contains h_i, and within which S must lie if transmission is to occur.

Figure 1

These are not the only possible relations that will limit the range over which transmission may occur. We could have, for example (Figure 2),

h_i = h_e ,  φ′(h_i) > ψ′(h_i) ,  ψ(∞) > φ(∞) .

Figure 2

Then for an S within a limited range, φ exceeds ψ, which is all that is required except that h must be sufficiently small. Suppose, then, one has a number of such sets, N_i, N_e and N, all with a common origin, and each possessing a characteristic range. If these ranges do not overlap, and if each set is connected through a chain to a particular effector, then any S will excite only the effector corresponding to the particular range on which S lies.
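The band-pass behavior of the N_e, N_i pair can be illustrated with any pair of saturating response functions satisfying h_i > h_e, ψ(∞) > φ(∞), φ(h_i) > h. The sketch below uses exponential saturations with invented constants; only the qualitative shape matters.

```python
import math

def act(S, thresh, asym):
    """Saturating response: zero below the threshold, rising toward asym."""
    return asym * (1.0 - math.exp(-(S - thresh))) if S > thresh else 0.0

def net_sigma(S, h_e=1.0, h_i=3.0, phi_max=2.0, psi_max=4.0):
    """Asymptotic net stimulus phi(S) - psi(S) at the common terminus."""
    return act(S, h_e, phi_max) - act(S, h_i, psi_max)
```

With h = 1 for the downstream neuron N, transmission occurs only on a band of intensities around h_i: weaker stimuli give a positive but sub-threshold σ, stronger ones a negative σ.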

As a first step in discussing the interaction of perceptions (strictly the interaction of the transmitted impulse), and as a kind of generalization of the mechanism just discussed, consider the following (cf. Rashevsky, 1938, chap. xxii). The neurons N_11 and N_22 are excitatory, originating, respectively, at s_1 and s_2, terminating, respectively, at s′_1 and s′_2. The neurons N_12 and N_21 are inhibitory, originating, respectively, at s_1 and s_2, terminating, respectively, at s′_2 and s′_1. Let us restrict ourselves here to a range of intensities over which the linear approximations to the functions φ and ψ are adequate. Then, stimuli S_1 and S_2 being applied at s_1 and s_2, we have

σ_1 = α_11 (S_1 − h_11) + α_21 (S_2 − h_21) ,
σ_2 = α_12 (S_1 − h_12) + α_22 (S_2 − h_22) , (1)

when the quantities within the parentheses are all positive. When any of these quantities within parentheses is negative, however, the term is deleted. The conditions for the excitation of N_1 and N_2 are, respectively, σ_1 > h_1 and σ_2 > h_2.

A geometric representation of these conditions is easily obtained in the (S_1, S_2)-plane. The graph of the relation σ_1 = h_1 is a broken line consisting of a single vertical ray of abscissa h_11 + h_1/α_11 extending downward to infinity from the ordinate h_21, and a ray extending up and to the right with slope −α_11/α_21. Note that this slope is positive, since, N_21 being inhibitory, α_21 is negative. Then the region defined by σ_1 > h_1 is that to the right of and below the broken line. Likewise the region defined by σ_2 > h_2 consists of those points above and to the left of a certain broken line which consists of a horizontal ray extending to the left, and a ray of positive slope. If these regions overlap (Figure 3), it is possible to have both N_1 and N_2 acting simultaneously. Otherwise it is not possible.

Figure 3

These regions necessarily overlap if the determinant of the coefficients in equations (1) is positive:

| α_11  α_21 |
| α_12  α_22 | > 0 , (2)

for then the line σ_1 = h_1 is steeper than σ_2 = h_2, so that for large S_1 and S_2 the magnitudes can be so related that both the inequalities, σ_1 > h_1 and σ_2 > h_2, are satisfied.

The case when condition (2) fails is of some interest (Rashevsky, 1938, chap. xxii). Now, if neither corner lies in the other region, the rays do not intersect, and it is never possible, with any S_1 and S_2, for both N_1 and N_2 to be excited at the same time. In fact, even with strong stimuli, the point (S_1, S_2) may be outside both regions and neither N_1 nor N_2 is excited. On the other hand, if either corner does lie in the other region, the rays do intersect, and there is a finite region of overlap as illustrated in Figure 3. Points (S_1, S_2) beyond the intersection and between the two rays represent pairs of stimuli which, though strong, fail to excite either N_1 or N_2. The analytic condition for this is the simultaneous fulfilment of the two inequalities

α_11 (h_12 − h_11 − h_1/α_11) − α_21 (h_21 − h_22 − h_2/α_22) > 0 ,
(3)
−α_12 (h_12 − h_11 − h_1/α_11) + α_22 (h_21 − h_22 − h_2/α_22) > 0 ,

together with the failure of equation (2).

The situation here described may be thought of as that of two stimuli competing for attention. When conditions (3) hold and (2) fails, there are moderate stimuli which lead to excitation of both N_1 and N_2 (awareness of both stimuli), while with more intense stimuli, unless one is sufficiently great as compared with the other, each stimulus prevents the response appropriate to the other and no response occurs.
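The competition can be exhibited numerically. The sketch below evaluates equations (1) with rectified terms for a symmetric parameter set in which conditions (3) hold but (2) fails (α = 1, β = 2, so the determinant α² − β² < 0); the values are ours, chosen only for illustration.

```python
def sigmas(S1, S2, alpha=1.0, beta=2.0, h=0.0, hp=1.0):
    """Equations (1) in the symmetric case (4): cross-inhibition with
    activity-parameter -beta and threshold hp; sub-threshold terms deleted."""
    def term(a, S, thr):
        # each term of (1) is dropped when its parenthesis is negative
        return a * (S - thr) if S > thr else 0.0
    s1 = term(alpha, S1, h) + term(-beta, S2, hp)
    s2 = term(-beta, S1, hp) + term(alpha, S2, h)
    return s1, s2

def excited(S1, S2, hpp=0.5):
    """Which of N_1, N_2 fire, with common second-level threshold hpp."""
    s1, s2 = sigmas(S1, S2)
    return s1 > hpp, s2 > hpp
```

Moderate equal stimuli excite both responses; strong equal stimuli suppress both; a sufficiently lopsided pair lets the stronger stimulus win outright.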

For the special case in which the mechanism is altogether symmetric (Landahl, 1938a; cf. also chap. ix) we may set

α_11 = α_22 = α ,  −α_12 = −α_21 = β ,
h_11 = h_22 = h ,  h_12 = h_21 = h′ ,  h_1 = h_2 = h″ . (4)

If the two responses are incompatible in nature the parameters might be so related that the two conditions (3) cannot be satisfied. The failure of these reduces to the single inequality

h″ ≥ α(h′ − h) , (5)

which holds necessarily in case we have h′ ≤ h. If the relation (5) is replaced by an equation, we have a kind of discriminating mechanism by which the stronger of two simultaneous stimuli elicits its appropriate response and prevents the other response. If, further,

α = β = 1 , so that σ_1 = S_1 − S_2 + (h′ − h) ,

and if, finally, h′ and h are very nearly equal, the transmitted stimulus approximates the absolute value of the difference between the two stimuli.

Figure 4

More generally, let the excitatory neuron N_ii connect s_i with s′_i (i = 1, ···, n) (Figure 4), let the inhibitory neuron N_ij (i ≠ j) connect s_i with s′_j, and let α, −β, h, h′ and h″ be the activity-parameters and the thresholds of the various neurons. If all neurons of the first level are active,

σ_i = α(S_i − h) − β Σ_{j≠i} (S_j − h′) , (6)

and the condition for excitation of N′_i (originating at s′_i) is σ_i > h″. If h < h′, then for values of the S_j between h and h′ the excitatory but not the inhibitory neurons are excited. If, further,

α(h′ − h) > h″ , (7)

then for values of the S_j near h′ the neurons N′_i are all excited. But with

n > 1 + α/β , (8)

when all the S_i are equal, the σ_i are equal, and if

S ≥ [β(n−1)h′ − αh − h″] / [(n−1)β − α] , (9)

none of the N′_i responds, though for somewhat smaller values of S they all respond.
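Relation (9) is easy to confirm: with σ_i given by (6) for n equal stimuli, the common stimulus at which response ceases is precisely the right member of (9). The constants below satisfy (7) and (8) and are invented for the purpose.

```python
def sigma_equal(S, n=3, alpha=1.0, beta=1.0, h=0.0, hp=1.0):
    """sigma_i of equation (6) when all n stimuli equal S (each > hp)."""
    return alpha * (S - h) - beta * (n - 1) * (S - hp)

def cutoff(n=3, alpha=1.0, beta=1.0, h=0.0, hp=1.0, hpp=0.5):
    """Right member of (9); meaningful when n > 1 + alpha/beta."""
    return (beta * (n - 1) * hp - alpha * h - hpp) / ((n - 1) * beta - alpha)
```

With n > 1 + α/β the coefficient of S in (6) is negative, so raising the common stimulus lowers every σ_i: just below the cutoff all respond, just above it none does.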

With the same relations among the parameters, suppose the S_i are not all equal, but that each exceeds h′. It is no restriction to suppose that

S_1 ≥ S_2 ≥ ··· ≥ S_n .

Evidently if some but not all the N′_i are responding, there must be some m ≥ 1 such that N′_1, N′_2, ···, N′_m are responding, N′_{m+1}, ···, N′_n are not. Then for i ≤ m, σ_i as given by equation (6) must exceed, σ_{m+1} fail to exceed, h″, and if S̄ represents the mean of all the S_i this leads to the relation

S_m > [nβS̄ + αh − (n−1)βh′ + h″] / (α + β) ≥ S_{m+1} . (10)

In particular if

S_1 = S_2 = ··· = S_m = S′ ,  S_{m+1} = S_{m+2} = ··· = S_n = S″ ,

then these relations are equivalent to

[α − β(m−1)] S′ − β(n−m) S″ > αh − (n−1)βh′ + h″
(11)
≥ [α − β(n−m−1)] S″ − βm S′ .

In either case the m stimuli S_i produce their response and prevent the occurrence of the response to the other n − m stimuli (cf. Rashevsky, 1938, chap. xxii; Landahl, 1938a).
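Since σ_i in (6) depends on the individual S_i only through (α + β)S_i and the fixed mean S̄, relation (10) says that exactly those stimuli exceeding a single cut determined by S̄ elicit their responses. A sketch with illustrative constants:

```python
def responders(S, alpha=1.0, beta=1.0, h=0.0, hp=1.0, hpp=0.5):
    """Number m of second-level neurons responding, by relation (10).
    Every S_i is assumed to exceed hp, so all first-level neurons act."""
    n = len(S)
    Sbar = sum(S) / n
    cut = (n * beta * Sbar + alpha * h - (n - 1) * beta * hp + hpp) / (alpha + beta)
    return sum(1 for s in S if s > cut)
```

A single dominant stimulus can thus silence all the rest, while equal moderate stimuli all get through and equal strong ones suppress one another entirely.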

Receptors in the skin and the retina are far too numerous to be treated by the simple algebraic methods so far employed. Here we must think in terms of statistical distributions. The receptors, or at least the origins of the neurons to be discussed, may have a one-, a two-, or a three-dimensional distribution. According to the case, let the letter x stand for the coordinate, the coordinate-pair, or the coordinate-triple of the origin of any neuron. Let x′ represent the coordinate, the coordinate-pair, or the coordinate-triple of any terminus of one of these neurons. Running from the region x, dx (consisting of points whose coordinates fall between the limits x and x + dx) to the region x′, dx′ may be excitatory or inhibitory neurons, or both. If we consider only the linear representation of the functions φ and ψ, each neuron is characterized by the two parameters α and h. To consider a somewhat more general type of structure than the one just discussed for the discrete case, let

N(x, x′, α, h) dx dx′ dα dh

represent the number of neurons originating within the region x, dx, terminating in x′, dx′, and characterized by parameters on the ranges α, dα and h, dh. Then, S(x) being the stimulus-density at x, the σ-density which results from these neurons alone at x′ is

α N(x, x′, α, h) [S(x) − h] dx dα dh


provided h < S(x). Hence the total σ-density, obtained by summing over the entire region (x), over all values of α (positive and negative), and over all values of h < S(x), is

σ(x′) = ∫_(x) ∫_(α) ∫_{h<S(x)} α N(x, x′, α, h) [S(x) − h] dh dα dx . (12)

Corresponding expressions can be derived, of course, on the supposition that φ and ψ are non-linear of any prescribed form (Rashevsky, 1938, chap. xxii).

Instead of writing the special form of expression (12) for the strict analogue of the discrete case considered above, let us suppose next that the inhibitory neuron N_ij, rather than passing from s_i to s′_j, passes from s′_i to s′_j (Rashevsky, 1938, chap. xxii). The net σ at any s′_j is then equal to the σ produced by excitatory neurons terminating here diminished by the amount of inhibition produced by the inhibitory neurons originating at the other s′_i, whereas it is this net σ at the s′_i which acts as the stimulus for these inhibitory neurons. We have therefore to solve an integral equation in order to determine the net σ.

In the continuous case, let σ̄(x) be the gross σ-density produced by the excitatory neurons terminating in the region x, dx. For the inhibitory neurons, let α = −β represent the activity-parameter. Let N(x′, x, β, h) dx′ dx dβ dh represent the number of inhibitory neurons which run from the region x′, dx′ to x, dx, and have activity-parameters and thresholds limited by the ranges β, dβ and h, dh. Then

σ(x) = σ̄(x) − ∫_(x′) ∫_0^∞ ∫_0^{σ(x′)} β N(x′, x, β, h) [σ(x′) − h] dh dβ dx′ . (13)

This is an integral equation, but one in which the unknown function enters as one of the limits of integration. If we interchange orders of integration, we have as a form equivalent to equation (13)

σ(x) = σ̄(x) − ∫_0^∞ ∫_0^∞ ∫_{σ(x′)>h} β N(x′, x, β, h) [σ(x′) − h] dx′ dβ dh . (14)

Then if, as a particular case, all inhibitory neurons have the same β and the same h, this becomes

σ(x) = σ̄(x) − β ∫_{σ(x′)>h} N(x′, x) [σ(x′) − h] dx′ . (15)

It is easy to solve this equation in certain special cases. Suppose N(x′, x) is independent of x′ and x. It is clear that the integral is then independent of x, and we may write


σ(x) = σ̄(x) − λI , (16)

where

I = ∫_{σ>h} [σ(x′) − h] dx′ ,  λ = βN . (17)

Now, if we knew the limits of the region over which σ > h, we could substitute expression (16) for σ into the integral in expression (17), integrate, solve for I, and finally place this value in relation (16) to obtain σ. Not knowing these limits, we proceed as follows. Since σ and σ̄ differ only by a constant, the limits of the region can be defined by the equation

σ̄(x) = μ , (18)

for a suitable μ. Leaving μ for the moment undetermined, we carry through the steps as outlined except that the range of the integration is to be defined by σ̄ > μ. We first obtain

I(μ) = [∫_{σ̄>μ} σ̄(x′) dx′ − h M(μ)] / [1 + λ M(μ)] , (19)

where M(μ) is the measure (length, area, or volume, according to the dimensionality) of the region σ̄ > μ. But since σ̄ > μ and σ > h define the same region, it follows from (16) that

μ − λ I(μ) = h . (20)

Hence if we solve this equation for μ, then we find, by equations (16) and (20), that

σ(x) = σ̄(x) + h − μ . (21)

It is evident that the procedure here outlined is applicable, with obvious modifications, in case N is a function of x′ but is independent of x. From the fact that σ and σ̄ differ only by a constant, certain properties of the solution are at once apparent. If σ̄ anywhere exceeds h, then σ must somewhere exceed h. For if σ nowhere exceeded h, then I = 0, σ = σ̄, and we have a contradiction in the fact that σ̄ itself somewhere exceeds h. Further, σ is a decreasing function of λ. For if an increase in λ led to an increase in σ, then by relation (17) I would increase and by relation (16) σ would decrease, which is a contradiction. To suppose that σ decreases as h increases leads likewise to a contradiction, so that σ is an increasing function of h. If σ̄ is everywhere increased by an additive constant, σ also is increased by an additive constant, but the increase of σ is less than that of σ̄. If σ̄ and also h are increased by the same multiplicative factor k, then σ also is increased by the same factor k. But if σ̄ alone, and not h, is so increased, the resulting σ is everywhere less than k times the original σ. All these properties are intuitively evident from the character of the mechanism.

In general, this type of mechanism, in which the excitation at any point is dependent upon the total distribution of excitation, is highly suggestive of Gestalt-phenomena (Rashevsky, 1938, chap. xxxii). Applications of mechanisms involving parallel interconnected neurons will be made in chapter ix to discriminal sequences of stimulus and response, and again in chapter xii to the perception of color.

IV

THE DYNAMICS OF SIMPLE CIRCUITS

If the terminus of a single neuron is brought into coincidence with its origin, or the final terminus of a chain into coincidence with the initial origin, the result is a simple circuit. Circuits are of common occurrence in the central nervous system, and, in fact, Lorente de No (1933) asserts that for every neuron or chain of neurons passing from one given cell-complex to another given cell-complex, there is also a neuron or a chain of neurons passing in the reverse direction. O'Leary (1937) notes the frequent occurrence of circuits in the olfactory cortex of the mouse. A circuit composed of only excitatory fibers may have the effect of prolonging a state of activity after the withdrawal of the stimulus, of enhancing the activity due to a protracted but weak stimulus, or, perhaps, of providing a permanent reservoir of activity through perpetual self-stimulation. Thus Kubie (1930) has discussed their possible role in the production of spontaneous sensations and movements. Prolongation and enhancement will not, of course, occur when one member of the circuit is inhibitory, but besides the possible modulating effects that such circuits might have, they provide, perhaps less obviously, for the possibility of regular fluctuations in the response to a persistent, constant stimulus.

Fluctuation, prolongation and enhancement, permanent reservoirs of activity, are all more or less directly observable within the central nervous system. Whether any or all of these can be attributed to mechanisms of precisely this type is a question to be decided by the comparison of experiment with theory. We proceed therefore to develop some of the consequences to be expected if this is indeed the case.

The simplest circuit is that formed by a single self-stimulating neuron (Landahl and Householder, 1939) of the simple excitatory type. The total stimulus acting upon the neuron at any time consists of a part σ = ε, due to the activity of the neuron itself, and of a part S coming from other neurons or receptors. If we may disregard the conduction time (this is always quite small), let

ξ = S − h , (1)

and consider φ as a function of the excess of stimulus over threshold; then equation (1) of chapter i takes the form

dε/dt = a [φ(ξ + ε) − ε] . (2)



If the outside stimulus is constant, the only case we shall consider, this differential equation can be solved by a quadrature,

∫_{ε_1}^{ε} dε / [φ(ξ + ε) − ε] = at , (3)

to obtain t as a function of ε:

t = T(ε) . (4)

We must then solve this equation for ε as a function of t. However, certain properties of this solution are obtainable directly from a consideration of the form of equation (2).

We recall that for ξ + ε positive, φ and its first derivative are positive, with the derivative decreasing monotonically to zero. For ξ + ε negative φ is identically zero. Suppose first that φ′(0) ≤ 1. Then, since ε is always non-negative, the equation

ε = φ(ξ + ε) (5)

has always a single root ε_0, which may be zero (Figure 1). For ε > ε_0 the right member of equation (2) is negative and ε is decreasing; for ε < ε_0 the right member of equation (2) is positive and ε is increasing. Hence ε = ε_0 represents a stable equilibrium. Whenever ξ ≤ 0, ε_0 = 0. Whenever ξ > 0, then ε_0 > 0, and enhancement of σ in the amount ε_0 results, but after withdrawal of the stimulus, when ξ = −h, the neuron comes to rest.

If φ′(0) > 1, equation (5) has a single root ε = ε_0 > 0 when ξ > 0, the single root ε = 0 when ξ < 0 and numerically large, two positive roots besides the root ε = 0 when ξ < 0 and numerically small (Figure 2), and one positive root besides the root ε = 0 when ξ = 0. If


Figure 2

ε is eliminated from equation (5) and

φ′(ξ + ε) = 1 ,

and the resulting equation solved for ξ = ξ_0, then ξ_0 < 0, and for ξ_0 < ξ < 0 we have the case for which equation (5) has two distinct positive roots.

Let ε_0 represent the greatest of the roots (possibly zero). Then ε_0 represents a stable equilibrium of equation (2). Since we can have ε_0 > 0 even for ξ < 0 (if also ξ > ξ_0), it is possible for the activity to persist even after the withdrawal of the stimulus, when ξ = −h, provided −h > ξ_0, and provided the initial value ε = ε_1 at the time of withdrawal of the stimulus exceeds the smaller, unstable, positive equilibrium obtained from equation (5) when ξ = −h.

But whatever the value of φ′(0), if ε_1 exceeds the threshold h at the time the stimulus is withdrawn, some activity will continue for a time, if not permanently. In order to account for learning in terms of activated circuits, the continuation must be permanent or nearly so (cf. chap. xi). Very likely a number of circuits would be involved in any act of learning, in which case forgetting could be accounted for as a result of the gradual damping out of one after another because of extraneous inhibition. In order to determine the period of the continuation where it is not permanent, it is necessary to know something about how the applied stimulus S disappears. If S suddenly drops to zero, then the time required for the activity to die out is given by equation (3) with ξ = −h and the upper limit ε of the integration equal to +h. But if S is itself an ε from another neuron, a new set of equations must be written down and solved.
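These alternatives are easily exhibited by integrating equation (2) numerically. The sketch below uses an invented φ with φ′(0) = 2 > 1: a strong, sustained stimulus leaves the neuron in permanent self-sustained activity after withdrawal, while a weak, brief one does not.

```python
import math

def phi(u):
    """Illustrative activity function with phi'(0) = 2 > 1."""
    return 2.0 * math.tanh(u) if u > 0.0 else 0.0

def final_eps(xi_on, t_on, h=0.5, a=1.0, dt=0.01, t_off=20.0):
    """Euler integration of (2): stimulus excess xi_on for t_on time units,
    then withdrawal (xi = -h) for t_off more; returns the final epsilon."""
    eps = 0.0
    for _ in range(int(t_on / dt)):
        eps += dt * a * (phi(xi_on + eps) - eps)
    for _ in range(int(t_off / dt)):
        eps += dt * a * (phi(-h + eps) - eps)
    return eps
```

With these constants the withdrawn equation ε = φ(ε − h) has an unstable root near 1.2 and a stable one near 1.6; a trajectory left above the unstable root settles on the stable one, while one left below it decays to rest.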

If a circuit is formed by a single inhibitory neuron, the behavior is described by the equation


dj/dt = b [ψ(ξ − j) − j] . (6)

(6)

Then ψ > 0 only if ξ > 0, but in this case there is always a single, stable equilibrium. The result is that the applied S is decreased at equilibrium by a certain amount j_0. Also j_0 increases as S increases, although if ψ has a finite asymptotic value, j_0 cannot exceed this, whatever the value of ξ. We may note, however, that the presence of additional circuits of this kind, with higher thresholds, which add their effects with increasing S, would provide an effective damping mechanism over an arbitrarily large range.

Consider next a two-neuron circuit, with one neuron passing from s_1 to s_2, and the other from s_2 to s_1. Suppose, first, that these are both excitatory, and, for simplicity, that they are identical in character. Let ξ_1 be the excess of S_1 over the threshold of the neuron originating at s_1, let ε_1 represent the excitation produced here by the other neuron, and let ξ_2 and ε_2 represent the corresponding quantities at s_2. Then, still neglecting the conduction time, we have

dε_1/dt = a [φ(ξ_2 + ε_2) − ε_1] ,
dε_2/dt = a [φ(ξ_1 + ε_1) − ε_2] . (7)

If it happens that ξ_1 = ξ_2 and that the initial values of the ε's are equal, then it follows from symmetry that ε_1 = ε_2 for all t, and the pair of equations (7) can be replaced by a single equation of the form (2). In the general case, for any ξ_1 and ξ_2, an equilibrium is

Figure 3

determined by an intersection of the curves in the (ε_1, ε_2)-plane defined by the two equations

ε_1 = φ(ξ_2 + ε_2) ,  ε_2 = φ(ξ_1 + ε_1) . (8)


For any point to the right of the first curve ε_1 is decreasing, and for any point above the second curve ε_2 is decreasing. In case φ′(0) ≤ 1, the two curves have always one and only one intersection; at this both ε's are positive only if at least one ξ > 0 and the other is not too small, and they define a stable equilibrium. In case φ′(0) > 1 there will be one or three intersections. If there are three, one of these is always the origin, and this is always a stable equilibrium; if there is only one, this equilibrium is always stable and it may be the origin. In the former case, the intersection farthest from the origin is also stable. In particular, continuous activity following the withdrawal of the outside stimuli can occur only in circuits for which φ′(0) > 1, and then only in case at least one of the initial ε's has become sufficiently large. More detailed discussions of this type of circuit, in which an exponential form is assumed for the functions φ but these are not assumed to be identical, have been given by Rashevsky (1938) and Householder (1938b). Rashevsky (1938) has introduced these circuits in his theory of conditioning (cf. chap. xi).

Circuits containing both excitatory and inhibitory neurons are somewhat more interesting, because of the possibility of periodical phenomena (Landahl and Householder, 1939; Sacher, 1942). The scratch-reflex is one of numerous examples of a repetitive or fluctuating response to a stimulus. Consider the case of a single inhibitory and a single excitatory neuron, both of which originate and terminate at the same place, s. A self-stimulating neuron of mixed type may be regarded formally as a special case. For a mixed neuron, it is common to assume (Rashevsky, 1938) that the functions φ and ψ have a constant ratio for all values of their common argument, and we shall make this assumption for simplicity. Then the equations may be written

dε/dt = A E(ξ + ε − j) − aε ,
dj/dt = B E(ξ + ε − j) − bj , (9)
E = aφ/A = bψ/B .

By dividing out a suitable factor from E and incorporating it into A and B , we may suppose without making any restrictions that

E′(0) = 1 . (10)

Now if it should happen that a = b, we could subtract the second of these equations (9) from the first, replace ε − j everywhere by σ, and have a single equation of the same form as equation (2). Hence we suppose a ≠ b. The pair of equations (9) in ε and j can be reduced to a single second-order equation in σ as follows. Differentiate equations (9) once each, and the equation

σ = ε − j (11)


twice. There result then seven equations, from which the six quantities ε and j and their derivatives can be eliminated:

σ″ + [a + b − (A − B)E′(ξ + σ)] σ′ − (bA − aB)E(ξ + σ) + abσ = 0 . (12)

Equilibrium occurs for ε_0, j_0 satisfying

E(ξ + ε − j) = aε/A = bj/B , (13)

and hence for σ_0 = ε_0 − j_0 satisfying

(A/a − B/b) E(ξ + σ) = σ . (14)

The action of the circuit is somewhat different for the two possible signs of the coefficient of E. If this is positive, this equation has the same form as equation (5). If

A/a − B/b ≤ 1 ,

there is always a single non-negative root σ_0 of equation (14). If the relation fails there is at least one, and there may be three. Suppose σ_0 is a positive root, the largest if there are more than one. Let

E(ξ + σ) = e_0 + e_1 x + e_2 x² + ··· ,  x = σ − σ_0 , (15)

where

e_0 = σ_0 / (A/a − B/b) . (16)

Then equation (12) can be written

x″ + [a + b − (A − B)e_1] x′ + [ab − (bA − aB)e_1] x + ··· = 0 , (17)

with terms of second and higher degree in x omitted.

Now at the value σ_0 considered, the slope of the left member of equation (14) must be less than one, and this, in view of the expansion (15), means that the coefficient of x in equation (17) is positive. Hence the characteristic roots of the linearized equation (17) are either complex or else real and of the same sign; if, further,

a + b > (A − B) e_1 , (18)

the real parts are both negative ; and if, finally,

(√A − √B)² e_1 < a − b < (√A + √B)² e_1 , (19)

the roots are complex. Hence if the relation (18) is satisfied, the equilibrium σ_0 is stable, and if relation (19) also holds, the approach to the equilibrium value fluctuates with a frequency ν satisfying


−16π²ν² = [a − b − (A + B)e_1]² − 4AB e_1² .

When the coefficient of E in equation (14) is negative, there is always a single non-positive root. If this is negative, let σ_0 denote it. In the transformed equation (17) the coefficient of x is always positive, but the discussion is otherwise the same as before.
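The reduction can be verified numerically: integrating equations (9) from rest must reproduce the root of (14), and the constants can be chosen so that criterion (19) predicts a fluctuating approach. The parameter values and the choice E(u) = tanh u for u > 0 (zero otherwise, so that E′(0) = 1) are our own.

```python
import math

# Hypothetical circuit constants for equations (9); here A/a - B/b < 0.
A, B, a, b, xi = 1.0, 4.0, 2.0, 1.0, 2.0

def E(u):
    """Common activity function, normalized per (10)."""
    return math.tanh(u) if u > 0.0 else 0.0

def simulate(t_end=60.0, dt=0.001):
    """Euler integration of (9) from rest; returns sigma = eps - j."""
    eps = j = 0.0
    for _ in range(int(t_end / dt)):
        drive = E(xi + eps - j)
        eps, j = eps + dt * (A * drive - a * eps), j + dt * (B * drive - b * j)
    return eps - j

def sigma_root(lo=-5.0, hi=0.0):
    """Bisection for (14): (A/a - B/b) E(xi + sigma) = sigma (root <= 0,
    since the coefficient of E is negative here)."""
    c = A / a - B / b
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if c * E(xi + mid) - mid > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these constants the simulated steady state agrees with the root of (14), and the bounds of (19) bracket a − b, so the approach is a damped oscillation.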

We consider finally a circuit consisting of an inhibitory neuron extending from s₁ to s₂ and an excitatory neuron from s₂ to s₁. The equations are

de/dt = a[φ(ξ₂ − j) − e] ,

dj/dt = b[ψ(ξ₁ + e) − j] .   (20)

There is always a single equilibrium obtained by equating to zero the right members of these equations (Figure 4). Let e₀ , j₀ represent

Figure 4

the values at equilibrium, and let neither of them vanish. Then if we set

x = e − e₀ ,   y = j − j₀

and expand, equations (20) have the form

x′ = −a(x + αy) + ⋯ ,   y′ = b(βx − y) + ⋯ ,   (21)

where −α and β are the derivatives of φ and of ψ at j₀ and at e₀ , respectively. The characteristic equation is

λ² + (a + b)λ + ab(1 + αβ) = 0 .   (22)

Since all parameters are positive, the real parts of the characteristic roots are always negative and the equilibrium is stable. If, further

THE DYNAMICS OF SIMPLE CIRCUITS 29

(a − b)² < 4abαβ ,

the roots are complex and the approach to equilibrium is fluctuating with a frequency ν satisfying

−16π²ν² = (a − b)² − 4abαβ .

In this circuit it is plain that permanent activity is only possible when ξ₂ > 0. Thus the simplest circuits which exhibit fluctuation are those consisting of one excitatory and one inhibitory neuron, and a circuit so constituted can maintain permanent activity in the absence of external stimulation only if both neurons originate and terminate at the same synapse and A/a > B/b. This is, of course, quite evident intuitively.
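The damped approach to equilibrium in this circuit can likewise be sketched numerically. In the fragment below, the threshold-linear form of φ = ψ and the values ξ₁ = 0.5, ξ₂ = 2, a = b = 1 are illustrative assumptions; with them the fixed point of equations (20) is e₀ = 0.75, j₀ = 1.25, and since (a − b)² = 0 < 4abαβ the roots of (22) are complex, so the approach spirals in.

```python
def phi(u):
    # assumed threshold-linear phi = psi: slope 1 above a zero threshold
    return u if u > 0.0 else 0.0

def simulate_ei(xi1=0.5, xi2=2.0, a=1.0, b=1.0, dt=1e-3, t_end=30.0):
    # equations (20): inhibitory neuron s1 -> s2, excitatory neuron s2 -> s1
    e = j = 0.0
    for _ in range(int(t_end / dt)):
        de = a * (phi(xi2 - j) - e)
        dj = b * (phi(xi1 + e) - j)
        e += de * dt
        j += dj * dt
    return e, j

e_eq, j_eq = simulate_ei()   # spirals in to (0.75, 1.25)
```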

THE GENERAL NEURAL NET

If the response of the organism can be expressed as some function of the stimulus, this function must depend upon whatever parameters are required for describing the structure of the nervous system. The psychologists can tell us much about the empirical character of this function but nothing about the parameters. The anatomists and physiologists can tell us many things about the parameters. Our hope is for a synthesis of the results of both lines of endeavor.

If we knew all about the structure, we might hope to devise methods for deducing the function. Actually, with complex structures, this becomes exceedingly difficult, though we have done this for structures of some very simple types. If we knew all about the function, empirically, we might hope to deduce some of the characteristics of the structure. However, there is never a single, unique structure, but many possible ones, all leading to a function of the same empirical characteristics. And, of course, we do not know all about either the structure or the function, but only some things about each.

Certainly the structure of the complete nervous system can be no less complex than the behavior which is an expression of it, and any Golgi preparation of a section from the retina or the cortex abundantly exhibits such complexity. We have already mentioned one of two general principles concerning this structure first stated by Lorente de Nó in a paper which appeared in the Archives of Neurology and Psychiatry, Vol. 30 (1933), pp. 245 ff. His statement of these is as follows:

"Law of Plurality of Connections. — If the cells in the spinal or cranial ganglia are called cells of the 'first order' and the following ones in the transmission system cells of the second, third to ••• nth. order, it can be said that each nucleus in the nervous system always receives fibers of at least n and n + 1 order, and often of n , n + 1 and n + 2 order."

"Law of Reciprocity of Connections.— If cell complex A sends fibers to cell or cell complex B , B also sends fibers to A , either direct or by means of one internuncial neuron."

In chapter ii we assumed the function φ or ψ for each neuron to be linear with S between the threshold and a certain maximal value characteristic of the neuron, and elsewhere constant. The coefficient


of the linear variation we called the activity-parameter. We found that if a chain of n neurons leads from a synapse s₀ to a synapse sₙ, and if fixed stimuli S₁ , ⋯ , Sₙ are applied at the synapses s₁ , ⋯ , sₙ, then the total excitation yₙ = σₙ + Sₙ present at sₙ is expressible as a linear function of the total y₀ present at s₀ provided y₀ lies between certain positive fixed limits, and is otherwise constant. In special cases the limits are equal and yₙ is independent of y₀, sₙ being said to be inaccessible to s₀. When sₙ is accessible to s₀, then the relation

yₙ = Sₙ + Σ + Ay′ when y₀ < y′ ,

yₙ = Sₙ + Σ + Ay₀ when y′ ≤ y₀ ≤ y″ ,   (1)

yₙ = Sₙ + Σ + Ay″ when y₀ > y″ ,

obtained in chapter ii for a chain, differs formally from that for a single neuron only by the presence of the term Σ. But if we set

Z = Sₙ + Σ ,   (2)

we have more simply

yₙ = Z + Ay₀   (3)

when y₀ lies between the stated limits, and when it does not the nearest limit appears in this equation in place of y₀. The similarity to the behavior of a single neuron is now complete, the term Z corresponding to the stimulus applied at the terminus. However, this term, as well as the limits y′ and y″, depends here upon the particular stimuli applied at the various synapses of the chain.

In the discussion of more general types of net, only the occurrence of circuits can present essential complications and hence we limit ourselves to these, considering first the case of a simple circuit of n neurons. Such a circuit is obtained by closing a chain of n neurons, bringing sₙ and s₀ into coincidence. But if sₙ is inaccessible to s₀ before the closure then the closure makes no change in the value of yₙ. Hence we suppose sₙ accessible to s₀.

Following Pitts (1942a) in substance, we find it convenient to employ semi-dynamical considerations, taking into account the conduction-time. Let us introduce as the time-unit the time required for a nervous impulse to traverse the circuit completely. Having defined in chapter ii the Σ and A employed in equation (1), we shall have no further occasion to refer to the parameters of the individual fibers, or to the y at any point except s₀ = sₙ, wherefore it is legitimate to drop all subscripts as designations of neurons and synapses. Further, it increases somewhat the generality without adding essential complications to allow the stimulus at this point during the interval 0 ≤ t < 1 to be different from the constant value to be assumed thereafter. Then, if y₀ is now taken to represent the value of y during this initial interval, we have

y(t) = y₀   (0 ≤ t < 1),

y(t + 1) = Z + Ay(t)   (t ≥ 0),   (4)

when y(t) lies between the limits y' and y", and when this is not the case the nearest limit replaces y(t) in the latter equation.

Now it is clear from the nature of the mechanism that the following possibilities are exclusive and exhaustive:

i) y(t) approaches asymptotically a value y∞ on the interval from y′ to y″.

ii) y(t) reaches and remains constant at a value in excess of y″ or else below y′.

iii) y(t) ultimately alternates between two fixed values.

If we set

t = ν + τ   (0 ≤ τ < 1),   (5)

where ν is an integer, then when A ≠ 1, the solution of the difference-equation (4) has the form

y(ν + τ) = Z(1 − A^ν)/(1 − A) + y₀A^ν = y∞ + (y₀ − y∞)A^ν ,   (6)

where

y∞ = Z/(1 − A) ,   (7)

until y falls outside the interval from y′ to y″. Hence case (i) occurs provided |A| < 1 and

y′ ≤ y∞ ≤ y″ .   (8)
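The three possibilities enumerated under (i)-(iii) are easily exhibited by iterating the difference-equation (4) directly; in the sketch below the parameter values are illustrative assumptions, not taken from the text.

```python
def iterate(y0, Z, A, ylo, yhi, steps=60):
    # equation (4): y(t + 1) = Z + A*y(t), with y(t) replaced by the
    # nearest limit whenever it falls outside [ylo, yhi]
    ys = [y0]
    for _ in range(steps):
        y = min(max(ys[-1], ylo), yhi)
        ys.append(Z + A * y)
    return ys

ys_i   = iterate(0.1, 1.0,  0.5, 0.0, 5.0)  # |A| < 1: approaches y_inf = 2
ys_ii  = iterate(0.1, 1.0,  2.0, 0.0, 5.0)  # A > 1: passes y'' and stays constant
ys_iii = iterate(0.6, 2.0, -2.0, 0.0, 5.0)  # A < -1: ends in alternation
```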

If A > 1, the interval between y(t) and y∞, wherever the latter may be, continues to increase while y(t) lies on the interval, and after having passed either limit (which will occur in a finite time) it remains constant. If A < −1, the interval y − y∞ increases numerically but with alternating sign until, after a finite time, one limit or the other is passed. Thereafter, if equation (8) is satisfied without the equality, (iii) occurs, while if equation (8) fails or if an equality holds, then (ii) occurs. When A = −1, alternation between fixed values starts immediately if relation (8) is satisfied without the equality, and otherwise (ii) occurs. Finally, if A = 1, it is evident from the difference-equation (4) itself that the y(ν) form an arithmetic progression until one limit is passed, the later terms being identical.

Additional essential complications are involved in the discussion of nets consisting of two or more circuits. However, certain simplifications can be performed at once. We wish to determine y at each synapse. But if any synapse is the origin of only one and the terminus of only one neuron, the two neurons constitute a chain, and after the Σ is determined for this chain, this synapse requires no further consideration. Again, let a neuron N form a synapse with two or more neurons N₁ , N₂ , ⋯ . The results are the same if we suppose N replaced by two or more neurons N₁′ , N₂′ , ⋯ , with identical properties, all originating at the origin of N, but N₁′ forming a synapse with N₁ alone, N₂′ with N₂ alone, ⋯ (Pitts, 1942b). Thus the only synapses requiring separate consideration are those at which two or more neurons terminate. Each of the synapses of the set under consideration is the terminus of two or more chains which originate at other synapses of the set, and, the distribution of stimulation being fixed, each chain is characterized by the values of its set of four parameters. In case the terminus of any of these chains is inaccessible from its origin, the σ which it produces is calculable independently of the value of y at its origin, and we may delete this chain and add this σ to the S at the terminus. If there happens to be only one other chain terminating there, this can be combined with the chain or the chains originating there and the synapse dropped out of the set being considered. We therefore suppose that each synapse is accessible to the origin of every chain which terminates there. There is also the possibility that when the S applied at any synapse is increased by the maximum σ that can be produced together by all the chains which terminate there, this is still below the y′, or that the S increased by the minimal σ of all together is above the y″, of some chain which originates there. If so, the σ produced by this chain can be calculated at once, the result added to the S applied there, and the chain deleted.
It is clear, of course, that these deletions, which are made possible by the inaccessibility of one synapse to another, will be different for different distributions of stimulation.

Having performed these simplifications, we suppose, before passing on to the most general case, that only one synapse remains. The resulting net, which consists of a number of circuits all joined at a single common synapse, we shall call a rosette (Pitts, 1943a), and the common synapse we shall call its center. Now if the conduction-time is not the same around all these circuits, we may nevertheless, with sufficient accuracy, regard these times as commensurable, and we shall use their common measure as the time-unit. Let n be the number of circuits, and let μᵢ be the conduction-time of the i-th circuit.

Now consider the contribution of the i-th chain to the stimulus y at s at any time. If y(t) is the total stimulus at time t, then the contribution at time t + μᵢ is

Sᵢ + Aᵢy(t) when yᵢ′ ≤ y(t) ≤ yᵢ″ ,   (9)


the y(t) in that expression being otherwise replaced by the nearest limit. Let us introduce quantities αᵢ(t), βᵢ(t) defined as follows:

αᵢ(t) = 1 when y(t) ≥ yᵢ′ ,   αᵢ(t) = 0 when y(t) < yᵢ′ ,

βᵢ(t) = 1 when y(t) ≤ yᵢ″ ,   βᵢ(t) = 0 when y(t) > yᵢ″ .   (10)

Then the contribution of the i-th chain to y(t + μᵢ) may be written

Sᵢ + Aᵢ{αᵢ(t)βᵢ(t)y(t) + [1 − αᵢ(t)]yᵢ′ + [1 − βᵢ(t)]yᵢ″} .

If we introduce the operator E defined by

Ey(t) = y(t + 1),

let μ represent the largest of the μᵢ, and set μ̄ᵢ = μ − μᵢ, then we have finally

E^μ y(t) = Z + Σᵢ Aᵢ E^{μ̄ᵢ} {αᵢ(t)βᵢ(t)y(t) + [1 − αᵢ(t)]yᵢ′ + [1 − βᵢ(t)]yᵢ″} .   (11)

The functions α and β are constant except when y crosses one of the boundaries y′ or y″ associated with the corresponding chain. Hence the difference-equation (11) can be solved on the assumption that the α's and β's are constant, and the solution is valid as long as it lies on the particular interval associated with the assumed values of the α's and β's.

If the numbers yᵢ′ and yᵢ″ are arranged in order, they limit at most 2n + 1 intervals (two of them infinite), and each interval is associated uniquely with a particular set of values αᵢ , βᵢ. No other set of the αᵢ , βᵢ is possible. Associated with each of these sets αᵢ , βᵢ is a unique value of y satisfying

[1 − Σᵢ Aᵢαᵢβᵢ] y = Z + Σᵢ Aᵢ[(1 − αᵢ)yᵢ′ + (1 − βᵢ)yᵢ″]   (12)

provided the coefficient of y is non-null. This defines a possible equilibrium of the difference-equation. However, if this value of y does not lie on the associated interval, then no equilibrium for the general equation (11) exists on this interval. If, for the set αᵢ , βᵢ, the solution y of equation (12) does lie on the associated interval, the solution y(ν + τ) of the difference-equation (11) corresponding to the αᵢ , βᵢ equal to these constant values differs from this constant value y by a sum of terms of the form p(ν)a^ν, where p(ν) is a polynomial in ν multiplied, possibly, by a sine or a cosine, and a is a real


root or the modulus of a complex root of the equation

x^μ − Σᵢ Aᵢαᵢβᵢ x^{μ̄ᵢ} = 0 .   (13)

As before, ν is an integer for which

t = ν + τ   (0 ≤ τ < 1).

Hence the equilibrium is unstable unless every root of equation (13) has a modulus less than unity.

In case for any of the intervals the coefficient of y in equation (12) vanishes, this equation has no solution unless the right member also vanishes. But then equation (13) has a root unity and the corresponding solution of (11) involves a simple polynomial of non-null degree, so that no stable equilibrium occurs. Thus, in brief, in order for any interval to possess a stable equilibrium, it is necessary and sufficient that the solution y of equation (12) obtained from the associated set αᵢ , βᵢ shall lie on this interval, and that the equation (13) shall have every root of modulus less than unity. Fluctuating equilibria of the sort met with in the simple circuit are here possible, and also another sort arising from possible complex roots of the characteristic equation (13) and leading to terms involving sines and cosines.

In the general case, let the synapses sᵢ and the chains CK be separately enumerated, and let us define two sets of quantities PjK and QjK as follows:

PjK = 1 if sⱼ is the origin of CK ,

PjK = 0 if sⱼ is not the origin of CK ,

QjK = 1 if sⱼ is the terminus of CK ,

QjK = 0 if sⱼ is not the terminus of CK .

All simplifications as described above having been made, we can define for each synapse sᵢ a quantity

Zᵢ = Sᵢ + Σ_K QiK ΣK ,

ΣK being the constant term (the Σ of equation (2)) of the chain CK,

and for each chain CK the sets of quantities αK , βK with values 0 or 1 according to the value of y at the origin of CK, relative to yK′ and yK″. Then the difference equations satisfied by the yᵢ have the form (Pitts, 1943a)

E^μ yᵢ = Zᵢ + Σ_K QiK AK E^{μ̄K} {(1 − αK)yK′ + (1 − βK)yK″ + αK βK Σⱼ PjK yⱼ} .   (14)

If there are nᵢ chains originating at sᵢ, the α's and β's associated with these chains are able to take on at most 2nᵢ − 1 different sets of values, and there are therefore at most Π(2nᵢ − 1) sets of values for them altogether. Each set of values is associated with a region in y-space which may contain a single point whose coordinates yᵢ represent a steady-state of the net. For this to be so (the equilibrium being stable) two conditions must be fulfilled: The constant yᵢ defined by equations (14) when the αK and βK are given these values and the operator E is taken to be the unit operator must define a point which lies in this region; and a certain algebraic equation (the characteristic equation of these difference-equations) must have only roots whose moduli are less than one. In principle, therefore, the steady-state activity of nets of any degree of complexity can be determined, though admittedly the procedure could become exceedingly laborious. Thus given three synapses, joined each to each by a total of six chains, 27 regions in y-space may exist and require separate consideration as possible locations of equilibria. Moreover, persistent fluctuations may arise, no steady-state being approached at any time.

While the solution of the direct problem of describing the output of any given net is complete, at least in principle, the general inverse problem is still open. However, in the special case where the output function is such that the α's and β's remain constant, Pitts (1943a) has shown how to construct a rosette to realize this function.

This concludes our purely formal discussion of neural structures, and we turn now to some special structures and their possible relation to concrete types of response.

PART TWO

VI

THE DYNAMICS OF THE SINGLE SYNAPSE: TWO NEURONS

Thus far we have been concerned with the formal development of methods for determining the activity of structures composed of neurons. We shall now attempt to make application of the theory and method to experimental problems. Two paths are open to us. We could, on the one hand, examine specific neural structures, seeking to determine for each the response which it mediates as a function of the stimulus, or we might start with this function and attempt to construct a suitable mechanism. In this and in succeeding chapters we follow the first course. In the final chapters of this Part II we follow the second course. The immediate problem is considered solved if, from the theoretical structure, quantitative relations are derived which agree with the experimental data within a suitable margin of error for some range of the variables in question, and if the number of parameters is not too large. Now many of the parameters may be explicit functions of certain other variables which have been kept constant throughout the experiment. Thus in many cases, the structure will have different properties when the constants of the experimental situation are changed. In these cases we may say that the structure studied makes predictions regarding activity outside the domain of activity intended to be covered. Such predictions may suggest an experimental approach not otherwise evident. If the predictions are borne out, the theory is immediately extended in its application. If not, the structure must be extended or revised in such a manner as to include the old as well as the new properties.

Thus on whichever course we set out, whether working from mechanism to behavior, or from behavior to mechanism, we are led finally into the second course when extensions are required. In this we are guided by a consideration of the elements without which there could be no correspondence between the activity of the structure and the activity observed. If certain of the observed elements interact, then the elements of the structure must be inter-connected. If the action is unilateral in the experimental situation, a unilateral connection may suffice. If the observed activity depends on the order of the events of the past, the structure must contain elements which exhibit this property of hysteresis. Thus one is limited to a considerable extent in the choice of mechanisms to be studied.



For first application we choose the simplest structures, working gradually to those of increasing complexity. We shall find in the present chapter how a very simple mechanism will serve for the interpretation of such superficially different sorts of data as those concerning the occurrence and duration of a gross response, just-discriminable intensity-differences, adaptation-times, and fusion-frequencies in vision and perhaps other modalities. In general, even where the structure is relatively simple, it is not possible to solve in closed form the equations resulting from this structure. Thus certain restrictions upon the parameters may have to be introduced in order to obtain a workable, even if approximate, solution. As the choice of the restrictions is somewhat arbitrary, one should keep in mind that other equally plausible restrictions could lead to different results and could increase both the accuracy and the scope of the theory.

The simplest structure which can be studied is a single neuron, and the simplest assumption that can be made about its activity is that it is of the simple excitatory type. Its activity is determined when we have evaluated e(t) for any S. However, one does not observe e but some response R. Thus the simplest structure in which we can deal with observed quantities is a chain of two neurons, the first being acted upon by some stimulus S, and the second, which may be a muscular element, capable of producing some response R. The response R is produced as soon as e reaches the threshold of the second neuron. Hence if we set h = e(t) and solve for t, then since the function e(t) depends upon S through the function φ(S) (chap. i, equation 1), we obtain the reaction-time t₁(S) as a function of the intensity S, this time being measured from the application of the stimulus until e reaches the threshold. For this purpose we use φ as given by equation (6) of chapter i and assume

t₁ = −(1/a) log(1 − h/(β log(S/h₁))) ,   (1)

where h₁ is the threshold of the afferent neuron.

This relationship should apply to an experiment on a simple reflex in which a stimulus of intensity S produces a response R after a time tᵣ. However, the total time tᵣ from the application of the stimulus until occurrence of the response as registered by the timing-device involves, in addition to t₁(S), also a time t₀ which measures the time for conduction plus the time required for the muscular response to affect the recording instrument plus any other time of delay which does not depend appreciably upon S. We may then expect the equation

tᵣ = t₀ − (1/a) log(1 − h/(β log(S/h₁)))   (2)

to represent the experimentally determined relation between tᵣ and S. The extent to which this does so in some cases for which data are available may be seen in Figures 1, 2, and 3, where experimental data

(Plot for Figure 1: Berger and Cattell visual data; ○ subject B, ● subject C; abscissa, stimulus intensity S.)

Figure 1. — Comparison of theory with experiment: dependence of delay of reaction upon intensity of the stimulus for visual stimuli. Curves, theoretical predictions by equation (2) ; points, experimental. (Visual data from Cattell, 1886.) Abscissa, intensity (on logarithmic scale) of stimulus; ordinate, interval between presentation of stimulus and occurrence of response.

(points) and theoretical predictions (curves) are shown for each of a number of rather different types of stimuli. The details are given in the legends.

In general, we cannot expect that the chain from receptor to effector involved in the reflex will contain so small a number of elements. But certainly this demands first consideration since it is the simplest possible mechanism. And even if the chain were known to contain a larger number of neurons, the slowest synapse in the series will tend to govern by itself the temporal form of the response, so that if the remaining synapses are relatively fast, the equation just deduced will still provide an adequate description of the experimental situation.

If a stimulus S is presented for too short a time t, e will not have reached the threshold h at the end of this time. But for a given S there is a minimal t = t̄ at which time e = h. This t̄(S) is the minimal period of stimulation with the intensity S which just suffices to produce the response. We obtain for this an equation similar to


Figure 2. — Comparison of theory with experiment: dependence of delay of reaction upon intensity of the stimulus for auditory stimuli. Curves, theoretical predictions by equation (2) ; points, experimental. (Auditory data from Pieron, 1920). Abscissa, intensity (on logarithmic scale) of stimulus; ordinate, interval between presentation of stimulus and occurrence of response.


Figure 3. — Comparison of theory with experiment: dependence of delay of reaction upon intensity of the stimulus for gustatory stimuli. Curves, theoretical predictions by equation (2); points, experimental. (Gustatory data from Pieron, 1920.) Abscissa, intensity of stimulus; ordinate, interval between presentation of stimulus and occurrence of response.


equation (2), but with t₀ = 0 and tᵣ replaced by t̄. Thus from a consideration of a chain of two neurons one should expect that if all other conditions remain unchanged the same relationship should hold in both cases, except that t₀ would be absent in this case. Since the two cases are experimentally distinct, it may be that the results from the two types of experiments are widely divergent. If so, it may be necessary to assume that there are several neurons in the chain or even circuits in the structure. In any case, the kind of disagreement may suggest the nature of the change to be made in the neural net.

Let us consider another special case of a chain of two neurons. Let the afferent neuron be of the mixed type with φ = ψ, a > b, and threshold h₁. A constant S > h₁ applied to such a neuron results in a σ (= e − j) which is positive, but which vanishes asymptotically. Then, the stimulus being presented at t = 0 when e = j = 0, if S is large enough, and h not too large, σ will first reach the value h at some time tᵣ. From this one can determine a relation tᵣ(S) similar to that of equation (1). If S is maintained at a constant value for a sufficient time, σ reaches a maximum and declines. Let tᵣ + τ be the time at which σ returns to the value h. Then we can determine the duration τ of the reaction as a function of the intensity S when e = j = 0 initially.

At t = t* > tᵣ + τ let S be replaced by S + ΔS, where ΔS may take on any positive value. Negative values of ΔS would be of interest only if tᵣ < t* < tᵣ + τ. Then for t > t* we may write

σ = φ(S)[e^{−a(t−t*)} − e^{−b(t−t*)} + e^{−bt} − e^{−at}] + φ(S + ΔS)[e^{−b(t−t*)} − e^{−a(t−t*)}] .   (3)

If ΔS is large enough, σ will again reach h at some time t = t* + tᵣ′ and the response will again be initiated. By setting σ = h in equation (3), we obtain tᵣ′(ΔS, S, t*) from the smaller root, t. For t* ≫ 1/b and tᵣ′ ≪ 1/b, we may obtain an equation for tᵣ′ which shows the time tᵣ′ to depend only upon ΔS/S, and not upon h₁.

At some time t = t* + tᵣ′ + τ′ the response will again cease. Using the larger root of σ = h in expression (3) we may determine the duration τ′(ΔS, S, t*) of the response. For t* ≫ 1/b and τ′ ≫ 1/a, we may write

τ′ = (1/b) log[f log(1 + ΔS/S)] − tᵣ′ ,   (4)

f being a constant. For fairly large values of ΔS/S we may neglect tᵣ′ in equation (4). The equation thus makes a definite testable prediction as to the nature of the relation between the duration of the response and the relative increase of the stimulus.


If we let t* be large and if, for a fixed S, we restrict ΔS to the least value it can have while τ′ remains finite (i.e., the two roots tᵣ′ and tᵣ′ + τ′ of equation (3) coincide), we obtain, if we use equation (6) of chapter i,

ΔS/S = δ = δ₀ ,   (5)

δ₀ being a constant. That is, when a constant stimulus of intensity S > h₁ has been applied for a long time, the smallest additional stimulus ΔS necessary to produce the response must be a constant fraction of the intensity of the original stimulus S. On the other hand, if S < h₁ we have

δ = (δ₀ + 1)h₁/S − 1 .   (6)

Thus δ(S) decreases hyperbolically from ∞ to δ₀ as S varies from zero to h₁; thereafter δ is a constant. The quantity δ of equations (5) and (6) is essentially a Weber ratio, and its variation with S as described in the above equations has the chief qualitative properties of the experimental relation for most types of stimuli. This problem will be discussed in more detail subsequently (chap. ix).
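Equations (5) and (6) together give the Weber ratio as a function of the standing intensity, which the following fragment evaluates (the values of δ₀ and h₁ are illustrative assumptions):

```python
def weber_delta(S, delta0=0.05, h1=1.0):
    # equations (5) and (6): delta is constant for S >= h1 and
    # rises hyperbolically as S falls below h1
    if S >= h1:
        return delta0
    return (delta0 + 1.0) * h1 / S - 1.0
```

At S = h₁ the two branches join continuously at δ₀, and below h₁ the ratio grows without bound as S → 0.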

Suppose that instead of replacing S by S + ΔS at time t*, we remove S for a time t′ after which only ΔS is presented. This is the experimental technique for studying the processes of adaptation and recovery. Then for t > t* + t′,

σ = φ(S)[e^{−a(t−t*)} − e^{−b(t−t*)} + e^{−bt} − e^{−at}] + φ(ΔS)[e^{−b(t−t*−t′)} − e^{−a(t−t*−t′)}] .   (7)

At the time t*, we shall suppose σ < h. Hence at some time t = t* + t′ + tᵣ″, if ΔS is large enough, σ = h, and the response occurs. From this relation, together with equation (7), we can determine the reaction-time tᵣ″(ΔS, S, t*, t′) from the smaller root, and the duration τ″(ΔS, S, t*, t′) of the response from the larger root, at any stage of the process of recovery following preadaptation to the intensity S. We can further determine the minimal ΔS required for stimulation as well as the minimal interval of exposure at a given ΔS.

If t* ≫ 1/b, t′ ≫ 1/a, t′ ≫ tᵣ″ and tᵣ″ ≪ 1/b, we have

φ(ΔS)(1 − e^{−atᵣ″}) − φ(S)e^{−bt′} − h = 0 .   (8)

Thus

tᵣ″ = −(1/a) log(1 − [h + φ(S)e^{−bt′}]/φ(ΔS)) ,   (9)

so that the reaction time tᵣ″ increases with S but decreases with both ΔS and t′.


Equation (9) makes a definite prediction as to the relationship between the reaction time and the variables S, ΔS, and t′ and is thus subject to experimental test.

If, for fixed S and t′, we now restrict ΔS to the smallest value for which the response can still take place, we obtain a relation of the form

log(ΔS/h′) = e^{−bt′} log(S/K) ,   (10)

where log K = log h₁ + h/β. Thus, except for minimal t′, the logarithm of the testing stimulus ΔS is an exponentially decreasing function of the time t′ of recovery. As the time t′ becomes infinite, ΔS approaches h′. The intensity S determines by how much the ordinate is multiplied in the graph of ΔS against t′. The type of relationship between ΔS and t′ of equation (10) for the case of visual stimuli is found in the work of various investigators (cf. S. Hecht, 1920).

Suppose next, still assuming that φ = ψ, that any constant stimulus has been applied for a long time and that at t = 0 the stimulus is increased at a rate such that dφ/dt = λ. After a time t, we find that

σ = (λ/b)(1 − e^{−bt}) − (λ/a)(1 − e^{−at}) .   (11)

If λ < abh/(a − b), where h is the threshold of the second neuron, σ will never exceed h and there can be no response. This is analogous to the failure of slowly rising currents to produce excitation in peripheral nerves and corresponds to the effect, commonly experienced, that a stimulus which rises slowly in intensity often fails to evoke a response. If λ is larger, and response occurs, then from equation (11) we can determine the reaction-time tᵣ‴ by solving for t with σ = h. It is clear that the reaction-time depends very much upon the manner in which the stimulus is presented.
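The existence of this critical rate follows at once from equation (11), whose right member increases monotonically toward λ(a − b)/(ab). With the illustrative values a = 2, b = 1, h = 0.5 (assumptions, not from the text), the critical rate is abh/(a − b) = 1:

```python
import math

def sigma(t, lam, a=2.0, b=1.0):
    # equation (11): sigma produced by a stimulus rising at the rate
    # d(phi)/dt = lam, starting from a fully adapted state at t = 0
    return lam * ((1.0 - math.exp(-b * t)) / b - (1.0 - math.exp(-a * t)) / a)

h = 0.5
below = sigma(10.0, 0.9)   # lam below the critical rate: never reaches h
above = sigma(10.0, 1.2)   # lam above it: crosses h, so a response occurs
```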

Let us consider the effect of a different mode of application of a stimulus to an afferent neuron of the mixed type with φ = ψ. Let a stimulus S be given for a period of time rT followed by no stimulus for a period of time (1 − r)T. Let this be repeated indefinitely. For each successive interval we can determine e and j from the differential equation together with the requirement that e and j both be continuous. The value of σ(t) during the interval 0 < t < rT of stimulation after a large number of repetitions may be obtained as follows. Let eₙ₋₁ represent the value of e at the beginning of the n-th resting period, and e′ₙ₋₁ the value at the beginning of the n-th period of stimulation, where e′₀ = 0. We use equation (8) of chapter i, replacing e₀ by e′ₙ₋₁ and t by rT to calculate eₙ₋₁, and we use the same equation to calculate e′ₙ by setting φ = 0, replacing e₀ by eₙ₋₁, and t by (1 − r)T. When we do this we find by simple induction that

ε′ₙ = φ(1 − e^{-arT})(1 − e^{-naT}) e^{-a(1-r)T}/(1 − e^{-aT}),

and as n becomes large the exponential containing n can be neglected. The expression for j′ₙ, similarly defined, is the same with b replacing a. To obtain j(t) during the interval in question, we need only replace ε₀ by j′ₙ and a by b in the same equation (8). After taking the difference ε − j and performing elementary algebraical simplifications, we obtain finally the desired expression:
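The induction leading to the expression for ε′ₙ can be verified by iterating the stimulation and rest cycle directly. A sketch with arbitrary illustrative values for φ, a, r, T:

```python
import math

def eps_prime_closed(n, phi, a, r, T):
    """epsilon'_n from the closed-form expression obtained by induction."""
    return (phi * (1 - math.exp(-a * r * T)) * (1 - math.exp(-n * a * T))
            * math.exp(-a * (1 - r) * T) / (1 - math.exp(-a * T)))

def eps_prime_iterated(n, phi, a, r, T):
    """Same quantity by stepping through n stimulation/rest cycles,
    starting from epsilon'_0 = 0."""
    e = 0.0
    for _ in range(n):
        # stimulation for a time rT (equation (8) of chapter i)
        e = phi * (1 - math.exp(-a * r * T)) + e * math.exp(-a * r * T)
        # rest for a time (1 - r)T (same equation with phi = 0)
        e *= math.exp(-a * (1 - r) * T)
    return e

phi, a, r, T = 1.0, 1.3, 0.4, 0.7   # illustrative values only
print(eps_prime_closed(5, phi, a, r, T))
print(eps_prime_iterated(5, phi, a, r, T))
```

The two computations agree to rounding error, and for large n the factor (1 − e^{-naT}) tends to one, which is the neglect of "the exponential containing n" made in the text.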

σ = φ[ e^{-bt}(1 − e^{-b(1-r)T})/(1 − e^{-bT}) − e^{-at}(1 − e^{-a(1-r)T})/(1 − e^{-aT}) ]. (12)

Now σ reaches a maximum at t = t* given by

t* = [1/(a − b)] log { a(1 − e^{-a(1-r)T})(1 − e^{-bT}) / [b(1 − e^{-b(1-r)T})(1 − e^{-aT})] }, (13)

unless t* > rT, in which case the maximum value of σ is σ(rT). Set σ = h in equation (12) with t replaced by rT or t* according to the case. Then for given r and T, that value of S which satisfies the equation is the least stimulus that will produce a steady response when repeated in this manner.

For this r and S, let T* be the particular value of T employed. Then f* = 1/T* is a critical frequency separating response from no response for the value of S in question. Then, if φ = β log S/h₁ and H = h/β, for rT* < t* or f* > r/t*,

[ (e^{-brT*} − e^{-bT*})/(1 − e^{-bT*}) − (e^{-arT*} − e^{-aT*})/(1 − e^{-aT*}) ] log (S/h₁) = H, (14)

and for rT* > t* or f* < r/t*,

[ (1 − e^{-b(1-r)T*})/(1 − e^{-bT*}) ]^{a/(a-b)} [ (1 − e^{-aT*})/(1 − e^{-a(1-r)T*}) ]^{b/(a-b)} (b/a)^{b/(a-b)} = aH/[(a − b) log (S/h₁)]. (15)

For very large frequencies f*, we have from expression (14), for r not too near zero or unity,

f* = [(a − b)/(2H)] r(1 − r) log (S/h₁). (16)

This states that the frequency above which response fails to occur is proportional to log S/h₁ as well as to r(1 − r), the latter function being maximal at r = ½ and symmetric about r = ½. In terms of visual stimulation f* is the frequency above which the typical response to intermittent stimulation ceases, and thus f* may be identified with the critical flicker-frequency. Equation (16) then states that the critical flicker-frequency increases with the logarithm of the intensity for large f*. This is essentially the Ferry-Porter law. Furthermore, within a limited range and for a fixed stimulus-intensity, this frequency is the same for a given value of the light-dark ratio, LDR = r/(1 − r), as for its reciprocal.
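Equation (16) and the Ferry-Porter behaviour can be illustrated numerically; all parameter values below are hypothetical, chosen only so that a > b:

```python
import math

def critical_frequency(S, r, a=2.0, b=0.5, H=0.2, h1=1.0):
    """Approximate critical flicker-frequency from equation (16).
    Parameter values are illustrative, not taken from the text."""
    return (a - b) / (2 * H) * r * (1 - r) * math.log(S / h1)

# Ferry-Porter law: f* grows linearly with the logarithm of the intensity
print(critical_frequency(10, 0.5), critical_frequency(100, 0.5))
# the light-dark ratio and its reciprocal give the same f*: r(1 - r) is
# unchanged by r -> 1 - r
print(critical_frequency(50, 0.3), critical_frequency(50, 0.7))
```

Equal steps in log S produce equal steps in f*, and the symmetry under r ↔ 1 − r is immediate from the factor r(1 − r).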

For very small frequencies, we find from equation (15) that independently of r and f*, when S > h′ there results a response to flickering (intermittent) illumination, whereas when S < h′ there results no response, h′ being a constant which is the effective threshold. Thus a plot of f* against log S/h₁ begins at (log h′/h₁, 0), rises vertically at first, then flattens off while approaching a final slope which depends upon r.

The relationship between f* and r is generally determined for a constant apparent brightness given by S′ = Sr. Using the approximate expression (16) with S′ = constant, we find that f*(r) rises rapidly from zero to a maximum for r < ½ (LDR < 1) and then falls to zero for r = 1 (Figure 4).

Figure 4

The position and the height of the maximum depend upon S′. However, when r is near zero or unity, the approximations break down. Furthermore, equation (16) holds only for large enough f*. Hence the exact relation f*(r) may be of considerable complexity. For the various experimental relationships, one may consult Bartley (1941). Most of the results quoted agree with the above prediction that for constant S′, f*(r) is decreasing in r when r is just less than one-half (LDR just less than unity).

More generally, if instead of alternating a stimulus S with no stimulus, we had alternated S + ΔS with S, we should have obtained the same results with log S/h₁ replaced by log(1 + ΔS/S) unless S < h₁. From this it is clear that for a constant S + ΔS, an increase in S > h₁ decreases the critical frequency f*. Similarly, an equal increase in both S and S + ΔS decreases f*. That is, an illumination added to both phases, as from stray light, decreases the critical flicker-frequency.

Although we have referred to visual phenomena only, one may well expect that analogous properties of some other modalities also could be accounted for roughly by just such a simple mechanism as the one considered here.

We have assumed throughout that a > b and φ = ψ. For a constant stimulus this gives a σ which rises rapidly to a maximum and then subsides more slowly to zero. This resembles the "on" activity of the "on-off" fibers of the retina. Had we chosen a < b, we should find σ < 0 upon application of a constant stimulus, but on cessation of the stimulus σ would increase to a maximum and subside to zero. This resembles the activity of the "off" fibers. We should have obtained results entirely similar to those above but with (1 − r) replaced by r. If, however, we suppose elements of both types to be present at two different positions with different parameters and with some simple interaction, the complexity of our results increases greatly. Again, if we remove the restriction φ = ψ and let Rφ = ψ, with R a fraction having a value between zero and one, we find that for constant S, σ increases to a maximum and decreases to a constant value (1 − R)φ. This corresponds to the behavior of the continuously acting elements of the retina. We proceed to consider some properties exhibited by a neural element of this latter type.

We have now a chain of two neurons, the afferent member of which is of the mixed type with φ = ψ/R, 0 < R < 1. Let the stimulation again be intermittent, of frequency f = 1/T and fractional stimulation r. Let the intermittent stimulation be continued indefinitely. The value of σ at the end of each period of stimulation, that is, σ(t) for t → ∞ and t ≡ rT (mod T), can be determined in the manner described above. If θ is that value of σ divided by σ(∞) for r = 1, θ is essentially the ratio of σ at the end of a period of stimulation, for a particular f and r, to the value which σ would have if a stimulus of the same intensity were applied continuously. θ would be better defined as the ratio (σ − h)/(σ∞ − h), but we shall neglect the threshold as compared to σ. We may then write


θ = [1/(1 − R)] [ (1 − e^{-arT})/(1 − e^{-aT}) − R(1 − e^{-brT})/(1 − e^{-bT}) ]. (17)

Equation (17) gives a relation between the relative net excitation θ, the fraction r of the time of stimulation, and the frequency f = 1/T. For f = 0, θ = 1 for all r. But for f = ∞, θ = r. Furthermore, θ increases with f for small f and the height is greater for small r. The type of relationship between θ and f for various values of r is shown in Figure 5. Notice that the maximum moves to the right with increasing r. However, if instead of the θ used we had taken the average value over the interval rT, we should have obtained essentially the same results, but with the maximum moving to the left. The equation corresponding to expression (17) is, however, much more complicated. The quantity θ suggests immediately, in terms of the visual field, the relative brightness during flicker. Experiments by Bartley (1941) show essentially the same type of variation as expression (17), but there is no significant change in the position of the maximum with r on a range from ½ nearly to 1. From what has been stated above, one could probably find a simple average which would give this result.

If φ is proportional to S over the range to be considered, then multiplying S by 1/θ would make the responses equal. For the frequency f very large, θ = r; that is, S must be increased to S/r to appear the same as a continuously applied S. This is just a statement of the Talbot law.
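The two limiting cases of equation (17), θ → 1 at low frequency and θ → r at high frequency (the Talbot law), can be checked numerically; the values of R, a and b below are illustrative only:

```python
import math

def theta(f, r, R=0.5, a=2.0, b=0.5):
    """Relative net excitation of equation (17); parameters illustrative."""
    T = 1.0 / f
    term = lambda k: (1 - math.exp(-k * r * T)) / (1 - math.exp(-k * T))
    return (term(a) - R * term(b)) / (1 - R)

r = 0.25
print(theta(1e-6, r))   # very low frequency: theta near 1 for any r
print(theta(1e6, r))    # very high frequency: theta near r (Talbot law)
```

At high frequency both bracketed terms tend to r, so θ → r(1 − R)/(1 − R) = r: the intermittent stimulus appears as a continuous one reduced by the factor r.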

From this brief consideration of the dynamics of two-neuron chains we have been able to derive equations predicting quantitative relations among various experimental variables. These include the relation between reaction-time and intensity of stimulation, for given change in stimulation and period of accommodation. Similarly, the duration of response is determined in terms of these same variables. A Weber ratio is determined as a function of accommodation-time. Furthermore, a relation is determined connecting flicker-frequency, light-dark ratio and intensity. And finally, relations are determined between relative brightness and the light-dark ratio and frequency. In some cases, quantitative agreement with experiment is exhibited. In others, general qualitative agreement is obtained. It is well worth noting at this point that while, on the one hand, to the extent that the formulae are verified these manifold relations are all brought within the scope of a single unifying principle, on the other hand the discussion explicitly introduces a great many problems which the experiments only vaguely suggest.

VII

THE DYNAMICS OF THE SINGLE SYNAPSE: SEVERAL NEURONS

When a pair of afferents, instead of the single one assumed in the preceding chapter, form a common synapse with a single efferent, the resultant σ at this synapse is capable of varying with time in a much more complicated manner. We shall consider briefly two possible applications of such a mechanism, one in which both afferents are supposed to be affected by the same stimulus, one in which the stimuli are assumed to be different.

Consider first the very special case in which both afferents are stimulated by the same constant stimulus. Let one of the afferents be of the simple inhibitory type with the associated ψ₁ and b₁. Let the other be of the mixed type with φ₂, ψ₂, a₂ and b₂. Let a₂ ≫ b₁ or b₂ and let φ₂ − ψ₁ − ψ₂ = h, the threshold of the efferent. These assumptions are made to reduce the number of parameters. We employ equation (8) of chapter i, with its analogue for j. Then, a₂ being large, the term e^{-a₂t} quickly dies out so that except for very small t,

σ − h = ψ₁ e^{-b₁t} + ψ₂ e^{-b₂t}. (1)

But the frequency ν of response (chap. i) in the efferent neuron is proportional to σ − h when this is not too large. Since ψ₁ and ψ₂ are arbitrary, we may replace σ − h by ν. We can then attempt to interpret equation (1) as giving the variation with time of the frequency of the response of an efferent neuron when a constant sustained stimulus is applied to its afferents. Equation (1) is shown in Figure 1 (curve) for particular values of the constants while in the same figure are shown the results of experiments by Matthews (1931) (points). In these experiments the stretch receptors in muscle were stimulated by means of attached weights and the variation with time of the frequency of discharge was determined. Since experimentally the weights tend to sink with time, one might separate the stimulus into two parts, a constant part acting on the second neuron and a variable part acting on the first, making this one excitatory with large a. The variable part would presumably be roughly an exponentially decreasing function whose decay-constant must be equal to b₁ of equation (1). This, also, would lead to equation (1). Whether or not the actual rate of decay corresponds sufficiently with the experimentally determined decay-constant is then a question of fact. Formally, we


Figure 1. — Comparison of theory with experiment: adaptation to muscular stretch. Curve, theoretical, predicted by equation (1); points, experimental (Matthews, 1931). Abscissa, duration of stretch in seconds; ordinate, frequency of response.

obtain the same result regardless of the point of view adopted.

The structure consisting of two afferents and one efferent where the afferent N₁ is now simple excitatory leads to other interesting results when the stimulating conditions are altered. Let S₁ > h₁ and S₂ > h₂ be constant stimuli applied at times t₁ = 0 and t₂ to the neurons N₁ and N₂ respectively. Let h be, as before, the threshold of the efferent neuron. Also let ε₁, ε₂ and j₂ be zero initially. Then the efferent neuron is excited at the time t′ when σ = ε₁ + ε₂ − j₂ = h, or

φ₁(1 − e^{-a₁t′}) + φ₂[1 − e^{-a₂(t′-t₂)}] − ψ₂[1 − e^{-b₂(t′-t₂)}] = h. (2)

The value of t′ depends on S₁, S₂ and t₂.

To simplify the problem further, let φ₂ = ψ₂ and let σ₂ be always less than h. This can be done readily by restricting S₂ or requiring that φ₂ < h. Then no response can occur prior to the moment t = 0, even if t₂ < 0. If we set t_w = t′ − t₂ we obtain by solving equation (2)


t′ = (1/a₁) log { φ₁(S₁) / [ φ₁(S₁) − h + φ₂(S₂)(e^{-b₂t_w} − e^{-a₂t_w}) ] }. (3)

Finally if we set t_r = t₀ + t′, where t₀ is a constant as in equation (2) of chapter vi, we can determine the total reaction time t_r as a function of S₂ through φ₂, of S₁ through φ₁, and of t_w, which differs by t₀ from the time by which S₂ precedes the initiation of the response. As S₂ is a stimulus which precedes S₁ and affects the response time to S₁, but is itself incapable of producing the response, it may be considered a warning stimulus. Hence we may take equation (3) to predict the kind of results to be obtained in an experiment in which a particular stimulus of intensity S₁ has been preceded by a warning stimulus S₂ and produces a response in a time t_r(S₁, S₂, t_w) depending on the strength of the warning stimulus as well as upon the manner in which S₁ and S₂ are spaced in time.

For the particular case in which a fixed S₁ and S₂ are used and for t_w ≫ t_r, we may write equation (3) as

t_r = t₀′ − (1/a₁) log [1 + D(e^{-b₂t_w} − e^{-a₂t_w})], (4)

in which

t₀′ = t₀ − (1/a₁) log (1 − h/φ₁), D = φ₂/(φ₁ − h).


Figure 2. — Comparison of theory with experiment: effect of time of occurrence of warning stimulus upon the reaction-time. Curves, theoretical predictions by equation (4); points, experimental (Woodrow, 1914). Abscissa, interval between presentation of warning and effective stimuli; ordinate, interval between presentation of effective stimulus and occurrence of response.

52 MATHEMATICAL BIOPHYSICS OF THE CENTRAL NERVOUS SYSTEM

In Figure 2 is shown a comparison of equation (4) (curves) with experimental data (points) by H. Woodrow (1914) in which the effect of the interval between warning stimulus and stimulus proper on the reaction-time was measured. The upper curve was obtained under the condition that the successive values of t_w in the experiment were mixed randomly; the lower curve was obtained under the condition that the value of t_w was kept the same for a number of trials before being changed to another value. As the conditions are different in the two cases, one might expect that the parameters would also be different. Further details are given in the legend of Figure 2. For a discussion of a mechanism which can differentiate between these two conditions, the reader is referred to Landahl (1939a).
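Equation (4) can be evaluated numerically to show the qualitative shape of such curves: the reaction-time is shortest at an intermediate preparatory interval and returns to t₀′ as t_w grows. All parameter values below are illustrative, not those fitted to Woodrow's data:

```python
import math

def warned_reaction_time(tw, t0p=0.30, a1=8.0, b2=0.4, a2=5.0, D=0.6):
    """Reaction time as a function of the preparatory interval, equation (4).
    t0p, a1, b2, a2 and D are hypothetical parameter values."""
    return t0p - (1.0 / a1) * math.log(
        1 + D * (math.exp(-b2 * tw) - math.exp(-a2 * tw)))

for tw in (0.05, 0.55, 2.0, 8.0, 24.0):
    print(tw, round(warned_reaction_time(tw), 4))
```

With these values the facilitating term e^{-b₂t_w} − e^{-a₂t_w} peaks near t_w = 0.55, so the reaction-time has a minimum there and rises back toward t₀′ for both very short and very long preparatory intervals.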

VIII

FLUCTUATIONS OF THE THRESHOLD

It has been assumed thus far that the threshold of a neuron is a constant which does not depend on time. Actually it varies from moment to moment, and when we speak of the threshold as a constant we must understand by this some mean value of a group of measurements of the threshold. A more complete description would give also a measure of the variability. The threshold may vary with many changes in the organism. These variations would generally be rather slow. But within the neuron and in its immediate surroundings there occur rapid minute fluctuations in the concentrations of the various metabolites. The work of C. Pecher (1939) indicates very strongly that it is these fluctuations in concentration that are responsible for the variations in the thresholds of the peripheral fibers with which he experimented. His calculations showed that as few as some thousand ions were sufficient to produce excitation. From the kinetic theory, one should then expect that the per cent variation in the threshold should be one hundred divided by the square root of the number of ions necessary for excitation (Gyemant, 1925). This value in terms of the coefficient of variation is of the order of a few per cent and is comparable with the values obtained experimentally.

We may make the calculation of the variation as follows. In order for excitation to occur, it is necessary to stimulate a minimal region of a neuron. Suppose this to be a node of Ranvier. Let the width of the node be d ≈ 10⁻⁴ cm, the radius of the fiber r ≈ 10⁻⁴ cm. The effect of an ion is small at distances of a few diameters. Thus ions a few diameters removed from the cell surface will have little influence on the surface. Let this distance of influence be δ ≈ 10⁻⁷ cm. Then the volume within which the ions affect the excitability of the neuron is 2πrdδ. If C ≈ 10⁻⁵ is the molar concentration of the ions, and if N is Avogadro's number, the total number of ions influencing the excitability is 2πrdδCN and thus the per cent fluctuation is given by 100/√(2πrdδCN) ≈ 2%. Had we used the area of an end-foot (≈10⁻⁷), the same sort of result would have been obtained. But because of the variations in these quantities one cannot exclude the possibility that rather large variations may occur. The calculations only indicate that the fluctuations about the threshold may be appreciable. As long as the range in variation is not comparable with the threshold itself, the kinetic theory requires that the fluctuations be distributed normally to a high degree of approximation. That this


is the case for single nerve fibers is illustrated in the data by C. Pecher (Landahl, 1941c).

If for a particular neuron the mean value of the threshold is h, the coefficient of variability is v, and if p(ζ) represents a normal curve of unit area, then the probability P of a response in the absence of a stimulus is given by the integral of p(ζ) from 1/v to ∞. If τ is the least time for a fluctuation to have effect, then after a time τ/P one could reasonably expect a chance response. The mean frequency of such responses would then be given by P/τ per second. These responses would not be periodic. If τ is taken to be of the order of magnitude of 10⁻³ seconds, then for v = 30% the mean time between responses would be a few seconds, while for v = 20% the mean time between responses would be a number of hours. From this we see that the probability of a chance response, even over a considerable period of time, becomes negligible rapidly as v becomes much smaller than one-fifth. But one should consider also the slower changes in the environment of the neuron which not only change the threshold but also its degree of variation.
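The orders of magnitude quoted here follow from the tail of the normal distribution. A quick sketch, taking τ = 10⁻³ s as in the text (the normal-tail computation via the complementary error function is standard, not from the text):

```python
import math

def tail_prob(x):
    """P(Z > x) for a standard normal variate."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mean_time_between_chance_responses(v, tau=1e-3):
    """tau / P, with P the probability that a fluctuation exceeds 1/v."""
    return tau / tail_prob(1.0 / v)

print(mean_time_between_chance_responses(0.30))  # of the order of seconds
print(mean_time_between_chance_responses(0.20))  # of the order of an hour or more
```

The steepness of the normal tail is what makes the mean interval between chance responses grow so violently as v falls from 30% toward 20%.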

Variations in the threshold would cause the response-times and other measurable variables to be distributed in some manner about the value corresponding to the mean value of the threshold. As an illustration, let us estimate the dependence of a measure of the variation in response-times on the intensity of the stimulus. We consider the case of a simple excitatory afferent stimulated by a constant stimulus S and acting on an efferent of threshold h. Suppose that t₁ is the value of t for which ε = h. Then if h is decreased by an amount vh, ε = h is satisfied by t = t₁ − α. Then

α = (1/a) log [1 + vh/(φ − h)] (1)

is an average variation in the reaction-time due to the variation in threshold. In general, since one must consider more than one chain, one may suppose that variations, essentially independent of φ, are introduced at other synapses and at the end-organ. Let α₀ be a measure of the total effect of this variation. Then the measured variation in the response-time will be given by the square root of the sum of the squares of these two variations. As v is generally quite small, α = vh/[(φ − h)a], and thus

α_t = √[ α₀² + v²/(a²(φ/h − 1)²) ], (2)

and we have a relation between a measure of the variation in response-times and the stimulus-intensity in terms of φ. We may compare this result with the results of experiments by Berger and Cattell (1886) cited in chapter vi, in which the mean variations of the response-times were measured. We may use the same parameters except for α₀ which is arbitrary for this curve. Thus we have one parameter to determine this relation. The data are incomplete in the region of the threshold, but the comparison in Figure 1 is made for


Figure 1. — Comparison of theory with experiment: the variation in reaction-times as a function of stimulus-intensity. Curve, theoretical predictions by equation (2); points, experimental (Cattell, 1886). Abscissa, intensity of stimulus; ordinate, mean variation in reaction times.

illustrative purposes primarily. Nothing has been said of the type of distribution one would expect for a particular stimulus-intensity. This would require a more detailed analysis. We wish only to indicate the kind of effects due to the variations in the threshold. In the next chapter we shall show how they provide a possible basis for the distribution of judgments in situations that require some form of discrimination.

IX

INTERCONNECTED CHAINS: PSYCHOPHYSICAL DISCRIMINATION

In chapter iii we dealt with the general problem of the interactions among interconnected parallel chains of neurons, and more especially with the mutual reduction of the σ's developed by the simultaneously stimulated chains when the interconnections are inhibitory. We also exhibited a mechanism capable of transmitting excitation when the intensity of the stimulus lies on a limited range only. In this chapter we shall apply these considerations to the interpretation of sequences of a stimulus and a response of a type often studied by psychologists, which we shall speak of as discriminal sequences. By a discriminal sequence we shall mean any sequence in which the response is one of a limited set of qualitatively different possible responses, while that feature of the stimulus that determines which response of the set is to occur is either its absolute intensity or its intensity relative to that of some other specified stimulus.

Figure 1

We consider, then, parallel neurons or chains interconnected by inhibitory neurons (Figure 1), and we impose here the further restriction that their inhibitory effect is numerically the same as the excitatory effect due to the neuron with the same origin. Then a stimulus S₁ may produce a response R₁, and S₂ a response R₂, if the stimuli are presented separately. But if the stimuli are presented together, then the response R₁ will be produced if S₁ exceeds S₂ by an amount which depends upon the thresholds of the efferent neurons. If the difference between S₁ and S₂ is too small, neither response occurs. If the thresholds are negligibly small, the response R₁ occurs alone if S₁ > S₂, and R₂ if S₂ > S₁.

Because of fluctuations of the type discussed in the preceding chapter, the values of ε₁ and ε₂ produced by the afferent neurons will not generally be exactly equal even when S₁ = S₂. If S₁ is slightly greater than S₂, there is a certain finite probability, less than one-half, that the response R₂ will be given instead of R₁, the probability decreasing as the difference between S₁ and S₂ is increased.

Suppose that fluctuations occur only in the thresholds of the afferent neurons. The fluctuation of the threshold of any neuron causes fluctuation of the σ produced by this neuron, and we shall postulate the distribution of σ rather than that of h. Furthermore, because of the interconnections between the neurons, an increase in σ at the terminus of one afferent has the same effect as a decrease in σ at the other, so that formally we may regard the fluctuation as occurring at only one synapse (Landahl, 1943). Thus we shall assume that the thresholds are constant but that at synapse s₁, σ = ε − j + ζ, where ε and j result from the activity of the afferents but ζ is normally distributed about zero.

We shall assume that the net is completely symmetrical. Let h′ be the threshold of either efferent. As we are assuming that ψ and b of the interconnecting inhibitory neurons are equal to φ and a of the parallel excitatory neurons, σ₁ = ε₃ − j₄ = −σ₂ = −(ε₄ − j₃). Thus σ₁ and σ₂ are equal and opposite. Using equation (6) of chapter i, we may write the stationary values as

σ₁ = −σ₂ = β log (S₁/S₂). (1)

Now σ₁ exceeds h′ if the stimulus S₁ exceeds S₂ sufficiently. But then ε₁ will exceed ε₂ by some value h. We may summarize as follows:

If ε₁ − ε₂ − ζ > h, response R₁ is given;
if h ≥ ε₁ − ε₂ − ζ ≥ −h, there is no response;
if −h > ε₁ − ε₂ − ζ, response R₂ is given.

If p(ζ)dζ is the probability that ζ has a value in the range ζ to ζ + dζ, then the probability that response R₁ occurs is obtained by integrating p(ζ) with respect to ζ from minus infinity to (ε₁ − ε₂ − h). This becomes evident when we see that if ζ is any value less than ε₁ − ε₂ − h, response R₁ is produced. If we let P₁ be the probability of response R₁, P₂ the probability of response R₂, and P₀ the probability of neither response, and if we define

P(x) = ∫_{-∞}^{x} p(ζ) dζ, (2)

then we may write

P₁ = P(ε₁ − ε₂ − h), (3)
P₂ = P(−ε₁ + ε₂ − h), (4)
P₀ = 1 − P₁ − P₂. (5)

If S₁ > S₂, response R₁ may be considered the correct response and R₂ the wrong response. In this case P₁ = P_c, the probability of a correct response, and P₂ = P_w, the probability of a wrong response. Any failure to respond, or any response other than correct or wrong, such as "equal" or "doubtful," could be included in the proportion to be identified with P₀. It is commonly the case that when a categorical judgment is required, so that either R₁ or R₂ is made at each trial, the subject must lower his criteria for judgment. We may interpret this with reference to the structure studied by assuming a lowered threshold. For this case we set h = 0, whence P₀ = 0 and P₁ + P₂ = 1. Thus from a knowledge of only the standard error of the probability distribution, one is able to calculate the probabilities of the various responses to any given pair of stimuli when the judgments are categorical; the additional parameter h enters when "doubtful" judgments are allowed.
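Equations (2)-(5) are easy to evaluate with the normal integral. A sketch in hypothetical units in which the standard deviation of ζ is 1 and the bias is zero:

```python
import math

def P(x, sd=1.0, x0=0.0):
    """Cumulative normal of equation (2), with mean x0 and standard deviation sd."""
    return 0.5 * (1 + math.erf((x - x0) / (sd * math.sqrt(2))))

def judgment_probabilities(d, h, sd=1.0, x0=0.0):
    """Equations (3)-(5): d = eps1 - eps2, h = threshold of the efferents."""
    P1 = P(d - h, sd, x0)
    P2 = P(-d - h, sd, x0)
    return P1, P2, 1 - P1 - P2

# categorical judgments: h = 0, so P0 = 0 and P1 + P2 = 1
print(judgment_probabilities(0.5, 0.0))
# with "doubtful" judgments allowed (h > 0) some probability goes to P0
print(judgment_probabilities(0.5, 0.8))
```

With d = 0 and x0 = 0 the symmetry of the net gives P₁ = P₂, and raising h merely transfers probability from both responses into P₀.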

Since complete symmetry has been assumed, it follows that P₁ = P₂ for S₁ = S₂. In general, this is not true for the observed proportions. The amount by which the observed proportions differ from equality is a measure of the bias of the subject. The simplest interpretation of the bias is that the afferent thresholds are not exactly equal, so that the mean values ε̄₁ and ε̄₂ are not equal for S₁ = S₂, but ε̄₁(S) = ε̄₂(S) + x₀. Although x₀ will depend on S₁, we shall not consider this any further, preferring rather to incorporate x₀ into ζ, so that p(ζ) has a mean value of x₀ instead of zero. Thus modified, the mechanism may be applied to the experimental data by F. M. Urban shown in Table 1 (Urban, 1908). These are the average results from observations made on seven subjects. The first entry in the table, .012, gives the proportion P_g of judgments that a weight of 84 gms is heavier than the standard, which is 100 gms in every case. This is for the case in which judgment of "equality" is permitted, the proportion being P_e = .027. On the same line in the last column is given the proportion P_l = 1 − P_g − P_e of times the weight of 84 gms is


Figure 2. — Comparison of theory with experiment: distribution of judgments of relative weights. In 2a the abscissa of each point is the experimental value (Urban, 1908) of the proportion of judgments of the indicated type and the ordinate is the proportion theoretically predicted by equations (1)-(5). Thus perfect agreement would be indicated if all points lay on the straight line. In 2b the curve is theoretical, the points, experimental. The abscissa of each point is the proportion of "greater than" or of "less than" judgments as the case may be, the ordinate, the proportion of "doubtful" judgments when comparing the same "variable" stimulus with the standard.



Figure 3. — Comparison of theory with experiment: distribution of judgments of relative brightness. Curve, theoretical from equations (1)-(5); points, experimental (Kellogg, 1930). The ordinate is the difference between the intensities of the "variable" and the "standard" stimuli, in meter-candles. The abscissa is in every case the proportion of judgments of the type in question: for the solid circle, "greater than" categorical judgment; for the open circle, "greater than" with "doubtful" judgments permitted; for the crosses, the proportion of "doubtful" judgments has been added to the proportion of "greater than" judgments. In the inset the curve represents the values of h predicted by equation (6) plotted against the difference of the intensities of the two stimuli and the points represent the values computed directly from the data.

judged to be lighter than the standard. Directly below the first entry is the number .020, which is the proportion of times the weight of 84 gms is judged heavier than the standard when the judgment of "equality" is ruled out, and so on for the other entries.

In Table 1 (p. 72) are given the corresponding probabilities computed by equations (1) through (5), using for the standard deviation of the distribution 5.7 gms, for the threshold h = 2.1 gms, and for the bias or constant error x₀ = 2.75 gms. These values were determined from the three values in parentheses on the left, whence the corresponding values on the right are the same. It is to be noted that the parameters are all measured in grams. This is done for convenience only, as actually there is an unknown constant which must multiply each parameter to give the values in terms of ε and j. Furthermore, a linearity between S and ε is implied, which is nearly the case in the small range considered. In both the Table and Figure, the proportions P_g and P_l are used. These are respectively the proportions of judgments of "greater" and "lesser." Their relation to P_c and P_w is evident.

The agreement between the theory and experiment is illustrated in Figure 2. Complete agreement would be indicated if all points fell on the line of slope one. The relation between the proportions of judgments of "equality" and the proportions of the correct and wrong responses is also shown in Figure 2. If these results are plotted on probability paper, the predicted results will be simply three parallel lines. The experimental data confirm this rather well (Landahl, 1939b). The results from each of the seven subjects showed the same trend.

When one considers the visual data by F. M. Urban, one finds that the averaged data for several subjects, as well as those for individuals, cannot be so simply interpreted. With judgments of "equality" allowed, neither the proportions of correct responses nor those of wrong responses follow the integral of the normal distribution within the limits of experimental error; with judgments of "equality" excluded the proportions are not peculiar. This suggests that the threshold h is not constant but is affected by the value of σ₁. In order to preserve symmetry, the effect on the threshold of S₁ > S₂ must be supposed the same as that of S₂ > S₁ if S₁ and S₂ are simply interchanged. We may suppose that the lowering of the threshold h, due to the change of experimental situation when judgments of "equality" are ruled out, is the result of the activity of some outside group of

62 MATHEMATICAL BIOPHYSICS OF THE CENTRAL NERVOUS SYSTEM

neurons. If a neuron of negligible threshold having afferent synapses at s₁ and s₂ tends to excite this outside group of neurons, then the threshold h will decrease with the absolute value of σ₁ or σ₂, these being proportional to the absolute values of σ₃ and σ₄. Thus h(|σ|) should decrease linearly for small |σ|, though h cannot become negative. If we set

h = h₀ e^{−θ|σ|} ,   (6)

we have a suitable form, with but one new parameter introduced.

In Figure 3 is shown a comparison between theory and experiment for the visual data by W. N. Kellogg. The curves are computed from the equations by setting the standard deviation equal to 0.58 meter-candles, −x₀ = −0.10 meter-candles, h₀ = 0.49 meter-candles, and θ = 1.14 (meter-candles)⁻¹. The intensity of the standard was 21.68 meter-candles. In the inset of the Figure is shown a comparison between equation (6) and the values of h determined from the data. These are symmetric about −x₀. This type of relationship between h and the difference between the stimuli was found for each of the individuals upon whom the experiment was carried out. The derivative of h(S) is discontinuous at −x₀. One would not expect to observe such a discontinuity even if it were present. If the threshold of the neuron which produces the change of h with stimulus difference had not been neglected, the value of h would have been a constant in the neighborhood of −x₀. For these reasons, a dotted curve is introduced in the Figure to indicate that the discontinuities are not expected to appear in the data. For further details the reader is referred to the paper by H. D. Landahl (1939b).

In the case of the auditory data (Figure 4) by W. N. Kellogg (1930) one finds that a further asymmetry is present. A rather accurate representation of the data results if one assumes that the effect on the threshold h due to stimuli for which S₁ > S₂ is not the same as that for which S₂ > S₁. Since for this modality S₂ − S₁ may be a rather large fraction, the first term of the expansion of equation (1) leads to noticeable error. Thus the parameters are measured in terms of the logarithm of the ratio of the stimuli. If we let h₀ = 0.43, x₀ = .02, the standard deviation 0.18, or approximately 9 millivolts, and θ = 2.8 for σ > 0, we obtain the curves shown in Figure 4. The points are the experimental values obtained by averaging the results from a number of subjects. The inset shows the relationship between h and log S₁/S₂, which is decidedly asymmetric. If this be considered significant, one might attempt to correlate the asymmetry with the mode of presentation or perhaps with the modality.

PSYCHOPHYSICAL DISCRIMINATION

63


Figure 4. — Comparison of theory with experiment: distribution of judgments of relative loudness (data from Kellogg, 1930). The representation is the same as that in Figure 3 except for the use of the logarithmic scale of intensities, necessitated by the large relative range in the stimuli.


The asymmetry appeared fairly clearly in the results from each of the individuals taking part in the experiment.

On the basis of the mechanism considered, it is essential that the stimuli be presented simultaneously. However, a complication of the mechanism has been considered for which simultaneous presentation is not necessary. Essentially the same results may be obtained if the stimuli are presented in succession (Landahl, 1940a).

Since the integral of the normal distribution cannot be given in closed form, it is convenient to introduce an approximation by which closed solutions can be obtained. This is especially desirable when one wishes to use the results in other situations. With the distribution (Landahl, 1938a)

p(ζ) = ½ k e^{−k|ζ|}   (7)

we obtain, if ε₁ > ε₂,

log 2Pw + k(ε₁ − ε₂) = 0   (8)

to determine the probability of a wrong response when a categorical judgment is required. The applicability of this approximation has been tested by a comparison of theory and experiment made elsewhere (Landahl, 1938a).
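With the distribution (7), the wrong-response probability has the closed form of equation (8). A short numerical check follows; the normalization ½k in (7), the illustrative values of k and the ε's, and all names are our assumptions.

```python
from math import exp

def p(z, k):
    # distribution of equation (7), normalization k/2 assumed
    return 0.5 * k * exp(-k * abs(z))

def Pw_closed(k, e1, e2):
    # equation (8): log 2Pw + k(e1 - e2) = 0, i.e. Pw = (1/2) e^(-k(e1 - e2))
    return 0.5 * exp(-k * (e1 - e2))

def Pw_numeric(k, e1, e2, n=200000, lo=-40.0):
    # a wrong response occurs when the random addition falls below
    # -(e1 - e2); integrate the density up to that point (trapezoidal rule)
    hi = -(e1 - e2)
    step = (hi - lo) / n
    total = 0.5 * (p(lo, k) + p(hi, k))
    for i in range(1, n):
        total += p(lo + i * step, k)
    return total * step

k, e1, e2 = 1.3, 2.0, 0.5   # illustrative values, with e1 > e2
print(Pw_closed(k, e1, e2), Pw_numeric(k, e1, e2))
```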

While our definition of a discriminal sequence rules out the concurrence of the alternative responses and hence requires that the inhibitory neurons connecting the parallel excitatory chains shall have activity-parameters at least as great as those of the excitatory chains themselves, it is natural to consider also the case where this restriction is removed. We noted in chapter iii, in the case of two parallel chains with inhibitory interconnections, what qualitatively different effects might follow the simultaneous stimulation of both chains as different relations are imposed upon the parameters of the constituent neurons. We select for further consideration here only the symmetric structure consisting of two parallel chains with crossing inhibition where α > β (cf. equations (4), chapter iii), in which case concurrent transmission along the two paths will occur when the two stimuli are sufficiently strong and not too greatly different.

These chains may lead from neighboring cutaneous receptors, from neighboring retinal elements, or from organs of two disparate sensations. The responses which they occasion may be overt bodily movements or they may be merely awareness of the sensations. The mechanism has a possible application wherever there is interference by a stimulus of one type with evocation of a response by another, and reciprocally. It is at once apparent that the interference by the


one stimulus with the other's response may occur even though the first stimulus would be inadequate, if presented alone, to produce its own response. Thus if the application were made to the interaction of auditory with visual perception, the mechanism provides that even a subliminal auditory stimulus would raise the absolute threshold for visual stimulation. With appropriate modifications — crossing excitation instead of crossing inhibition — the possibility of a mutual lowering of threshold could be similarly treated.

To link the mechanism with crossing inhibition more substantially with possible experimental results, we consider the effect of threshold-fluctuations, or, what is more convenient and mathematically equivalent, random variations in the σ's at s₁ and s₂. Since α ≠ β we cannot, as before, represent the combined effect by variations at only one synapse; but any variations occurring also at s₁′ and s₂′ could be formally accounted for by suitably modifying the distribution-functions at the first two synapses alone, and we suppose, for simplicity, that with this modification the resulting distributions are identical at the two synapses. Denote these functions by p(ζ), ζ being the random addition at either synapse and having zero as its mean.

The mutual influence of the stimuli upon absolute thresholds can be determined from an investigation of near-threshold stimulation, where the functions φ and ψ can be represented linearly:

φ(S) = αS − α′ ,   ψ(S) = βS − β′ ,   α > β .   (9)

Corresponding to equations (1), chapter iii, we have

σ₁ = α(S₁ + ζ₁) − α′ − β(S₂ + ζ₂) + β′ ,

σ₂ = α(S₂ + ζ₂) − α′ − β(S₁ + ζ₁) + β′ .   (10)

Then the respective conditions for the responses R₁ and R₂ are

αS₁ − βS₂ + αζ₁ − βζ₂ − h > 0 ,   (11)

αS₂ − βS₁ + αζ₂ − βζ₁ − h > 0 ,   (12)

where

h = α′ − β′ + h′ ,   (13)

and each efferent from s₁′ and s₂′ has the threshold h′. Let (RR), (RO), (OR) and (OO) denote the occurrence of both responses, the first only, the second only, and no response, respectively. The probabilities of these events, when S₁ and S₂ have given values, are as follows.

For brevity let ζ₁* = −S₁ + h/(α−β), and write

B₁(ζ₁) = α(ζ₁ + S₁)/β − h/β − S₂ ,   B₂(ζ₁) = β(ζ₁ + S₁)/α + h/α − S₂ ,

so that, by (11) and (12), R₁ occurs when ζ₂ < B₁(ζ₁) and R₂ occurs when ζ₂ > B₂(ζ₁). Then

P(RR) = ∫_{ζ₁*}^{∞} p(ζ₁) ∫_{B₂(ζ₁)}^{B₁(ζ₁)} p(ζ₂) dζ₂ dζ₁ ,   (14)

P(RO) = ∫_{−∞}^{ζ₁*} p(ζ₁) ∫_{−∞}^{B₁(ζ₁)} p(ζ₂) dζ₂ dζ₁ + ∫_{ζ₁*}^{∞} p(ζ₁) ∫_{−∞}^{B₂(ζ₁)} p(ζ₂) dζ₂ dζ₁ ,   (15)

P(OR) = ∫_{−∞}^{ζ₁*} p(ζ₁) ∫_{B₂(ζ₁)}^{∞} p(ζ₂) dζ₂ dζ₁ + ∫_{ζ₁*}^{∞} p(ζ₁) ∫_{B₁(ζ₁)}^{∞} p(ζ₂) dζ₂ dζ₁ ,   (16)

P(OO) = ∫_{−∞}^{ζ₁*} p(ζ₁) ∫_{B₁(ζ₁)}^{B₂(ζ₁)} p(ζ₂) dζ₂ dζ₁ .   (17)

Other expressions for P(RR) and for P(OO) can be obtained by interchanging subscripts. These four P's are functions of S₁ and S₂ whose values are experimentally determinable; their sum is unity so that only three are independent. If p(ζ) is given they depend upon the parameters α, β and h; if the distribution p(ζ) is assumed to be normal there is an additional parameter, the standard deviation. Pearson's tables of tetrachoric functions can be utilized for determining these parameters from the empirical frequencies. The quantities S₁ and S₂ are not the intensities of the external stimuli but some monotonic functions of these; however, at the near-threshold level it is permissible to regard these functions as linear.
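The four probabilities of equations (14)-(17) can be checked numerically. The Monte Carlo sketch below takes p(ζ) normal and applies conditions (11) and (12) directly; the parameter values are illustrative only, and the names are ours.

```python
import random

# Monte Carlo estimate of P(RR), P(RO), P(OR), P(OO), taking p(zeta)
# normal with standard deviation s; alpha > beta, as the symmetric
# structure with crossing inhibition requires.
random.seed(1)
alpha, beta, h, s = 1.0, 0.6, 0.3, 0.5

def four_probs(S1, S2, trials=200000):
    counts = {"RR": 0, "RO": 0, "OR": 0, "OO": 0}
    for _ in range(trials):
        z1, z2 = random.gauss(0.0, s), random.gauss(0.0, s)
        R1 = alpha * (S1 + z1) - beta * (S2 + z2) - h > 0  # condition (11)
        R2 = alpha * (S2 + z2) - beta * (S1 + z1) - h > 0  # condition (12)
        counts[("R" if R1 else "O") + ("R" if R2 else "O")] += 1
    return {key: c / trials for key, c in counts.items()}

P = four_probs(1.0, 1.0)   # equal stimuli, so P(RO) and P(OR) should agree
print(P)
```

The four estimated proportions sum to unity by construction, and with equal stimuli the symmetry P(RO) = P(OR) serves as a check on the conditions.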

In the preceding paragraphs we have considered the case of two stimuli simultaneously presented to the organism, with two alternative responses permitted. In chapter vi we considered the case where a response may follow the sudden increase in the intensity of a stimulus previously maintained at a constant level. We turn now to a process of more complex form in which each of a group of stimuli differing only in intensity elicits a distinct response. This involves absolute discrimination, though a similar mechanism may be at work when relative discrimination occurs. The important point to notice is that an increase in the intensity of the stimulus does not merely change the strength of the response or bring into activity additional elements, but may so alter the response that none of the elements involved before the change is included among those active after the change.

Consider the net of Figure 5, which is a parallel interconnected structure containing also circuits (Householder, 1939b).

Figure 5

Let an afferent neuron form synapses with a number of neurons nₑ whose thresholds differ but which are otherwise equivalent. Let all those which have the same threshold be brought together to act on a single neuron. Only two of an indefinite number of final neurons are shown in the Figure. Thus all the neurons nₑ having the threshold hₖ are brought together to act on a single final neuron. Let f(h) dh be the number of these neurons having thresholds between h and h + dh. Considering only the stationary-state activity, and using the linear relationship of equation (5), chapter i, with the coefficient of proportionality set equal to unity, we may write

ε(S,h) = (S − h) f(h)   (18)

as the value of the excitation at sₕ, the terminus for the neurons nₑ with the threshold h.

Let equivalent inhibitory neurons of negligible thresholds originate at each synapse and terminate at every other synapse, without duplication. If σ(S,h) is the value of σ at the synapse sₕ at which the neurons of threshold h terminate, then the value of j(S) at this synapse, due to the activity of the neurons nᵢ terminating there, is

j(S) = λ ∫ σ(S,h) f(h) dh ,   (19)

λ being a constant measuring the activity of the inhibitory neurons. Thus

σ(S,h) = ε(S,h) − λ ∫ σ(S,h) f(h) dh .   (20)

This is a special form of equation (15), chapter iii, where the β and x′ of the former equation are here replaced by λ and h, respectively, N(x′, x) becomes f(h), and the former h is negligible.

If f(h) is continuous and f(0) = 0, then ε(S,h) vanishes at h = 0 and h = S. Hence ε(S,h) has at least one maximum in this range. Assuming it to have only one, we see that ε(S,h) will equal j(S) at only two values of h, h₁ and h₂, and between these values ε(S,h) > j(S), so that σ(S,h) > 0. That is, the final neurons corresponding to thresholds in the range h₁ to h₂ are excited. Thus we may write


j(S) = λ ∫_{h₁}^{h₂} σ(S,h) dh ,   (21)

so that, as in chapter iii,

j(S) = λ ∫_{h₁}^{h₂} ε(S,h) dh / [1 + λ(h₂ − h₁)] .   (22)

Thus from ε(S,h₁) = ε(S,h₂) = j(S), we may determine h₁(S) and h₂(S).

Define a Weber ratio δ(S) by the equation

h₂(S) = h₁(S + Sδ) .   (23)

The definition implies that when the intensity S is changed to S + Sδ, a completely different set of neurons is activated, so that we may expect a response to S + Sδ which is distinct from the response to S.

In order to proceed we must prescribe the function f(h). The simplest assumption is that f(h) is proportional to h over a sufficiently wide range, which amounts to setting f(h) = h if any constant


Figure 6. — Comparison of theory with experiment: intensity-discrimination at varying intensity-levels for visual, auditory, and tactile sensations. Solid curves, theoretical prediction by equations (25)-(28); points and dotted curve, experimental. Visual data from König and Brodhun, 1888 and 1889. Abscissa, intensity (on logarithmic scale) of stimulus; ordinate, ratio of just-discriminable difference to total intensity.


multiplier is absorbed in the constant λ. The graph of ε(S,h) is then an inverted parabola with a maximum at h = S/2, and thus h₁ and h₂ are equidistant from S/2. Define the "relative interval," x, by

Then

Sx = S − 2h₁ = 2h₂ − S ,   (24)

ε(S,h₁) = ε(S,h₂) = S²(1 − x²)/4 = j(S) .   (25)

Introducing ε = h(S − h) in equation (22) and the result into equation (25), we obtain

u x³ + x² − 1 = 0 ,   (26)

where u is defined by

u = 2λS/3   (27)

and is thus proportional to the intensity. The value of x for S + Sδ is given by
 
x(S + Sδ) = [δ − x(S)]/(δ + 1) ,   (28)

Auditory data (Riesz): curves for tones of 1000 and of 70 cycles per second.

Figure 7. — Comparison of theory with experiment: intensity-discrimination at varying intensity-levels for visual, auditory, and tactile sensations. Solid curves, theoretical predictions by equations (25)-(28); points and dotted curves, experimental. Auditory data from Riesz, 1938. Abscissa, intensity (on logarithmic scale) of stimulus; ordinate, ratio of just-discriminable difference to total intensity.



Figure 8. — Comparison of theory with experiment: intensity-discrimination at varying intensity-levels for visual, auditory, and tactile sensations. Solid curves, theoretical predictions by equations (25)-(28); points and dotted curves, experimental. Tactile data from Macdonald and Robertson, 1930. Abscissa, intensity (on logarithmic scale) of stimulus; ordinate, ratio of just-discriminable difference to total intensity.

and evidently u(S + Sδ) = u(1 + δ). Writing equation (26) with S replaced by S + Sδ, and introducing (28), we obtain

u x³ − (3δu + 1) x² + (3δ²u + 2δ) x − (uδ³ − 2δ − 1) = 0 .   (29)

By eliminating x between equations (29) and (26), we obtain δ(u), the desired relation between the Weber ratio δ and the intensity of the stimulus. The result can be expressed in the form (Householder, 1942c)

δ = p u^{−a} ,   (30)

where a is very nearly constant and lies between 1/3 and 1/2.
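The relation δ(u) can be traced numerically: solve the cubic (26) for x(u) by bisection, then find the δ for which the relative interval at intensity u(1 + δ) equals [δ − x(u)]/(δ + 1), as equation (28) requires. The sketch below, with names and tolerances of our choosing, also estimates the exponent a of equation (30) over a stretch of intensities.

```python
from math import log

def bisect(f, lo, hi, n=200):
    # simple bisection, assuming f(lo) < 0 < f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def x_of(u):
    # relative interval x of equation (26): u x^3 + x^2 - 1 = 0
    return bisect(lambda x: u * x**3 + x**2 - 1.0, 0.0, 1.0)

def delta_of(u):
    # Weber ratio delta: by equation (28) the relative interval at
    # intensity u(1 + delta) must equal (delta - x)/(delta + 1)
    x = x_of(u)
    g = lambda d: (d - x) / (d + 1.0) - x_of(u * (1.0 + d))
    return bisect(g, x, 1.0e6)

# estimate the exponent a of equation (30) between u = 1 and u = 100
a = (log(delta_of(1.0)) - log(delta_of(100.0))) / log(100.0)
print(a)
```

The estimated exponent falls between 1/3 and 1/2, as the text asserts; for very large u one can verify that x and δ both behave as u^{−1/3}.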

In Figures 6 to 8 are shown comparisons between theory and experiment for visual, auditory, and tactile data (Householder, 1939). The abscissa log u is used for convenience; u is proportional to the stimulus S. It should be emphasized that there is but the one parameter involved in each curve.

In Figure 9 are shown the theoretical and experimental results for the case of visual discrimination of lengths (Householder, 1940). In order to make the comparison it is necessary to assume only a proportionality between the length of the line and the value of ε resulting


from the movement of the eye from one end of the line to the other. The effect of binocular cues also has been considered (Householder, 1940).

In the case of discrimination of weights (Householder, 1940), it is found to be unsatisfactory to assume a simple proportionality between the weight W and the value of ε at the end of the afferent neuron. If a logarithmic relation is assumed, together with a particular distribution of the thresholds, it is then possible to determine a relationship between δ and W in terms of two parameters. Experimental results from discrimination of weights are shown as points in Figure 10, the weights being placed in the subject's hand. The theoretical predictions are shown as curves. The weight of the hand is a third parameter in this case, and had to be estimated indirectly from the data in each case since the direct measurement was not included. The estimated values were 400 gms for each hand of the male subjects and 350 gms for each hand of the female subjects — values which are quite plausible.

Fitted constants, as shown in Figure 9: log K = 8.18042, a = .34277.

Figure 9. — Comparison of theory with experiment: discrimination of lengths of line-segments, visually perceived. Curve, theoretical, based on equations (25)-(28); points, experimental (Chodin, 1877). Abscissa, visual angle of shorter segment; ordinate, just-discriminable angular difference.


Our discussion has dealt mainly with two mechanisms, and it is important to distinguish the experimental situations to which they were applied. The first mechanism was applied to relative discrimination between stimuli simultaneously presented, and the distribution of "correct" and "wrong" judgments was predicted. The second was applied to absolute discrimination, each stimulus being presented alone, and a Weber ratio was deduced. A third mechanism was first introduced in chapter vi, and could be applied to sensitivity following adaptation. Still a fourth was suggested in chapter iii, but only as illustrating the possibility of a mechanism failing to transmit under intense stimulation. No application to concrete quantitative evidence was made of this. The last mechanism discussed here will be extended in chapter xii to provide a mechanism for the discrimination of colors.

TABLE I

LIFTED WEIGHTS

All entries except the weights are proportions; the weights S are in grams. For each weight, the first line gives the proportions when judgments of "equality" were permitted, and the second line those when only "greater" and "lesser" were permitted. Parenthesized entries are the values from which the parameters were determined.

                EXPERIMENTAL               THEORETICAL
   S         Pg      Pe      Pl         Pg      Pe      Pl

   84      0.012   0.027   0.961      0.004   0.021   0.975
            .020           .980        .010           .990

   88       .021    .082    .897       .025    .077    .898
           (.053)          .947        .053           .947

   92       .096   (.181)   .723       .103   (.181)   .716
            .185           .815        .179           .821

   96       .275    .266    .459       .284    .265    .451
            .420           .580        .409           .591

  100       .502    .267    .231       .551    .250    .199
           (.683)          .317       (.683)          .317

  104       .842    .103    .055       .796    .140    .064
            .920           .080        .880           .120

  108       .915   0.065    .020       .932   0.054    .014
           0.963          0.037       0.966          0.034



Figure 10. — Comparison of theory with experiment: discrimination of lifted weights for three observers O₁, O₂, and O₃; left hand L, right hand R, and average for both hands M. Curves, theoretical; points, experimental (Holway, Smith, and Zigler, 1937). Abscissa, lesser weight in grams; ordinate, just-discriminable difference in grams.

X

INTERCONNECTED CHAINS: MULTIDIMENSIONAL PSYCHOPHYSICAL ANALYSIS

In some cases, for example in the making of aesthetic judgments, the stimulus-objects are complex and may provide stimulation in any number of distinct modes. Then if a statement of preference is called for — one of two incompatible responses — the sequence of stimulus and response may be regarded as a discriminal sequence as defined in the preceding chapter, provided we regard each of the two stimuli as a resultant of the components in the various modes. The composition, we may suppose, is effected in some way within the organism through the concurrence at some point of the afferent chains leading from the several receptors.

The simplest scheme for representing the neural processes which mediate a discriminal sequence of this type is the following. Suppose that each complex stimulus-object Cₚ (p = 1, 2) provides stimuli of intensities Cₚᵢ (i = 1, ⋯, n) in the n modalities, and that these stimuli send impulses independently along discrete afferent chains to the synapses sᵢ, where they occasion the production of σ = Sₚᵢ, the Sₚᵢ for each p combining additively to yield the Sₚ hitherto employed:

Sₚ = Σᵢ Sₚᵢ .

Each Sₚᵢ is then some function of Cₚᵢ alone; still regarding only the near-threshold range we may take these functions to be all linear,

Sₚ = Σᵢ Lᵢ Cₚᵢ − M .   (1)

We may now use either of the procedures introduced in the previous chapter, with α = β, according to the choice of location for the random element. In either case there are but three, or, with more special assumptions, only two, functions P. The functions remain, however, functions of the Sₚ; for any pair of stimuli C₁ and C₂, which provide some (unknown, since the Lᵢ are unknown) S₁ and S₂, it is possible to determine experimentally the relative frequencies P, and those stimulus-pairs which yield the same values of the P's will be those which yield a fixed difference

S₁ − S₂ = Σᵢ Lᵢ (C₁ᵢ − C₂ᵢ) .   (2)

The same result follows if we assume crossing inhibition connecting also the pair of afferents affected by the two stimuli for each modality. This would justify extending the assumption of linearity to a



much greater range, since only the positive difference |C₁ᵢ − C₂ᵢ| affects the subsequent members of any chain.

It is natural to identify this scale of S-values with Thurstone's "psychological scale" (Thurstone, 1927; cf. Guilford, 1936). The psychological scale is introduced quite abstractly in psychophysics, the assumptions being that each stimulus-object produces a "discriminal response" which can be measured on this scale, and that repeated presentation of the same object leads to varying responses as so measured, the distribution being normal on this scale. By following a well-defined procedure the empirical frequencies of the judgments can be utilized for determining the modal S associated with each stimulus-object, the determination being unique up to a linear transformation.

In our terms, for any stimulus-object Cₚ, the modal response (on the part of the afferent chain, at the synapse sₚ) is Sₚ, while Sₚ + ζₚ is the particular response on a given presentation; if ζₚ is taken to be normally distributed the identification of our S-scale with that of psychophysics is immediate; otherwise some scale-transformation is required. In either case the methods of psychophysics can be utilized to determine the Sₚ, for each Cₚ, at least up to a linear transformation. Then if, further, the Cₚᵢ are directly measurable, the quantities Lᵢ can be determined up to a common multiplicative factor. These quantities furnish measures of the relative contributions of the separate modes of stimulation to the judgment as a whole. We note, incidentally, the possible application of factor analysis with a large population of subjects (Thurstone, 1937).

Physical measurement of the Cₚᵢ is possible but rarely and in the least interesting of the cases, and psychophysicists have endeavored to obtain from the empirical frequencies an insight into the number of distinct modes of stimulation of the organism by the complex stimulus-objects of a given class. It is clear that from judgments of preference alone no such information is to be had, for by the conditions of the experiment the subject is required to make a one-dimensional ordering of the objects. But by a slightly revised experimental procedure, interpreted in terms of a suitable neural mechanism, it is possible to obtain the data necessary for such a multidimensional analysis (cf. Householder and Young, 1940; Young and Householder, 1941).

In this procedure each stimulus-object is replaced by a pair, and the subject is now asked which of two given pairs is the more unlike. The formal psychophysical analysis required for determining the S corresponding to each pair is identical with that required for determining the S for each object in the previous experiment; only the interpretation is different, for the S now measures, on the psychological scale, the extent to which the two members of the pair are different. If the two members of any pair are identical the associated S must be zero, so that the additive term is determinate and the S's can in this case be determined uniquely up to a common multiplicative factor.

The ordering of the pairs along the S-scale is, however, only an intermediate step, since we wish to associate the individual objects with points of a metric space of sufficiently many dimensions in such a way that the S for any pair is the distance between the points which represent the members of the pair. But the solution of this problem depends upon the character of the space's metric, and the metric in turn is a property of the mediating neural mechanism.

Suppose, then, that the objects Aₚ and Bₚ constitute the pair Cₚ and that the afferents for the i-th modality affected by Aₚ and Bₚ are connected by crossing inhibition. Then, if the thresholds are low, beyond the locus of the crossing the corresponding σ is proportional to

Cₚᵢ = |Aₚᵢ − Bₚᵢ|   (p = 1, 2)   (3)

along the afferent from Aₚᵢ or Bₚᵢ, whichever is the greater, while it is negative along the other. If these two paths join at some subsequent synapse, then succeeding neurons of this chain are stimulated in amounts proportional to Cₚᵢ as given by equation (3). From here on the mechanism is just like the one previously discussed. Hence for any pair (A, B), in place of equation (1) we have

S = Σᵢ Lᵢ |Aᵢ − Bᵢ| .

If the quantities Aᵢ and Bᵢ are physically measurable, or if they are measurable by psychological methods independent of the method now being outlined, e.g., in terms of J.N.D.'s, the quantities Lᵢ have significance and it is an empirical problem to determine whether or not a set Lᵢ exists satisfying all equations of this type. If the Aᵢ and Bᵢ are not so measurable they are precisely the quantities to be determined from this procedure, and we may introduce units so chosen that every Lᵢ = 1. In this case, which we assume hereafter,

S = Σᵢ |Aᵢ − Bᵢ| .   (4)
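The separation of equation (4) is the city-block sum of absolute coordinate differences, every Lᵢ set equal to unity. A minimal sketch follows; the three-modality coordinates are purely hypothetical.

```python
# City-block separation of equation (4); all L_i taken as unity.
def S(A, B):
    return sum(abs(a - b) for a, b in zip(A, B))

A = (1.0, 2.0, 0.5)   # hypothetical stimulus-object A, three modalities
B = (3.0, 1.0, 2.0)   # hypothetical stimulus-object B
C = (0.0, 0.0, 0.0)   # a third object, placed at the origin

print(S(A, B), S(A, C), S(B, C))
```

For points so represented, the separation of any pair can never exceed the sum of the other two, which is why triples violating that inequality fall outside the mechanism, as the next paragraph observes.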

If it happens that for any three stimulus-objects, the S for one pair is equal to the sum of those for the other two, then this experiment provides no data for a multidimensional analysis. This does not imply, however, that only a single modality is involved. If, for any three objects, the S of any pair exceeds the sum of those for the other two, the equations (4) are inconsistent, and further analysis is impossible on the basis of the mechanism here proposed. Passing over


these cases to which the present method does not apply, we suppose, therefore, that for some set of these objects, say A⁽⁰⁾, A⁽¹⁾, and A⁽²⁾, the S of each pair is exceeded by the sum of the other two, and we attempt a two-dimensional representation. It is evident that the values of the various S's alone can determine the various Aᵢ at most up to an additive constant — the zero for each modality is arbitrary unless prescribed by considerations irrelevant to the formal experimental procedure. Such prescription, if available, may be observed later by an appropriate adjustment; for the present choose A⁽⁰⁾ as one reference-point and assume that each Aᵢ⁽⁰⁾ = 0. If S⁽ᵖᵠ⁾ represents the S corresponding to the pair (A⁽ᵖ⁾, A⁽ᵠ⁾), we may suppose, further, that

S⁽⁰²⁾ > S⁽¹²⁾ ,

relabeling the objects if necessary. For determining the four quantities Aᵢ⁽¹⁾ and Aᵢ⁽²⁾ (i = 1, 2), only three equations are available. Hence we may make the assumption

A₁⁽¹⁾ = A₂⁽¹⁾ ,

subject to possible later revision. Finally, we may suppose that A₁⁽²⁾ < A₂⁽²⁾, since we can only separate but not identify the two modes, and we have

A₁⁽²⁾ = [S⁽⁰²⁾ − S⁽¹²⁾]/2 ,   A₂⁽²⁾ = [S⁽⁰²⁾ + S⁽¹²⁾]/2 ,   A₁⁽¹⁾ = A₂⁽¹⁾ = S⁽⁰¹⁾/2 .

Now consider any object A⁽³⁾. If either

S⁽⁰³⁾ = S⁽⁰¹⁾ + S⁽¹³⁾ = S⁽⁰²⁾ + S⁽²³⁾

or else

S⁽⁰³⁾ = S⁽⁰¹⁾ − S⁽¹³⁾ = S⁽⁰²⁾ − S⁽²³⁾

then

S⁽⁰³⁾ = A₁⁽³⁾ + A₂⁽³⁾

and the quantities on the right are indeterminate. If neither holds, there are at least two independent equations involving A₁⁽³⁾ and A₂⁽³⁾. If there are three, a two-dimensional representation is impossible; but if for every fourth object A⁽³⁾ there are at most two new equations, two dimensions are sufficient.

Thus, apart from the arbitrariness indicated, with a sufficiently large number of stimulus-objects A all the Aᵢ can be determined. It is perhaps clear enough from the above how one must proceed when more than two dimensions are required. The quantities S may be regarded as distances in the representative space, and the space is metric but not Euclidean. The assumption of linearity imposed upon the mechanism is not highly restrictive, in principle, since, if the


stimuli are all low in intensity, the linear expressions may be regarded as the first terms of Taylor expansions. Additional cross-connections could be introduced into the mechanism, either inhibitory or excitatory in character, but the essential features would not be changed nor the metric of the space greatly modified. In particular, it is important to note, the arbitrariness is inherent in the nature of the experiment and can only be removed by further experiment or observation of a different kind.

In chapters vi and vii we considered the process at a single synapse terminating one or more afferent chains, and related this to various sequences of stimulus and response. In all these sequences there was but a single response, however complex in form, dependent upon the variation of the single σ, evoked by a single, simple or complex stimulus, and perhaps modified, in degree or in the time of its occurrence, by other stimuli. In this and the preceding chapters we have considered two or more synapses with as many afferents, terminating afferent chains from various receptors. The structures were associated with classes of sequences of stimulus and response of the following sort. Each of a group of stimuli, simple or complex, when presented in isolation can evoke a certain characteristic response, whereas the concurrence of these stimuli modifies the separate responses by enhancing, reducing, or even preventing them. We have by no means exhausted the possible applications of the various structures; by varying the assumed relations among the parameters an almost countless variety of sequences is suggested. For example, we have been considering only the case of crossing inhibition and have mentioned only in passing the possibility of crossing excitation, a mechanism that would mediate sequences of a quite different sort. Numerous possible complications are easily suggested. The response to a given stimulus might be modified, not directly by another external stimulus, but by the response to that stimulus, this calling for a connection running from the second effector back to some point in the afferent chain leading to the first effector. Circuits of the type discussed in chapter iv might be introduced at various points and their effects studied and related to observable sequences.

The procedure of starting with the simpler structures and seeking applications thereof has this decided advantage, that we can feel assured that the postulated mechanism is not more complicated than necessary for mediating the adduced sequence. Thus if one stimulus can in any way modify the response evoked by the isolated occurrence of another, then some connection must lead from the first receptor to the effector for that response, whether the connection is direct, through the spinal cord only, or indirect, through the thalamus, or

MULTIDIMENSIONAL PSYCHOPHYSICAL ANALYSIS 79

elsewhere. It is highly unlikely that the actual mechanism mediating any of the stimulus-response sequences here outlined is as simple as the one postulated for it, but by comparing the deductions from the simpler postulates with laboratory data we can take note of the deviations and be guided thereby in our endeavor to improve the picture. While this procedure, from mechanism to suggested application, could be pursued indefinitely through increasing degrees of complexity, we turn instead, in the following chapters, to the reverse procedure, considering certain forms of activity and attempting to construct mechanisms capable of mediating these.

XI

CONDITIONING

A most important property of neural circuits is that their activity may continue indefinitely after cessation of the stimulus. The possible application of this property to memory is evident, but to conditioning it is much less so. We now suggest a mechanism for explaining conditioning and learning.

Consider first a few properties of the structure of Figure 1


Figure 1

(Rashevsky, 1938). For the present we shall ignore the presence of the dotted neurons. This structure consists of two neuron-chains, leading through a final common path to a response R, together with a unilateral interconnection and a simple circuit C. The chief characteristic of conditioning is that a particular response R, normally produced by the "unconditioned" stimulus Su but not by the stimulus Sc, may after the repeated concurrence of the stimuli Sc and Su become capable of being evoked by Sc alone. This may require one or more concurrences of Sc and Su, and while Sc and Su need not be presented at exactly the same time, the time between them cannot be too long. Suppose, only for the sake of simplicity, that all the neurons are of the simple excitatory type, and let φ₀ represent the maximum value of φ for any Sc. Then for the net of Figure 1, let


φ₀ < h′ < φ₀ + ε₀ ,  (1)

φ₀ < h″ < φ₀ + φ₀u ,  (2)

φ₀u < h″ ,  (3)

where ε₀ is the value of the excitation at sc due to the circuit when in steady-state activity, and where the φ's refer to the afferent neurons.

If the unconditioned stimulus Su sufficiently exceeds the threshold of its afferent, the response R may be elicited. But, as we assume that the circuit C is not active initially, the stimulus Sc cannot produce the response R because φ₀ < h′. Furthermore, neither Su nor Sc can bring C into activity when there is too long a time between their occurrence. Thus Su alone can produce R but Sc cannot. Now suppose that Su and Sc are applied together for a sufficiently long time. Though simultaneous presentation is not a necessary condition, it will be considered here to simplify matters. Because of condition (2), the threshold of the circuit will be exceeded and the circuit will pass over into a state of steady activity. If, now, a large enough Sc is applied alone for a long enough time, the threshold h′ will be exceeded because of condition (1). Thus the response R may now be produced by the hitherto inadequate stimulus Sc alone, and the structure exhibits one of the principal features of conditioning.
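These threshold relations can be illustrated by a minimal numerical sketch. This is a hypothetical construction, not from the monograph: the continuous two-factor dynamics are reduced to fixed excitation levels, Su's own direct pathway to R is omitted, and the numerical values are chosen merely to satisfy conditions (1)-(3).

```python
# Steady-state sketch of the net of Figure 1 (all values hypothetical).
# phi0:  maximum excitation delivered at s_c by the conditioned stimulus S_c
# phi0u: maximum excitation delivered by the unconditioned afferent from S_u
# eps0:  excitation maintained at s_c by the circuit C once it is active
phi0, phi0u, eps0 = 1.0, 1.5, 0.8
h1 = 1.4   # h'  : condition (1), phi0 < h' < phi0 + eps0
h2 = 2.0   # h'' : conditions (2) and (3), phi0 < h'' < phi0 + phi0u, phi0u < h''

def circuit_active(sc_on, su_on, already_active):
    """C passes into steady activity once its threshold h'' is exceeded."""
    return already_active or phi0 * sc_on + phi0u * su_on > h2

def response(sc_on, c_active):
    """R is evoked through s_c when the excitation there exceeds h'."""
    return phi0 * sc_on + eps0 * c_active > h1

C = False
r1 = response(1, C)            # S_c alone, before conditioning: no response
C = circuit_active(1, 0, C)    # S_c alone cannot start the circuit
C = circuit_active(0, 1, C)    # S_u alone cannot start it either
C = circuit_active(1, 1, C)    # S_c and S_u together exceed h''
r2 = response(1, C)            # S_c alone now evokes R
print(r1, C, r2)
```

Running the sketch shows the response absent before the joint presentation and present afterward, which is the conditioning property described in the text.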

If we add now the inhibitory neurons III′ and III, the resulting structure will exhibit another feature important in the phenomenon of conditioning. Whenever Su is applied, the effect of neuron III is blocked by III′. But if Su is not applied, then the continuous or repeated application of Sc may cause III to produce enough inhibition at s′ to block the action of Sc if conditioning has previously taken place. This corresponds to the loss in effectiveness of the conditioned stimulus which occurs when it is applied repeatedly without reinforcement by the unconditioned stimulus.

If, instead of a single circuit C, we assume that there are a number of them having different thresholds, we should be able to show that a more intense Su and Sc would tend to produce a more intense response. Furthermore, by considering repeated applications of Su and Sc, it is possible to determine the effect of the number of repetitions on the conditioning. By combining these extensions of the structure, N. Rashevsky (1938, chap. xxv, equation 44) obtains an expression

εR = A (1 − e^{−an})  (4)

82 MATHEMATICAL BIOPHYSICS OF THE CENTRAL NERVOUS SYSTEM

for εR, the excitation tending to produce the response R when Sc is applied, as a function of the number, n, of repetitions. The constant A increases with the intensity of the conditioned stimulus, while the constant a increases with both Su and Sc and depends on the time between repetitions and the time of stimulation at each repetition. This, too, is in qualitative agreement with results of experiments on conditioning.
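Equation (4) is easy to evaluate directly; the constants A and a below are hypothetical, chosen only to display the shape of the curve and its near-linearity for small n.

```python
import math

# Equation (4): eps_R = A(1 - e^(-a n)), the excitation produced by S_c
# after n reinforced repetitions (A and a are hypothetical values).
A, a = 10.0, 0.2

def eps_R(n):
    return A * (1.0 - math.exp(-a * n))

early = eps_R(1)     # close to the linear approximation A*a*n for small n
late = eps_R(50)     # approaches the asymptote A
print(early, A * a, late)
```

For small n the curve rises with slope roughly Aa, which is the proportionality constant discussed below; for large n it saturates at A.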

This conditioning mechanism requires the continued activity of some circuits, and the objection could be raised that the great stability of well-established memory-patterns, which resist disruption either by such major disturbances as shock and narcosis or by the cumulative effect of countless minor vicissitudes over a period of time, is inconceivable in terms of so vulnerable a structure. To this objection at least two replies are possible. One is that a quantitative theory is useful in proportion to the extent of the phenomena for which it can account, and it is not less useful for failing to account for others. However, the objection can be met in a more positive way by supposing that the relatively rapid changes permitted by the above mechanism lead in some fashion to more permanent structural changes. It is quite possible that the usual intermittent rise and fall of ε and j at a given synapse would have no physiological effect beyond the exciting and inhibiting effects which we have postulated, whereas the maintenance at some sufficiently high value of either or both would cause permanent changes involving, among other things, a modification of threshold (cf. Douglas, 1932). The theory of the process of conditioning would then hold as outlined, but would require a supplement to account in detail for the observed stability.

But regardless of the mechanism, the facts of conditioning require some change in ε with successive trials, leading to a change in response. The simplest assumption possible is that ε is proportional to the number of trials, at least when the number of trials is small, and this is the essential content of equation (4), with Aa the constant

Figure 2


of proportionality. The theory could be developed formally regarding Aa as a purely empirical constant with no definite physiological significance. But the model enables us to relate this constant to such variables as the strengths of the conditioned and the unconditioned stimuli, temporal factors, and the like, and so provides the possibility of relating a larger group of variables in a single formulation.

For interpreting some experimental results in these terms, we consider the net shown in Figure 2 (Landahl, 1941). Let a stimulus Sc normally produce a response Rc, and let a pleasant response R₁ always follow Rc in the experimental situation. Let Sw normally produce Rw, which, in the experimental situation, leads to an unpleasant stimulus, a less pleasant stimulus, or to an equally pleasant stimulus but after a longer time. Let the circuits M and C each represent a large group of circuits of different thresholds. Let the part of the structure composed of neurons III, III′, IV, IV′ be equivalent to the corresponding part of Figure 1 of chapter ix. We shall consider only simultaneous presentation of Sw and Sc. On the first trial, neither C nor C′ can become active, and thus we have acting at sc a quantity ε₀c, and similarly at sw a quantity ε₀w. Then, if one of the two responses must be made, the probability Pc of the response Rc may be given by the approximate equation (8) of chapter ix, with ε₁ − ε₂ replaced by ε₀c − ε₀w.

After Sc and Sw have been presented together n times, the response Rc will have been made, say, c times and the response Rw, w times. We shall refer to n as the number of trials, c as the number of correct responses, and w as the number of wrong responses. Then Pc, the probability of a correct response, may be identified with the proportion of correct responses, so that approximately

Pc = dc/dn; (5)

and similarly

Pw = dw/dn .  (6)

Thus

Pc + Pw = 1 ,  c + w = n .  (7)

When a stimulus Sc is presented, a certain group of the circuits M is brought into activity. Then, each time there is a response R₁, conditioning may take place in some circuits of C, and the amount, as measured by the increase in the excitatory factor Δεc at sc, will not be dependent upon the time tc between the presentation of Sc and the response R₁. But, if the circuits M are acted upon by inhibitory neurons from various external sources, or if the circuits are replaced by single neurons, the activity will decay with the time, tc, roughly exponentially. Thus, we may obtain an expression for Δεc similar to


equation (4). To a first approximation, equation (4) becomes in this case Δεc = c b(Sc, tc), where b = Aa depends on Sc and tc, and where c is the number of repetitions of Sc and R₁ together and thus replaces n. If Sc and tc are constant, the total ε at sc is given by

εc = ε₀c + bc .  (8)

We can obtain a similar expression for εw at sw. This is the case when the final response is pleasant, but if the final response is unpleasant we should expect the effect of the conditioning to be the opposite. That is, the centers C are such that they have inhibitory fibers leading to sw. Then the coefficient corresponding to b will be some negative quantity −β. Thus

εw = ε₀w − βw ,  (9)

as w is the number of repetitions of the wrong response leading to R₂.

Let us apply these results to the particular experimental situation which arises when Lashley's jumping-apparatus is used. Here an animal is forced to jump toward either of two stimuli. Choice of one leads to reward, choice of the other may lead to punishment. For simplicity, we assume that the times tc and tw, respectively, from presentation of the stimuli to the reward and punishment, are constants. If we then introduce equations (8) and (9) into equation (8) of chapter ix, and eliminate Pw and c by means of equations (6) and (7), we obtain a differential equation in w and n. From this, with the initial condition w = 0 for n = 0, we obtain

w = [1/(k(b − β))] log { 2b e^{k(ε₀c−ε₀w)} / [ 2b e^{k(ε₀c−ε₀w)} − (b − β)(1 − e^{−kbn}) ] }  (10)

for b ≠ β. For b = β, the result is a rising curve which approaches a limit exponentially. In terms of the mechanism, we may consider the experiment as requiring a discrimination between two stimuli whose values, in effect, change in successive trials. The correct stimulus becomes effectively larger due to the conditioning while the wrong one decreases. Thus, the probability of a wrong response diminishes.
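The same learning curve can also be obtained by direct trial-by-trial simulation of the underlying assumptions: a logistic choice-probability in k(εc − εw), with εc and εw updated after each trial as in equations (8) and (9). The sketch below is hypothetical in all its numerical values and assumes the two-choice logistic form for equation (8) of chapter ix.

```python
import math
import random

random.seed(1)
# Two-choice learning, simulated trial by trial (hypothetical values).
k, b, beta = 1.0, 0.05, 0.0      # beta = 0: punishment without effect
eps_c, eps_w = 0.0, 0.0
c, w = 0, 0
errors = []                      # cumulative error count after each trial
for n in range(300):
    p_correct = 1.0 / (1.0 + math.exp(-k * (eps_c - eps_w)))
    if random.random() < p_correct:
        c += 1
        eps_c += b               # equation (8): each correct trial adds b
    else:
        w += 1
        eps_w -= beta            # equation (9): each error subtracts beta
    errors.append(w)
print(c + w, w)
```

Errors accumulate near the chance rate at first and then level off as εc grows, which is the rising, limit-approaching curve described in the text.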

In Figure 3 is shown a comparison between the theory and the experimental data of H. Gulliksen (1934). The lower and upper curves were obtained respectively by setting ε₀c − ε₀w = 0, kb = .0121, β = 0, and k(ε₀c − ε₀w) = −.46, kb = .0229, β = 0. Besides giving a quantitative relation between w and n, our considerations actually give a great deal more. From the results of the preceding paragraphs, we can obtain a function b(Sc, R₁, tc); that is, b is a function of the intensity of the stimulus, the strength of the reward, and the time tc.


Figure 3. — Comparison of theory with experiment: simple learning. Curves, theoretical, from equation (10); points, experimental (Gulliksen, 1934). Abscissa, number of trials; ordinate, number of errors.

Similarly, one can obtain β(Sw, R₂, tw). Thus we should be able to make predictions for data as in Figure 3, but for various strengths of reward or punishment and for other variables. In this way a considerable amount of data could be brought into a single formulation and the predictions tested by experiment.

By considering various modifications of the experimental situation consistent with the restrictions imposed, we can derive other relations which could be checked by experiment (Landahl, 1941). If the responses R₁ and R₂ are made identical but tc ≠ tw, we have an analogy to a situation in which there are two paths to a goal-response requiring different times, tc and tw, to traverse. We would refer to the shorter path as correct, and thus tc < tw. For constant Sc and Sw, the coefficient b will be a function of tc or tw only. If Sc and Sw are not too different, εc will equal ε₀c + b(tc)c and εw will equal ε₀w + b(tw)w, since the final response is the same. But, as b decreases with t, b(tc) > b(tw). Thus, at least when Sc = Sw initially, the probability of the wrong response will decrease towards zero since, for small c and w, c = w, and εc − εw = b(tc)c − b(tw)w > 0. Thus, we could determine the number of errors as a function of the number of trials for various tc and tw. If ε₀c < ε₀w, the correct response may never be learned.

Elimination of a blind alley can be accounted for since, the correct path being entered last, the time between Sc and the reward is less than the time between Sw and the reward. Hence on later trials there is a tendency to turn away from the wrong stimulus. An equation for the number of errors as a function of the number of trials has been obtained on this basis and studied in relation to such parameters as strengths of reward and punishment, length of the alley, and distance (time) from blind to goal (Landahl, 1941). One finds that generally fewer errors will be required to eliminate a blind alley if it is close to the goal. The dependence of the number of errors required to eliminate the blind upon the length of the alley is found to be fairly complex. According to the strength of the reward, we find that a blind far from the goal will be eliminated with more difficulty the longer it is, while if it is near the goal it will be eliminated more readily if it is long. What we wish particularly to emphasize is that from a relatively simple structure fairly complex activity can be deduced.

It is possible to generalize the mechanism to include a choice from among any number N of stimuli by constructing a net similar to that of Figure 2, but with N afferents and N(N − 1) crossing inhibitory neurons (cf. chap. iii). Suppose that out of the N stimuli there is but one correct stimulus Sc. Then, instead of considering the individual wrong responses, we may consider their average effect. Thus,

if εc is the net value at sc due to the correct response, and if εw is the average value of all the εw's, we may write [Landahl, 1941, equation (9)]

log [ Pw / ((N − 1) Pc) ] + k(εc − εw) = 0  (11)

in place of equation (8) of chapter ix. We note that for εc = εw, Pw = (N − 1)/N, as would be expected by chance, while for large εc − εw, Pw tends toward zero.
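Taking (11) in the form log[Pw/((N − 1)Pc)] + k(εc − εw) = 0 and combining it with Pc + Pw = 1 gives an explicit expression for Pw, whose two limiting properties are easy to verify numerically. The values of k and of εc − εw below are hypothetical.

```python
import math

# From log(Pw / ((N-1) Pc)) + k(eps_c - eps_w) = 0 and Pc + Pw = 1:
#   Pw = (N - 1) / (N - 1 + e^(k d)),  where d = eps_c - eps_w.
def p_wrong(N, k, d):
    return (N - 1) / (N - 1 + math.exp(k * d))

print(p_wrong(4, 1.0, 0.0))    # chance level: (N-1)/N = 0.75
print(p_wrong(4, 1.0, 10.0))   # large eps_c - eps_w: near zero
print(p_wrong(2, 1.0, 0.0))    # two choices at chance: 0.5
```

For N = 2 the expression reduces to the ordinary two-choice logistic in k(εc − εw).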

In the experimental situation, let a stimulus S′ᵢ (i = 1, 2, ⋯, M) accompany a group of stimuli Sⱼ (j = 1, 2, ⋯, N) of equal intensity, one and only one of which will elicit its response. Among the stimuli Sⱼ is a stimulus Sᵢc, the "correct" stimulus corresponding to S′ᵢ, which when chosen results in a reward; response to any other stimulus Sⱼ when accompanied by S′ᵢ results in punishment, or at least no reward. The number N may be referred to as the number of possible choices, while the number M is the number of associations to be learned in the experiment. After a wrong response is made, the experimenter may choose to assist (prompt) the subject in making the correct response or he may not. He may do so each time, not at all, or, in general, some fraction, 1 − f, of the times. Thus f is a variable under the control of the experimenter just as are M and N. We shall assume that throughout any particular experiment M, N, and f are


not changed. Then

εc = ε₀c + bc + b(1 − f)w ,  (12)

since conditioning improves with each correct response as well as with a fraction (1 − f) of the wrong responses. The prompted correct responses are not counted in c, so that we do not change the relation n = c + w. At each wrong choice, a quantity β is subtracted from the εw of the wrong stimulus chosen. This contributes only β/(N − 1) to the average. Thus

εw = ε₀w − βw/(N − 1) .  (13)
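Equations (12) and (13) are simple bookkeeping and can be written out directly; this is a hypothetical sketch, with all numbers merely illustrative.

```python
# Equation (12): excitation at s_c after c correct and w wrong responses,
# with prompting supplied on a fraction (1 - f) of the errors.
def eps_correct(eps0c, b, c, w, f):
    return eps0c + b * c + b * (1 - f) * w

# Equation (13): average excitation over the N - 1 wrong centers, each
# error subtracting beta from the center actually chosen.
def eps_wrong_avg(eps0w, beta, w, N):
    return eps0w - beta * w / (N - 1)

print(eps_correct(0.0, 0.1, 10, 5, 1.0))   # f = 1, no prompting: only c counts
print(eps_correct(0.0, 0.1, 10, 5, 0.0))   # f = 0, full prompting: errors count too
print(eps_wrong_avg(0.0, 0.2, 6, 4))
```

With f = 1 only the correct responses contribute to εc; with f = 0 every prompted error contributes as well, which is why prompting accelerates the learning described by equation (15).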

The parameter b gives a measure of the amount of conditioning per trial. If a response to one stimulus has no effect on the conditioning at centers corresponding to other stimuli, then b is independent of M. But, if the response to one stimulus results in the stimulation of inhibitory neurons terminating at the various other conditioning centers, then b will be less when there are more items M to be learned. We may account for this by introducing, as a rough approximation, the relation

bk = η/(M + ζ) ,  (14)

where η and ζ are two parameters replacing bk.

Assuming ε₀w = ε₀c, substituting equations (5), (12), and (13) into (11), and eliminating c by equation (7), we obtain a differential equation in w and n. For the initial condition w = 0 for n = 0, and with b eliminated by relation (14), the solution of the differential equation is

w = [(N − 1)(M + ζ)/(η(Nf − f − 1))] log { (N − 1) / [ (Nf − f − 1) e^{−ηn/(M+ζ)} + N − Nf + f ] } .  (15)

This equation gives the number w of errors as a function of the number n of trials for any number M of items, for any number N of possible choices, and for any fraction (1 − f) of prompting by the experimenter. All this involves only the two parameters ζ and η. As we have considered a highly over-simplified mechanism and introduced a number of approximations, it is not to be expected that the predictions of equation (15) should hold over too wide a range of values of M and N.

In Figure 4 are data obtained from a single experiment for the purpose of illustrating a rather special case of the experimental procedure outlined above. The experiment corresponds to the case in


Figure 4. — Comparison of theory with experiment: simple learning. Curves, theoretical, from equation (15); points, experimental (Landahl, 1941). Abscissa, number of trials; ordinate, number of errors.

which M = N and f = 1. Then, setting η = 1.15 and ζ = .098, we obtain the three curves for which N = 4, N = 8, and N = 12, respectively. The values of η and ζ may be determined from one point of each of any two curves. The third curve is then without unknown parameters. But another family of such curves is also determined by equation (15), without any additional parameters, for the case in which prompting follows each wrong response, i.e., f = 0. In fact, f can be given any value in the range 0 ≤ f ≤ 1, and M and N need not be equal.

From the previous discussion, we should expect b to depend upon the strength of the reward. To a first approximation, we may set b = a₁p/k, where p is a measure of the strength of the reward. Similarly, we can set β = a₂p′/k, where p′ is a measure of the strength of the punishment. Equation (15) then determines a hyper-surface in the seven variables w, n, N, M, f, p, and p′ in terms of the three parameters a₁, a₂, and ζ.


Furthermore, we may wish to determine what the performance, in terms of the probability of a correct response, would be if, after n′ trials, the experiment is discontinued for some interval of time. The addition of a single parameter enables one to determine the performance in terms of the various experimentally controlled variables. One can then also determine the time required for the performance to drop to some preassigned level. It is found, for example, that while increasing the number of trials beyond some value has little or no effect on the performance at the time, it may have a considerable beneficial effect on the performance at some subsequent time. This is essentially what occurs in overlearning.

But these results apply only to the case of recognition-learning, as we have assumed that the stimulus to which response is to be made is present at the time of choice. A generalization of the results can be made so as to include the case of recall-learning (Landahl, 1943). A number of parameters must be introduced in this case, but also two new experimental variables enter. One variable determines whether the experiment is that of recognition or recall. The other variable is the number of correct responses, as in this case c cannot be determined from n = c + w due to the "equality" response, which in this case is a lack of response. Thus, with a small number of parameters specified experimentally, c and w can be determined for various values of the seven other variables and the result may then be compared with experiment.

XII

A THEORY OF COLOR-VISION

According to the Young-Helmholtz theory, any color can be matched, at least after sufficient desaturation by admixture with white light, by combining in suitable proportions lights of three given colors (cf. Peddie, 1922). These three colors may be chosen arbitrarily except that no one is to be matchable by combining the other two. The fact that three primary colors are sufficient for a match of all others strongly suggests, as Helmholtz brought out, that retinal elements of three distinct types are involved in the perception of color. This interpretation has not gained universal acceptance by investigators, two of the objections being that anatomical studies fail to differentiate three types of receptor, and that the degree of acuity with monochromatic illumination is too high to be accounted for by only one-third the total density of elements. Hence theories have been proposed to yield quantitative predictions of the discriminal precision in judging color-differences in particular without postulating three distinct types of receptor (e.g., Shaxby, 1943). However, the case here is analogous to the case of discrimination of intensity-differences in general: different intensities, and also different colors, can occasion qualitatively different responses, so that at some place in the neural pathway from receptor to effector the locus of the σ must be capable of varying with variation of the stimulus. This statement is practically self-evident, since obviously different final pathways are involved in reaching the different effectors, so that the only question is where the variation occurs — centrally, along the afferent pathway, or along the efferent pathway. Wherever this may take place, the method demands explanation. But the statement is also a consequence of the well-known Müller law of specificity, as is clearly brought out by Carlson and Johnson (1941).

Hence the neural mechanism which mediates any discriminal process must provide for a segregation of the neural pathways affected by different stimuli. In the case of a simple stimulus characterized by a single parameter, S, there needs to be only a one-dimensional array of synapses reached by neurons from the receptor in such a way that when S has one value the resultant σ is positive at one set of the synapses, and when the value of S is changed sufficiently the resultant σ is positive at a different set of the synapses. A mechanism capable of bringing this about was described in chapter ix. When the stimulus is complex and requires three parameters, Sᵢ (i = 1, 2, 3),


to specify it, then a three-dimensional array of synapses is required.

In speaking of one-dimensional and three-dimensional arrays we are not referring to their actual spatial distribution in the cortex, since manifestly all synapses are distributed in three spatial dimensions. We refer only to an abstract mode of classifying all the synapses in question. In the case of the discriminating mechanism of chapter ix, each synapse of the discriminating center is characterized by a certain h in such a way that, given any interval bounded by the numbers h′ and h″, it is possible to say unambiguously of any given synapse whether its associated h does or does not lie within this interval. In the mechanism now to be discussed, of which we say that the synapses form a three-dimensional array, each synapse is characterized by the set of three parameters S₁, S₂, and S₃, and given any set of three intervals Sᵢ′ to Sᵢ″, we can say unambiguously of any synapse that each Sᵢ associated with it does or does not lie upon the interval from Sᵢ′ to Sᵢ″.

We shall now describe a mechanism which generalizes that of chapter ix and provides the segregation of pathways required for the discrimination of colors. While it may not be the simplest one possible, it does possess the necessary qualitative properties, and no other mechanism has been proposed which does. We follow Helmholtz and assume three types of retinal receptors, each connected to all the synapses of the three-dimensional array constituting the "color-center." We utilize the three-receptor hypothesis because it is convenient, not because we are necessarily convinced that it is "true." We consider a small region of the retina only, containing one of each of the three primary receptors, and we disregard the problem of spatial localization or other attributes of the sensations which accompany the stimulation of these receptors, confining ourselves exclusively to the sensation of color. Admittedly the other attributes demand explanation, but we regard the problems as distinct.

The spatial arrangement required of the synapses at the color-center is highly arbitrary, and while we imagine a specific spatial localization of the various synapses this is for convenience of description only. With this understood, we suppose: (1), each primary receptor is associated with a particular one-dimensional array, or axis, of synapses, the three axes being mutually orthogonal and all concurrent at a point O; (2), the stimulation of the i-th primary receptor in the amount Sᵢ occasions the production of σ throughout the color-center, the density being greatest all along its associated axis and being everywhere a function σᵢ(Sᵢ, P) of Sᵢ and of the assumed position P; (3), as a function of position σᵢ depends upon two parameters only, the distance r = OP from O, and the angle θᵢ between


OP and the i-th axis. The assumption that σᵢ depends upon θᵢ alone is made for the sake of simplicity, and a natural generalization would be to allow all three angles to enter, these angles being related by the identity

Σᵢ cos²θᵢ = 1 .  (1)

For further simplification we suppose that the dependence upon θᵢ is through the cosine, so that the connections from each primary receptor are symmetric about the associated axis, and we suppose in addition that when the units for the Sᵢ are properly chosen the three functions σᵢ(S, r, cos θ) are identical. Neither of these restrictions is essential.

The immediate result at the color-center of the stimulation of the three primary receptors is then the production at every point P of

σ(S, P) = Σᵢ σᵢ(Sᵢ, r, cos θᵢ) .  (2)

If, finally, the functions σᵢ are so chosen that for any S, σ has always a maximum at a single point P, then the introduction of sufficient mutual inhibition between synapses (cf. chap. iii) will make the resultant σ negative everywhere but in the neighborhood of P, and the desired segregation of the pathways is secured.

If the functions are properly chosen, the analytical result will be essentially a representation of the familiar color-pyramid, as, indeed, we wish it to be, since this is found empirically to provide an accurate representation of the phenomena. What we provide, and what is not contained in the theory of the color-pyramid, is a neural mechanism capable of mediating perceptions organized in this way. We have supposed that the basis for a difference between perceptions lies in the discreteness of loci of excitation, and the mechanism here described is capable of separating the loci. By the same rule an observed similarity must have its basis in some community of the loci. Hence if we suppose that synapses of the color-center which are collinear with the origin are all connected to some further center common to that set, we have a possible basis for the identity of colors of varying intensity; if the centers of this group corresponding to coplanar rays themselves lead to a common tertiary center, we have a basis for identity of colors of varying intensity and saturation, and so on.

The form of the predictions from this mechanism must coincide with those of the simpler one (chap. ix) when only intensity, but not color or saturation, is varied. Hence each function σᵢ(S, r, 1) must be proportional to r(S − r) if we suppose the h-centers of the previous mechanism to be uniformly spaced. Further, each σᵢ, as a function of θᵢ, should have a maximum at θᵢ = 0. The simplest possible supposition is then that

σᵢ = r(Sᵢ − r) cos θᵢ .  (3)

With this assumption the maximum of σ for any stimulation Sᵢ occurs for

cos θ₁ : cos θ₂ : cos θ₃ = S₁ − r : S₂ − r : S₃ − r ,

with r the smaller root of

6r² − 3r Σ Sᵢ + Σ Sᵢ² = 0 .

A more exact specification of the form of the functions σᵢ can be made only by a detailed comparison with experimental facts.
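The stated maximum can be checked numerically. For a fixed r, the direction maximizing Σ r(Sᵢ − r) cos θᵢ under Σ cos²θᵢ = 1 has cos θᵢ proportional to Sᵢ − r, leaving the one-dimensional function g(r) = r √(Σ(Sᵢ − r)²) to be maximized; a grid search over 0 < r < min Sᵢ agrees with the smaller root of the quadratic. The Sᵢ values below are hypothetical (they must be close enough to equality for the quadratic to have real roots).

```python
import math

S = [1.0, 1.2, 1.1]   # hypothetical stimulation of the three primary receptors

def g(r):
    # value of sigma at the best direction for a fixed r (equation (3) summed,
    # with cos(theta_i) taken proportional to S_i - r)
    return r * math.sqrt(sum((s - r) ** 2 for s in S))

s1 = sum(S)
s2 = sum(s * s for s in S)
# smaller root of 6 r^2 - 3 r (sum S_i) + (sum S_i^2) = 0
r_star = (3 * s1 - math.sqrt(9 * s1 ** 2 - 24 * s2)) / 12

# crude grid search over 0 < r < min(S_i) for comparison
r_grid = max(range(1, 1000), key=lambda i: g(i * min(S) / 1000)) * min(S) / 1000

print(round(r_star, 3), round(r_grid, 3))   # the two estimates agree
```

The agreement of the analytic root with the grid search confirms that the smaller root of the quadratic locates the interior maximum of σ.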

XIII

SOME ASPECTS OF STEREOPSIS

In this chapter we present some purely formal considerations of the visual perception of space, without providing any specific neural mechanism. Such a formal analysis is, of course, a necessary preliminary, since evidently when we do not know in advance the structure of the neural mechanism involved we must know what is required of it if we are to deduce the structure.

The structure of subjective space is developed gradually during the life of the individual and is the resultant of diverse sensory cues — visual, auditory, kinaesthetic, and perhaps others. The recognition of two pin-pricks simultaneously applied to different parts of the body, as distinct, involves discrimination of a certain primitive type and requires a certain minimal separation of the points; to recognize that one pin-prick is located at such a distance and in such a direction from the other involves a judgment much more advanced in form and requires a neural mechanism of much greater complication. To account for the first judgment no assumption is required beyond the distinctness of the neural pathways, and of the cortical centers ultimately affected by the two pricks. But the second judgment, while keeping the pricks distinct, assimilates them into a certain continuum and hence relates them in a definite way. Consequently the cortical centers must be connected with each other and with the motor centers in some definite way, perhaps in such a way as to make possible the continuous movement of, say, an index finger from the location of one prick to that of the other.

Similar remarks may be made of vision. Let us confine ourselves for the present to monocular vision. The judgment that two objects, seen simultaneously, are distinct, and the judgment as to their relative positions, are quite different judgments, and the first does not by any means imply the second. The second judgment may be somehow associated with the ocular rotations that would be necessary for the fixating of first the one point and then the other (cf. Douglas, 1932), in which case, if the visual space has been integrated into a unified space of perception as a whole, there may be associations of some sort with movements of the body or of a member from the one point to the other. In either case, however, that of the pin-pricks or that of the visual objects, we say nothing about whether the motor and kinaesthetic connections are a part of the native endowment of the organism and formed, possibly before, possibly after birth, independently of any experience, or whether they are somehow due to conditioning.

Whatever may be the nature of a given perception, therefore, the perception of a disjunction implies a neural disjunction of some sort, whereas the localization of the disjoined elements within a field of relations implies the presence of interconnections of some sort between nervous elements involved in the perception of these elements, whether these interconnections were previously functional, or only became functional as a result of the perception itself. And while there may be no unique structure of nervous interconnections capable of yielding the system of relations as perceived — the solution of the inverse problem of neural nets is not necessarily unique — nevertheless there are limitations, and empirical anatomical data may serve, in time, to complete the characterization.

The problem we wish to consider is the following. By whatever means it is acquired, the normal human adult does possess a perceptual space within which he locates the objects of his perception. With objects some distance away from him, visual cues are the chief, and frequently they are the only, means available to him for the localization of these objects within this space. In these cases, where the visual cues are the only ones, how is the localization effected? That is, what kinds of cues can be provided by the eyes alone, which, together with past associations (but not the memory of a previous localization), may serve the subject in localizing the object within his perceptual space? If the eyes themselves provide several, more or less independent, cues for localizing the same object — and they certainly do — then the final localization must come as a kind of resultant of all of these. Under normal conditions these cues would doubtless act in harmony and reinforce one another, thus providing a fairly accurate judgment. But under abnormal conditions due to pathology or instrumentation, the normal harmony would be disrupted, making the perceived relations a bizarre distortion of the true ones. In fact, it is by considering the nature and occurrence of these distortions when the cues are in disharmony that we might hope to get our best information as to their separate modes of operation.

In the system of spatial relations among disjoint elements, perhaps the simplest and most primitive, beyond, of course, the mere fact of separation, is the amount of separation. This is easily understood in terms of kinaesthesis. Greater separation requires more movement for crossing it, a more intense kinaesthetic sensation, and in these terms our discussion of the discrimination of intensities may find an application here. With reference to visually perceived extent a suggestion has been made already in this direction (Householder, 1940; cf. chap. ix).

Perhaps the most obvious cue for judging distance is the "apparent size" of the object when the actual size is known or inferred from previous experience. The "apparent size" is by definition the solid angle subtended, but is not necessarily the size the object appears to have. Thus a distant man appears small, but when he is fairly close the size he appears to have stays fairly constant while his distance, and hence his "apparent size," varies over quite a wide range. The "apparent size" is proportional to the size of the retinal image and provides a distance cue.

An object seen through a spyglass appears flattened, and a possible explanation can be found from considerations of apparent size. Thus consider a cube with one edge nearly, but not quite, sagittal in direction, and suppose it is viewed through a spyglass magnifying in the ratio M = 1 + u . That is, let the retinal image of the cube as seen through the spyglass be M times that formed by the cube seen at the same distance by the naked eye. If the actual distance of the front face is d , and if the edge of the cube has length s , then the back face has the actual distance d + s , but due to the magnification, front and back faces appear to be only 1/M times as far. They appear, therefore, to have the distances d/M and (d + s)/M . But this leaves for the apparent depth of the cube only the distance s/M .

While it is well known that qualitatively the effect is present as described, no quantitative data are at hand, and the theory here suggested might fail to meet the more exacting requirements of a quantitative test. In the first place, it is assumed that, in the absence of other cues, the perceived distance would be exactly 1/M times the actual distance. This might not be the case. The perceived distance might be, say, 1/μ times the true distance, where μ < M . But if so, we should expect the judged depth to be 1/μ times the actual depth and the judged size to be M/μ times the actual size. Moreover one could perform an experiment in which a truncated pyramid is presented instead of a cube, the dimensions being such that the subject would be expected to perceive it as a cube. For this, when the distance is judged to be 1/μ times the actual distance d and the magnification is M times, the retinal image is the same as would be produced by a cube seen with the naked eye at a distance of d/μ . Now if a cube whose edges are Ms/μ is placed so that the nearest face is at a distance of d/μ , then the visual angles subtended by an edge of the front face and an edge of the back face are Ms/d and Ms/(d + Ms) . These must be the visual angles of the faces of the truncated pyramid placed at a distance d and magnified M times. Hence, if the depth of the frustum, as well as each of the edges nearest the observer, is s , the edges of the other face must be

s(s + d)/(Ms + d) = s[1 − us/d] ,

if Ms is small by comparison with d . Note that this result is independent of μ .
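The arithmetic of the spyglass analysis can be checked numerically. Below is a minimal Python sketch (the function names and sample values are ours, not the monograph's):

```python
# Sketch of the spyglass analysis; names are ours.  A cube of edge s at
# distance d, viewed through a spyglass of magnification M = 1 + u,
# appears at the distances d/M and (d + s)/M, so its depth shrinks to s/M.
def apparent_depth(s, d, M):
    return (d + s) / M - d / M          # equals s/M: the flattening

# Far-face edge of the truncated pyramid that should be seen as a cube:
def frustum_far_edge(s, d, M):
    exact = s * (s + d) / (M * s + d)   # s(s + d)/(Ms + d)
    u = M - 1
    approx = s * (1 - u * s / d)        # s[1 - us/d], for Ms small next to d
    return exact, approx

s, d, M = 0.1, 10.0, 1.05
assert abs(apparent_depth(s, d, M) - s / M) < 1e-12
exact, approx = frustum_far_edge(s, d, M)
assert abs(exact - approx) < 1e-5       # the approximation is close here
```

Neither expression involves μ, in agreement with the remark above.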

While no mechanism is suggested here for this dependence of perceived distance upon apparent size, we note that a converse mechanism has been suggested by Landahl (1939a) to account for the constancy of perceived size with the concomitant (mutually inverse) variation of apparent size and actual distance.

The factor of accommodation (Stanton, 1942) is certainly not sensitive as a distance-cue, but there is evidence that it does operate (Grant, 1942). Distant objects are seen with the relaxed eye (if emmetropic), whereas to focus clearly on nearby objects requires an effort of accommodation. Convergence, a binocular cue, is more certain (Householder, 1940c). Convergence upon near objects also requires an effort, although it is known that the visual axes of relaxed eyes are not parallel, but diverge, so that some effort is required also for binocular vision at a distance. The cues of both accommodation and convergence are muscular, and neither is sensitive enough to provide the fine discrimination known to be possible in binocular stereopsis. In fact, binocular stereopsis can be achieved by means of a stereoscope when the visual axes are parallel and accommodation is relaxed. Thus other cues must be sought.

In normal binocular vision, whereas there are two retinal images, one in each eye, of the object fixated, there is but one cortical image — the individual sees but one object. On the other hand, if the attention, but not the fixation, is shifted to an object enough nearer or farther than the point of fixation, then two images of this single object are seen. Objects somewhere in between the point of fixation and the object seen doubly may be seen singly but, if so, they can be recognized as nearer to or farther from the observer, as the case may be, than the point of fixation.

Let CL and CR represent the centers of rotation of the left and right eyes, respectively. The separation between nodal point and center of rotation is so slight that for present purposes we may regard CL and CR as being also the nodal points. Let P represent the fixation-point. Then PCL and PCR are the two visual axes; let these cut the retinas at PL and PR . The two retinal images of P are therefore located at PL and PR , and when so located these images fuse and the point P is seen singly. Let P' be any point on the line CLP between CL and P . Then P' is also imaged at PL on the left retina, but on the right retina the image is at some point P'R in the temporal direction from PR . If an object is located at P' , and if P' is not too far removed from P , there may be fusion even of the images on PL and P'R , though because of the more temporal location of P'R with respect to PR , P' is judged nearer the observer than P . The situation when P' lies beyond P is similar except that P'R is then medial to PR . But if P' is moved much farther or much closer, fusion is no longer possible, and two images result. We shall say that PL and PR are corresponding points on the two retinas, whereas PL and any other point P'R different from PR are disparate.

While holding the fixation at P , any other point Q within the binocular visual field will form an image on a point QL of the left retina, QR on the right, and the images may fuse, but only if Q is not too near or too far away. We suppose that associated with each point QL of the left retina there is a unique point QR of the right retina which corresponds to it, while QL and any other point Q'R are disparate. If Q has as its images the point QL and a disparate point Q'R not too far removed from QR , fusion may still occur, but fusion is in some sense optimal when the images fall on corresponding points. The extent of the disparity, or the absence of it, then provides a cue for the localization of the point Q in the subject's visual space. It is suggested that the degree of convergence, and possibly also the degree of accommodation, serve as cues for the localization of the general region about P with respect to the observer, while the binocular disparity gives depth to this region and makes possible the localization with respect to P of the objects in its immediate neighborhood. Thus pictures seen in a stereoscope are seen as vaguely somewhere far off, although the positions of the various details with respect to one another are very definite.

The simplest neurological picture to correspond to this seems to be the following. Suppose there is a binocular representation in the visual cortex which is three-dimensional in character. Simultaneous stimulation of corresponding points QL and QR on the two retinas leads to maximal excitation σ at an associated point of the binocular cortex. Simultaneous stimulation of slightly disparate points QL and Q'R leads to excitation σ of somewhat lesser amount and at a somewhat different location. Simultaneous stimulation of widely disparate points gives only subthreshold excitation, if any, in the binocular cortex, though it may lead to excitation in the two cortical regions where the retinas are separately represented (cf. Bichowsky, 1941; Verhoff, 1925). This would involve at least three visual areas in the cortex, two monocular regions, which might be only two-dimensional, and one binocular region which corresponds, point for point, to a certain region of external space containing the point of fixation.

There are doubtless other possibilities, but whatever they are, they must provide for the fact that a slight disparity or none leads to a unitary cortical representation, different representations differing according to the magnitude and direction of the disparity. In other words, the three-dimensional character must be present, whether in a spatial fashion or otherwise. The occurrence of anomalous fusion in some cases of strabismus may seem to invalidate this argument (Brock, 1940). However, it can be questioned whether a real fusion occurs here, and it seems simpler to suppose that a quite different mechanism is at work, with the factor of conditioning playing a predominant role, much as it must do in monocular depth judgments.

If to each point of one retina there is a unique "corresponding" point on the other retina, the lines joining any pair of these to the nodal points will generally be skew and fail to intersect in any point in space. For any given fixation of the two eyes the locus of points in space where pairs of corresponding lines do intersect is called the horopter. There is at least one such point, the fixation-point, and in general the horopter is a curve in external three-space (Helmholtz, 1896). Consider the situation in which, the head being upright, the visual axes are horizontal. The horizontal plane containing the visual axes does not necessarily contain any part of the horopter except the fixation-point and other isolated points. However, if we take any point QL on the intersection of this horizontal plane with the left retina, there will be a point Q'R of the intersection of this plane with the other retina which is closest of all these intersections to the true corresponding point QR , and the point Q in space which projects QL and Q'R lies somewhere on this plane. The locus of the point Q in the horizontal plane of fixation we shall, for the present, refer to as the horopter.

It is evident that in symmetric convergence this horopter-curve should be symmetric with respect to the subject's medial plane. If each eye were symmetric about the fovea, so that corresponding points were at equal distances from the two foveas, it is easy to see that the horopter would always be a circle. Actually it is found empirically that for a suitably located fixation-point the horopter is a straight line, while for nearer fixation it is an ellipse, for more distant fixation a hyperbola, either conic passing through the nodal points of the two eyes (Ogle, 1938).

Now granting the existence of a rectilinear horopter and the anatomical fixity of the corresponding points, that the other horopters are conics of the sort described follows from elementary projective geometry, and furthermore the equations of these can be deduced with no parameters undetermined once the rectilinear horopter has been located (Householder, 1940).

If the geometric analysis is carried further one can see that on placing before one eye, say the left, a cylindrical size-lens, axis 90°, the horopter should undergo a rotation, with the result that the entire visual field would appear to have undergone a rotation in a clockwise direction. This is indeed borne out by experiment, and the amount of the rotation is, as predicted, proportional to the increment in size (Ogle, 1938).

Now so far as the horopter is concerned there is no reason to suppose that any shift of the subjective space should occur when the cylindrical size-lens is placed with axis 180° instead of 90°. Nevertheless, it turns out that there is a shift, of approximately the same amount but in the opposite direction (Ogle, 1940). But none of the cues so far mentioned can account for this, so another one must be at work.

The experimental procedure here is to set up before the observer a plane which is free to rotate about a vertical axis and which contains a few small circles for fusion arranged in two horizontal rows. The subject is asked to adjust the plane until it is parallel to his own frontal plane. Best results are obtained when the fusion-contours are "restricted to relatively small areas above or below the center of the plane" (Ogle, 1940). It is found that the rotation is approximately proportional to the magnification-increment when this is not too great, but that the effect breaks down when the magnification exceeds a certain critical amount.

It is evident that the vertical disparities, rather than the horizontal disparities, are producing the effect, and this fact, together with the fact of the breakdown of the effect for higher magnifications, suggests that the subjects are locating the medial plane rather than the frontal plane. Normally the medial plane would be the locus of objects whose retinal images are equal in the vertical direction, and those which produce unequal images would be on the same side of the medial plane as the eye having the larger image. Normally, too, the medial and the frontal planes are perpendicular. But if a lens giving vertical magnification were placed before the left eye, and if the magnification were not too great, the points yielding equal retinal images would be to the right of the true medial plane. On the basis of this cue any object would therefore appear to lie closer to the left eye and farther from the right than it actually does. But when the magnification is great enough, any object whatever forms a larger image on the left retina, and in this extreme situation, if not sooner, the localization would be accomplished by means of other cues, say the horizontal disparities, which are unaffected by the lens. A quantitative analysis bears out the hypothesis just outlined (Householder, 1943). If we take for the nodal points, CL and CR , the coordinates (±1, 0), with the positive y-axis extending forward from the observer, and if the magnification is, as before,

M = 1 + u ,

so that the increment of magnification in the vertical direction is 100u%, then the locus of points in the horizontal plane whose retinal images are equal in the vertical direction is the circle

[x − (M² + 1)/(M² − 1)]² + y² = [2M/(M² − 1)]² ,   (1)

or, very nearly,

[x − (1 + u)/u]² + y² = [(1 + u)/u]² .   (2)

If we set

λ = u/[2(1 + u)] ,   (3)

then the tangent of the angle of rotation of the medial plane is, for a fixed distance y of the fixation-point,

x/y = λy(1 + λ²y² + 2λ⁴y⁴ + ···) ,   (4)

and for small angles this is equal to the angle itself. This is approximately proportional to the increment of magnification, to the distance of the point of fixation, and also to the interocular distance, since half of this distance is the unit employed. The figure shows a comparison of theory and experiment with two subjects, both left and right eyes. It is especially to be noted that all parameters are determinable, these being only the interocular distance and the magnification. The breakdown of the effect occurs before the equality of the images becomes impossible, at increments of about 6 or 8%. When a spherical size-lens is employed the two conflicting effects of horizontal and vertical disparities neutralize one another for small magnifications, also up to about 6 or 8%, while for larger magnifications the horizontal disparities provide the effective cue and the effect of the vertical disparities gradually dies out.
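The agreement between the exact circle (1) and the series (4), and the near-proportionality to the increment u, can be checked numerically. A Python sketch (the function names and sample values are ours):

```python
import math

# Rotation of the subjective medial plane for a vertical magnification
# M = 1 + u before the left eye.  Nodal points at (-1, 0) and (1, 0);
# half the interocular distance is the unit of length.
def rotation_exact(u, y):
    M = 1.0 + u
    c = (M * M + 1) / (M * M - 1)       # center of circle (1)
    r = 2 * M / (M * M - 1)             # radius of circle (1)
    x = c - math.sqrt(r * r - y * y)    # branch nearest the median plane
    return math.degrees(math.atan2(x, y))

def rotation_series(u, y):
    lam = u / (2 * (1 + u))             # lambda of equation (3)
    t = lam * y * (1 + lam**2 * y**2 + 2 * lam**4 * y**4)   # equation (4)
    return math.degrees(math.atan(t))

u, y = 0.01, 10.0   # 1% increment; fixation 10 half-interocular units away
assert abs(rotation_exact(u, y) - rotation_series(u, y)) < 0.1
# The predicted rotation is nearly proportional to the increment u:
assert abs(rotation_series(2 * u, y) / rotation_series(u, y) - 2) < 0.1
```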


Figure 1. — Comparison of theory with experiment: rotation of subjective medial plane due to cylindrical size-lens, axis 180°, before one eye. Curve, theoretical from equation (4); points, experimental (Ogle, 1938), for subjects M.B.C. and K.N.O., right and left eyes separately. Abscissa, magnification-increment produced by size-lens, in per cent; ordinate, rotation of the plane in degrees.

PART THREE

XIV

THE BOOLEAN ALGEBRA OF NEURAL NETS

To the extent that our neurons are representative of those studied by the physiologist, our quantities ε and j must be interpreted as time-averages. The assumptions underlying any purely formal theory can be justified only to the extent that the predictions to be drawn from them are borne out by experience. However, our theory is not intended to be a formal one only. On the contrary, it is intended that the theoretical neural structures will represent in some sense the essential character of the actual anatomical structures and that the postulated behavior of the constituent neurons will be representative of the actual physiological activity of the neurons, or at least of recurring complexes of these, in the real organism. Hence our postulated ε and j must be identified with physiological states or events, and the postulated laws of their development, or something closely approximating them, must be deduced from more fundamental principles.

In large measure this is a statement of a program for future research. It is evident, of course, that the ε and j are to be correlated with the "action-spikes" of the neurons. These occurrences occupy milliseconds, whereas ordinary psychological behavior is generally a matter of seconds at least, which justifies a statistical averaging procedure. The first step in the deduction has been made by McCulloch and Pitts (1943), and we now outline their picture, microscopic as to time, of the neuron's behavior, a picture that rests almost immediately upon direct observations. Later, in the next chapter, we indicate how, by carrying out a suggestion due to Landahl, McCulloch, and Pitts (1943), this deduction might be completed. But since the complete deduction of the macroscopic picture, with which we have been concerned so far, from the microscopic picture, is still wanting, these chapters constitute largely a digression from the main trend of our discussions and will appear to be, in some details, even contradictory.

The most striking feature of the nervous discharge as observed in the laboratory is its all-or-none character. A change in the physiological state of the neuron — e.g., one due to a change in the oxygen tension — may alter the potential of any discharge the neuron is capable of making, but in any given condition either the maximal response or none at all will occur. Since nearly all overt reactions have gradations, the complete deduction must explain in detail how the two facts are to be reconciled, and the greater part of this monograph has been devoted to a discussion of these gradations. For the present, however, let us consider some of the formal aspects of the all-or-none feature alone.

An afferent neuron may form any number of synaptic connections with an efferent neuron, and the firing of one afferent alone may or may not be sufficient to elicit a discharge in the efferent neuron. In order for a discharge to be elicited, it is necessary that there be a sufficient number of endfeet of discharging afferents all located within a sufficiently small region of the stimulated neuron. The discharges of the afferent neurons must, moreover, all occur within a sufficiently small time-interval. The minimal number of endfeet of discharging afferents required for eliciting a discharge in any neuron is what we now call the threshold, θ , of the neuron; the endfeet may or may not all come from the same afferent, but for simplicity we suppose them all equal in their effectiveness. The maximal time-interval within which the summation can occur is about a quarter of a millisecond. There is a delay of about half a millisecond between the arrival of the impulses upon a neuron and the initiation of its own discharge. Compared to this synaptic delay, the time required for the conduction from origin to terminus is quite short.

In some instances the arrival of a nervous discharge upon a neuron will have the effect, not of initiating a discharge in it, but of preventing a discharge that would otherwise occur. No explanation of this phenomenon of inhibition has won general acceptance; neither is it known whether the inhibition is complete or only partial. But the fact of its occurrence is beyond question, and only the details of the schematic structure will be affected by the assumption of partial rather than complete inhibition (McCulloch and Pitts, 1943). To be definite we make the arbitrary assumption that inhibition when it occurs is complete.

The McCulloch-Pitts picture involves a formal representation of neural nets in terms of Boolean algebra as follows: Let the life-span of the organism be divided into elementary time-intervals of common length equal to the period of synaptic delay and introduce this interval as the time-unit. Since each nerve-impulse is followed by a refractory period of about half a millisecond during which the neuron is incapable of further activity, no neuron can fire twice within any unit interval, and moreover, from the very definition of the unit of time, the firing of a neuron within a given unit interval can cause the firing of any efferent neuron only during the next unit interval, if at all. Summation or inhibition, if either occurs, is only effective in case the summating or inhibiting neurons fire within the same unit interval.


Now consider any net of interconnecting neurons. Let these neurons be assigned designations ci in any manner and let Ni(t) denote the proposition that the neuron ci has fired during the t-th time-interval. If this is any but a peripheral afferent, fired directly by a receptor, then the necessary and sufficient condition for Ni(t) is that some one proposition or group of propositions among, perhaps, several possible such, of the form Nj(t − 1) shall be true, where the neurons cj are those afferents forming excitatory connections with ci , and that furthermore the propositions Nk(t − 1) are all false, the neurons ck being the afferents forming inhibitory connections with ci . Let αi represent the class of subscripts corresponding to any set of afferents which form excitatory connections with ci and whose summated impulses suffice to excite ci , and let κi represent the class of all such classes αi . Let βi represent the class of all subscripts corresponding to all afferents forming inhibitory connections with ci . In conventional logical symbolism, the negation of Nk(t − 1) is represented by ~Nk(t − 1) and the joint negation for all k ε βi by

Π(k ε βi) ~Nk(t − 1) ,

where the symbol ε here denotes class membership. A sufficient condition for the firing of neuron ci is then

Π(k ε βi) ~Nk(t − 1) . Π(j ε αi) Nj(t − 1) ,

and the necessary and sufficient condition is the disjunction of all such propositions, or

Π(k ε βi) ~Nk(t − 1) . Σ(αi ε κi) Π(j ε αi) Nj(t − 1) ,

the symbols Π and Σ denoting conjunction and disjunction respectively. If we introduce the functor S defined by the equivalence

SNi(t) . = . Ni(t − 1) ,   (1)

then the activity of the neuron ci is completely described by the equivalence

Ni(t) . = . S Π(k ε βi) ~Nk(t) Σ(αi ε κi) Π(j ε αi) Nj(t) .   (2)

To consider examples, suppose neuron c3 has a threshold θ = 2 . If c1 and c2 each has a single endfoot on c3 (Figure 1c), then c3 fires in any interval only if c1 and c2 have both fired in the preceding interval:

N3(t) . = . SN1(t) . SN2(t) .

Figure 1

On the other hand, if c1 and c2 each has two endfeet on c3 (Figure 1b), then c3 fires if either c1 or c2 has fired:

N3(t) . = . SN1(t) v SN2(t) ,

the symbol v denoting disjunction. Finally, if c1 has two endfeet on c3 , but c2 forms an inhibitory connection with c3 (Figure 1d), then c3 fires provided c1 has fired and c2 has not:

N3(t) . = .â„¢SN2(t) .SNAt).

It may be that ci is itself a member of a class αi . For the moment, however, we consider the case when the net contains no cycles, so that if we pass successively from any neuron to any of its efferents in the net we shall never pass twice over the same neuron. Then associated with each neuron ci except the peripheral afferents of the net there is a single equivalence of the form (2), and neither the assertion nor the negation of Ni(t) occurs anywhere on the right in this equivalence. If there occurs on the right of (2) any Nx(t) , where cx is not a peripheral afferent, then Nx(t) can be replaced by the right member of the equivalence associated with cx . By continuing sequentially we shall find ultimately that Ni(t) is equivalent to a certain disjunction of conjunctions of propositions of the form S^nNk(t) and of the negations of such, n being everywhere at least 1, and every ck being a peripheral afferent. Moreover, since every term in the disjunction is a sufficient condition for the firing of ci at the time t , no term in the disjunction can consist exclusively of negations. The set of all equivalences of the type just described constitutes a solution of the net, since this set contains the necessary and sufficient condition for the firing at time t of every neuron in the net, in terms of the firing and non-firing of the peripheral afferents at earlier times.

The right member of an equivalence of the type just described McCulloch and Pitts call a temporal propositional expression (abbreviated TPE), and it denotes a temporal propositional function. A TPE is any sentence formed out of primitive sentences of the type Ni(t) operated upon any number of times by the operator S , the sentences being combined by disjunctions and conjunctions, as well as any sentence formed by conjoining a sentence of the foregoing type with the negation of another sentence of this type. Otherwise put, a TPE is any disjunction of conjunctions of sentences S^nNj(t) and ~S^nNk(t), except that no conjunction can consist exclusively of negations. The importance of this notion of a TPE lies in the fact that any TPE is "realizable," which is to say that given any TPE, it is possible to describe a non-cyclic net of such a sort that this TPE expresses the necessary and sufficient condition for the firing of one of the neurons of this net. In other words, the behavior of any non-cyclic net can be described exclusively by TPE's, and conversely any TPE describes the behavior of a neuron in some theoretically possible non-cyclic net.

We give, as an illustration of this, an example due to McCulloch and Pitts, the construction of a neural net capable of giving the illusion of heat produced by transient cooling. In this illusion, a cold object held momentarily against the skin produces the sensation of heat, whereas if this object is held for a longer time the sensation is only of cold, with no sensation of heat, even initially. Heat and cold are served by different skin receptors; let c1 and c2 be these neurons. Let c3 and c4 be the neurons whose activity implies a sensation of heat and of cold, respectively. Then for the firing of c3 it is necessary either that c1 shall have fired, or else that c2 fired momentarily only, whereas for c4 to fire it is necessary that c2 shall fire for a period of time.

These conditions are expressible symbolically in the form

N3(t) . = . S{N1(t) v S[SN2(t) . ~N2(t)]} ,
N4(t) . = . S[SN2(t) . N2(t)] .

For convenience we suppose that the threshold θ = 2 for each neuron, which is to say that two endfeet from active neurons must connect with any neuron in order that it may be made active. To construct the net we must exhibit connections between the peripheral afferents c1 and c2 , and the peripheral efferents c3 and c4 , by way, perhaps, of internuncial neurons, of such a sort that the equivalences hold as given above. We start with the sentence N2(t) affected by the greatest number of operations S and construct a neuron ca such that

Na(t) . = . SN2(t) .

This requires two endfeet from c2 upon ca . An endfoot from c2 and one from ca each upon c4 gives

N4(t) . = . S[Na(t) . N2(t)] . = . S[SN2(t) . N2(t)] ,

which satisfies the second equivalence. We next introduce cb having upon it two endfeet from ca and an inhibitory connection from c2 , giving

Nb(t) . = . S[Na(t) . ~N2(t)] . = . S[SN2(t) . ~N2(t)] .

Finally a pair of endfeet from cb upon c3 and also a pair from c1 gives the disjunction on the right of

N3(t) . = . S[N1(t) v Nb(t)] ,


and with the substitution from the above equivalence for Nb(t) the construction is seen to be complete (Figure 1e). The other diagrams of Figure 1 are discussed by McCulloch and Pitts (1943).
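The finished net can be verified by stepping it through time. A Python sketch (the function names and stimulus schedules are ours), with threshold 2 and complete inhibition as assumed in the text:

```python
# Sketch of the McCulloch-Pitts heat-illusion net; names are ours.
# c1: heat receptor, c2: cold receptor, ca/cb: internuncials,
# c3: "heat" neuron, c4: "cold" neuron.  Threshold 2 throughout.
def step(prev, c1_now, c2_now):
    """One synaptic delay: the set of neurons firing in this interval,
    computed from the set that fired in the previous one."""
    now = set()
    if c1_now: now.add('c1')
    if c2_now: now.add('c2')
    if 'c2' in prev: now.add('ca')                       # Na = S N2
    if 'ca' in prev and 'c2' in prev: now.add('c4')      # N4 = S[Na . N2]
    if 'ca' in prev and 'c2' not in prev: now.add('cb')  # Nb = S[Na . ~N2]
    if 'c1' in prev or 'cb' in prev: now.add('c3')       # N3 = S[N1 v Nb]
    return now

def run(c2_schedule, steps=6):
    """Stimulate c2 at the given intervals; record when c3 and c4 fire."""
    prev, fired = set(), {'c3': [], 'c4': []}
    for t in range(steps):
        prev = step(prev, False, t in c2_schedule)
        for c in ('c3', 'c4'):
            if c in prev: fired[c].append(t)
    return fired

brief = run({0})                # cold object touched for one interval only
assert brief['c3'] and not brief['c4']        # illusory heat, no cold
lasting = run(set(range(6)))    # cold object held throughout
assert lasting['c4'] and not lasting['c3']    # cold felt, never heat
```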

In the case of non-cyclic nets, the necessary and sufficient conditions for the firing of any neuron can be stated exclusively in terms of the behavior of the peripheral afferents, i.e., the neurons of the net to which no neuron is afferent. Moreover, for a given net the requisite firing times of the peripheral afferents are determinate in terms of the firing times of the efferents. The introduction of cycles, however, renders the problems considerably more difficult. For one thing, a cycle may be of such a sort that when activity is once initiated in a sufficient number of the neurons it will continue indefinitely. For the details in the discussion of this case reference must be made to McCulloch and Pitts (1943). However, there is in every cyclic net a certain minimal number of neurons whose removal would render the net non-cyclic. This number is called the order of the net, so that a non-cyclic net has order zero. Then the behavior of the net is determined by the behavior of the set which consists of these neurons and the peripheral afferents, and the problem reduces to a consideration of the neurons of this set.
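In small cases the order of a net, so defined, can be found by exhaustive search. The sketch below (Python, illustrative only; the example net is hypothetical) removes successively larger sets of neurons until the remaining connections admit no cycle:

```python
from itertools import combinations

def is_acyclic(nodes, edges):
    """Depth-first check that the net restricted to `nodes` contains no cycle."""
    adj = {u: [w for (x, w) in edges if x == u and w in nodes] for u in nodes}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in nodes}
    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True                  # a back edge closes a cycle
        color[u] = BLACK
        return False
    return not any(color[u] == WHITE and dfs(u) for u in list(nodes))

def order_of_net(neurons, edges):
    """Smallest number of neurons whose removal renders the net non-cyclic;
    a non-cyclic net has order zero."""
    for k in range(len(neurons) + 1):
        for removed in combinations(neurons, k):
            if is_acyclic(set(neurons) - set(removed), edges):
                return k

# A hypothetical net: the cycle a -> b -> c -> a plus a self-exciting neuron d.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "d")]
print(order_of_net(["a", "b", "c", "d"], edges))   # 2: one neuron from each cycle
```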

The most striking difference between cyclic and non-cyclic nets, with reference to the types of propositions which occur among the conditions for firing of a neuron, lies in this, that universal and existential propositions may arise for cyclic nets. Thus the net consisting of a neuron c2 whose threshold is 2, and a peripheral afferent c1, in which c1 and c2 each has a single endfoot upon c2, realizes the universal sentence:

N2(t) .≡. (z)t . SN1(z).

This is a symbolic formulation of the assertion that for c2 to fire at the time t it is necessary and sufficient that c1 must have fired in every interval z prior to t. An objection is immediate at this point, to the effect that the theory provides no mechanism by which the circuit can ever get started, and it must be admitted that within the net in question there is no such mechanism. However, it is to be understood that the theory concerns the behavior of the net while free from all disturbing outside influences except specified ones of a specified type (stimulation of peripheral afferents). That is, it is a theory of a system in isolation, as any theory must be. Hence the assertion is to be taken to mean that, the circuit being once started in an unspecified fashion and the system being then placed in "isolation," the circuit can be maintained only by continued stimulation of c1.


If c1 and c2 each has two endfeet upon c2, the existential sentence is realized:

N2(t) .≡. (Ez)t . SN1(z).

This is a symbolic formulation of the assertion that c2 will fire at the time t provided c1 has fired in some interval z prior to t. For in this case the cycle formed by c2, being once set into activity, will continue in a state of permanent activity. In either case we could replace N1(t) by any TPE, and the neuron c1 by the net which realizes this TPE. Again, in the case of the "existential" net, if instead of the single self-exciting neuron c2 we have a cycle of n neurons, then, activity being once started, each neuron of the cycle will be active subsequently once every n time-intervals, and not in every time-interval.
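Both circuits are easily simulated. In the sketch below (Python, our naming; the input trains are hypothetical) the state of c2 is updated once per time-interval from its own previous state and that of c1. The "universal" cell lapses as soon as c1 misses a single interval; the "existential" cell, once started, reverberates permanently.

```python
def universal(c1_train, started=True):
    """c2 has threshold 2, with one endfoot from c1 and one from itself:
    once started, it keeps firing only while c1 fires in every interval."""
    c2, out = int(started), []
    for c1 in c1_train:
        c2 = int(c1 and c2)     # both endfeet are needed to reach threshold
        out.append(c2)
    return out

def existential(c1_train):
    """c2 has threshold 2, with two endfeet from c1 and two from itself:
    either pair suffices, so a single firing of c1 starts permanent activity."""
    c2, out = 0, []
    for c1 in c1_train:
        c2 = int(c1 or c2)      # either pair of endfeet reaches threshold
        out.append(c2)
    return out

print(universal([1, 1, 1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0]
print(existential([0, 1, 0, 0, 0]))    # [0, 1, 1, 1, 1]
```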

It is understood, of course, that some common features of neuronal activity are left out of the formalized picture given here. On the other hand, some of these can be given a formal representation in these terms, such a formal representation having by no means, however, the purport of factual description or explanation of the phenomenon in question. Thus learning could be described in terms of activated cycles and facilitation in terms of internuncials in the net. Extinction can be described in terms of relative inhibition, and reference has already been made to the fact that relative inhibition can, in turn, be replaced formally by total inhibition in nets of somewhat different structure.

XV

A STATISTICAL INTERPRETATION

The time-units in the Boolean formulation are of the order of milliseconds, the approximate minimal separation between consecutive impulses in any neuron being, in fact, about half this unit. Hence, for even the briefest of overt responses, which require generally a duration of seconds or longer, there is time for the occurrence of hundreds or even thousands of individual impulses on the part of any one of the participating neurons. It is legitimate, therefore, and necessary, to develop a statistical theory of the temporal distribution of these impulses for application to the temporally macroscopic psychological processes.

Whether this statistical development will lead immediately to quantities that can be interpreted as being the postulated e and j of the Rashevsky theory is still to be seen. It is quite possible, indeed, that e and j must rather be regarded as the statistical effects of impulses in certain groups of neurons, rather than in individual neurons. As Rashevsky (1940, chap. viii) has emphasized, while the postulated development of e and j is suggested by direct observation of the action of individual neurons, it is necessary to suppose only that there are neural units or groups of some kind whose activity follows the postulated pattern, and the interpretation in terms of groups is strongly suggested by the paucity of hypothetical neurons required to account for many of the psychological processes discussed in earlier chapters of this monograph. Moreover, the predictive success of the theory provides strong presumptive evidence in favor of the supposition that the development of e and j as postulated does correspond to some basic physiological processes, whatever these may be.

As a step toward the deduction of the macroscopic from the microscopic theory, Landahl, McCulloch, and Pitts (1943) have shown how the propositional equivalences can be transformed immediately, by well-defined formal replacements, into statistical relations. To obtain the rule for making these replacements, we now let δ represent the period of latent addition, the interval within which the minimal number of converging impulses must occur in order to result in a stimulation. If vj is the mean frequency of the impulses in the neuron cj afferent to the neuron ci, then vjδ is the probability that an impulse will occur in cj during any particular interval δ, and 1 − vjδ is the probability that an impulse will not occur in cj. If the impulses in the various neurons cj concurrent upon ci are statistically independent, the probability that impulses will arrive concomitantly (that is, within an interval δ) along any group of r of these neurons is equal to the product δ^r Π vj, there being r factors vj. Likewise the probability that no impulse will arrive along any one of a group of s (inhibitory) neurons ck is the product Π(1 − δvk), there being here s factors. And if the group of r neurons cj is sufficient to excite ci, the probability that ci will be excited in any interval δ by the neurons cj is the product of r + s factors

δ^r Πj vj · Πk (1 − δvk)    (1)

if the s neurons ck include all inhibitory afferents to ci and if the impulses are statistically independent. If we then form the product such as (1) corresponding to every group of neurons cj sufficient to fire ci by their concomitant activity in the absence of inhibitory impulses, the probability that ci will fire is given approximately by summing all products so formed. Thus with the same notation as that employed for the equivalence (2) of the preceding chapter, we find that the frequency vi with which ci fires is given by the equation

δvi = Πk (1 − δvk) Σai δ^r(ai) Πj∈ai vj ,    (2)

where r(ai) is the number of neurons in the set ai. But if we compare this equation, and the manner in which it was formed, with the equivalence (2), chapter xii, we see that the two sides of the equivalence become identical with the two sides of the equation when each assertion N is replaced by the corresponding δv, and each negation ∼N by the corresponding (1 − δv).

However, as we have remarked, this expression is approximate only. Each product δ^r Π vj is equal to the probability that at least all the neurons in the particular set of r neurons cj will fire in the time-interval δ, but the possibility that additional excitatory afferents may also fire is not ruled out. Duplication is therefore possible, and the sum would then be too large. To give a concrete example, suppose there are only two afferents, c1 and c2, both excitatory, and each alone capable of stimulating c3. Then c3 may be excited by the firing of c1 alone, by the firing of c2 alone, or by the concomitant firing of c1 and c2. Now by the formula we obtain the probability

δv3 = δv1 + δv2 ,

whereas the true probability is

δv3 = δv1(1 − δv2) + δv2(1 − δv1) + δv1 δv2

= δv1 + δv2 − δ² v1 v2 ,

the first formulation exhibiting in detail the probability of the three exclusive events, each sufficient for the occurrence of the firing of c3.
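The size of the overestimate can be checked numerically. In the sketch below (Python; the frequencies are hypothetical figures, and δ is written delta) the rule-of-replacement sum δv1 + δv2 exceeds the exact probability by exactly the higher-order term δ²v1v2:

```python
# Two excitatory afferents c1, c2, each alone sufficient to fire c3.
delta, v1, v2 = 0.001, 40.0, 60.0   # latent-addition period (sec) and mean frequencies (1/sec)
p1, p2 = delta * v1, delta * v2     # probabilities of an impulse within one interval
approx = p1 + p2                    # the replacement-rule sum (counts the joint case twice)
exact = p1 * (1 - p2) + p2 * (1 - p1) + p1 * p2   # the three exclusive events
print(approx, exact)                # approximately 0.1 versus 0.0976
```

The discrepancy, p1·p2 = δ²v1v2 = 0.0024 here, is of second order in δ and hence negligible for small δ.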

Hence a more exact formulation in place of equation (2) would involve including with each product δ^r Π vj also the product of all factors of the form (1 − δv) corresponding to all excitatory afferents c not included in the set of the cj's. However, since δ is a small quantity, the terms omitted in equation (2) are terms of a higher order and in general negligible. In fact, to the same degree of approximation we can pick out the smallest among the r(ai) and neglect all terms in the sum involving a larger r. We must suppose, however, that the sum of the vk is large by comparison with the sum of the vj for any set ai, since otherwise the effect of the inhibition also would be negligible.

With this understanding we find, by straightforward induction, that the following rules of replacement enable us to transform a logical equivalence with regard to neural nets into a probability-relation:

(1) Replace each assertion N(t) by δv(t) with the same subscript;

(2) Replace each negation ∼N(t) by [1 − δv(t)], giving to v the subscript of the N;

(3) Replace logical disjunction and conjunction by arithmetic addition and multiplication;

(4) Replace the operators (z)t and (Ez)t by Π and Σ taken from z = 0 to t, respectively;

(5) Where a function of t is preceded by an operator S^a, replace the argument t by t − aδ.

The factor δ can be everywhere omitted when the period of latent addition is taken as the unit of time.
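As an illustration, applying rules (1)-(5) to the heat-illusion equivalences of the preceding chapter, with the period of latent addition as the unit of time so that the factor δ is omitted, gives v4(t) = v2(t − 2) v2(t − 1) and v3(t) = v1(t − 1) + v2(t − 3)[1 − v2(t − 2)]. The sketch below (Python; the stimulus figures are hypothetical) evaluates these probability-relations:

```python
# Probability-relations obtained from N3(t) = S{N1(t) v S[SN2(t).~N2(t)]}
# and N4(t) = S[SN2(t).N2(t)] by rules (1)-(5), with delta = 1 time-unit.
def v3(v1, v2, t):
    """Frequency of the heat efferent: rule (3) turns the disjunction into a
    sum, rule (2) the negation into (1 - v), rule (5) shifts the arguments."""
    return v1(t - 1) + v2(t - 3) * (1 - v2(t - 2))

def v4(v2, t):
    """Frequency of the cold efferent: the conjunction becomes a product."""
    return v2(t - 2) * v2(t - 1)

quiet = lambda t: 0.0                        # heat receptor unstimulated
brief = lambda t: 0.9 if t == 0 else 0.0     # transient cooling
steady = lambda t: 0.9                       # sustained cooling
print(v3(quiet, brief, 3), v4(brief, 3))     # heat likely, cold impossible
print(v3(quiet, steady, 3), v4(steady, 3))   # heat unlikely, cold likely
```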

We note in conclusion that with the approximation made here the equation (2) can be expanded to the form

δvi = δ^r (1 − δ Σk∈si vk) Σai∈Ki(r) Πj∈ai vj ,    (3)

where r is the smallest of the r(ai), and Ki(r) includes only those classes ai which contain r neurons. On removing the parentheses and multiplying out we obtain two terms, the first essentially positive, the second essentially negative. The first we can therefore interpret as the e, the second as the j of the present theory.
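Numerically the expansion may be sketched as follows (Python; δ, the frequencies, and r are invented figures chosen so that the sum of the vk is large by comparison with the sum of the vj, as required above; epsilon_term and j_term are merely our labels for the positive and negative parts):

```python
import math
from itertools import combinations

delta = 0.001                 # period of latent addition
v_exc = [30.0, 45.0, 60.0]    # mean frequencies of the excitatory afferents
v_inh = [200.0, 250.0]        # mean frequencies of the inhibitory afferents
r = 2                         # any r excitatory afferents suffice to fire c_i

# K_i(r): the classes a_i containing exactly r excitatory afferents
sum_products = sum(math.prod(a) for a in combinations(v_exc, r))

epsilon_term = delta**r * sum_products                  # essentially positive
j_term = delta**(r + 1) * sum(v_inh) * sum_products     # essentially negative
v_i = (epsilon_term - j_term) / delta                   # delta*v_i = first - second
print(epsilon_term, j_term, v_i)
```

The negative part carries one more factor of δ than the positive part, which is why a large Σvk is needed for the inhibition to tell.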

CONCLUSION

The measure of success of a predictive theory depends upon at least three factors: the simplicity of the theory, the range of its applications, and its accuracy. It is quite possible for both of two incompatible theories to be in common use when one is much simpler, the other more extensive or more exact in its applications. This is the case with the gravitational theories of Newton and Einstein. While the measure of a theory's simplicity is evident to any competent reader, the test of its range and accuracy must be the work of the experimenter. Thus the success, and also the improvement, of a scientific theory rests quite as much upon the experimenter as upon the theorist.

The theories presented in this monograph point two ways. On the one hand, they provide for the prediction of behavior. On the other hand, they presuppose the presence of specific anatomical structures. Such a theory might give a very accurate representation of behavior without necessarily deriving from a correct representation of the actual mediating structure. If so, it is no less useful in the one direction for having failed in the other, though it is to be expected that further investigation should lead to a theory that is more successful in both respects.

The advantage of having a theory lies not only in the fact that the theory, if it should prove accurate in its predictions, provides just that much of an increase to our fund of knowledge, or increases by just that much our command over nature. A well-formulated theory, in addition, gives direction and meaning to experimental work even when the experiments fail to confirm the theory, since the failure must occur in a certain direction, by a certain amount, and must therefore provide the necessary, hitherto lacking, background for the construction of a better theory.

Furthermore, theoretical organization gives unity and coherence to the otherwise multifarious and apparently disparate phenomena of observation. It has been remarked (Williams, 1927) that the elementary physics textbooks have diminished considerably in size during recent decades in spite of the tremendous increase in our physical information, just because details formerly treated separately are more and more brought together as special cases of a few general principles. In our brief account we have indicated here and there (cf. in particular chap. vi) how quite different forms of response could be mediated by structures of the same kind and are thus to be subsumed under the same principle. Doubtless many other instances could be found, and will be as the work progresses. Thus it is to be hoped that as the formulations are improved the same progress will be observable in the textbooks of psychology as in those of physics, and the extent to which this may occur will depend entirely upon the extent to which theorist and experimenter cooperate in their efforts to bring it about. On the other hand, it is neither to be expected nor to be desired that quantitative formulations can in any sense represent all of the complexities of any psychological process, or, a fortiori, the true "inwardness" of conscious life.

If this brief summary of theoretical developments to date leaves the reader with a feeling of incompleteness, as of a story interrupted before it is well under way, this is only as it should be. For the authors the hardest task in presenting each topic has been to leave it incomplete rather than to postpone the writing of it while pursuing it further. In general, we have summarized each topic as far as it has yet been developed. On the other hand, we have had to leave out many things that might have been included. To mention a few of these: Landahl (1940) has given an interpretation of factor-loadings in terms of time-scores, and he has suggested (Landahl, 1939) a possible solution of the "bottle-neck" problem of the visual pathway; Coombs (1941) has discussed the galvanic skin response; Householder (1939) has discussed a mechanism for a certain Gestalt-phenomenon; Householder and Amelotti (1938) have outlined a theory of the delay in the conditioned reflex; Rashevsky (1938) has outlined several theories of rational thinking. Pitts (1943) has given a formal theory of learning and conditioning, and Lettvin and Pitts (1943) have developed a formal theory of certain psychotic states. While neither of these last two theories provides an explicit neural mechanism, both carry, by implication, fairly clear suggestions for such mechanisms, and their construction is in progress. Finally, Rashevsky's work on visual aesthetics, referred to in the introduction, is now the subject of extensive experimental test with very satisfactory preliminary results. Only part of this theory has been published as yet.

While Rashevsky is actively developing his theory of aesthetics, additional work is now under way and soon to be published dealing with shock and certain psychotic mechanisms, with apparent movement, and with further elaboration of the theory of color-vision.

But all this together, the work accomplished and the work under way, touches only a very small part of the field of psychology, and the theories as we have them now represent at best a number of preliminary attempts, many of them still untested. We do not present them as finished products. Quite the contrary: no one would be more surprised, or disappointed, than the authors if a decade hence does not find every one of these theories considerably modified, if not wholly discarded for a better.

LITERATURE

Bartley, S. H. 1941. Vision: A Study of Its Basis. New York: D. Van Nostrand Co., Inc.

Bichowsky, F. Russel. 1941. The anatomy of the cortex as derived from perception. Privately circulated.

Brock, Frederick W. 1940. The cortical mechanism of binocular vision in normal and anomalous projection. Am. Jour. Optometry, 17, 460-472.

Carlson, Anton J., and Victor Johnson. 1941. The Machinery of the Body. 2nd edition. Chicago: The University of Chicago Press.

Cattell, J. McK. 1886. The influence of the intensity of the stimulus on the length of the reaction time. Brain, 8, 512-515.

Chodin, A. 1877. Ist das Weber-Fechnersche Gesetz auf das Augenmaass anwendbar? Graefes Arch. für Ophth., 23, 92-108.

Coombs, Clyde H. 1941. Mathematical biophysics of the galvanic skin response. Bull. Math. Biophysics, 3, 97-103.

Douglas, A. C. 1932. The Physical Mechanism of the Human Mind. Edinburgh: E. & S. Livingstone.

Grant, Vernon W. 1942. Accommodation and convergence in visual space perception. J. Exper. Psychol., 31, 89-104.

Guilford, J. P. 1936. Psychometric Methods. New York: McGraw-Hill Book Co., Inc.

Gulliksen, Harold. 1934. A rational equation of the learning curve based on Thorndike's law of effect. J. Gen. Psychol., 11, 395-434.

Gyemant, R. 1925. Grundzüge der Kolloidphysik vom Standpunkte des Gleichgewichts. Braunschweig: Friedr. Vieweg u. Sohn.

Hartline, H. K., and C. H. Graham. 1932. Nerve impulses from single receptors in the eye. J. Cell. and Comp. Physiol., 1, 277-295.

Hecht, S. 1920. Intensity and the process of photoreception. Jour. Gen. Physiol., 2, 337-347.

Helmholtz, H. von. 1896. Handbuch der physiologischen Optik. Hamburg u. Leipzig: Verlag von Leopold Voss.

Holway, Alfred H., Janet E. Smith and Michael J. Zigler. 1937. On the discrimination of minimal differences in weight: II. Number of available elements as variant. Jour. Exper. Psychol., 20, 371-380.

Householder, Alston S. 1938a. Excitation of a chain of neurons. Psychometrika, 3, 69-73.

Householder, Alston S. 1938b. Conditioning circuits. Psychometrika, 3, 273-289.

Householder, Alston S. 1939a. Concerning Rashevsky's theory of the Gestalt. Bull. Math. Biophysics, 1, 63-73.

Householder, Alston S. 1939b. A neural mechanism for discrimination. Psychometrika, 4, 45-58.

Householder, Alston S. 1940a. A neural mechanism for discrimination II. Discrimination of weights. Bull. Math. Biophysics, 2, 1-13.

Householder, Alston S. 1940b. A note on the horopter. Bull. Math. Biophysics, 2, 135-140.

Householder, Alston S. 1940c. A neural mechanism for discrimination III. Visually perceived lengths and distances. Bull. Math. Biophysics, 2, 157-167.

Householder, Alston S. 1943. A theory of the induced size effect. Bull. Math. Biophysics, 5, 155-160.

Householder, Alston S., and E. Amelotti. 1937. Some aspects of Rashevsky's theory of delayed reflexes. Psychometrika, 2, 255-262.

Householder, A. S., and Gale Young. 1940. Weber laws, the Weber law and psychophysical analysis. Psychometrika, 5, 183-194.

Kellogg, W. N. 1930. An experimental evaluation of equality judgments in psychophysics. Arch. Psychol., No. 112.

König, Arthur, and Eugen Brodhun. 1888, 1889. Experimentelle Untersuchungen über die psychophysische Fundamentalformel in Bezug auf den Gesichtssinn. Sitz. d. k. Pr. Ak. d. Wiss. Berlin, 1888, 917-931; 1889, 641-644.

Kubie, Lawrence S. 1930. A theoretical application to some neurological problems of the properties of excitation waves which move in closed circuits. Brain, 53, 166-177.

Landahl, H. D. 1938a. A contribution to the mathematical biophysics of psychophysical discrimination. Psychometrika, 3, 107-125.

Landahl, H. D. 1938b. The relation between the intensity of excitation and the number of neurons traversed. Psychometrika, 3, 291-295.

Landahl, H. D. 1939a. Contributions to the mathematical biophysics of the central nervous system. Bull. Math. Biophysics, 1, 95-118.

Landahl, H. D. 1939b. A contribution to mathematical biophysics of psychophysical discrimination II. Bull. Math. Biophysics, 1, 159-176.

Landahl, H. D. 1940a. Discrimination between temporally separated stimuli. Bull. Math. Biophysics, 2, 37-47.

Landahl, H. D. 1940b. Time scores and factor analysis. Psychometrika, 5, 67-74.

Landahl, H. D. 1940c. A contribution to the mathematical biophysics of psychophysical discrimination III. Bull. Math. Biophysics, 2, 73-87.

Landahl, H. D. 1941a. Studies in the mathematical biophysics of discrimination and conditioning I. Bull. Math. Biophysics, 3, 13-26.

Landahl, H. D. 1941b. Studies in the mathematical biophysics of discrimination and conditioning II. Special case: errors, trials, and number of possible responses. Bull. Math. Biophysics, 3, 63-69.

Landahl, H. D. 1941c. Theory of the distribution of response times in nerve fibers. Bull. Math. Biophysics, 3, 141-147.

Landahl, H. D. 1943. Studies in the mathematical biophysics of discrimination and conditioning: III. Bull. Math. Biophysics, 5, 103-110.

Landahl, H. D., and A. S. Householder. 1939. Neuron circuits; the self-exciting neuron. Psychometrika, 4, 255-267.

Landahl, H. D., W. S. McCulloch and Walter Pitts. 1943. A statistical consequence of the logical calculus of nervous nets. Bull. Math. Biophysics, 5, 135-137.

Lettvin, Jerome Y., and Walter Pitts. 1943. A mathematical theory of the affective psychoses. Bull. Math. Biophysics, 5, 139-148.

Lorente de Nó, Rafael. 1933. Vestibulo-ocular reflex arc. Arch. Neur. and Psychiatry, 30, 245-291.

McCulloch, Warren S., and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophysics, 5, 115-133.

Macdonald, P. A., and D. M. Robertson. 1930. The psychophysical law. III. The tactile sense. Phil. Mag., 10, 1063-1073.

Matthews, Bryan H. C. 1933. Nerve endings in mammalian muscle. J. Physiol., 78, 1-53.

Ogle, Kenneth N. 1938. Die mathematische Analyse des Längshoropters. Arch. f. d. ges. Physiol., 239, 748-766.

Ogle, Kenneth N. 1940. The induced size effect. Jour. Opt. Soc. Am., 30, 145-151.

O'Leary, James Lee. 1937. Structure of the primary olfactory cortex of the mouse. J. Comp. Neur., 67, 1-31.

Pecher, C. 1939. La fluctuation d'excitabilité de la fibre nerveuse. Arch. Internat. Physiol., 49, 129-152.

Peddie, W. 1922. Colour Vision. London: Edward Arnold & Co.

Piéron, H. 1920. Nouvelles recherches sur l'analyse du temps de latence sensorielle et sur la loi qui relie le temps à l'intensité d'excitation. L'Année Psychologique, 22, 58-142.

Pitts, Walter. 1942a. Some observations on the simple neuron circuit. Bull. Math. Biophysics, 4, 121-129.

Pitts, Walter. 1942b. The linear theory of neuron networks: The static problem. Bull. Math. Biophysics, 4, 169-175.

Pitts, Walter. 1943a. The linear theory of neuron networks: The dynamic problem. Bull. Math. Biophysics, 5, 23-31.

Pitts, Walter. 1943b. A general theory of learning and conditioning. Psychometrika, 8, 1-18, 131-140.

Rashevsky, N. 1938. Mathematical Biophysics. Physicomathematical Foundations of Biology. Chicago: The University of Chicago Press.

Rashevsky, N. 1940. Advances and Applications of Mathematical Biology. Chicago: The University of Chicago Press.

Rashevsky, N. 1942a. Suggestions for a mathematical biophysics of auditory perception with special reference to the theory of aesthetic ratings of combinations of musical tones. Bull. Math. Biophysics, 4, 27-32.

Rashevsky, N. 1942b. An alternate approach to the mathematical biophysics of perception of combinations of musical tones. Bull. Math. Biophysics, 4, 89-90.

Rashevsky, N. 1942c. Further contributions to the mathematical biophysics of visual aesthetics. Bull. Math. Biophysics, 4, 117-120.

Rashevsky, N. 1942d. Some problems in mathematical biophysics of visual perception and aesthetics. Bull. Math. Biophysics, 4, 177-191.

Riesz, R. R. 1928. Differential intensity sensitivity of the ear for pure tones. Physical Rev., 31, 867-875.

Sacher, George. 1942. Periodic phenomena in the interaction of two neurons. Bull. Math. Biophysics, 4, 77-81.

Shaxby, John H. 1943. On sensory energy, with special reference to vision and colour-vision. Phil. Mag., 34, 289-314.

Stanton, Henry E. 1941. A neural mechanism for discrimination IV: Monocular depth perception. Bull. Math. Biophysics, 3, 113-120.

Thurstone, L. L. 1927. A law of comparative judgment. Psychol. Rev., 34, 273-286.

Thurstone, L. L. 1935. Vectors of Mind. Chicago: The University of Chicago Press.

Urban, F. M. 1908. The Application of Statistical Methods to the Problem of Psychophysics. Philadelphia: Psychol. Clinic Press.

Verhoeff, F. N. 1925. A theory of binocular perspective. Am. Jour. Physiol. Optics, 6, 416-448.

Williams, H. B. 1927. Mathematics and the biological sciences. The Gibbs Lecture for 1926. Bull. Am. Math. Soc., 33, 273-293.

Woodrow, H. 1914. The measurement of attention. Psychol. Rev. Mono., 17, No. 5.

Young, Gale, and Alston S. Householder. 1941. A note on multi-dimensional psychophysical analysis. Psychometrika, 6, 331-333.

INDEX

Absolute discrimination, 66, 72
Absolute thresholds, 65
Accessibility, 10, 11, 12
Accessible, 10, 31
Accommodation, 48, 97, 98
Action-spikes, 103
Activity
continuous, 26
maximal, 11
nervous, vii
permanent, 22, 29
steady-state, 7, 67, 81
Activity parameters, 17, 19
Adaptation, 42, 72
to muscular stretch, 50
Aesthetic judgments, 74
Afferent chains, 74
Afferent neuron, firing of, 104
All-or-none character, 103
Amelotti, E., 115
Analysis
factor, 75
multidimensional psychophysical, 74, 76
Anatomy, vii
Anomalous fusion, 99
Apparent depth, 96
Apparent size, 96, 97
Applied stimulation, 11
Applied stimulus, 9
Asymptotic value, 7, 25
Asymptotically exciting neuron, 5
Asymptotically inhibiting neuron, 5, 6, 9
Auditory data, 62, 69
Auditory sensations, 68, 69, 70
Auditory stimuli, 40
Awareness of stimuli, 16

Bartley, S. H., 45, 48

Behavior, vii, 1

Berger, G. O., 39, 55

Bichowsky, F. R., 98

Binocular cue, 71, 97

Binocular disparity, 98

Binocular stereopsis, 97

Binocular vision, 97

Binocular visual field, 98

Blind alley, 85

Boolean algebra of neural nets, 103

statistical interpretation of, 111
Brightness
constant, 45
judgment of, 60
relative, 48
Brock, F. W., 99
Brodhun, E., 68

Carlson, A. J., 90

Categorical judgment, 58, 60
Cattell, J. McK., 39, 55
Centers, inaccessibility of, 12
Chain of neurons, 8, 9, 11, 12, 22, 31, 35
Chain of two neurons, 38, 41, 46
Chains, afferent, 74
Chains, interconnected, 56, 74
Chains of neurons, 7
Chains, two-neuron, 79
Chodin, A., 71
Circuit, 24, 28, 31, 81
two-neuron, 25
Circuits, 66, 78, 79, 83
self-exciting, 12
simple, 22
Color-center, 91, 92
Color-contrast, 13
Color-perception, 21
Color-pyramid, 92
Color-vision, theory of, 90
Colors, 92
discrimination of, 72, 91
Common synapse, 49
Common terminus, 13
Complete inhibition, 104
Complex stimulus-object, 74
Conditioned stimuli, 83
Conditioned stimulus, 81
Conditioning, 79, 81
Conduction time, 22, 31, 33, 104
Conjunction, 105
Connections, excitatory, 105
Constancy of perceived size, 97
Constant brightness, 45
Constant stimulus, 42, 48, 49, 54
Continuous activity, 26
Contraction, muscular, 1, 4
Convergence, 97, 98
Coombs, C. H., 115
"Correct" judgments, 72
Correct response, 58, 61, 83, 87
probability of, 58, 59, 89
Corresponding points, 98
Cortical image, 97
Critical flicker-frequency, 45, 46
Critical frequency, 44, 46
Critical value, 8
Crossing excitation, 65, 78
Crossing inhibition, 65, 74, 76, 78
Cutaneous receptors, 64
Cycles, 107
Cyclic net, 109
Cylindrical size-lens, 100, 102
Delay of reaction, 39, 40
Depth, apparent, 96
Discharge, nervous, 104
Discriminal response, 75


Discriminal sequence, 64, 74
Discriminal sequences, 56
Discriminal stimulus sequences, 21
Discriminating mechanism, 91
Discrimination
absolute, 66, 72
intensity, 69, 70
of colors, 72, 91
of lengths, 69, 71
of lifted weights, 73
of weights, 71
psychophysical, 56
relative, 66, 72
Disjunction, 105
neural, 95
Disparate point, 98
Disparities
horizontal, 100, 101
vertical, 100
Disparity, 98
Distance-cue, 96, 97
Distance, interocular, 101
Distinct response, 68
"Doubtful" judgments, 59, 60
Douglas, A. C., 82, 94
Duration

of reaction, 41

of response, 41, 42
Dynamics, 22

intra-neuronal, 3

neuronal, viii

of simple circuits, 22

of single synapse, 37, 48

trans-synaptic, 1

Effective stimulus, 5

Effector, vii, 7, 13, 78, 90

Effectors, 1

Effects, modulating, 22

Endfeet, 104

"Equality", judgment of, 59, 61

Equilibria, fluctuating, 35

Equilibrium, 27, 28

fluctuating, 29

stable, 23, 26, 35

unstable, 24, 35
Errors, number of, 85, 88
Excitation, 2, 9, 21, 81

crossing, 65, 78

functions, higher order, 8
Excitatory connections, 105
Excitatory factor, 83
Excitatory neuron, 17, 19, 28
Excitatory neurons, 15, 57
Excitatory state, 2, 4
Existential net, 110
Existential sentence, 110
Expression, temporal propositional, 107
realizable, 107
Eye movement, 71
Facilitation, 110
Factor, excitatory, 83
Factor analysis, 75
Ferry-Porter law, 45
Final common path, 80
Final pathways, 90
Firing of afferent neurons, 104
Firing of neuron, 109
Flicker-frequency, critical, 45, 46
Fluctuating equilibrium, 29
Fluctuations in response, 22
Fluctuations of threshold, 53, 57
Frequency, critical, 44, 46
Frequency of response, 49, 50
Frontal planes, 100
Fusion, 99
anomalous, 99
of images, 98
Function, temporal propositional, 107

General neural net, 30

Gestalt-phenomena, 21

Golgi preparation, 30

Graham, C. H., 3

Grant, V. W., 97

"Greater", judgments of, 61

"Greater than" judgments, 59, 61

Group of stimuli, 86

Guilford, J. P., 75

Gulliksen, H., 84, 85

Gustatory stimuli, 40

Gyemant, R., 53

Hartline, H. K., 3
Hecht, S., 43
Helmholtz, H. von, 90, 91, 99
Higher order excitation-functions, 8
Holway, A. H., 73
Horizontal disparities, 100, 101
Horopter, 99
Horopter rotation, 100
Householder, A. S., 8, 22, 26, 66, 70, 75, 96, 97, 100, 101, 115

Idealized model, vii
Image

cortical, 97

retinal, 96, 97
Images, fusion of, 98
Impulse, nervous, vii, 2, 31
Impulses, 13
statistical effects of, 111
statistically independent, 112
Inaccessibility of centers, 12
Inaccessible, 10, 11, 31, 33
Inadequate stimulus, 81
Incompatible responses, 16, 74
Inhibiting effect, 5
Inhibition, 4, 104

complete, 104

crossing, 65, 74, 76, 78

partial, 104

relative, 110

total, 110
Inhibitory neurons, 10, 11, 15, 17, 19, 28, 57, 67, 81, 87


Intensity, 92

of sound, 63

of stimuli, 61

of stimulus, 39, 40, 55, 84
Intensity-discrimination, 69, 70
Interaction of perceptions, 14
Interaction of transmitted impulse, 14
Interconnected chains, 56, 74
Interconnecting neurons, 105
inhibitory, 57
Interdependence of perceptions, 13
Intermediate neurons, 12
Intermittent stimulation, 45, 46
Internuncial neuron, 30
Internuncial neurons, 108
Interocular distance, 101
Intra-neuronal dynamics, 3

Johnson, V., 90

Judgment, categorical, 58, 60

Judgments
  aesthetic, 74
  "correct", 72
  "doubtful", 59, 60
  "greater than", 59, 61
  "less than", 59
  of brightness, 60
  of "equality", 59, 61
  of "greater", 61
  of "less", 61
  of loudness, 63
  "wrong", 72

Kellogg, W. N., 60, 62, 63
Kinaesthesis, 95
König, A., 68
Kubie, L. S., 22

Landahl, H. D., 8, 16, 18, 22, 26, 52, 54, 57, 61, 62, 64, 83, 85, 86, 88, 89, 97, 103, 111, 115

Lashley, K. S., 84

Latent addition, period of, 111, 113

Law of Plurality of Connections, 30

Law of Reciprocity of Connections, 30

Learning, 79, 85, 88

Length, discrimination of, 70, 71

"Less", judgments of, 61

"Less than" judgments, 59

Lettvin, J. Y., 115

Lifted weights, 72
  discrimination of, 73

Light-dark ratio, 45

Lorente de Nó, R., 22, 30

Loudness, judgments of, 63

McCulloch, W. S., viii, 2, 103, 104, 107, 108, 109, 111
MacDonald, P. A., 70
Macroscopic picture, 103
Magnification-increment, 101
Matthews, B. H. C., 3, 49, 50
Maximal activity, 11
Mean value of threshold, 54

Measure of region, 20
Mechanism, 32, 37, 49, 56, 64, 72, 76, 86
  discriminating, 91
  neural, 13, 90, 92
Medial plane, 100
Memory, 79

Microscopic picture, 103
Mixed neuron, 26
Mode of presentation, 62
Model, viii, 1
  idealized, vii
Modulating effects, 22
Monocular vision, 94
Movement, eye, 71
Movements, spontaneous, 22
Müller's law of specificity, 90
Multidimensional psychophysical analysis, 74, 76
Muscles, 1
Muscular contraction, 4
Muscular stretch, adaptation to, 50

Near-threshold stimulation, 65
Negation, 105
Nerve-impulse, 104
Nervous activity, vii
Nervous discharge, 104
Nervous impulse, vii, 2, 31
Net, 30, 66, 83, 86, 105, 107
  cyclic, 109
  existential, 110
  non-cyclic, 107, 109
  order of, 109
  steady-state of, 36
  symmetrical, 57
Nets
  neural, 95, 113
    boolean algebra of, 103
Neural chains, 13
Neural circuits, 79
Neural complex, viii
Neural disjunction, 95
Neural mechanism, 13, 90, 92
Neural net, general, 30
Neural nets, 95, 113

  boolean algebra of, 103
Neural pathway, 4
Neural pathways, segregation of, 90
Neural structures, vii, 7, 37
Neuron, 5, 22, 25, 33, 41, 49, 53, 54, 61, 62, 107
  asymptotically exciting, 5
  asymptotically inhibiting, 5, 6, 9
  effects of ions on excitability of, 53
  excitatory, 17, 19, 28
  firing of afferent, 109
  inhibitory, 17, 19, 28
  internuncial, 30
  minimal region of, 53
  mixed, 26
  self-stimulating, 22, 26
  single, 24


  single, 11, 31, 38
  transiently exciting, 6
  transiently inhibiting, 6
Neuronal dynamics, viii
Neurons, vii, 1, 7, 13, 37, 62, 83, 103, 112
  chain of, 8, 9, 11, 12, 31
  chain of two, 38, 41, 46
  chains of, 7, 22
  excitatory, 15, 57
  inhibitory, 10, 11, 15, 57, 67, 81, 87
  interconnecting, 105
  intermediate, 12
  internuncial, 108
  parallel, 13
    interconnected, 21
  several, 49
Nodal points, 97, 101
Node of Ranvier, 53
Non-cyclic net, 107, 109
Non-neural events, 7
Number of errors, 85, 88
Number of possible choices, 86
Number of trials, 85, 88

Ogle, K. N., 99, 100, 102

O'Leary, J. L., 22

One-dimensional array of synapses, 90

Order of net, 109

Origin, 2, 3, 22, 33, 35

Organism, 1, 13, 103
  response of, 30
Overlearning, 89

Parallel neurons, 13
  interconnected, 21
Parameters, 17
  activity, 17, 19
Partial inhibition, 104
Path, final common, 80
Pathways
  final, 90
  segregation of neural, 90
Pecher, C., 53, 54
Peddie, W., 90
Perceived size, constancy of, 97
Perception
  color, 21
  of space, 94, 95
Perceptions, 13, 14
  interaction of, 14
  interdependence of, 13
Perceptual space, 95
Period of latent addition, 111, 113
Permanent activity, 22, 29
Physiological stimulation, 2
Physiology, vii
Piéron, H., 40
Pitts, W., viii, 2, 9, 31, 33, 35, 36, 103, 104, 107, 108, 109, 111, 115
Pleasant response, 83
Plurality of Connections, law of, 30
Possible choices, number of, 86

Presentation, mode of, 62

Primary receptor, 92

Probability of correct response, 58, 59, 89

Probability of wrong response, 58, 84

Probability-relation, 113

Prompting, 88

Proposition, 105

Propositions, universal existential, 109

Psychological scale, 75, 76

Psychophysical analysis, multidimensional, 74, 76

Psychophysical discrimination, 56

Punishment, 84
  strength of, 88

Ranvier, node of, 53

Rashevsky, vii, ix, 3, 4, 15, 16, 18, 19, 21, 26, 79, 81, 111, 115
Rate of decay, 49
Reaction
  delay of, 39, 40
  duration of, 41
Reaction-time, 38, 42, 43, 51, 52, 54
Reaction-times, variation in, 55
Recall-learning, 89
Receptor, 7, 13, 78, 90
  primary, 92
Receptors, 1
  cutaneous, 64
  retinal, 91
  stretch, 49
Reciprocity of Connections, law of, 30
Recognition-learning, 89
Recovery time, 43
Reflex, 39
  scratch, 26
Region, measure of, 20
Reisz, R. R., 69
Relative brightness, 48
Relative discrimination, 66, 72
Relative inhibition, 110
"Relative interval", 69
Relaxation, 1
Replacement, rules of, 113
Response, vii, viii, 13, 16, 17, 37, 38, 41, 43, 51, 56, 57, 61, 66, 78, 80, 83
  correct, 58, 61, 83, 87
  discriminal, 75
  distinct, 68
  duration of, 41, 42
  fluctuations in, 22
  frequency of, 49, 50
  of organism, 30
  pleasant, 83
  probability of, 58, 59
  probability of correct, 58, 59, 89
  probability of wrong, 58
  steady, 44
  unpleasant, 84, 86
  wrong, 58, 84, 87
Responses, incompatible, 16, 74
Response-time, 54, 55


Retinal images, 96, 97

Retinal receptors, 91

Retinas, 97, 98

Reward, strength of, 84, 86, 88

Robertson, D. M., 70

Rosette, 33, 36

Rotation
  horopter, 100
  of subjective medial plane, 102
  visual field, 100

Sacher, G., 26

Saturation, 92

Scale, psychological, 75, 76

Scratch reflex, 26

Segregation of neural pathways, 90

Self-exciting circuits, 12

Self-stimulating neuron, 22, 26

Sensations
  auditory, 68, 69, 70
  disparate, 64
  spontaneous, 22
  tactile, 68, 69, 70
  visual, 69, 70
Sequence, discriminal, 64, 74
Sequences, discriminal, 56
Shape, 13

  visual illusions of, 13
Shaxby, J. H., 90
Simple circuits, 22
Simultaneous stimulation, 64
Simultaneous stimuli, 17
Size
  apparent, 96
  constancy of perceived, 97
Size-lens, 101, 102
  cylindrical, 100, 102
  spherical, 101
Smith, J. E., 73
Space
  perception of, 94
  perceptual, 95
Spontaneous movements, 22
Spontaneous sensations, 22
Stable equilibrium, 23, 26, 35
Stanton, H. E., 97
Statistical effects of impulses, 111
Statistically independent impulses, 112
Steady response, 44
Steady-state, 11
  activity, 7, 67, 81
  of net, 36
Stereopsis, 94
  binocular, 97
Stimulation
  applied, 11
  intermittent, 45, 46
  near-threshold, 65
  physiological, 2
  simultaneous, 64
  total, 11
Stimuli, 13, 16, 78

  auditory, 40
  awareness of, 16
  conditioned, 83
  group of, 86
  gustatory, 40
  intensity of, 61
  simultaneous, 17
  strong, 16
  two competing for attention, 16
  unconditioned, 83
  visual, 39
Stimulus, viii, 5, 7, 10, 13, 22, 30, 31, 37, 38, 49, 56, 57, 66, 78, 83, 90

  applied, 9
  conditioned, 81
  constant, 42, 43, 49, 54
  discriminal sequences of — and response, 21
  effective, 5
  inadequate, 81
  intensity of, 39, 40, 55, 84
  subliminal auditory, 65
  testing, 43
  total, 9
  unconditioned, 80, 81
  unpleasant, 83
  warning, 51, 52
Stimulus-density, 18
Stimulus-object, 75, 76, 77
  complex, 74
Strength of punishment, 88
Strength of reward, 84, 86, 88
Stretch receptors, 49
Strong stimuli, 16
Structure, 30, 50, 79, 81, 83
  symmetric, 64
Structures, viii, 37
  neural, vii, 7, 37
  of subjective space, 94
Subjective medial plane, rotation of, 102
Sub-threshold, 14
Summation, 104
Symmetric structure, 64
Synapse, vii, 6, 31, 33, 35
  common, 49
  single,
    common, 33
  dynamics of, 37, 49
Synapses, 91
  one-dimensional array of, 90
  three-dimensional array of, 91
Synaptic delay, 104

Tactile data, 70
Tactile sensations, 68, 69, 70
Talbot law, 48

Temporal propositional expression, 107
  realizable, 107
Temporal propositional function, 107
Terminus, 2, 3, 7, 18, 22, 33, 35, 67
  common, 13
Testing stimulus, 43
Theory, viii, ix


Three-dimensional array of synapses, 91
Three-receptor hypothesis, 91
Threshold, 3, 9, 10, 24, 38, 41, 43, 53, 61, 67, 104
  fluctuation of, 57
  fluctuations of, 53
  mean value of, 54
  modification of, 82
  variations in, 54, 55
Thresholds, 13, 17, 19, 57
  absolute, 65
Thurstone, L. L., 75
Time-unit, 31, 33
Total inhibition, 110
Total stimulation, 11
Total stimulus, 9
Transiently exciting neuron, 6
Transiently inhibitory neuron, 6
Transmission, 14
Transmitted impulse, interaction of, 14
Trans-synaptic dynamics, 1
Trials, number of, 85, 88
Two-neuron chains, 79
Two-neuron circuit, 25
Two parallel chains with crossing inhibition, 64

Unconditioned stimuli, 83
Unconditioned stimulus, 80, 81
Universal existential propositions, 109
Universal sentence, 109
Unpleasant response, 84
Unpleasant stimulus, 83
Unstable equilibrium, 24, 35

Urban, F. M., 58, 59, 61

Value, asymptotic, 7, 25
Variations in reaction-times, 55
Variations in threshold, 55
Verhoeff, F. N., 98
Vertical disparities, 100
Vision
  binocular, 97
  monocular, 94
Visual axes, 97
Visual cues, 95
Visual data, 61, 68
Visual field, binocular, 98
Visual field rotation, 100
Visual illusions of shape, 13
Visual sensations, 68, 69, 70
Visual stimulus, 39

Warning stimulus, 51, 52
Weber ratio, 42, 48, 68, 72
Weights
  discrimination of, 71
  lifted, 72
    discrimination of, 73
Williams, H. B., 114
Woodrow, H., 51, 52
"Wrong" judgments, 72
Wrong responses, 58, 61, 84, 86, 87
  probability of, 58, 84
Young, G., 75
Young-Helmholtz theory, 90

Zigler, M. J., 73