

The Project Gutenberg EBook of The Algebra of Logic, by Louis Couturat 

This eBook is for the use of anyone anywhere at no cost and with 
almost no restrictions whatsoever. You may copy it, give it away or 
re-use it under the terms of the Project Gutenberg License included 
with this eBook or online at www.gutenberg.net 



Title: The Algebra of Logic 

Author: Louis Couturat 

Release Date: January 26, 2004 [EBook #10836] 

Language: English 

Character set encoding: TeX 

*** START OF THIS PROJECT GUTENBERG EBOOK THE ALGEBRA OF LOGIC *** 



Produced by David Starner, Arno Peters, Susan Skinner 
and the Online Distributed Proofreading Team. 



THE ALGEBRA OF LOGIC 



BY 



LOUIS COUTURAT 



AUTHORIZED ENGLISH TRANSLATION 



BY 



LYDIA GILLINGHAM ROBINSON, B. A. 



With a Preface by PHILIP E. B. JOURDAIN, M. A. (Cantab.) 



Preface 

Mathematical Logic is a necessary preliminary to logical Mathematics. "Mathematical
Logic" is the name given by Peano to what is also known (after Venn) as "Symbolic
Logic"; and Symbolic Logic is, in essentials, the Logic of Aristotle, given new life
and power by being dressed up in the wonderful — almost magical — armour and
accoutrements of Algebra. In less than seventy years, logic, to use an expression of
De Morgan's, has so thriven upon symbols and, in consequence, so grown and altered
that the ancient logicians would not recognize it, and many old-fashioned logicians
will not recognize it. The metaphor is not quite correct: Logic has neither grown
nor altered, but we now see more of it and more into it.

The primary significance of a symbolic calculus seems to lie in the economy of
mental effort which it brings about, and to this is due the characteristic power
and rapid development of mathematical knowledge. Attempts to treat the operations
of formal logic in an analogous way had been made not infrequently by some of the
more philosophical mathematicians, such as Leibniz and Lambert; but their labors
remained little known, and it was Boole and De Morgan, about the middle of the
nineteenth century, to whom a mathematical — though of course non-quantitative — way
of regarding logic was due. By this, not only was the traditional or Aristotelian
doctrine of logic reformed and completed, but out of it has developed, in course of
time, an instrument which deals in a sure manner with the task of investigating the
fundamental concepts of mathematics — a task which philosophers have repeatedly
taken in hand, and in which they have as repeatedly failed.

First of all, it is necessary to glance at the growth of symbolism in mathematics;
where alone it first reached perfection. There have been three stages in the
development of mathematical doctrines: first came propositions with particular
numbers, like the one expressed, with signs subsequently invented, by
"2 + 3 = 5"; then came more general laws holding for all numbers and expressed
by letters, such as

lastly came the knowledge of more general laws of functions and the formation
of the conception and expression "function". The origin of the symbols for
particular whole numbers is very ancient, while the symbols now in use for the
operations and relations of arithmetic mostly date from the sixteenth and
seventeenth centuries; and these "constant" symbols together with the letters first
used systematically by Viète (1540-1603) and Descartes (1596-1650),
serve, by themselves, to express many propositions. It is not, then, surprising
that Descartes, who was both a mathematician and a philosopher, should
have had the idea of keeping the method of algebra while going beyond the
material of traditional mathematics and embracing the general science of what
thought finds, so that philosophy should become a kind of Universal Mathematics.
This sort of generalization of the use of symbols for analogous theories is a
characteristic of mathematics, and seems to be a reason lying deeper than the
erroneous idea, arising from a simple confusion of thought, that algebraical
symbols necessarily imply something quantitative, for the antagonism there used to
be and is on the part of those logicians who were not and are not mathematicians,
to symbolic logic. This idea of a universal mathematics was cultivated
especially by Gottfried Wilhelm Leibniz (1646-1716).

Though modern logic is really due to Boole and De Morgan, Leibniz 
was the first to have a really distinct plan of a system of mathematical logic. 
That this is so appears from research — much of which is quite recent — into 
Leibniz's unpublished work. 

The principles of the logic of Leibniz, and consequently of his whole philosophy,
reduce to two[1]: (1) All our ideas are compounded of a very small number of
simple ideas which form the "alphabet of human thoughts"; (2) Complex ideas
proceed from these simple ideas by a uniform and symmetrical combination
which is analogous to arithmetical multiplication. With regard to the first
principle, the number of simple ideas is much greater than Leibniz thought; and,
with regard to the second principle, logic considers three operations — which we
shall meet with in the following book under the names of logical multiplication,
logical addition and negation — instead of only one.

"Characters" were, with Leibniz, any written signs, and "real" characters 
were those which — as in the Chinese ideography — represent ideas directly, and 
not the words for them. Among real characters, some simply serve to represent 
ideas, and some serve for reasoning. Egyptian and Chinese hieroglyphics and the 
symbols of astronomers and chemists belong to the first category, but Leibniz 
declared them to be imperfect, and desired the second category of characters 
for what he called his "universal characteristic".^ It was not in the form of an 
algebra that Leibniz first conceived his characteristic, probably because he 
was then a novice in mathematics, but in the form of a universal language or 
script.^ It was in 1676 that he first dreamed of a kind of algebra of thought,^ and 
it was the algebraic notation which then served as model for the characteristic.^ 

Leibniz attached so much importance to the invention of proper symbols 
that he attributed to this alone the whole of his discoveries in mathematics.^ 
And, in fact, his infinitesimal calculus affords a most brilliant example of the 
importance of, and Leibniz' s skill in devising, a suitable notation.^ 

Now, it must be remembered that what is usually understood by the name
"symbolic logic", and which — though not its name — is chiefly due to Boole, is
what Leibniz called a Calculus ratiocinator, and is only a part of the Universal
Characteristic. In symbolic logic Leibniz enunciated the principal properties
of what we now call logical multiplication, addition, negation, identity,
class-inclusion, and the null-class; but the aim of Leibniz's researches was, as he



[1] Couturat, La Logique de Leibniz d'après des documents inédits, Paris, 1901,
pp. 431-432, 48.

[2] Ibid., p. 81.
[3] Ibid., pp. 51, 78.
[4] Ibid., p. 61.
[5] Ibid., p. 83.
[6] Ibid., p. 84.
[7] Ibid., pp. 84-87.



said, to create "a kind of general system of notation in which all the truths 
of reason should be reduced to a calculus. This could be, at the same time, 
a kind of universal written language, very different from all those which have 
been projected hitherto; for the characters and even the words would direct 
the reason, and the errors — excepting those of fact — would only be errors of 
calculation. It would be very difficult to invent this language or characteristic, 
but very easy to learn it without any dictionaries". He fixed the time necessary 
to form it: "I think that some chosen men could finish the matter within five 
years"; and finally remarked: "And so I repeat, what I have often said, that a 
man who is neither a prophet nor a prince can never undertake any thing more 
conducive to the good of the human race and the glory of God". 

In his last letters he remarked: "If I had been less busy, or if I were younger 
or helped by well-intentioned young people, I would have hoped to have evolved 
a characteristic of this kind"; and: "I have spoken of my general characteristic 
to the Marquis de l'Hôpital and others; but they paid no more attention than if 
I had been telling them a dream. It would be necessary to support it by some 
obvious use; but, for this purpose, it would be necessary to construct a part at 
least of my characteristic; — and this is not easy, above all to one situated as I 
am". 

Leibniz thus formed projects of both what he called a characteristica universalis,
and what he called a calculus ratiocinator; it is not hard to see that
these projects are interconnected, since a perfect universal characteristic would 
comprise, it seems, a logical calculus. Leibniz did not publish the incomplete 
results which he had obtained, and consequently his ideas had no continuators, 
with the exception of Lambert and some others, up to the time when Boole, 
De Morgan, Schroder, MacColl, and others rediscovered his theorems. 
But when the investigations of the principles of mathematics became the chief 
task of logical symbolism, the aspect of symbolic logic as a calculus ceased to be 
of such importance, as we see in the work of Frege and Russell. Frege's 
symbolism, though far better for logical analysis than Boole's or the more 
modern Peano's, for instance, is far inferior to Peano's — a symbolism 
in which the merits of internationality and power of expressing mathematical 
theorems are very satisfactorily attained — in practical convenience. Russell, 
especially in his later works, has used the ideas of Frege, many of which he 
discovered subsequently to, but independently of, Frege, and modified the 
symbolism of Peano as little as possible. Still, the complications thus introduced
take away that simple character which seems necessary to a calculus, and
which Boole and others reached by passing over certain distinctions which a 
subtler logic has shown us must ultimately be made. 

Let us dwell a little longer on the distinction pointed out by Leibniz between a
calculus ratiocinator and a characteristica universalis or lingua characteristica.
The ambiguities of ordinary language are too well known for it to be necessary for
us to give instances. The objects of a complete logical symbolism are: firstly, to
avoid this disadvantage by providing an ideography, in which the signs represent
ideas and the relations between them directly (without the intermediary of words),
and secondly, so to manage that, from given premises, we can, in this ideography,
draw all the logical conclusions which they imply by means of rules of
transformation of formulas analogous to those of algebra, — in fact, in which we
can replace reasoning by the almost mechanical process of calculation. This second
requirement is the requirement of a calculus ratiocinator. It is essential that the
ideography should be complete, that only symbols with a well-defined meaning
should be used — to avoid the same sort of ambiguities that words have — and,
consequently, that no suppositions should be introduced implicitly, as is commonly
the case if the meaning of signs is not well defined. Whatever premises are
necessary and sufficient for a conclusion should be stated explicitly.

Besides this, it is of practical importance, — though it is theoretically
irrelevant, — that the ideography should be concise, so that it is a sort of
stenography.

The merits of such an ideography are obvious: rigor of reasoning is ensured by the
calculus character; we are sure of not introducing unintentionally any premise;
and we can see exactly on what propositions any demonstration depends.

We can shortly, but very fairly accurately, characterize the dual development of
the theory of symbolic logic during the last sixty years as follows: The calculus
ratiocinator aspect of symbolic logic was developed by Boole, De Morgan, Jevons,
Venn, C. S. Peirce, Schroder, Mrs. Ladd-Franklin and others; the lingua
characteristica aspect was developed by Frege, Peano and Russell. Of course there
is no hard and fast boundary-line between the domains of these two parties. Thus
Peirce and Schroder early began to work at the foundations of arithmetic with the
help of the calculus of relations; and thus they did not consider the logical
calculus merely as an interesting branch of algebra. Then Peano paid particular
attention to the calculative aspect of his symbolism. Frege has remarked that his
own symbolism is meant to be a calculus ratiocinator as well as a lingua
characteristica, but the using of Frege's symbolism as a calculus would be rather
like using a three-legged stand-camera for what is called "snap-shot" photography,
and one of the outwardly most noticeable things about Russell's work is his
combination of the symbolisms of Frege and Peano in such a way as to preserve
nearly all of the merits of each.

The present work is concerned with the calculus ratiocinator aspect, and shows, in
an admirably succinct form, the beauty, symmetry and simplicity of the calculus of
logic regarded as an algebra. In fact, it can hardly be doubted that some such form
as the one in which Schroder left it is by far the best for exhibiting it from this
point of view.[8] The content of the present volume corresponds to the two first
volumes of Schroder's great but rather prolix treatise.[9] Principally owing to the
influence of C. S. Peirce, Schroder



[8] Cf. A. N. Whitehead, A Treatise on Universal Algebra with Applications,
Cambridge, 1898.

[9] Vorlesungen über die Algebra der Logik, Vol. I, Leipsic, 1890; Vol. II, 1891
and 1905. We may mention that a much shorter Abriss of the work has been prepared
by Eugen Müller. Vol. III (1895) of Schroder's work is on the logic of relatives
founded by De Morgan and C. S. Peirce, — a branch of Logic that is only mentioned
in the concluding sentences of this volume.






departed from the custom of Boole, Jevons, and himself (1877), which consisted in
the making fundamental of the notion of equality, and adopted the notion of
subordination or inclusion as a primitive notion. A more orthodox Boolian
exposition is that of Venn,[10] which also contains many valuable historical notes.

We will finally make two remarks. 

When Boole (cf. §0.2 below) spoke of propositions determining a class of 
moments at which they are true, he really (as did MacColl) used the word 
"proposition" for what we now call a "propositional function". A "proposition" 
is a thing expressed by such a phrase as "twice two are four" or "twice two are 
five", and is always true or always false. But we might seem to be stating a 
proposition when we say: "Mr. William Jennings Bryan is Candidate for 
the Presidency of the United States", a statement which is sometimes true and 
sometimes false. But such a statement is like a mathematical function in so far 
as it depends on a variable — the time. Functions of this kind are conveniently 
distinguished from such entities as that expressed by the phrase "twice two 
are four" by calling the latter entities "propositions" and the former entities 
"propositional functions": when the variable in a propositional function is fixed, 
the function becomes a proposition. There is, of course, no sort of necessity 
why these special names should be used; the use of them is merely a question 
of convenience and convention. 

In the second place, it must be carefully observed that, in §0.13, 0 and 1 are not
defined by expressions whose principal copulas are relations of inclusion. A
definition is simply the convention that, for the sake of brevity or some other
convenience, a certain new sign is to be used instead of a group of signs whose
meaning is already known. Thus, it is the sign of equality that forms the principal
copula. The theory of definition has been most minutely studied, in modern times,
by Frege and Peano.

Philip E. B. Jourdain. 

Girton, Cambridge, England. 




[10] Symbolic Logic, London, 1881; 2nd ed., 1894.



Contents 



Preface
Bibliography
0.1 Introduction
0.2 The Two Interpretations of the Logical Calculus
0.3 Relation of Inclusion
0.4 Definition of Equality
0.5 Principle of Identity
0.6 Principle of the Syllogism
0.7 Multiplication and Addition
0.8 Principles of Simplification and Composition
0.9 The Laws of Tautology and of Absorption
0.10 Theorems on Multiplication and Addition
0.11 The First Formula for Transforming Inclusions into Equalities
0.12 The Distributive Law
0.13 Definition of 0 and 1
0.14 The Law of Duality
0.15 Definition of Negation
0.16 The Principles of Contradiction and of Excluded Middle
0.17 Law of Double Negation
0.18 Second Formulas for Transforming Inclusions into Equalities
0.19 The Law of Contraposition
0.20 Postulate of Existence
0.21 The Development of 0 and of 1
0.22 Properties of the Constituents
0.23 Logical Functions
0.24 The Law of Development
0.25 The Formulas of De Morgan
0.26 Disjunctive Sums
0.27 Properties of Developed Functions
0.28 The Limits of a Function
0.29 Formula of Poretsky
0.30 Schroder's Theorem
0.31 The Resultant of Elimination
0.32 The Case of Indetermination
0.33 Sums and Products of Functions
0.34 The Expression of an Inclusion by Means of an Indeterminate
0.35 The Expression of a Double Inclusion by Means of an Indeterminate
0.36 Solution of an Equation Involving One Unknown Quantity
0.37 Elimination of Several Unknown Quantities
0.38 Theorem Concerning the Values of a Function
0.39 Conditions of Impossibility and Indetermination
0.40 Solution of Equations Containing Several Unknown Quantities
0.41 The Problem of Boole
0.42 The Method of Poretsky
0.43 The Law of Forms
0.44 The Law of Consequences
0.45 The Law of Causes
0.46 Forms of Consequences and Causes
0.47 Example: Venn's Problem
0.48 The Geometrical Diagrams of Venn
0.49 The Logical Machine of Jevons
0.50 Table of Consequences
0.51 Table of Causes
0.52 The Number of Possible Assertions
0.53 Particular Propositions
0.54 Solution of an Inequation with One Unknown
0.55 System of an Equation and an Inequation
0.56 Formulas Peculiar to the Calculus of Propositions
0.57 Equivalence of an Implication and an Alternative
0.58 Law of Importation and Exportation
0.59 Reduction of Inequalities to Equalities
0.60 Conclusion
1 PROJECT GUTENBERG "SMALL PRINT"



Bibliography[11]



George Boole. The Mathematical Analysis of Logic (Cambridge and London, 1847).

— An Investigation of the Laws of Thought (London and Cambridge, 1854).

W. Stanley Jevons. Pure Logic (London, 1864).

— "On the Mechanical Performance of Logical Inference" (Philosophical
Transactions, 1870).

Ernst Schroder. Der Operationskreis des Logikkalkuls (Leipsic, 1877).

— Vorlesungen über die Algebra der Logik, Vol. I (1890), Vol. II (1891), Vol. III:
Algebra und Logik der Relative (1895) (Leipsic).[12]

Alexander MacFarlane. Principles of the Algebra of Logic, with Examples
(Edinburgh, 1879).

John Venn. Symbolic Logic, 1881; 2nd ed., 1894 (London).[13]

Studies in Logic by members of the Johns Hopkins University (Boston, 1883):
particularly Mrs. Ladd-Franklin, O. Mitchell and C. S. Peirce.

A. N. Whitehead. A Treatise on Universal Algebra, Vol. I (Cambridge, 1898).

— "Memoir on the Algebra of Symbolic Logic" (American Journal of Mathematics,
Vol. XXIII, 1901).

Eugen Müller. Über die Algebra der Logik: I. Die Grundlagen des Gebietekalkuls;
II. Das Eliminationsproblem und die Syllogistik; Programs of the Grand Ducal
Gymnasium of Tauberbischofsheim (Baden), 1900, 1901 (Leipsic).

W. E. Johnson. "Sur la théorie des égalités logiques" (Bibliothèque du Congrès
international de Philosophie, Vol. III, Logique et Histoire des Sciences; Paris,
1901).

Platon Poretsky. Sept Lois fondamentales de la théorie des égalités logiques
(Kazan, 1899).

— Quelques lois ultérieures de la théorie des égalités logiques (Kazan, 1902).

— "Exposé élémentaire de la théorie des égalités logiques à deux termes" (Revue
de Métaphysique et de Morale, Vol. VIII, 1900).



[11] This list contains only the works relating to the system of Boole and
Schroder explained in this work.

[12] Eugen Müller has prepared a part, and is preparing more, of the publication
of supplements to Vols. II and III, from the papers left by Schroder.

[13] A valuable work from the points of view of history and bibliography.






— "Theorie des egalites logiques a trois termes" {Bibliotheque du Congres in- 

ternational de Philosophie). Vol. III. {Logique et Histoire des Sciences). 
(Paris, 1901, pp. 201-233). 

— Theorie des non-egalites logiques (Kazan, 1904). 

E. V. Huntington. "Sets of Independent Postulates for the Algebra of 
Logic" {Transactions of the American Mathematical Society^ Vol. V, 1904). 






THE ALGEBRA OF LOGIC 



0.1 Introduction 

The algebra of logic was founded by George Boole (1815-1864); it was 
developed and perfected by Ernst Schroder (1841-1902). The fundamental 
laws of this calculus were devised to express the principles of reasoning, the 
"laws of thought". But this calculus may be considered from the purely formal 
point of view, which is that of mathematics, as an algebra based upon certain 
principles arbitrarily laid down. It belongs to the realm of philosophy to decide 
whether, and in what measure, this calculus corresponds to the actual operations 
of the mind, and is adapted to translate or even to replace argument; we cannot 
discuss this point here. The formal value of this calculus and its interest for the 
mathematician are absolutely independent of the interpretation given it and of 
the application which can be made of it to logical problems. In short, we shall 
discuss it not as logic but as algebra. 

0.2 The Two Interpretations of the Logical Calculus

There is one circumstance of particular interest, namely, that the algebra in 
question, like logic, is susceptible of two distinct interpretations, the parallelism 
between them being almost perfect, according as the letters represent concepts 
or propositions. Doubtless we can, with Boole and Schroder, reduce 
the two interpretations to one, by considering the concepts on the one hand 
and the propositions on the other as corresponding to assemblages or classes; 
since a concept determines the class of objects to which it is applied (and which 
in logic is called its extension), and a proposition determines the class of the 
instances or moments of time in which it is true (and which by analogy can also 
be called its extension). Accordingly the calculus of concepts and the calculus of 
propositions become reduced to but one, the calculus of classes, or, as Leibniz 
called it, the theory of the whole and part, of that which contains and that which 
is contained. But as a matter of fact, the calculus of concepts and the calculus 
of propositions present certain differences, as we shall see, which prevent their 
complete identification from the formal point of view and consequently their 
reduction to a single "calculus of classes". 

Accordingly we have in reality three distinct calculi, or, in the part common 
to all, three different interpretations of the same calculus. In any case the reader 
must not forget that the logical value and the deductive sequence of the formulas 
does not in the least depend upon the interpretations which may be given them, 
and, in order to make this necessary abstraction easier, we shall take care to 
place the symbols "C. I." {conceptual interpretation) and "P. I." {prepositional 
interpretation) before all interpretative phrases. These interpretations shall 
serve only to render the formulas intelligible, to give them clearness and to 
make their meaning at once obvious, but never to justify them. They may be 
omitted without destroying the logical rigidity of the system. 



In order not to favor either interpretation we shall say that the letters repre- 
sent terms; these terms may be either concepts or propositions according to the 
case in hand. Hence we use the word term only in the logical sense. When we 
wish to designate the "terms" of a sum we shall use the word summand in order 
that the logical and mathematical meanings of the word may not be confused. 
A term may therefore be either a factor or a summand. 

0.3 Relation of Inclusion 

Like all deductive theories, the algebra of logic may be established on various 
systems of principles[14]; we shall choose the one which most nearly approaches 
the exposition of Schroder and current logical interpretation. 

The fundamental relation of this calculus is the binary (two-termed) relation
which is called inclusion (for classes), subsumption (for concepts), or implication
(for propositions). We will adopt the first name as affecting alike the two logical
interpretations, and we will represent this relation by the sign < because it has
formal properties analogous to those of the mathematical relation < ("less than"),
or more exactly ≤, especially the relation of not being symmetrical. Because of
this analogy Schroder represents this relation by the sign ⋹, which we shall
not employ because it is complex, whereas the relation of inclusion is a simple
one.

In the system of principles which we shall adopt, this relation is taken as a 
primitive idea and is consequently indefinable. The explanations which follow 
are not given for the purpose of defining it but only to indicate its meaning 
according to each of the two interpretations. 

C. I.: When a and b denote concepts, the relation a < b signifies that the
concept a is subsumed under the concept b; that is, it is a species with respect
to the genus b. From the extensive point of view, it denotes that the class of a's
is contained in the class of b's or makes a part of it; or, more concisely, that
"All a's are b's". From the comprehensive point of view it means that the concept
b is contained in the concept a or makes a part of it, so that consequently the
character a implies or involves the character b. Example: "All men are mortal";
"Man implies mortal"; "Who says man says mortal"; or, simply, "Man, therefore
mortal".

P. I.: When a and b denote propositions, the relation a < b signifies that the
proposition a implies or involves the proposition b, which is often expressed by
the hypothetical judgement, "If a is true, b is true"; or by "a implies b"; or more
simply by "a, therefore b". We see that in both interpretations the relation <
may be translated approximately by "therefore".
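
The following short Python sketch (not part of Couturat's text) merely illustrates
the two readings of the copula <: classes are modelled as Python sets and
propositions as booleans; the sample sets are invented for the example.

    # Conceptual interpretation: a < b means the class a is contained in b.
    def includes(a, b):
        return a <= b                    # set inclusion

    # Propositional interpretation: a < b means the proposition a implies b.
    def implies(a, b):
        return (not a) or b              # material implication

    men = {"Socrates", "Plato"}
    mortals = {"Socrates", "Plato", "Bucephalus"}
    print(includes(men, mortals))        # True: "All men are mortal"
    print(implies(False, True), implies(True, False))   # True False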



[14] See Huntington, "Sets of Independent Postulates for the Algebra of Logic",
Transactions of the Am. Math. Soc., Vol. V, 1904, pp. 288-309. [Here he says: "Any
set of consistent postulates would give rise to a corresponding algebra, viz., the
totality of propositions which follow from these postulates by logical deductions.
Every set of postulates should be free from redundances, in other words, the
postulates of each set should be independent, no one of them deducible from the
rest."]



Remark. — Such a relation as "a < b" is a proposition, whatever may be the
interpretation of the terms a and b. Consequently, whenever a relation < has
two like relations (or even only one) for its members, it can receive only the
propositional interpretation, that is to say, it can only denote an implication.

A relation whose members are simple terms (letters) is called a primary
proposition; a relation whose members are primary propositions is called a
secondary proposition, and so on.

From this it may be seen at once that the propositional interpretation is 
more homogeneous than the conceptual, since it alone makes it possible to give 
the same meaning to the copula < in both primary and secondary propositions. 

0.4 Definition of Equality 

There is a second copula that may be defined by means of the first; this is the
copula = ("equal to"). By definition we have

a = b,

whenever

a < b and b < a

are true at the same time, and then only. In other words, the single relation
a = b is equivalent to the two simultaneous relations a < b and b < a.

In both interpretations the meaning of the copula = is determined by its 
formal definition: 

C. I.: a = b means, "All a's are b's and all b's are a's"; in other words, that
the classes a and b coincide, that they are identical.[15]

P. I.: a = b means that a implies b and b implies a; in other words, that the
propositions a and b are equivalent, that is to say, either true or false at the
same time.[16]

Remark. — The relation of equality is symmetrical by very reason of its
definition: a = b is equivalent to b = a. But the relation of inclusion is not
symmetrical: a < b is not equivalent to b < a, nor does it imply it. We might
agree to consider the expression a > b equivalent to b < a, but we prefer for
the sake of clearness to preserve always the same sense for the copula <. However,
we might translate verbally the same inclusion a < b sometimes by "a is
contained in b", and sometimes by "b contains a".

In order not to favor either interpretation, we will call the first member of 
this relation the antecedent and the second the consequent. 

C. I.: The antecedent is the subject and the consequent is the predicate of a 
universal affirmative proposition. 



[15] This does not mean that the concepts a and b have the same meaning. Examples:
"triangle" and "trilateral", "equiangular triangle" and "equilateral triangle".

[16] This does not mean that they have the same meaning. Example: "The triangle
ABC has two equal sides", and "The triangle ABC has two equal angles".



P. I.: The antecedent is the premise or the cause, and the consequent is the
consequence. When an implication is translated by a hypothetical (or conditional)
judgment the antecedent is called the hypothesis (or the condition) and the
consequent is called the thesis.

When we shall have to demonstrate an equality we shall usually analyze it into
two converse inclusions and demonstrate them separately. This analysis is
sometimes made also when the equality is a datum (a premise).

When both members of the equality are propositions, it can be separated into 
two implications, of which one is called a theorem and the other its reciprocal. 
Thus whenever a theorem and its reciprocal are true we have an equality. A 
simple theorem gives rise to an implication whose antecedent is the hypothesis 
and whose consequent is the thesis of the theorem. 

It is often said that the hypothesis is the sufficient condition of the thesis, and 
the thesis the necessary condition of the hypothesis; that is to say, it is sufficient 
that the hypothesis be true for the thesis to be true; while it is necessary that 
the thesis be true for the hypothesis to be true also. When a theorem and its 
reciprocal are true we say that its hypothesis is the necessary and sufficient 
condition of the thesis; that is to say, that it is at the same time both cause and 
consequence. 

0.5 Principle of Identity 

The first principle or axiom of the algebra of logic is the principle of identity, 
which is formulated thus: 

Ax. 1 

a < a, 

whatever the term a may be. 

C. I.: "All a's are a's", i.e., any class whatsoever is contained in itself. 

P. I.: "a implies a", i.e., any proposition whatsoever implies itself. 

This is the primitive formula of the principle of identity. By means of the 
definition of equality, we may deduce from it another formula which is often 
wrongly taken as the expression of this principle: 

a = a, 

whatever a may be; for when we have 

a < a,a < a, 

we have as a direct result, 

a = a. 

C. I.: The class a is identical with itself. 

P. I.: The proposition a is equivalent to itself. 



0.6 Principle of the Syllogism 

Another principle of the algebra of logic is the principle of the syllogism, which 
may be formulated as follows: 

Ax. 2 

(a < b)(b < c) < (a < c). 

C. I.: "If all a's are 6's, and if all 6's are c's, then all a's are c's". This is the 
principle of the categorical syllogism. 

P. I.: "If a implies b, and if b implies c, a implies c." This is the principle of 
the hypothetical syllogism. 

We see that in this formula the principal copula has always the sense of 
implication because the proposition is a secondary one. 

By the definition of equality the consequences of the principle of the syllogism
may be stated in the following formulas[17]:

(a < b)(b = c) < (a < c),
(a = b)(b < c) < (a < c),
(a = b)(b = c) < (a = c).

The conclusion is an equality only when both premises are equalities. 
The preceding formulas can be generalized as follows: 

(a <b) {b < c) {c< d) < {a < d), 
{a = b) {b = c) {c = d)<{a = d). 

Here we have the two chief formulas of the sorites. Many other combinations 
may be easily imagined, but we can have an equality for a conclusion only when 
all the premises are equalities. This statement is of great practical value. In 
a succession of deductions we must pay close attention to see if the transition 
from one proposition to the other takes place by means of an equivalence or only 
of an implication. There is no equivalence between two extreme propositions 
unless all intermediate deductions are equivalences; in other words, if there is 
one single implication in the chain, the relation of the two extreme propositions 
is only that of implication. 
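
As a simple illustration (not in the original), the principle of the syllogism
can be checked by brute force in the propositional interpretation, letting each
term range over the two truth values:

    from itertools import product

    def leq(x, y):                       # "x < y" read as "x implies y"
        return (not x) or y

    # Ax. 2: (a < b)(b < c) < (a < c) holds for every assignment.
    assert all(leq(leq(a, b) and leq(b, c), leq(a, c))
               for a, b, c in product([False, True], repeat=3))
    print("the principle of the syllogism holds for all assignments")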

0.7 Multiplication and Addition 

The algebra of logic admits of three operations, logical multiplication, logical 
addition, and negation. The two former are binary operations, that is to say, 
combinations of two terms having as a consequent a third term which may or 
may not be different from each of them. The existence of the logical product 



[17] Strictly speaking, these formulas presuppose the laws of multiplication which will be 
established further on; but it is fitting to cite them here in order to compare them with the 
principle of the syllogism from which they are derived. 



and logical sum of two terms must necessarily answer the purpose of a double 
postulate, for simply to define an entity is not enough for it to exist. The two 
postulates may be formulated thus: 

Ax. 3 Given any two terms, a and b, then there is a term p such that

p < a, p < b,

and that for every value of x for which

x < a, x < b,

we have also

x < p.

Ax. 4 Given any two terms, a and b, there exists a term s such that

a < s, b < s,

and that for every value of x for which

a < x, b < x,

we have also

s < x.

It is easily proved that the terms p and s determined by the given conditions
are unique, and accordingly we can define the product ab and the sum a + b as
being respectively the terms p and s.

C. I.: 1. The product of two classes is a class p which is contained in each 
of them and which contains every (other) class contained in each of them; 

2. The sum of two classes a and b is a class s which contains each of them 
and which is contained in every (other) class which contains each of them. 

Taking the words "less than" and "greater than" in a metaphorical sense 
which the analogy of the relation < with the mathematical relation of inequality 
suggests, it may be said that the product of two classes is the greatest class 
contained in both, and the sum of two classes is the smallest class which contains 
both.[18] Consequently the product of two classes is the part that is common to 
each (the class of their common elements) and the sum of two classes is the class 
of all the elements which belong to at least one of them. 

P. I.: 1. The product of two propositions is a proposition which implies each 
of them and which is implied by every proposition which implies both: 

2. The sum of two propositions is the proposition which is implied by each 
of them and which implies every proposition implied by both. 

Therefore we can say that the product of two propositions is their weakest 
common cause, and that their sum is their strongest common consequence, 
strong and weak being used in a sense that every proposition which implies 



[18] According to another analogy Dedekind designated the logical sum and product
by the same signs as the least common multiple and greatest common divisor (Was
sind und was sollen die Zahlen? Nos. 8 and 17, 1887). [Cf. English translation
entitled Essays on Number (Chicago, Open Court Publishing Co., 1901), pp. 46 and
48.] Georg Cantor originally gave them the same designation (Mathematische
Annalen, Vol. XVII, 1880).



another is stronger than the latter and the latter is weaker than the one which 
implies it. Thus it is easily seen that the product of two propositions consists 
in their simultaneous affirmation: "a and b are true", or simply "a and b"; and
that their sum consists in their alternative affirmation, "either a or b is true",
or simply "a or b".

Remark. — Logical addition thus defined is not disjunctive;[19] that is to say, 
it does not presuppose that the two summands have no element in common. 
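
A minimal sketch (illustrative only, with arbitrarily chosen sample classes):
reading classes as Python sets, the product of Ax. 3 is the intersection and the
sum of Ax. 4 is the union, and the defining postulates can be spot-checked
directly:

    a, b = {1, 2, 3}, {2, 3, 4}
    p, s = a & b, a | b                  # candidate product ab and sum a + b

    # Ax. 3: p < a, p < b, and every x below both a and b is below p.
    assert p <= a and p <= b
    assert all(x <= p for x in [set(), {2}, {3}, {2, 3}] if x <= a and x <= b)

    # Ax. 4: a < s, b < s, and every x above both a and b is above s.
    assert a <= s and b <= s
    assert all(s <= x for x in [{1, 2, 3, 4}, {0, 1, 2, 3, 4}]
               if a <= x and b <= x)
    print("product =", p, "sum =", s)    # {2, 3} and {1, 2, 3, 4}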

0.8 Principles of Simplification and Composition 

The two preceding definitions, or rather the postulates which precede and justify 
them, yield directly the following formulas: 

(1) ab < a, ab < b,

(2) (x < a)(x < b) < (x < ab),

(3) a < a + b, b < a + b,

(4) (a < x)(b < x) < (a + b < x).

Formulas (1) and (3) bear the name of the principle of simplification because 
by means of them the premises of an argument may be simplified by deducing 
therefrom weaker propositions, either by deducing one of the factors from a 
product, or by deducing from a proposition a sum (alternative) of which it is a 
summand. 

Formulas (2) and (4) are called the principle of composition, because by 
means of them two inclusions of the same antecedent or the same consequent 
may be combined (composed). In the first case we have the product of the 
consequents, in the second, the sum of the antecedents. 

The formulas of the principle of composition can be transformed into equal- 
ities by means of the principles of the syllogism and of simplification. Thus we 
have 

1. (Syll.) (x < ab)(ab < a) < (x < a),
   (Syll.) (x < ab)(ab < b) < (x < b).

Therefore

(Comp.) (x < ab) < (x < a)(x < b).

2. (Syll.) (a < a + b)(a + b < x) < (a < x),
   (Syll.) (b < a + b)(a + b < x) < (b < x).



[19] Boole, closely following analogy with ordinary mathematics, premised, as a necessary 
condition to the definition of "x + y", that x and y were mutually exclusive. Jevons, and 
practically all mathematical logicians after him, advocated, on various grounds, the definition 
of "logical addition" in a form which does not necessitate mutual exclusiveness. 



Therefore 

(Comp.) (a + b < x) < (a < x)(b < x). 

If we compare the new formulas with those preceding, which are their converse
propositions, we may write

(x < ab) = (x < a)(x < b),
(a + b < x) = (a < x)(b < x).

Thus, to say that x is contained in ab is equivalent to saying that it is
contained at the same time in both a and b; and to say that x contains a + b is
equivalent to saying that it contains at the same time both a and b.
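
The two equalities just obtained can be verified mechanically in the
propositional interpretation (a sketch, not part of the text), where each term is
a truth value and < is implication:

    from itertools import product

    def leq(u, v):
        return (not u) or v

    for a, b, x in product([False, True], repeat=3):
        assert leq(x, a and b) == (leq(x, a) and leq(x, b))  # (x < ab) = (x < a)(x < b)
        assert leq(a or b, x) == (leq(a, x) and leq(b, x))   # (a + b < x) = (a < x)(b < x)
    print("both composition equalities hold for every assignment")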

0.9 The Laws of Tautology and of Absorption 

Since the definitions of the logical sum and product do not imply any order among
the terms added or multiplied, logical addition and multiplication evidently
possess commutative and associative properties which may be expressed in the
formulas

ab = ba, a + b = b + a,

(ab)c = a(bc), (a + b) + c = a + (b + c).

Moreover they possess a special property which is expressed in the law of 
tautology: 

a = aa, a = a + a.



Demonstration:

1. (Simpl.) aa < a,
   (Comp.) (a < a)(a < a) < (a < aa),

whence, by the definition of equality,

(aa < a)(a < aa) = (a = aa).

In the same way:

2. (Simpl.) a < a + a,
   (Comp.) (a < a)(a < a) < (a + a < a),

whence

(a < a + a)(a + a < a) = (a = a + a).



From this law it follows that the sum or product of any number whatever 
of equal (identical) terms is equal to one single term. Therefore in the algebra 
of logic there are neither multiples nor powers, in which respect it is very much 
simpler than numerical algebra. 

Finally, logical addition and multiplication possess a remarkable property
which also serves greatly to simplify calculations, and which is expressed by the
law of absorption:

a + ab = a, a(a + b) = a.

Demonstration:

1. (Comp.) (a < a)(ab < a) < (a + ab < a),
   (Simpl.) a < a + ab,

whence, by the definition of equality,

(a + ab < a)(a < a + ab) = (a + ab = a).

In the same way:

2. (Comp.) (a < a)(a < a + b) < [a < a(a + b)],
   (Simpl.) a(a + b) < a,

whence

[a < a(a + b)][a(a + b) < a] = [a(a + b) = a].

Thus a term (a) absorbs a summand (ab) of which it is a factor, or a factor
(a + b) of which it is a summand.
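
As an illustration (not from the original), the laws of tautology and absorption
can be confirmed with classes read as Python sets; any sample classes would serve:

    samples = [set(), {1}, {2, 3}, {1, 2, 3}]
    for a in samples:
        for b in samples:
            assert a & a == a and a | a == a                  # aa = a, a + a = a
            assert a | (a & b) == a and a & (a | b) == a      # a + ab = a, a(a + b) = a
    print("tautology and absorption hold on the sample classes")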

0.10 Theorems on Multiplication and Addition 

We can now establish two theorems with regard to the combination of inclusions 
and equalities by addition and multiplication: 

Th. 1 

(a < b) < (ac < bc), (a < b) < (a + c < b + c). 

Demonstration:

1. (Simpl.) ac < c,
   (Syll.) (ac < a)(a < b) < (ac < b),
   (Comp.) (ac < b)(ac < c) < (ac < bc).

2. (Simpl.) c < b + c,
   (Syll.) (a < b)(b < b + c) < (a < b + c),
   (Comp.) (a < b + c)(c < b + c) < (a + c < b + c).






This theorem may be easily extended to the case of equalities:

(a = b) < (ac = bc), (a = b) < (a + c = b + c).



Th. 2 



(a < b)(c < d) < (ac < bd),
(a < b)(c < d) < (a + c < b + d).



Demonstration:

1. (Syll.) (ac < a)(a < b) < (ac < b),
   (Syll.) (ac < c)(c < d) < (ac < d),
   (Comp.) (ac < b)(ac < d) < (ac < bd).

2. (Syll.) (a < b)(b < b + d) < (a < b + d),
   (Syll.) (c < d)(d < b + d) < (c < b + d),
   (Comp.) (a < b + d)(c < b + d) < (a + c < b + d).

This theorem may easily be extended to the case in which one of the two 
inclusions is replaced by an equality: 

(a = b)(c < d) < (ac < bd),
(a = b)(c < d) < (a + c < b + d).

When both are replaced by equalities the result is an equality: 

(a = b)(c = d) < (ac = bd),

(a = b)(c = d) < (a + c = b + d).

To sum up, two or more inclusions or equalities can be added or multiplied 
together member by member; the result will not be an equality unless all the 
propositions combined are equalities. 
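
A small sketch (illustrative only) checking Th. 2 in the propositional
interpretation, where multiplying and adding member by member become conjunction
and disjunction of truth values:

    from itertools import product

    def leq(u, v):
        return (not u) or v

    for a, b, c, d in product([False, True], repeat=4):
        if leq(a, b) and leq(c, d):
            assert leq(a and c, b and d)      # ac < bd
            assert leq(a or c, b or d)        # a + c < b + d
    print("Th. 2 holds for every assignment of a, b, c, d")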

0.11 The First Formula for Transforming Inclusions into Equalities

We can now demonstrate an important formula by which an inclusion may be 
transformed into an equality, or vice versa: 

(a < b) = (a = ab), (a < b) = (a + b = b).

Demonstration: 

1. (a < b) < (a = ab), (a < b) < (a + b = b). 






For

(Comp.) (a < a)(a < b) < (a < ab),
        (a < b)(b < b) < (a + b < b).

On the other hand, we have

(Simpl.) ab < a, b < a + b,

(Def. =) (a < ab)(ab < a) = (a = ab),
         (a + b < b)(b < a + b) = (a + b = b).

2. (a = ab) < (a < b), (a + b = b) < (a < b).

For

(a = ab)(ab < b) < (a < b),
(a < a + b)(a + b = b) < (a < b).

Remark. — If we take the relation of equality as a primitive idea (one not
defined) we shall be able to define the relation of inclusion by means of one
of the two preceding formulas.[20] We shall then be able to demonstrate the
principle of the syllogism.[21]

From the preceding formulas may be derived an interesting result:

(a = b) = (ab = a + b).

For

1. (a = b) = (a < b)(b < a),

   (a < b) = (a = ab), (b < a) = (a + b = a),
   (Syll.) (a = ab)(a + b = a) < (ab = a + b).

2. (ab = a + b) < (a + b < ab),
   (Comp.) (a + b < ab) = (a < ab)(b < ab),

   (a < ab)(ab < a) = (a = ab) = (a < b),
   (b < ab)(ab < b) = (b = ab) = (b < a).

Hence

(ab = a + b) < (a < b)(b < a) = (a = b).
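
Both transformation formulas and the derived result can be confirmed on classes
read as Python sets (a sketch, not part of the text; the sample classes are
arbitrary):

    samples = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]
    for a in samples:
        for b in samples:
            # (a < b) = (a = ab) = (a + b = b)
            assert (a <= b) == (a == a & b) == (a | b == b)
            # (a = b) = (ab = a + b)
            assert (a == b) == (a & b == a | b)
    print("the transformation formulas hold on the sample classes")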



[20] See Huntington, op. cit., §??.

[21] This can be demonstrated as follows: By definition we have (a < b) = (a = ab),
and (b < c) = (b = bc). If in the first equality we substitute for b its value
derived from the second equality, then a = abc. Substitute for a its equivalent ab,
then ab = abc. This equality is equivalent to the inclusion ab < c. Conversely
substitute a for ab; whence we have a < c. Q.E.D.






0.12 The Distributive Law 

The principles previously stated make it possible to demonstrate the converse 
distributive law, both of multiplication with respect to addition, and of addition 
with respect to multiplication, 

ac + bc < (a + b)c, ab + c < (a + c)(b + c). 

Demonstration:

1. (a < a + b) < [ac < (a + b)c],
   (b < a + b) < [bc < (a + b)c];

whence, by composition,

[ac < (a + b)c][bc < (a + b)c] < [ac + bc < (a + b)c].



2. (ab < a) < (ab + c < a + c),
   (ab < b) < (ab + c < b + c),

whence, by composition,

(ab + c < a + c)(ab + c < b + c) < [ab + c < (a + c)(b + c)].

But these principles are not sufficient to demonstrate the direct distributive 
law 

(a + b)c < ac + bc, (a + c)(b + c) < ab + c, 

and we are obliged to postulate one of these formulas or some simpler one 
from which they can be derived. For greater convenience we shall postulate the 
formula 

Ax. 5 

(a + b)c < ac + bc. 

This, combined with the converse formula, produces the equality 

(a + b)c = ac + bc 

which we shall call briefly the distributive law. 
From this may be directly deduced the formula 

(a + b)(c + d) = ac + bc + ad + bd, 

and consequently the second formula of the distributive law, 

(a + c)(b + c) = ab + c. 






For

(a + c)(b + c) = ab + ac + bc + c,

and, by the law of absorption,

ac + bc + c = c.

This second formula implies the inclusion cited above,

(a + c)(b + c) < ab + c,

which thus is shown to be proved.

Corollary. — We have the equality

ab + ac + bc = (a + b)(a + c)(b + c),

for

(a + b)(a + c)(b + c) = (a + bc)(b + c) = ab + ac + bc.

It will be noted that the two members of this equality differ only in having 
the signs of multiplication and addition transposed (compare §0.14). 
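
A brute-force check of the distributive equalities and of the corollary, in the
propositional interpretation (illustrative sketch only):

    from itertools import product

    for a, b, c in product([False, True], repeat=3):
        assert ((a or b) and c) == ((a and c) or (b and c))   # (a + b)c = ac + bc
        assert ((a or c) and (b or c)) == ((a and b) or c)    # (a + c)(b + c) = ab + c
        assert ((a and b) or (a and c) or (b and c)) == (
            (a or b) and (a or c) and (b or c))               # the corollary
    print("the distributive laws hold for every assignment")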

0.13 Definition of 0 and 1

We shall now define and introduce into the logical calculus two special terms
which we shall designate by 0 and by 1, because of some formal analogies that
they present with the zero and unity of arithmetic. These two terms are formally
defined by the two following principles which affirm or postulate their existence.

Ax. 6 There is a term 0 such that whatever value may be given to the term x,
we have

0 < x.

Ax. 7 There is a term 1 such that whatever value may be given to the term x, 
we have 

x < 1. 

It may be shown that each of the terms thus defined is unique; that is to 
say, if a second term possesses the same property it is equal to (identical with) 
the first. 

The two interpretations of these terms give rise to paradoxes which we shall
not stop to elucidate here, but which will be justified by the conclusions of the
theory.[22]

C. I.: 0 denotes the class contained in every class; hence it is the "null" or
"void" class which contains no element (Nothing or Naught). 1 denotes the class
which contains all classes; hence it is the totality of the elements which are


[22] Compare the author's Manuel de Logistique, Chap. I, §8, Paris, 1905. [This
work, however, did not appear.]






contained within it. It is called, after Boole, the "universe of discourse" or 
simply the "whole". 

P. I.: 0 denotes the proposition which implies every proposition; it is the
"false" or the "absurd", for it implies notably all pairs of contradictory
propositions. 1 denotes the proposition which is implied in every proposition; it
is the "true", for the false may imply the true whereas the true can imply only
the true.

By definition we have the following inclusions 

0 < 0, 0 < 1, 1 < 1, 

the first and last of which, moreover, result from the principle of identity. It is 
important to bear the second in mind. 

C. I.: The null class is contained in the whole.[23] 

P. I.: The false implies the true. 

By the definitions of 0 and 1 we have the equivalences 

(a < 0) = (a = 0), (1 < a) = (a = 1), 

since we have 

0 < a, a < 1 

whatever the value of a. 

Consequently the principle of composition gives rise to the two following 
corollaries: 

(a = 0)(b = 0) = (a + b = 0),
(a = 1)(b = 1) = (ab = 1).

Thus we can combine two equalities having 0 for a second member by adding their
first members, and two equalities having 1 for a second member by multiplying
their first members.

Conversely, to say that a sum is "null" [zero] is to say that each of the 
summands is null; to say that a product is equal to 1 is to say that each of its 
factors is equal to 1. 

Thus we have 

(a + b = 0) < (a = 0),
(ab = 1) < (a = 1),

and more generally (by the principle of the syllogism)

(a < b)(b = 0) < (a = 0),
(a < b)(a = 1) < (b = 1).

It will be noted that we can not conclude from these the equalities ab = 0
and a + b = 1. And indeed in the conceptual interpretation the first equality



[23] The rendering "Nothing is everything" must be avoided.




denotes that the part common to the classes a and b is null; it by no means
follows that either one or the other of these classes is null. The second denotes
that these two classes combined form the whole; it by no means follows that
either one or the other is equal to the whole.

The following formulas, comprising the rules for the calculus of 0 and 1, can
be demonstrated:

a x 0 = 0, a + 1 = 1,
a + 0 = a, a x 1 = a.

For 

(0 < a) = (0 = 0 x a) = (a + 0 = a),
(a < 1) = (a = a x 1) = (a + 1 = 1).

Accordingly it does not change a term to add 0 to it or to multiply it by 1.
We express this fact by saying that 0 is the modulus of addition and 1 the
modulus of multiplication.

On the other hand, the product of any term whatever by 0 is 0 and the sum
of any term whatever with 1 is 1.

These formulas justify the following interpretation of the two terms: 

C. I.: The part common to any class whatever and to the null class is the 
null class; the sum of any class whatever and of the whole is the whole. The 
sum of the null class and of any class whatever is equal to the latter; the part 
common to the whole and any class whatever is equal to the latter. 

P. I.: The simultaneous affirmation of any proposition whatever and of a
false proposition is equivalent to the latter (i.e., it is false); while their
alternative affirmation is equal to the former. The simultaneous affirmation of
any proposition whatever and of a true proposition is equivalent to the former;
while their alternative affirmation is equivalent to the latter (i.e., it is true).
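
A minimal sketch (not from the text): taking 0 as the null class and 1 as an
arbitrarily chosen universe of discourse, the four formulas and the inclusions
0 < a and a < 1 can be checked with Python sets:

    universe = {1, 2, 3, 4}
    zero, one = set(), universe

    for a in [set(), {1}, {2, 4}, universe]:
        assert a & zero == zero and a | one == one     # a x 0 = 0,  a + 1 = 1
        assert a | zero == a and a & one == a          # a + 0 = a,  a x 1 = a
        assert zero <= a <= one                        # 0 < a and a < 1
    print("0 and 1 behave as the moduli of addition and multiplication")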

Remark. — If we accept the four preceding formulas as axioms, because of 
the proof afforded by the double interpretation, we may deduce from them the 
paradoxical formulas 

0 < x and x < 1, 

by means of the equivalences established above, 

(a = ab) = (a < b) = (a + b = b). 

0.14 The Law of Duality 

We have proved that a perfect symmetry exists between the formulas relating to 
multiplication and those relating to addition. We can pass from one class to the 
other by interchanging the signs of addition and multiplication, on condition 
that we also interchange the terms 0 and 1 and reverse the meaning of the 
sign < (or transpose the two members of an inclusion). This symmetry, or 






duality as it is called, which exists in principles and definitions, must also exist 
in all the formulas deduced from them as long as no principle or definition is 
introduced which would overthrow them. Hence a true formula may be deduced 
from another true formula by transforming it by the principle of duality; that 
is, by following the rule given above. In its application the law of duality makes 
it possible to replace two demonstrations by one. It is well to note that this 
law is derived from the definitions of addition and multiplication (the formulas 
for which are reciprocal by duality) and not, as is often thought[24], from the 
laws of negation which have not yet been stated. We shall see that these laws 
possess the same property and consequently preserve the duality, but they do 
not originate it; and duality would exist even if the idea of negation were not 
introduced. For instance, the equality (§0.12) 

ab + ac + bc = (a + b)(a + c)(b + c) 

is its own reciprocal by duality, for its two members are transformed into each 
other by duality. 

It is worth remarking that the law of duality is only applicable to primary 
propositions. We call [after Boole] those propositions primary which contain 
but one copula (< or =). We call those propositions secondary of which both 
members (connected by the copula < or =) are primary propositions, and so 
on. For instance, the principle of identity and the principle of simplification are 
primary propositions, while the principle of the syllogism and the principle of 
composition are secondary propositions. 



0.15 Definition of Negation 

The introduction of the terms 0 and 1 makes it possible for us to define negation.
This is a "uni-nary" operation which transforms a single term into another term
called its negative.[25] The negative of a is called not-a and is written a'.[26]
Its formal definition implies the following postulate of existence[27]:



[24] Boole thus derives it (Laws of Thought, London, 1854, Chap. III, Prop. IV).

[25] [In French] the same word négation denotes both the operation and its result,
which becomes equivocal. The result ought to be denoted by another word, like [the
English] "negative". Some authors say "supplementary" or "supplement" [e.g. Boole
and Huntington]. Classical logic makes use of the term "contradictory", especially
for propositions.

[26] We adopt here the notation of MacColl; Schroder indicates not-a by a₁, which
prevents the use of indices and obliges us to express them as exponents. The
notation a' has the advantage of excluding neither indices nor exponents. The
notation ā employed by many authors is inconvenient for typographical reasons.
When the negative affects a proposition written in an explicit form (with a
copula) it is applied to the copula (< or =) by a vertical bar (≮ or ≠). The
accent can be considered as the indication of a vertical bar applied to letters.

[27] Boole follows Aristotle in usually calling the law of duality the principle
of contradiction, "which affirms that it is impossible for any being to possess a
quality and at the same time not to possess it". He writes it in the form of an
equation of the second degree, x - x² = 0, or x(1 - x) = 0, in which 1 - x
expresses the universe less x, or not-x. Thus he regards the law of duality as
derived from negation, as stated in note 24 above.






Ax. 8 Whatever the term a may be, there is also a term a' such that we have 
at the same time 

aa' = 0, a + a' = 1. 

It can be proved by means of the following lemma that if a term so defined
exists it is unique:

If at the same time

ac = bc, a + c = b + c,

then 

a = b. 

Demonstration. — Multiplying both members of the second premise by a, we have

a + ac = ab + ac.

Multiplying both members by b,

ab + bc = b + bc.

By the first premise,

ab + ac = ab + bc.

Hence

a + ac = b + bc,

which by the law of absorption may be reduced to

a = b.

Remark. — This demonstration rests upon the direct distributive law. This 
law cannot, then, be demonstrated by means of negation, at least in the system 
of principles which we are adopting, without reasoning in a circle. 

This lemma being established, let us suppose that the same term a has two
negatives; in other words, let a'₁ and a'₂ be two terms each of which by itself
satisfies the conditions of the definition. We will prove that they are equal.
Since, by hypothesis,

aa'₁ = 0, a + a'₁ = 1,
aa'₂ = 0, a + a'₂ = 1,

we have

aa'₁ = aa'₂, a + a'₁ = a + a'₂,

whence we conclude, by the preceding lemma, that

a'₁ = a'₂.

We can now speak of the negative of a term as of a unique and well-defined 
term. 

The uniformity of the operation of negation may be expressed in the following
manner:

If a = b, then also a' = b'. By this proposition, both members of an equality
in the logical calculus may be "denied".
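
The lemma on which the uniqueness of the negative rests can be checked by brute
force in the propositional interpretation (a sketch, not part of the original):

    from itertools import product

    # If ac = bc and a + c = b + c, then a = b.
    for a, b, c in product([False, True], repeat=3):
        if (a and c) == (b and c) and (a or c) == (b or c):
            assert a == b
    print("the lemma holds for every assignment of a, b, c")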






0.16 The Principles of Contradiction and of Excluded Middle

By definition, a term and its negative verify the two formulas

aa' = 0, a + a' = 1,

which represent respectively the principle of contradiction and the principle of
excluded middle.[28]

C. I.: 1. The classes a and a' have nothing in common; in other words, no 
element can be at the same time both a and not-a. 

2. The classes a and a' combined form the whole; in other words, every 
element is either a or not-a. 

P. I.: 1. The simultaneous affirmation of the propositions a and not-a is 
false; in other words, these two propositions cannot both be true at the same 
time. 

2. The alternative affirmation of the propositions a and not-a is true; in 
other words, one of these two propositions must be true. 

Two propositions are said to be contradictory when one is the negative of 
the other; they cannot both be true or false at the same time. If one is true the 
other is false; if one is false the other is true. 

This is in agreement with the fact that the terms 0 and 1 are the negatives
of each other; thus we have

0 × 1 = 0,    0 + 1 = 1.

Generally speaking, we say that two terms are contradictory when one is the 
negative of the other. 

0.17 Law of Double Negation 

Moreover this reciprocity is general: if a term b is the negative of the term a,
then the term a is the negative of the term b. These two statements are expressed
by the same formulas

ab = 0,    a + b = 1,

and, while they unequivocally determine b in terms of a, they likewise determine
a in terms of b. This is due to the symmetry of these relations, that is to say, to
the commutativity of multiplication and addition. This reciprocity is expressed
by the law of double negation

(a')' = a,



^28 As Mrs. Ladd-Franklin has truly remarked (Baldwin, Dictionary of Philosophy
and Psychology, article "Laws of Thought"), the principle of contradiction is not sufficient to
define contradictories; the principle of excluded middle must be added, which equally deserves
the name of principle of contradiction. This is why Mrs. Ladd-Franklin proposes to
call them respectively the principle of exclusion and the principle of exhaustion, inasmuch
as, according to the first, two contradictory terms are exclusive (the one of the other); and,
according to the second, they are exhaustive (of the universe of discourse).



19 



which may be formally proved as follows: a' being by hypothesis the negative
of a, we have

aa' = 0,    a + a' = 1.

On the other hand, let a'' be the negative of a'; we have, in the same way,

a'a'' = 0,    a' + a'' = 1.

But, by the preceding lemma, these four equalities involve the equality

a = a''.

Q. E. D.

This law may be expressed in the following manner:

If b = a', we have a = b', and conversely, by symmetry.

This proposition makes it possible, in calculations, to transpose the negative 
from one member of an equality to the other. 

The law of double negation makes it possible to conclude the equality of two
terms from the equality of their negatives (if a' = b' then a = b), and therefore
to cancel the negation of both members of an equality.

From the characteristic formulas of negation together with the fundamental
properties of 0 and 1, it results that every product which contains two contra-
dictory factors is null, and that every sum which contains two contradictory
summands is equal to 1.

In particular, we have the following formulas:

a = ab + ab',    a = (a + b)(a + b'),

which may be demonstrated as follows by means of the distributive law:

a = a × 1 = a(b + b') = ab + ab',
a = a + 0 = a + bb' = (a + b)(a + b').

These formulas indicate the principle of the method of development which 
we shall explain in detail later (§§0.21 sqq.) 

0.18 Second Formulas for Transforming Inclusions 
into Equalities 

We can now establish two very important equivalences between inclusions and 
equalities: 

{a<b) = {ab' = 0), {a<b) = {a' ^b = 1). 

Demonstration. — 1. If we multiply the two members of the inclusion a < b 
by b' we have 

{ab' < bb') = {ab' < 0) = {ab' = 0). 



20 



2. Again, we know that 

a = ab + ab'.

Now if ab' = 0,

a = ab + 0 = ab.

On the other hand: 1. Add a' to each of the two members of the inclusion
a < b; we have

(a' + a < a' + b) = (1 < a' + b) = (a' + b = 1).

2. We know that

b = (a + b)(a' + b).

Now, if a' + b = 1,

b = (a + b) × 1 = a + b.

By the preceding formulas, an inclusion can be transformed at will into
an equality whose second member is either 0 or 1. Any equality may also be
transformed into an equality of this form by means of the following formulas:

(a = b) = (ab' + a'b = 0),    (a = b) = [(a + b')(a' + b) = 1].

Demonstration:

(a = b) = (a < b)(b < a) = (ab' = 0)(a'b = 0) = (ab' + a'b = 0),
(a = b) = (a < b)(b < a) = (a' + b = 1)(a + b' = 1) = [(a + b')(a' + b) = 1].

Again, we have the two formulas 

(a = b) = [(a + b)(a' + b') = 0],    (a = b) = (ab + a'b' = 1),

which can be deduced from the preceding formulas by performing the indicated 
multiplications (or the indicated additions) by means of the distributive law. 
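
These equivalences between inclusions and equalities lend themselves to a simple
mechanical check in the two-element interpretation. The Python sketch below is an
added illustration (inclusion a < b is read as "a implies b"), not part of the text:

    # Brute-force check of the equivalences of this section over 0/1 values.
    from itertools import product

    neg = lambda x: 1 - x
    for a, b in product((0, 1), repeat=2):
        incl = (a <= b)                                   # a < b  (inclusion)
        assert incl == ((a & neg(b)) == 0) == ((neg(a) | b) == 1)
        eq = (a == b)
        assert eq == (((a & neg(b)) | (neg(a) & b)) == 0)
        assert eq == (((a | neg(b)) & (neg(a) | b)) == 1)
    print("all equivalences hold for 0/1 values")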

0.19 The Law of Contraposition 

We are now able to demonstrate the law of contraposition,

(a < b) = (b' < a').

Demonstration. — By the preceding formulas, we have

(a < b) = (ab' = 0) = (b' < a').

Again, the law of contraposition may take the form

(a < b') = (b < a'),



21 



which presupposes the law of double negation. It may be expressed verbally as 
follows: "Two members of an inclusion may be interchanged on condition that 
both are denied". 

C. I.: "If all a is 6, then all not-b is not-a, and conversely". 

P. I.: "If a implies 6, not-b implies not-a and conversely"; in other words, "If 
a is true b is true", is equivalent to saying, "If b is false, a is false". 

This equivalence is the principle of the reductio ad absurdum (see hypothet- 
ical arguments, modus tollens, §0.58). 

0.20 Postulate of Existence 

One final axiom may be formulated here, which we will call the postulate of 
existence: 

Ax. 9

1 ≠ 0,

whence may be also deduced 0 ≠ 1.

In the conceptual interpretation (C. I.) this axiom means that the universe 
of discourse is not null, that is to say, that it contains some elements, at least 
one. If it contains but one, there are only two classes possible, 1 and 0. But 
even then they would be distinct, and the preceding axiom would be verified. 

In the propositional interpretation (P. I.) this axiom signifies that the true 
and false are distinct; in this case, it bears the mark of evidence and necessity. 
The contrary proposition, 1 = 0, is, consequently, the type of absurdity (of the
formally false proposition) while the propositions 0 = 0 and 1 = 1 are types of
identity (of the formally true proposition). Accordingly we put

(1 = 0) = 0,    (0 = 0) = (1 = 1) = 1.

More generally, every equality of the form

x = x

is equivalent to one of the identity-types; for, if we reduce this equality so that
its second member will be 0 or 1, we find

(xx' + x'x = 0) = (0 = 0),    (xx + x'x' = 1) = (1 = 1).

On the other hand, every equality of the form

x = x'

is equivalent to the absurdity-type, for we find by the same process,

(xx + x'x' = 0) = (1 = 0),    (xx' + x'x = 1) = (0 = 1).



22 



0.21 The Development of and of 1 

Hitherto we have met only such formulas as directly express customary modes 
of reasoning and consequently offer direct evidence. 

We shall now expound theories and methods which depart from the usual 
modes of thought and which constitute more particularly the algebra of logic 
in so far as it is a formal and, so to speak, automatic method of an absolute 
universality and an infallible certainty, replacing reasoning by calculation. 

The fundamental process of this method is development. Given the terms
a, b, c, ... (to any finite number), we can develop 0 or 1 with respect to these terms
(and their negatives) by the following formulas derived from the distributive law:

0 = aa',

0 = aa' + bb' = (a + b)(a + b')(a' + b)(a' + b'),

0 = aa' + bb' + cc' = (a + b + c)(a + b + c')(a + b' + c)(a + b' + c')
                      × (a' + b + c)(a' + b + c')(a' + b' + c)(a' + b' + c');

1 = a + a',

1 = (a + a')(b + b') = ab + ab' + a'b + a'b',

1 = (a + a')(b + b')(c + c') = abc + abc' + ab'c + ab'c'
                              + a'bc + a'bc' + a'b'c + a'b'c',

and so on. In general, for any number n of simple terms, 0 will be developed into
a product containing 2^n factors, and 1 into a sum containing 2^n summands. The
factors of 0 comprise all possible additive combinations, and the summands
of 1 all possible multiplicative combinations, of the n given terms and their
negatives, each combination comprising n different terms and never containing
a term and its negative at the same time.
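
The generation of these 2^n combinations is easily mechanized. The following Python
sketch, an illustrative addition (the function names are my own, not Couturat's), lists
the summands of 1 and the factors of 0 for any set of letters, using a trailing
apostrophe for negation as in the text:

    # Development of 1 (minima) and of 0 (maxima) for n letters.
    from itertools import product

    def constituents(letters):
        """Summands of the development of 1 (the minima of discourse)."""
        return ["".join(x if bit else x + "'" for x, bit in zip(letters, signs))
                for signs in product((1, 0), repeat=len(letters))]

    def maxima(letters):
        """Factors of the development of 0 (the maxima of discourse)."""
        return ["(" + " + ".join(x if bit else x + "'" for x, bit in zip(letters, signs)) + ")"
                for signs in product((1, 0), repeat=len(letters))]

    print(" + ".join(constituents("ab")))   # ab + ab' + a'b + a'b'
    print("".join(maxima("ab")))            # (a + b)(a + b')(a' + b)(a' + b')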

The summands of the development of 1 are what Boole called the con-
stituents (of the universe of discourse). We may equally well call them, with
Poretsky,^ the minima of discourse, because they are the smallest classes
into which the universe of discourse is divided with reference to the n given
terms. In the same way we shall call the factors of the development of 0 the
maxima of discourse, because they are the largest classes that can be determined
in the universe of discourse by means of the n given terms.

0.22 Properties of the Constituents 

The constituents or minima of discourse possess two properties characteristic 
of contradictory terms (of which they are a generalization); they are mutually 
exclusive, i.e., the product of any two of them is 0; and they are collectively 
exhaustive, i.e., the sum of all "exhausts" the universe of discourse. The latter 



^See the Bibliography, page xiv. 



23 



property is evident from the preceding formulas. The other results from the 
fact that any two constituents differ at least in the "sign" of one of the terms 
which serve as factors, i.e., one contains this term as a factor and the other the 
negative of this term. This is enough, as we know, to ensure that their product 
be null. 

The maxima of discourse possess analogous and correlative properties; their 
combined product is equal to 0, as we have seen; and the sum of any two of 
them is equal to 1, inasmuch as they differ in the sign of at least one of the 
terms which enter into them as summands. 

For the sake of simplicity, we shall confine ourselves, with Boole and 
Schroder, to the study of the constituents or minima of discourse, i.e., the 
developments of 1. We shall leave to the reader the task of finding and demon- 
strating the corresponding theorems which concern the maxima of discourse or 
the developments of 0. 

0.23 Logical Functions 

We shall call a logical function any term whose expression is complex, that is, 
formed of letters which denote simple terms together with the signs of the three 
logical operations. ^^ 

A logical function may be considered as a function of all the terms of dis- 
course, or only of some of them which may be regarded as unknown or variable 
and which in this case are denoted by the letters x,y,z. We shall represent a 
function of the variables or unknown quantities, x,y,z, by the symbol f{x,y,z) 
or by other analogous symbols, as in ordinary algebra. Once for all, a logical 
function may be considered as a function of any term of the universe of discourse, 
whether or not the term appears in the explicit expression of the function. 

0.24 The Law of Development 

This being established, we shall proceed to develop a function f(x) with respect
to x. Suppose the problem solved, and let

ax + bx'

be the development sought. By hypothesis we have the equality

f(x) = ax + bx'

for all possible values of x. Make x = 1 and consequently x' = 0. We have

f(1) = a.



^ In this algebra the logical function is analogous to the integral function of ordinary algebra,
except that it has no powers beyond the first.



24 



Then put x = 0 and x' = 1; we have

f(0) = b.

These two equalities determine the coefficients a and b of the development,
which may then be written as follows:

f(x) = f(1)x + f(0)x',

in which f(1), f(0) represent the values of the function f(x) when we let x = 1
and x = 0 respectively.

Corollary. — Multiplying both members of the preceding equalities by x and x'
in turn, we have the following pairs of equalities (MacColl):

xf(x) = ax,    x'f(x) = bx',
xf(x) = xf(1),    x'f(x) = x'f(0).

Now let a function of two (or more) variables be developed with respect to
the two variables x and y. Developing f(x, y) first with respect to x, we find

f(x, y) = f(1, y)x + f(0, y)x'.

Then, developing the second member with respect to y, we have

f(x, y) = f(1, 1)xy + f(1, 0)xy' + f(0, 1)x'y + f(0, 0)x'y'.

This result is symmetrical with respect to the two variables, and therefore 
independent of the order in which the developments with respect to each of 
them are performed. 

In the same way we can obtain progressively the development of a function
of 3, 4, ... variables.

The general law of these developments is as follows: 

To develop a function with respect to n variables, form all the constituents
of these n variables and multiply each of them by the value assumed by the
function when each of the simple factors of the corresponding constituent is
equated to 1 (which is the same thing as equating to 0 those factors whose
negatives appear in the constituent).

When a variable with respect to which the development is made, y for in-
stance, does not appear explicitly in the function (f(x) for instance), we have,
according to the general law,

f(x) = f(x)y + f(x)y'.

In particular, if a is a constant term, independent of the variables with re- 
spect to which the development is made, we have for its successive developments, 

a = ax + ax',

a = axy + axy' + ax'y + ax'y',

a = axyz + axyz' + axy'z + axy'z' + ax'yz + ax'yz' + ax'y'z
  + ax'y'z',



25 



and so on. Moreover these formulas may be directly obtained by multiplying 
by a both members of each development of 1. 
Cor. 1. We have the equivalence 

(a + x')(b + x) = ax + bx' + ab = ax + bx'.

For, if we develop with respect to x, we have

ax + bx' + abx + abx' = (a + ab)x + (b + ab)x' = ax + bx'.

Cor. 2. We have the equivalence 

ax + bx' + c = (a + c)x + (b + c)x'.

For if we develop the term c with respect to x, we find

ax + bx' + cx + cx' = (a + c)x + (b + c)x'.

Thus, when a function contains terms (whose sum is represented by c) inde-
pendent of x, we can always reduce it to the developed form ax + bx' by adding c
to the coefficients of both x and x'. Therefore we can always consider a function
to be reduced to this form.

In practice, we perform the development by multiplying each term which 
does not contain a certain letter (x for instance) by (x + x') and by developing 
the product according to the distributive law. Then, when desired, like terms 
may be reduced to a single term. 
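
The law of development just described translates directly into a computation. The
short Python sketch below, added only as an illustration, checks that an arbitrary
two-variable function (chosen by me, not taken from the text) coincides with its
development f(1,1)xy + f(1,0)xy' + f(0,1)x'y + f(0,0)x'y':

    # Law of development, verified pointwise over 0/1 values.
    from itertools import product

    def f(x, y):                      # an arbitrary logical function
        return (x & (1 - y)) | ((1 - x) & y) | (x & y)

    def developed(x, y):
        return (f(1, 1) & x & y) | (f(1, 0) & x & (1 - y)) \
             | (f(0, 1) & (1 - x) & y) | (f(0, 0) & (1 - x) & (1 - y))

    for x, y in product((0, 1), repeat=2):
        assert f(x, y) == developed(x, y)
    print("f(x, y) agrees with its development at every point")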

0.25 The Formulas of De Morgan 

In any development of 1, the sum of a certain number of constituents is the 
negative of the sum of all the others. 

For, by hypothesis, the sum of these two sums is equal to 1, and their product 
is equal to 0, since the product of two different constituents is zero. 

From this proposition may be deduced the formulas of De Morgan: 

(a + b)' = a'b',    (ab)' = a' + b'.

Demonstration. — Let us develop the sum (a + b):

a + b = ab + ab' + ab + a'b = ab + ab' + a'b.

Now the development of 1 with respect to a and b contains the three terms
of this development plus a fourth term a'b'. This fourth term, therefore, is the
negative of the sum of the other three.

We can demonstrate the second formula either by a correlative argument
(i.e., considering the development of 0 by factors) or by observing that the
development of (a' + b'),

a'b + ab' + a'b',



^ These formulas express the method of classification by dichotomy. 

26 



differs from the development of 1 only by the summand ab.

How De Morgan's formulas may be generalized is now clear; for instance
we have, for a sum of three terms,

a + b + c = abc + abc' + ab'c + ab'c' + a'bc + a'bc' + a'b'c.

This development differs from the development of 1 only by the term a'b'c'.
Thus we can demonstrate the formulas

(a + b + c)' = a'b'c',    (abc)' = a' + b' + c',

which are generalizations of De Morgan's formulas.

The formulas of De Morgan are in very frequent use in calculation, 
for they make it possible to perform the negation of a sum or a product by 
transferring the negation to the simple terms: the negative of a sum is the 
product of the negatives of its summands; the negative of a product is the sum 
of the negatives of its factors. 
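
These formulas are easily confirmed in the two-element interpretation; the Python
sketch below, an added illustration, checks the two-term and three-term forms by
exhausting all cases:

    # De Morgan's formulas and their three-term generalization.
    from itertools import product

    neg = lambda x: 1 - x
    for a, b, c in product((0, 1), repeat=3):
        assert neg(a | b) == neg(a) & neg(b)
        assert neg(a & b) == neg(a) | neg(b)
        assert neg(a | b | c) == neg(a) & neg(b) & neg(c)
        assert neg(a & b & c) == neg(a) | neg(b) | neg(c)
    print("De Morgan's formulas verified")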

These formulas, again, make it possible to pass from a primary proposition 
to its correlative proposition by duality, and to demonstrate their equivalence. 
For this purpose it is only necessary to apply the law of contraposition to the 
given proposition, and then to perform the negation of both members. 

Example: 

ab + ac + bc = (a + b)(a + c)(b + c).

Demonstration:

(ab + ac + bc)' = [(a + b)(a + c)(b + c)]',
(ab)'(ac)'(bc)' = (a + b)' + (a + c)' + (b + c)',
(a' + b')(a' + c')(b' + c') = a'b' + a'c' + b'c'.

Since the simple terms a, b, c may be any terms, we may suppress the sign
of negation by which they are affected, and obtain the given formula.

Thus De Morgan's formulas furnish a means by which to find or to 
demonstrate the formula correlative to another; but, as we have said above 
(§0.14), they are not the basis of this correlation. 

0.26 Disjunctive Sums 

By means of development we can transform any sum into a disjunctive sum,
i.e., one in which each product of its summands taken two by two is zero. For,
let (a + b + c) be a sum of which we do not know whether or not the three terms
are disjunctive; let us assume that they are not. Developing, we have:

a + b + c = abc + abc' + ab'c + ab'c' + a'bc + a'bc' + a'b'c.

Now, the first four terms of this development constitute the development of
a with respect to b and c; the two following are the development of a'b with
respect to c. The above sum, therefore, reduces to

a + a'b + a'b'c,



27 



and the terms of this sum are disjunctive like those of the preceding, as may be
verified. This process is general and, moreover, obvious. To enumerate without
repetition all the a's, all the b's, and all the c's, etc., it is clearly sufficient to
enumerate all the a's, then all the b's which are not a's, and then all the c's
which are neither a's nor b's, and so on.

It will be noted that the expression thus obtained is not symmetrical, since
it depends on the order assigned to the original summands. Thus the same sum
may be written:

b + ab' + a'b'c,    c + ac' + a'bc',    ....

Conversely, in order to simplify the expression of a sum, we may suppress as
factors in each of the summands (arranged in any suitable order) the negatives
of each preceding summand. Thus, we may find a symmetrical expression for a
sum. For instance,

a + a'b = b + ab' = a + b.

0.27 Properties of Developed Functions 

The practical utility of the process of development in the algebra of logic lies in 
the fact that developed functions possess the following property: 

The sum or the product of two functions developed with respect to the 
same letters is obtained simply by finding the sum or the product of their 
coefficients. The negative of a developed function is obtained simply by replacing 
the coefficients of its development by their negatives. 

We shall now demonstrate these propositions in the case of two variables; 
this demonstration will of course be of universal application. 

Let the developed functions be

a1xy + b1xy' + c1x'y + d1x'y',
a2xy + b2xy' + c2x'y + d2x'y'.

1. I say that their sum is

(a1 + a2)xy + (b1 + b2)xy' + (c1 + c2)x'y + (d1 + d2)x'y'.

This result is derived directly from the distributive law.

2. I say that their product is

a1a2xy + b1b2xy' + c1c2x'y + d1d2x'y',

for if we find their product according to the general rule (by applying the dis- 
tributive law), the products of two terms of different constituents will be zero; 
therefore there will remain only the products of the terms of the same con- 
stituent, and, as (by the law of tautology) the product of this constituent mul- 
tiplied by itself is equal to itself, it is only necessary to obtain the product of 
the coefficients. 



28 



3. Finally, I say that the negative of

axy + bxy' + cx'y + dx'y'

is

a'xy + b'xy' + c'x'y + d'x'y'.

In order to verify this statement, it is sufficient to prove that the product of
these two functions is zero and that their sum is equal to 1. Thus

(axy + bxy' + cx'y + dx'y')(a'xy + b'xy' + c'x'y + d'x'y')
  = (aa'xy + bb'xy' + cc'x'y + dd'x'y')
  = (0·xy + 0·xy' + 0·x'y + 0·x'y') = 0;

(axy + bxy' + cx'y + dx'y') + (a'xy + b'xy' + c'x'y + d'x'y')
  = (a + a')xy + (b + b')xy' + (c + c')x'y + (d + d')x'y'
  = 1·xy + 1·xy' + 1·x'y + 1·x'y' = 1.
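
The three coefficient-wise rules may also be verified by exhaustion. The Python
sketch below, an added illustration (the 4-tuple representation of a developed
function is mine), checks sums, products and negatives for every pair of developed
functions of x, y:

    # Developed functions of x, y represented by their coefficients (a, b, c, d).
    from itertools import product

    def value(coeffs, x, y):
        a, b, c, d = coeffs
        return (a & x & y) | (b & x & (1 - y)) | (c & (1 - x) & y) | (d & (1 - x) & (1 - y))

    def check(f, g):
        s = tuple(fi | gi for fi, gi in zip(f, g))       # sum of coefficients
        p = tuple(fi & gi for fi, gi in zip(f, g))       # product of coefficients
        n = tuple(1 - fi for fi in f)                    # negatives of coefficients
        for x, y in product((0, 1), repeat=2):
            assert value(s, x, y) == value(f, x, y) | value(g, x, y)
            assert value(p, x, y) == value(f, x, y) & value(g, x, y)
            assert value(n, x, y) == 1 - value(f, x, y)

    for f in product((0, 1), repeat=4):
        for g in product((0, 1), repeat=4):
            check(f, g)
    print("coefficient-wise rules agree with pointwise operations")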

Special Case. — We have the equalities:

(ab + a'b')' = ab' + a'b,
(ab' + a'b)' = ab + a'b',

which may easily be demonstrated in many ways; for instance, by observing that
the two sums (ab + a'b') and (ab' + a'b) combined form the development of 1;
or again by performing the negation (ab + a'b')' by means of De Morgan's
formulas (§0.25).

From these equalities we can deduce the following equality:

(ab' + a'b = 0) = (ab + a'b' = 1),

which result might also have been obtained in another way by observing that (§0.18)

(a = b) = (ab' + a'b = 0) = [(a + b')(a' + b) = 1],

and by performing the multiplication indicated in the last equality.

Theorem. — We have the following equivalences:^

(a = bc' + b'c) = (b = ac' + a'c) = (c = ab' + a'b).

For, reducing the first of these equalities so that its second member will be 0,

a(bc + b'c') + a'(bc' + b'c) = 0,
abc + ab'c' + a'bc' + a'b'c = 0.

Now it is clear that the first member of this equality is symmetrical with
respect to the three terms a, b, c. We may therefore conclude that, if the two



W. Stanley Jevons, Pure Logic, 1864, p. 61. 

29 



other equalities which differ from the first only in the permutation of these three 
letters be similarly transformed, the same result will be obtained, which proves 
the proposed equivalence. 

Corollary. — If we have at the same time the three inclusions:

a < bc' + b'c,    b < ac' + a'c,    c < ab' + a'b,

we have also the converse inclusions, and therefore the corresponding equalities

a = bc' + b'c,    b = ac' + a'c,    c = ab' + a'b.

For if we transform the given inclusions into equalities, we shall have 

abc + ab'c' = 0, abc + a'bc' = 0, abc + a'b'c = 0, 

whence, by combining them into a single equality, 

abc + ab'c' + a'bc' + a'b'c = 0. 

Now this equality, as we see, is equivalent to any one of the three equalities 
to be demonstrated. 

0.28 The Limits of a Function 

A term x is said to be comprised between two given terms, a and b, when it
contains one and is contained in the other; that is to say, if we have, for instance,

a < x,    x < b,

which we may write more briefly as

a < x < b.

Such a formula is called a double inclusion. When the term x is variable and
always comprised between two constant terms a and b, these terms are called
the limits of x. The first (contained in x) is called the inferior limit; the second
(which contains x) is called the superior limit.

Theorem. — A developed function is comprised between the sum and the 
product of its coefficients. 

We shall first demonstrate this theorem for a function of one variable,

ax + bx'.

We have, on the one hand,

(ab < a) < (abx < ax),
(ab < b) < (abx' < bx').

Therefore

abx + abx' < ax + bx',

or

ab < ax + bx'.

On the other hand,

(a < a + b) < [ax < (a + b)x],
(b < a + b) < [bx' < (a + b)x'].

Therefore

ax + bx' < (a + b)(x + x'),

or

ax + bx' < a + b.

To sum up,

ab < ax + bx' < a + b.

Q. E. D.
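
A finite check of this double inclusion, added here only as an illustration in the
two-element interpretation, may be written as follows:

    # The developed function ax + bx' lies between ab and a + b.
    from itertools import product

    for a, b, x in product((0, 1), repeat=3):
        f = (a & x) | (b & (1 - x))
        assert (a & b) <= f <= (a | b)
    print("ab < ax + bx' < a + b holds for all 0/1 values")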

Remark 1. — This double inclusion may be expressed in the following form:^

f(b) < f(x) < f(a).

For

f(a) = aa + ba' = a + b,
f(b) = ab + bb' = ab.

But this form, pertaining as it does to an equation of one unknown quantity, 
does not appear susceptible of generalization, whereas the other one does so 
appear, for it is readily seen that the former demonstration is of general appli- 
cation. Whatever the number of variables n (and consequently the number of 
constituents 2^n) it may be demonstrated in exactly the same manner that the
function contains the product of its coefficients and is contained in their sum. 
Hence the theorem is of general application. 

Remark 2. — This theorem assumes that all the constituents appear in the 
development, consequently those that are wanting must really be present with 
the coefficient 0. In this case, the product of all the coefficients is evidently 0. 
Likewise when one coefficient has the value 1, the sum of all the coefficients is 
equal to 1. 

It will be shown later (§0.38) that a function may reach both its limits, and 
consequently that they are its extreme values. As yet, however, we know only 
that it is always comprised between them. 



^ Eugen Müller, Aus der Algebra der Logik, Art. II.



31 



0.29 Formula of Poretsky.^^ 

We have the equivalence 

(x = ax + bx') = (b < x < a).

Demonstration. — First multiplying by x both members of the given equality
[which is the first member of the entire secondary equality], we have

x = ax,

which, as we know, is equivalent to the inclusion

x < a.

Now multiplying both members by x', we have

0 = bx',

which, as we know, is equivalent to the inclusion

b < x.

Summing up, we have

(x = ax + bx') < (b < x < a).

Conversely,

(b < x < a) < (x = ax + bx').

For

(x < a) = (x = ax),
(b < x) = (0 = bx').

Adding these two equalities member to member [the second members of the
two larger equalities],

(x = ax)(0 = bx') < (x = ax + bx').

Therefore

(b < x < a) < (x = ax + bx')

and thus the equivalence is proved. 
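
Poretsky's formula may likewise be confirmed by exhaustion in the two-element
interpretation; the following Python sketch is an added illustration only:

    # Poretsky's formula: (x = ax + bx') holds exactly when b < x < a.
    from itertools import product

    for a, b, x in product((0, 1), repeat=3):
        equality = (x == ((a & x) | (b & (1 - x))))
        inclusion = (b <= x <= a)
        assert equality == inclusion
    print("Poretsky's formula verified")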



^ Poretsky, "Sur les méthodes pour résoudre les égalités logiques" (Bull. de la Soc.
math. de Kazan, Vol. II, 1884).



32 



0.30 Schroder's Theorem. ^^ 

The equality

ax + bx' = 0

signifies that x lies between a' and b.

Demonstration:

(ax + bx' = 0) = (ax = 0)(bx' = 0),
(ax = 0) = (x < a'),
(bx' = 0) = (b < x).

Hence

(ax + bx' = 0) = (b < x < a').

Comparing this theorem with the formula of Poretsky, we obtain at once
the equality

(ax + bx' = 0) = (x = a'x + bx'),

which may be directly proved by reducing the formula of Poretsky to an
equality whose second member is 0, thus:

(x = a'x + bx') = [x(ax + b'x') + x'(a'x + bx') = 0] = (ax + bx' = 0).

If we consider the given equality as an equation in which x is the unknown
quantity, Poretsky's formula will be its solution.

From the double inclusion

b < x < a'

we conclude, by the principle of the syllogism, that

b < a'.

This is a consequence of the given equality and is independent of the un-
known quantity x. It is called the resultant of the elimination of x in the given
equation. It is equivalent to the equality

ab = 0.

Therefore we have the implication

(ax + bx' = 0) < (ab = 0).

Taking this consequence into consideration, the solution may be simplified,
for

(ab = 0) = (b = a'b).



^ Schroder, Operationskreis des Logikkalküls (1877), Theorem 20.



33 



Therefore

x = a'x + bx' = a'x + a'bx'
  = a'bx + a'b'x + a'bx' = a'b + a'b'x.

This form of the solution conforms most closely to common sense: since x
contains b and is contained in a', it is natural that x should be equal to the sum
of b and a part of a' (that is to say, the part common to a' and x). The solution
is generally indeterminate (between the limits a' and b); it is determinate only
when the limits are equal,

a' = b,

for then

x = b + a'x = b + bx = b = a'.

Then the equation assumes the form

(ax + a'x' = 0) = (x = a')

and is equivalent to the double inclusion

(a' < x < a') = (x = a').

0.31 The Resultant of Elimination 

When ab is not zero, the equation is impossible (always false), because it has a
false consequence. It is for this reason that Schroder considers the resultant
of the elimination as a condition of the equation. But we must not be misled
by this equivocal word. The resultant of the elimination of x is not a cause
of the equation, it is a consequence of it; it is not a sufficient but a necessary
condition.

The same conclusion may be reached by observing that ab is the inferior limit
of the function ax + bx', and that consequently the function cannot vanish unless
this limit is 0:

(ab < ax + bx')(ax + bx' = 0) < (ab = 0).

We can express the resultant of elimination in other equivalent forms; for
instance, if we write the equation in the form

(a + x')(b + x) = 0,

we observe that the resultant

ab = 0

is obtained simply by dropping the unknown quantity (by suppressing the
terms x and x'). Again the equation may be written:

a'x + b'x' = 1



34 



and the resultant of elimination: 

a' + b' = 1.

Here again it is obtained simply by dropping the unknown quantity.^^ 
Remark. — If in the equation

ax + bx' = 0

we substitute for the unknown quantity x its value derived from the equations

x = a'x + bx',    x' = ax + b'x',

we find

(abx + abx' = 0) = (ab = 0),

that is to say, the resultant of the elimination of x which, as we have seen, is
a consequence of the equation itself. Thus we are assured that the value of x
verifies this equation. Therefore we can, with Voigt, define the solution of an
equation as that value which, when substituted for x in the equation, reduces
it to the resultant of the elimination of x.

Special Case. — When the equation contains a term independent of x, i.e.,
when it is of the form

ax + bx' + c = 0,

it is equivalent to

(a + c)x + (b + c)x' = 0,

and the resultant of elimination is

(a + c)(b + c) = ab + c = 0,

whence we derive this practical rule: To obtain the resultant of the elimination
of x in this case, it is sufficient to equate to zero the product of the coefficients
of x and x', and add to them the term independent of x.
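
The rule may be checked by exhaustion: the resultant ab + c = 0 holds exactly when
some value of x satisfies the equation. The Python sketch below is an added
illustration of this fact in the two-element interpretation:

    # Resultant of elimination for ax + bx' + c = 0.
    from itertools import product

    for a, b, c in product((0, 1), repeat=3):
        solvable = any(((a & x) | (b & (1 - x)) | c) == 0 for x in (0, 1))
        assert solvable == (((a & b) | c) == 0)
    print("resultant ab + c = 0 characterizes solvability in x")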



^ This is the method of elimination of Mrs. Ladd-Franklin and Mr. Mitchell, but
this rule is deceptive in its apparent simplicity, for it cannot be applied to the same equation
when put in either of the forms

ax + bx' = 0,    (a' + x')(b' + x) = 1.

Now, on the other hand, as we shall see (§0.54), for inequalities it may be applied to the
forms

ax + bx' ≠ 0,    (a' + x')(b' + x) ≠ 1,

and not to the equivalent forms

(a + x')(b + x) ≠ 0,    a'x + b'x' ≠ 1.

Consequently, it has not the mnemonic property attributed to it, for, to use it correctly, it
is necessary to recall to which forms it is applicable.



35 



0.32 The Case of Indetermination 

Just as the resultant

ab = 0

corresponds to the case when the equation is possible, so the equality

a + b = 0

corresponds to the case of absolute indetermination. For in this case the equation,
both of whose coefficients are zero (a = 0), (b = 0), is reduced to an identity
(0 = 0), and therefore is "identically" verified, whatever the value of x may be;
it does not determine the value of x at all, since the double inclusion

b < x < a'

then becomes

0 < x < 1,

which does not limit in any way the variability of x. In this case we say that
the equation is indeterminate.

We shall reach the same conclusion if we observe that (a + b) is the superior
limit of the function ax + bx' and that, if this limit is 0, the function is necessarily
zero for all values of x,

(ax + bx' < a + b)(a + b = 0) < (ax + bx' = 0).

Special Case. — When the equation contains a term independent of x,

ax + bx' + c = 0,

the condition of absolute indetermination takes the form

a + b + c = 0.

For

ax + bx' + c = (a + c)x + (b + c)x',
(a + c) + (b + c) = a + b + c = 0.

0.33 Sums and Products of Functions 

It is desirable at this point to introduce a notation borrowed from mathematics,
which is very useful in the algebra of logic. Let f(x) be an expression containing
one variable; suppose that the class of all the possible values of x is determined;
then the class of all the values which the function f(x) can assume in conse-
quence will also be determined. Their sum will be represented by Σ_x f(x) and
their product by Π_x f(x). This is a new notation and not a new notion, for it is
merely the idea of sum and product applied to the values of a function.



36 



When the symbols Σ and Π are applied to propositions, they assume an
interesting significance:

Π_x [f(x) = 0]

means that f(x) = 0 is true for every value of x; and

Σ_x [f(x) = 0]

that f(x) = 0 is true for some value of x. For, in order that a product may
be equal to 1 (i.e., be true), all its factors must be equal to 1 (i.e., be true);
but, in order that a sum may be equal to 1 (i.e., be true), it is sufficient that
only one of its summands be equal to 1 (i.e., be true). Thus we have a means
of expressing universal and particular propositions when they are applied to
variables, especially those in the form: "For every value of x such and such a
proposition is true", and "For some value of x, such and such a proposition is
true", etc.

For instance, the equivalence

(a = b) = (ac = bc)(a + c = b + c)

is somewhat paradoxical because the second member contains a term (c) which
does not appear in the first. This equivalence is independent of c, so that we
can write it as follows, considering c as a variable x:

Π_x [(a = b) = (ax = bx)(a + x = b + x)],

or, the first member being independent of x,

(a = b) = Π_x [(ax = bx)(a + x = b + x)].

In general, when a proposition contains a variable term, great care is neces- 
sary to distinguish the case in which it is true for every value of the variable, 
from the case in which it is true only for some value of the variable. ^^ This is 
the purpose that the symbols Yl ^^^ XI serve. 

Thus when we say for instance that the equation

ax + bx' = 0

is possible, we are stating that it can be verified by some value of x; that is to
say,

Σ_x (ax + bx' = 0),



^^This is the same as the distinction made in mathematics between identities and equations, 
except that an equation may not be verified by any value of the variable. 



37 



and, since the necessary and sufficient condition for this is that the resultant
(ab = 0) is true, we must write

Σ_x (ax + bx' = 0) = (ab = 0),

although we have only the implication

(ax + bx' = 0) < (ab = 0).

On the other hand, the necessary and sufficient condition for the equation
to be verified by every value of x is that

a + b = 0.

Demonstration. — 1. The condition is sufficient, for if

(a + b = 0) = (a = 0)(b = 0),

we obviously have

ax + bx' = 0

whatever the value of x; that is to say,

Π_x (ax + bx' = 0).

2. The condition is necessary, for if

Π_x (ax + bx' = 0),

the equation is true, in particular, for the value x = a; hence

aa + a'b = a + a'b = a + b = 0.

Therefore the equivalence

Π_x (ax + bx' = 0) = (a + b = 0)

is proved.^ In this instance, the equation reduces to an identity: its first
member is "identically" null.



^ Eugen Müller, op. cit.



38 



0.34 The Expression of an Inclusion by Means of 
an Indeterminate 

The foregoing notation is indispensable in almost every case where variables or 
indeterminates occur in one member of an equivalence, which are not present in 
the other. For instance, certain authors predicate the two following equivalences 

{a < b) = {a = hu) = {a-\-v = b)^ 

in which u, v are two "indeterminates". Now, each of the two equalities has the 
inclusion (a < b) as its consequence, as we may assure ourselves by eliminating 
u and V respectively from the following equalities: 

1. [a{b' + u') + a%u = 0] = [{ab' + a%)u + au' = 0]. 

Resultant: 

[{ab' + a'b)a = 0] = {ab' = 0) = {a<b). 

2. [(a + v)b' + a%v = 0] = [b'v + {ab' + a^y = 0]. 

Resultant: 

[b\ab' + a%) = 0] = {ab' = 0) + (a < b). 

But we cannot say, conversely, that the inclusion implies the two equalities 
for any values of u and v; and, in fact, we restrict ourselves to the proof that 
this implication holds for some value of u and v, namely for the particular values 

u = a, b = v; 

for we have 

(a = ab) = {a < b) = {a -\- b = b). 

But we cannot conclude, from the fact that the implication (and therefore 
also the equivalence) is true for some value of the indeterminates, that it is true 
for all; in particular, it is not true for the values 

IX = 1, V = 0, 

for then (a = bu) and {a-\-v = b) become {a = b), which obviously asserts more 
than the given inclusion (a < b).^^ 

Therefore we can write only the equivalences 

{a <b) = ^(a = bu) = ^(a -^v = b), 



^ Likewise if we make

u = 0,    v = 1,

we obtain the equalities

(a = 0),    (b = 1),

which assert still more than the given inclusion.



39 



but the three expressions

(a < b),    (a = bu),    (a + v = b)

are not equivalent.^

0.35 The Expression of a Double Inclusion by 
Means of an Indeterminate 

Theorem. — The double inclusion

b < x < a

is equivalent to the equality x = au + bu', together with the condition (b < a), u
being a term absolutely indeterminate.

Demonstration. — Let us develop the equality in question:

x(a'u + b'u') + x'(au + bu') = 0,
(a'x + ax')u + (b'x + bx')u' = 0.

Eliminating u from it,

a'b'x + abx' = 0.

This equality is equivalent to the double inclusion

ab < x < a + b.



^ According to the remark in the preceding note, it is clear that we have

Π_u (a = bu) = (a = b = 0),    Π_v (a + v = b) = (a = b = 1),

since the equalities affected by the sign Π may be likewise verified by the values

u = 0, u = 1 and v = 0, v = 1.

If we wish to know within what limits the indeterminates u and v are variable, it is sufficient
to solve with respect to them the equations

(a < b) = (a = bu),    (a < b) = (a + v = b),

or

ab' = a'bu + ab' + au',    ab' = ab' + b'v + a'bv',

or

a'bu + abu' = 0,    a'b'v + a'bv' = 0,

from which (by a formula to be demonstrated later on) we derive the solutions

u = ab + w(a + b'),    v = a'b + w(a + b),

or simply

u = ab + wb',    v = a'b + wa,

w being absolutely indeterminate. We would arrive at these solutions simply by asking: By
what term must we multiply b in order to obtain a? By a term which contains ab plus any
part of b'. What term must we add to a in order to obtain b? A term which contains a'b plus
any part of a. In short, u can vary between ab and a + b', v between a'b and a + b.



40 



But, by hypothesis, we have

(b < a) = (ab = b) = (a + b = a).

The double inclusion is therefore reduced to

b < x < a.

So, whatever the value of u, the equality under consideration involves the
double inclusion. Conversely, the double inclusion involves the equality, what-
ever the value of x may be, for it is equivalent to

a'x + bx' = 0,

and then the equality is simplified and reduced to

ax'u + b'xu' = 0.

We can always derive from this the value of u in terms of x, for the resultant 
{ab'xx' = 0) is identically verified. The solution is given by the double inclusion 

b'x < u < a' -\- X. 

Remark. — There is no contradiction between this result, which shows that 
the value of u lies between certain limits, and the previous assertion that u is 
absolutely indeterminate; for the latter assumes that x is any value that will 
verify the double inclusion, while when we evaluate u in terms of x the value of 
X is supposed to be determinate, and it is with respect to this particular value 
of X that the value of u is subjected to limits. ^^ 

In order that the value of u should be completely determined, it is necessary 
and sufficient that we should have 

b'x = a' + x,

that is to say,

b'x·ax' + (b + x')(a' + x) = 0

or

bx + a'x' = 0.

Now, by hypothesis, we already have

a'x + bx' = 0.

If we combine these two equalities, we find

(a' + b = 0) = (a = 1)(b = 0).



^■•^ Moreover, if we substitute for x its inferior limit b in the inferior limit of u, this limit 
becomes bb' = 0; and, if we substitute for x its superior limit a in the superior limit of u, this 
limit becomes a + a' = 1. 

41 



This is the case when the value of x is absolutely indeterminate, since it lies
between the limits 0 and 1.

In this case we have

u = b'x = a' + x = x.

In order that the value of u be absolutely indeterminate, it is necessary and
sufficient that we have at the same time

b'x = 0,    a' + x = 1,

or

b'x + ax' = 0,

that is

a < x < b.

Now we already have, by hypothesis,

b < x < a;

so we may infer

b = x = a.

This is the case in which the value of x is completely determinate. 

0.36 Solution of an Equation Involving One Un- 
known Quantity 

The solution of the equation

ax + bx' = 0

may be expressed in the form

x = a'u + bu',

u being an indeterminate, on condition that the resultant of the equation be
verified; for we can prove that this equality implies the equality

ab'x + a'bx' = 0,

which is equivalent to the double inclusion

a'b < x < a' + b.

Now, by hypothesis, we have

(ab = 0) = (a'b = b) = (a' + b = a').

Therefore, in this hypothesis, the proposed solution implies the double in-
clusion

b < x < a',



42 



which is equivalent to the given equation. 

Remark. — In the same hypothesis in which we have

(ab = 0) = (b < a'),

we can always put this solution in the simpler but less symmetrical forms

x = b + a'u,    x = a'(b + u).

For

1. We have identically

b = bu + bu'.

Now

(b < a') < (bu < a'u).

Therefore

(x = bu' + a'u) = (x = b + a'u).

2. Let us now demonstrate the formula

x = a'(b + u).

Now

a'b = b.

Therefore

x = a'(b + u) = a'b + a'u = b + a'u,

which may be reduced to the preceding form.

Again, we can put the same solution in the form

x = a'b + u(ab + a'b'),

which follows from the equation put in the form

ab'x + a'bx' = 0,

if we note that

a' + b = ab + a'b + a'b'

and that

ua'b < a'b.

This last form is needlessly complicated, since, by hypothesis,

ab = 0.

Therefore there remains

x = a'b + ua'b',

which again is equivalent to

x = b + ua',

since

a'b = b and a' = a'b + a'b'.

Whatever form we give to the solution, the parameter u in it is absolutely
indeterminate, i.e., it can receive all possible values, including 0 and 1; for when
u = 0 we have

x = b,

and when u = 1 we have

x = a',

and these are the two extreme values of x.

Now we understand that x is determinate in the particular case in which
a' = b, and that, on the other hand, it is absolutely indeterminate when

b = 0,    a' = 1 (or a = 0).

Summing up, the formula

x = a'u + bu'

replaces the "limited" variable x (lying between the limits a' and b) by the
"unlimited" variable u, which can receive all possible values, including 0 and 1.
Remark.^ — The formula of solution

x = a'x + bx'

is indeed equivalent to the given equation, but not so the formula of solution

x = a'u + bu'

as a function of the indeterminate u. For if we develop the latter we find

ab'x + a'bx' + ab(xu + x'u') + a'b'(xu' + x'u) = 0,

and if we compare it with the developed equation

ab + ab'x + a'bx' = 0,

we ascertain that it contains, besides the solution, the equality

ab(xu' + x'u) = 0,

and lacks of the same solution the equality

a'b'(xu' + x'u) = 0.

Moreover these two terms disappear if we make

u = x,



^ Poretsky, Sept lois, Chaps. XXXIII and XXXIV.

44 



and this reduces the formula to 

x = a'x + bx'.

From this remark, Poretsky concluded that, in general, the solution of
an equation is neither a consequence nor a cause of the equation. It is a cause
of it in the particular case in which

ab = 0,

and it is a consequence of it in the particular case in which

(a'b' = 0) = (a + b = 1).

But if ab is not equal to 0, the equation is unsolvable and the formula of
solution absurd, which fact explains the preceding paradox. If we have at the
same time

ab = 0 and a + b = 1,

the solution is both consequence and cause at the same time, that is to say, it
is equivalent to the equation. For when a' = b the equation is determinate and
has only the one solution

x = a' = b.

Thus, whenever an equation is solvable, its solution is one of its causes; and, 
in fact, the problem consists in finding a value of x which will verify it, i.e., 
which is a cause of it. 

To sum up, we have the following equivalence:

(ax + bx' = 0) = (ab = 0) Σ_u (x = a'u + bu'),

which includes the following implications:

(ax + bx' = 0) < (ab = 0),
(ax + bx' = 0) < Σ_u (x = a'u + bu'),
(ab = 0) Σ_u (x = a'u + bu') < (ax + bx' = 0).

0.37 Elimination of Several Unknown Quantities 

We shall now consider an equation involving several unknown quantities and 
suppose it reduced to the normal form, i.e., its first member developed with 
respect to the unknown quantities, and its second member zero. Let us first 
concern ourselves with the problem of elimination. We can eliminate the un- 
known quantities either one by one or all at once. 



45 



For instance, let

(5)    φ(x, y, z) = axyz + bxyz' + cxy'z + dxy'z'
                  + fx'yz + gx'yz' + hx'y'z + kx'y'z' = 0

be an equation involving three unknown quantities.

We can eliminate z by considering it as the only unknown quantity, and we
obtain as resultant

(axy + cxy' + fx'y + hx'y')(bxy + dxy' + gx'y + kx'y') = 0

or

(6)    abxy + cdxy' + fgx'y + hkx'y' = 0.

If equation (5) is possible, equation (6) is possible as well; that is, it is verified
by some values of x and y. Accordingly we can eliminate y from the equation
by considering it as the only unknown quantity, and we obtain as resultant

(abx + fgx')(cdx + hkx') = 0

or

(7)    abcdx + fghkx' = 0.

If equation (5) is possible, equation (7) is also possible; that is, it is verified
by some value of x. Hence we can eliminate x from it and obtain as the final
resultant

abcd · fghk = 0,

which is a consequence of (5), independent of the unknown quantities. It is 
evident, by the principle of symmetry, that the same resultant would be obtained 
if we were to eliminate the unknown quantities in a different order. Moreover 
this result might have been foreseen, for since we have (§0.28)

abcdfghk < φ(x, y, z),

φ(x, y, z) can vanish only if the product of its coefficients is zero:

[φ(x, y, z) = 0] < (abcdfghk = 0).

Hence we can eliminate all the unknown quantities at once by equating to 0
the product of the coefficients of the function developed with respect to all
these unknown quantities.

We can also eliminate some only of the unknown quantities at one time. To 
do this, it is sufficient to develop the first member with respect to these unknown 
quantities and to equate the product of the coefficients of this development to 0. 
This product will generally contain the other unknown quantities. Thus the 
resultant of the elimination of z alone, as we have seen, is 

abxy + cdxy' + fgx'y + hkx'y' = 0
46 



and the resultant of the elimination of y and z is

abcdx + fghkx' = 0.

These partial resultants can be obtained by means of the following practical 
rule: Form the constituents relating to the unknown quantities to be retained; 
give each of them, for a coefficient, the product of the coefficients of the con- 
stituents of the general development of which it is a factor, and equate the sum 
to 0. 
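
The rule for partial elimination may be confirmed by exhaustion: for fixed values of
the retained unknowns, the product of the coefficients of the development with respect
to the eliminated unknown vanishes exactly when that unknown can be chosen so as to
satisfy the equation. The Python sketch below is an added illustration (the coefficient
names are relettered to avoid clashing with the function name):

    # Partial resultant of the elimination of z from a developed equation.
    from itertools import product

    def f(coeffs, x, y, z):
        a, b, c, d, g1, g2, h, k = coeffs     # the eight coefficients a ... k
        x_, y_, z_ = 1 - x, 1 - y, 1 - z
        return (a & x & y & z) | (b & x & y & z_) | (c & x & y_ & z) | (d & x & y_ & z_) \
             | (g1 & x_ & y & z) | (g2 & x_ & y & z_) | (h & x_ & y_ & z) | (k & x_ & y_ & z_)

    for coeffs in product((0, 1), repeat=8):
        for x, y in product((0, 1), repeat=2):
            resultant = f(coeffs, x, y, 1) & f(coeffs, x, y, 0)   # product of the z-coefficients
            assert (resultant == 0) == any(f(coeffs, x, y, z) == 0 for z in (0, 1))
    print("partial resultant of the elimination of z checked")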

0.38 Theorem Concerning the Values of a Func- 
tion 

All the values which can be assumed by a function of any number of variables
f(x, y, z, ...) are given by the formula

abc...k + u(a + b + c + ... + k),

in which u is absolutely indeterminate, and a, b, c, ..., k are the coefficients of
the development of f.

Demonstration. — It is sufficient to prove that in the equality

f(x, y, z, ...) = abc...k + u(a + b + c + ... + k)

u can assume all possible values, that is to say, that this equality, considered as
an equation in terms of u, is indeterminate.

In the first place, for the sake of greater homogeneity, we may put the second
member in the form

u'abc...k + u(a + b + c + ... + k),

for

abc...k = uabc...k + u'abc...k,

and

uabc...k < u(a + b + c + ... + k).

Reducing the second member to 0 (assuming there are only three variables
x, y, z),

(axyz + bxyz' + cxy'z + ... + kx'y'z')
  × [ua'b'c'...k' + u'(a' + b' + c' + ... + k')]
+ (a'xyz + b'xyz' + c'xy'z + ... + k'x'y'z')
  × [u(a + b + c + ... + k) + u'abc...k] = 0,

or more simply

u(a + b + c + ... + k)(a'xyz + b'xyz' + c'xy'z + ... + k'x'y'z')
+ u'(a' + b' + c' + ... + k')(axyz + bxyz' + ... + kx'y'z') = 0.



47 



If we eliminate all the variables x, y, z, but not the indeterminate u, we get
the resultant

u(a + b + c + ... + k)a'b'c'...k'
+ u'(a' + b' + c' + ... + k')abc...k = 0.

Now the two coefficients of u and u' are identically zero; it follows that u is
absolutely indeterminate, which was to be proved.^

From this theorem follows the very important consequence that a function 
of any number of variables can be changed into a function of a single variable 
without diminishing or altering its "variability". 

Corollary. — A function of any number of variables can become equal to either
of its limits.

For, if this function is expressed in the equivalent form

abc...k + u(a + b + c + ... + k),

it will be equal to its minimum (abc...k) when u = 0, and to its maximum
(a + b + c + ... + k) when u = 1.

Moreover we can verify this proposition on the primitive form of the function 
by giving suitable values to the variables. 

Thus a function can assume all values comprised between its two limits,
including the limits themselves. Consequently, it is absolutely indeterminate
when

abc...k = 0 and a + b + c + ... + k = 1

at the same time, or

abc...k = 0 = a'b'c'...k'.


0.39 Conditions of Impossibility and Indetermi- 
nation 

The preceding theorem enables us to find the conditions under which an equation
of several unknown quantities is impossible or indeterminate. Let f(x, y, z, ...)
be the first member, supposed to be developed, and a, b, c, ..., k its coefficients.
The necessary and sufficient condition for the equation to be possible is

abc...k = 0.

For, (1) if f vanishes for some value of the unknowns, its inferior limit
abc...k must be zero; (2) if abc...k is zero, f may become equal to it, and
therefore may vanish for certain values of the unknowns.

The necessary and sufficient condition for the equation to be indeterminate
(identically verified) is

a + b + c + ... + k = 0.



^^ Whitehead, Universal Algebra, Vol. I, §33 (4). 

48 



For, (1) if a + b + c + ... + k is zero, since it is the superior limit of f, this
function will always and necessarily be zero; (2) if f is zero for all values of the
unknowns, a + b + c + ... + k will be zero, for it is one of the values of f.

Summing up, therefore, we have the two equivalences

Σ[f(x, y, z, ...) = 0] = (abc...k = 0),
Π[f(x, y, z, ...) = 0] = (a + b + c + ... + k = 0).

The equality abc...k = 0 is, as we know, the resultant of the elimination of
all the unknowns; it is the consequence that can be derived from the equation
(assumed to be verified) independently of all the unknowns.

0.40 Solution of Equations Containing Several Un- 
known Quantities 

On the other hand, let us see how we can solve an equation with respect to its
various unknowns, and, to this end, we shall limit ourselves to the case of two
unknowns

axy + bxy' + cx'y + dx'y' = 0.

First solving with respect to x,

x = (a'y + b'y')x + (cy + dy')x'.

The resultant of the elimination of x is

acy + bdy' = 0.

If the given equation is true, this resultant is true.
Now it is an equation involving y only; solving it,

y = (a' + c')y + bdy'.

Had we eliminated y first and then x, we would have obtained the solution

y = (a'x + c'x')y + (bx + dx')y'

and the equation in x

abx + cdx' = 0,

whence the solution

x = (a' + b')x + cdx'.

We see that the solution of an equation involving two unknown quantities
is not symmetrical with respect to these unknowns; according to the order in
which they were eliminated, we have the solution

x = (a'y + b'y')x + (cy + dy')x',
y = (a' + c')y + bdy',

or the solution

x = (a' + b')x + cdx',
y = (a'x + c'x')y + (bx + dx')y'.

If we replace the terms x, y in the second members by indeterminates u, v,
one of the unknowns will depend on only one indeterminate, while the other
will depend on two. We shall have a symmetrical solution by combining the two
formulas,

x = (a' + b')u + cdu',
y = (a' + c')v + bdv',

but the two indeterminates u and v will no longer be independent of each other.
For if we bring these solutions into the given equation, it becomes

abcd + ab'c'uv + a'bd'uv' + a'cd'u'v + b'c'du'v' = 0

or, since, by hypothesis, the resultant abcd = 0 is verified,

ab'c'uv + a'bd'uv' + a'cd'u'v + b'c'du'v' = 0.

This is an "equation of condition" which the indeterminates u and v must 
verify; it can always be verified, since its resultant is identically true, 

ah'c' • a'hd' • a'cd' • h'c'd = aa' • hh' • cc' • dd' = 0, 

but it is not verified by any pair of values attributed to u and v. 

Some general symmetrical solutions, i.e., symmetrical solutions in which 
the unknowns are expressed in terms of several independent indeterminates, 
can however be found. This problem has been treated by Schroder ^^, by 
Whitehead ^^ and by Johnson. ^^ 

This investigation has only a purely technical interest; for, from the practical 
point of view, we either wish to eliminate one or more unknown quantities (or 
even all), or else we seek to solve the equation with respect to one particular 
unknown. In the first case, we develop the first member with respect to the 
unknowns to be eliminated and equate the product of its coefficients to 0. In 
the second case we develop with respect to the unknown that is to be extricated 
and apply the formula for the solution of the equation of one unknown quantity. 
If it is desired to have the solution in terms of some unknown quantities or in 
terms of the known only, the other unknowns (or all the unknowns) must first 
be eliminated before performing the solution. 



"^"^ Algebra der Logik, Vol. I, §24. 
"^^ Universal Algebra, Vol. I, §§35-37. 

^^"Sur la theorie des egalites logiques", Bibl. du Cong, intern, de Phil., Vol. Ill, p. 185 
(Paris, 1901). 



50 



0.41 The Problem of Boole 

According to Boole the most general problem of the algebra of logic is the 
following^^ : 

Given any equation (which is assumed to be possible)

f(x, y, z, ...) = 0,

and, on the other hand, the expression of a term t in terms of the variables
contained in the preceding equation,

t = φ(x, y, z, ...),

to determine the expression of t in terms of the constants contained in f and in φ.

Suppose f and φ developed with respect to the variables x, y, z, ..., and let
p1, p2, p3, ... be their constituents:

f(x, y, z, ...) = Ap1 + Bp2 + Cp3 + ...,
φ(x, y, z, ...) = ap1 + bp2 + cp3 + ....

Then reduce the equation which expresses t so that its second member will
be 0:

(tφ' + t'φ = 0) = [(a'p1 + b'p2 + c'p3 + ...)t
                 + (ap1 + bp2 + cp3 + ...)t' = 0].

Combining the two equations into a single equation and developing it with
respect to t:

[(A + a')p1 + (B + b')p2 + (C + c')p3 + ...]t
+ [(A + a)p1 + (B + b)p2 + (C + c)p3 + ...]t' = 0.

This is the equation which gives the desired expression of t. Eliminating t,
we obtain the resultant

Ap1 + Bp2 + Cp3 + ... = 0,

as we might expect. If, on the other hand, we wish to eliminate x, y, z, ... (i.e.,
the constituents p1, p2, p3, ...), we put the equation in the form

(A + a't + at')p1 + (B + b't + bt')p2 + (C + c't + ct')p3 + ... = 0,

and the resultant will be

(A + a't + at')(B + b't + bt')(C + c't + ct')... = 0,



^ Laws of Thought, Chap. IX, 



51 



an equation that contains only the unknown quantity t and the constants of
the problem (the coefficients of f and of φ). From this may be derived the
expression of t in terms of these constants. Developing the first member of this
equation,

(A + a')(B + b')(C + c')... × t + (A + a)(B + b)(C + c)... × t' = 0.

The solution is

t = (A + a)(B + b)(C + c)... + u(A'a + B'b + C'c + ...).

The resultant is verified by hypothesis, since it is

ABC... = 0,

which is the resultant of the given equation

f(x, y, z, ...) = 0.

We can see how this equation contributes to restrict the variability of t.
Since t was defined only by the function φ, it was determined by the double
inclusion

abc... < t < a + b + c + ....

Now that we take into account the condition f = 0, t is determined by the
double inclusion

(A + a)(B + b)(C + c)... < t < A'a + B'b + C'c + ....^

The inferior limit can only have increased and the superior limit diminished,
for

abc... < (A + a)(B + b)(C + c)...

and

A'a + B'b + C'c + ... < a + b + c + ....

The limits do not change if A = B = C = ... = 0, that is, if the equation
f = 0 is reduced to an identity, and this was evident a priori.

0.42 The Method of Poretsky 

The method of Boole and Schroder which we have heretofore discussed 
is clearly inspired by the example of ordinary algebra, and it is summed up in 
two processes analogous to those of algebra, namely the solution of equations 
with reference to unknown quantities and elimination of the unknowns. Of these 
processes the second is much the more important from a logical point of view, 
and Boole was even on the point of considering deduction as essentially con- 
sisting in the elimination of middle terms. This notion, which is too restricted,
was suggested by the example of the syllogism, in which the conclusion results
from the elimination of the middle term, and which for a long time was wrongly
considered as the only type of mediate deduction.^^

Whitehead, Universal Algebra, p. 63.

However this may be, Boole and Schroder have exaggerated the anal- 
ogy between the algebra of logic and ordinary algebra. In logic, the distinction 
of known and unknown terms is artificial and almost useless. All the terms 
are — in principle at least — known, and it is simply a question, certain relations 
between them being given, of deducing new relations (unknown or not explic- 
itly known) from these known relations. This is the purpose of Poretsky's 
method which we shall now expound. It may be summed up in three laws, the 
law of forms, the law of consequences and the law of causes. 

0.43 The Law of Forms 

This law answers the following problem: An equality being given, to find for 
any term (simple or complex) a determination equivalent to this equality. In 
other words, the question is to find all the forms equivalent to this equality, any 
term at all being given as its first member. 

We know that any equality can be reduced to a form in which the second
member is 0 or 1; i.e., to one of the two equivalent forms

N = 0, N' = 1.

The function N is what Poretsky calls the logical zero of the given
equality; N' is its logical whole.^^

Let U be any term; then the determination of U: 

U = N'U + NU' 

is equivalent to the proposed equality; for we know it is equivalent to the equality

(NU + NU' = 0) = (N = 0).

Let us recall the signification of the determination

U = N'U + NU'.



^^In fact, the fundamental formula of elimination

(ax + bx' = 0) < (ab = 0)

is, as we have seen, only another form and a consequence of the principle of the syllogism

(b < x < a) < (b < a).

^^They are called "logical" to distinguish them from the identical zero and whole, i.e., to
indicate that these two terms are not equal to 0 and 1 respectively except by virtue of the
data of the problem.






It denotes that the term U is contained in N' and contains N. This is easily
understood, since, by hypothesis, N is equal to 0 and N' to 1. Therefore we
can formulate the law of forms in the following way:

To obtain all the forms equivalent to a given equality, it is sufficient to express 
that any term contains the logical zero of this equality and is contained in its 
logical whole. 

The number of forms of a given equality is unlimited; for any term gives rise 
to a form, and to a form different from the others, since it has a different first 
member. But if we are limited to the universe of discourse determined by n 
simple terms, the number of forms becomes finite and determinate. For, in this 
limited universe, there are 2^n constituents. Now, all the terms in this universe
that can be conceived and defined are sums of some of these constituents. Their
number is, therefore, equal to the number of combinations that can be made
with 2^n constituents, namely 2^(2^n) (including 0, the combination of no constituents,
and 1, the combination of all the constituents). This will also be the number of
different forms of any equality in the universe in question. 
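The equivalence between the determination U = N'U + NU' and the equality
N = 0, as well as the count of the forms, can be verified by brute force on a
small universe. The Python sketch below (the set representation and helper
names are ours, not Poretsky's) does this for the universe of two simple terms,
whose 2^2 = 4 constituents give 2^4 = 16 classes, i.e., 16 forms.

    from itertools import combinations

    # Classes are represented as sets of constituent indices 0..3.
    CONSTITUENTS = range(4)

    def classes():
        return [set(c) for r in range(5) for c in combinations(CONSTITUENTS, r)]

    def comp(s):
        return set(CONSTITUENTS) - s

    for N in classes():
        for U in classes():
            # the determination U = N'U + NU'
            determination = (U == (comp(N) & U) | (N & comp(U)))
            assert determination == (N == set())   # equivalent to N = 0

    print("number of classes, i.e. of forms of an equality:", len(classes()))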

0.44 The Law of Consequences 

We shall now pass to the law of consequences. Generalizing the conception 
of Boole, who made deduction consist in the elimination of middle terms, 
PORETSKY makes it consist in the elimination of known terms (connaissances) . 
This conception is explained and justified as follows. 

All problems in which the data are expressed by logical equalities or inclu- 
sions can be reduced to a single logical equality by means of the formula^^ 

(A = 0)(B = 0)(C = 0) ... = (A + B + C ... = 0).

In this logical equality, which sums up all the data of the problem, we develop 
the first member with respect to all the simple terms which appear in it (and 
not with respect to the unknown quantities). Let n be the number of simple
terms; then the number of the constituents of the development of 1 is 2^n. Let
m (< 2^n) be the number of those constituents appearing in the first member of
the equality. All possible consequences of this equality (in the universe of the n 
terms in question) may be obtained by forming all the additive combinations of 
these m constituents, and equating them to 0; and this is done in virtue of the 
formula 

(A + B = 0) < (A = 0).

We see that we pass from the equality to any one of its consequences by 
suppressing some of the constituents in its first member, which correspond to 
as many elementary equalities (having 0 for second member), i.e., as many as
there are data in the problem. This is what is meant by "eliminating the known 
terms". 



^^We employ capitals to denote complex terms (logical functions) in contrast to simple 
terms denoted by small letters (a, b,c, . . .) 






The number of consequences that can be derived from an equality (in the
universe of n terms with respect to which it is developed) is equal to the number
of additive combinations that may be formed with its m constituents; i.e., 2^m.
This number includes the combination of no constituents, which gives rise to the
identity 0 = 0, and the combination of all the m constituents, which reproduces
the given equality.
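A consequence is thus nothing but an additive combination of the constituents
of the first member, equated to 0. The short Python sketch below merely
enumerates these 2^m combinations for the four constituents of the syllogism
example treated below; the string labels are only illustrative.

    from itertools import combinations

    constituents = ("abc'", "ab'c", "ab'c'", "a'bc'")
    consequences = [" + ".join(c) + " = 0" if c else "0 = 0"
                    for r in range(len(constituents) + 1)
                    for c in combinations(constituents, r)]
    print(len(consequences), "consequences")       # 2^4 = 16
    print(consequences[0], "|", consequences[-1])  # the two extreme ones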

Let us apply this method to the equation with one unknown quantity 

ax + bx' = 0.

Developing it with respect to the three terms a, b, x:

(abx + ab'x + abx' + a'bx' = 0)

= [ab(x + x') + ab'x + a'bx' = 0]
= (ab = 0)(ab'x = 0)(a'bx' = 0).

Thus we find, on the one hand, the resultant ab = 0, and, on the other hand,
two equalities which may be transformed into the inclusions

x < a' + b, a'b < x.

But by the resultant which is equivalent to b < a', we have

a' + b = a', a'b = b.

This consequence may therefore be reduced to the double inclusion

x < a', b < x,

that is, to the known solution.

Let us apply the same method to the premises of the syllogism 

(a < b)(b < c).

Reduce them to a single equality

(a < b) = (ab' = 0), (b < c) = (bc' = 0), (ab' + bc' = 0),

and seek all of its consequences.

Developing with respect to the three terms a, b, c:

abc' + ab'c + ab'c' + a'bc' = 0.

The consequences of this equality, which contains four constituents, are 16
(2^4) in number, as follows:

1. (abc' = 0) = (ab < c);

2. (ab'c = 0) = (ac < b);

3. (ab'c' = 0) = (a < b + c);

4. (a'bc' = 0) = (b < a + c);

5. (abc' + ab'c = 0) = (a < bc + b'c');

6. (abc' + ab'c' = 0) = (ac' = 0) = (a < c).

This is the traditional conclusion of the syllogism.^^

7. (abc' + a'bc' = 0) = (bc' = 0) = (b < c).
This is the second premise.

8. (ab'c + ab'c' = 0) = (ab' = 0) = (a < b).
This is the first premise.

9. (ab'c + a'bc' = 0) = (ac < b < a + c);

10. (ab'c' + a'bc' = 0) = (ab' + a'b < c);

11. (abc' + ab'c + ab'c' = 0) = (ab' + ac' = 0) = (a < bc);

12. (abc' + ab'c + a'bc' = 0) = (ab'c + bc' = 0) = (ac < b < c);

13. (abc' + ab'c' + a'bc' = 0) = (ac' + bc' = 0) = (a + b < c);

14. (ab'c + ab'c' + a'bc' = 0) = (ab' + a'bc' = 0) = (a < b < a + c).

The last two consequences (15 and 16) are those obtained by combining no
constituent and by combining all; the first is the identity

15. 0 = 0,

which confirms the paradoxical proposition that the true (identity) is implied
by any proposition (is a consequence of it); the second is the given equality itself

16. ab' + bc' = 0,

which is, in fact, its own consequence by virtue of the principle of identity. These
two consequences may be called the "extreme consequences" of the proposed
equality. If we wish to exclude them, we must say that the number of the
consequences properly so called of an equality of m constituents is 2^m - 2.
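The listed consequences can also be verified semantically: in the two-valued
model, every assignment of a, b, c satisfying the premise ab' + bc' = 0 satisfies
each of them. The Python sketch below checks entries 1, 6 and 10 in this way;
it is a brute-force verification, not part of Poretsky's method.

    from itertools import product

    for a, b, c in product((0, 1), repeat=3):
        if a * (1 - b) + b * (1 - c) == 0:          # premise ab' + bc' = 0
            assert a * b * (1 - c) == 0             # No. 1:  abc' = 0, i.e. ab < c
            assert a * (1 - c) == 0                 # No. 6:  ac' = 0,  i.e. a < c
            assert (a * (1 - b) + (1 - a) * b) * (1 - c) == 0   # No. 10
    print("the listed consequences hold in every case allowed by the premises")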

0.45 The Law of Causes 

The method of finding the consequences of a given equality suggests directly 
the method of finding its causes, namely, the propositions of which it is the 
consequence. Since we pass from the cause to the consequence by eliminating 
known terms, i.e., by suppressing constituents, we will pass conversely from the 
consequence to the cause by adjoining known terms, i.e., by adding constituents 
to the given equality. Now, the number of constituents that may be added to 



^^It will be observed that this is the only consequence (except the two extreme consequences 
[see the text below]) independent of b; therefore it is the resultant of the elimination of that 
middle term. 






it, i.e., that do not already appear in it, is 2^n - m. We will obtain all the
possible causes (in the universe of the n terms under consideration) by forming 
all the additive combinations of these constituents, and adding them to the first 
member of the equality in virtue of the general formula 

(A + B = 0) < (A = 0),

which means that the equality (A = 0) has as its cause the equality (A + B = 0),
in which B is any term. The number of causes thus obtained will be equal to
the number of the aforesaid combinations, or 2^(2^n - m).
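As with the consequences, the causes can be generated mechanically: add to
the first member any additive combination of the missing constituents. The
Python sketch below does so, with string labels of our own, for the equality
ab' + bc' = 0 of the syllogism, which is taken up again immediately below.

    from itertools import combinations

    present = ("abc'", "ab'c", "ab'c'", "a'bc'")      # constituents of ab' + bc'
    missing = ("abc", "a'bc", "a'b'c", "a'b'c'")      # the four absent ones
    causes = [" + ".join(present + extra) + " = 0"
              for r in range(len(missing) + 1)
              for extra in combinations(missing, r)]
    print(len(causes), "causes")    # 2^(2^n - m) = 2^4 = 16, extremes included
    print(causes[0])                # no constituent added: the equality itself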

This method may be applied to the investigation of the causes of the premises 
of the syllogism 

(a < b)(b < c)

which, as we have seen, is equivalent to the developed equality

abc' + ab'c + ab'c' + a'bc' = 0.

This equality contains four of the eight (2^3) constituents of the universe of
three terms, the four others being

abc, a'bc, a'b'c, a'b'c'.

The number of their combinations is 16 (2^4); this is also the number of the
causes sought, which are:

(8) (abc + abc' + ab'c + ab'c' + a'bc' = 0)
    = (a + bc' = 0) = (a = 0)(b < c);

(9) (abc' + ab'c + ab'c' + a'bc + a'bc' = 0)
    = (abc' + ab' + a'b = 0) = (ab < c)(a = b);

(10) (abc' + ab'c + ab'c' + a'bc' + a'b'c = 0)
    = (bc' + b'c + ab'c' = 0) = (b = c)(a < b + c);

(11) (abc' + ab'c + ab'c' + a'bc' + a'b'c' = 0)
    = (c' + ab' = 0) = (c = 1)(a < b);

(12) (abc + abc' + ab'c + ab'c' + a'bc + a'bc' = 0)
    = (a + b = 0) = (a = 0)(b = 0);

(13) (abc + abc' + ab'c + ab'c' + a'bc' + a'b'c = 0)
    = (a + bc' + b'c = 0) = (a = 0)(b = c);

(14) (abc + abc' + ab'c + ab'c' + a'bc' + a'b'c' = 0)
    = (a + c' = 0) = (a = 0)(c = 1);

(15) (abc' + ab'c + ab'c' + a'bc + a'bc' + a'b'c = 0)
    = (ac' + a'c + ab'c + a'bc' = 0)
    = (a = c)(ac < b < a + c) = (a = b = c);

(16) (abc' + ab'c + ab'c' + a'bc + a'bc' + a'b'c' = 0)
    = (c' + ab' + a'b = 0) = (c = 1)(a = b);

(17) (abc' + ab'c + ab'c' + a'bc' + a'b'c + a'b'c' = 0)
    = (b' + c' = 0) = (b = c = 1).

Before going any further, it may be observed that when the sum of certain
constituents is equal to 0, the sum of the rest is equal to 1. Consequently, instead
of examining the sum of seven constituents obtained by ignoring one of the four
missing constituents, we can examine the equalities obtained by equating each
of these constituents to 1:

(18) (a'b'c' = 1) = (a + b + c = 0) = (a = b = c = 0);

(19) (a'b'c = 1) = (a + b + c' = 0) = (a = b = 0)(c = 1);

(20) (a'bc = 1) = (a + b' + c' = 0) = (a = 0)(b = c = 1);

(21) (abc = 1) = (a = b = c = 1).

Note that the last four causes are based on the inclusion 

0<1. 

The last two causes (22 and 23) are obtained either by adding all the miss- 
ing constituents or by not adding any. In the first case, the sum of all the 
constituents being equal to 1, we find 

(22) 1 = 0, 




that is, absurdity, and this confirms the paradoxical proposition that the false 
(the absurd) implies any proposition (is its cause). In the second case, we obtain 
simply the given equality, which thus appears as one of its own causes (by the 
principle of identity) : 

(23) ab' + bc' = 0.

If we disregard these two extreme causes, the number of causes properly so
called will be 2^(2^n - m) - 2, that is, 16 - 2 = 14 in the case of our example.



0.46 Forms of Consequences and Causes 

We can apply the law of forms to the consequences and causes of a given equality 
so as to obtain all the forms possible to each of them. Since any equality is 
equivalent to one of the two forms 

N = 0, N' = l, 

each of its consequences has the form^^ 

NX = 0, or N' + X' = 1,

and each of its causes has the form

N + X = 0, or N'X' = 1.

In fact, we have the following formal implications: 

(N + X = 0) < (N = 0) < (NX = 0),
(N'X' = 1) < (N' = 1) < (N' + X' = 1).

Applying the law of forms, the formula of the consequences becomes

U = (N' + X')U + NXU',

and the formula of the causes

U = N'X'U + (N + X)U';



^^In §0.44 we said that a consequence is obtained by taking a part of the constituents of
the first member N, and not by multiplying it by a term X; but it is easily seen that this
amounts to the same thing. For, suppose that X (like N) be developed with respect to the
n terms of discourse. It will be composed of a certain number of constituents. To perform
the multiplication of N by X, it is sufficient to multiply all their constituents each by each.
Now, the product of two identical constituents is equal to each of them, and the product of
two different constituents is 0. Hence the product of N by X becomes reduced to the sum of
the constituents common to N and X, which is, of course, contained in N. So, to multiply N
by an arbitrary term is tantamount to taking a part of its constituents (or all, or none).






or, more generally, since X and X' are indeterminate terms, and consequently 
are not necessarily the negatives of each other, the formula of the consequences 
will be 

U = (N' + X)U + NYU',

and the formula of the causes

U = N'XU + (N + Y)U'.

The first denotes that U is contained in (N' + X) and contains NY; which
indeed results, a fortiori, from the hypothesis that U is contained in N' and
contains N.

The second formula denotes that U is contained in N'X and contains N + Y;
whence results, a fortiori, that U is contained in N' and contains N.

We can express this rule verbally if we agree to call every class contained
in another a sub-class, and every class that contains another a super-class. We
then say: To obtain all the consequences of an equality (put in the form U =
N'U + NU'), it is sufficient to substitute for its logical whole N' all its super-
classes, and, for its logical zero N, all its sub-classes. Conversely, to obtain all 
the causes of the same equality, it is sufficient to substitute for its logical whole 
all its sub-classes, and for its logical zero, all its super-classes. 

0.47 Example: Venn's Problem 

The members of the administrative council of a financial society are either bond- 
holders or shareholders, but not both. Now, all the bondholders form a part of 
the council. What conclusion must we draw? 

Let a be the class of the members of the council; let b be the class of the 
bondholders and c that of the shareholders. The data of the problem may be 
expressed as follows: 

a < bc' + b'c, b < a.

Reducing to a single developed equality, 

a(bc + b'c') = 0, a'b = 0,

(24) abc + ab'c' + a'bc + a'bc' = 0. 

This equality, which contains 4 of the constituents, is equivalent to the fol-
lowing, which contains the four others,

(25) abc' + ab'c + a'b'c + a'b'c' = 1.

This equality may be expressed in as many different forms as there are classes
in the universe of the three terms a, b, c.



Ex. 1. a = abc' + ab'c + a'bc + a'bc',

that is,

b < a < bc' + b'c;

Ex. 2. b = abc' + ab'c' = ac';

Ex. 3. c = ab'c + a'b'c + ab'c' + a'bc',

that is,

ab' + a'b < c < b'.

These are the solutions obtained by solving equation (24) with respect to a,
b, and c.

From equality (24) we can derive 16 consequences, as follows:

1. (abc = 0);

2. (ab'c' = 0) = (a < b + c);

3. (a'bc = 0) = (bc < a);

4. (a'bc' = 0) = (b < a + c);

5. (abc + ab'c' = 0) = (a < bc' + b'c) [1st premise];

6. (abc + a'bc = 0) = (bc = 0);

7. (abc + a'bc' = 0) = (b < ac' + a'c);

8. (ab'c' + a'bc = 0) = (bc < a < b + c);

9. (ab'c' + a'bc' = 0) = (ab' + a'b < c);

10. (a'bc + a'bc' = 0) = (a'b = 0) [2nd premise];

11. (abc + ab'c' + a'bc = 0) = (bc + ab'c' = 0);

12. (abc + ab'c' + a'bc' = 0);

13. (abc + a'bc + a'bc' = 0) = (bc + a'bc' = 0);

14. (ab'c' + a'bc + a'bc' = 0).

The last two consequences, as we know, are the identity (0 = 0) and the
equality (24) itself. Among the preceding consequences will be especially noted
the 6th, (bc = 0), the resultant of the elimination of a, and the 10th, (a'b = 0),
the resultant of the elimination of c. When b is eliminated the resultant is the
identity

[(a' + c)ac' = 0] = (0 = 0).

Finally, we can deduce from the equality (24) or its equivalent (25) the
following 16 causes:

1. (abc' = 1) = (a = 1)(b = 1)(c = 0);

2. (ab'c = 1) = (a = 1)(b = 0)(c = 1);

3. (a'b'c = 1) = (a = 0)(b = 0)(c = 1);

4. (a'b'c' = 1) = (a = 0)(b = 0)(c = 0);

5. (abc' + ab'c = 1) = (a = 1)(b' = c);

6. (abc' + a'b'c = 1) = (a = b = c');

7. (abc' + a'b'c' = 1) = (c = 0)(a = b);

8. (ab'c + a'b'c = 1) = (b = 0)(c = 1);

9. (ab'c + a'b'c' = 1) = (b = 0)(a = c);

10. (a'b'c + a'b'c' = 1) = (a = 0)(b = 0);

11. (abc' + ab'c + a'b'c = 1) = (b = c')(c' < a);

12. (abc' + ab'c + a'b'c' = 1) = (bc = 0)(a = b + c);

13. (abc' + a'b'c + a'b'c' = 1) = (ac = 0)(a = b);

14. (ab'c + a'b'c + a'b'c' = 1) = (b = 0)(a < c).

The last two causes, as we know, are the equality (24) itself and the absurdity
(1 = 0). It is evident that the cause independent of a is the 8th, (b = 0)(c = 1),
and the cause independent of c is the 10th, (a = 0)(b = 0). There is no cause,
properly speaking, independent of b. The most "natural" cause, the one which
may be at once divined simply by the exercise of common sense, is the 12th:

(bc = 0)(a = b + c).

But other causes are just as possible; for instance the 9th, (b = 0)(a = c), the
7th, (c = 0)(a = b), or the 13th, (ac = 0)(a = b).

We see that this method furnishes the complete enumeration of all possible 
cases. In particular, it comprises, among the forms of an equality, the solutions 
deducible therefrom with respect to such and such an "unknown quantity", and, 
among the consequences of an equality, the resultants of the elimination of such 
and such a term. 
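The whole of Venn's problem can be replayed by brute force in the two-valued
model: the data determine which constituents vanish, and the resultants of
elimination noted above follow. The Python sketch below is only such a check,
with an encoding of the data that is ours.

    from itertools import product

    def data(a, b, c):
        # a < bc' + b'c  (members are bondholders or shareholders, not both)
        # b < a          (all bondholders belong to the council)
        return (not a or (b != c)) and (not b or a)

    surviving = [(a, b, c) for a, b, c in product((0, 1), repeat=3) if data(a, b, c)]
    print(sorted(surviving))       # the four constituents of equality (25)

    for a, b, c in surviving:
        assert not (b and c)       # consequence 6:  bc = 0  (a eliminated)
        assert not (b and not a)   # consequence 10: a'b = 0 (c eliminated)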

0.48 The Geometrical Diagrams of Venn 

Poretsky's method may be looked upon as the perfection of the methods of 
Stanley Jevons and Venn. 

Conversely, it finds in them a geometrical and mechanical illustration, for 
Venn's method is translated in geometrical diagrams which represent all the 
constituents, so that, in order to obtain the result, we need only strike out 
(by shading) those which are made to vanish by the data of the problem. For 
instance, the universe of three terms a, 6, c, represented by the unbounded plane. 



62 




Figure 1: 

is divided by three simple closed contours into eight regions which represent the 
eight constituents (Fig. 1). 

To represent geometrically the data of Venn's problem we must strike out
the regions abc, ab'c', a'bc and a'bc'; there will then remain the regions abc',
ab'c, a'b'c, and a'b'c', which will constitute the universe relative to the problem,
being what Poretsky calls its logical whole (Fig. 2). Then every class will be
contained in this universe, which will give for each class the expression resulting
from the data of the problem. Thus, simply by inspecting the diagram, we see
that the region bc does not exist (being struck out); that the region b is reduced
to abc' (hence to ac'); that all a is b or c, and so on.




Figure 2: 

This diagrammatic method has, however, serious inconveniences as a method 
for solving logical problems. It does not show how the data are exhibited by 
canceling certain constituents, nor does it show how to combine the remaining 
constituents so as to obtain the consequences sought. In short, it serves only to 
exhibit one single step in the argument, namely the equation of the problem; 
it dispenses neither with the previous steps, i.e., "throwing of the problem into
an equation" and the transformation of the premises, nor with the subsequent
steps, i.e., the combinations that lead to the various consequences. Hence it is
of very little use, inasmuch as the constituents can be represented by algebraic
symbols quite as well as by plane regions, and are much easier to deal with in 
this form. 



0.49 The Logical Machine of Jevons 

In order to make his diagrams more tractable, Venn proposed a mechanical 
device by which the plane regions to be struck out could be lowered and caused to 
disappear. But Jevons invented a more complete mechanism, a sort of logical 
piano. The keyboard of this instrument was composed of keys indicating the 
various simple terms (a, b, c, d), their negatives, and the signs + and =. Another
part of the instrument consisted of a panel with movable tablets on which were 
written all the combinations of simple terms and their negatives; that is, all the 
constituents of the universe of discourse. Instead of writing out the equalities 
which represent the premises, they are "played" on a keyboard like that of a 
typewriter. The result is that the constituents which vanish because of the 
premises disappear from the panel. When all the premises have been "played", 
the panel shows only those constituents whose sum is equal to 1, that is, forms 
the universe with respect to the problem, its logical whole. This mechanical 
method has the advantage over Venn's geometrical method of performing 
automatically the "throwing into an equation", although the premises must first 
be expressed in the form of equalities; but it throws no more light than the 
geometrical method on the operations to be performed in order to draw the 
consequences from the data displayed on the panel. 

0.50 Table of Consequences 

But Poretsky's method can be illustrated, better than by geometrical and 
mechanical devices, by the construction of a table which will exhibit directly all 
the consequences and all the causes of a given equality. (This table is relative to 
this equality and each equality requires a different table). Each table comprises 
the 2^(2^n) classes that can be defined and distinguished in the universe of discourse
of n terms. We know that an equality consists in the annulment of a certain
number of these classes, viz., of those which have for constituents some of the
constituents of its logical zero N. Let m be the number of these latter con-
stituents; then the number of the subclasses of N is 2^m, which, therefore, is the
number of classes of the universe which vanish in consequence of the equality
considered. Arrange them in a column commencing with 0 and ending with N
(the two extremes). On the other hand, given any class at all, any preceding 
class may be added to it without altering its value, since by hypothesis they 
are null (in the problem under consideration). Consequently, by the data of the 
problem, each class is equal to 2^m classes (including itself). Thus, the assem-
blage of the 2^(2^n) classes of discourse is divided into 2^(2^n - m) series of 2^m classes, each
series being constituted by the sums of a certain class and of the 2^m classes of
the first column (sub-classes of N). Hence we can arrange these 2^(2^n) sums in the
following columns by making them correspond horizontally to the classes of the 
first column which gave rise to them. Let us take, for instance, the very simple 
equality a = b, which is equivalent to 

ab' + a'b = 0. 

The logical zero (N) in this case is ab' + a'b. It comprises two constituents
and consequently four sub-classes: 0, ab', a'b, and ab' + a'b. These will compose
the first column. The other classes of discourse are ab, a'b', ab + a'b', and those
obtained by adding to each of them the four classes of the first column. In this 
way, the following table is obtained: 

0              ab         a'b'        ab + a'b'
ab'            a          b'          a + b'
a'b            b          a'          a' + b
ab' + a'b      a + b      a' + b'     1

By construction, each class of this table is the sum of those at the head of 
its row and of its column, and, by the data of the problem, it is equal to each 
of those in the same column. Thus we have 64 different consequences for any 
equality in the universe of discourse of 2 letters. They comprise 16 identities 
(obtained by equating each class to itself) and 16 forms of the given equality, 
obtained by equating the classes which correspond in each row to the classes 
which are known to be equal to them, namely 

0 = ab' + a'b,    ab = a + b,    a'b' = a' + b',    ab + a'b' = 1,
a = b,    b' = a',    ab' = a'b,    a + b' = a' + b.

Each of these 8 equalities counts for two, according as it is considered as a 
determination of one or the other of its members. 
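The construction of the table lends itself to a small computational sketch.
Below (an illustration with our own encoding, not Poretsky's notation), classes
are sets of the four constituents ab, ab', a'b, a'b'; each cell is the sum of its
row head and column head; and the 64 consequences are checked by verifying
that two classes of the same column differ only by a sub-class of N.

    from itertools import combinations

    AB, AB_, A_B, A_B_ = 0, 1, 2, 3        # constituents ab, ab', a'b, a'b'
    N = {AB_, A_B}                         # logical zero of a = b

    def subclasses(s):
        return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

    rows = subclasses(N)                   # 0, ab', a'b, ab' + a'b
    cols = subclasses({AB, A_B_})          # 0, ab, a'b', ab + a'b'
    table = [[row | col for col in cols] for row in rows]

    for j in range(4):
        for i in range(4):
            for k in range(4):
                # classes of one column are equal by the datum a = b:
                # their symmetric difference is a sub-class of N
                assert (table[i][j] ^ table[k][j]) <= N
    print("the table yields 4 * 16 = 64 consequences of a = b")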



0.51 Table of Causes 

The same table may serve to represent all the causes of the same equality in 
accordance with the following theorem: 

When the consequences of an equality N = 0 are expressed in the form of
determinations of any class U, the causes of this equality are deduced from the
consequences of the opposite equality, N = 1, put in the same form, by changing
U to U' in one of the two members.

For we know that the consequences of the equality N = 0 have the form

U = (N' + X)U + NYU',

and that the causes of the same equality have the form

U = N'XU + (N + Y)U'.






Now, if we change U into U' in one of the members of this last formula, it
becomes

U = (N + X')U + N'Y'U',

and the accents of X and Y can be suppressed since these letters represent
indeterminate classes. But then we have the formula of the consequences of the
equality N' = 0 or N = 1.

This theorem being established, let us construct, for instance, the table of 
causes of the equality a = b. This will be the table of the consequences of the 
opposite equality a = b', for the first is equivalent to

ab' + a'b = 0,

and the second to

(ab + a'b' = 0) = (ab' + a'b = 1).





0              ab'        a'b         ab' + a'b
ab             a          b           a + b
a'b'           b'         a'          a' + b'
ab + a'b'      a + b'     a' + b      1

To derive the causes of the equality a = b from this table instead of the con-
sequences of the opposite equality a = b', it is sufficient to equate the negative
of each class to each of the classes in the same column. Examples are: 

a' + b' = 0,    a' + b' = a'b',    a' + b' = ab + a'b',
a' + b = a,    a' + b = b',    a' + b = a + b'; ....

Among the 64 causes of the equality under consideration there are 16 ab- 
surdities (consisting in equating each class of the table to its negative); and 16 
forms of the equality (the same, of course, as in the table of consequences, for 
two equivalent equalities are at the same time both cause and consequence of 
each other). 

It will be noted that the table of causes differs from the table of consequences 
only in the fact that it is symmetrical to the other table with respect to the 
principal diagonal (0, 1); hence they can be made identical by substituting the 
word "row" for the word "column" in the foregoing statement. And, indeed, 
since the rule of the consequences concerns only classes of the same column, we 
are at liberty so to arrange the classes in each column on the rows that the rule 
of the causes will be verified by the classes in the same row. 

It will be noted, moreover, that, by the method of construction adopted for 
this table, the classes which are the negatives of each other occupy positions 
symmetrical with respect to the center of the table. For this result, the sub- 
classes of the class N' (the logical whole of the given equality or the logical zero 
of the opposite equality) must be placed in the first row in their natural order 
from 0 to N'; then, in each division, must be placed the sum of the classes at
the head of its row and column. 






With this precaution, we may sum up the two rules in the following practical
statement: 

To obtain every consequence of the given equality (to which the table relates) 
it is sufficient to equate each class to every class in the same column; and, to 
obtain every cause, it is sufficient to equate each class to every class in the row 
occupied by its symmetrical class. 

It is clear that the table relating to the equality N = 0 can also serve for
the opposite equality N = 1, on condition that the words "row" and "column"
in the foregoing statement be interchanged. 

Of course the construction of the table relating to a given equality is useful 
and profitable only when we wish to enumerate all the consequences or the 
causes of this equality. If we desire only one particular consequence or cause 
relating to this or that class of the discourse, we make use of one of the formulas 
given above. 

0.52 The Number of Possible Assertions 

If we regard logical functions and equations as developed with respect to all the 
letters, we can calculate the number of assertions or different problems that may 
be formulated about n simple terms. For all the functions thus developed can 
contain only those constituents which have the coefficient 1 or the coefficient 0
(and in the latter case, they do not contain them). Hence they are additive
combinations of these constituents; and, since the number of the constituents
is 2^n, the number of possible functions is 2^(2^n). From this must be deducted
the function in which all constituents are absent, which is identically 0, leaving
2^(2^n) - 1 possible equations (255 when n = 3). But these equations, in their turn,
may be combined by logical addition, i.e., by alternation; hence the number of
their combinations is 2^(2^(2^n) - 1) - 1, excepting always the null combination. This is
the number of possible assertions affecting n terms. When n = 2, this number is 
as high as 32767.^^ We must observe that only universal premises are admitted 
in this calculus, as will be explained in the following section. 
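These counts are immediate to reproduce; the following Python lines, given
only as a convenience, evaluate the formulas above for small n.

    for n in (1, 2, 3):
        constituents = 2 ** n
        functions = 2 ** constituents        # developed functions, 0 included
        equations = functions - 1            # possible equations N = 0
        assertions = 2 ** equations - 1      # alternations of such equations
        print(n, constituents, functions, equations, assertions)
    # n = 3 gives 255 possible equations; n = 2 gives 32767 possible assertions.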

0.53 Particular Propositions 

Hitherto we have only considered propositions with an affirmative copula (i.e.,
inclusions or equalities) corresponding to the universal propositions of classi-

^^ G. Peano, Calcolo geometrico (1888) p. x; Schroder, Algebra der Logik, Vol. II, 
p. 144-148. 






cal logic.^^ It remains for us to study propositions with a negative copula (non-
inclusions or inequalities), which translate particular propositions^^; but the cal-
culus of propositions having a negative copula results from laws already known,
especially from the formulas of De Morgan and the law of contraposition. We
shall enumerate the chief formulas derived from it.

The principle of composition gives rise to the following formulas: 

(c ≮ ab) = (c ≮ a) + (c ≮ b),
(a + b ≮ c) = (a ≮ c) + (b ≮ c),

whence come the particular instances

(ab ≠ 1) = (a ≠ 1) + (b ≠ 1),
(a + b ≠ 0) = (a ≠ 0) + (b ≠ 0).

From these may be deduced the following important implications:

(a ≠ 0) < (a + b ≠ 0),
(a ≠ 1) < (ab ≠ 1).

From the principle of the syllogism, we deduce, by the law of transposition,

(a < b)(a ≠ 0) < (b ≠ 0),
(a < b)(b ≠ 1) < (a ≠ 1).

The formulas for transforming inclusions and equalities give corresponding
formulas for the transformation of non-inclusions and inequalities,

(a ≮ b) = (ab' ≠ 0) = (a' + b ≠ 1),
(a ≠ b) = (ab' + a'b ≠ 0) = (ab + a'b' ≠ 1).



^^The universal affirmative, "All a's are b's", may be expressed by the formulas

(a < b) = (a = ab) = (ab' = 0) = (a' + b = 1),

and the universal negative, "No a's are b's", by the formulas

(a < b') = (a = ab') = (ab = 0) = (a' + b' = 1).

^^For the particular affirmative, "Some a's are b's", being the negation of the universal
negative, is expressed by the formulas

(a ≮ b') = (a ≠ ab') = (ab ≠ 0) = (a' + b' ≠ 1),

and the particular negative, "Some a's are not b's", being the negation of the universal affir-
mative, is expressed by the formulas

(a ≮ b) = (a ≠ ab) = (ab' ≠ 0) = (a' + b ≠ 1).






0.54 Solution of an Inequation with One Unknown 

If we consider the conditional inequality (inequation) with one unknown 

ax + bx' ≠ 0,

we know that its first member is contained in the sum of its coefficients

ax + bx' < a + b.

From this we conclude that, if this inequation is verified, we have the in-
equality

a + b ≠ 0.

This is the necessary condition of the solvability of the inequation, and the 
resultant of the elimination of the unknown x. For, since we have the equivalence 

∏_x (ax + bx' = 0) = (a + b = 0),

we have also by contraposition the equivalence

∑_x (ax + bx' ≠ 0) = (a + b ≠ 0).

Likewise, from the equivalence

∑_x (ax + bx' = 0) = (ab = 0)

we can deduce the equivalence

∏_x (ax + bx' ≠ 0) = (ab ≠ 0),

which signifies that the necessary and sufficient condition for the inequation to
be always true is

(ab ≠ 0);

and, indeed, we know that in this case the equation

(ax + bx' = 0)

is impossible (never true).

Since, moreover, we have the equivalence

(ax + bx' = 0) = (x = a'x + bx'),

we have also the equivalence

(ax + bx' ≠ 0) = (x ≠ a'x + bx').






Notice the significance of this solution:

(ax + bx' ≠ 0) = (ax ≠ 0) + (bx' ≠ 0) = (x ≮ a') + (b ≮ x).

"Either x is not contained in a', or it does not contain b." This is the negative
of the double inclusion

b < x < a'.

Just as the product of several equalities is reduced to one single equality, 
the sum (the alternative) of several inequalities may be reduced to a single 
inequality. But neither several alternative equalities nor several simultaneous 
inequalities can be reduced to one. 
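In the calculus of propositions these equivalences can be checked exhaustively,
since a, b and x take only the values 0 and 1; the Python sketch below does so,
the sum over x being an "any" and the product over x an "all".

    from itertools import product

    for a, b in product((0, 1), repeat=2):
        exists = any(a * x + b * (1 - x) != 0 for x in (0, 1))
        forall = all(a * x + b * (1 - x) != 0 for x in (0, 1))
        assert exists == (a + b != 0)   # sum over x      <=>  a + b ≠ 0
        assert forall == (a * b != 0)   # product over x  <=>  ab ≠ 0
    print("resultants of the inequation verified")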

0.55 System of an Equation and an Inequation 

We shall limit our study to the case of a simultaneous equality and inequality. 
For instance, let the two premises be 

(ax + bx' = 0)(cx + dx' ≠ 0).

To satisfy the former (the equation) its resultant ab = 0 must be verified.
The solution of this equation is 

x = a'x + bx'.

Substituting this expression (which is equivalent to the equation) in the 
inequation, the latter becomes 

(a'c + ad)x + (bc + b'd)x' ≠ 0.

Its resultant (the condition of its solvability) is 

(a'c + ad + bc + b'd ≠ 0) = [(a' + b)c + (a + b')d ≠ 0],

which, taking into account the resultant of the equality,

(ab = 0) = (a' + b = a') = (a + b' = b'),

may be reduced to

a'c + b'd ≠ 0.

The same result may be reached by observing that the equality is equivalent 
to the two inclusions 

(x < a')(x' < b'),

and by multiplying both members of each by the same term 

(cx < a'c)(dx' < b'd) < (cx + dx' < a'c + b'd),
(cx + dx' ≠ 0) < (a'c + b'd ≠ 0).

This resultant implies the resultant of the inequality taken alone

c + d ≠ 0,

so that we do not need to take the latter into account. It is therefore sufficient
to add to it the resultant of the equality to have the complete resultant of the
proposed system

(ab = 0)(a'c + b'd ≠ 0).

The solution of the transformed inequality (which consequently involves the
solution of the equality) is

x ≠ (a'c' + ad')x + (bc + b'd)x'.
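The complete resultant can likewise be tested in the two-valued model: the
system admits a value of x exactly when (ab = 0)(a'c + b'd ≠ 0) holds. The
following sketch is only this brute-force check, not a general proof for classes.

    from itertools import product

    for a, b, c, d in product((0, 1), repeat=4):
        solvable = any(a * x + b * (1 - x) == 0 and c * x + d * (1 - x) != 0
                       for x in (0, 1))
        resultant = (a * b == 0) and ((1 - a) * c + (1 - b) * d != 0)
        assert solvable == resultant
    print("complete resultant of the system verified")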

0.56 Formulas Peculiar to the Calculus of Propositions

All the formulas which we have hitherto noted are valid alike for propositions 
and for concepts. We shall now establish a series of formulas which are valid 
only for propositions, because all of them are derived from an axiom peculiar 
to the calculus of propositions, which may be called the principle of assertion. 
This axiom is as follows: 

Ax. 10 

(a = 1) = a. 

P. I.: To say that a proposition a is true is to state the proposition itself. In 
other words, to state a proposition is to affirm the truth of that proposition.^^ 
Corollary. 

a' = (a' = 1) = (a = 0). 

P. I.: The negative of a proposition a is equivalent to the affirmation that 
this proposition is false. 

By Ax. 9 (§0.20), we already have 

(a = 1)(a = 0) = 0,

"A proposition cannot be both true and false at the same time", for

(Syll.) (a = 1)(a = 0) < (1 = 0) = 0.

But now, according to Ax. 10, we have 

(a = 1) + (a = 0) = a + a' = 1. 



^^We can see at once that this formula is not susceptible of a conceptual interpretation 
(C. I.); for, if a is a concept, (a = 1) is a proposition, and we would then have a logical 
equality (identity) between a concept and a proposition, which is absurd. 




"A proposition is either true or false". From these two formulas combined 
we deduce directly that the propositions (a = 1) and (a = 0) are contradictory, 

(a ≠ 1) = (a = 0), (a ≠ 0) = (a = 1).

From the point of view of calculation Ax. 10 makes it possible to reduce 
to its first member every equality whose second member is 1, and to transform 
inequalities into equalities. Of course these equalities and inequalities must have 
propositions as their members. Nevertheless all the formulas of this section 
are also valid for classes in the particular case where the universe of discourse 
contains only one element, for then there are no classes but 0 and 1. In short,
the special calculus of propositions is equivalent to the calculus of classes when
the classes can possess only the two values 0 and 1.

0.57 Equivalence of an Implication and an Alternative

The fundamental equivalence 

(a < b) = (a' + b = 1)

gives rise, by Ax. 10, to the equivalence

(a < b) = (a' + b),

which is no less fundamental in the calculus of propositions. To say that a
implies b is the same as affirming "not-a or b", i.e., "either a is false or b is true."
This equivalence is often employed in every day conversation.

Corollary. — For any equality, we have the equivalence

(a = b) = ab + a'b'.

Demonstration:

(a = b) = (a < b)(b < a) = (a' + b)(b' + a) = ab + a'b'.

"To affirm that two propositions are equal (equivalent) is the same as stating 
that either both are true or both are false". 

The fundamental equivalence established above has important consequences 
which we shall enumerate. 

In the first place, it makes it possible to reduce secondary, tertiary, etc., 
propositions to primary propositions, or even to sums (alternatives) of elemen- 
tary propositions. For it makes it possible to suppress the copula of any proposi- 
tion, and consequently to lower its order of complexity. An implication (A < B),
in which A and B represent propositions more or less complex, is reduced to
the sum A' + B, in which only copulas within A and B appear, that is, propo-
sitions of an inferior order. Likewise an equality (A = B) is reduced to the sum
(AB + A'B') which is of a lower order.






We know that the principle of composition makes it possible to combine 
several simultaneous inclusions or equalities, but we cannot combine alternative 
inclusions or equalities, or at least the result is not equivalent to their alternative 
but is only a consequence of it. In short, we have only the implications 

(a < c) + (b < c) < (ab < c),
(c < a) + (c < b) < (c < a + b),

which, in the special cases where c = 0 and c = 1, become

(a = 0) + (b = 0) < (ab = 0),
(a = 1) + (b = 1) < (a + b = 1).

In the calculus of classes, the converse implications are not valid, for, from
the statement that the class ab is null, we cannot conclude that one of the
classes a or b is null (they can be not-null and still not have any element in
common); and from the statement that the sum (a + b) is equal to 1 we cannot
conclude that either a or b is equal to 1 (these classes can together comprise all
the elements of the universe without any of them alone comprising all). But
these converse implications are true in the calculus of propositions

(ab < c) < (a < c) + (b < c),
(c < a + b) < (c < a) + (c < b);

for they are deduced from the equivalence established above, or rather we may
deduce from it the corresponding equalities which imply them,

(1) (ab < c) = (a < c) + (b < c),

(2) (c < a + b) = (c < a) + (c < b).

Demonstration: 

(1) (ab < c) = a' + b' + c,

(a < c) + (b < c) = (a' + c) + (b' + c) = a' + b' + c;

(2) (c < a + b) = c' + a + b,
(c < a) + (c < b) = (c' + a) + (c' + b) = c' + a + b.

In the special cases where c = 0 and c = 1 respectively, we find

(3) (ab = 0) = (a = 0) + (b = 0),

(4) (a + b = 1) = (a = 1) + (b = 1).

P. I.: (1) To say that two propositions united imply a third is to say that 
one of them implies this third proposition. 

(2) To say that a proposition implies the alternative of two others is to say 
that it implies one of them. 






(3) To say that two propositions combined are false is to say that one of 
them is false. 

(4) To say that the alternative of two propositions is true is to say that one 
of them is true. 

The paradoxical character of the first three of these statements will be noted 
in contrast to the self-evident character of the fourth. These paradoxes are 
explained, on the one hand, by the special axiom which states that a proposition 
is either true or false; and, on the other hand, by the fact that the false implies 
the true and that only the false is not implied by the true. For instance, if both 
premises in the first statement are true, each of them implies the consequence, 
and if one of them is false, it implies the consequence (true or false). In the 
second, if the alternative is true, one of its terms must be true, and consequently 
will, like the alternative, be implied by the premise (true or false). Finally, in 
the third, the product of two propositions cannot be false unless one of them is 
false, for, if both were true, their product would be true (equal to 1). 
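Formulas (1)-(4) are truth-functional and can be verified by a truth table. The
Python sketch below encodes the implication a < b as a <= b on the values 0, 1
and checks the fundamental equivalence together with (1) and (2); (3) and (4)
are their special cases c = 0 and c = 1.

    from itertools import product

    imp = lambda p, q: int(p <= q)          # the copula < between propositions
    for a, b, c in product((0, 1), repeat=3):
        assert imp(a, b) == max(1 - a, b)                       # (a < b) = a' + b
        assert imp(a * b, c) == max(imp(a, c), imp(b, c))       # formula (1)
        assert imp(c, max(a, b)) == max(imp(c, a), imp(c, b))   # formula (2)
    print("formulas (1) and (2), hence (3) and (4), verified")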

0.58 Law of Importation and Exportation 

The fundamental equivalence (a < b) = a' + b has many other interesting
consequences. One of the most important of these is the law of importation and 
exportation, which is expressed by the following formula: 

[a < (b < c)] = (ab < c).

"To say that if a is true b implies c, is to say that a and b imply c". 

This equality involves two converse implications: If we infer the second mem- 
ber from the first, we import into the implication {b < c) the hypothesis or 
condition a; if we infer the first member from the second, we, on the contrary, 
export from the implication {ab < c) the hypothesis a. 

Demonstration: 

[a < (b < c)] = a' + (b < c) = a' + b' + c,
(ab < c) = (ab)' + c = a' + b' + c.

Cor. 1. — Obviously we have the equivalence 

[a < (b < c)] = [b < (a < c)],

since both members are equal to (ab < c), by the commutative law of multipli-
cation. 

Cor. 2. — We have also 

[a < (a < b)] = (a < b),

for, by the law of importation and exportation,

[a < (a < b)] = (aa < b) = (a < b).






If we apply the law of importation to the two following formulas, of which the 
first results from the principle of identity and the second expresses the principle 
of contraposition, 

(a < b) < (a < b), (a < b) < (b' < a'),

we obtain the two formulas

(a < b)a < b, (a < b)b' < a',

which are the two types of hypothetical reasoning: "If a implies b, and if a is
true, b is true" (modus ponens); "If a implies b, and if b is false, a is false" (modus
tollens).

Remark. These two formulas could be directly deduced by the principle of 
assertion, from the following 

(a < b)(a = 1) < (b = 1),
(a < b)(b = 0) < (a = 0),

which are not dependent on the law of importation and which result from the 
principle of the syllogism. 

From the same fundamental equivalence, we can deduce several paradoxical 
formulas: 

1. a < (b < a), a' < (a < b).

"If a is true, a is implied by any proposition b; if a is false, a implies any
proposition b". This agrees with the known properties of 0 and 1.



2. a < [(a < b) < b], a' < [(b < a) < b'].

"If a is true, then 'a implies b' implies b; if a is false, then 'b implies a' implies
not-b."

These two formulas are other forms of hypothetical reasoning (modus ponens
and modus tollens).



3. [(a < b) < a] = a,^^ [(b < a) < a'] = a'.

"To say that, if a implies b, a is true, is the same as affirming a; to say that, 
if b implies a, a is false, is the same as denying a". 
Demonstration: 

[(a < b) < a] = (a' + b < a) = ab' + a = a,
[(b < a) < a'] = (b' + a < a') = a'b + a' = a'.



^^This formula is Bertrand Russell's "principle of reduction". See The Principles of 
Mathematics, Vol. I, p. 17 (Cambridge, 1903). 






In formulas (1) and (3), in which b is any term at all, we might introduce
the sign ∏ with respect to b. In the following formula, it becomes necessary to
make use of this sign.

4. ∏_x {[a < (b < x)] < x} = ab.

Demonstration:

{[a < (b < x)] < x} = {[a' + (b < x)] < x}

= [(a' + b' + x) < x] = abx' + x = ab + x.

We must now form the product ∏_x (ab + x), where x can assume every value,
including 0 and 1. Now, it is clear that the part common to all the terms of
the form (ab + x) can only be ab. For, (1) ab is contained in each of the sums
(ab + x) and therefore in the part common to all; (2) the part common to all the
sums (ab + x) must be contained in (ab + 0), that is, in ab. Hence this common
part is equal to ab,^^ which proves the theorem.
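Both the law of importation and exportation and formula 4 can be checked on
the two truth-values; in the sketch below the product ∏_x is simply a minimum
over x = 0, 1.

    from itertools import product

    imp = lambda p, q: int(p <= q)
    for a, b, c in product((0, 1), repeat=3):
        assert imp(a, imp(b, c)) == imp(a * b, c)      # [a < (b < c)] = (ab < c)
    for a, b in product((0, 1), repeat=2):
        prod = min(imp(imp(a, imp(b, x)), x) for x in (0, 1))
        assert prod == a * b                           # formula 4
    print("importation/exportation and formula 4 verified")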

0.59 Reduction of Inequalities to Equalities 

As we have said, the principle of assertion enables us to reduce inequalities to 
equalities by means of the following formulas: 

(a ≠ 0) = (a = 1), (a ≠ 1) = (a = 0),

(a ≠ b) = (a = b').

For,

(a ≠ b) = (ab' + a'b ≠ 0) = (ab' + a'b = 1) = (a = b').

Consequently, we have the paradoxical formula

(a ≠ b) = (a = b').

This is easily understood, for, whatever the proposition 6, either it is true 
and its negative is false, or it is false and its negative is true. Now, whatever 
the proposition a may be, it is true or false; hence it is necessarily equal either 
to b or to b' . Thus to deny an equality (between propositions) is to affirm the 
opposite equality. 



^^This argument is general and from it we can deduce the formula

∏_x (a + x) = a,

whence may be derived the correlative formula

∑_x ax = a.






Thence it results that, in the calculus of propositions, we do not need to 
take inequalities into consideration — a fact which greatly simplifies both theory 
and practice. Moreover, just as we can combine alternative equalities, we can 
also combine simultaneous inequalities, since they are reducible to equalities. 

For, from the formulas previously established (§0.57)

(ab = 0) = (a = 0) + (b = 0),
(a + b = 1) = (a = 1) + (b = 1),

we deduce by contraposition

(a ≠ 0)(b ≠ 0) = (ab ≠ 0),
(a ≠ 1)(b ≠ 1) = (a + b ≠ 1).



These two formulas, moreover, according to what we have just said, are 
equivalent to the known formulas 

(a = 1)(b = 1) = (ab = 1),
(a = 0)(b = 0) = (a + b = 0).

Therefore, in the calculus of propositions, we can solve all simultaneous 
systems of equalities or inequalities and all alternative systems of equalities or 
inequalities, which is not possible in the calculus of classes. To this end, it is 
necessary only to apply the following rule: 

First reduce the inclusions to equalities and the non-inclusions to inequali- 
ties; then reduce the equalities so that their second members will be 1, and the 
inequalities so that their second members will be 0, and transform the latter 
into equalities having 1 for a second member; finally, suppress the second mem- 
bers 1 and the signs of equality, i.e., form the product of the first members of 
the simultaneous equalities and the sum of the first members of the alternative 
equalities, retaining the parentheses. 
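As a small illustration of the rule (an example of our own, not one of the text's),
take the simultaneous system (a < b)(a ≠ 0): the inclusion becomes the equality
a' + b = 1, the inequality becomes the equality a = 1, and suppressing the second
members and multiplying the first members leaves the single proposition
(a' + b)a = ab. The sketch below checks this reduction on the truth-values.

    from itertools import product

    imp = lambda p, q: int(p <= q)
    for a, b in product((0, 1), repeat=2):
        system = imp(a, b) and a != 0              # the two premises together
        reduced = max(1 - a, b) * a                # (a' + b) a = ab
        assert int(system) == reduced == a * b
    print("the system (a < b)(a ≠ 0) reduces to the single assertion ab")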

0.60 Conclusion 

The foregoing exposition is far from being exhaustive; it does not pretend to be 
a complete treatise on the algebra of logic, but only undertakes to make known 
the elementary principles and theories of that science. The algebra of logic is 
an algorithm with laws peculiar to itself. In some phases it is very analogous to 
ordinary algebra, and in others it is very widely different. For instance, it does 
not recognize the distinction of degrees; the laws of tautology and absorption 
introduce into it great simplifications by excluding from it numerical coefficients. 
It is a formal calculus which can give rise to all sorts of theories and problems, 
and is susceptible of an almost infinite development. 

But at the same time it is a restricted system, and it is important to bear 
in mind that it is far from embracing all of logic. Properly speaking, it is only 
the algebra of classical logic. Like this logic, it remains confined to the domain 






circumscribed by Aristotle, namely, the domain of the relations of inclusion 
between concepts and the relations of implication between propositions. It is 
true that classical logic (even when shorn of its errors and superfluities) was 
much more narrow than the algebra of logic. It is almost entirely contained 
within the bounds of the theory of the syllogism whose limits to-day appear 
very restricted and artificial. Nevertheless, the algebra of logic simply treats,
with much more breadth and universality, problems of the same order; it is at 
bottom nothing else than the theory of classes or aggregates considered in their 
relations of inclusion or identity. Now logic ought to study many other kinds of 
concepts than generic concepts (concepts of classes) and many other relations 
than the relation of inclusion (of subsumption) between such concepts. It ought, 
in short, to develop into a logic of relations, which Leibniz foresaw, which 
Peirce and Schroder founded, and which Peano and Russell seem to 
have established on definite foundations.

While classical logic and the algebra of logic are of hardly any use to math- 
ematics, mathematics, on the other hand, finds in the logic of relations its
concepts and fundamental principles; the true logic of mathematics is the logic 
of relations. The algebra of logic itself arises out of pure logic considered as 
a particular mathematical theory, for it rests on principles which have been 
implicitly postulated and which are not susceptible of algebraic or symbolic ex- 
pression because they are the foundation of all symbolism and of all the logical 
calculus. ^^ Accordingly, we can say that the algebra of logic is a mathematical 
logic by its form and by its method, but it must not be mistaken for the logic 
of mathematics. 



^^The principle of deduction and the principle of substitution. See the author's Manuel 
de Logistique, Chapter 1, §§ 2 and 3 [not published], and Les Principes des Mathematiques, 
Chapter 1, A. 






Index 



Absorption, Law of 
Absurdity, Type of 
Addition, and multiplication. Logi- 
cal 

and multiplication. Modulus of 

and multiplication. Theorems on 

Logical, not disjunctive 
Affirmative propositions 
Algebra, of logic an algorithm 

of logic compared to mathemat- 
ical algebra 

of thought 
Algorithm, Algebra of logic an 
Alphabet of human thought 
Alternative 

affirmation 

Equivalence of an implication 
and an 
Antecedent 
Aristotle 

Assertion, Principle of 
Assertions, Number of possible 
Axioms 

Baldwin 
Boole 

Problem of 
Bryan, William Jennings 

Calculus, Infinitesimal 

Logical 

ratiocinator 
Cantor, Georg 
Categorical syllogism 
Cause 
Causes, Forms of 

Law of 



Sixteen 

Table of 
Characters 
Classes, Calculus of 
Classification of dichotomy 
Commutativity 
Composition, Principle of 
Concepts, Calculus of 
Condition 

Necessary and sufficient 

Necessary but not sufficient 

of impossibility and indetermi- 
nation 
Connaissances 
Consequence 
Consequences, Forms of 

Law of 

of the syllogism 

Sixteen 

Table of 
Consequent 
Constituents 

Properties of 
Contradiction, Principle of 
Contradictory propositions 

terms 
Contraposition, Law of 

Principle of 
Council, Members of 
Couturat, v 

Dedekind 
Deduction 

Principle of 
Definition, Theory of 
De Morgan 

Formulas of 






Descartes 
Development 

Law of 

of logical functions 

of mathematics 

of symbolic logic 
Diagrams of Venn, Geometrical 
Dichotomy, Classification of 
Disjunctive, Logical addition not 

sums 
Distributive law 
Double inclusion 

expressed by an indeterminate 

Negative of the 
Double negation 
Duality, Law of 

Economy of mental effort 
Elimination of known terms 

of middle terms 

of unknowns 

Resultant of 

Rule for resultant of 
Equalities, Formulas for transform- 
ing inclusions into 

Reduction of inequalities to 
Equality a primitive idea 

Definition of 

Notion of 
Equation, and an inequation 

Throwing into an 
Equations, Solution of 
Excluded middle. Principle of 
Exclusion, Principle of 
Exclusive, Mutually 
Existence, Postulate of 
Exhaustion, Principle of 
Exhaustive, Collectively 

Forms, Law of 

of consequences and causes 
Frege 

Symbolism of 
Functions 

Development of logical 

Integral 



Limits of 

Logical 

of variables 

Properties of developed 

Propositional 

Sums and products of 

Values of 

Hôpital, Marquis de l'
Huntington, E. V 
Hypothesis 
Hypothetical arguments 

reasoning 

syllogism 

Ideas, Simple and complex 
Identity 

Principle of 

Type of 
Ideography 
Implication 

and an alternative. Equivalence 
of an 

Relations of 
Importation and exportation. Law 

of 
Impossibility, Condition of 
Inclusion 

a primitive idea 

Double 

expressed by an indeterminate 

Negative of the double 

Relation of 
Inclusions into equalities. Formulas 

for transforming 
Indeterminate 

Inclusion expressed by an 
Indetermination 

Condition of 
Inequalities, to equalities. Reduction 
of 

Transformation of non-inclusions 
and 
Inequation, Equation and an 

Solution of an 
Infinitesimal calculus 






Integral function 
Interpretations of the calculus 

Jevons 

Logical piano of 
Johnson, W. E 

Known terms (connaissances) 

Ladd-Franklin, Mrs 

Lambert 

Leibniz 

Limits of a function 

MacColl

MacFarlane, Alexander 

Mathematical function 
logic 

Mathematics, Philosophy a univer- 
sal 

Maxima of discourse 

Middle, Principle of excluded 
terms. Elimination of 

Minima of discourse 

Mitchell, O.

Modulus of addition and multiplica- 
tion 

Modus ponens 

Modus tollens 

Müller, Eugen

Multiplication. See s. v. "Addi- 
tion." 

Negation 

defined 

Double 

Duality not derived from 
Negative 

of the double inclusion 

propositions 
Non-inclusions and inequalities. Trans- 
formation of 
Notation 
Null-class 
Number of possible assertions 

One, Definition of. 



Particular propositions, 
Peano, 
Peirce, C. S., 

Philosophy a universal mathemat- 
ics. 
Piano of Jevons, Logical, 
Poretsky, 

Formula of. 

Method of. 
Predicate, 
Premise, 

Primary proposition. 
Primitive idea. 

Equality a. 

Inclusion a. 
Product, Logical, 
Propositions, 

Calculus of. 

Contradictory, 

Formulas peculiar to the calcu- 
lus of. 

Implication between, 

reduced to lower orders. 

Universal and particular. 
Reciprocal, 

Reductio ad absurdum,
Reduction, Principle of. 
Relations, Logic of. 
Relatives, Logic of. 
Resultant of elimination. 

Rule for, 
Russell, B., 

Schroder, 

Theorem of. 
Secondary proposition. 
Simplification, Principle of. 
Simultaneous affirmation. 
Solution of equations, 

of inequations. 
Subject, 

Substitution, Principle of, 
Subsumption, 
Summand, 
Sums, 

and products of functions. 






Disjunctive, 

Logical, 
Syllogism, Principle of the. 

Theory of the. 
Symbolic logic. 

Development of. 
Symbolism in mathematics. 
Symbols, Origin of. 
Symmetry, 
Tautology, Law of 
Term, 
Theorem, 
Thesis, 
Thought, 

Algebra of. 

Alphabet of human. 

Economy of. 
Transformation 

of inclusions into equalities, 

of inequalities into equalities, 

of non-inclusions and inequali- 
ties. 

Universal 

characteristic of Leibniz, 
propositions. 

Universe of discourse. 
Unknowns, Elimination of. 

Variables, Functions of, 
Venn, John, 

Geometrical diagrams of,

Mechanical device of. 

Problem of, 
Viete, 
Voigt, 

Whitehead, A. N., 
Whole, Logical, 

Zero, 

Definition of. 
Logical, 






End of the Project Gutenberg EBook of The Algebra of Logic, by Louis Couturat 

*** END OF THIS PROJECT GUTENBERG EBOOK THE ALGEBRA OF LOGIC *** 

***** This file should be named 10836-pdf .pdf or 10836-pdf .zip ***** 
This and all associated files of various formats will be found in: 
http : //www . gutenberg . net/1/0/8/3/10836/ 

Produced by David Starner, Arno Peters, Susan Skinner 
and the Online Distributed Proofreading Team. 

Updated editions will replace the previous one- -the old editions 
will be renamed. 

Creating the works from public domain print editions means that no 
one owns a United States copyright in these works, so the Foundation 
(and you!) can copy and distribute it in the United States without 
permission and without paying copyright royalties. Special rules, 
set forth in the General Terms of Use part of this license, apply to 
copying and distributing Project Gutenberg-tm electronic works to 
protect the PROJECT GUTENBERG-tm concept and trademark. Project 
Gutenberg is a registered trademark, and may not be used if you 
charge for the eBooks, unless you receive specific permission. If you 
do not charge anything for copies of this eBook, complying with the 
rules is very easy. You may use this eBook for nearly any purpose 
such as creation of derivative works, reports, performances and 
research. They may be modified and printed and given away--you may do
practically ANYTHING with public domain eBooks. Redistribution is 
subject to the trademark license, especially commercial 
redistribution.



*** START: FULL LICENSE *** 

THE FULL PROJECT GUTENBERG LICENSE 

PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK 

To protect the Project Gutenberg-tm mission of promoting the free 
distribution of electronic works, by using or distributing this work 
(or any other work associated in any way with the phrase "Project 
Gutenberg") , you agree to comply with all the terms of the Full Project 
Gutenberg-tm License (available with this file or online at 
http://gutenberg.net/license).



Section 1. General Terms of Use and Redistributing Project Gutenberg-tm 
electronic works 

1.A. By reading or using any part of this Project Gutenberg-tm
electronic work, you indicate that you have read, understand, agree to 
and accept all the terms of this license and intellectual property 
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or destroy 
all copies of Project Gutenberg-tm electronic works in your possession. 
If you paid a fee for obtaining a copy of or access to a Project 
Gutenberg-tm electronic work and you do not agree to be bound by the 
terms of this agreement, you may obtain a refund from the person or 
entity to whom you paid the fee as set forth in paragraph 1.E.8.

l.B. "Project Gutenberg" is a registered trademark. It may only be 
used on or associated in any way with an electronic work by people who 
agree to be bound by the terms of this agreement. There are a few 
things that you can do with most Project Gutenberg-tm electronic works 
even without complying with the full terms of this agreement. See
paragraph 1.C below. There are a lot of things you can do with Project
Gutenberg-tm electronic works if you follow the terms of this agreement 
and help preserve free future access to Project Gutenberg-tm electronic 
works. See paragraph 1.E below.

1.C. The Project Gutenberg Literary Archive Foundation ("the Foundation"
or PGLAF), owns a compilation copyright in the collection of Project
Gutenberg-tm electronic works. Nearly all the individual works in the 
collection are in the public domain in the United States. If an 
individual work is in the public domain in the United States and you are 
located in the United States, we do not claim a right to prevent you from 
copying, distributing, performing, displaying or creating derivative 
works based on the work as long as all references to Project Gutenberg 
are removed. Of course, we hope that you will support the Project 
Gutenberg-tm mission of promoting free access to electronic works by 
freely sharing Project Gutenberg-tm works in compliance with the terms of
this agreement for keeping the Project Gutenberg-tm name associated with
the work. You can easily comply with the terms of this agreement by 
keeping this work in the same format with its attached full Project 
Gutenberg-tm License when you share it without charge with others.

1.D. The copyright laws of the place where you are located also govern
what you can do with this work. Copyright laws in most countries are in 
a constant state of change. If you are outside the United States, check 
the laws of your country in addition to the terms of this agreement 
before downloading, copying, displaying, performing, distributing or 
creating derivative works based on this work or any other Project 
Gutenberg-tm work. The Foundation makes no representations concerning 
the copyright status of any work in any country outside the United 
States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other immediate
access to, the full Project Gutenberg-tm License must appear prominently 
whenever any copy of a Project Gutenberg-tm work (any work on which the 
phrase "Project Gutenberg" appears, or with which the phrase "Project 
Gutenberg" is associated) is accessed, displayed, performed, viewed, 
copied or distributed: 

This eBook is for the use of anyone anywhere at no cost and with 
almost no restrictions whatsoever. You may copy it, give it away or 
re-use it under the terms of the Project Gutenberg License included 
with this eBook or online at www.gutenberg.net 

1.E.2. If an individual Project Gutenberg-tm electronic work is derived
from the public domain (does not contain a notice indicating that it is 
posted with permission of the copyright holder), the work can be copied
and distributed to anyone in the United States without paying any fees 
or charges. If you are redistributing or providing access to a work 
with the phrase "Project Gutenberg" associated with or appearing on the 
work, you must comply either with the requirements of paragraphs 1.E.1
through 1.E.7 or obtain permission for the use of the work and the
Project Gutenberg-tm trademark as set forth in paragraphs 1.E.8 or
1.E.9.

1.E.3. If an individual Project Gutenberg-tm electronic work is posted
with the permission of the copyright holder, your use and distribution 
must comply with both paragraphs 1.E.1 through 1.E.7 and any additional
terms imposed by the copyright holder. Additional terms will be linked 
to the Project Gutenberg-tm License for all works posted with the 
permission of the copyright holder found at the beginning of this work. 

1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm
License terms from this work, or any files containing a part of this 
work or any other work associated with Project Gutenberg-tm. 



1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without 
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project 
Gutenberg-tm License. 

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form, including any 
word processing or hypertext form. However, if you provide access to or 
distribute copies of a Project Gutenberg-tm work in a format other than 
"Plain Vanilla ASCII" or other format used in the official version 
posted on the official Project Gutenberg-tm web site (www.gutenberg.net), 
you must, at no additional cost, fee or expense to the user, provide a 
copy, a means of exporting a copy, or a means of obtaining a copy upon 
request, of the work in its original "Plain Vanilla ASCII" or other 
form. Any alternate format must include the full Project Gutenberg-tm 
License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg-tm works 
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg-tm electronic works provided 
that 

- You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg-tm works calculated using the method
you already use to calculate your applicable taxes. The fee is
owed to the owner of the Project Gutenberg-tm trademark, but he 
has agreed to donate royalties under this paragraph to the 
Project Gutenberg Literary Archive Foundation. Royalty payments 
must be paid within 60 days following each date on which you 
prepare (or are legally required to prepare) your periodic tax 
returns. Royalty payments should be clearly marked as such and 
sent to the Project Gutenberg Literary Archive Foundation at the 
address specified in Section 4, "Information about donations to 
the Project Gutenberg Literary Archive Foundation." 

- You provide a full refund of any money paid by a user who notifies
you in writing (or by e-mail) within 30 days of receipt that s/he
does not agree to the terms of the full Project Gutenberg-tm 
License. You must require such a user to return or 
destroy all copies of the works possessed in a physical medium 
and discontinue all use of and all access to other copies of 
Project Gutenberg-tm works. 

- You provide, in accordance with paragraph 1.F.3, a full refund of any
money paid for a work or a replacement copy, if a defect in the
electronic work is discovered and reported to you within 90 days
of receipt of the work. 

- You comply with all other terms of this agreement for free 
distribution of Project Gutenberg-tm works. 

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg-tm
electronic work or group of works on different terms than are set 
forth in this agreement, you must obtain permission in writing from 
both the Project Gutenberg Literary Archive Foundation and Michael 
Hart, the owner of the Project Gutenberg-tm trademark. Contact the 
Foundation as set forth in Section 3 below. 

1.F.

1.F.1. Project Gutenberg volunteers and employees expend considerable
effort to identify, do copyright research on, transcribe and proofread 
public domain works in creating the Project Gutenberg-tm 
collection. Despite these efforts, Project Gutenberg-tm electronic
works, and the medium on which they may be stored, may contain 
"Defects," such as, but not limited to, incomplete, inaccurate or 
corrupt data, transcription errors, a copyright or other intellectual 
property infringement, a defective or damaged disk or other medium, a 
computer virus, or computer codes that damage or cannot be read by 
your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the "Right
of Replacement or Refund" described in paragraph 1.F.3, the Project
Gutenberg Literary Archive Foundation, the owner of the Project 
Gutenberg-tm trademark, and any other party distributing a Project 
Gutenberg-tm electronic work under this agreement, disclaim all 
liability to you for damages, costs and expenses, including legal 
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT 
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE 
PROVIDED IN PARAGRAPH F3. YOU AGREE THAT THE FOUNDATION, THE 
TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE 
LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR 
INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH 
DAMAGE. 

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a
defect in this electronic work within 90 days of receiving it, you can 
receive a refund of the money (if any) you paid for it by sending a 
written explanation to the person you received the work from. If you 
received the work on a physical medium, you must return the medium with 
your written explanation. The person or entity that provided you with 
the defective work may elect to provide a replacement copy in lieu of a 
refund. If you received the work electronically, the person or entity 
providing it to you may choose to give you a second opportunity to 
receive the work electronically in lieu of a refund. If the second copy 



is also defective, you may demand a refund in writing without further 
opportunities to fix the problem. 

1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you 'AS-IS', WITH NO OTHER 
WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO 
WARRANTIES OF MERCHANTIBILITY OR FITNESS FOR ANY PURPOSE. 

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages. 
If any disclaimer or limitation set forth in this agreement violates the 
law of the state applicable to this agreement, the agreement shall be 
interpreted to make the maximum disclaimer or limitation permitted by 
the applicable state law. The invalidity or unenforceability of any 
provision of this agreement shall not void the remaining provisions. 

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the
trademark owner, any agent or employee of the Foundation, anyone 
providing copies of Project Gutenberg-tm electronic works in accordance 
with this agreement, and any volunteers associated with the production, 
promotion and distribution of Project Gutenberg-tm electronic works, 
harmless from all liability, costs and expenses, including legal fees, 
that arise directly or indirectly from any of the following which you do 
or cause to occur: (a) distribution of this or any Project Gutenberg-tm 
work, (b) alteration, modification, or additions or deletions to any 
Project Gutenberg-tm work, and (c) any Defect you cause. 



Section 2. Information about the Mission of Project Gutenberg-tm 

Project Gutenberg-tm is synonymous with the free distribution of 
electronic works in formats readable by the widest variety of computers 
including obsolete, old, middle-aged and new computers. It exists 
because of the efforts of hundreds of volunteers and donations from 
people in all walks of life. 

Volunteers and financial support to provide volunteers with the 
assistance they need, is critical to reaching Project Gutenberg-tm's
goals and ensuring that the Project Gutenberg-tm collection will 
remain freely available for generations to come. In 2001, the Project 
Gutenberg Literary Archive Foundation was created to provide a secure 
and permanent future for Project Gutenberg-tm and future generations. 
To learn more about the Project Gutenberg Literary Archive Foundation 
and how your efforts and donations can help, see Sections 3 and 4 
and the Foundation web page at http://www.pglaf.org. 



Section 3. Information about the Project Gutenberg Literary Archive 
Foundation 



The Project Gutenberg Literary Archive Foundation is a non profit 
501(c)(3) educational corporation organized under the laws of the 
state of Mississippi and granted tax exempt status by the Internal 
Revenue Service. The Foundation's EIN or federal tax identification 
number is 64-6221541. Its 501(c)(3) letter is posted at 
http://pglaf.org/fundraising. Contributions to the Project Gutenberg 
Literary Archive Foundation are tax deductible to the full extent 
permitted by U.S. federal laws and your state's laws. 

The Foundation's principal office is located at 4557 Melan Dr. S. 
Fairbanks, AK, 99712., but its volunteers and employees are scattered 
throughout numerous locations. Its business office is located at 
809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887, email 
business@pglaf.org. Email contact links and up to date contact 
information can be found at the Foundation's web site and official 
page at http://pglaf.org

For additional contact information: 
Dr. Gregory B. Newby 
Chief Executive and Director 
gbnewby@pglaf.org

Section 4. Information about Donations to the Project Gutenberg 
Literary Archive Foundation 

Project Gutenberg-tm depends upon and cannot survive without wide 
spread public support and donations to carry out its mission of 
increasing the number of public domain and licensed works that can be 
freely distributed in machine readable form accessible by the widest 
array of equipment including outdated equipment. Many small donations 
($1 to $5,000) are particularly important to maintaining tax exempt 
status with the IRS. 

The Foundation is committed to complying with the laws regulating 
charities and charitable donations in all 50 states of the United 
States. Compliance requirements are not uniform and it takes a 
considerable effort, much paperwork and many fees to meet and keep up 
with these requirements. We do not solicit donations in locations 
where we have not received written confirmation of compliance. To 
SEND DONATIONS or determine the status of compliance for any 
particular state visit http://pglaf.org 

While we cannot and do not solicit contributions from states where we 
have not met the solicitation requirements, we know of no prohibition 
against accepting unsolicited donations from donors in such states who 
approach us with offers to donate. 

International donations are gratefully accepted, but we cannot make 
any statements concerning tax treatment of donations received from 
outside the United States. U.S. laws alone swamp our small staff. 



Please check the Project Gutenberg Web pages for current donation 
methods and addresses. Donations are accepted in a number of other 
ways including checks, online payments and credit card
donations. To donate, please visit: http://pglaf.org/donate 



Section 5. General Information About Project Gutenberg-tm electronic 
works . 

Professor Michael S. Hart is the originator of the Project Gutenberg-tm 
concept of a library of electronic works that could be freely shared 
with anyone. For thirty years, he produced and distributed Project 
Gutenberg-tm eBooks with only a loose network of volunteer support.

Project Gutenberg-tm eBooks are often created from several printed 
editions, all of which are confirmed as Public Domain in the U.S. 
unless a copyright notice is included. Thus, we do not necessarily 
keep eBooks in compliance with any particular paper edition. 

Each eBook is in a subdirectory of the same number as the eBook's 
eBook number, often in several formats including plain vanilla ASCII, 
compressed (zipped), HTML and others. 

Corrected EDITIONS of our eBooks replace the old file and take over 
the old filename and etext number. The replaced older file is renamed. 
VERSIONS based on separate sources are treated as new eBooks receiving 
new filenames and etext numbers. 

Most people start at our Web site which has the main PG search facility: 

http://www.gutenberg.net

This Web site includes information about Project Gutenberg-tm, 
including how to make donations to the Project Gutenberg Literary 
Archive Foundation, how to help produce our new eBooks, and how to 
subscribe to our email newsletter to hear about new eBooks. 

EBooks posted prior to November 2003, with eBook numbers BELOW #10000, 
are filed in directories based on their release date. If you want to 
download any of these eBooks directly, rather than using the regular 
search system you may utilize the following addresses and just 
download by the etext year. For example: 

http://www.gutenberg.net/etext06

(Or /etext 05, 04, 03, 02, 01, 00, 99, 
98, 97, 96, 95, 94, 93, 92, 92, 91 or 90) 

EBooks posted since November 2003, with etext numbers OVER #10000, are 



filed in a different way. The year of a release date is no longer part 
of the directory path. The path is based on the etext number (which is 
identical to the filename). The path to the file is made up of single
digits corresponding to all but the last digit in the filename. For 
example an eBook of filename 10234 would be found at: 

http://www.gutenberg.net/1/0/2/3/10234

or filename 24689 would be found at: 

http://www.gutenberg.net/2/4/6/8/24689

An alternative method of locating eBooks: 
http://www.gutenberg.net/GUTINDEX.ALL