# Introduction to Chemical Physics


INTERNATIONAL SERIES IN PURE AND APPLIED PHYSICS
G. P. Harnwell, Consulting Editor
(Dr. Lee A. DuBridge was consulting editor of the series from 1939 to 1946.)

INTRODUCTION TO CHEMICAL PHYSICS

By J. C. Slater, Professor of Physics, Massachusetts Institute of Technology

First Edition, Seventh Impression
McGraw-Hill Book Company, Inc., New York and London, 1939

Copyright, 1939, by the McGraw-Hill Book Company, Inc. Printed in the United States of America. All rights reserved. This book, or parts thereof, may not be reproduced in any form without permission of the publishers.

## PREFACE

It is probably unfortunate that physics and chemistry ever were separated.
Chemistry is the science of atoms and of the way they combine. Physics deals with the interatomic forces and with the large-scale properties of matter resulting from those forces. So long as chemistry was largely empirical and nonmathematical, and physics had not learned how to treat small-scale atomic forces, the two sciences seemed widely separated. But with statistical mechanics and the kinetic theory on the one hand and physical chemistry on the other, the two sciences began to come together. Now that statistical mechanics has led to quantum theory and wave mechanics, with its explanations of atomic interactions, there is really nothing separating them any more. A few years ago, though their ideas were close together, their experimental methods were still quite different: chemists dealt with things in test tubes, making solutions, precipitating and filtering and evaporating, while physicists measured everything with galvanometers and spectroscopes. But even this distinction has disappeared, with more and more physical apparatus finding its way into chemical laboratories. A wide range of study is common to both subjects. The sooner we realize this the better. For want of a better name, since Physical Chemistry is already preempted, we may call this common field Chemical Physics. It is an overlapping field in which both physicists and chemists should be trained. There seems no valid reason why their training in it should differ. This book is an attempt to incorporate some of the material of this common field in a unified presentation.

What should be included in a discussion of chemical physics? Logically, we should start with fundamental principles. We should begin with mechanics, then present electromagnetic theory, and should work up to wave mechanics and quantum theory. By means of these we should study the structure of atoms and molecules.
Then we should introduce thermodynamics and statistical mechanics, so as to handle large collections of molecules. With all this fundamental material we could proceed to a discussion of different types of matter, in the solid, liquid, and gaseous phases, and to an explanation of its physical and chemical properties in terms of first principles. But if we tried to do all this, we should, in the first place, be writing several volumes which would include almost all of theoretical physics and chemistry; and in the second place no one but an experienced mathematician could handle the theory. For both of these reasons the author has compromised greatly in the present volume, so as to bring the material into reasonable compass and to make it intelligible to a reader with a knowledge of calculus and differential equations, but unfamiliar with the more difficult branches of mathematical physics.

In the matter of scope, most of the theoretical physics which forms a background to our subject has been omitted. Much of this is considered in the companion volume, "Introduction to Theoretical Physics," by Slater and Frank. The effort has been made in the present work to produce a book which is intelligible without studying theoretical physics first. This has been done principally for the benefit of chemists and others who wish to obtain the maximum knowledge of chemical physics with the minimum of theory. In the treatment of statistical mechanics only the most elementary use of mechanics is involved. For that reason it has not been possible to give a complete discussion, although the parts used in the calculations have been considered. Statistical mechanics has been introduced from the standpoint more of the quantum theory than of classical theory, but the quantum theory that is used is of a very elementary sort. It has seemed desirable to omit wave mechanics, which demands more advanced mathematical methods.
In discussing atomic and molecular structure and the nature of interatomic forces, descriptive use has been made of the quantum theory, but again no detailed use of it. Thus it is hoped that the reader with only a superficial acquaintance with modern atomic theory will be able to read the book without great difficulty, although, of course, the reader with a knowledge of quantum theory and wave mechanics will have a great advantage.

Finally, in the matter of arrangement the author has departed from the logical order in the interest of easy presentation. Logically one should probably begin with the structure of atoms and molecules, crystals and liquids and gases; then introduce the statistical principles that govern molecules in large numbers, and finally thermodynamics, which follows logically from statistics. Actually almost exactly the opposite order has been chosen. Thermodynamics and statistical mechanics come first. Then gases, solids, and liquids are treated on the basis of thermodynamics and statistics, with a minimum amount of use of a model. Finally atomic and molecular structure are introduced, together with a discussion of different types of substances, explaining their interatomic forces from quantum theory and their thermal and elastic behavior from our thermodynamic and statistical methods. In this way, the historical order is followed roughly, and, at least for chemists, it proceeds from what are probably the more familiar to the less familiar methods.

It is customary to write books either on thermodynamics or on statistical mechanics; this one combines both. It seems hardly necessary to apologize for this. Both have their places, and both are necessary in a complete presentation of chemical physics. An effort has been made to keep them separate, so that at any time the reader will be clear as to which method is being used.
In connection with thermodynamics, the method of Bridgman, which seems by far the most convenient for practical application, has been used. There is one question connected with thermodynamics, that of notation. The continental notation and the American chemical notation of Lewis and Randall are quite different. Each has its drawbacks. The author has chosen the compromise notation of the Joint Committee of the Chemical Society, the Faraday Society, and the Physical Society (all of England), which preserves the best points of both. It is hoped that this notation, which has a certain amount of international sanction, may become general among both physicists and chemists, whose problems are similar enough so that they surely can use the same language.

In a book like this, containing a number of different types of material, it is likely that some readers and teachers will want to use some parts, others to use other parts. An attempt has been made to facilitate such use by making chapters and sections independent of each other as far as possible. The book has been divided into three parts: Part I, Thermodynamics, Statistical Mechanics, and Kinetic Theory; Part II, Gases, Liquids, and Solids; Part III, Atoms, Molecules, and the Structure of Matter. The first part alone forms an adequate treatment of thermodynamics and statistical theory, and could be used by itself. Certain of its chapters, as Chap. V on the Fermi-Dirac and Einstein-Bose Statistics, Chap. VI on the Kinetic Method and the Approach to Thermal Equilibrium, and Chap. VII on Fluctuations, can be omitted without causing much difficulty in reading the following parts of the book (except for the chapters on metals, which depend on the Fermi-Dirac statistics). In Part II, most of Chap. IX on the Molecular Structure and Specific Heat of Polyatomic Gases, Chap. X on Chemical Equilibrium in Gases, parts of Chap. XII on Van der Waals' Equation and Chap. XIII on the Equation of State of Solids, Chap.
XV on The Specific Heat of Compounds, Chap. XVII on Phase Equilibrium in Binary Systems, and Chap. XVIII on Phase Changes of the Second Order are not necessary for what follows. In Part III, Chap. XIX on Radiation and Matter, Chap. XX on Ionization and Excitation of Atoms, and Chap. XXI on Atoms and the Periodic Table will be familiar to many readers. Much of the rest of this part is descriptive; one chapter does not depend on another, so that many readers may choose to omit a considerable portion, or all, of this material. It will be seen from this brief enumeration that selections from the book may be used in a variety of ways to serve the needs of courses less extensive than the whole book.

The author hopes that this book may serve in a minor way to fill the gap that has grown between physics and chemistry. This gap is a result of tradition and training, not of subject matter. Physicists and chemists are given quite different courses of instruction; the result is that almost no one is really competent in all the branches of chemical physics. If the coming generation of chemists or physicists could receive training, in the first place, in empirical chemistry, in physical chemistry, in metallurgy, and in crystal structure, and, in the second place, in theoretical physics, including mechanics and electromagnetic theory, and in particular in quantum theory, wave mechanics, and the structure of atoms and molecules, and finally in thermodynamics, statistical mechanics, and what we have called chemical physics, they would be far better scientists than those receiving the present training in either chemistry or physics alone.

The author wishes to indicate his indebtedness to several of his colleagues, particularly Professors B. E. Warren and W. B. Nottingham, who have read parts of the manuscript and made valuable comments.
His indebtedness to books is naturally very great, but most of them are mentioned in the list of suggested references at the end of this volume.

J. C. Slater
Cambridge, Massachusetts, September, 1939

## CONTENTS

Preface

PART I. THERMODYNAMICS, STATISTICAL MECHANICS, AND KINETIC THEORY

Chapter I. Heat as a Mode of Motion
1. The Conservation of Energy
2. Internal Energy, External Work, and Heat Flow
3. The Entropy and Irreversible Processes
4. The Second Law of Thermodynamics

Chapter II. Thermodynamics
1. The Equation of State
2. The Elementary Partial Derivatives
3. The Enthalpy, and Helmholtz and Gibbs Free Energies
4. Methods of Deriving Thermodynamic Formulas
5. General Classification of Thermodynamic Formulas
6. Comparison of Thermodynamic and Gas Scales of Temperature

Chapter III. Statistical Mechanics
1. Statistical Assemblies and the Entropy
2. Complexions and the Phase Space
3. Cells in the Phase Space and the Quantum Theory
4. Irreversible Processes
5. The Canonical Assembly

Chapter IV. The Maxwell-Boltzmann Distribution Law
1. The Canonical Assembly and the Maxwell-Boltzmann Distribution
2. Maxwell's Distribution of Velocities
3. The Equation of State and Specific Heat of Perfect Monatomic Gases
4. The Perfect Gas in a Force Field

Chapter V. The Fermi-Dirac and Einstein-Bose Statistics
1. The Molecular Phase Space
2. Assemblies in the Molecular Phase Space
3. The Fermi-Dirac Distribution Function
4. Thermodynamic Functions in the Fermi Statistics
5. The Perfect Gas in the Fermi Statistics
6. The Einstein-Bose Distribution Law

Chapter VI. The Kinetic Method and the Approach to Thermal Equilibrium
1.
The Effect of Molecular Collisions on the Distribution Function in the Boltzmann Statistics
2. The Effect of Collisions on the Entropy
3. The Constants in the Distribution Function
4. The Kinetic Method for Fermi-Dirac and Einstein-Bose Statistics

Chapter VII. Fluctuations
1. Energy Fluctuations in the Canonical Assembly
2. Distribution Functions for Fluctuations
3. Fluctuations of Energy and Density

PART II. GASES, LIQUIDS, AND SOLIDS

Chapter VIII. Thermodynamic and Statistical Treatment of the Perfect Gas and Mixtures of Gases
1. Thermodynamics of a Perfect Gas
2. Thermodynamics of a Mixture of Perfect Gases
3. Statistical Mechanics of a Perfect Gas in Boltzmann Statistics

Chapter IX. The Molecular Structure and Specific Heat of Polyatomic Gases
1. The Structure of Diatomic Molecules
2. The Rotations of Diatomic Molecules
3. The Partition Function for Rotation
4. The Vibration of Diatomic Molecules
5. The Partition Function for Vibration
6. The Specific Heats of Polyatomic Gases

Chapter X. Chemical Equilibrium in Gases
1. Rates of Reaction and the Mass Action Law
2. The Equilibrium Constant, and van't Hoff's Equation
3. Energies of Activation and the Kinetics of Reactions

Chapter XI. The Equilibrium of Solids, Liquids, and Gases
1. The Coexistence of Phases
2. The Equation of State
3. Entropy and Gibbs Free Energy
4. The Latent Heats and Clapeyron's Equation
5. The Integration of Clapeyron's Equation and the Vapor Pressure Curve
6. Statistical Mechanics and the Vapor Pressure Curve
7. Polymorphic Phases of Solids

Chapter XII. Van der Waals' Equation
1. Van der Waals' Equation
2. Isothermals of Van der Waals' Equation
3. Gibbs Free Energy and the Equilibrium of Phases for a Van der Waals Gas
4.
Statistical Mechanics and the Second Virial Coefficient
5. The Assumptions of Van der Waals' Equation
6. The Joule-Thomson Effect and Deviations from the Perfect Gas Law

Chapter XIII. The Equation of State of Solids
1. The Equation of State and Specific Heat of Solids
2. Thermodynamic Functions for Solids
3. The Statistical Mechanics of Solids
4. Statistical Mechanics of a System of Oscillators
5. Polymorphic Transitions

Chapter XIV. Debye's Theory of Specific Heats
1. Elastic Vibrations of a Continuous Solid
2. Vibrational Frequency Spectrum of a Continuous Solid
3. Debye's Theory of Specific Heats
4. Debye's Theory and the Parameter γ

Chapter XV. The Specific Heat of Compounds
1. Wave Propagation in a One-Dimensional Crystal Lattice
2. Waves in a One-Dimensional Diatomic Crystal
3. Vibration Spectra and Specific Heats of Polyatomic Crystals
4. Infrared Optical Spectra of Crystals

Chapter XVI. The Liquid State and Fusion
1. The Liquid Phase
2. The Latent Heat of Melting
3. The Entropy of Melting
4. Statistical Mechanics and Melting

Chapter XVII. Phase Equilibrium in Binary Systems
1. Types of Phases in Binary Systems
2. Energy and Entropy of Phases of Variable Composition
3. The Condition for Equilibrium between Phases
4. Phase Equilibrium between Mutually Insoluble Solids
5. Lowering of Melting Points of Solutions

Chapter XVIII. Phase Changes of the Second Order
1. Order-Disorder Transitions in Alloys
2. Equilibrium in Transitions of the Cu-Zn Type
3. Transitions of the Cu-Zn Type with Arbitrary Composition

PART III. ATOMS, MOLECULES, AND THE STRUCTURE OF MATTER

Chapter XIX. Radiation and Matter
1.
Black Body Radiation and the Stefan-Boltzmann Law
2. The Planck Radiation Law
3. Einstein's Hypothesis and the Interaction of Radiation and Matter

Chapter XX. Ionization and Excitation of Atoms
1. Bohr's Frequency Condition
2. The Kinetics of Absorption and Emission of Radiation
3. The Kinetics of Collision and Ionization
4. The Equilibrium of Atoms and Electrons

Chapter XXI. Atoms and the Periodic Table
1. The Nuclear Atom
2. Electronic Energy Levels of an Atom
3. The Periodic Table of the Elements

Chapter XXII. Interatomic and Intermolecular Forces
1. The Electrostatic Interactions between Rigid Atoms or Molecules at Large Distances
2. The Electrostatic or Coulomb Interactions between Overlapping Rigid Atoms
3. Polarization and Interatomic Forces
4. Exchange Interactions between Atoms and Molecules
5. Types of Chemical Substances

Chapter XXIII. Ionic Crystals
1. Structure of Simple Binary Ionic Compounds
2. Ionic Radii
3. Energy and Equation of State of Simple Ionic Lattices at the Absolute Zero
4. The Equation of State of the Alkali Halides
5. Other Types of Ionic Crystals
6. Polarizability and Unsymmetrical Structures

Chapter XXIV. The Homopolar Bond and Molecular Compounds
1. The Homopolar Bond
2. The Structure of Typical Homopolar Molecules
3. Gaseous and Liquid Phases of Homopolar Substances
4. Molecular Crystals

Chapter XXV. Organic Molecules and Their Crystals
1. Carbon Bonding in Aliphatic Molecules
2. Organic Radicals
3. The Double Bond, and the Aromatic Compounds
4. Organic Crystals

Chapter XXVI. Homopolar Bonds in the Silicates
1. The Silicon-Oxygen Structure
2. Silicon-Oxygen Chains
3.
Silicon-Oxygen Sheets
4. Three-Dimensional Silicon-Oxygen Structures

Chapter XXVII. Metals
1. Crystal Structures of Metals
2. Energy Relations in Metals
3. General Properties of Metals

Chapter XXVIII. Thermionic Emission and the Volta Effect
1. The Collisions of Electrons and Metals
2. The Equilibrium of a Metal and an Electron Gas
3. Kinetic Determination of Thermionic Emission
4. Contact Difference of Potential

Chapter XXIX. The Electronic Structure of Metals
1. The Electric Field within a Metal
2. The Free Electron Model of a Metal
3. The Free Electron Model and Thermionic Emission
4. The Free Electron Model and Electrical Conductivity
5. Electrons in a Periodic Force Field
6. Energy Bands, Conductors, and Insulators

Probable Values of the General Physical Constants
Suggested References
Index

# PART I. THERMODYNAMICS, STATISTICAL MECHANICS, AND KINETIC THEORY

## CHAPTER I. HEAT AS A MODE OF MOTION

Most of modern physics and chemistry is based on three fundamental ideas: first, matter is made of atoms and molecules, very small and very numerous; second, it is impossible in principle to observe details of atomic and molecular motions below a certain scale of smallness; and third, heat is mechanical motion of the atoms and molecules, on such a small scale that it cannot be completely observed. The first and third of these ideas are products of the last century, but the second, the uncertainty principle, the most characteristic result of the quantum theory, has arisen since 1900. By combining these three principles, we have the theoretical foundation for studying the branches of physics dealing with matter and chemical problems.

1.
The Conservation of Energy. From Newton's second law of motion one can prove immediately that the work done by an external force on a system during any motion equals the increase of kinetic energy of the system. This can be stated in the form

d(KE) = dW,   (1.1)

where KE stands for the kinetic energy, dW the infinitesimal element of work done on the system. Certain forces are called conservative; they have the property that the work done by them when the system goes from an initial to a final state depends only on the initial and final state, not on the details of the motion from one state to the other. Stated technically, we say that the work done between two end points depends only on the end points, not on the path. A typical example of a conservative force is gravitation; a typical nonconservative force is friction, in which the longer the path, the greater the work done. For a conservative force, we define the potential energy at point 1 as

PE_1 = -W_{0→1}.   (1.2)

This gives the potential energy at point 1 as the negative of the work done in bringing the system from a certain state 0, where the potential energy is zero, to the state 1, an amount of work which depends only on the points 1 and 0, not on the path. Then we have

W_{1→2} = PE_1 - PE_2,   (1.3)

and, combining with Eq. (1.1),

KE_1 + PE_1 = KE_2 + PE_2 = E,   (1.4)

where, since 1 and 2 are arbitrary points along the path and KE + PE is the same at both these points, we must assume that KE + PE remains constant, and may set it equal to a constant E, the total energy. This is the law of conservation of energy.

To avoid confusion, it is worth while to consider two points connected with the potential energy: the negative sign which appears in the definition (1.2), and the choice of the point where the potential energy is zero. Both points can be illustrated simply by the case of gravity acting on bodies near the earth. Gravity acts down.
We may balance its action on a given body by an equal and opposite upward force, as by supporting the body by the hand. We may then define the potential energy of the body at height h as the work done by this balancing force in raising the body through this height. Thus if the mass of the body is m, and the acceleration of gravity g, the force of gravity is -mg (positive directions being upward), the balancing force is +mg, and the work done by the hand in raising the mass through height h is mgh, which we define as the potential energy. The negative sign, then, comes because the potential energy is defined, not as the work done by the force we are interested in, but as the work done by an equal and opposite balancing force. As for the arbitrary position where we choose the potential energy to be zero, that appears in this example because we can measure our height h from any level we choose. It is important to notice that the same arbitrary constant appears essentially in the energy E. Thus, in Eq. (1.4), if we chose to redefine our zero of potential energy, we should have to add a constant to the total energy at each point of the path. Another way of stating this is that it is only the difference E - PE whose magnitude is determined, neither the total energy nor the potential energy separately. For E - PE is the kinetic energy, which alone can be determined by direct experiment, from a measurement of velocities.

Most actual forces are not conservative; in almost all practical cases there is friction of one sort or another. And yet the last century has seen the conservation of energy built up so that it is now regarded as the most important principle of physics. The first step in this development was the mechanical theory of heat, the sciences of thermodynamics and statistical mechanics.
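The bookkeeping of Eqs. (1.1)–(1.4) can be checked numerically for this gravity example. The following sketch (the function name and the particular mass and height are illustrative, not from the text) evaluates KE + PE at several instants of free fall and confirms that the total E is the same at every point of the path, as Eq. (1.4) requires.

```python
# Energy conservation check for a body falling freely from rest at height h0.
# With the zero of PE chosen at the ground (PE = m*g*h), the sum KE + PE
# should equal the constant total energy E = m*g*h0 at every instant.

G = 9.8  # acceleration of gravity, m/s^2

def energies(m, h0, t):
    """Kinetic and potential energy of a mass m dropped from rest at h0, at time t."""
    v = G * t                  # speed gained after time t
    h = h0 - 0.5 * G * t**2    # height above the ground
    ke = 0.5 * m * v**2
    pe = m * G * h             # zero of PE chosen at ground level
    return ke, pe

m, h0 = 2.0, 10.0
E = m * G * h0  # total energy, fixed once the zero of PE is chosen
for t in (0.0, 0.5, 1.0):
    ke, pe = energies(m, h0, t)
    assert abs((ke + pe) - E) < 1e-9  # Eq. (1.4): KE + PE = E along the path
```

Shifting the zero level of h would add the same constant to PE and to E, leaving KE = E - PE unchanged, which is exactly the arbitrariness discussed above.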
Heat had for many years been considered as a fluid, sometimes called by the name caloric, which was abundant in hot bodies and lacking in cold ones. This theory is adequate to explain calorimetry, the science predicting the final temperature if substances of different initial temperatures are mixed. Mixing a cold body, lacking in caloric, with a hot one, rich in it, leaves the mixture with a medium amount of heat, sufficient to raise it to an intermediate temperature. But early in the nineteenth century, difficulties with the theory began to appear. As we look back, we can see that these troubles came from the implied assumption that the caloric, or heat, was conserved. In a calorimetric problem, some of the caloric from the hot body flows to the cold one, leaving both at an intermediate temperature, but no caloric is lost. It was naturally supposed that this conservation was universal. The difficulty with this assumption may be seen as clearly as anywhere in Rumford's famous observation on the boring of cannon. Rumford noticed that a great deal of heat was given off in the process of boring. The current explanation of this was that the chips of metal had their heat capacity reduced by the process of boring, so that the heat which was originally present in them was able to raise them to a higher temperature. Rumford doubted this, and to demonstrate it he used a very blunt tool, which hardly removed any chips at all and yet produced even more heat than a sharp tool. He showed by his experiments beyond any doubt that heat could be produced continuously and in apparently unlimited quantity, by the friction. Surely this was impossible if heat, or caloric, were a fluid which was conserved. And his conclusion stated essentially our modern view, that heat is really a form of energy, convertible into mechanical energy. In his words:¹
Is there any thing that can with propriety be called caloric? ... In reasoning on this subject, we must not forget to consider that most remarkable circumstance, that the source of Heat generated by friction, in these Experiments, appeared evidently to be inexhaustible. It is hardly necessary to add, that any thing which any insulated body, or system of bodies, can continue to furnish without limitation, cannot possibly be a material substance; and it appears to me to be extremely difficult, if not quite impossible, to form any distinct idea of any thing, capable of being excited and communicated, in the manner the Heat was excited and communicated in these experiments, except it be MOTION. From this example, it is clear that both conservation laws broke down at once. In a process involving friction, energy is not conserved, but rather disappears continually. At the same time, however, heat is not conserved, but appears continually. Rumford essentially suggested that the heat which appeared was really simply the energy which had dis- appeared, observable in a different form. This hypothesis was not really proved for a good many years, however, until Joule made his experiments on the mechanical equivalent of heat, showing that when a certain amount of work or mechanical energy disappears, the amount of heat 1 Quoted from W. F. Magie, "Source Book in Physics," pp. 160-161, McGraw- Hill Book Company, Inc., 1935. 6 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. I appearing is always the same, no matter what the process of trans- formation may be. The calorie, formerly considered as a unit for measuring the amount of caloric present, was seen to be really a unit of energy, convertible into ergs, the ordinary units of energy. And it became plain that in a process involving friction, there really was no loss of energy. 
The mechanical energy, it is true, decreased, but there was an equal increase in what we might call thermal energy, or heat energy, so that the total energy, if properly defined, remained constant. This generalization was what really established the conservation of energy as a great and important principle. Having identified heat as a form of energy, it was only natural for the dynamical theory of heat to be developed, in which heat was regarded as a mode of motion of the molecules, on such a small scale that it could not be observed in an ordinary mechanical way. The extra kinetic and potential energy of the molecules on account of this thermal motion was identified with the energy which had disappeared from view, but had reappeared to be measured as heat. With the development of thermodynamics and kinetic theory, conservation of energy took its place as the leading principle of physics, which it has held ever since.

2. Internal Energy, External Work, and Heat Flow. We have seen that the theory of heat is based on the idea of conservation of energy, on the assumption that the total energy of the universe is conserved, if we include not only mechanical energy but also the mechanical equivalent of the heat energy. It is not very convenient to talk about the whole universe every time we wish to work a problem, however. Ordinarily, thermodynamics deals with a finite system, isolated from its neighbors by an imaginary closed surface. Everything within the surface belongs to the system, everything outside is excluded. Usually, though not always, a fixed amount of matter belongs to the system during the thermodynamic processes we consider, no matter crossing the boundary. Very often, however, we assume that energy, in the form of mechanical or thermal energy, or in some other form, crosses the boundary, so that the energy of the system changes.
The principle of conservation, which then becomes equivalent to the first law of thermodynamics, simply states that the net increase of energy in the system, in any process, equals the energy which has flowed in over the boundary, so that no energy is created within the system. To make this a precise law, we must consider the energy of the body and its change on account of flow over the boundary of the system. The total energy of all sorts contained within the boundary of the system is called the internal energy of the system, and is denoted by U. From an atomic point of view, the internal energy consists of kinetic and potential energies of all the atoms of the system, or carrying it further, of all electrons and nuclei constituting the system. Since potential energies always contain arbitrary additive constants, the internal energy U is not determined in absolute value, only differences of internal energy having a significance, unless some convention is made about the state of zero internal energy. Macroscopically (that is, viewing the atomic processes on a large scale, so that we cannot see what individual atoms are doing), we do not know the kinetic and potential energies of the atoms, and we can only find the change of internal energy by the amounts of energy added to the system across the boundary, and by making use of the law of conservation of energy. Thermodynamics, which is a macroscopic science, makes no attempt to analyze internal energy into its parts, as for example mechanical energy and heat energy. It simply deals with the total internal energy and with its changes. Energy can enter the system in many ways, but most methods can be classified easily and in an obvious way into mechanical work and heat.
Familiar examples of external mechanical work are work done by pistons, shafts, belts and pulleys, etc., and work done by external forces acting at a distance, as gravitational work done on bodies within the system on account of gravitational attraction by external bodies. A familiar example of heat flow is heat conduction across the surface. Convection of heat into the system is a possible form of energy interchange if atoms and molecules are allowed to cross the surface, but not otherwise. Electric and magnetic work done by forces between bodies within the system and bodies outside is classified as external work; but if the electromagnetic energy enters in the form of radiation from a hot body, it is classified as heat. There are cases where the distinction between the two forms of transfer of energy is not clear and obvious, and electromagnetic radiation is one of them. In ambiguous cases, a definite classification can be obtained from the atomic point of view, by means of statistical mechanics. In an infinitesimal change of the system, the energy which has entered the system as heat flow is called dQ, and the energy which has left the system as mechanical work is called dW (so that the energy which has entered as mechanical work is -dW). The reason for choosing this sign for dW is simply convention; thermodynamics is very often used in the theory of heat engines, which produce work, so that the important case is that in which energy leaves the system as mechanical work, or when dW in our definition is positive. We see then that the total energy which enters the system in an infinitesimal change is dQ - dW. By the law of conservation of energy, the increase in internal energy in a process equals the energy which has entered the system: dU = dQ - dW. (2.1) Equation (2.1) is the mathematical statement of the first law of thermodynamics.
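The sign convention and bookkeeping of Eq. (2.1) can be illustrated by a small computation; the numerical values here are invented for the example, not taken from the text.

```python
# First law, Eq. (2.1): dU = dQ - dW, where dQ is heat flowing into the
# system and dW is work done BY the system on its surroundings.

def internal_energy_change(q_in, w_by):
    """Increase of internal energy from heat absorbed and work done by the system."""
    return q_in - w_by

# Hypothetical process: the system absorbs 500 J of heat and delivers
# 180 J of work (the heat-engine case, dW positive):
dU = internal_energy_change(500.0, 180.0)   # 320 J remains in the system
```

If instead work is done on the system, dW is negative, and the internal energy rises by more than the heat absorbed.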
It is to be noted that both sides of the equation should be expressed in the same units. Thus if internal energy and mechanical work are expressed in ergs, the heat absorbed must be converted to ergs by use of the mechanical equivalent of heat, 1 calorie = 4.185 × 10^7 ergs = 4.185 joules. Or if the heat absorbed is to be measured in calories, the work and internal energy should be converted into that unit. It is of the utmost importance to realize that the distinction between heat flow and mechanical work, which we have made in talking about energy in transit into a system, does not apply to the energy once it is in the system. It is completely fallacious to try to break down the statement of Eq. (2.1) into two statements: "The increase of heat energy of a body equals the heat which has flowed in," and "The decrease of mechanical energy of a body equals the work done by the body on its surroundings." For these statements would correspond just to separate conservation laws for heat and mechanical energy, and we have seen in the last section that such separate laws do not exist. To return to the last section, Rumford put a great deal of mechanical work into his cannon, produced no mechanical results on it, but succeeded in raising its temperature greatly. As we have stated before, the energy of a system cannot be differentiated or separated into a mechanical and a thermal part, by any method of thermodynamics. The distinction between heat and work is made in discussing energy in transit, and only there. The internal energy of a system depends only on the state of the system; that is, on pressure, volume, temperature, or whatever variables are used to describe the system uniquely. Thus, the change in internal energy between two states 1 and 2 depends only on these states. This change of internal energy is an integral, U_2 - U_1 = ∫_1^2 dU = ∫_1^2 dQ - ∫_1^2 dW.
(2.2) Since this integral depends only on the end points, it is independent of the path used in going from state 1 to state 2. But the separate integrals ∫dQ and ∫dW, representing the total heat absorbed and the total work done in going from state 1 to 2, are not independent of the path, but may be entirely different for different processes, only their difference being independent of path. Since these integrals are not independent of the path, they cannot be written as differences of functions Q and W at the end points, as ∫dU can be written as the difference of the U's at the end points. Such functions Q and W do not exist in any unique way, and we are not allowed to use them. W would correspond essentially to the negative of the potential energy, but ordinarily a potential energy function does not exist. Similarly Q would correspond to the amount of heat in the body, but we have seen that this function also does not exist. The fact that functions Q and W do not exist, or that ∫dQ and ∫dW are not independent of path, really is only another way of saying that mechanical and heat energy are interchangeable, and that the internal energy cannot be divided into a mechanical and a thermal part by thermodynamics. At first sight, it seems too bad that ∫dQ is not independent of path, for some such quantity would be useful. It would be pleasant to be able to say, in a given state of the system, that the system had so and so much heat energy. Starting from the absolute zero of temperature, where we could say that the heat energy was zero, we could heat the body up to the state we were interested in, find ∫dQ from absolute zero up to this state, and call that the heat energy. But the stubborn fact remains that we should get different answers if we heated it up in different ways.
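This path dependence can be verified concretely. The following sketch (an example of mine, not the author's) takes one mole of a monatomic perfect gas between the same two states by two different reversible routes, and shows that the heat absorbed and the work done each differ, while their difference does not.

```python
import math

R = 8.314            # gas constant, J/(mol K)
Cv = 1.5 * R         # heat capacity at constant volume, monatomic perfect gas
Cp = Cv + R          # heat capacity at constant pressure

T1, P1 = 300.0, 1.0e5    # state 1 (assumed values)
T2, P2 = 400.0, 2.0e5    # state 2

# Route A: heat at constant pressure P1 from T1 to T2, then compress
# isothermally at T2 from P1 to P2.
Q_A = Cp * (T2 - T1) + R * T2 * math.log(P1 / P2)
W_A = R * (T2 - T1) + R * T2 * math.log(P1 / P2)

# Route B: compress isothermally at T1 first, then heat at constant P2.
Q_B = R * T1 * math.log(P1 / P2) + Cp * (T2 - T1)
W_B = R * T1 * math.log(P1 / P2) + R * (T2 - T1)

# Q and W differ between the routes, but Q - W, the change of internal
# energy (for a perfect gas, Cv times the temperature change), is the same.
dU = Cv * (T2 - T1)
```

The isothermal legs occur at different temperatures on the two routes, which is exactly why ∫dQ comes out differently.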
For instance, we might heat it at an arbitrary constant pressure until we reached the desired temperature, then adjust the pressure at constant temperature to the desired value; or we might raise it first to the desired pressure, then heat it at that pressure to the final temperature; or many other equally simple processes. Each would give a different answer, as we can easily verify. There is nothing to do about it. It is to avoid this difficulty, and obtain something resembling the "amount of heat in a body," which yet has a unique meaning, that we introduce the entropy. If T is the absolute temperature, and if the heat dQ is absorbed at temperature T in a reversible way, then ∫dQ/T proves to be an integral independent of path, which evidently increases as the body is heated; that is, as heat flows into it. This integral, from a fixed zero point (usually taken to be the absolute zero of temperature), is called the entropy. Like the internal energy, it is determined by the state of the system, but unlike the internal energy it measures in a certain way only heat energy, not mechanical energy. We next take up the study of entropy, and of the related second law of thermodynamics. 3. The Entropy and Irreversible Processes. Unlike the internal energy and the first law of thermodynamics, the entropy and the second law are relatively unfamiliar. Like them, however, their best interpretation comes from the atomic point of view, as carried out in statistical mechanics. For this reason, we shall start with a qualitative description of the nature of the entropy, rather than with quantitative definitions and methods of measurement. The entropy is a quantity characteristic of the state of a system, measuring the randomness or disorder in the atomic arrangement of that state. It increases when a body is heated, for then the random atomic motion increases.
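That ∫dQ/T is independent of path while ∫dQ is not can be illustrated for one mole of a monatomic perfect gas taken between the same two states by two reversible routes (an invented example, not the author's): the isothermal leg contributes R T ln(P1/P2) to the heat, but only R ln(P1/P2), independent of T, to ∫dQ/T.

```python
import math

R = 8.314
Cp = 2.5 * R         # constant-pressure heat capacity, monatomic perfect gas

T1, P1 = 300.0, 1.0e5    # assumed initial state
T2, P2 = 400.0, 2.0e5    # assumed final state

# Heat absorbed, divided by the temperature at which it is absorbed, leg by leg.
# Route A: constant-pressure heating at P1, then isothermal compression at T2.
S_A = Cp * math.log(T2 / T1) + (R * T2 * math.log(P1 / P2)) / T2

# Route B: isothermal compression at T1, then constant-pressure heating at P2.
S_B = (R * T1 * math.log(P1 / P2)) / T1 + Cp * math.log(T2 / T1)

# The heats along the two routes differ, but the entropy changes agree.
```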
But it also increases when a regular, orderly motion is converted into a random motion. Thus, consider an enclosure containing a small piece of crystalline solid at the absolute zero of temperature, in a vacuum. The atoms of the crystal are regularly arranged and at rest; its entropy is zero. Heat the crystal until it vaporizes. The molecules are now located in random positions throughout the enclosure and have velocities distributed at random. Both types of disorder, in the coordinates and in the velocities, contribute to the entropy, which is now large. But we could have reached the same final state in a different way, not involving the absorption of heat by the system. We could have accelerated the crystal at the absolute zero, treating it as a projectile and doing mechanical work, but without heat flow. We could arrange a target, so that the projectile would automatically strike the target, without external action. If the mechanical work which we did on the system were equivalent to the heat absorbed in the other experiment, the final internal energy would be the same in each case. In our second experiment, then, when the projectile struck the target it would be heated so hot as to vaporize, filling the enclosure with vapor, and the final state would be just the same as if the vaporization were produced directly. The increase of entropy must then be the same, for by hypothesis the entropy depends only on the state of the system, not on the path by which it has reached that state. In the second case, though the entropy has increased, no heat has been absorbed. Rather, ordered mechanical energy (the kinetic energy of the projectile as a whole, in which each molecule was traveling at the same velocity as every other) has been converted by the collision into random, disordered energy. Just this change results in an increase of entropy. It is plain that entropy cannot be conserved, in the same sense that matter, energy, and momentum are.
For here entropy has been produced or created, just by a process of changing ordered motion into disorder. Many other examples of the two ways of changing entropy could be given, but the one we have mentioned illustrates them sufficiently. We have considered the increase of entropy of the system; let us now ask if the processes can be reversed, and if the entropy can be decreased again. Consider the first process, where the solid was heated gradually. Let us be more precise, and assume that it was heated by conduction from a hot body outside; and further that the hot body was of an adjustable temperature, and was always kept very nearly at the same temperature as the system we were interested in. If it were just at the same temperature, heat would not flow, but if it were always kept a small fraction of a degree warmer, heat would flow from it into the system. But that process can be effectively reversed. Instead of having the outside body a fraction of a degree warmer than the system, we let it be a fraction of a degree cooler, so that heat will flow out instead of in. Then things will cool down, until finally the system will return to the absolute zero, and everything will be as before. In the direct process heat flows into the system; in the inverse process it flows out, an equal amount is returned, and when everything is finished all parts of the system and the exterior are in essentially the same state they were at the beginning. But now try to reverse the second process, in which the solid at absolute zero was accelerated, by means of external work, then collided with a target, and vaporized. The last steps were taken without external action. To reverse it, we should have the molecules of the vapor condense to form a projectile, all their energy going into ordered kinetic energy. It would have to be as shown in a motion picture of the collision run backward, all the fragments coalescing into an unbroken bullet.
Then we could apply a mechanical brake to the projectile as it receded from the target, and get our mechanical energy out again, with reversal of the process. But such things do not happen in nature. The collision of a projectile with a target is essentially an irreversible process, which never happens backward, and a reversed motion picture of such an event is inherently ridiculous and impossible. The statement that such events cannot be reversed is one of the essential parts of the second law of thermodynamics. If we look at the process from an atomic point of view, it is clear why it cannot reverse. The change from ordered to disordered motion is an inherently likely change, which can be brought about in countless ways; whereas the change from disorder to order is inherently very unlikely, almost sure not to happen by chance. Consider a jigsaw puzzle, which can be put together correctly in only one way. If we start with it put together, then remove each piece and put it in a different place on the table, we shall certainly disarrange it, and we can do it in almost countless ways; while if we start with it taken apart, and remove each piece and put it in a different place on the table, it is true that we may happen to put it together in the process, but the chances are enormously against it. The real essence of irreversibility, however, is not merely the strong probability against the occurrence of a process. It is something deeper, coming from the principle of uncertainty. This principle, as we shall see later, puts a limit on the accuracy with which we can regulate or prescribe the coordinates and velocities of a system. It states that any attempt to regulate them with more than a certain amount of precision defeats its own purpose: it automatically introduces unpredictable perturbations which disturb the system, and prevent the coordinates and velocities from taking on the values we desire, forcing them to deviate from these values in an unpredictable way.
But this just prevents us from being able experimentally to reverse a system, once the randomness has reached the small scale at which the principle of uncertainty operates. To make a complicated process like a collision reverse, the molecules would have to be given very definitely determined positions and velocities, so that they would just cooperate in such a way as to coalesce and become unbroken again; any errors in determining these conditions would spoil the whole thing. But we cannot avoid these errors. It is true that by chance they may happen to fall into line, though the chance is minute. But the important point is that we cannot do anything about it. From the preceding examples, it is clear that we must consider two types of processes: reversible and irreversible. The essential feature of reversible processes is that things are almost balanced, almost in equilibrium, at every stage, so that an infinitesimal change will swing the motion from one direction to the other. Irreversible processes, on the other hand, involve complete departure from equilibrium, as in a collision. It will be worth while to enumerate a few other common examples of irreversible processes. Heat flow from a hot body to a cold body at more than an infinitesimal difference of temperature is irreversible, for the heat never flows from the cold to the hot body. Another example is viscosity, in which regular motion of a fluid is converted into random molecular motion, or heat. Still another is diffusion, in which originally unmixed substances mix with each other, so that they cannot be unmixed again without external action. In all these cases, it is possible of course to bring the system itself back to its original state. Even the projectile which has been vaporized can be reconstructed, by cooling and condensing the vapor and by recasting the material into a new projectile.
But the surroundings of the system would have undergone a permanent change: the energy that was originally given the system as mechanical energy, to accelerate the bullet, is taken out again as heat, in cooling the vapor, so that the net result is a conversion of mechanical energy into heat in the surroundings of the system. Such a conversion of mechanical energy into heat is often called degradation of energy, and it is characteristic of irreversible processes. A reversible process is one which can be reversed in such a way that the system itself and its surroundings both return to their original condition; while an irreversible process is one such that the system cannot be brought back to its original condition without requiring a conversion or degradation of some external mechanical energy into heat. 4. The Second Law of Thermodynamics. We are now ready to give a statement of the second law of thermodynamics, in one of its many forms: The entropy, a function only of the state of a system, increases in a reversible process by dQ/T (where dQ is the heat absorbed, T the absolute temperature at which it is absorbed), and increases by a larger amount than dQ/T in an irreversible process. This statement involves a number of features. First, it gives a way of calculating entropy. By sufficient ingenuity, it is always possible to find reversible ways of getting from any initial to any final state, provided both are equilibrium states. Then we can calculate ∫dQ/T for such a reversible path, and the result will be the change of entropy between the two states, an integral independent of path. We can then measure entropy in a unique way. If we now go from the same initial to the same final state by an irreversible path, the change of entropy must still be the same, though now ∫dQ/T must necessarily be smaller than before, and hence smaller than the change in entropy.
We see that the heat absorbed in an irreversible path must be less than in a reversible path between the same end points. Since the change in internal energy must be the same in either case, the first law then tells us that the external work done by the system is less for the irreversible path than for the reversible one. If our system is a heat engine, whose object is to absorb heat and do mechanical work, we see that the mechanical work accomplished will be less for an irreversible engine than for a reversible one, operating between the same end points. It is interesting to consider the limiting case of adiabatic processes, processes in which the system interchanges no heat with the surroundings, the only changes in internal energy coming from mechanical work. We see that in a reversible adiabatic process the entropy does not change (a convenient way of describing such processes). In an irreversible adiabatic process the entropy increases. In particular, for a system entirely isolated from its surroundings, the entropy increases whenever irreversible processes occur within it. An isolated system in which irreversible processes can occur is surely not in a steady, equilibrium state; the various examples which we have considered are the rapidly moving projectile, a body with different temperatures at different parts (to allow heat conduction), a fluid with mass motion (to allow viscous friction), a body containing two different materials not separated by an impervious wall (to allow diffusion). All these systems have less entropy than the state of thermal equilibrium corresponding to the same internal energy, which can be reached from the original state by irreversible processes without interaction with the outside. This state of thermal equilibrium is one in which the temperature is everywhere constant, there is no mass motion, and where substances are mixed in such a way that there is no tendency to diffusion or flow of any sort.
A condition for thermal equilibrium, which is often applied in statistical mechanics, is that the equilibrium state is that of highest entropy consistent with the given internal energy and volume. These statements concerning adiabatic changes, in which the entropy can only increase, should not cause one to forget that in ordinary changes, in which heat can be absorbed or rejected by the system, the entropy can either increase or decrease. In most thermodynamic problems, we confine ourselves to reversible changes, in which the only way for the entropy to change is by heat transfer. We shall now state the second law in a mathematical form which is very commonly used. We let S denote the entropy. Our previous statement is then dS ≥ dQ/T, or T dS ≥ dQ, the equality sign holding for the reversible, the inequality for irreversible, processes. But now we use the first law, Eq. (2.1), to express dQ in terms of dU and dW. The inequality becomes at once T dS ≥ dU + dW, (4.1) the mathematical formulation of the second law. For reversible processes, which we ordinarily consider, the equality sign is to be used. The second law may be considered as a postulate. We shall see in Chap. II that definite consequences can be drawn from it, and they prove to be always in agreement with experiment. We notice that in stating it, we have introduced the temperature without apology, for the first time. This again can be justified by its consequences: the temperature so defined proves to agree with the temperature of ordinary experience, as defined for example by the gas thermometer. Thermodynamics is the science that simply starts by assuming the first and second laws, and deriving mathematical results from them. Both laws are simple and general, applying as far as we know to all sorts of processes.
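As a numerical illustration of entropy increase in an irreversible process (an example of mine, with assumed temperatures): let a small quantity of heat pass directly from a hot body to a cold one. Each body exchanges heat at its own temperature, and the total entropy rises.

```python
# Irreversible heat conduction between two bodies at different temperatures.
T_hot, T_cold = 500.0, 300.0   # kelvin (assumed values)
dQ = 10.0                      # joules passing from the hot to the cold body

dS_hot = -dQ / T_hot           # entropy lost by the hot body
dS_cold = dQ / T_cold          # entropy gained by the cold body
dS_total = dS_hot + dS_cold    # net creation of entropy, > 0 when T_hot > T_cold
```

As T_hot approaches T_cold, the net creation tends to zero, recovering the reversible limit of heat flow across an infinitesimal temperature difference described earlier.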
As a result, we can derive simple, general, and fundamental results from thermodynamics, which should be independent of any particular assumptions about atomic and molecular structure, or such things. Thermodynamics has its drawbacks, however, in spite of its simplicity and generality. In the first place, there are many problems which it simply cannot answer. These are detailed problems relating, for instance, to the equation of state and specific heat of particular types of substances. Thermodynamics must assume that these quantities are determined by experiment; once they are known, it can predict certain relationships between observed quantities, but it is unable to say what values the quantities must have. In addition to this, thermodynamics is limited to the discussion of problems in equilibrium. This is on account of the form of the second law, which can give only qualitative, and not quantitative, information about processes out of equilibrium. Statistical mechanics is a much more detailed science than thermodynamics, and for that reason is in some ways more complicated. It undertakes to answer the questions, how is each atom or molecule of the substance moving, on the average, and how do these motions lead to observable large scale phenomena? For instance, how do the motions of the molecules of a gas lead to collisions with a wall which we interpret as pressure? Fortunately it is possible to derive some very beautiful general theorems from statistical mechanics. In fact, one can give proofs of the first and second laws of thermodynamics, as direct consequences of the principles of statistical mechanics, so that all the results of thermodynamics can be considered to follow from its methods. But it can go much further. It can start with detailed models of matter and work through from them to predict the results of large scale experiments on the matter.
Statistical mechanics thus is much more powerful than thermodynamics, and it is essentially just as general. It is somewhat more complicated, however, and somewhat more dependent on the exact model of the structure of the material which we use. Like thermodynamics, it is limited to treating problems in equilibrium. Kinetic theory is a study of the rates of atomic and molecular processes, treated by fairly direct methods, without much benefit of general principles. If handled properly, it is an enormously complicated subject, though simple approximations can be made in particular cases. It is superior to statistical mechanics and thermodynamics in just two respects. In the first place, it makes use only of well-known and elementary methods, and for that reason is somewhat more comprehensible at first sight than statistical mechanics, with its more advanced laws. In the second place, it can handle problems out of equilibrium, such as the rates of chemical reactions and other processes, which cannot be treated by thermodynamics or statistical mechanics. We see that each of our three sciences of heat has its own advantages. A properly trained physicist or chemist should know all three, to be able to use whichever is most suitable in a given situation. We start with thermodynamics, since it is the most general and fundamental method, taking up thermodynamic calculations in the next chapter. Following that we treat statistical mechanics, and still later kinetic theory. Only then shall we be prepared to make a real study of the nature of matter. CHAPTER II THERMODYNAMICS In the last chapter, we became acquainted with the two laws of thermodynamics, but we have not seen how to use them. In this chapter, we shall learn the rules of operation of thermodynamics, though we shall postpone actual applications until later. It has already been mentioned that thermodynamics can give only qualitative information for irreversible processes.
Thus, for instance, the second law may be stated dW ≤ T dS - dU, (1) giving an upper limit to the work done in an irreversible process, but not predicting its exact amount. Only for reversible processes, where the equality sign may be used, can thermodynamics make definite predictions of a quantitative sort. Consequently almost all our work in this chapter will deal with reversible systems. We shall find a number of differential expressions similar to Eq. (1), and by proper treatment we can convert these into equations relating one or more partial derivatives of one thermodynamic variable with respect to another. Such equations, called thermodynamic formulas, often relate different quantities all of which can be experimentally measured, and hence furnish a check on the accuracy of the experiment. In cases where one of the quantities is difficult to measure, they can be used to compute one of the quantities from the others, avoiding the necessity of making the experiment at all. There are a very great many thermodynamic formulas, and it would be hopeless to find all of them. But we shall go into general methods of computing them, and shall set up a convenient scheme for obtaining any one which we may wish, with a minimum of computation. Before starting the calculation of the formulas, we shall introduce several new variables, combinations of other quantities which prove to be useful for one reason or another. As a matter of fact, we shall work with quite a number of variables, some of which can be taken to be independent, others dependent, and it is necessary to recognize at the outset the nature of the relations between them. In the next section we consider the equation of state, the empirical relation connecting certain thermodynamic variables. 1. The Equation of State. In considering the properties of matter, our system is ordinarily a piece of material enclosed in a container and subject to a certain hydrostatic pressure.
This of course is a limited type of system, for it is not unusual to have other types of stresses acting, such as shearing stresses, unilateral tensions, and so on. Thermodynamics applies to as general a system as we please, but for simplicity we shall limit our treatment to the conventional case where the only external work is done by a change of volume, acting against a hydrostatic pressure. That is, if P is the pressure and V the volume of the system, we shall have dW = P dV. (1.1) In any case, even with much more complicated systems, the work done will have an analogous form; for Eq. (1.1) is simply a force (P) times a displacement (dV), and we know that work can always be put in such a form. If there is occasion to set up the thermodynamic formulas for a more general type of force than a pressure, we simply set up dW in a form corresponding to Eq. (1.1), and proceed by analogy with the derivations which we shall give here. We now have a number of variables: P, V, T, U, and S. How many of these, we may ask, are independent? The answer is: any two. For example, with a given system, we may fix the pressure and temperature. Then in general the volume is determined, as we can find experimentally. The experimental relation giving volume as a function of pressure and temperature is called the equation of state. Ordinarily, of course, it is not a simple analytical equation, though in special cases like a perfect gas it may be. Instead of expressing volume as a function of pressure and temperature, we may simply say that the equation of state expresses a relation between these three variables, which may equally well give pressure as a function of temperature and volume, or temperature as a function of volume and pressure. Of these three variables, two are independent, one dependent, and it is immaterial which is chosen as the dependent variable. The equation of state
does not include all the experimental information which we must have about a system or substance. We need to know also its heat capacity or specific heat, as a function of temperature. Suppose, for instance, that we know the specific heat at constant pressure C_P as a function of temperature at a particular pressure. Then we can find the difference of internal energy, or of entropy, between any two states. From the first state, we can go adiabatically to the pressure at which we know C_P. In this process, since no heat is absorbed, the change of internal energy equals the work done, which we can compute from the equation of state. Then we absorb heat at constant pressure, until we reach the point from which another adiabatic process will carry us to the desired end point. The change of internal energy can be found for the process at constant pressure, since there we know C_P, from which we can find the heat absorbed, and since the equation of state will tell us the work done; for the final adiabatic process we can likewise find the work done and hence the change of internal energy. Similarly we can find the change in entropy between initial and final state. In our particular case, assuming the process to be carried out reversibly, the entropy will not change along the adiabatics, but the change of entropy will be ∫dQ/T = ∫C_P dT/T in the process at constant pressure. We see, in other words, that the difference of internal energy or of entropy between any two states can be found if we know the equation of state and specific heat, and since both these quantities have arbitrary additive constants, this is all the information which we can expect to obtain about them anyway. Given the equation of state and specific heat, we see that we can obtain all but two of the quantities P, V, T, U, S, provided those two are known.
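The constant-pressure entropy change ∫ C_P dT/T can be checked numerically against the closed form C_P ln(T2/T1) that holds when C_P is constant; the heat capacity and temperatures below are assumed for the example.

```python
import math

Cp = 20.8                 # heat capacity in J/K, taken constant for the example
T1, T2 = 300.0, 400.0     # assumed initial and final temperatures, kelvin

# dS = dQ/T = Cp dT / T, integrated by the midpoint rule in small steps:
n_steps = 100000
dT = (T2 - T1) / n_steps
dS_numeric = sum(Cp * dT / (T1 + (i + 0.5) * dT) for i in range(n_steps))

# Closed form, valid when Cp does not depend on T:
dS_exact = Cp * math.log(T2 / T1)
```

With a temperature-dependent C_P from experiment, only the numerical integration would remain available, which is how such entropy tables are in practice constructed.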
We have shown this if two of the three quantities P, V, T are known; but if U and S are determined by these quantities, that means simply that two out of the five quantities are independent, the rest dependent. It is then possible to use any two as independent variables. For instance, in thermodynamics it is not unusual to use T and S, or V and S, as independent variables, expressing everything else as functions of them. 2. The Elementary Partial Derivatives. We can set up a number of familiar partial derivatives and thermodynamic formulas, from the information which we already have. We have five variables, of which any two are independent, the rest dependent. We can then set up the partial derivative of any dependent variable with respect to any independent variable, keeping the other independent variable constant. A notation is necessary showing in each case what are the two independent variables. This is a need not ordinarily appreciated in mathematical treatments of partial differentiation, for there the independent variables are usually determined in advance and described in words, so that there is no ambiguity about them. Thus, a notation, peculiar to thermodynamics, has been adopted. In any partial derivative, it is obvious that the quantity being differentiated is one of the dependent variables, and the quantity with respect to which it is differentiated is one of the independent variables. It is only necessary to specify the other independent variable, the one which is held constant in the differentiation, and the convention is to indicate this by a subscript. Thus (∂S/∂T)_P, which is ordinarily read as the partial of S with respect to T at constant P, is the derivative of S in which pressure and temperature are independent variables. This derivative would mean an entirely different thing from the derivative of S with respect to T at constant V, for instance.
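Where the equation of state is a simple analytical relation (the perfect gas, PV = nRT, mentioned above), the freedom to choose which variable is dependent can be made explicit in a short sketch; the function names and numerical values are mine, for illustration.

```python
R = 8.314   # gas constant, J/(mol K)

def pressure(n, V, T):
    """P as the dependent variable, from V and T."""
    return n * R * T / V

def volume(n, P, T):
    """V as the dependent variable, from P and T."""
    return n * R * T / P

def temperature(n, P, V):
    """T as the dependent variable, from P and V."""
    return P * V / (n * R)
```

Fixing any two of P, V, T determines the third, whichever one is treated as dependent; the three functions are mutually consistent.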
There are a number of partial derivatives which have elementary meanings. Thus, consider the thermal expansion. This is the fractional increase of volume per unit rise of temperature, at constant pressure:

Thermal expansion = (1/V)(∂V/∂T)_P. (2.1)

Similarly, the isothermal compressibility is the fractional decrease of volume per unit increase of pressure, at constant temperature:

Isothermal compressibility = -(1/V)(∂V/∂P)_T. (2.2)

This is the compressibility usually employed; sometimes, as in considering sound waves, we require the adiabatic compressibility, the fractional decrease of volume per unit increase of pressure, when no heat flows in or out. If there is no heat flow, the entropy is unchanged, in a reversible process, so that an adiabatic process is one at constant entropy. Then we have

Adiabatic compressibility = -(1/V)(∂V/∂P)_S. (2.3)

The specific heats have simple formulas. At constant volume, the heat absorbed equals the increase of internal energy, since no work is done. Since the heat absorbed also equals the temperature times the change of entropy, for a reversible process, and since the heat capacity at constant volume C_V is the heat absorbed per unit change of temperature at constant volume, we have the alternative formulas

C_V = (∂U/∂T)_V = T(∂S/∂T)_V. (2.4)

To find the heat capacity at constant pressure C_P, we first write the formula for the first and second laws, in the case we are working with, where the external work comes from hydrostatic pressure and where all processes are reversible:

dU = T dS - P dV, or T dS = dU + P dV. (2.5)

From the second form of Eq. (2.5), we can find the heat absorbed, or T dS. Now C_P is the heat absorbed, divided by the change of temperature, at constant pressure. To find this, we divide Eq. (2.5) by dT, indicate that the process is at constant P, and we have

C_P = T(∂S/∂T)_P = (∂U/∂T)_P + P(∂V/∂T)_P. (2.6)
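The elementary derivatives of Eqs. (2.1) and (2.2) are easy to check numerically. The sketch below is an illustration, not part of the original text: it evaluates both by central finite differences for one mole of a perfect gas, PV = RT, for which the thermal expansion should come out 1/T and the isothermal compressibility 1/P. The function names and the choice of gas are assumptions made for the example.

```python
# Finite-difference check of Eqs. (2.1)-(2.2) for one mole of a perfect gas.

R = 8.314  # gas constant, J/(mol K)

def V(P, T):
    """Volume of one mole of perfect gas, from PV = RT."""
    return R * T / P

def thermal_expansion(P, T, h=1e-4):
    # (1/V)(dV/dT) at constant P, by central difference in T
    return (V(P, T + h) - V(P, T - h)) / (2 * h) / V(P, T)

def isothermal_compressibility(P, T, h=1e-2):
    # -(1/V)(dV/dP) at constant T, by central difference in P
    return -(V(P + h, T) - V(P - h, T)) / (2 * h) / V(P, T)

# For the perfect gas these should come out to 1/T and 1/P respectively.
print(thermal_expansion(101325.0, 300.0))          # ≈ 1/300
print(isothermal_compressibility(101325.0, 300.0)) # ≈ 1/101325
```

For any real substance the same two finite differences, taken on measured P-V-T data, give the experimental expansion and compressibility directly.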
Here, and throughout the book, we shall ordinarily mean by C_P and C_V not the specific heats (heat capacities per gram), but the heat capacities of the mass of material with which we are working; though often, where no confusion will arise, we shall refer to them as the specific heats.

From the first and second laws, Eq. (2.5), we can obtain a number of other formulas immediately. Thus, consider the first form of the equation, dU = T dS - P dV. From this we can at once keep the volume constant (set dV = 0), and divide by dS, obtaining

(∂U/∂S)_V = T. (2.7)

Similarly, keeping entropy constant, so that we have an adiabatic process, we have

(∂U/∂V)_S = -P. (2.8)

But we could equally well have used the second form of Eq. (2.5), obtaining

(∂S/∂U)_V = 1/T, (∂S/∂V)_U = P/T. (2.9)

From these examples, it will be clear how formulas involving partial derivatives can be found from differential expressions like Eq. (2.5).

3. The Enthalpy, and Helmholtz and Gibbs Free Energies. We notice that Eq. (2.6) for the specific heat at constant pressure is rather complicated. We may, however, rewrite it

C_P = (∂(U + PV)/∂T)_P, (3.1)

for (∂(PV)/∂T)_P = P(∂V/∂T)_P, since P is held constant in the differentiation. The quantity U + PV comes in sufficiently often so that it is worth giving it a symbol and a name. We shall call it the enthalpy, and denote it by H. Thus we have

H = U + PV, dH = dU + P dV + V dP = T dS + V dP, (3.2)

using Eq. (2.5). From Eq. (3.2), we see that if dP = 0, or if the process is taking place at constant pressure, the change of the enthalpy equals the heat absorbed. This is the feature that makes the enthalpy a useful quantity. Most actual processes are carried on experimentally at constant pressure, and if we have the enthalpy tabulated or otherwise known, we can very easily find the heat absorbed. We see at once that

C_P = (∂H/∂T)_P, (3.3)

a simpler formula than Eq. (2.6).
As a matter of fact, the enthalpy fills essentially the role for processes at constant pressure which the internal energy does for processes at constant volume. Thus the first form of Eq. (2.5), dU = T dS - P dV, shows that the heat absorbed at constant volume equals the increase of internal energy, just as Eq. (3.2) shows that the heat absorbed at constant pressure equals the increase of the enthalpy.

In introducing the entropy, in the last chapter, we stressed the idea that it measured in some way the part of the energy of the body bound up in heat, though that statement could not be made without qualification. The entropy itself, of course, has not the dimensions of energy, but the product TS has. This quantity TS is sometimes called the bound energy, and in a somewhat closer way it represents the energy bound as heat. In any process, the change in TS is given by T dS + S dT. If now the process is reversible and isothermal (as for instance the absorption of heat by a mixture of liquid and solid at the melting point, where heat can be absorbed without change of temperature, merely melting more of the solid), dT = 0, so that d(TS) = T dS = dQ. Thus the increase of bound energy for a reversible isothermal process really equals the heat absorbed. This is as far as the bound energy can be taken to represent the energy bound as heat; for a nonisothermal process the change of bound energy no longer equals the heat absorbed, and as we have seen, no quantity which is a function of the state alone can represent the total heat absorbed.

If the bound energy TS represents in a sense the energy bound as heat, the remaining part of the internal energy, U - TS, should be in the same sense the mechanical part of the energy, which is available to do mechanical work. We shall call this part of the energy the Helmholtz free energy, and denote it by A. Let us consider the change of the Helmholtz free energy in any process. We have

A = U - TS, dA = dU - T dS - S dT. (3.4)
By Eq. (1) this is

dA ≤ -dW - S dT, or -dA ≥ dW + S dT. (3.5)

For a system at constant temperature, this tells us that the work done is less than or equal to the decrease in the Helmholtz free energy. The Helmholtz free energy then measures the maximum work which can be done by the system in an isothermal change. For a process at constant temperature, in which at the same time no mechanical work is done, the right side of Eq. (3.5) is zero, and we see that in such a process the Helmholtz free energy is constant for a reversible process, but decreases for an irreversible process. The Helmholtz free energy will decrease until the system reaches an equilibrium state, when it will have reached the minimum value consistent with the temperature and with the fact that no external work can be done.

For a system in equilibrium under hydrostatic pressure, we may rewrite Eq. (3.5) as

dA = -P dV - S dT, (3.6)

suggesting that the convenient variables in which to express the Helmholtz free energy are the volume and the temperature. In the case of equilibrium, we find from Eq. (3.6) the important relations

(∂A/∂V)_T = -P, (∂A/∂T)_V = -S. (3.7)

The first of these shows that, at constant temperature, the Helmholtz free energy has some of the properties of a potential energy, in that its negative derivative with respect to a coordinate (the volume) gives the force (the pressure). If A is known as a function of V and T, the first Eq. (3.7) gives a relation between P, V, and T, or the equation of state. From the second, we know the entropy in terms of temperature and volume, and differentiating with respect to temperature at constant volume, using Eq. (2.4), we can find the specific heat. Thus a knowledge of the Helmholtz free energy as a function of volume and temperature gives both the equation of state and specific heat, or complete information about the system.
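Equation (3.7) can be illustrated numerically: if A(V, T) is known, differentiating it recovers the equation of state and the entropy. The sketch below is an illustration, not part of the original text; it uses a hypothetical Helmholtz free energy A = -RT ln V - (3/2)RT ln T (the perfect monatomic gas with additive constants suppressed), and all function names are assumptions made for the example.

```python
# Numerical illustration of Eq. (3.7): P = -(dA/dV)_T and S = -(dA/dT)_V,
# for an assumed perfect-gas free energy A(V, T) = -RT ln V - (3/2) RT ln T.
import math

R = 8.314  # gas constant, J/(mol K)

def A(V, T):
    return -R * T * math.log(V) - 1.5 * R * T * math.log(T)

def pressure(V, T, h=1e-6):
    # central difference in V at constant T
    return -(A(V + h, T) - A(V - h, T)) / (2 * h)

def entropy(V, T, h=1e-6):
    # central difference in T at constant V
    return -(A(V, T + h) - A(V, T - h)) / (2 * h)

V0, T0 = 0.0224, 273.15           # molar volume and temperature near STP
print(pressure(V0, T0))           # reproduces RT/V, about 1.01e5
print(entropy(V0, T0))            # R ln V + (3/2) R (ln T + 1)
```

Differentiating the recovered entropy once more in T, as the text says, would give the specific heat, so this single function really does carry complete information about the system.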
Instead of using volume and temperature as independent variables, however, we more often wish to use pressure and temperature. In this case, instead of using the Helmholtz free energy, it is more convenient to use the Gibbs free energy G, defined by the equations

G = H - TS = U + PV - TS = A + PV. (3.8)

It will be seen that this function stands in the same relation to the enthalpy that the Helmholtz free energy does to the internal energy. We can now find the change of the Gibbs free energy G in any process. By definition, we have dG = dH - T dS - S dT. Using Eq. (3.2), this is dG = dU + P dV + V dP - T dS - S dT, and by Eq. (1) this is

dG ≤ V dP - S dT. (3.9)

For a system at constant pressure and temperature, we see that the Gibbs free energy is constant for a reversible process but decreases for an irreversible process, reaching a minimum value consistent with the pressure and temperature for the equilibrium state; just as for a system at constant volume the Helmholtz free energy is constant for a reversible process but decreases for an irreversible process. As with A, we can get the equation of state and specific heat from the derivatives of G, in equilibrium. We have

(∂G/∂P)_T = V, (∂G/∂T)_P = -S,

the first of these giving the volume as a function of pressure and temperature, the second the entropy as a function of pressure and temperature, from which we can find C_P by means of Eq. (2.6).

The Gibbs free energy G is particularly important on account of the many physical processes that occur at constant pressure and temperature. The most important of these processes is a change of phase, as the melting of a solid or the vaporization of a liquid. If unit mass of a substance changes phase reversibly at constant pressure and temperature, the total Gibbs free energy must be unchanged. That is, in equilibrium, the Gibbs free energy per unit mass must be the same for both phases.
On the other hand, at a temperature and pressure which do not correspond to equilibrium between two phases, the Gibbs free energies per unit mass will be different for the two phases. Then the stable phase under the existing conditions must be that which has the lower Gibbs free energy. If the system is actually found in the phase of higher Gibbs free energy, it will be unstable and will irreversibly change to the other phase. Thus, for instance, the Gibbs free energies of liquid and solid as functions of the temperature at atmospheric pressure are represented by curves which cross at the melting point. Below the melting point the solid has the lower Gibbs free energy. It is possible to have the liquid below the melting point: it is in the condition known as supercooling. But any slight disturbance is enough to produce a sudden and irreversible solidification, with reduction of Gibbs free energy, the final stable state being the solid. It is evident from these examples that the Gibbs free energy is of great importance in discussing physical and chemical processes. The Helmholtz free energy does not have any such application. We shall see later, however, that the methods of statistical mechanics lead particularly simply to a calculation of the Helmholtz free energy, and its principal value comes about in this way.

4. Methods of Deriving Thermodynamic Formulas. We have now introduced all the thermodynamic variables that we shall meet: P, V, T, S, U, H, A, G. The number of partial derivatives which can be formed from these is 8 × 7 × 6 = 336, since each partial derivative involves one dependent and two independent variables, which must all be different. A few of these are familiar quantities, as we have seen in Sec. 2, but the great majority are unfamiliar.
It can be shown,¹ however, that a relation can be found between any four of these derivatives, and certain of the thermodynamic variables. These relations are the thermodynamic formulas. Since there are 336 first derivatives, there are 336 × 335 × 334 × 333 ways of picking out four of these, so that the number of independent relations is this number divided by 4!, or 521,631,180 separate formulas. No other branch of physics is so rich in mathematical formulas, and some systematic method must be used to bring order into the situation. No one can be expected to derive any considerable number of the formulas or to keep them in mind. There are four principal methods of mathematical procedure used to derive these formulas, and in the present section we shall discuss them. Then in the next section we shall describe a systematic procedure for finding any particular formula that we may wish. The four mathematical methods of finding formulas are:

1. We have already seen that there are a number of differential relations of the form

dx = K dy + L dz, (4.1)

where K and L are functions of the variables. The most important relations of this sort which we have met are found in Eqs. (2.5), (3.2), (3.6), and (3.9), and are

dU = -P dV + T dS,
dH = V dP + T dS,
dA = -P dV - S dT,
dG = V dP - S dT. (4.2)

We have already seen in Eq. (2.6) how we can obtain formulas from such an expression. We can divide by the differential of one variable, say du, and indicate that the process is at constant value of another, say w. Thus we have

(∂x/∂u)_w = K(∂y/∂u)_w + L(∂z/∂u)_w. (4.3)

In doing this we must be sure that dx is the differential of a function of the state of the system; only in that case is it proper to write a partial derivative like (∂x/∂u)_w. Thus in particular we cannot proceed in this way with expressions involving the differentials of heat or work, which are not functions of the state, however superficially they look like Eq. (4.1).

¹ For the method of classifying thermodynamic formulas presented in Secs. 4 and 5, see P. W. Bridgman, "A Condensed Collection of Thermodynamic Formulas," Harvard University Press.
Using the method of Eq. (4.3), a very large number of formulas can be formed. A special case has been seen, for instance, in the Eqs. (2.7) and (2.8). This is the case in which u is one of the variables y or z, and w is the other. Thus, suppose u = y, w = z. Then we have

(∂x/∂y)_z = K, and similarly (∂x/∂z)_y = L. (4.4)

It is to be noted that, using Eq. (4.4), we may rewrite Eq. (4.1) in the form

dx = (∂x/∂y)_z dy + (∂x/∂z)_y dz, (4.5)

a form in which it becomes simply the familiar mathematical equation expressing a total differential in terms of partial derivatives.

2. Suppose we have two derivatives such as (∂x/∂u)_z, (∂y/∂u)_z, taken with respect to the same variable and with the same variable held constant. Since z is held fixed in both cases, they act like ordinary derivatives with respect to the variable u. But for ordinary derivatives we should have (dx/du)/(dy/du) = dx/dy. Thus, in this case we have

(∂x/∂u)_z / (∂y/∂u)_z = (∂x/∂y)_z. (4.6)

We shall find that the relation in Eq. (4.6) is of great service in our systematic tabulation of formulas in the next section. For to find all partial derivatives holding a particular z constant, we need merely tabulate the six derivatives of the variables with respect to a particular u, holding this z constant. Then we can find any derivative of this type by Eq. (4.6).

3. Let us consider Eq. (4.5), and set x constant, or dx = 0. Then we may solve for dy/dz, and since x is constant, this will be (∂y/∂z)_x. Doing this, we have

(∂y/∂z)_x = -(∂x/∂z)_y / (∂x/∂y)_z. (4.7)

Using Eq. (4.6), we can rewrite Eq. (4.7) in either of the two forms

(∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = -1, (4.8)

(∂y/∂z)_x = -(∂y/∂x)_z (∂x/∂z)_y. (4.9)

The reader should note carefully the difference between Eq. (4.8) and Eq. (4.6). At first glance they resemble each other, except for the difference of sign; but it will be noted that in Eq. (4.6) each of the three derivatives has the same variable held constant, while in Eq. (4.8) each one has a different variable held constant.
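The triple-product relation derived above, with its counterintuitive minus sign, can be verified numerically from any equation of state. The sketch below is an illustration with assumed function names: it forms the three derivatives of the perfect-gas law PV = RT by central differences, each with a different variable held constant, and checks that their product is -1.

```python
# Numerical check of the triple-product relation for one mole of perfect gas.
R = 8.314  # gas constant, J/(mol K)

def dP_dV_constT(V, T, h=1e-8):
    return (R * T / (V + h) - R * T / (V - h)) / (2 * h)

def dV_dT_constP(P, T, h=1e-4):
    return (R * (T + h) / P - R * (T - h) / P) / (2 * h)

def dT_dP_constV(V, P, h=1e-2):
    return ((P + h) * V / R - (P - h) * V / R) / (2 * h)

P, T = 101325.0, 300.0
V = R * T / P
product = dP_dV_constT(V, T) * dV_dT_constP(P, T) * dT_dP_constV(V, P)
print(product)   # ≈ -1
```

Each factor separately is an ordinary, unremarkable derivative; the minus sign appears only because a different variable is held constant in each.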
4. We start with Eq. (4.4). Then we use the fundamental theorem regarding second partial derivatives:

∂²x/∂z∂y = ∂²x/∂y∂z. (4.10)

Substituting from Eq. (4.4), this gives us

(∂K/∂z)_y = (∂L/∂y)_z. (4.11)

In Eq. (4.10), it is essential that x be a function of the state of the system, or of y and z. Four important relations result from applying Eq. (4.11) to the differential expressions (4.2). These are

(∂T/∂V)_S = -(∂P/∂S)_V,
(∂T/∂P)_S = (∂V/∂S)_P,
(∂P/∂T)_V = (∂S/∂V)_T,
(∂V/∂T)_P = -(∂S/∂P)_T. (4.12)

The Eqs. (4.12) are generally called Maxwell's relations. We have now considered the four processes used in deriving thermodynamic formulas. By combinations of them, any desired relation connecting first derivatives can be obtained. In the next section we consider the classification of these formulas.

5. General Classification of Thermodynamic Formulas. Bridgman¹ has suggested a very convenient method of classifying all the thermodynamic formulas involving first derivatives. As we have pointed out, a relation can be found between any four of the derivatives. Bridgman's method is then to write each derivative in terms of three standard derivatives, for which he chooses (∂V/∂T)_P, (∂V/∂P)_T, and C_P = (∂H/∂T)_P. These are chosen because they can be found immediately from experiment, the first two being closely related to thermal expansion and compressibility [see Eqs. (2.1) and (2.2)]. If now we wish a relation between two derivatives, we can write each in terms of these standard derivatives, and the relation will immediately become plain. Our task, then, is to find all but these three of the 336 first partial derivatives, in terms of these three. As a matter of fact, we do not have to tabulate nearly all of these, on account of the usefulness of Eq. (4.6). We shall tabulate all derivatives of the form (∂x/∂T)_P, (∂x/∂P)_T, (∂x/∂T)_V, and (∂x/∂T)_S. Then by application of Eq. (4.6), we can at once find any derivative whatever at constant P, constant T, constant V, or constant S.
We could continue the same thing for finding derivatives holding the other quantities fixed; but we shall not need such derivatives very often, and they are very easily found by application of methods (2) and (3) of the preceding section, and by the use of our table. We shall now tabulate these derivatives, later indicating the derivations of the only ones that are at all involved, and giving examples of their application. We shall be slightly more general than Bridgman, in that alternative forms are given for some of the equations in terms either of C_P or C_V.¹

TABLE 1-1. TABLE OF THERMODYNAMIC RELATIONS

Derivatives at constant P:
(∂V/∂T)_P (standard)
(∂S/∂T)_P = C_P/T
(∂U/∂T)_P = C_P - P(∂V/∂T)_P
(∂H/∂T)_P = C_P
(∂A/∂T)_P = -P(∂V/∂T)_P - S
(∂G/∂T)_P = -S

Derivatives at constant T:
(∂V/∂P)_T (standard)
(∂S/∂P)_T = -(∂V/∂T)_P
(∂U/∂P)_T = -T(∂V/∂T)_P - P(∂V/∂P)_T
(∂H/∂P)_T = V - T(∂V/∂T)_P
(∂A/∂P)_T = -P(∂V/∂P)_T
(∂G/∂P)_T = V

Derivatives at constant V:
(∂P/∂T)_V = -(∂V/∂T)_P / (∂V/∂P)_T
(∂S/∂T)_V = C_V/T = C_P/T + (∂V/∂T)_P² / (∂V/∂P)_T
(∂U/∂T)_V = C_V = C_P + T(∂V/∂T)_P² / (∂V/∂P)_T
(∂H/∂T)_V = C_P - [V - T(∂V/∂T)_P](∂V/∂T)_P / (∂V/∂P)_T
(∂A/∂T)_V = -S
(∂G/∂T)_V = -S - V(∂V/∂T)_P / (∂V/∂P)_T

Derivatives at constant S:
(∂P/∂T)_S = C_P / [T(∂V/∂T)_P]
(∂V/∂T)_S = [C_P(∂V/∂P)_T + T(∂V/∂T)_P²] / [T(∂V/∂T)_P]
(∂U/∂T)_S = -P(∂V/∂T)_S
(∂H/∂T)_S = VC_P / [T(∂V/∂T)_P]
(∂A/∂T)_S = -P(∂V/∂T)_S - S
(∂G/∂T)_S = VC_P / [T(∂V/∂T)_P] - S

¹ See reference under Sec. 4.

The formulas of Table 1-1 all follow in very obvious ways from the methods of Sec. 4, except perhaps the relation between C_P and C_V, used in the derivatives at constant V and constant S. To prove the relation between these two specific heats, we proceed as follows. We have

T dS = T(∂S/∂T)_V dT + T(∂S/∂V)_T dV = C_V dT + T(∂P/∂T)_V dV,
T dS = T(∂S/∂T)_P dT + T(∂S/∂P)_T dP = C_P dT - T(∂V/∂T)_P dP. (5.1)

We subtract the first of Eqs. (5.1) from the second, and set dV = 0, obtaining

(C_P - C_V) dT = T(∂V/∂T)_P dP.

Dividing by dT, this is

C_P - C_V = T(∂V/∂T)_P (∂P/∂T)_V = -T(∂V/∂T)_P² / (∂V/∂P)_T, (5.2)

the result used in the formulas of Table 1-1. The result of Eq. (5.2) is an important formula, and is the generalization holding for all substances of the familiar formula C_P - C_V = nR holding for perfect gases. It serves in any case to find C_P - C_V from the equation of state. Since (∂V/∂P)_T is always negative, we see that C_P is always greater than C_V.
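Equation (5.2) lets us compute C_P - C_V from an equation of state alone. As a numerical illustration (the function names and the van der Waals constants are assumptions made for the example), the sketch below recovers C_P - C_V = R for one mole of perfect gas, and a slightly larger value for a van der Waals gas.

```python
# Evaluating Eq. (5.2), C_P - C_V = T (dV/dT)_P (dP/dT)_V, from an equation
# of state P(V, T) alone; (dV/dT)_P is formed from the two derivatives of P.
R = 8.314  # gas constant, J/(mol K)

def cp_minus_cv(P_of_VT, V, T, h=1e-6):
    dP_dT = (P_of_VT(V, T + h) - P_of_VT(V, T - h)) / (2 * h)
    dP_dV = (P_of_VT(V + h * V, T) - P_of_VT(V - h * V, T)) / (2 * h * V)
    dV_dT = -dP_dT / dP_dV            # from the triple-product relation
    return T * dV_dT * dP_dT

ideal = lambda V, T: R * T / V
print(cp_minus_cv(ideal, 0.0224, 273.15))   # → R = 8.314 for the perfect gas

# A van der Waals gas (a, b are illustrative, roughly CO2-like, SI units):
a, b = 0.364, 4.27e-5
vdw = lambda V, T: R * T / (V - b) - a / V**2
print(cp_minus_cv(vdw, 0.0224, 273.15))     # slightly larger than R
```

Note that nothing about the heat capacities themselves was supplied: their difference really is fixed by the P-V-T surface, as Eq. (5.2) asserts.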
6. Comparison of Thermodynamic and Gas Scales of Temperature. In Chap. I, Sec. 4, we have discussed the thermodynamic temperature, introduced in the statement of the second law of thermodynamics, and we have mentioned that it can be proved that this is the same temperature that is measured by a gas thermometer. We are now in position to prove this fact. First, we must define a perfect gas in a way that could be applied experimentally without knowing any temperature scale except that furnished by the gas itself. We can define it as a gas which in the first place obeys Boyle's law: PV = constant at constant temperature, or PV = f(T), where T is the thermodynamic temperature, and f is a function as yet unknown. Secondly, it obeys what is called Joule's law: the internal energy U is independent of the volume at constant temperature. These assumptions can both be proved by direct experiment. We can certainly observe constancy of temperature without a temperature scale, so that we can verify Boyle's law. To check Joule's law, we may consider the free expansion of the gas. We let the gas expand irreversibly into a vacuum. It is assumed that the process is carried out adiabatically, and since there is no external work done, the internal energy is unchanged in the process. We allow the gas to come to equilibrium at its new volume, and observe whether the temperature is the same that it was originally, or different. If it is the same, then the gas is said to obey Joule's law. To check the mathematical formulation of this law, we note that the experiment of free expansion tells us directly that the temperature is independent of volume, at constant internal energy: (∂T/∂V)_U = 0. But by Eq. (4.9), we have

(∂U/∂V)_T = -(∂U/∂T)_V (∂T/∂V)_U = -C_V (∂T/∂V)_U = 0. (6.1)

Equation (6.1) states that the internal energy is independent of volume at constant temperature, the usual statement of Joule's law.
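Whether a gas obeys Joule's law can also be read off from its equation of state, through the general relation (∂U/∂V)_T = T(∂P/∂T)_V - P from Table 1-1. The sketch below is an illustration with assumed names and constants: for the perfect gas the right side vanishes, while for a van der Waals gas it equals a/V², so a van der Waals gas warms or cools on free expansion and does not obey Joule's law.

```python
# (dU/dV)_T = T (dP/dT)_V - P, evaluated from an equation of state P(V, T).
R = 8.314  # gas constant, J/(mol K)

def dU_dV_constT(P_of_VT, V, T, h=1e-6):
    dP_dT = (P_of_VT(V, T + h) - P_of_VT(V, T - h)) / (2 * h)
    return T * dP_dT - P_of_VT(V, T)

ideal = lambda V, T: R * T / V
a, b = 0.364, 4.27e-5                 # illustrative van der Waals constants
vdw = lambda V, T: R * T / (V - b) - a / V**2

print(dU_dV_constT(ideal, 0.0224, 300.0))   # ≈ 0 (Joule's law holds)
print(dU_dV_constT(vdw, 0.0224, 300.0))     # ≈ a/V^2, about 725
```

The a/V² term is just the attractive-interaction correction of the van der Waals model, which is what stores energy as the volume changes at constant temperature.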
Without further assumption about the gas, we can now prove that f(T) = constant × T, so that the pressure of a perfect gas at constant volume is proportional to the thermodynamic temperature, and if we use proper units, the gas scale of temperature is identical with the thermodynamic scale. Using Table 1-1, we have the important general relation

(∂U/∂V)_T = T(∂P/∂T)_V - P, (6.2)

giving the change of internal energy with volume of any substance at constant temperature. We set this equal to zero, on account of Joule's law. From the equation of state,

P = f(T)/V, (∂P/∂T)_V = f'(T)/V. (6.3)

Substituting Eq. (6.3) in Eq. (6.2), and canceling out a factor 1/V, we have

T f'(T) - f(T) = 0, or d ln f = d ln T, ln f(T) = ln T + const., f(T) = const. × T, (6.4)

which was to be proved. Instead of defining a perfect gas as we have done, by Boyle's law and Joule's law, we may prefer to assume that a thermodynamic temperature scale is known, and that the perfect gas satisfies the general gas law PV = const. × T. Then we can at once use the relation (6.2) to calculate the change of internal energy with volume at constant temperature, and find it to be zero. That is, we show directly by thermodynamics that Joule's law follows from the gas law, if that is stated in terms of the thermodynamic temperature.

CHAPTER III

STATISTICAL MECHANICS

Thermodynamics is a simple, general, logical science, based on two postulates, the first and second laws of thermodynamics. We have seen in the last chapter how to derive results from these laws, though we have not used them yet in our applications. But we have seen that they are limited. Typical results are like Eq. (5.2) in Chap. II, giving the difference of specific heats of any substance, C_P - C_V, in terms of derivatives which can be found from the equation of state. Thermodynamics can give relations, but it cannot derive the specific heat or equation of state directly. To do that, we must go to the statistical or kinetic methods.
Even the second law is simply a postulate, verified because it leads to correct results, but not derived from simpler mechanical principles as far as thermodynamics is concerned. We shall now take up the statistical method, showing how it can lead not only to the equation of state and specific heat, but to an understanding of the second law as well.

1. Statistical Assemblies and the Entropy. To apply statistics to any problem, we must have a great many individuals whose average properties we are interested in. We may ask, what are the individuals to which we apply statistics, in statistical mechanics? The answer is, they are a great many repetitions of the same experiment, or replicas of the same system, identical as far as all large-scale, or macroscopic, properties are concerned, but differing in the small-scale, or microscopic, properties which we cannot directly observe. A collection of such replicas of the same system is called a statistical assembly (or, following Gibbs, an ensemble). Our guiding principle in setting up an assembly is to arrange it so that the fluctuation of microscopic properties from one system to another of the assembly agrees with the amount of such fluctuation which would actually occur from one repetition to another of the same experiment.

Let us ask what the randomness that we associated with entropy in Chap. I means in terms of the assembly. A random system, or one of large entropy, is one in which the microscopic properties may be arranged in a great many different ways, all consistent with the same large-scale behavior. Many different assignments of velocity to individual molecules, for instance, can be consistent with the picture of a gas at high temperatures, while in contrast the assignment of velocity to molecules at the absolute zero is definitely fixed: all the molecules are at rest. Then
to represent a random state we must have an assembly which is distributed over many microscopic states, the randomness being measured by the wideness of the distribution.

We can make this idea more precise. Following Planck, we may refer to a particular microscopic state of the system as a complexion. We may describe an assembly by stating what fraction of the systems of the assembly is found in each possible complexion. We shall call this fraction, for the ith complexion, f_i, and shall refer to the set of f_i's as the distribution function describing the assembly. Plainly, since all systems must be in one complexion or another,

Σ_i f_i = 1. (1.1)

Then in a random assembly, describing a system of large entropy, there will be systems of the assembly distributed over a great many complexions, so that many f_i's will be different from zero, each one of these fractions being necessarily small. On the other hand, in an assembly of low entropy, systems will be distributed over only a small number of complexions, so that only a few f_i's will be different from zero, and these will be comparatively large. We shall now postulate a mathematical definition of entropy, in terms of the f_i's, which is large in the case of a random distribution, small otherwise. This definition is

S = -k Σ_i f_i ln f_i. (1.2)

Here k is a constant, called Boltzmann's constant, which will appear frequently in our statistical work. It has the same dimensions as entropy, or specific heat, that is, energy divided by temperature. Its value in absolute units is 1.379 × 10⁻¹⁶ erg per degree. This value is derived indirectly; using Eq. (1.2) for the entropy, we can derive the gas law and the gas constant in terms of k, thereby determining k from the experimental value of the gas constant. It is easy to see that Eq. (1.2) has the required property of being large for a randomly arranged system, small for one with no randomness. If there is no randomness at all, all values of f_i will be zero, except one, which will be unity.
But the function f ln f is zero when f is either zero or unity, so that the entropy in this case will be zero, its lowest possible value. On the other hand, if the system is a random one, many complexions will have f_i different from zero, equal to small fractions, so that their logarithms will be large negative quantities, and the entropy will be large and positive. We can see this more clearly if we take a simple case: suppose the assembly is distributed through W complexions, with equal fractions in each. The value of each f_i in these complexions is 1/W, while for other complexions f_i is zero. Then we have

S = -k W (1/W) ln (1/W) = k ln W. (1.3)

The entropy, in such a case, is proportional to the logarithm of the number of complexions in which systems of the assembly can be found. As this number of complexions increases, the distribution becomes more random or diffuse, and the entropy increases. Boltzmann¹ based his theory of the relation of probability to entropy on Eq. (1.3), rather than using the more general relation (1.2). He called W the thermodynamic probability of the state, expressing much as we have that a random state, which is inherently likely to be realized, will have a large value of W. Planck² has shown by the following simple argument that the logarithmic form of Eq. (1.3) is reasonable. Suppose the system consists of two parts, as for instance two different masses of gas, not connected with each other. In a given state, represented by a given assembly, let there be W₁ complexions of the first part of the system consistent with the macroscopic description of the state, and W₂ complexions of the second part.
Then, since the two parts of the system are independent of each other, there must be W₁W₂ complexions of the combined system, since each complexion of the first part can be joined to any one of the complexions of the second part to give a complexion of the combined system. We shall then find for the entropy of the combined system

S = k ln W₁W₂ = k ln W₁ + k ln W₂. (1.4)

But if we considered the first part of the system by itself, it would have an entropy S₁ = k ln W₁, and the second part by itself would have an entropy S₂ = k ln W₂. Thus, on account of the relation (1.3), we have

S = S₁ + S₂. (1.5)

But surely this relation must be true; in thermodynamics, the entropy of two separated systems is the sum of the entropies of the parts, as we can see directly from the second law, since the changes of entropy, dQ/T, in a reversible process, are additive. Then we can reverse the argument above. Equation (1.5) must be true, and if the entropy is a function of W, it can be shown that the only possible function consistent with the additivity of the entropy is the logarithmic function of Eq. (1.3).

¹ See for example, L. Boltzmann, "Vorlesungen über Gastheorie," Sec. 6, J. A. Barth.
² See for example, M. Planck, "Heat Radiation," Sec. 119, P. Blakiston's Sons & Company.
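Equations (1.2) and (1.3) are easy to check numerically. The sketch below is an illustration, not part of the original text (and it uses the modern SI value of Boltzmann's constant rather than the erg-per-degree value quoted above): it computes S = -k Σ f_i ln f_i for a uniform distribution over W complexions, reproducing k ln W, and shows that an uneven distribution over the same complexions gives a smaller entropy.

```python
# Eq. (1.2) in action: S = -k sum(f_i ln f_i), with the uniform case of
# Eq. (1.3), S = k ln W, and an uneven case for comparison.
import math

k = 1.380649e-23   # Boltzmann's constant, J/K (modern SI value)

def entropy(fractions):
    assert abs(sum(fractions) - 1.0) < 1e-12   # Eq. (1.1): fractions sum to 1
    return -k * sum(f * math.log(f) for f in fractions if f > 0)

W = 8
uniform = [1.0 / W] * W
uneven  = [0.3, 0.2, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05]

print(entropy(uniform) / k, math.log(W))    # both ≈ 2.079, i.e. S = k ln W
print(entropy(uneven) < entropy(uniform))   # True: concentration lowers S
```

The second comparison is the qualitative content of the next paragraph of the text: among all distributions confined to the same W complexions, the uniform one maximizes the entropy.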
(1.6) vanishes; and if we assume that the density is uniform, so that ln/ is really independent of / we can take it- out of the summation in Eq. (1.6) as a common factor, and the remaining term will vanish too, giving dS = 0. That is. fof uniform ffan^yj fl variation of the entropy for small variations of the asscn^^fv vfl-fllfihfifti **- necessary condition for a maximum of the entropy. A little further investigation would convince us that this really gives a maximum, not a minimum, of entropy, and that in fact Eq. (1.3) gives the absolute maxi- mum, the highest value of which S is capable, so long as only W complex- ions are represented in the assembly. The only way to get a still greater value of S would be to havo more terms in the summation, so that each individual / could bo even less. We have postulated a formula for the entrypy. How can we expeot, to prove that it is correct? We can do this onlv bv goiny bank to thn_ second law of thermodynamics^ showing that our entropy has the prop- erties ciemanciea Dy ttiat law, and that in forma of it thn Uw i a.t.ifinfj. We have already shown that, our formula for the entropy has one of the properties demanded ot the entropy: it is determined bv the Ht.fl.te of thn system. In statistical mechanics, the only thing we can mean by the state of the system is the statistical assembly, since this determines average or observable properties of all sorts, and our formula (1.2) for entropy is determined by the statistical assembly. Next we must show that our formula, represents a quantity that increases in an irreversible* process. This will be done by qualitative but valid reasoning in a later section. It will then remain to consider thermal equilibrium and reversi- ble processes, and to show that in such processes the change of entropy is dQ/T. 36 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. Ill 2. Complexions and the Phase Space. We wish to find how our formula for the entropy changes in an irreversible process. 
To do this, we must find how the f_i's change with time, or how systems of the assembly, as time goes on, change from one complexion to another. This is a problem in kinetics, and we shall not take it up quantitatively until the chapter on kinetic methods. For the present we shall be content with qualitative discussions. The first thing that we must do is to get a more precise definition of a complexion. We have a certain amount of information to guide us in making this definition. We are trying to make our definition of entropy agree with experience, and in particular we want the state of maximum entropy to be the stable, equilibrium state. But we have just seen that for an assembly distributed through W complexions, the state of maximum entropy is that in which equal numbers of systems are found in each complexion. This is commonly expressed by saying that complexions have equal a priori probability; that is, if we have no specific information to the contrary, we are as likely to find a system of an assembly in one complexion as in another, in equilibrium. Our definition of a complexion, then, must be consistent with this situation.

The method of defining complexions depends on whether we are treating our systems by classical, Newtonian mechanics or by quantum theory. First we shall take up classical mechanics, for that is more familiar. But later, when we describe the methods of quantum theory, we shall observe that that theory is more correct and more fundamental for statistical purposes. In classical mechanics, a system is described by giving the coordinates and velocities of all its particles. Instead of the velocities, it proves to be more desirable to use the momenta. With rectangular coordinates, the momentum associated with each coordinate is simply the mass of the particle times the corresponding component of velocity; with angular coordinates a momentum is an angular momentum; and so on.
If there are N coordinates and N momenta (as for instance the rectangular coordinates of N/3 particles, with their momenta), we can then visualize the situation by setting up a 2N dimensional space, called a phase space, in which the coordinates and momenta are plotted as variables, and a single point, called a representative point, gives complete information about the system. An assembly of systems corresponds to a collection of representative points, and we shall generally assume that there are so many systems in the assembly that the distribution of representative points is practically continuous in the phase space. Now a complexion, or microscopic state, of the system must correspond to a cell, or small region, of the phase space; to be more precise, it must correspond to a small volume of the phase space. We subdivide the whole phase space into small volume elements and call each volume element a complexion, saying that f_i, the fraction of systems of the assembly in a particular complexion, simply equals the fraction of all representative points in the corresponding volume element. The only question that remains is the shape and size of the volume elements representing complexions.

To answer this question, we must consider how points move in the phase space. We must know the time rates of change of all coordinates and momenta, in terms of the coordinates and momenta themselves. Newton's second law gives us the time rate of change of each momentum, stating that it equals the corresponding component of force, which is a function of the coordinates in a conservative system. The time rate of change of each coordinate is simply the corresponding velocity component, which can be found at once from the momentum. Thus we can find what is essentially the 2N dimensional velocity vector of each representative point.
This velocity vector is determined at each point of phase space and defines a rate of flow, the representative points streaming through the phase space as a fluid would stream through ordinary space. We are thus in a position to find how many points enter or leave each element of volume, or each complexion, per unit time, and therefore to find the rate at which the fraction of systems in that complexion changes with time.

It is now easy to prove, from the equations of motion, a general theorem called Liouville's theorem.¹ This theorem states, in mathematical language, the following fact: the swarm of points moves in such a way that the density of points, as we follow along with the swarm, never changes. The flow is like a streamline flow of an incompressible fluid, each particle of fluid always preserving its own density. This does not mean that the density at a given point of space does not change with time; in general it does, for in the course of the flow, first a dense part of the swarm, then a less dense one, may well be swept by the point in question, as if we had an incompressible fluid, but one whose density changed from point to point. It does mean, however, that we can find a very simple condition which is necessary and sufficient for the density at a given point of space to be independent of time: the density of points must be constant all along each streamline, or tube of flow, of the points. For then, no matter how long the flow continues, the portions of the swarm successively brought up to the point in question all have the same density, so that the density there can never change.

To find the condition for equilibrium, then, we must investigate the nature of the streamlines. For a periodic motion, a streamline will be closed, the system returning to its original state after a single period. This is a very special case, however; most motions of many particles are not periodic and their streamlines never close.
Rather, they wind around in a very complicated way, coming in the course of time arbitrarily close to every point of phase space corresponding to the same total energy (of course the energy cannot change with time, so that the representative point must stay in a region of constant energy in the phase space). Such a motion is called quasi-ergodic, and it can be shown to be the general type of motion, periodic motions being a rare exception. Then, from the statement in the last paragraph, we see that to have a distribution independent of time, we must have a density of points in phase space which is constant for all regions of the same energy. But on the other hand thermal equilibrium must correspond to a distribution independent of time, and we have seen that the state of maximum entropy is one in which all complexions have the same number of systems. These two statements are only compatible if each complexion corresponds to the same volume of phase space. For then a constant volume density of points, which by Liouville's theorem corresponds to a distribution independent of time, will at the same time correspond to a maximum entropy. We thus draw the important conclusion that regions of equal volume in phase space have equal a priori probability, or that a complexion corresponds to a quite definite volume of phase space. Classical mechanics, however, does not lead to any way of saying how large that volume is. Thus it cannot lead to any unique definition of the entropy; for the f_i's depend on how large a volume each complexion corresponds to, and they in turn determine the entropy.

¹ For proof, see for example, Slater and Frank, "Introduction to Theoretical Physics," pp. 365-366, McGraw-Hill Book Company, Inc., 1933.

3. Cells in the Phase Space and the Quantum Theory. Quantum mechanics starts out quite differently from classical mechanics.
It does not undertake to say how the coordinates and momenta of the particles change as time goes on. Rather, it is a statistical theory from the beginning: it sets up a statistical assembly, and tells us directly how that assembly changes with time, without the intermediate step of solving for the motion of individual systems by Newton's laws of motion. And it describes the assembly, from the outset, in terms of definite complexions, so that the problem of defining the complexions is answered as one of the postulates of the theory. It sets up quantum states, of equal a priori probability, and describes an assembly by giving the fraction of all systems in each quantum state. Instead of giving laws of motion, like Newton's second law, its fundamental equation is one telling how many systems enter or leave each quantum state per second. In particular, if equal fractions of the systems are found in all quantum states associated with the same energy, we learn that these fractions will not change with time; that is, in a steady or equilibrium state all the quantum states are equally occupied, or have equal a priori probabilities. We are then justified in identifying these quantum states with the complexions whose existence we have been assuming. When we deal with quantum statistics, f_i will refer to the fraction of all systems in the ith quantum state. This gives a definite meaning to the complexions, and leads to a definite numerical value for the entropy.

Quantum theory provides no unique way of setting up the quantum states, or the complexions. We can understand this much better by considering the phase space. Many features of the quantum theory can be described by dividing the phase space into cells of equal volume, and associating each cell with a quantum state. The volume of these cells is uniquely fixed by the quantum theory, but not their shape.
We can, for example, take simply rectangular cells, of dimensions Δq_1 along the axis representing the first coordinate, Δq_2 for the second coordinate, and so on up to Δq_N for the Nth coordinate, and Δp_1 to Δp_N for the corresponding momenta. Then there is a very simple rule giving the volume of such a cell: we have

Δq_i Δp_i = h, (3.1)

where h is Planck's constant, equal numerically to 6.61 × 10⁻²⁷ absolute units. Thus, with N coordinates, the 2N-dimensional volume of a cell is h^N.

We can equally well take other shapes of cells. A method which is often useful can be illustrated with a problem having but one coordinate q and one momentum p. Then in our two-dimensional phase space we can draw a curve of constant energy. Thus for instance consider a particle of mass m held to a position of equilibrium by a restoring force proportional to the displacement, so that its energy is

E = p²/2m + 2π²mν²q², (3.2)

where ν is the frequency with which it would oscillate in classical mechanics. The curves of constant energy are then ellipses in the p-q space, as we see by writing the equation in the form

p²/(2mE) + q²/(E/(2π²mν²)) = 1, (3.3)

the standard form for the equation of an ellipse of semiaxes √(2mE) and √(E/(2π²mν²)). Such an ellipse is shown in Fig. III-1. Then we can choose cells bounded by such curves of constant energy, such as those indicated in Fig. III-1. Since the area between curves must be h, it is plain that the nth ellipse must have an area nh, where n is an integer. The area of an ellipse of semiaxes a and b is πab; thus in this case we have an area of π√(2mE)·√(E/(2π²mν²)) = E/ν, so that the energy of the ellipse connected with a given integer n is given by

E_n = nhν. (3.4)

[Fig. III-1. Cells in phase space, for the linear oscillator. The shaded area, between two ellipses of constant energy, has an area h in the quantum theory.]

Another illustration of this method is provided by a freely rotating wheel of moment of inertia I. The natural coordinate to use to describe it is the angle θ, and the corresponding momentum p_θ is the angular momentum, Iω, where ω = dθ/dt is the angular velocity. If no torques act, the energy is wholly kinetic, equal to

E = p_θ²/2I. (3.5)

Then, as shown in Fig. III-2, lines of constant energy are straight lines at constant value of p_θ. Since θ goes from zero to 2π, and then the motion repeats, we use only values of the coordinate in this range. Then, if the cells are set up so that the area of each is h, we must have them bounded by the lines

p_θ = nh/2π, (3.6)

so that the energy associated with the nth line is

E_n = n²h²/(8π²I). (3.7)

[Fig. III-2. Cells in phase space, for the rotator. The shaded area has an area of h.]

With these forms of the cells, we can now understand one of the most fundamental statements of the quantum theory, the principle of uncertainty: it is impossible to regulate the coordinates and momenta of a system more accurately than to require that they lie somewhere within a given cell. Any attempt to be more precise, on account of the necessary clumsiness of nature, will result in a disturbance of the system just great enough to shift the representative points in an unpredictable way from one part of the cell to another. The best we can do in setting up an assembly, in other words, is to specify what fraction of the systems will be found in each quantum state or complexion, or to give the f_i's. This does not imply by any means, however, that it does not make sense to talk about the coordinates and momenta of particles with more accuracy than to locate the representative point in a given cell.
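The cell constructions above lend themselves to a quick numerical check. The following sketch (an illustration, not part of the text; the mass, frequency, and moment of inertia are arbitrary c.g.s. values) verifies that the constant-energy ellipse of the oscillator encloses an area E/ν, so that the nth ellipse of area nh has energy E_n = nhν as in Eq. (3.4), and evaluates the rotator energies of Eq. (3.7):

```python
import math

h = 6.61e-27  # Planck's constant in c.g.s. units, as quoted in the text


def oscillator_level(n, nu):
    """Energy of the nth ellipse of the linear oscillator, Eq. (3.4)."""
    return n * h * nu


def oscillator_area(E, m, nu):
    """Area pi*a*b of the ellipse of Eq. (3.3), with semiaxes
    a = sqrt(2mE) and b = sqrt(E / (2 pi^2 m nu^2)); it equals E/nu."""
    a = math.sqrt(2 * m * E)
    b = math.sqrt(E / (2 * math.pi ** 2 * m * nu ** 2))
    return math.pi * a * b


def rotator_level(n, I):
    """Energy of the nth bounding line p_theta = nh/2pi, Eq. (3.7)."""
    p_theta = n * h / (2 * math.pi)
    return p_theta ** 2 / (2 * I)


m, nu = 1.0e-23, 1.0e13          # arbitrary illustrative mass and frequency
E3 = oscillator_level(3, nu)
print(oscillator_area(E3, m, nu) / h)   # area of the third ellipse, in units of h
print(rotator_level(2, 1.0e-40))        # rotator energy for n = 2
```

The first printed value comes out 3 to within rounding, independent of the particular m and ν chosen, which is just what the quantization rule requires.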
There is nothing inherently impossible in knowing the coordinates and momenta of a system as accurately as we please; the restriction is only that we cannot prepare a system, or an assembly of systems, with as precisely determined coordinates and momenta as we might please. Since we may be interested in precise values of the momenta and coordinates of a system, there must be something in the mathematical framework of the theory to describe them. We must be able to answer questions of this sort: given that an assembly has a given fraction of its systems in each cell of phase space, what is the probability that a certain quantity, such as one of the coordinates, lies within a certain infinitesimal range of values? Put in another way, if we know that a system is in a given cell, what is the probability that its coordinates and momenta lie in definite ranges? The quantum theory, and specifically the wave mechanics, can answer such questions; and because it can, we are justified in regarding it as an essentially statistical theory. By experimental methods, we can insure that a system lies in a given cell of phase space. That is, we can prepare an assembly all of whose representative points lie in this single cell, but this is the nearest we can come to setting up a system of quite definite coordinates and momenta. Having prepared such an assembly, however, quantum theory says that the coordinates and momenta will be distributed in phase space in a definite way, quite independent of the way we prepared the assembly, and therefore quite unpredictable from the previous history of the system. In other words, all that the theory can do is to give us statistical information about a system, not detailed knowledge of exactly what it will do. This is in striking contrast to the classical mechanics, which allows precise prediction of the future of a system if we know its past history. The cells of the type described in Figs.
III-1 and III-2 have a special property: all the systems in such a quantum state have the same energy. The momenta and coordinates vary from system to system, roughly as if systems were distributed uniformly through the cell, as for example through the shaded area of either figure, though as a matter of fact the real distribution is much more complicated than this. But the energy is fixed, the same for all systems, and is referred to as an energy level. It is equal to some intermediate energy value within the cell in phase space, as computed classically. Thus for the oscillator, as a matter of fact, the energy levels are

E_n = (n + 1/2)hν, (3.8)

which, as we see from Eq. (3.4), is the energy value in the middle of the cell, and for a rotator the energy value is

E_n = n(n + 1)h²/(8π²I), (3.9)

approximately the mean value through the cell. The integer n is called the quantum number. The distribution of points in a quantum state of fixed energy is independent of time, and for that reason the state is called a stationary state. This is in contrast to other ways of setting up cells. For instance, with rectangular cells, we find in general that the systems in one state have a distribution of energies, and as time goes on systems jump at a certain rate from one state to another, having what are called quantum transitions, so that the number of systems in each state changes with time.

One can draw a certain parallel, or correspondence, between the jumping of systems from one quantum state to another, and the uniform flow of representative points in the phase space in classical mechanics. Suppose we have a classical assembly whose density in the phase space changes very slowly from point to point, changing by only a small amount in going from what would be one quantum cell to another.
Then we can set up a quantum assembly, the fraction of systems in each quantum state being given by the fraction of the classical systems in the corresponding cell of phase space. And the time rate of change of the fraction of systems in each quantum state will be given, to a good approximation, by the corresponding classical value. This correspondence breaks down, however, as soon as the density of the classical assembly changes greatly from cell to cell. In that case, if we set up a quantum assembly as before, we shall find that its time variation does not agree at all accurately with what we should get by use of our classical analogy.

Actual atomic systems obey the quantum theory, not classical mechanics, so that we shall be concerned with quantum statistics. The only cases in which we can use classical theory as an approximation are those in which the density in phase varies only a little from state to state, the case we have mentioned in the last paragraph. As a matter of fact, as we shall see later, this corresponds roughly to the limit of high temperature. Thus, we shall often find that classical results are correct at high temperatures but break down at low temperature. A familiar example of this is the theory of specific heat; we shall find others as we go on. We now understand the qualitative features of quantum statistics well enough so that in the next section we can go on to our task of understanding the nature of irreversible processes and the way in which the entropy increases with time in such processes.

4. Irreversible Processes. We shall start our discussion of irreversible processes using classical mechanics and Liouville's theorem. Let us try to form a picture of what happens when we start with a system out of equilibrium, with constant energy and volume, follow its irreversible change into equilibrium, and examine its final steady state.
To have a specific example, consider the approach to equilibrium of a perfect gas having a distribution of velocities which originally does not correspond to thermal equilibrium. Assume that at the start of an experiment, a mass of gas is rushing in one direction with a large velocity, as if it had just been shot into a container from a jet. This is far from an equilibrium distribution. The random kinetic energy of the molecules, which we should interpret as heat motion, may be very small and the temperature low, and yet they have a lot of kinetic energy on account of their motion in the jet. In the phase space, the density function will be large only in the very restricted region where all molecules have almost the same velocity, the velocity of the jet (that is, the equations

p_x1/m = p_x2/m = · · · = V_x,

where V_x is the x component of velocity of the jet, are almost satisfied by all points in the assembly), and all have coordinates near the coordinate of the center of gravity of the rushing mass of gas (that is, the equations

x_1 = x_2 = · · · = X,

where X is the x coordinate of the center of gravity of the gas, are also approximately satisfied). We see, then, that the entropy, as defined by -k Σ_i f_i ln f_i, will be small under these conditions.

But as time goes on, the distribution will change. The jet of molecules will strike the opposite wall of the container, and after bouncing back and forth a few times, will become more and more dissipated, with irregular currents and turbulence setting in. At first we shall describe these things by hydrodynamics or aerodynamics, but we shall find that the description of the flow gets more and more complicated with irregularities on a smaller and smaller scale.
Finally, with the molecules colliding with the walls and with each other, things will become extremely involved, some molecules being slowed down, some speeded up, the directions changed, so that instead of having most of the molecules moving with almost the same velocity and located at almost the same point of space, there will be a whole distribution of momentum, both in direction and magnitude, and the mass will cease its concentration in space and will be uniformly distributed over the container. There will now be a great many points of phase space representing states of the system which could equally well be this final state, so that the entropy will be large. And the increase of entropy has come about at the stage of the process where we cease to regard the complication in the motion as large-scale turbulence, and begin to classify it as randomness on a microscopic or atomic scale. Finally the gas will come to an equilibrium state, in which it no longer changes appreciably with time, and in this state it will have reached its maximum entropy consistent with its total energy.

This qualitative argument shows what we understand by an irreversible process and an increase of entropy: an assembly, originally concentrated in phase space, changes on account of the motion of the system in such a way that the points of the assembly gradually move apart, filling up larger and larger regions of phase space. This is likely, for there are many ways in which it can happen; while the reverse process, a concentration of points, is very unlikely, and we can for practical purposes say that it does not happen.

The statement we have just made seems at first to be directly contrary to Liouville's theorem, for we have just said that points originally concentrated become dispersed, while Liouville's theorem states that as we follow along with a point, the density never changes at all.
We can give an example used by Gibbs¹ in discussing this point. Suppose we have a bottle of fluid consisting of two different liquids, one black and one white, which do not mix with each other. We start with one black drop in the midst of the white liquid, corresponding to our concentrated assembly. Now we shake or stir the liquid. The black drop will become shaken into smaller drops, or be drawn out into thin filaments, which will become dispersed through the white liquid, finally forming something like an emulsion. Each microscopic black drop or filament is as black as ever, corresponding to the fact that the density of points cannot change in the assembly. But eventually the drops will become small enough and uniformly enough dispersed so that each volume element within the bottle will seem uniformly gray. This is something like what happens in the irreversible mixing of the points of an assembly. Just as a droplet of black fluid can break up into two smaller droplets, its parts traveling in different directions, so it can happen that two systems represented by adjacent representative points can separate and have quite different histories; one may be in position for certain molecules to collide, while the other may be just different enough so that these molecules do not collide at all, for example. Such chance events will result in very different detailed histories for the various systems of an assembly, even if the original systems of the assembly were quite similar. That is, they will result in representative points which were originally close together in phase space moving far apart from each other.

¹ J. W. Gibbs, "Elementary Principles in Statistical Mechanics," Chap. XII, Longmans, Green & Company.

From the example and the analogy we have used, we see that in an irreversible process the points of the original compact and orderly assembly gradually get dissipated and mixed up, with consequent increase of entropy.
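Gibbs's stirring analogy can be imitated numerically. The sketch below is an illustration of this argument, not something from the text; it uses the area-preserving "baker's transformation" of a unit square as a stand-in for the incompressible flow in phase space. Every point of the swarm simply moves, so the fine-grained density is carried along unchanged, yet an initially compact swarm is drawn out into filaments, and the entropy -k Σ f_i ln f_i computed from the fraction of points in each finite cell (with k = 1) rises toward its maximum:

```python
import math
import random


def bakers_map(q, p):
    """Area-preserving 'baker's' transformation of the unit square:
    each point is merely transported, so the fine-grained density of a
    swarm is unchanged, as Liouville's theorem requires."""
    if q < 0.5:
        return 2 * q, p / 2
    return 2 * q - 1, (p + 1) / 2


def cell_entropy(points, ncells=8):
    """Entropy -sum f_i ln f_i (k = 1), where f_i is the fraction of
    points in each cell of an ncells-by-ncells division of the square."""
    counts = {}
    for q, p in points:
        cell = (min(int(q * ncells), ncells - 1),
                min(int(p * ncells), ncells - 1))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())


random.seed(0)
# a compact swarm: all representative points inside a single cell
swarm = [(random.uniform(0, 0.1), random.uniform(0, 0.1)) for _ in range(20000)]
for step in range(7):
    print(step, round(cell_entropy(swarm), 3))   # climbs toward ln 64
    swarm = [bakers_map(q, p) for q, p in swarm]
```

The cell-counted entropy starts at zero (all points in one cell) and grows as the filaments become finer than a cell, approaching ln 64 for the 8 × 8 division, even though no individual point ever changes the density of its own neighborhood.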
Now let us see how the situation is affected when we consider the quantum theory and the finite size of cells in phase space. Our description of the process will depend a good deal on the scale of the mixing involved in the irreversible process. So long as the mixing is on a large scale, by Liouville's theorem, the points that originally were in one cell will simply be moved bodily to another cell, so that the contribution of these points to -k Σ_i f_i ln f_i will be the same as in the original distribution, and the entropy will be unchanged. The situation is very different, however, when the distribution as we should describe it by classical mechanics involves a set of filaments, of different densities, on a scale small compared to a cell. Then the quantum f_i, rather than equaling the classical value, will be more nearly the average of the classical values through the cell, leading to an increase of entropy, at the same time that the average or quantum density begins to disobey Liouville's theorem.

It is at this same stage of the process that it becomes really impossible to reverse the motion. It is a well-known result of Newton's laws that if, at a given instant, all the positions of all particles are left unchanged, but all velocities are reversed in direction, the whole motion will reverse, and go back over its past history. Thus every motion is, in theory, reversible. What is it that in practice makes some motions reversible, others irreversible? It is simply the practicability of setting up the system with reversed velocities. If the distribution of velocities is on a scale large enough to see and work with, there is nothing making a reversal of the velocities particularly hard to set up. With our gas, we could suddenly interpose perfectly reflecting surfaces normal to the various parts of the jet of gas, reversing the velocities on collision, or could adopt some such device.
But if the distribution of velocities is on too small a scale to see and work with, we have no hope of reversing the velocities experimentally. Considering our emulsion of black and white fluid, which we have produced by shaking, there is no mechanical reason why the fluid could not be unshaken, by exactly reversing all the motions that occurred in shaking it. But nobody would be advised to try the experiment.

It used to be considered possible to imagine a being of finer and more detailed powers of observation than ours, who could regulate systems on a smaller scale than we could. Such a being could reverse processes that we could not; to him, the definition of a reversible process would be different from what it is to us. Such a being was discussed by Maxwell and is often called "Maxwell's Demon." Is it possible, we may well ask, to imagine demons of any desired degree of refinement? If it is, we can make any arbitrary process reversible, keep its entropy from increasing, and the second law of thermodynamics will cease to have any significance. The answer to this question given by the quantum theory is No. An improvement in technique can carry us only a certain distance, a distance practically reached in plenty of modern experiments with single atoms and electrons, and no conceivable demon, operating according to the laws of nature, could carry us further. The quantum theory gives us a fundamental size of cell in the phase space, such that we cannot regulate the initial conditions of an assembly on any smaller scale. And this fundamental cell furnishes us with a unique way of defining entropy and of judging whether a given process is reversible or irreversible.

5. The Canonical Assembly. In the preceding section, we have shown that our entropy, as defined in Eq.
(1.2), has one of the properties of the physical entropy: it increases in an irreversible process, for it increases whenever the assembly becomes diffused or scattered, and this happens in irreversible processes. We must next take up thermal equilibrium, finding first the correct assembly to describe the density function in thermal equilibrium, and then proving from this density function that our entropy satisfies the condition dS = dQ/T for a reversible process. From Liouville's theorem, we have one piece of information about the assembly: in order that it may be independent of time, the quantity f_i must be a function only of the energy of the system. We let E_i be the energy of a system in the ith cell, choosing for this purpose the type of quantum cells representing stationary states of the energy. We wish to have f_i a function of E_i, but we do not yet know how to find this function.

The essential method which we use is the following. We have seen that in an irreversible process, the entropy tends to increase to a maximum, for an assembly of isolated systems. If all systems of the assembly have the same energy, then the only cells of phase space to which systems can travel in the course of the irreversible process are cells of this same energy, a finite number. The distribution of largest entropy in such a case, as we have seen in Sec. 1, is that in which systems are distributed with uniform density through all the available cells. This assembly is called the microcanonical assembly, and it satisfies our condition that the density be a function of the energy only: all the f_i's of the particular energy represented in the assembly are equal, and all other f_i's are zero. But it is too specialized for our purposes. For thermal equilibrium, we do not demand that the energy be precisely determined; we demand rather that the temperature of all systems of the assembly be the same. This can be interpreted most properly in the following way. We allow each system of the assembly to be in contact with a temperature bath of
We allow eacfr system of the assembly to be in contact with a temperature bath of SBC. 51 STATISTICAL MECHANICS 4? the required temperature, a body of very large heat capacity held at the desired temperature. The systems of the assembly are then not isolated. Rather, they can change their energy by interaction with the temperature bath. Thus, even if we started out with an assembly of systems all of the same energy, some would have their energies increased, some decreased, by interaction with the bath, and the final stable assembly would havo a whole distribution of energies. There would certainly be a definite aver- age energy of the assembly, however; with a bath of a given temperature, it is obvious that systems of abnormally low energy will tend to gain energy, those of abnormally high energy to lose energy, by the interaction. To find the final equilibrium statc f then, wo may ask this question: what is the assembly of systems w r hich has the maximum entropy, subject onT7 to the condition that its moan energy havo a given value?.. It seems most reasonable that this will he the assembly which will.. bo tho final result, of the irreversible contact of any group of systems with a largo temperature The 1 assembly that results from these conditions is called tho canonical assembly! Lot us formulate tho conditions which it must satisfy. It must bo the assembly for which S = ~k2\f t In / t is a maximum, subject (o a constant mean energy. But we can find the mean energy immedi- ately in terms of our distribution function/,. In tho ?th coll, u system has energy E % . The fraction f v of all systems will be found in this coll. Hence the weighted mean of the energies of all systems is \ (5.1) This quantity must be held constant in varying the / t 's. Also, as we saw- in Eq. (1.1), the quantity ]^/ t equals unity. This must always be satis- fled, no matter how the / t 's vary. We can restate the conditions, by finding dS and dU: we must have , (In/. 
+ 1), (5.2)

for all sets of df_i's for which simultaneously

dU = 0 = Σ_i df_i E_i, (5.3)

and

0 = Σ_i df_i. (5.4)

On account of Eq. (5.4), we can rewrite Eq. (5.2) in the form

dS = -k Σ_i df_i ln f_i. (5.5)

The set of simultaneous equations (5.3), (5.4), (5.5) can be handled by the method called undetermined multipliers: the most general value which ln f_i can have, in order that dS should be zero for any set of df_i's for which Eqs. (5.3) and (5.4) are satisfied, is a linear combination of the coefficients of df_i in Eqs. (5.3) and (5.4), with arbitrary coefficients:

ln f_i = a + bE_i. (5.6)

For if Eq. (5.6) is satisfied, Eq. (5.5) becomes

dS = -ka Σ_i df_i - kb Σ_i df_i E_i, (5.7)

which is zero for any values of df_i for which Eqs. (5.3) and (5.4) are satisfied. The values of f_i for the canonical assembly are determined by Eq. (5.6). It may be rewritten

f_i = e^a e^{bE_i}. (5.8)

Clearly b must be negative; for ordinary systems have possible states of infinite energy, though not of negatively infinite energy, and if b were positive, f_i would become infinite for the states of infinite energy, an impossible situation. We may easily evaluate the constant a in terms of b, from the condition Σ_i f_i = 1. This gives at once e^a = 1/Σ_i e^{bE_i}, so that

f_i = e^{bE_i} / Σ_j e^{bE_j}. (5.9)

If the assembly (5.9) represents thermal equilibrium, the change of entropy when a certain amount of heat is absorbed in a reversible process should be dQ/T. The change of entropy in any process in thermal equilibrium, by Eqs. (5.5) and (5.9), is

dS = -k Σ_i df_i ln f_i = -k Σ_i df_i (bE_i - ln Σ_j e^{bE_j}) = -kb Σ_i E_i df_i, (5.10)

using Eq. (5.4). Now consider the change of internal energy. This is

dU = Σ_i E_i df_i + Σ_i f_i dE_i. (5.11)

The first term in Eq. (5.11) arises when the external forces stay constant, resulting in constant values of E_i, but there is a change in the assembly, meaning a shift of molecules from one position and velocity to another. This change of course is different from that considered in Eq.
(5.3), for that referred to an irreversible approach to equilibrium, while this refers to a change from one equilibrium state to another of different energy. Such a change of molecules on a molecular scale is to be interpreted as an absorption of heat. The second term, however, comes about when the f_i's and the entropy do not change, but the energies of the cells themselves change, on account of changes in external forces and in the potential energy. This is to be interpreted as external work done on the system, the negative of the work done by the system. Thus we have

dQ = Σ_i E_i df_i, dW = -Σ_i f_i dE_i. (5.12)

Combining Eq. (5.12) with Eq. (5.11) gives us the first law, dU = dQ - dW. Combining with Eq. (5.10), we have

dS = -kb dQ. (5.13)

Equation (5.13), stating the proportionality of dS and dQ for a reversible process, is a statement of the second law of thermodynamics for a reversible process, if we have

b = -1/kT. (5.14)

Using Eq. (5.14), we can identify the constants in Eq. (5.9), obtaining as the representation of the canonical assembly

f_i = e^{-E_i/kT} / Σ_j e^{-E_j/kT}. (5.15)

It is now interesting to compute the Helmholtz free energy A = U - TS. This, using Eqs. (5.1) and (5.15), is

A = Σ_i f_i (E_i + kT ln f_i) = Σ_i f_i (E_i - E_i - kT ln Σ_j e^{-E_j/kT}) = -kT ln Σ_i e^{-E_i/kT}, (5.16)

or

A = -kT ln Z, where Z = Σ_i e^{-E_i/kT}. (5.17)

Using Eq. (5.17), we may rewrite the formula (5.15) as

f_i = e^{(A - E_i)/kT}. (5.18)

The result of Eq. (5.17) is, for practical purposes, the most important result of statistical mechanics. For it gives a perfectly direct and straightforward way of deriving the Helmholtz free energy, and hence the equation of state and specific heat, of any system, if we know its energy as a function of coordinates and momenta. The sum of Eq. (5.17), which we have denoted by Z, is often called the partition function. Often it is useful to find the entropy, internal energy, and specific heat directly from the partition function, without separately computing the Helmholtz free energy.
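The chain of relations Z → A → U, S in Eqs. (5.15)-(5.17) can be traced numerically. The following sketch (not from the text) does so for a hypothetical two-level system with energies 0 and 1, in units where k = 1, and checks the identity A = U - TS:

```python
import math

# Illustrative sketch: canonical assembly for an assumed two-level system,
# in units where Boltzmann's constant k = 1.
def canonical(energies, T):
    """Return the partition function Z and the fractions f_i of Eq. (5.15)."""
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)
    return Z, [w / Z for w in weights]

def thermodynamics(energies, T):
    """Helmholtz free energy, internal energy, and entropy from Z."""
    Z, f = canonical(energies, T)
    A = -T * math.log(Z)                                 # Eq. (5.17), k = 1
    U = sum(fi * E for fi, E in zip(f, energies))        # Eq. (5.1)
    S = -sum(fi * math.log(fi) for fi in f)              # S = -k sum f_i ln f_i
    return A, U, S

A, U, S = thermodynamics([0.0, 1.0], T=0.5)
assert abs(A - (U - 0.5 * S)) < 1e-12   # A = U - TS holds to rounding error
```

The assertion succeeds because (5.16) is an algebraic identity: substituting the canonical f_i into the entropy sum reproduces U/T + k ln Z exactly.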
For the entropy, using S = -(∂A/∂T)_V, we have

S = ∂(kT ln Z)/∂T = k ln Z + kT ∂(ln Z)/∂T. (5.19)

For the internal energy, U = A + TS, we have

U = -kT ln Z + kT ln Z + kT² ∂(ln Z)/∂T = kT² ∂(ln Z)/∂T, (5.20)

where the last form is often useful. For the specific heat at constant volume, we may use either C_V = (∂U/∂T)_V or C_V = T(∂S/∂T)_V. From the latter, we have

C_V = T ∂²(kT ln Z)/∂T². (5.21)

We have stated our definitions of entropy, partition function, and other quantities entirely in terms of summations. Often, however, the quantity f_i changes only slowly from cell to cell; in this case it is convenient to replace the summations by integrations over the phase space. We recall that all cells are of the same volume, h^n, if there are n coordinates and n momenta in the phase space. Thus the number of cells in a volume element dq_1 . . . dq_n dp_1 . . . dp_n of phase space is that volume divided by h^n. Then the partition function becomes

Z = (1/h^n) ∫ . . . ∫ e^{-E/kT} dq_1 . . . dp_n, (5.22)

a very convenient form for such problems as finding the partition function of a perfect gas.

CHAPTER IV

THE MAXWELL-BOLTZMANN DISTRIBUTION LAW

In most physical applications of statistical mechanics, we deal with a system composed of a great number of identical atoms or molecules, and are interested in the distribution of energy between these molecules. The simplest case, which we shall take up in this chapter, is that of the perfect gas, in which the molecules exert no forces on each other. We shall be led to the Maxwell-Boltzmann distribution law, and later to the two forms of quantum statistics of perfect gases, the Fermi-Dirac and Einstein-Bose statistics.

1. The Canonical Assembly and the Maxwell-Boltzmann Distribution. Let us assume a gas of N identical molecules, and let each molecule have n degrees of freedom. That is, n quantities are necessary to specify the configuration completely. Ordinarily, three coordinates are needed to locate each atom of the molecule, so that n is three times the number of atoms in a molecule.
In all, then, Nn coordinates are necessary to describe the system, so that the classical phase space has 2Nn dimensions. It is convenient to think of this phase space as consisting of N subspaces, each of 2n dimensions, a subspace giving just the variables required to describe a particular molecule completely. Using the quantum theory, each subspace can be divided into cells of volume h^n, and a state of the whole system is described by specifying which quantum state each molecule is in, in its own subspace. The energy of the whole system is the sum of the energies of the N molecules, since for a perfect gas there are no forces between molecules, or terms in the energy depending on more than a single molecule. Thus we have

E = Σ_{i=1}^{N} ε^{(i)}, (1.1)

where ε^{(i)} is the energy of the ith molecule. If the ith molecule is in the k_i th cell of its subspace, let its energy be ε_{k_i}. Then we can describe the energy of the whole system by the set of k_i's. Now, in the canonical assembly, the fraction of all systems for which each particular molecule, as the ith, is in a particular state, as the k_i th, is

[e^{-ε_{k_1}/kT} / Σ_{k_1} e^{-ε_{k_1}/kT}] . . . [e^{-ε_{k_N}/kT} / Σ_{k_N} e^{-ε_{k_N}/kT}]. (1.2)

It is now interesting to find the fraction of all systems in which a particular molecule, say the ith, is in the k_i th state, independent of what other molecules may be doing. To find this, we merely sum the quantity (1.2) over all possible values of the k's of other molecules. The numerator of each separate fraction in Eq. (1.2), when summed, will then equal the denominator and will cancel, leaving only

e^{-ε_{k_i}/kT} / Σ_{k_i} e^{-ε_{k_i}/kT} (1.3)

as the fraction of all systems of the assembly in which the ith molecule is in the k_i th state, or as the probability of finding the ith molecule in the k_i th state. Since all molecules are alike, we may drop the subscript i in Eq.
(1.3), saying that the probability of finding any particular molecule in the kth state is

e^{-ε_k/kT} / Σ_l e^{-ε_l/kT}. (1.4)

Equation (1.4) expresses what is called the Maxwell-Boltzmann distribution law. If Eq. (1.4) gives the probability of finding any particular molecule in the kth state, it is clear that it also gives the fraction of all molecules to be found in that state, averaged through the assembly. The Maxwell-Boltzmann distribution law can be used for many calculations regarding gases; in a later chapter we shall take up its application to the rotational and vibrational levels of molecules. For the present, we shall describe only its use for monatomic gases, in which there is only translational energy of the molecules, no rotation or vibration. In this case, as we shall show in the next paragraph, the energy levels of a single molecule are so closely spaced that we can regard them as continuous and can replace our summations by integrals. We shall have three coordinates of space describing the position of the molecule, and three momenta, p_x, p_y, p_z, equal to the mass m times the components of velocity, v_x, v_y, v_z. The energy will be the sum of the kinetic energy, ½mv² = p²/2m, and the potential energy, which we shall denote by φ(x, y, z) and which may come from external gravitational, electrostatic, or other types of force field. Then the fraction of molecules in the range of coordinates and momenta dx dy dz dp_x dp_y dp_z will be

e^{-(p²/2m + φ)/kT} dx dy dz dp_x dp_y dp_z / ∫∫∫∫∫∫ e^{-(p²/2m + φ)/kT} dx dy dz dp_x dp_y dp_z. (1.5)

In the next section we shall derive some simple consequences from this form of the Maxwell-Boltzmann distribution for perfect monatomic gases. In the last paragraph we have used the fact that the translational energy levels of a perfect monatomic gas are very closely spaced, according to the quantum theory.
We can see this as follows, limiting ourselves to a gas in the absence of an external force field. Each molecule will have a six-dimensional phase space. Consider one pair of variables, as x and p_x. Since no forces act on a molecule, the momentum p_x stays constant during its motion, which must take place in a range X along the x axis, if the gas is confined to a box of sides X, Y, Z along the three coordinates. Thus the area enclosed by the path of the particle in the x-p_x section of phase space is p_x X, which must be equal to an integer n_x times h. Then we have

p_x = n_x h/X, p_y = n_y h/Y, p_z = n_z h/Z, (1.6)

where the n's are integers, which in this case can be positive or negative (since momenta can be positive or negative). The energy of a molecule is then

ε = p²/2m = (h²/2m)(n_x²/X² + n_y²/Y² + n_z²/Z²). (1.7)

To get an idea of the spacing of energy levels, let us see how many levels are found below a given energy ε. We may set up a momentum space, in which p_x, p_y, p_z are plotted as variables. Then Eq. (1.6) states that a lattice of points can be set up in this space, one to a volume h³/V, where V = XYZ is the volume of the container, each point corresponding to an energy level. The equation

p²/2m = ε (1.8)

is the equation of a sphere of radius p = √(2mε) in this space, and the number of states with energy less than ε equals the number of points within the sphere, or its volume, (4π/3)(2mε)^{3/2}, times the number of points per unit volume, or V/h³. Thus the number of states with energy less than ε is

(4π/3)(2mε)^{3/2} V/h³, (1.9)

and the number of states between ε and ε + dε, differentiating, is

(2πV/h³)(2m)^{3/2} ε^{1/2} dε. (1.10)

The average energy between successive states is the reciprocal of Eq. (1.10), or

h³ / [2πV(2m)^{3/2} ε^{1/2}]. (1.11)

Let us see what this is numerically, in a reasonable case. We take a helium atom, with mass 6.63 × 10⁻²⁴ gm., in a volume of 1 cc., with an energy of k = 1.379 × 10⁻¹⁶ erg, which it would have at a fraction of a degree absolute.
Using h = 6.61 × 10⁻²⁷, this gives for the energy difference between successive states the quantity 8.1 × 10⁻³⁸ erg, a completely negligible energy difference. Thus we have justified our statement that the energy levels for translational energy of a perfect gas are so closely spaced as to be essentially continuous.

2. Maxwell's Distribution of Velocities. Returning to our distribution law (1.5), let us first consider the case where there is no potential energy, so that the distribution is independent of position in space. Then the fraction of all molecules for which the momenta lie in dp_x dp_y dp_z is

e^{-p²/2mkT} dp_x dp_y dp_z / ∫∫∫ e^{-p²/2mkT} dp_x dp_y dp_z, (2.1)

where p² stands for p_x² + p_y² + p_z². The integral in the denominator can be factored and written in the form

∫_{-∞}^{∞} e^{-p_x²/2mkT} dp_x ∫_{-∞}^{∞} e^{-p_y²/2mkT} dp_y ∫_{-∞}^{∞} e^{-p_z²/2mkT} dp_z. (2.2)

Each factor is of the form ∫ e^{-au²} du, where a = 1/2mkT. We shall meet many integrals of this type before we are through, and we may as well give the formulas for them here. We have

∫_0^∞ u^n e^{-au²} du = (1/2)√(π/a) for n = 0,
 = 1/(2a) for n = 1,
 = (1/4)√(π/a³) for n = 2,
 = 1/(2a²) for n = 3,
 = (3/8)√(π/a⁵) for n = 4,
 = 1/a³ for n = 5, etc. (2.3)

Starting with ∫ e^{-au²} du for the even powers of n, or with ∫ u e^{-au²} du for the odd powers, each integral can be found from the one above it by differentiation with respect to a, by means of which the table can be extended. To get the integral from -∞ to ∞, the result with the even powers is twice the integral from 0 to ∞, and for the odd powers of course it is zero. Using the integrals (2.3), the quantity (2.2) becomes (2πmkT)^{3/2}. Thus we may rewrite Eq. (2.1) as follows: the fraction of molecules with momentum in the range dp_x dp_y dp_z is

(2πmkT)^{-3/2} e^{-p²/2mkT} dp_x dp_y dp_z. (2.4)

Equation (2.4) is one form of the famous Maxwell distribution of velocities.
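The entries of the table (2.3) are easy to spot-check numerically. The following sketch (an illustration, not from the text) compares the first four closed forms, with a = 1, against crude midpoint integration:

```python
import math

# Midpoint-rule evaluation of integral from 0 to infinity of u^n exp(-a u^2) du;
# the tail beyond u = 10 is utterly negligible for a = 1.
def gauss_moment(n, a=1.0, du=1e-3, umax=10.0):
    steps = int(umax / du)
    return sum(((i + 0.5) * du) ** n * math.exp(-a * ((i + 0.5) * du) ** 2) * du
               for i in range(steps))

# closed forms from the table (2.3), with a = 1
closed_forms = {
    0: 0.5 * math.sqrt(math.pi),   # (1/2) sqrt(pi/a)
    1: 0.5,                        # 1/(2a)
    2: 0.25 * math.sqrt(math.pi),  # (1/4) sqrt(pi/a^3)
    3: 0.5,                        # 1/(2a^2)
}
for n, exact in closed_forms.items():
    assert abs(gauss_moment(n) - exact) < 1e-5
```

Differentiating the numerical integrand with respect to a reproduces the same recursion that generates the table, which is why checking the first few rows suffices.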
Often it is useful to know, not the fraction of molecules whose vector velocity is within certain limits, but the fraction for which the magnitude of the velocity is within certain limits. Thus let v be the magnitude of the velocity:

v = p/m. (2.5)

Then we may ask, what fraction of the molecules have a speed between v and v + dv, independent of direction? To answer this question we consider the distribution of points in momentum space. The volume of momentum space corresponding to velocities between v and v + dv is the volume of a spherical shell, of radii mv and m(v + dv). Thus it is 4π(mv)² d(mv). We must substitute this volume for the volume dp_x dp_y dp_z of Eq. (2.4). Then we find that the fraction of molecules for which the magnitude of the velocity is between v and v + dv is

4π (m/2πkT)^{3/2} v² e^{-mv²/2kT} dv. (2.6)

This is the more familiar form of Maxwell's distribution law. We give a graph of the function (2.6) in Fig. IV-1. On account of the factor v², the function is zero for zero speed; and on account of the exponential it is zero for infinite speed. In between, there is a maximum, which is easily found by differentiation and comes at v = √(2kT/m). That is, the maximum, and in fact the whole curve, shifts outward to larger velocities as the temperature increases. From Maxwell's distribution of velocities, either in the form (2.4) or (2.6), we can easily find the mean kinetic energy of a molecule at temperature T. To find this, we multiply the kinetic energy p²/2m by the fraction of molecules in a given range dp_x dp_y dp_z, and integrate over all values of momenta, to get the weighted mean. Thus we have

Mean kinetic energy = (2πmkT)^{-3/2} ∫∫∫ (p²/2m) e^{-p²/2mkT} dp_x dp_y dp_z
 = (2πmkT)^{-3/2} (∫∫∫ (p_x²/2m) e^{-p²/2mkT} dp_x dp_y dp_z + similar terms in p_y², p_z²)
 = 3(½kT) = (3/2)kT. (2.7)

The formula (2.7) for the kinetic energy of a molecule of a perfect gas leads to a result called the equipartition of energy.
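The result (2.7) can also be checked by direct sampling. The Maxwell distribution (2.4) makes the three momentum components independent Gaussians of variance mkT, so the following Monte Carlo sketch (in units where m = k = T = 1; the sample size and seed are arbitrary choices) should give a mean kinetic energy near 3/2:

```python
import math
import random

# Monte Carlo illustration of Eq. (2.7): sample momenta from the Maxwell
# distribution (2.4) and average the kinetic energy p^2/2m (m = k = T = 1).
random.seed(0)
N = 200_000
ke_sum = 0.0
for _ in range(N):
    px = random.gauss(0.0, 1.0)   # each component has variance mkT = 1
    py = random.gauss(0.0, 1.0)
    pz = random.gauss(0.0, 1.0)
    ke_sum += 0.5 * (px * px + py * py + pz * pz)

mean_ke = ke_sum / N
assert abs(mean_ke - 1.5) < 0.02   # mean kinetic energy = (3/2)kT
```

Each of the three quadratic terms contributes ½kT on the average, which is the equipartition result discussed next.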
Each molecule has three coordinates, or three degrees of freedom. On the average, each of these will have one-third of the total kinetic energy, as we can see if we find the average, not of (p_x² + p_y² + p_z²)/2m, but of the part p_x²/2m associated with the x coordinate. Thus each of these degrees of freedom has on the average the energy ½kT. The energy, in other words, is equally distributed between the degrees of freedom, each one having the average energy ½kT, and this is called the equipartition of energy. Mathematically, as an examination of our proof shows, equipartition is a result of the fact that the energy associated with each degree of freedom is proportional to the square of the corresponding momentum or coordinate. Any quantity which is found in the energy only as a square will be found to have a mean energy of ½kT, provided the energy levels are continuously enough spaced that summations can be replaced by integrations. This applies not only to the momenta associated with translation of a single atom, but to such quantities as the angular momentum of a rotating molecule (whose kinetic energy is p²/2I), if the rotational levels are spaced closely. It applies as well to the coordinate, in the case of a linear oscillator, whose restoring force is -kx and potential energy kx²/2; the mean potential energy of such an oscillator is ½kT, if the levels are spaced closely enough, though in many physical cases which we shall meet the spacing is not close enough to replace summations by integrations.

FIG. IV-1. Maxwell's distribution of velocities, giving the fraction of molecules whose velocity is between v and v + dv, in a gas at temperature T.

3. The Equation of State and Specific Heat of Perfect Monatomic Gases.
Having found the distribution of molecular velocities, we can calculate the equation of state and specific heat of perfect monatomic gases by elementary methods, postponing until a later chapter the direct statistical calculation by means of the partition function. We shall again limit ourselves to the special case where there is no potential energy and where the distribution function is independent of position. This is the only case where we should expect the pressure to be constant throughout the container. We shall find the pressure by calculating the momentum carried to the wall per second by molecules colliding with it. The momentum transferred to 1 sq. cm. of wall per second is the force acting on the wall, or the pressure.

FIG. IV-2. Diagram to illustrate collisions of molecules with 1 sq. cm. of surface of wall.

For convenience, let us choose the x axis perpendicular to the square centimeter of wall considered, as in Fig. IV-2. A molecule of velocity v = p/m close to the wall will strike the wall if p_x is positive, and will transfer its momentum to the wall. When it is reflected, it will in general have a different momentum from what it originally had, and will come away with momentum p', the component p'_x being negative. After collision, in other words, it will again belong to the group of molecules near the wall, but now corresponding to negative p_x, and it will have taken away from the wall the momentum p', or will have given the wall the negative of this momentum. We can, then, get all the momentum transferred to the wall by considering all molecules, both with positive and negative p_x's. Consider those molecules contained in the element dp_x dp_y dp_z in momentum space, and lying in the prism drawn in Fig. IV-2. Each of these molecules, and no others, will strike the square centimeter of wall in time dt. The volume of the prism is p_x/m dt.
The average number of molecules per unit volume in the momentum element dp_x dp_y dp_z, averaged through the assembly, will be denoted by f_m dp_x dp_y dp_z. For Maxwell's distribution law, we have

f_m = (N/V)(2πmkT)^{-3/2} e^{-p²/2mkT}, (3.1)

using Eq. (2.4), where N/V is the number of molecules of all velocities per unit volume. We shall not explicitly use this form for f_m at the moment, however, for our derivation is more general than Maxwell's distribution law, and holds as well for the Fermi-Dirac and Einstein-Bose distributions, which we shall take up in later sections. Using the function f_m, the average number of molecules of the desired momentum, in the prism, is (p_x/m) dt f_m dp_x dp_y dp_z. Each such molecule takes momentum of components p_x, p_y, p_z to the surface. Hence the total momentum given the surface in time dt by all molecules is the integral over momenta of the quantity with components (p_x²/m, p_x p_y/m, p_x p_z/m) dt f_m dp_x dp_y dp_z. Dividing by dt, we have the force exerted on the square centimeter of surface, which has components

x component of force = ∫∫∫ (p_x²/m) f_m dp_x dp_y dp_z,
y component of force = ∫∫∫ (p_x p_y/m) f_m dp_x dp_y dp_z,
z component of force = ∫∫∫ (p_x p_z/m) f_m dp_x dp_y dp_z. (3.2)

Now we shall limit our distribution function. We shall assume that a molecule with a given value of p_x is equally likely to have positive or negative values of p_y and p_z, so that f_m(p_x, p_y, p_z) = f_m(p_x, -p_y, p_z), etc. Plainly the Maxwell law (3.1) satisfies this condition. Then the second and third integrals of Eq. (3.2) will be zero, since the integrands at a given p_x, p_y will have opposite sign to the integrands at p_x, -p_y, and will cancel each other. The force on the unit area, in other words, is along the normal, or corresponds to a pure pressure, without tangential forces. Thus we finally have

P = ∫∫∫ (p_x²/m) f_m dp_x dp_y dp_z.
(3.3)

Now f_m dp_x dp_y dp_z is the number of molecules per unit volume in the range dp_x dp_y dp_z. Multiplying by p_x²/m and integrating, we have simply the sum of p_x²/m for all molecules in unit volume. This is simply the sum of p_x²/m for all molecules of the gas, divided by V. Let us assume that the distribution function is one for which all directions in space are equivalent. This is the case with the Maxwell distribution, Eq. (3.1), for this depends only on the magnitude of p, not on its direction. Then the sum of p_x²/m equals that of p_y²/m or p_z²/m. We have, moreover,

Σ (p_x²/m + p_y²/m + p_z²/m) = Σ p²/m = 2 × kinetic energy. (3.4)

Hence, the sum of p_x²/m is two-thirds the total kinetic energy, and since there is no potential energy, this is two-thirds the internal energy. Finally, then, we have

P = (2/3)(U/V), or PV = (2/3)U. (3.5)

Equation (3.5) gives the relation predicted by kinetic theory for the pressure of a perfect monatomic gas, in terms of its internal energy and volume. We can now combine Eq. (3.5) and Eq. (2.7), giving the mean kinetic energy of a monatomic gas, to find the equation of state. From Eq. (2.7), we have at once for N molecules

U = (3/2)NkT. (3.6)

Combined with Eq. (3.5), this gives at once

PV = NkT, (3.7)

as the equation of state of a perfect gas, as derived by elementary methods. We should compare Eq. (3.7) with the gas law as ordinarily set up from experimental measurements. Let us suppose that we have n moles of our gas. That is, the mass of gas we are dealing with is nM, where M is the molecular weight. Then the usual law is

PV = nRT, (3.8)

where R, the gas constant per mole, is given alternatively by the numerical values

R = 8.314 × 10⁷ ergs per degree = 0.08205 l.-atm. per degree. (3.9)

The law (3.8) expresses not only Boyle's and Charles's laws, but also Avogadro's law, stating that equal numbers of moles of any two perfect gases at the same pressure and temperature occupy the same volume.
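As a quick numerical illustration of the gas law (3.8) with the value of R from Eq. (3.9), the following sketch recovers the familiar molar volume of a perfect gas at 0°C and one atmosphere:

```python
# Illustrative use of PV = nRT, Eq. (3.8), with R in liter-atmospheres per degree.
R = 0.08205          # l.-atm. per degree, from Eq. (3.9)
n = 1.0              # one mole
T = 273.2            # 0 degrees C on the absolute scale
P = 1.0              # one atmosphere

V = n * R * T / P    # volume in liters
assert abs(V - 22.4) < 0.1   # the familiar 22.4 liters per mole
```

The same computation in c.g.s. units, with R = 8.314 × 10⁷ ergs per degree, gives the volume in cubic centimeters; only the unit system changes.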
Now let N₀ be the number of molecules in a gram molecular weight, a universal constant. This is ordinarily called Avogadro's number and is given approximately by

N₀ = 6.03 × 10²³, (3.10)

by methods based on measurement of the charge on the electron. Then we have

N = nN₀, (3.11)

so that Eq. (3.8) is replaced by

PV = N (R/N₀) T. (3.12)

Equation (3.12) agrees with Eq. (3.7) if

k = R/N₀, or R = N₀k,

so that

k = (8.314 × 10⁷)/(6.03 × 10²³) = 1.379 × 10⁻¹⁶ erg per degree, (3.13)

as was stated in Chap. III, Sec. 1. From the internal energy (3.6) we can also calculate the specific heat. We have

C_V = (∂U/∂T)_V = (3/2)Nk = (3/2)nR. (3.14)

The expression (3.14), as we have mentioned before, gives the heat capacity of n moles of gas. The specific heat is the heat capacity of 1 gm., or 1/M moles, if M is the molecular weight. Thus it is

Specific heat per gram = (3/2)(R/M). (3.15)

Very often one also considers the molecular heat, the heat capacity per mole. This is

Molecular heat, per mole = (3/2)R = 2.98 cal. per mole. (3.16)

To find the specific heat at constant pressure, we may use Eq. (5.2), Chap. II. This is

C_P = C_V - T (∂V/∂T)_P² / (∂V/∂P)_T, (3.17)

which holds for any amount of material. Substituting V = nRT/P, we have (∂V/∂T)_P = nR/P, (∂V/∂P)_T = -nRT/P², so that

C_P = C_V + nR = (5/2)nR = 4.968 cal. per mole, (3.18)

and

C_P/C_V = 5/3 = 1.667. (3.19)

The results (3.14), (3.18), and (3.19) hold theoretically for monatomic perfect gases, and actually they are approximately true for real monatomic gases.

4. The Perfect Gas in a Force Field. For two sections we have been considering the distribution of velocities included in the Maxwell-Boltzmann distribution law. Next, we shall take up the distribution of coordinates in cases where there is an external field of force. First, we should observe that on account of the form of Eqs. (1.4) and (1.5), the distributions of coordinates and velocities are independent of each other.
These equations show that the distribution function contains one factor e^{-p²/2mkT} depending on velocities, another factor e^{-φ/kT} depending on coordinates. This has important implications. The Maxwell distribution of velocities, which we have discussed, is the same at any point of a gas, even in an external field; and the variation of density with position is the same for the whole density, or for the particular class of molecules having any chosen velocity. We wish, then, to discuss the variation of density coming from the factor e^{-φ/kT}. The most familiar example of this formula is found in the decrease of density of the atmosphere as we go to higher altitudes. The potential energy of a molecule of mass m at height h above the earth is mgh, where g is the acceleration of gravity. Then the density of gas at height h, assuming constant temperature throughout (which is not a good assumption for the actual atmosphere), is given by

Density proportional to e^{-mgh/kT}. (4.1)

Formula (4.1) is often called the barometer formula, since it gives the variation of barometric pressure with altitude. It indicates a gradual decrease of pressure with altitude, going exponentially to zero at infinite height. The barometer formula can be derived by elementary methods, thus checking this part of the Maxwell-Boltzmann distribution law. Consider a column of atmosphere 1 sq. cm. in cross section, and take a section of this column bounded by horizontal planes at heights h and h + dh. Let the pressure in this section be P; we are interested in the variation of P with h. Now it is just the fact that the pressure is greater on the lower face of the section than on the upper one which holds the gas up against gravity. That is, if P is the upward pressure on the lower face, P + dP the downward pressure on the upper face, the net downward force is dP,
the net upward force -dP, and this must equal the force of gravity on the material in the section. The latter is the mass of the gas, times g. The mass of the gas in the section is the number of molecules per unit volume, times the volume dh, times the mass m of a molecule. The number of molecules per unit volume can be found from the gas law, which can be written in the form

P = (N/V)kT, (4.2)

where (N/V) is the number of molecules per unit volume. Then we find that the mass of gas in the volume dh is (P/kT)m dh. The differential equation for pressure is then dP = -(P/kT)mg dh, from which

ln P = const. - mgh/kT, (4.3)

from which, remembering that at constant temperature the pressure is proportional to the density, we have the barometer formula (4.1). It is possible not only to derive the barometer formula, but the whole Maxwell-Boltzmann distribution law, by an extension of this method, though we shall not do it.¹ One additional assumption must be made, which we have treated as a consequence of the distribution law rather than as an independent hypothesis: that the mean kinetic energy of a molecule is (3/2)kT, independent of where it may be found. Assuming it, and considering the distribution of velocities in a gravitational field, we seem at first to meet a paradox. Consider the molecules that are found low in the atmosphere, with a certain velocity distribution. As any one of these molecules rises, it is slowed down by the earth's gravitational field, the increase in its potential energy just equaling the decrease in kinetic energy. Why, then, do not the molecules at a great altitude have lower average kinetic energy than those at low altitude? The reason is not difficult to find. The slower molecules at low altitude never reach the high altitude at all. They follow parabolic paths, whose turning points come at fairly low heights.
Thus only the fast ones of the low molecules penetrate to the high regions; and while they are slowed down, they slow down just enough so that their original excessive velocities are reduced to the proper average value, so that the average velocity at high altitudes equals that at low, but the density is much lower.

¹ See for instance, K. F. Herzfeld, "Kinetische Theorie der Wärme," p. 20, Vieweg, 1929.

Now this explanation puts a very definite restriction on the distribution of velocities. By the barometer formula, we know the way in which the density decreases from h to h + dh. But the molecules found at height h, and not at height h + dh, are just those which come to the turning point of their paths between these two heights. This in turn tells us the vertical components of their velocities at any height. The gravitational field, in other words, acts to spread out molecules according to the vertical component of their velocities. And a simple calculation based on this idea proves to lead to the Maxwell distribution, as far as the vertical component of velocities is concerned. No other distribution, in other words, would have this special property of giving the same distribution of velocities at any height. The derivation which we have given for the barometer formula in Eq. (4.3) can be easily extended to a general potential energy. Let the potential energy of a molecule be φ. Then the force acting on it is ∂φ/∂s, where ds is a displacement opposite to the direction in which the force acts. Take a unit cross section of height ds in this direction. Then, as before, we have

ln P = const. - φ/kT, (4.4)

the general formula for pressure, or density, as it depends on potential energy.

CHAPTER V

THE FERMI-DIRAC AND EINSTEIN-BOSE STATISTICS

The Maxwell-Boltzmann distribution law, which we have derived and discussed in the last chapter, seems like a perfectly straightforward application of our statistical methods.
Nevertheless, when we come to examine it a little more closely, we find unexpected complications, arising from the question of whether there really is any way of telling the molecules of the gas apart or not. We shall analyze these questions in the present chapter, and shall find that on account of the quantum theory, the Maxwell-Boltzmann distribution law is really only an approximation, valid for gases at comparatively low density, a limiting case of two other distributions, known by the names of the Fermi-Dirac and the Einstein-Bose statistics. Real gases obey one or the other of these latter forms of statistics, some being governed by one, some by the other. As a matter of fact, for all real gases the corrections to the Maxwell-Boltzmann distribution law which result from the quantum statistics are negligibly small except at the very lowest temperatures, and helium is the only gas remaining in the vapor state at low enough temperature for the corrections to be important. Thus the reader who is interested only in molecular gases may well feel that these forms of quantum statistics are unnecessary. There is one respect in which this feeling is not justified: we shall find that in the calculation of the entropy, it is definitely wrong not to take account of the identity of molecules. But the real importance of the quantum forms of statistics comes from the fact that the electrons in solids satisfy the Fermi-Dirac statistics, and for them the numerical quantities are such that the behavior is completely different from what would be predicted by the Maxwell-Boltzmann distribution law. The Einstein-Bose statistics, though it has applications to black-body radiation, does not have the general importance of the Fermi-Dirac statistics.

1. The Molecular Phase Space.
In the last chapter, we pointed out that for a gas of N identical molecules, each of n degrees of freedom, the phase space of 2Nn dimensions could be subdivided into N subspaces, each of 2n dimensions. We shall now consider a different way of describing our assembly. We take simply a 2n-dimensional space, like one of our previous subspaces, and call it a molecular phase space, since a point in it gives information about a single molecule. This molecular phase space will be divided, according to the quantum theory, into cells of volume h^n. A given quantum state, or complexion, of the whole gas of N molecules then corresponds to an arrangement of N points representing the molecules, at definite positions, or in definite cells, of the molecular phase space. But now we meet immediately the question of the identity of the molecules. Are two complexions to be counted as the same, or as different, if they differ only in the interchange of two identical molecules between cells of the molecular phase space? Surely, since two such complexions cannot be told apart by any physical means, they should not be counted as different in our enumeration of complexions. Yet they correspond to different cells of our general phase space of 2Nn dimensions. In one, for example, molecule 1 may be in cell a of the molecular phase space, molecule 2 in cell b, while in the other molecule 1 is in cell b, molecule 2 in cell a. Thus, in a system of identical molecules, it is incorrect to assume that every complexion that we can set up in the general phase space, by assigning each molecule to a particular cell of its subspace, is distinct from every other, as we tacitly assumed in Chap. III. By considering the molecular phase space, we can see how many apparently different complexions really are to be grouped together as one.
Let us describe a complexion by numbers N_i, representing the number of molecules in the ith cell of the molecular phase space. This is a really valid way of describing the complexion; interchange of identical molecules will not change the N_i's. How many ways, we ask, are there of setting up complexions in the general phase space which lead to a given set of N_i's? We can understand the question better by taking a simple example. Let us suppose that there are three cells in the molecular phase space and three molecules, and that we are assuming N_1 = 1, N_2 = 2, N_3 = 0, meaning that one molecule is in the first cell, two in the second, and none in the third. Then, as we see in Table V-1, there are three apparently different complexions leading to this same set of N_i's.

TABLE V-1

                   Cell 1   Cell 2   Cell 3
    Complexion a     1       2 3       -
                     1       3 2       -
    Complexion b     2       1 3       -
                     2       3 1       -
    Complexion c     3       1 2       -
                     3       2 1       -

In complexion a, molecule 1 is in cell 1, and 2 and 3 are in cell 2; etc. We see from this example how to find the number of complexions. First, we find the total number of permutations of N objects (the N molecules). This, as is well known, is N!; for any one of the N objects can come first, any one of the remaining (N − 1) second, and so on, so that the number of permutations is N(N − 1)(N − 2) · · · 2 · 1 = N!. In the case of Table V-1, there are 3! = 6 permutations. But some of these do not represent different complexions, even in the general phase space, as we show by the grouping in the table; as for example the arrangements 1, 23 and 1, 32 grouped under the complexion a. For they both lead to exactly the same assignment of molecules to cells. In fact, if any N_i is greater than unity, N_i! (in our case 2! = 2) different permutations of the N objects will correspond to the same complexion. And in general, the number of complexions in the general phase space which lead to the same N_i's, and hence are really identical, will be

    N! / (N_1! N_2! · · ·).    (1.1)

Remembering that 0! = 1!
= 1, we see that in our example we have 3!/(1! 2! 0!) = 3. If then we wished to find the partition function for a perfect gas, using the general phase space, we should have to proceed as follows. We could set up cells in phase space, each of volume h^{Nn}, but we could not assume that each of these represented a different complexion, or that we were to sum over all these cells in computing the partition function. Rather, each cell would be one of N!/(N_1! N_2! · · ·) similar cells, all taken together to represent one single complexion. We could handle this, if we chose, by summing, or in the case of continuous energy levels by integrating, over all cells, but dividing the contribution of each cell by the number (1.1), computed for that cell. Since this number can change from cell to cell, this is a very inconvenient procedure and cannot be carried out without rather complicated mathematical methods. There is a special case, however, in which it is very simple. This is the case where the gas is so rare that we are very unlikely, in our assembly, to find any appreciable number of systems with more than a single molecule in any cell. In this case, each of the N_i's in the denominator of formula (1.1) will be 0 or 1, each of the N_i!'s will be 1, and the number (1.1) becomes simply N!. Thus, in this case, we can find the partition function by carrying out the summation or integration in the general phase space in the usual way, but dividing the result by N!, and using the final value to find the Helmholtz free energy and other thermodynamic quantities. This method leads to the Maxwell-Boltzmann distribution law, and it is the method which we shall use later in Chap. VIII, dealing with thermodynamic and statistical properties of ordinary perfect gases. When we
are likely to find N_i's greater than unity, however, this method is impracticable, and we must adopt an alternative method based directly on the use of the molecular phase space.

2. Assemblies in the Molecular Phase Space.

When we describe a system by giving the N_i's, the numbers of molecules in each cell of the molecular phase space, we automatically avoid the difficulties described in the last section relating to the identity of molecules. We now meet immediately the distinction between the Fermi-Dirac, the Einstein-Bose, and the classical or Boltzmann statistics. In the Einstein-Bose statistics, the simplest form in theory, we set up a complexion by giving a set of N_i's, and we say that any possible set of N_i's, subject only to the obvious restriction

    Σ_i N_i = N,    (2.1)

represents a possible complexion, all complexions having equal a priori probability. The Fermi-Dirac statistics differs from the Einstein-Bose in that there is an additional postulate, called the exclusion principle, superposed on the principle of identity of molecules. The exclusion principle states that no two molecules may be in the same cell of the molecular phase space at the same time; that is, no one of the N_i's may be greater than unity. This principle gains its importance from the fact that electrons are found experimentally to obey it. It is a principle which, as we shall see later, is at the foundation of the structure of the atoms and the periodic table of the elements, as well as having the greatest importance in all problems involving electrons. In the Fermi-Dirac statistics, then, any possible set of N_i's, subject to Eq. (2.1) and to the condition that no N_i exceed unity, forms a possible complexion of the system.
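The counting rule of formula (1.1) in Sec. 1, and the example of Table V-1, can be verified by brute force. A minimal sketch in Python (the language, names, and values are our own illustration, not the text's):

```python
import itertools
import math

# Three labeled molecules assigned to three cells; we ask how many labeled
# assignments produce the occupation numbers N1 = 1, N2 = 2, N3 = 0 of Table V-1.
cells = (1, 2, 3)
target = (1, 2, 0)

assignments = [a for a in itertools.product(cells, repeat=3)
               if tuple(a.count(c) for c in cells) == target]

# Formula (1.1): N!/(N1! N2! N3!) gives the same count directly;
# here 3!/(1! 2! 0!) = 3, the complexions a, b, c of Table V-1.
predicted = math.factorial(3) // (
    math.factorial(1) * math.factorial(2) * math.factorial(0))
```

Both the enumeration and the formula give three complexions, matching the three groups of Table V-1.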
Finally, the Boltzmann statistics is the limiting case of either of the other types, in the limit of low density, where so few molecules are distributed among so many cells that the chance of finding two in the same cell is negligible anyway, and the difference between the Fermi-Dirac and the Einstein-Bose statistics disappears.

Let us consider a single complexion, represented by a set of N_i's, in the molecular phase space. We see that the N_i's are likely to fluctuate greatly from cell to cell. For instance, in the limiting case of the Boltzmann statistics, where there are many fewer molecules than cells, we shall find most of the N_i's equal to zero, a few equal to unity, and almost none greater than unity. It is possible in principle, so far as the principle of uncertainty is concerned, to know all the N_i's definitely, or to prepare an assembly of systems all having the same N_i's. But for most practical purposes this is far more detailed information than we require, or can ordinarily give. We have found, for instance, that the translational energy levels of an ordinary gas are spaced extremely close together, and while there is nothing impossible in principle about knowing which levels contain molecules and which do not, still practically we cannot tell whether a molecule is in one level or a neighboring one. In other words, for this case, for all practical purposes the scale of our observation is much coarser than the limit set by the principle of uncertainty. Let us, then, try to set up an assembly of systems reflecting in some way the actual errors that we are likely to make in observing molecular distributions. Let us suppose that really we cannot detect anything smaller than a group of G cells, where G is a rather large number, containing a rather large number of molecules in all the systems of our assembly.
And let us assume that in our assembly the average number of molecules in the ith cell, one of our group of G, is N̄_i, a quantity that ordinarily will be a fraction rather than an integer. In the particular case of the Boltzmann statistics, N̄_i will be a fraction much less than unity; in the Fermi-Dirac statistics it will be less than unity, but not necessarily much less; while in the Einstein-Bose statistics it can have any value. We shall now try to set up an assembly leading to these postulated values of the N̄_i's. To do this, we shall find all the complexions that lead to the postulated N̄_i's, in the sense of having N̄_i G molecules in the group of G cells, and we shall assume that these complexions, and these only, are represented in the assembly, and with equal weights. Our problem, then, is to calculate the number of complexions consistent with a given set of N̄_i's, or the thermodynamic probability W of the distribution, in Boltzmann's sense, as described in Chap. III, Sec. 1. Having found the thermodynamic probability, we can compute the entropy of the assembly by the fundamental relation (1.3) of Chap. III, or

    S = k ln W.    (2.2)

For actually calculating the thermodynamic probability, we must distinguish between the Fermi-Dirac and the Einstein-Bose statistics. First we consider the Fermi-Dirac case. We wish the number of ways of arranging N̄_i G molecules in G cells, in such a way that we never have more than one molecule to a cell. To find this number, imagine G counters, of which N̄_i G are labeled 1 (standing for 1 molecule), and the remaining (1 − N̄_i)G are labeled 0 (standing for no molecules). If we put one counter in each of the G cells, we can say that the cells which have a counter labeled 1 in them contain a molecule, the others do not. Now there are G! ways of arranging G counters in G cells, one to a cell, as we have seen in the last section.
Not all of these G! ways of arranging the counters lead to different arrangements of the molecules in the cells, however, for the N̄_i G counters labeled 1 are identical with each other, and those labeled 0 are identical with each other. For a given assignment of molecules to cells, there will be (N̄_i G)! ways of rearranging the counters labeled 1, and [(1 − N̄_i)G]! ways of rearranging those labeled zero, or (N̄_i G)! [(1 − N̄_i)G]! arrangements in all, all of which lead to only one complexion. Thus to get the total number of complexions, we must divide G! by this quantity, finding

    Number of complexions of N̄_i G atoms in G cells = G! / {(N̄_i G)! [(1 − N̄_i)G]!}.    (2.3)

We shall now rewrite Eq. (2.3), using Stirling's theorem. This states that, for a large value of N,

    N! = √(2πN) (N/e)^N.    (2.4)

Stirling's formula is fairly accurate for values of N greater than 10; for still larger N's, where N! and (N/e)^N are very large numbers, the factor √(2πN) is so near unity in proportion that it can be omitted for most purposes, so that we can write N! simply as (N/e)^N. Adopting this approximation, we can rewrite Eq. (2.3) as

    Number of complexions of N̄_i G atoms in G cells = [1 / (N̄_i^{N̄_i} (1 − N̄_i)^{1−N̄_i})]^G.    (2.5)

Equation (2.5) is of an interesting form: being a quantity independent of G, raised to the Gth power, we may interpret it as a product of terms, one for each cell of the molecular phase space. Now to get the whole number of complexions for the system, we should multiply quantities like (2.5), for each group of G cells in the whole molecular phase space. Plainly this will give us something independent of the exact way we divide up the cells into groups, or independent of G, and we find

    W = Π_i 1 / [N̄_i^{N̄_i} (1 − N̄_i)^{1−N̄_i}],    (2.6)

where Π indicates a product over all cells of the molecular phase space. Using Eq. (2.2), we then have

    S = −k Σ_i [N̄_i ln N̄_i + (1 − N̄_i) ln (1 − N̄_i)]    (2.7)

as the expression for entropy in the Fermi-Dirac statistics in terms of the average number N̄_i of molecules in each cell.
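The exact count of Eq. (2.3) and its Stirling approximation (2.5) can be compared numerically. A sketch with small illustrative values of G and N̄_i (our own choices, not from the text):

```python
from math import factorial, log, exp

def exact_count(n_occ, G):
    # Eq. (2.3): G! / [(N-bar G)! ((1 - N-bar)G)!], with n_occ occupied cells
    return factorial(G) // (factorial(n_occ) * factorial(G - n_occ))

def stirling_count(nbar, G):
    # Eq. (2.5): [1 / (nbar^nbar (1-nbar)^(1-nbar))]^G, computed via logarithms
    log_per_cell = -(nbar * log(nbar) + (1.0 - nbar) * log(1.0 - nbar))
    return exp(G * log_per_cell)

G = 400
nbar = 0.25
exact = exact_count(int(nbar * G), G)
approx = stirling_count(nbar, G)
```

For G = 400 the two counts are enormous numbers agreeing closely in their logarithms, which is all that matters for the entropy; the omitted factor √(2πN) of Eq. (2.4) accounts for the small residual difference.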
For the Einstein-Bose statistics, we wish the number of ways of arranging N̄_i G molecules in G cells, allowing as many molecules as we please in a cell. This number can be shown to be

    Number of complexions of N̄_i G atoms in G cells = (N̄_i G + G − 1)! / [(N̄_i G)! (G − 1)!].    (2.8)

We can easily make Eq. (2.8) plausible,¹ though without really proving it, by an example. Let us take N̄_i G = 2, G = 3, and make a table, as Table V-2, showing the possible arrangements:

TABLE V-2

    Cell 1   Cell 2   Cell 3  |  Four-column scheme
     1 1       -        -     |   1 1 0 0
      1        1        -     |   1 0 1 0
      1        -        1     |   1 0 0 1
      -       1 1       -     |   0 1 1 0
      -        1        1     |   0 1 0 1
      -        -       1 1    |   0 0 1 1

In the first three columns of Table V-2, we indicate the three cells, and indicate each of the two molecules by a figure 1, showing the six possible arrangements [6 = (2 + 3 − 1)!/(2! (3 − 1)!)]; following that, we give a scheme with four columns (4 = N̄_i G + G − 1 = 2 + 3 − 1) in which we give all the possible arrangements of the two 1's and two 0's, with one symbol in each column (2 = N̄_i G, 2 = G − 1). In the general case, the number of such arrangements is given just by Eq. (2.8), as we can see by arguments similar to those used in deriving formula (1.1). But the four-column arrangement of Table V-2 corresponds exactly with the three-column one, if we adopt the convention that two successive 1's in the four-column scheme belong in the same cell. It is not hard to show that the same sort of correspondence holds in the general case, and thus to justify Eq. (2.8).

¹ For proof, as well as other points connected with quantum statistics, see L. Brillouin, "Die Quantenstatistik," pp. 129 ff., Julius Springer, 1931.

Applying Stirling's theorem to Eq. (2.8) and neglecting unity compared to G, we now have

    Number of complexions of N̄_i G atoms in G cells = [(1 + N̄_i)^{1+N̄_i} / N̄_i^{N̄_i}]^G.    (2.9)

As with the Fermi-Dirac statistics, this is a product of terms, one for each of the G cells; to find the whole number of complexions, or the thermodynamic probability, we have

    W = Π_i (1 + N̄_i)^{1+N̄_i} / N̄_i^{N̄_i},    (2.10)

and

    S = −k Σ_i [N̄_i ln N̄_i − (1 + N̄_i) ln (1 + N̄_i)].    (2.11)

In Eqs.
(2.7) and (2.11), we have found the general expressions for the entropy in the Fermi-Dirac and Einstein-Bose statistics. From either one, we can find the entropy in the Boltzmann statistics by passing to the limit in which all N̄_i's are very small compared to unity. For small N̄_i, ln (1 − N̄_i) approaches −N̄_i, and (1 − N̄_i) can be replaced by unity. Thus either Eq. (2.7) or (2.11) approaches

    S = −k Σ_i (N̄_i ln N̄_i − N̄_i).    (2.12)

Equation (2.12) expresses the form of the entropy for the Boltzmann statistics.

3. The Fermi-Dirac Distribution Function.

In the preceding section, we have set up assemblies of systems satisfying either the Fermi-Dirac or the Einstein-Bose statistics and having an arbitrary average number of molecules N̄_i in the ith cell of molecular phase space. We have found the thermodynamic probability and the entropy of such an assembly. These assemblies, of course, do not correspond to thermal equilibrium and, as time goes on, the effect of collisions of molecules will be to change the numbers N̄_i gradually, with an approach to a steady state. In the next chapter we shall consider this process specifically, really following in detail the irreversible approach to a steady state. We shall verify then, as we could assume from our general knowledge, that during the irreversible process the entropy will gradually increase, until finally in equilibrium it reaches the maximum value consistent with a constant value of the total energy. But, for the moment, we can assume this final condition and use it to find the equilibrium distribution. We ask, in other words, what set of N̄_i's will give the maximum entropy, subject to a constant internal energy? We can solve this problem, as we solved a similar one in Chap. III, Sec. 5, by the method of undetermined multipliers. For the Fermi-Dirac statistics, we wish to make the entropy (2.7) a maximum, subject to a constant value of the energy.
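Before carrying out that maximization, the bookkeeping of Sec. 2 can be spot-checked numerically: Eq. (2.8) against direct enumeration, and the common Boltzmann limit (2.12) of the two entropy formulas. A sketch (all values are our own illustrations):

```python
from itertools import combinations_with_replacement
from math import factorial, log

def bose_count(n, G):
    # Eq. (2.8): ways to put n molecules in G cells, any number per cell
    return factorial(n + G - 1) // (factorial(n) * factorial(G - 1))

# Brute force for the Table V-2 case: 2 molecules in 3 cells, counted as
# multisets of cells since the molecules are identical -> 6 arrangements.
brute = len(list(combinations_with_replacement(range(3), 2)))

# Per-cell entropies divided by k, from Eqs. (2.7), (2.11), (2.12):
def s_fermi(n):
    return -(n * log(n) + (1 - n) * log(1 - n))

def s_bose(n):
    return -(n * log(n) - (1 + n) * log(1 + n))

def s_boltzmann(n):
    return -(n * log(n) - n)

nbar = 1e-4   # occupation small compared to unity: the Boltzmann limit
```

With N̄ = 10⁻⁴ all three entropy expressions agree to terms of order N̄², as the limiting argument requires.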
Rather than impose just this condition, we employ the thermodynamically equivalent one of making the function A = U − TS a minimum for changes at constant temperature, as discussed in Chap. II, Sec. 3. This is essentially a form of the method of undetermined multipliers, the constants multiplying U and S being respectively unity and T. As in Chap. IV, Sec. 1, we let the energy of a molecule in the ith state be ε_i. Then the average energy over the assembly is clearly

    U = Σ_i N̄_i ε_i,

the summation being over the cells of the molecular phase space. Using Eq. (2.7) for the entropy, we then have

    A = Σ_i [N̄_i ε_i + kT N̄_i ln N̄_i + kT(1 − N̄_i) ln (1 − N̄_i)].    (3.1)

Now we find the change of A, when the N̄_i's are varied, keeping temperature and the ε's fixed. We find at once

    dA = Σ_i dN̄_i [ε_i + kT ln (N̄_i / (1 − N̄_i))] = 0    (3.2)

as the condition for equilibrium. This must be satisfied, subject only to the condition

    Σ_i dN̄_i = 0,    (3.3)

expressing the fact that the changes of the N̄_i's are such that the total number of molecules remains fixed. The only way to satisfy Eq. (3.2), subject to Eq. (3.3), is to have

    ε_i + kT ln (N̄_i / (1 − N̄_i)) = ε_0 = const.,    (3.4)

independent of i. For then the bracket in Eq. (3.2) can be taken outside the summation sign, and Eq. (3.3) immediately makes the whole expression vanish. Using Eq. (3.4), we can immediately solve for N̄_i. We have

    N̄_i / (1 − N̄_i) = e^{−(ε_i − ε_0)/kT},

from which

    N̄_i = 1 / (e^{(ε_i − ε_0)/kT} + 1).    (3.5)

Equation (3.5) expresses the Fermi distribution law, which we shall now proceed to discuss. First, let us see that the Fermi-Dirac distribution law reduces to the Maxwell-Boltzmann law in the limit when the N̄_i's are small. In that case, it must be that the denominator in Eq. (3.5) is large compared to the numerator. But if e^{(ε_i − ε_0)/kT} + 1 is large compared to 1, the numerator, it must be that 1 can be neglected compared to the exponential. Thus, in this limit, we can write

    N̄_i = e^{−(ε_i − ε_0)/kT},    (3.6)

which is the Maxwell-Boltzmann law, Eq. (1.4) of Chap. IV, if ε_0 is properly chosen.
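The constant ε_0 must in general be fixed numerically from the total number of particles, the condition written as Eq. (3.7) below. A sketch by bisection, with an arbitrary set of equally spaced levels (all values illustrative):

```python
from math import exp

levels = [0.1 * i for i in range(200)]   # equally spaced levels, arbitrary units
N = 50                                    # number of particles
kT = 0.5

def total_number(eps0):
    # Sum of the Fermi occupations over all levels, as in Eq. (3.7)
    return sum(1.0 / (exp((e - eps0) / kT) + 1.0) for e in levels)

# total_number increases monotonically with eps0, so bisection converges.
lo, hi = min(levels) - 10.0, max(levels) + 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if total_number(mid) < N:
        lo = mid
    else:
        hi = mid
eps0 = 0.5 * (lo + hi)
```

With these values ε_0 settles near ε = 5, about where the 50th level from the bottom lies, and a rerun with a different kT moves it very little when the levels are uniformly spaced.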
We notice that even if the temperature is low, so that some N̄_i's are not small, still the states of high energy will have large values of e^{(ε_i − ε_0)/kT}. Thus the argument we have just used will apply to these states, and for them the Maxwell-Boltzmann distribution will be correct, even though it is not for states of low energy. The quantity ε_0 is to be determined by the condition that the total number of particles is N. Thus we have

    N = Σ_i N̄_i = Σ_i 1 / (e^{(ε_i − ε_0)/kT} + 1).    (3.7)

Since the ε's are determined, Eq. (3.7) can be satisfied by a proper choice of ε_0. Unfortunately, Eq. (3.7) cannot be solved directly for ε_0, and it is a matter of considerable difficulty to evaluate this important quantity. It is not hard, however, to see how N̄_i behaves as a function of ε_i, particularly for low temperatures. When ε_i − ε_0 is negative, or for energies below ε_0, the exponential in Eq. (3.5) is less than unity, becoming rapidly very small as the temperature decreases. Thus for these energies the denominator is only slightly greater than unity, and N̄_i only slightly less than unity. On the other hand, when ε_i − ε_0 is positive, for energies above ε_0, the exponential is greater than unity, becoming rapidly large as the temperature decreases. In this case we can almost neglect unity compared to the exponential, and we have the case of the last paragraph, where the Boltzmann distribution is approximately correct, in the form (3.6). In Fig. V-1 we show the function 1/(e^{(ε − ε_0)/kT} + 1) as a function of ε, for several temperatures. At T = 0, the function drops from unity to zero sharply when ε = ε_0, while at higher temperatures it falls off smoothly. For large values of ε, it approximates the exponential falling off of the Boltzmann distribution.

FIG. V-1. Fermi distribution function, as function of energy, for several temperatures.
Curve a, kT = 0; b, kT = 1; c, kT = 2.5.

One important feature of the function (3.5) is the following:

    1 − 1/(e^{(ε_0 − ε)/kT} + 1) = 1/(e^{(ε − ε_0)/kT} + 1).    (3.8)

That is, in Fig. V-1, the distribution function at any point to the right of ε_0 is equal to the difference between the function and unity, the same distance to the left of ε_0, and vice versa. The curve, in other words, is symmetrical with change of sign about the point ε = ε_0 and ordinate 1/2. From this it follows that ε_0 is approximately constant, for small temperatures at least. For the summation in Eq. (3.7), which must give N independent of temperature, is found as follows. Along the axis of abscissae in Fig. V-1, we mark the various energy levels of the problem. At each energy level we erect a line, extending up to the distribution curve. The sum of the lengths of all these lines is the summation desired. We must now adjust ε_0, moving the curve to the left or right, so that the sum equals N. At the absolute zero this is perfectly simple: we simply count up to the Nth energy level from the bottom, and put ε_0 somewhere between the Nth and the (N + 1)st levels. At a higher temperature, suppose we try the same value of ε_0 and see if it is correct. Then the summation will change by subtraction of the part of the lines above the distribution curve to the left of ε_0 and by addition of those below the distribution curve to the right. These areas are equal by the result we have just found. Thus, if levels come with the same density to the left and to the right of ε_0, the summation will again be N, and the same value of ε_0 will be correct. In the next section we find how much ε_0 will change with temperature if the density of levels changes with the energy, as of course it will to some extent.

4. Thermodynamic Functions in the Fermi Statistics.
Having derived the Fermi-Dirac distribution law, we shall go on to find some of its properties in the case of low temperatures, the important case in the practical applications to electrons in metals, where for the present purposes temperatures even of several thousand degrees can be regarded as low. The distribution function, as we see in Fig. V-1, changes with temperature mostly in the immediate neighborhood of the energy ε_0. If, then, we know the distribution of energy levels in the neighborhood of ε_0, we can find the variation of the thermodynamic functions with temperature. We carry out that analysis in the present section, assuming that the energy levels are distributed continuously in energy, an approximately correct assumption in the cases with which we shall deal. Let the value of ε_0 at the absolute zero of temperature be ε_00. We know how to find it from Sec. 3, simply by counting up N levels from the lowest level. First we shall try to find how ε_0 depends on temperature. We shall assume that the number of energy levels between ε and ε + dε is

    [(dN/dε)_0 + (d²N/dε²)_0 (ε − ε_00) + · · ·] dε,    (4.1)

a Taylor's expansion of the function dN/dε about the point ε = ε_00, at which the derivatives (dN/dε)_0 and (d²N/dε²)_0 are evaluated. With this assumption, all our summations can be converted into integrations. We write the summation of Eq. (3.7) in the form of an integration; instead of using just this form, we find the difference between the summation and that at the absolute zero, which should give a difference of zero. Thus we have

    0 = ∫ from 0 to ε_00 of (dN/dε) [1/(e^{(ε − ε_0)/kT} + 1) − 1] dε + ∫ from ε_00 to ∞ of (dN/dε) [1/(e^{(ε − ε_0)/kT} + 1)] dε.    (4.2)

The term 1 in the first integral takes care of the summation at the absolute zero, where the Fermi function is unity for energies less than ε_00, zero for higher energies. In the first integral, we use Eq. (3.8). Then in the first we make the change of variables u = −(ε − ε_0), and in the second u = ε − ε_0. The two integrals then combine.
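When the two integrals combine, the definite integral ∫ from 0 to ∞ of u du/(e^{u/kT} + 1) appears; Eq. (4.3) below uses it, and Eq. (4.4) evaluates it as (kT)² π²/12. That value is easy to confirm numerically (a sketch with kT = 1; the quadrature choices are our own):

```python
from math import exp, pi

def integrand(u):
    # u/(e^u + 1): the combined integrand with kT = 1
    return u / (exp(u) + 1.0)

# Midpoint rule on [0, 50]; the integrand is exponentially small beyond that.
n_steps = 100000
width = 50.0 / n_steps
integral = width * sum(integrand((i + 0.5) * width) for i in range(n_steps))
```

The numerical value agrees with π²/12 ≈ 0.8225 to high accuracy, so no table of integrals is strictly needed to check the result.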
We retain only the terms necessary for a first approximation; this means, it is found, that in the term in (d²N/dε²)_0 we can neglect the distinction between ε_0 and ε_00, though this distinction is essential in the term in (dN/dε)_0. In this way we find

    0 = 2(dN/dε)_0 ∫ from ε_00 to ε_0 of dε/(e^{(ε − ε_0)/kT} + 1) + 2(d²N/dε²)_0 ∫ from 0 to ∞ of u du/(e^{u/kT} + 1).    (4.3)

In the first integral, through the very small range from ε_00 to ε_0, we can replace the integrand by its value when u = 0, or 1/2. Thus the first term becomes (dN/dε)_0 (ε_0 − ε_00). To reduce the second integral to a familiar form, we let x = e^{−u/kT}, u = −kT ln x, du = −kT dx/x. The integral then becomes

    ∫ from 0 to ∞ of u du/(e^{u/kT} + 1) = −(kT)² ∫ from 0 to 1 of (ln x) dx/(1 + x) = (kT)² π²/12,    (4.4)

the integral in Eq. (4.4) being tabulated for instance in B. O. Peirce's "Short Table of Integrals." Then Eq. (4.3) becomes

    0 = (dN/dε)_0 (ε_0 − ε_00) + (π²/6)(d²N/dε²)_0 (kT)²,

or

    ε_0 = ε_00 − (π²/6) [(d²N/dε²)_0 / (dN/dε)_0] (kT)².    (4.5)

Equation (4.5) represents ε_0 by the first two terms of a power series in the temperature, and the approximations we have made give the term in T² correctly, though we should have to be more careful to get higher terms. We see, as we should expect from the last section, that if (d²N/dε²)_0 = 0, so that the distribution of energy levels is uniform at ε_00, ε_0 will be independent of temperature to the approximation we are using. Next, let us find the internal energy in the same sort of way. Written as a summation, it is

    U = Σ_i ε_i / (e^{(ε_i − ε_0)/kT} + 1).    (4.6)

Here again, in converting to an integration, we shall find, not U, but U − U_0, where U_0 is the value at the absolute zero. Then we find at once that the integral expression for it is exactly like the integral in Eq. (4.2), only with an additional factor ε in the integrand of each integral. The leading term here, however, comes from the term in (dN/dε)_0, and we can neglect the terms in (d²N/dε²)_0. Furthermore, in the integrals we retain, we can neglect the difference between ε_0 and ε_00. Then we have

    U − U_0 = (π²/6)(dN/dε)_0 (kT)²,

and

    U = U_0 + (π²/6)(dN/dε)_0 (kT)²,    (4.7)

again correct to terms in T².
From the internal energy we can find the heat capacity C_V, by the equation

    C_V = (∂U/∂T)_V = (π²/3)(dN/dε)_0 k²T.    (4.8)

We notice that at low temperatures the specific heat of a system with continuous energy levels, obeying the Fermi statistics, is proportional to the temperature. We shall later see that this formula has applications in the theory of metals. Let us next find the entropy. We can get a general formula from Eq. (2.7). This can be rewritten

    S = Σ_i [N̄_i (ε_i − ε_0)/T − k ln (1 − N̄_i)],    (4.9)

where we have used the Fermi distribution law, in the form (3.4), to replace ln [N̄_i/(1 − N̄_i)] by −(ε_i − ε_0)/kT. Replacing the summation in Eq. (4.9) by an integration and using Eqs. (4.5) and (4.7) for ε_0 and U, we can compute S. The calculation is a little involved, however, and it is easier to use the relation

    C_V = T (∂S/∂T)_V,

from which

    S = (π²/3)(dN/dε)_0 k²T.    (4.10)

In the integration leading to Eq. (4.10), we have used the fact that S = 0 at the absolute zero. This follows directly from Eq. (2.7) for the entropy. For each term in the entropy is of the form N̄ ln N̄ or (1 − N̄) ln (1 − N̄), and at the absolute zero each value of N̄ is either 1 or 0, so that each of these terms is 1 ln 1 or 0 ln 0, either of which is zero. From the internal energy and the entropy we can find the Helmholtz free energy A = U − TS:

    A = Nε_0 − kT Σ_i ln (e^{(ε_0 − ε_i)/kT} + 1)    (4.11)
      = U_0 − (π²/6)(dN/dε)_0 (kT)²,    (4.12)

where Eq. (4.11) is derived from Eq. (4.9), and Eq. (4.12) from Eqs. (4.7) and (4.10). By differentiating the function A with respect to temperature at constant volume, we get the negative of the entropy, as we should. By differentiating with respect to the volume at constant temperature, we get the negative of the pressure, and hence can find the equation of state. So far, we have not mentioned the dependence of any of our functions on the volume. Surely, however, the stationary states and energy levels of the particles will depend on the volume, though not explicitly on the temperature. Hence U_0 and (dN/dε)_0 are to be regarded as functions of the volume.
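The central result (4.7) can be confirmed by direct numerical integration in the simplest case, a uniform density of levels dN/dε = 1, for which (d²N/dε²)_0 = 0 and ε_0 stays at ε_00. A sketch (the units and cutoffs are our own illustrative choices):

```python
from math import exp, pi

eps00 = 50.0   # highest filled level at T = 0, in units where dN/deps = 1
kT = 1.0

def fermi(e):
    # Fermi function with eps_0 = eps00, valid since the level density is uniform
    return 1.0 / (exp((e - eps00) / kT) + 1.0)

# U(T) - U_0 = integral of e f(e) de minus integral of e de from 0 to eps00,
# evaluated by the midpoint rule on [0, 200]
n_steps = 200000
width = 200.0 / n_steps
U_T = width * sum(e * fermi(e) for e in ((i + 0.5) * width for i in range(n_steps)))
U_0 = 0.5 * eps00**2

delta_U = U_T - U_0   # Eq. (4.7) predicts (pi^2/6)(kT)^2 with (dN/deps)_0 = 1
```

The computed excess energy agrees with (π²/6)(kT)² ≈ 1.645; differentiating it with respect to T reproduces the linear specific heat of Eq. (4.8).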
The functional dependence, of course, cannot be given in a general discussion, applicable to all systems, such as the present one. Using this fact, then, we have

    P = −(∂A/∂V)_T = −dU_0/dV + (π²/6)(kT)² d(dN/dε)_0/dV.

The first term, the leading one ordinarily, is independent of temperature, and the second, a small additional one which can be of either sign, is proportional to the square of the temperature. Thus, the equation of state is very different from that of a perfect gas on Boltzmann statistics. It is to be borne in mind, however, that this formula, like all those of the present section, applies only at low temperatures and is only the beginning of a power series. At high temperatures the statistics reduce to the Boltzmann statistics, as we have seen, and the equation of state of a Fermi system and the corresponding system obeying Boltzmann statistics must approach each other at high temperature.

5. The Perfect Gas in the Fermi Statistics.

As an example of the application of the Fermi statistics, we can consider the perfect gas. In Chap. IV, Sec. 1, we have found the number of energy levels for a molecule of a perfect gas, in the energy range dε. Rewriting Eq. (1.10) of that chapter, we have at once

    dN/dε = (2πV/h³)(2m)^{3/2} ε^{1/2},    (5.1)

and

    d²N/dε² = (πV/h³)(2m)^{3/2} ε^{−1/2}.    (5.2)

If we substitute ε_00 for ε in Eqs. (5.1) and (5.2), we have the quantities (dN/dε)_0 and (d²N/dε²)_0 of the previous section. To find ε_00, we note that from Eq. (1.9), Chap. IV, the number of states with energy less than ε is (4πV/3h³)(2mε)^{3/2}. Remembering that there are just N states with energy less than ε_00, this gives

    ε_00 = (h²/2m)(3N/4πV)^{2/3}.    (5.3)

We notice, as is natural, that ε_00, the highest occupied energy level at the absolute zero, increases as the number of particles N increases. It is important to notice, however, that it is the density of particles, N/V, that is significant, not the absolute number of particles in the system.
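Equation (5.3) can be checked for self-consistency, and, using the density and constants of the numerical example discussed in the text, it reproduces the quoted values of about 0.081 and 148 kg.-cal. per gram mole. A sketch (the variable names, and the conversion 23.06 kg.-cal. per gram mole per electron volt, are our own; the electron-to-hydrogen mass ratio is taken as 1838):

```python
from math import pi

# Constants in CGS units, as in the text's example:
h = 6.61e-27            # Planck's constant, erg sec
m_H = 1.66e-24          # mass of an atom of unit molecular weight, gm
N_avo = 6.03e23         # molecules per gram mole
ergs_per_kcal = 4.185e10

n = 1.0e24 / 27.0       # N/V: one particle per cube 3e-8 cm on a side

def eps00(mass, density):
    # Eq. (5.3): eps00 = (h^2/2m)(3N/4piV)^(2/3)
    return (h**2 / (2.0 * mass)) * (3.0 * density / (4.0 * pi))**(2.0 / 3.0)

# Consistency check: the counting formula (4piV/3h^3)(2m eps00)^(3/2)
# must return exactly the density we started from.
e_H = eps00(m_H, n)
n_back = (4.0 * pi / (3.0 * h**3)) * (2.0 * m_H * e_H)**1.5

# The text's numbers: Eq. (5.4) for unit molecular weight, then the electron gas
kcal_H = e_H * N_avo / ergs_per_kcal   # about 0.081 kg.-cal. per gram mole
kcal_e = kcal_H * 1838.0               # about 148 kg.-cal. per gram mole
electron_volts = kcal_e / 23.06        # about 6.4 electron volts
```

The round-trip through the counting formula returns the original density, and the two conversions land on the order of magnitude quoted in the text for an atom of unit molecular weight and for the electron gas.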
In a gas obeying the Fermi statistics, the particles cannot all have zero energy at the absolute zero, as they would in the Boltzmann statistics; but since there can be only one particle in each stationary state, there is an energy distribution up to the maximum energy ε_00. Let us see how large this is, in actual magnitude. We can hardly be interested in cases where N/V represents a density much greater than the number of atoms of a solid per unit volume. Thus for example let N/V be one in a cube of 3 × 10⁻⁸ cm. on a side, or let it equal 10²⁴/27. Let us make the calculation in kilogram calories per gram mole (1 kg.-cal. equals 1000 cal. or 4.185 × 10¹⁰ ergs; one mole contains 6.03 × 10²³ molecules), and let us do it first for an atom of unit molecular weight, for which one molecule weighs 1.66 × 10⁻²⁴ gm. Then we have

    ε_00 = [(6.61 × 10⁻²⁷)² / (2 × 1.66 × 10⁻²⁴)] [3 × 10²⁴/(4π × 27)]^{2/3} × 6.03 × 10²³/(4.185 × 10¹⁰)
         = 0.081 kg.-cal. per gram mole.    (5.4)

There do not seem to be any ordinary gases in which the energy calculated from Eq. (5.4) is appreciable. Hydrogen H₂ and helium He both satisfy the Einstein-Bose statistics instead of the Fermi, and so are not suitable examples. With any heavier gas, the mass that comes in the denominator of Eq. (5.3) would reduce the value to a few gram calories per mole, a value small compared to the internal energy which the gas would acquire in even a few degrees with normal specific heat as given by the Boltzmann statistics, in Chap. IV, Sec. 3. The one case where the Fermi statistics is of great importance is with the electron gas, on account of the very small mass of the electron. The atomic weight of the electron can be taken to be 1/1838. Then to get ε_00 for an electron gas of the density mentioned above, we multiply the figure of Eq. (5.4) by 1838, obtaining

    ε_00 = 148 kg.-cal. per gram mole = 6.4 electron volts.
(5.5)

The value (5.5), instead of being small in comparison with thermal magnitudes like (5.4), is of the order of magnitude of large heats of dissociation or ionization potentials, and enormously large compared with thermal energies at ordinary temperatures. Thus in an electron gas at high density, the fastest electrons, even at the absolute zero, will be moving with very high velocities and very large energies.

We can next use Eq. (4.5) to find \epsilon_0 at other temperatures. We have at once

\epsilon_0 = \epsilon_{00}\left[1 - \frac{\pi^2}{12}\left(\frac{kT}{\epsilon_{00}}\right)^2 + \cdots\right]. \quad (5.6)

From Eq. (5.6) we note that in ordinary gases, where \epsilon_{00} is of the order of magnitude of kT for a low temperature, the term in T^2 will be large, showing that the series converges slowly. On the other hand, in an electron gas, where \epsilon_{00} is very large compared to kT, the series converges rapidly at ordinary temperatures, and \epsilon_0 is approximately independent of temperature, decreasing slightly with increasing temperature. The quantity U_0, the internal energy at the absolute zero, is easily found, from the equation

U_0 = \int_0^{\epsilon_{00}}\epsilon\,\frac{dN}{d\epsilon}\,d\epsilon = \frac{3}{5}N\epsilon_{00}, \quad (5.7)

using Eq. (5.1). Thus the mean energy of a particle at the absolute zero is three-fifths of the maximum energy. From Eq. (4.7) we can then find the internal energy at any temperature, finding

U = \frac{3}{5}N\epsilon_{00}\left[1 + \frac{5\pi^2}{12}\left(\frac{kT}{\epsilon_{00}}\right)^2 + \cdots\right]. \quad (5.8)

The heat capacity is given by

C_V = \frac{\pi^2}{2}Nk\,\frac{kT}{\epsilon_{00}}. \quad (5.9)

For an electron gas at ordinary temperature, this is a small quantity. An ordinary gas would have a heat capacity (3/2)Nk; the value (5.9) is this ordinary value, multiplied by (\pi^2/3)(kT/\epsilon_{00}). Now at 1000 deg. abs., for instance, a fairly high temperature, kT is about 2 kg.-cal. per gram mole, whereas we have seen in Eq. (5.5) that \epsilon_{00} is of the order of 148 kg.-cal. per gram mole. Since \pi^2/3 is about 3, this means that the electronic specific heat at 1000 deg. abs. is about four per cent of that of a perfect gas on the Boltzmann statistics, while at room temperature it would be only a little over 1 per cent.

The other thermal quantities are easily found. As we see from Eqs.
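The specific-heat estimate can be checked numerically; this sketch assumes the round figures used in the text (kT of about 2 kg.-cal. per mole at 1000 deg. abs., \epsilon_{00} of about 148 kg.-cal. per mole):

```python
import math

# Round figures used in the text
eps00 = 148.0                          # kg.-cal. per gram mole, Eq. (5.5)
kT_per_mole = {1000: 2.0, 300: 0.6}    # kT in kg.-cal. per gram mole at T deg. abs.

ratios = {}
for T, kT in kT_per_mole.items():
    # C_V of the electron gas over the classical (3/2)Nk, from Eq. (5.9)
    ratios[T] = (math.pi**2 / 3.0) * kT / eps00
    print(T, round(100 * ratios[T], 1), "per cent")   # about 4.4 and 1.3
```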
(4.8) and (4.10), the entropy equals the specific heat, up to terms linear in the temperature, so that the entropy of the perfect gas is given by Eq. (5.9) at low temperatures. And the function A, using Eq. (4.12), is

A = \frac{3}{5}N\epsilon_{00} - NkT\,\frac{\pi^2}{4}\,\frac{kT}{\epsilon_{00}}. \quad (5.10)

To find the pressure, we wish to know A explicitly as a function of volume. Substituting for \epsilon_{00} from Eq. (5.3), we have

A = \frac{3}{5}N\frac{h^2}{2m}\left(\frac{3N}{4\pi V}\right)^{2/3} - \frac{\pi^2}{4}Nk^2T^2\,\frac{2m}{h^2}\left(\frac{4\pi V}{3N}\right)^{2/3}. \quad (5.11)

We can now differentiate to get the pressure:

P = -\left(\frac{\partial A}{\partial V}\right)_T = \frac{2}{5}\frac{N\epsilon_{00}}{V}\left[1 + \frac{5\pi^2}{12}\left(\frac{kT}{\epsilon_{00}}\right)^2\right] = \frac{2}{3}\frac{U}{V}, \quad (5.12)

as we can see by substituting for \epsilon_{00} in Eq. (5.8). Equation (5.12), stating that PV = (2/3)U, is the equation found in Eq. (3.5), Chap. IV, by a kinetic method, without making any assumptions about the distribution of velocities. It must therefore hold for the Fermi distribution as well as for the Boltzmann distribution, and it was really not necessary to make a special calculation of the equation of state at all. It is obvious, however, that the final equation of state is very different from that of the perfect gas on the Boltzmann statistics, on account of the very different relation which we have here between internal energy and temperature. Since the internal energy is very large at the absolute zero, but increases only slowly with rising temperature, with a term proportional to T^2, the same is true here of PV. Thus, in the example used above, the pressure is about 149,000 atm. at the absolute zero. We note that, in contrast to the Boltzmann statistics, the internal energy here depends strongly on the volume, as Eqs. (5.8) and (5.3) show; thus the gas does not obey Boyle's law. This dependence of internal energy on volume is an interesting thing, for it does not indicate in any way the existence of forces between the particles, which we are neglecting here just as in the Boltzmann theory of a perfect gas. The kinetic energy is what depends on the volume, on account of the dependence of \epsilon_{00} on volume.

6. The Einstein-Bose Distribution Law.
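The zero-point pressure quoted above follows from Eqs. (5.3), (5.8), and (5.12); a minimal numerical check, with the same density and constants as in Sec. 5:

```python
import math

h = 6.61e-27                      # Planck's constant, erg sec
m_e = 1.66e-24 / 1830.0           # electron mass, g
n = 1.0 / (3e-8)**3               # particles per cm^3, as in Sec. 5

eps00 = (h**2 / (2 * m_e)) * (3 * n / (4 * math.pi))**(2.0 / 3.0)   # Eq. (5.3), ergs
U_per_V = 0.6 * n * eps00         # Eq. (5.8) at the absolute zero: U = (3/5) N eps00
P = (2.0 / 3.0) * U_per_V         # Eq. (5.12): P V = (2/3) U
P_atm = P / 1.013e6               # dynes per cm^2 to atmospheres
print(round(P_atm))               # about 1.5 x 10^5 atm, in line with the figure quoted
```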
We can find the Einstein-Bose distribution law, proceeding by exact analogy with the methods of Sec. 3 but using the expression (2.11) for the entropy. Thus for the function A we have

A = \sum_i\left[\bar N_i\epsilon_i + kT\,\bar N_i\ln\bar N_i - kT(1 + \bar N_i)\ln(1 + \bar N_i)\right]. \quad (6.1)

Varying the \bar N_i's and requiring that A be a minimum for equilibrium, we have

dA = 0 = \sum_i\delta\bar N_i\left[\epsilon_i + kT\ln\frac{\bar N_i}{1 + \bar N_i}\right]. \quad (6.2)

As in Sec. 3, Eq. (6.2) must be satisfied, subject to the condition \sum_i\delta\bar N_i = 0, leading to the relation

\epsilon_i + kT\ln\frac{\bar N_i}{1 + \bar N_i} = \epsilon_0 = \text{const.} \quad (6.3)

Solving for \bar N_i, as in the derivation of Eq. (3.5), we have

\bar N_i = \frac{1}{e^{(\epsilon_i - \epsilon_0)/kT} - 1}. \quad (6.4)

Equation (6.4) expresses the Einstein-Bose distribution law. As with the Fermi-Dirac law, the constant \epsilon_0 is to be determined by the condition

N = \sum_i\frac{1}{e^{(\epsilon_i - \epsilon_0)/kT} - 1}. \quad (6.5)

We can show, as we did with the Fermi-Dirac statistics, that the distribution (6.5) approaches the Maxwell-Boltzmann distribution law at high temperatures. It is no easier to make detailed calculations with the Einstein-Bose law than with the Fermi-Dirac distribution, and on account of its smaller practical importance we shall not carry through a detailed discussion.

FIG. V-2. Distribution functions for Fermi-Dirac statistics (a); Maxwell-Boltzmann statistics (b); and Einstein-Bose statistics (c).

It is interesting, however, to compare the three distribution laws. This is done in Fig. V-2, where we plot the function 1/(e^{(\epsilon-\epsilon_0)/kT} - 1) representing the Einstein-Bose law, e^{-(\epsilon-\epsilon_0)/kT} representing the Maxwell-Boltzmann, and 1/(e^{(\epsilon-\epsilon_0)/kT} + 1) representing the Fermi-Dirac, all as functions of (\epsilon - \epsilon_0)/kT. We observe that the curve for the Einstein-Bose distribution becomes asymptotically infinite as \epsilon approaches \epsilon_0. From this and from Eq. (6.5), it follows that \epsilon_0 must lie lower than any of the energy levels of the system, in contrast to the case of the Fermi-Dirac distribution.
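The three functions plotted in Fig. V-2 can be tabulated directly; a short sketch, where x stands for (\epsilon - \epsilon_0)/kT:

```python
import math

# The three occupation laws as functions of x = (eps - eps0)/kT, x > 0
def bose(x):                      # Einstein-Bose, curve (c)
    return 1.0 / (math.exp(x) - 1.0)

def boltzmann(x):                 # Maxwell-Boltzmann, curve (b)
    return math.exp(-x)

def fermi(x):                     # Fermi-Dirac, curve (a)
    return 1.0 / (math.exp(x) + 1.0)

# Einstein-Bose lies above, Fermi-Dirac below, the Maxwell-Boltzmann curve,
# and the Einstein-Bose function diverges as x -> 0.
for x in (0.25, 1.0, 2.0, 4.0):
    print(x, round(bose(x), 3), round(boltzmann(x), 3), round(fermi(x), 3))
```

The tabulated values show the three curves merging at large x, which is the approach to the Boltzmann limit at high temperature.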
We see that the Maxwell-Boltzmann distribution forms in a certain sense an intermediate case between the two other distributions. The Fermi-Dirac statistics tends to concentrate the molecules more in the higher energies, having fewer molecules in proportion in the lower energies than in the Maxwell-Boltzmann statistics. On the contrary, the Einstein-Bose statistics tends to have more molecules in the lower energies. As a matter of fact, more elaborate study of the Einstein-Bose distribution law shows that the concentration of molecules in the low states is so extreme that at low enough temperatures a phenomenon of condensation sets in, somewhat analogous to ordinary changes of phase of a real gas. From these properties of the distribution laws, we can see that in some superficial ways the effect of the Fermi-Dirac statistics is similar to that of repulsive forces between the molecules, leading to a large pressure even at the absolute zero, while the effect of the Einstein-Bose statistics is similar to that of attractive forces, leading to condensation into a phase resembling a liquid. The real gases hydrogen and helium obey the Einstein-Bose statistics, and there are indications that at temperatures of a few degrees absolute the departures from the Maxwell-Boltzmann statistics are appreciable. Of course, the molecules have real attractive forces, but the effect of the statistics is to help those forces along, producing condensation at somewhat higher temperature than would otherwise be expected. The suggestion has even been made that the anomalous condensed phase He II, a liquid persisting to the absolute zero at ordinary pressures and showing extraordinarily low viscosity, may be the condensed phase of the Einstein-Bose statistics.
For other gases than hydrogen and helium, the intermolecular attractions are so much greater than the effect of the Einstein-Bose statistics that they liquefy at temperatures too high to detect departures from the Maxwell-Boltzmann law. Aside from these gases, the only important application of the Einstein-Bose statistics comes in the theory of black-body radiation, in which it is found that photons, or corpuscles of radiant energy, obey the Einstein-Bose statistics, leading to a simple connection between the Einstein-Bose distribution law and the Planck law of black-body radiation, which we shall discuss in a later chapter.

CHAPTER VI

THE KINETIC METHOD AND THE APPROACH TO THERMAL EQUILIBRIUM

In the preceding chapters, we have taken up in a very general way thermodynamics and statistical mechanics, including some applications to perfect gases. Both, as we have seen, are very general and powerful methods, but both are limited, as far as quantitative predictions are concerned, to systems in thermal equilibrium. The kinetic theory, some of whose methods we shall use in this chapter, is not so limited. It can handle the rates of molecular processes and incidentally treats thermal equilibrium by looking for a steady state, in which the rate of change of any quantity is zero. But it has disadvantages compensating this great advantage: it is much more complicated and much less general than thermodynamics and statistical mechanics. For this reason we shall not pretend to give any methods of handling an arbitrary problem by kinetic theory. We limit ourselves to a very special case, the perfect monatomic gas, and we shall not even make any quantitative calculations for it. Later on, in various parts of the book, we shall handle other special problems by the kinetic method. Always, we shall find that an actual calculation of the rate of a process gives us a better physical insight into what is going on than the more general methods of statistical mechanics.
But generally we shall find that the kinetic methods do not go so far, and always they are more complicated. Our problem in this chapter is to investigate thermal equilibrium in a perfect monatomic gas by the kinetic method. We set up an arbitrary state of a gas and investigate how it changes as time goes on. We compute its entropy at each stage of the process, showing that in fact the entropy increases in the irreversible process by which the arbitrary distribution changes over to thermal equilibrium, and we can actually find how fast it increases, which we could not do by our previous methods. Finally, by looking for the final state, in which the entropy can no longer increase, we get the condition for thermal equilibrium and show that it agrees with the condition derived from statistical mechanics and the canonical assembly.

1. The Effect of Molecular Collisions on the Distribution Function in the Boltzmann Statistics. Let us set up a distribution in the molecular phase space, as described in Chap. V, Sec. 1. We consider, not a single state of the gas, but an assembly of states, as set up in Sec. 2 of that same chapter, defined by the average number \bar N_i of molecules in the ith cell of the molecular phase space, and having the entropy given in Eq. (2.7), (2.11), or (2.12) of Chap. V. We start with an arbitrary set of \bar N_i's, and ask how they change as time goes on. The changes of the \bar N_i's arise in two ways. First, there are those changes that would be present even if there were no collisions between molecules. A molecule with a certain velocity moves from point to point, and hence from cell to cell, on account of that velocity. And if the molecule is acted on by an external force field, which changes its momentum, it goes from cell to cell for that reason too. These changes are in the nature of streamline flows of the representative points of the molecules in the molecular phase space.
We shall discuss them later and shall show that they do not result in any change of entropy. Secondly, there are changes of the \bar N_i's on account of collisions between molecules. These are the changes resulting in irreversible approach to a random distribution and in an increase of entropy. Since they are for our present purposes the most interesting changes, we consider them first.

Consider two molecules, one in the ith cell, one in the jth, of molecular phase space. If these cells happen to correspond to the same value of the coordinates, though to different values of the momenta, there is a chance that the molecules may collide. In the process of collision, the representative points of the molecules will suddenly shift to two other cells, say the kth and lth, having practically the same coordinates but entirely different momenta. The momenta will be related to the initial values; for the collision will satisfy the conditions of conservation of energy and conservation of momentum. These relations give four equations relating the final momenta to the initial momenta, but since there are six components of the final momenta for the two particles, the four equations (conservation of energy and conservation of three components of momentum) will still leave two quantities undetermined. For instance, we may consider that the direction of one of the particles after collision is undetermined, the other quantities being fixed by the conditions of conservation.

We now ask, how many collisions per second are there in which molecules in the ith and jth cells disappear and reappear in the kth and lth cells? We can be sure that this number of collisions will be proportional both to the number of molecules in the ith and to the number of molecules in the jth cell. This is plain, since doubling the number of either type of molecule will give twice as many of the desired sort that can collide, and so will double the number of collisions per unit time.
In the case of the Boltzmann statistics, which we first consider, the number of collisions will be independent of the number of molecules in the kth and lth cells, though we shall find later that this is not the case with the Fermi-Dirac and Einstein-Bose statistics. We can then write the number of collisions of the desired type in unit time, averaged over the assembly, as

A_{ij}^{kl}\,\bar N_i\bar N_j. \quad (1.1)

The coefficient A_{ij}^{kl} will of course depend on the momenta associated with all four cells, and in particular will be zero if these momenta do not satisfy the conditions of conservation. It will also depend on the properties of the atom. For instance, it is obvious that the larger the molecules are, the more likely they are to collide, and the larger the A's will be. We do not have to go more into details of the A's for our present purposes, however.

In addition to these collisions, we shall have to consider what we shall call an inverse collision. This is one in which the molecules before collision are in the cells k and l, and after collision are in cells i and j. The number of such collisions per unit time, by the same argument as before, will be

A_{kl}^{ij}\,\bar N_k\bar N_l. \quad (1.2)

Now we ask, what relation, if any, is there between the two coefficients A_{ij}^{kl} and A_{kl}^{ij} of the direct and inverse collisions? The answer to this question is simple but not very easy to justify. It is this: if the cells are all of the same size, as we are assuming, we have simply

A_{ij}^{kl} = A_{kl}^{ij}. \quad (1.3)

In case the collision takes place according to Newtonian mechanics, the relation (1.3) can be proved by means of Liouville's theorem. In quantum mechanics, Eq. (1.3) is practically one of the postulates of the theory, following directly from quantum mechanical calculations of transition probabilities from one state to another. For our present purpose, considering that this is an elementary discussion, we shall simply assume the correctness of relation (1.3).
This relation is sometimes called the principle of microscopic reversibility.

We are now in position to find how our distribution function changes on account of collisions. Let us consider a certain cell i, and ask how the average number \bar N_i of molecules in this cell changes with time, on account of collisions. In the first place, whenever a molecule in the cell collides with another molecule in any other cell, the first molecule will be removed from the ith cell, and the number of molecules in this cell will be diminished by one. But the whole number of collisions of a molecule in cell i with all other molecules, per second, is

\sum_{jkl}A_{ij}^{kl}\,\bar N_i\bar N_j, \quad (1.4)

where we are summing over all other types of molecule j with which the original molecule can collide, and over all possible states k and l into which the molecules can be sent by collision. On the other hand, it is possible to have a collision of two molecules having quite different momenta, such that one of the molecules after collision would be in cell i. This would result in an increase of unity in the number of molecules in the cell i. The number of such collisions per second is

\sum_{jkl}A_{ij}^{kl}\,\bar N_k\bar N_l, \quad (1.5)

where we have used the result of Eq. (1.3). Thus the total change in \bar N_i per second is given by

\frac{d\bar N_i}{dt} = \sum_{jkl}A_{ij}^{kl}(\bar N_k\bar N_l - \bar N_i\bar N_j). \quad (1.6)

2. The Effect of Collisions on the Entropy. Equation (1.6) represents the first part of our derivation of the effect of collisions in producing an irreversible change in the distribution and hence in increasing the entropy. Now we must go back to the definition of the entropy in Eq. (2.12) in Chap. V, and find how much S changes per unit time on account of the collisions. Differentiating that equation with respect to the time, we have at once

\frac{dS}{dt} = -k\sum_i\frac{d\bar N_i}{dt}\ln\bar N_i. \quad (2.1)

Substituting from Eq. (1.6), this becomes

\frac{dS}{dt} = -k\sum_{ijkl}A_{ij}^{kl}\ln\bar N_i\,(\bar N_k\bar N_l - \bar N_i\bar N_j). \quad (2.2)
We notice that the fourfold summation over i, j, k, l is perfectly symmetrical in i and j; they are simply the indices of the two colliding particles before collision. We could interchange their names, and could equally well write

\frac{dS}{dt} = -k\sum_{ijkl}A_{ij}^{kl}\ln\bar N_j\,(\bar N_k\bar N_l - \bar N_i\bar N_j). \quad (2.3)

By Eq. (1.3), this could also be written

\frac{dS}{dt} = -k\sum_{ijkl}A_{kl}^{ij}\ln\bar N_j\,(\bar N_k\bar N_l - \bar N_i\bar N_j). \quad (2.4)

But in Eq. (2.4), we can interchange the names of i, j with k, l, obtaining

\frac{dS}{dt} = -k\sum_{ijkl}A_{ij}^{kl}\ln\bar N_k\,(\bar N_i\bar N_j - \bar N_k\bar N_l). \quad (2.5)

Finally, interchanging the roles of the kth and lth atoms, we have

\frac{dS}{dt} = -k\sum_{ijkl}A_{ij}^{kl}\ln\bar N_l\,(\bar N_i\bar N_j - \bar N_k\bar N_l). \quad (2.6)

We now have, in Eqs. (2.2), (2.3), (2.5), and (2.6), four equivalent ways of writing dS/dt. We add these equations and divide by 4, obtaining the final form

\frac{dS}{dt} = \frac{k}{4}\sum_{ijkl}A_{ij}^{kl}\,(\ln\bar N_i\bar N_j - \ln\bar N_k\bar N_l)(\bar N_i\bar N_j - \bar N_k\bar N_l). \quad (2.7)

The result of Eq. (2.7) is a very remarkable one. Each term of the summation is a product of a coefficient A (which is necessarily positive), and a factor of the form

(\ln x - \ln y)(x - y), \quad (2.8)

where x = \bar N_i\bar N_j, y = \bar N_k\bar N_l. But the factor (2.8) is necessarily positive. If x > y, so that x - y is positive, then \ln x > \ln y, so that the other factor \ln x - \ln y is positive as well. On the other hand, if x < y, so that x - y is negative, \ln x - \ln y is also negative, so that the product of the two factors is again positive. Thus, every term of the summation (2.7) is positive, and as a result the summation is positive. The only way to avoid this is to have each separate term equal to zero; then the whole summation is zero. But if the summation is positive, this means that the entropy S is increasing with time. Thus we have proved Boltzmann's famous theorem (often called the H theorem, because he called the summation of Eq. (2.12), Chap. V, by the symbol H, setting S = -kH): the entropy S continually increases, on account of collisions,
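The content of Eqs. (1.6), (2.7), and (2.9) can be illustrated with a toy model: a single collision channel (1,2) <-> (3,4), stepped forward by Euler integration. The initial occupations, the rate A, and the time step are arbitrary illustrative numbers, not anything from the text:

```python
import math

# Toy model of Eq. (1.6): one collision channel (1,2) <-> (3,4).
# Initial occupations, rate A, and time step dt are arbitrary illustrations.
N = [4.0, 3.0, 0.5, 0.5]
A, dt = 0.1, 0.01

def entropy(N):
    # Boltzmann-statistics entropy, Eq. (2.12) of Chap. V, with k = 1
    return -sum(n * math.log(n) - n for n in N)

S_prev = entropy(N)
for step in range(5000):
    # direct collisions (1,2)->(3,4) minus inverse collisions (3,4)->(1,2)
    flux = A * (N[2] * N[3] - N[0] * N[1])
    N[0] += flux * dt; N[1] += flux * dt
    N[2] -= flux * dt; N[3] -= flux * dt
    S = entropy(N)
    assert S >= S_prev - 1e-12     # the H theorem: S never decreases
    S_prev = S

# In the final steady state, detailed balancing, Eq. (2.9), holds:
print(abs(N[0] * N[1] - N[2] * N[3]) < 1e-6)   # True
```

The run stops changing precisely when N1 N2 = N3 N4, which is the condition (2.9) derived in the text.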
unless it has already reached a steady state, for which the condition is

\bar N_i\bar N_j - \bar N_k\bar N_l = 0 \quad (2.9)

for every set of cells i, j, k, l between which a collision is possible (that is, for which A_{ij}^{kl} \neq 0). By comparison with Eq. (1.6), we see that Eq. (2.9) leads to the condition that d\bar N_i/dt should be zero, by demanding that each separate term of Eq. (1.6) should be zero. That is, in equilibrium, the collision in which atoms i and j collide to give atoms k and l, together with the inverse to this type of collision, by themselves give no net change in the numbers of atoms in the various states, the number of direct collisions just balancing the number of inverse collisions. This condition is called the condition of detailed balancing. It is a general characteristic of thermal equilibrium that this detailed balancing should hold and, as we have seen, it follows directly from the second law in its statistical form.

We may now rewrite Eq. (2.9) in the form \bar N_i\bar N_j = \bar N_k\bar N_l, or

\ln\bar N_i + \ln\bar N_j = \ln\bar N_k + \ln\bar N_l. \quad (2.10)

This holds for every transition for which A_{ij}^{kl} \neq 0; that is, for every collision satisfying the laws of conservation of energy and momentum. Using the notation of Sec. 3 in Chap. IV, we can let the average number of molecules in an element dx dy dz dp_x dp_y dp_z of the molecular phase space be f_m dx dy dz dp_x dp_y dp_z. According to Chap. III, Sec. 3, the volume of molecular phase space associated with one cell is h^3. Then we have the relation

\bar N_i = h^3 f_m, \quad (2.11)

where f_m is to be computed in the ith cell. We now substitute Eq. (2.11) in Eq. (2.10), writing that equation in terms of the f_m's. Since all four of our cells must refer to the same point of coordinate space, since molecules cannot collide unless they are in contact, we write f_m merely as f_m(p_x p_y p_z), or f(p) for short. Then we have

\ln f(p_i) + \ln f(p_j) = \ln f(p_k) + \ln f(p_l).
(2.12)

Equation (2.12) states that there is a certain function \ln f of the momentum of a molecule, such that the sum of the functions of the two molecules before collision equals the sum of the functions of the two after collision. That is, the total amount of this function is conserved on collision. But there are just four quantities that have this property: the energy and the three components of momentum. Any linear function of these four quantities will also be conserved, and it can be proved that this is the most general function which has this property. Thus we may conclude that

\ln f(p) = A\epsilon + Bp_x + Cp_y + Dp_z + E, \quad (2.13)

where A, B, C, D, E are arbitrary constants. Substitution of Eq. (2.13) in Eq. (2.12) shows at once that Eq. (2.12) is satisfied. We have been able, in other words, to get a general solution of the problem of the distribution function in equilibrium. The linear combination of Eq. (2.13) can be rewritten in the form

\ln f(p) = -\frac{(p_x - p_{x0})^2 + (p_y - p_{y0})^2 + (p_z - p_{z0})^2}{2mkT} + \ln f_0, \quad (2.14)

where T (later to be identified with the temperature), p_{x0}, p_{y0}, p_{z0}, f_0 are arbitrary constants, whose relation to the constants of Eq. (2.13) is evident from Eq. (2.14). Thus we have

f(p) = f_0\,e^{-[(p_x - p_{x0})^2 + (p_y - p_{y0})^2 + (p_z - p_{z0})^2]/2mkT}. \quad (2.15)

3. The Constants in the Distribution Function. In Eq. (2.15), we have found a distribution function satisfying the condition of thermal equilibrium and containing five arbitrary constants, T, p_{x0}, p_{y0}, p_{z0}, f_0. Since our calculation has been entirely for a single point of ordinary space, these five quantities, for all we know, may vary from point to point or be functions of position. Shortly we shall find how the quantities must vary with position in order to have thermal equilibrium. We may anticipate by stating the results which we shall find and giving their physical interpretation.
In the first place, we shall find that the four quantities T, p_{x0}, p_{y0}, p_{z0} must be constant at all points of space, for equilibrium. By comparison with Eq. (2.4) of Chap. IV, the formula for the Maxwell distribution of velocities, we see that T must be identified with the temperature, which must not vary from point to point in thermal equilibrium. The quantities p_{x0}, p_{y0}, p_{z0} are the components of a vector representing the mean momentum of all the molecules. If they are zero, the distribution (2.15) agrees exactly with Eq. (2.4) of Chap. IV. If they are not zero, however, Eq. (2.15) represents the distribution of velocities in a gas with a certain velocity of mass motion, of components p_{x0}/m, p_{y0}/m, p_{z0}/m. The quantities p_x - p_{x0}, etc., represent components of momentum relative to this momentum of mass motion, and the relative distribution of velocities is as in Maxwell's distribution. Since ordinarily we deal with gas without mass motion, we ordinarily set p_{x0}, p_{y0}, and p_{z0} equal to zero. In general, we shall not have thermal equilibrium unless the velocity of mass motion is independent of position; otherwise, as we can see physically, there would be the possibility of viscous effects between different parts of the gas. We have now considered the variation of T, p_{x0}, p_{y0}, p_{z0} with position and have shown that they are constants. Finally we consider f_0. Our analysis will show that if the potential energy of a molecule is \phi, a function of position, we must have

f_0 = \text{const.}\times e^{-\phi/kT}. \quad (3.1)

Equation (3.1) shows that the density varies with position just as described in Chap. IV, Sec. 4. Thus, with these interpretations of the constants, we see that Eq. (2.15), representing the distribution function which we find by the kinetic method for the steady state distribution, is exactly the same that we found previously, in Chap. IV, by statistical methods.
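That \ln f of the form (2.13) to (2.15) is conserved in a collision can be checked numerically: for equal masses, a collision keeps the total momentum P and the magnitude of the relative momentum, while the scattering direction is the undetermined quantity of Sec. 1. A sketch, in which the momenta and the drift p_0 are arbitrary illustrative values:

```python
import math, random

# Check that f(p) of Eq. (2.15) satisfies Eq. (2.12): the product
# f(p_i) f(p_j) is unchanged by an elastic collision of equal masses.
m, kT = 1.0, 1.0
p0 = (0.3, -0.2, 0.1)             # drift momenta p_x0, p_y0, p_z0 (arbitrary)

def f(p):                          # Eq. (2.15) with f_0 = 1
    return math.exp(-sum((a - b)**2 for a, b in zip(p, p0)) / (2 * m * kT))

def random_unit():
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in v))
    return [c / r for c in v]

random.seed(1)
p_i, p_j = (1.0, 0.5, -0.3), (-0.4, 0.2, 0.8)   # arbitrary initial momenta
P = [a + b for a, b in zip(p_i, p_j)]           # total momentum, conserved
q = [(a - b) / 2 for a, b in zip(p_i, p_j)]     # relative momentum
qlen = math.sqrt(sum(c * c for c in q))         # magnitude fixed by energy conservation
u = random_unit()                               # scattering direction: the undetermined quantity
p_k = [Pc / 2 + qlen * uc for Pc, uc in zip(P, u)]
p_l = [Pc / 2 - qlen * uc for Pc, uc in zip(P, u)]

diff = abs(f(p_i) * f(p_j) - f(p_k) * f(p_l))
print(diff < 1e-12)                # True: the product is conserved on collision
```

The check works for any scattering direction, reflecting the fact that \ln f is built only from the conserved energy and momentum.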
We shall now prove the results that we have mentioned above, regarding the variation of T, p_{x0}, p_{y0}, p_{z0}, f_0 with position. We stated in Sec. 1 that a molecule could shift from point to point in phase space not only on account of collisions, which we have considered, but also on account of the velocity and of external force fields, and that these shifts were in the nature of streamline flows in the phase space and did not correspond to changes of entropy. We must now analyze these motions of the molecules. We shall assume classical mechanics for this purpose; the energy levels of a perfect gas, as we have seen in Chap. IV, Sec. 1, are spaced so closely together that the cells in phase space can be treated as continuous. Let us, then, take a volume element dx dy dz dp_x dp_y dp_z in molecular phase space, and find the time rate of change of f_m dx dy dz dp_x dp_y dp_z, the number of molecules in the volume element, for all reasons except collisions. First, we consider the number of molecules entering the element over the surface perpendicular to the x axis, whose (five-dimensional) area is dy dz dp_x dp_y dp_z. The component of velocity of each molecule along the x axis is p_x/m. Then, using an argument similar to that of Chap. IV, Sec. 3, the number of molecules entering the element over this surface per second is the number contained in a prism of base dy dz dp_x dp_y dp_z and altitude p_x/m. This is the volume of the prism [(p_x/m) dy dz dp_x dp_y dp_z] times the number of molecules per unit volume in phase space, or f_m. Hence the required number is

\frac{p_x}{m}\,f_m\,dy\,dz\,dp_x\,dp_y\,dp_z.
Thus the net number enter- ing over the two parallel faces perpendicular to x is y d z dp x dp v dp z , fit uX and for the three sets of faces perpendicular to x, y, z, we have the net number -fe -/; + 5 + If)"* * * * <" *- <M> In a similar way we can consider the face perpendicular to the p x axis in the phase space. The component of velocity of a representative point along the p x axis in this space is by definition simply the time rate of change of p x ; that is, by Newton 's second law, it is the x component of force acting on a molecule. If the potential energy of a molecule is <, this component of force is d<t>/djc. Thus, the number of molecules entering over the face perpendicular to the p x axis is /ml -fa) dx d 1l dz d Py d P*> and the net number entering over the faces at p x and at p x + dp x is f^JL 3f ~dx dp X dy <iZ dpx dj)y flp " We have three such terms for the three components of momentum, and combining with Eq. (3.2), we have for the total change of the number of molecules in the volume element per second the relation *- < 3 - 8) Having found the change in the distribution function, in Eq. (3.3), we shall first show that it involves no change of entropy. The physical reason is that it corresponds to a streamline motion in phase space, result- ing in no increase of randomness. We use Eq. (2.1) for the change of SEC. 3] THE KINETIC METHOD entropy with time, Eq. (2.11) for the relation between Eq. (3.3) for df m /dt. Then we have and /,, and \dx dp 9 . (3.4) Each of the integrals over the coordinates is to be carried to a point out- side the container holding the gas, each integral over momenta to infinity. In Eq. (3.4), each term can be integrated with respect to one of the vari- ables. Thus the first term, as far as the integration with respect to x is concerned, can be transformed by the relation f ' = Infndfm = (/ w ln/ w -, (3.5) At both limits of integration, f m = 0, since the limits lie outside the con- tainer, so that the integral vanishes. 
A similar transformation can be made on each term, leading to the result that the changes of f_m we are now considering result in no change of entropy. This justifies our analysis of Sec. 2, in which we treated the change of entropy as arising entirely from collisions.

Now we can use our condition (3.3) to find the variation of our quantities f_0, T, p_{x0}, p_{y0}, p_{z0} of Sec. 2 with position. In thermal equilibrium, we must have \partial f_m/\partial t = 0. Thus Eq. (3.3) gives us a relation involving the various derivatives of f_m. We substitute for f_m from Eq. (2.15), treating the quantities just mentioned as functions of position. Then Eq. (3.3) becomes, canceling the exponential,

0 = \sum_{x,y,z}\left\{\frac{p_x}{m}\left[\frac{\partial\ln f_0}{\partial x} + \frac{(p_x - p_{x0})\frac{\partial p_{x0}}{\partial x} + (p_y - p_{y0})\frac{\partial p_{y0}}{\partial x} + (p_z - p_{z0})\frac{\partial p_{z0}}{\partial x}}{mkT} + \frac{(p_x - p_{x0})^2 + (p_y - p_{y0})^2 + (p_z - p_{z0})^2}{2mkT^2}\frac{\partial T}{\partial x}\right] + \frac{\partial\phi}{\partial x}\,\frac{p_x - p_{x0}}{mkT}\right\}. \quad (3.6)

Equation (3.6) must be satisfied for any arbitrary values of the momenta. Since it is a polynomial in p_x, p_y, p_z involving terms of all degrees up to the third, and since a polynomial cannot be zero for all values of its argument unless the coefficient of each term is zero, we can conclude that the coefficient of each power of each of the p's in Eq. (3.6) must be zero. From the third powers we see at once that

\frac{\partial T}{\partial x} = 0, \quad \frac{\partial T}{\partial y} = 0, \quad \frac{\partial T}{\partial z} = 0, \quad (3.7)

or the temperature is independent of position. From the second powers we then see that the derivatives of the form \partial p_{x0}/\partial x must all be zero, or the average momentum is independent of position. We are left with only the first and zero powers. From the zero powers, we see that either p_{x0} = p_{y0} = p_{z0} = 0, or

\frac{\partial\phi}{\partial x} = \frac{\partial\phi}{\partial y} = \frac{\partial\phi}{\partial z} = 0. \quad (3.8)

That is, if there is an external force field, there can be no mass motion of the gas, for in this case the external field would do work on the gas and its energy could not be constant. Then we are left with the first power terms.
The coefficient of p_x, for instance, gives

\frac{\partial\ln f_0}{\partial x} = -\frac{1}{kT}\frac{\partial\phi}{\partial x}, \quad (3.9)

with similar relations for the y and z components. Equation (3.9) can be rewritten

\ln f_0 = -\frac{\phi}{kT} + \text{const.}, \quad f_0 = \text{const.}\,e^{-\phi/kT},

or Eq. (3.1). Thus we have proved all the results regarding our distribution function that we have mentioned earlier in the section, and have completed the proof that the Maxwell-Boltzmann distribution law is the only one that will not be affected by collisions or the natural motions of the molecules and, therefore, must correspond to thermal equilibrium.

4. The Kinetic Method for Fermi-Dirac and Einstein-Bose Statistics. The arguments of the preceding sections must be modified in only two ways to change from the Boltzmann statistics to the Fermi-Dirac or Einstein-Bose statistics. In the first place, the law giving the number of collisions per unit time, Eq. (1.1), must be changed. Secondly, as we should naturally expect, we must use the appropriate formula for entropy with each type of statistics.

First, we consider the substitute for the law of collisions. Clearly the law (1.1), giving A_{ij}^{kl}\bar N_i\bar N_j collisions per second in which molecules in states i and j collide to give molecules in states k and l, cannot be correct for Fermi-Dirac statistics. For the fundamental feature of Fermi-Dirac statistics is that if the kth or lth stationary states happen to be already occupied by a particle, there is no chance of another particle going into them. Thus our probability must depend in some way on the number of particles in the kth and lth states, as well as the ith and jth. Of course, the kth state can have either no particles in it, or one; never more. Thus in one example of our system, chosen from the statistical assembly, N_k may be zero or unity. If it is zero, there is no objection to another particle entering it. If it is unity, there is no possibility that another particle can enter it.
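The result f_0 = const. e^{-\phi/kT} can be checked against Eq. (3.3): with this f_0, the streaming terms cancel identically. A sketch in one coordinate, using finite differences; the quadratic potential is an arbitrary illustration:

```python
import math

# With f_0 = e^{-phi/kT}, the streaming term of Eq. (3.3) vanishes:
#   -(p/m) df_m/dx + (dphi/dx) df_m/dp = 0.
m, kT = 1.0, 1.0

def phi(x):
    return 0.5 * x * x            # arbitrary external potential

def f_m(x, p):                    # Eqs. (2.15) and (3.1), with const. = 1
    return math.exp(-phi(x) / kT) * math.exp(-p * p / (2 * m * kT))

x, p, h = 0.7, -1.3, 1e-5         # sample phase-space point; h is the step size
df_dx = (f_m(x + h, p) - f_m(x - h, p)) / (2 * h)
df_dp = (f_m(x, p + h) - f_m(x, p - h)) / (2 * h)
dphi_dx = (phi(x + h) - phi(x - h)) / (2 * h)

streaming = -(p / m) * df_dx + dphi_dx * df_dp
print(abs(streaming) < 1e-8)      # True: the natural motions leave f_m unchanged
```

Analytically the two terms are (p/m)(phi'/kT) f_m and -(p/m)(phi'/kT) f_m, so the cancellation is exact; the finite-difference check only incurs truncation error.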
Averaging over the assembly, the probability of having a collision in which a particle is knocked into the kth state must clearly have an additional factor equal to the fraction of all examples of the assembly in which the kth state is unoccupied. Now N̄_k is the mean number of particles in the kth state. Since the number of particles is always zero or one, this means that N̄_k is just the fraction of examples in which the kth state is occupied. Then 1 − N̄_k is the fraction of examples in which it is unoccupied, and this is just the factor we were looking for. Similarly we want a factor 1 − N̄_l to represent the probability that the lth state will be unoccupied and available for a particle to enter it. Then, finally, we have for the number of collisions per second in which particles in the ith and jth cells are knocked into the kth and lth cells, the formula

$$A\,\bar N_i \bar N_j (1 - \bar N_k)(1 - \bar N_l). \quad (4.1)$$

In the Einstein-Bose statistics, there is no such clear physical way to find the revised law of collisions as in the Fermi-Dirac statistics. The law can be derived from the quantum theory, but not in a simple enough way to describe here. In contrast to the Fermi-Dirac statistics, in which the presence of one molecule in a cell prevents another from entering the same cell, the situation with the Einstein-Bose statistics is that the presence of a molecule in a cell increases the probability that another one should enter the same cell. In fact, the number of molecules going into the kth cell per second turns out to have a factor (1 + N̄_k), increasing linearly with the mean number N̄_k of molecules in that cell. Thus, the law of collisions for the Einstein-Bose statistics is just like Eq. (4.1), only with + signs replacing the − signs. In fact, we may write the law of collisions for both forms of statistics in the form

$$A\,\bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l), \quad (4.2)$$

where the upper sign refers to the Einstein-Bose statistics, the lower to the Fermi-Dirac.
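The occupation-dependent factors of Eqs. (4.1)-(4.2) are easy to sketch in code: Fermi-Dirac blocking suppresses the rate below the Boltzmann value, Einstein-Bose enhancement raises it, and a Fermi-Dirac rate vanishes entirely when a final state is full. A minimal sketch (the constant A and the occupation numbers below are arbitrary illustrative values, not from the text):

```python
def collision_rate_factor(Ni, Nj, Nk, Nl, statistics, A=1.0):
    """Occupation-dependent collision rate of Eqs. (4.1)-(4.2):
    'bose'  -> A Ni Nj (1 + Nk)(1 + Nl)   (upper sign)
    'fermi' -> A Ni Nj (1 - Nk)(1 - Nl)   (lower sign)
    'boltzmann' -> A Ni Nj                (no final-state factor)."""
    if statistics == "bose":
        return A * Ni * Nj * (1 + Nk) * (1 + Nl)
    if statistics == "fermi":
        return A * Ni * Nj * (1 - Nk) * (1 - Nl)
    return A * Ni * Nj

# With all mean occupations equal to 1/2, the three statistics order themselves:
r_fermi = collision_rate_factor(0.5, 0.5, 0.5, 0.5, "fermi")
r_boltz = collision_rate_factor(0.5, 0.5, 0.5, 0.5, "boltzmann")
r_bose  = collision_rate_factor(0.5, 0.5, 0.5, 0.5, "bose")
```

A filled final state (N_k = 1) makes the Fermi-Dirac rate exactly zero, which is the exclusion principle in rate form.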
Next, we must consider the change with time of the mean number of molecules in the ith state. Using the law of collisions (4.2) and proceeding as in the derivation of Eq. (1.6), we have at once

$$\frac{d\bar N_i}{dt} = \sum_{jkl} A\,[\bar N_k \bar N_l (1 \pm \bar N_i)(1 \pm \bar N_j) - \bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l)]. \quad (4.3)$$

Having found the number of collisions, we can find the change in entropy per unit time. Using the formulas (2.7) and (2.11) of Chap. V for the entropy in the case of Fermi-Dirac and Einstein-Bose statistics, we find at once that

$$\frac{dS}{dt} = -k \sum_i \frac{d\bar N_i}{dt}\,\ln \frac{\bar N_i}{1 \pm \bar N_i}, \quad (4.4)$$

where again the upper sign refers to Einstein-Bose statistics, the lower one to Fermi-Dirac statistics. Substituting from Eq. (4.3), we have

$$\frac{dS}{dt} = -k \sum_{ijkl} A\,\ln \frac{\bar N_i}{1 \pm \bar N_i}\,[\bar N_k \bar N_l (1 \pm \bar N_i)(1 \pm \bar N_j) - \bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l)]. \quad (4.5)$$

As in Sec. 2, we can write four expressions equivalent to Eq. (4.5), by interchanging the various indices i, j, k, l. Adding these four and dividing by four, we obtain

$$\frac{dS}{dt} = \frac{k}{4} \sum_{ijkl} A\,\{\ln [\bar N_k \bar N_l (1 \pm \bar N_i)(1 \pm \bar N_j)] - \ln [\bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l)]\}\,[\bar N_k \bar N_l (1 \pm \bar N_i)(1 \pm \bar N_j) - \bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l)]. \quad (4.6)$$

But, as in Sec. 2, this expression cannot be zero, as it must be for a steady state, unless

$$\bar N_k \bar N_l (1 \pm \bar N_i)(1 \pm \bar N_j) = \bar N_i \bar N_j (1 \pm \bar N_k)(1 \pm \bar N_l), \quad (4.7)$$

and if it is not zero, it must necessarily be positive. Thus we have demonstrated that the entropy increases in an irreversible process, and have found the condition for thermal equilibrium.

From Eq. (4.7) we can find the distribution functions for the Einstein-Bose and Fermi-Dirac statistics. We rewrite the equation in the form

$$\ln \frac{\bar N_k}{1 \pm \bar N_k} + \ln \frac{\bar N_l}{1 \pm \bar N_l} = \ln \frac{\bar N_i}{1 \pm \bar N_i} + \ln \frac{\bar N_j}{1 \pm \bar N_j}. \quad (4.8)$$

As in Sec. 2, we may now conclude that the quantity ln [N̄/(1 ± N̄)] must be a function which is conserved on collision, since the sum for the two particles before and after collision is constant. And as in that section, this quantity must be a linear combination of the kinetic energy and the momentum, the coefficients in general depending on position. Also, as in that section, the momentum really contributes nothing to the result, implying merely the possibility of choosing an arbitrary average velocity for the particles.
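The equilibrium condition (4.7) can be checked numerically for the Fermi-Dirac case: with occupation numbers of the Fermi-Dirac form, the forward and backward collision factors agree whenever the kinetic energy is conserved in the collision. A minimal sketch (the parameter a and the cell energies are arbitrary illustrative values; the energies are chosen so that eps_i + eps_j = eps_k + eps_l):

```python
import math

def n_fd(eps, a, kT=1.0):
    """Fermi-Dirac mean occupation, N = 1/(exp(a + eps/kT) + 1)."""
    return 1.0 / (math.exp(a + eps / kT) + 1.0)

# Energy is conserved in the collision: 0.3 + 1.1 = 0.9 + 0.5.
eps_i, eps_j = 0.3, 1.1
eps_k, eps_l = 0.9, 0.5
a = -0.2

Ni, Nj, Nk, Nl = (n_fd(e, a) for e in (eps_i, eps_j, eps_k, eps_l))
forward  = Ni * Nj * (1 - Nk) * (1 - Nl)   # rate factor for i,j -> k,l
backward = Nk * Nl * (1 - Ni) * (1 - Nj)   # rate factor for k,l -> i,j
```

The equality holds because ln [N̄/(1 − N̄)] = −(a + ε/kT) is linear in the energy, so its sum over the colliding pair is unchanged by any energy-conserving collision.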
Neglecting this, we then have

$$\ln \frac{\bar N_i}{1 \pm \bar N_i} = -(a + b\,\epsilon_i^{\rm kin}), \quad (4.9)$$

where a and b are constants, and ε_i^kin is the kinetic energy of a molecule in the ith cell. That is, we have

$$\bar N_i = \frac{1}{e^{a + b\epsilon_i^{\rm kin}} \mp 1}. \quad (4.10)$$

As we see by comparison with the formulas (3.5) and (6.4) of Chap. V, the quantity b is to be identified with 1/kT. Thus we have

$$\bar N_i = \frac{1}{e^{a}\,e^{\epsilon_i^{\rm kin}/kT} \mp 1}. \quad (4.11)$$

In formula (4.11), as in (2.15), there are certain quantities a and T which are constant as far as the momenta are concerned, but which might vary from point to point of space. We can investigate their variation just as we did for the Boltzmann statistics in Sec. 3. The formula (3.3) for the change of the distribution function with time on account of the action of external forces holds for the Einstein-Bose and Fermi-Dirac statistics just as for the Boltzmann statistics, and leads to a formula very similar to Eq. (3.6) which must be satisfied for equilibrium. The only difference comes on account of the different form in which we have expressed the constants in Eq. (4.11). Demanding as before that the relation like (3.6) must hold independent of momenta, we find that the temperature must be independent of position, and that the constant a of Eq. (4.11) must be given by

$$a = \frac{\phi - \epsilon_0}{kT}, \quad (4.12)$$

where ε_0 is a constant and φ is the potential energy of a molecule. Thus, finally, we have

$$\bar N_i = \frac{1}{e^{(\epsilon_i - \epsilon_0)/kT} \mp 1}, \quad (4.13)$$

where ε_i is the total energy, kinetic and potential, of a molecule in the ith cell, in agreement with Eqs. (3.5) and (6.4) of Chap. V.

CHAPTER VII

FLUCTUATIONS

A statistical assembly contains many replicas of the same system, agreeing in large-scale properties but varying in small-scale properties. Sometimes these variations, or fluctuations, are important. Thus, two repetitions of the same experiment may disclose different densities of a gas at a given point, though the average density through a large volume may be the same in each case.
Such fluctuations of density can be of experimental importance in such problems as the scattering of light, which is produced by irregularities of density. Again, in the emission of electrons from a hot filament, there are fluctuations of current, which are observable as the shot effect and which are of great practical importance in the design of amplifying circuits. We shall take up some of the simpler sorts of fluctuations in this chapter. We begin by considering the fluctuations of energy in a canonical assembly. We recall, from the arguments of Chap. III, Sec. 5, that an assembly of systems in equilibrium with a temperature bath must be assumed to have a variety of energies, since they can interchange energy with the bath. We can now show, however, that actually the great majority of the systems have an energy extremely close to a certain mean value and that deviations from this mean are extremely small in comparison with the total energy. This can be shown by a perfectly straightforward application of the distribution function for the canonical assembly, in the case where our system is a sample of perfect gas obeying the Boltzmann statistics, and we start with that example.

1. Energy Fluctuations in the Canonical Assembly. Let E be the energy of a particular system in the canonical assembly, U being the average energy over the assembly, or the internal energy. We are now interested in finding how much the energies E of the individual systems fluctuate from their average value. The easiest way to find this is to compute the mean square deviation of the energy from its mean, or the average of (E − U)². This can be found by elementary methods from the Maxwell-Boltzmann distribution law. Referring to Eq. (1.1) of Chap. IV, we can write the energy as

$$E = \sum_{i=1}^{N} \epsilon^{(i)}, \quad (1.1)$$

where ε^(i) is the energy of the ith molecule, and

$$U = \sum_{i=1}^{N} \bar\epsilon^{(i)}, \quad (1.2)$$

where ε̄^(i) is the average energy of the ith molecule over the assembly.
Thus we have

$$E - U = \sum_{i=1}^{N} (\epsilon^{(i)} - \bar\epsilon^{(i)}), \quad (1.3)$$

and

$$(E - U)^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} (\epsilon^{(i)} - \bar\epsilon^{(i)})(\epsilon^{(j)} - \bar\epsilon^{(j)}). \quad (1.4)$$

We must now perform the averaging in Eq. (1.4). We note that there are two sorts of terms: first, those for which i = j; secondly, those for which i ≠ j. We shall now show that the terms of the second sort average to zero. The reason is the statistical independence of two molecules i and j in the Boltzmann distribution. To find the average of such a term, we multiply by the fraction of all systems of the assembly in which the ith and jth molecules have particular energies, where the indices refer to particular cells in phase space, and sum over all states of the assembly. From Eq. (1.2) of Chap. IV, giving the fraction of all systems of the assembly in which each particular molecule, as the ith, is in a particular state, as the kth, we see that this average factors into the product of the separate averages,

$$\overline{(\epsilon^{(i)} - \bar\epsilon^{(i)})(\epsilon^{(j)} - \bar\epsilon^{(j)})} = (\bar\epsilon^{(i)} - \bar\epsilon^{(i)})(\bar\epsilon^{(j)} - \bar\epsilon^{(j)}) = 0. \quad (1.5)$$

Having eliminated the terms of Eq. (1.4) for which i ≠ j, we have left only

$$\overline{(E - U)^2} = \sum_{i=1}^{N} \overline{(\epsilon^{(i)} - \bar\epsilon^{(i)})^2}. \quad (1.6)$$

That is, the mean square deviation of the energy from its mean equals the sum of the mean square deviations of the energies of the separate molecules from their means. Each molecule on the average is like every other, so that the terms in the summation (1.6) are all equal, and we may write

$$\overline{(E - U)^2} = N\,\overline{(\epsilon - \bar\epsilon)^2}, \quad (1.7)$$

where ε represents the energy of a single molecule, ε̄ its mean value. We can understand Eq. (1.7) better by putting it in a slightly different form. We divide the equation by U², so that it represents the fractional deviation of the energy from the mean, squared, and averaged. In computing this, we use Eq. (1.2), but note that the mean energy of each molecule is equal, so that Eq. (1.2) becomes

$$U = N\bar\epsilon. \quad (1.8)$$

Using Eq. (1.8), we then have

$$\overline{\left(\frac{E-U}{U}\right)^2} = \frac{1}{N}\,\frac{\overline{(\epsilon - \bar\epsilon)^2}}{\bar\epsilon^{\,2}}. \quad (1.9)$$

Equation (1.9) is a very significant result.
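The decomposition (1.6)-(1.9) can be illustrated by a small Monte Carlo calculation: draw Maxwell-distributed momenta for N molecules, and compare the fluctuation of the total energy with 1/N times the single-molecule fractional fluctuation. A sketch in units with m = kT = 1 (the sample sizes are arbitrary choices):

```python
import random
import statistics

random.seed(1)
N = 50           # molecules per system
systems = 20000  # members of the assembly

def molecule_energy():
    # eps = (px^2 + py^2 + pz^2)/2m, each p Gaussian; units m = kT = 1.
    return 0.5 * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))

E = [sum(molecule_energy() for _ in range(N)) for _ in range(systems)]
U = statistics.fmean(E)                   # should be near (3/2) N kT
rel_msd = statistics.pvariance(E) / U**2  # Eq. (1.9); Sec. 1 gives 2/(3N) exactly
```

Because the molecular energies are statistically independent, the variance of the sum is the sum of the variances, which is the content of Eq. (1.6).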
It states that the fractional mean square deviation of energy for N molecules is 1/Nth of that for a single molecule, in Boltzmann statistics. The greater the number of molecules, in other words, the less in proportion are the fluctuations of energy from the mean value. The fractional deviation of energy of a single molecule from its mean is of the order of magnitude of its total energy, as we can see from the wide divergence of energies of different molecules to be observed in the Maxwell distribution of velocities and as we shall prove in the next paragraph. Thus the right side of Eq. (1.9) is of the order of magnitude of 1/N. If N is of the order of magnitude of 10^24, as with a large-scale sample of gas, this means that practically all systems of the assembly have energies departing from the mean by something whose square is of the order of 10^−24 of the total energy, so that the average deviation is of the order of 10^−12 of the total energy. In other words, the fluctuations of energy in a canonical assembly are so small as to be completely negligible, so long as we are dealing with a sample of macroscopic size.

To evaluate the fluctuations of Eq. (1.9) exactly, we must find the fluctuations of energy of a single molecule. We have

$$\overline{(\epsilon - \bar\epsilon)^2} = \overline{\epsilon^2} - 2\bar\epsilon\,\bar\epsilon + \bar\epsilon^{\,2} = \overline{\epsilon^2} - \bar\epsilon^{\,2}, \quad (1.10)$$

a relation which is often useful. We must find the mean square energy of a single molecule. Using the distribution function (2.4) or (2.6) of Chap. IV, we find easily that

$$\overline{\epsilon^2} = \frac{15}{4}(kT)^2. \quad (1.11)$$

Remembering that

$$\bar\epsilon = \frac{3}{2}kT, \quad (1.12)$$

as shown in Eq. (2.7) of Chap. IV, this gives

$$\overline{(\epsilon - \bar\epsilon)^2} = \left(\frac{15}{4} - \frac{9}{4}\right)(kT)^2 = \frac{3}{2}(kT)^2, \quad (1.13)$$

from which Eq. (1.9) becomes

$$\overline{\left(\frac{E-U}{U}\right)^2} = \frac{2}{3N}, \quad (1.14)$$

fixing the numerical value of the relative mean square deviation of the energy.

2. Distribution Functions for Fluctuations. In the preceding section we have given an elementary derivation of the energy fluctuations in Boltzmann statistics.
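The single-molecule moments quoted in Eqs. (1.11)-(1.13) can be checked by direct sampling from the Maxwell distribution, again in units with m = kT = 1 (the sample size is an arbitrary choice):

```python
import random
import statistics

random.seed(3)
samples = 200000

# Single-molecule energies eps = (px^2 + py^2 + pz^2)/2m with Gaussian momenta.
eps = [0.5 * (random.gauss(0, 1)**2 + random.gauss(0, 1)**2 + random.gauss(0, 1)**2)
       for _ in range(samples)]

mean_eps = statistics.fmean(eps)                    # Eq. (1.12): (3/2) kT
mean_eps_sq = statistics.fmean(e * e for e in eps)  # Eq. (1.11): (15/4) (kT)^2
```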
The derivation we have used is not applicable to many interesting fluctuation problems, and in the present section we shall develop a much more general method. Suppose we have a quantity x, in whose fluctuations from the mean we are interested. This may be the energy, as in the last section, or many other possible quantities. Our method will be to set up a distribution function f(x), such that f(x) dx gives the fraction of all systems of the assembly for which x lies in the range dx. Then it is a simple matter of integration to find the mean of any function of x, and in particular to find the mean square deviation, for which the formula is

$$\overline{(x - \bar x)^2} = \int (x - \bar x)^2\,f(x)\,dx. \quad (2.1)$$

We shall assume that the energy levels of the problem are so closely spaced that they can be treated as a continuous distribution. For each of the energy levels, or states of the system, there will be a certain value of our quantity x. We shall now arrange the energy levels according to the values of x and shall set up a density function, which we shall write in the form e^{s(x)/k}, such that

$$e^{s(x)/k}\,dx \quad (2.2)$$

is the number of energy levels for which x is in the range dx. We shall see later why it is convenient to write our density function in the form (2.2). Now we know, from the canonical assembly, as given in formula (5.15) of Chap. III, that the fraction of all systems of the assembly in a given energy level is proportional to e^{−E/kT}. Thus, multiplying by the number (2.2) of levels in dx, we find that the fraction of systems in the range dx is given by

$$f(x)\,dx = \text{const.}\; e^{-[E(x) - Ts(x)]/kT}\,dx, \quad (2.3)$$

where E(x) is the energy corresponding to the levels in dx. We may immediately evaluate the constant, from the condition that the integral of f(x) over all values of x must be unity, and have

$$f(x) = \frac{e^{-[E(x) - Ts(x)]/kT}}{\displaystyle\int e^{-[E(x) - Ts(x)]/kT}\,dx}. \quad (2.4)$$

In the problems we shall be considering, f(x) has a very sharp and narrow maximum at a certain value x_0, rapidly falling practically to zero on both sides of the maximum.
This corresponds to the fact that x fluctuates only slightly from x_0 in the systems of the assembly. The reason for this is simple. The function E(x) − Ts(x) must have a minimum at x_0, in order that f(x) may have a maximum there. The function f(x) will then be reduced to 1/e of its maximum value when E(x) − Ts(x) is greater than its minimum by only kT. But E is the energy of the whole system, of the order of magnitude of NkT, if there are N atoms or molecules in the system, and we shall find likewise that Ts(x) is of this same order of magnitude. Thus an exceedingly small percentage change in x will be enough to increase the function E(x) − Ts(x) by kT or much more.

We can get a very useful expression for f(x) by assuming that E(x) − Ts(x) can be approximated by a parabola through the very narrow range in which f(x) is appreciable. Let us expand E(x) − Ts(x) in Taylor's series about x_0. Remembering that the function has a minimum at x_0, so that its first derivative is zero there, we have

$$E(x) - Ts(x) = E(x_0) - Ts(x_0) + \frac{1}{2}\left[\frac{\partial^2 (E - Ts)}{\partial x^2}\right]_0 (x - x_0)^2 + \cdots \quad (2.5)$$

The second derivatives in Eq. (2.5) are to be computed at x = x_0. Then the numerator of Eq. (2.4) becomes

$$e^{-[E(x_0) - Ts(x_0)]/kT}\,e^{-a(x - x_0)^2}, \quad (2.6)$$

where

$$a = \frac{1}{2kT}\left[\frac{\partial^2 (E - Ts)}{\partial x^2}\right]_0. \quad (2.7)$$

The function e^{−a(x−x_0)²} is called a Gauss error curve, having been used by Gauss to describe distribution functions similar to our f(x) in the theory of errors. It equals unity when x = x_0, and falls off on both sides of this point symmetrically, being reduced to 1/e when x − x_0 = 1/√a. Using formulas (2.6) and (2.7), and the integrals (2.3) of Chap. IV, we can at once compute the denominator of Eq. (2.4) and find

$$f(x) = \sqrt{\frac{a}{\pi}}\;e^{-a(x - x_0)^2}. \quad (2.8)$$

Formula (2.8) is an obviously convenient expression for the distribution function. From it one can find the mean square deviation of x from x_0. Obviously the mean value of x is x_0, from the symmetry of Eq. (2.8). Then, using Eq. (2.1), we have

$$\overline{(x - x_0)^2} = \frac{1}{2a}. \quad (2.9)$$

In Eq.
(2.9), we have a general expression for mean square fluctuations, if only we can express E − Ts as a function of x. This ordinarily can be done conveniently for the internal energy E. We shall now show that, to a very good approximation, s equals the entropy S, so that it also can be expressed in terms of the parameter x, by ordinary thermodynamic means. To do this, we shall compute the partition function Z of our assembly and from it the entropy. To find the partition function, as in Eq. (5.17) of Chap. III, we must sum e^{−E/kT} over all stationary states. Converting this into an integral over x and remembering that Eq. (2.2) gives the number of stationary states in dx, we have

$$Z = \int e^{-[E(x) - Ts(x)]/kT}\,dx = e^{-[E(x_0) - Ts(x_0)]/kT}\sqrt{\frac{\pi}{a}}. \quad (2.11)$$

Then, using Eq. (5.16) of Chap. III, we have

$$A = U - TS = -kT\ln Z = E(x_0) - Ts(x_0) - kT\ln\sqrt{\frac{\pi}{a}}. \quad (2.12)$$

Now if the peak of f(x) is narrow, E(x_0) will be practically equal to U, the mean value of E, which is used in thermodynamic expressions like Eq. (2.12). It and Ts are proportional to the number of molecules in the system, as we have mentioned before. But a, as we see from Eq. (2.7), is of the order of magnitude of the number of molecules in the system, so that the last term in Eq. (2.12) is of the order of the logarithm of the number of molecules, a quantity of enormously smaller magnitude than the number of molecules itself (ln 10^23 = 23 ln 10 = 53). Hence we can perfectly legitimately neglect the last term in Eq. (2.12) entirely. We then have at once

$$s(x_0) = S. \quad (2.13)$$

This expression, relating the entropy to the density of energy levels by use of Eq. (2.2), is a slight generalization of what is ordinarily called Gibbs's third analogy to entropy [his first analogy was the expression −k Σ f_i ln f_i, his second was closely related to Eq. (2.13)]. Using Eq. (2.13), we can then write the highly useful formula

$$\overline{(x - x_0)^2} = \frac{kT}{\left[\dfrac{\partial^2 (E - TS)}{\partial x^2}\right]_0}. \quad (2.14)$$
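The Gauss error curve results (2.8)-(2.9), which underlie formula (2.14), are easy to verify numerically: integrating f(x) = √(a/π) e^{−a(x−x_0)²} on a fine grid gives unit normalization and a mean square deviation of 1/2a. A minimal sketch with arbitrary illustrative values of a and x_0:

```python
import math

a, x0 = 3.0, 1.0

def f(x):
    """Gauss error curve of Eq. (2.8): sqrt(a/pi) * exp(-a (x - x0)^2)."""
    return math.sqrt(a / math.pi) * math.exp(-a * (x - x0) ** 2)

# Simple grid sum over x0 - 6 ... x0 + 6, wide enough that the tails are negligible.
dx = 1e-4
norm = 0.0
msd = 0.0
for k in range(120001):
    x = x0 - 6.0 + k * dx
    w = f(x) * dx
    norm += w                  # should integrate to 1
    msd += (x - x0) ** 2 * w   # should integrate to 1/(2a), Eq. (2.9)
```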
3. Fluctuations of Energy and Density. Using the general formula (2.14), we can find fluctuations in many quantities. Let us first find the fluctuation in the total energy of the system, getting a general result whose special case for the perfect gas in Boltzmann statistics was discussed in Sec. 1. In this case x equals E, so that ∂²E/∂x² = 0. The derivative of S with respect to E is to be taken at constant volume, for all the states represented in the canonical assembly are computed for the same volume of the system. Then we have, using the thermodynamic formulas of Chap. II, Sec. 5,

$$\left(\frac{\partial S}{\partial U}\right)_V = \frac{1}{T}, \qquad \left(\frac{\partial^2 S}{\partial U^2}\right)_V = \left[\frac{\partial (1/T)}{\partial U}\right]_V = -\frac{1}{T^2 C_V}. \quad (3.1)$$

Substituting in Eq. (2.14), we find for the fluctuation of energy

$$\overline{(E - U)^2} = kT^2 C_V. \quad (3.2)$$

We can immediately see that this leads to the value we have already found for the perfect gas in the Boltzmann statistics. For the perfect gas, we have C_V = (3/2)Nk, so that

$$\overline{(E - U)^2} = \frac{3}{2}N(kT)^2, \quad (3.3)$$

agreeing with the value found from Eqs. (1.6) and (1.13). The formula (3.2), however, is quite general, holding for any type of system. Since C_V is of the same order of magnitude for any system containing N atoms that it is for a perfect gas of the same number of atoms, we see that the energy fluctuations of any type of system, in the canonical assembly, are negligibly small. The heat capacity C_V is proportional to the number of atoms in the system, so that the mean square deviation of energy from the mean is proportional to the number of atoms, and the fractional mean square deviation of energy for N atoms is proportional to 1/N, as in Eq. (1.9).

As a second illustration of the use of the general formula (2.14), we take a perfect gas and consider the fluctuations of the number of molecules in a group of G cells in the molecular phase space. Two important physical problems are special cases of this. In the first place, the G cells may include all those, irrespective of momentum, which lie in a certain region of coordinate space. Then the fluctuation is that of the number of molecules in a certain volume, leading immediately to the fluctuation in density. Or in the second place, we may be considering the number of molecules striking a certain surface per second and the fluctuation of this number. In this case, the G cells include all those whose molecules will strike the surface in a second, as for example the cells contained in prisms similar to those shown in Fig. IV-2. Such a fluctuation is important in the theory of the shot effect, or the fluctuation of the number of electrons emitted thermionically from an element of surface of a heated conductor, per second; we assume that the number emitted can be computed from the number striking the surface from inside the metal.

To take up this problem mathematically, we express the energy and the entropy in terms of the N̄_i's, the average numbers of molecules in the various cells. The energy is

$$U = \sum_i \bar N_i \epsilon_i, \quad (3.4)$$

where ε_i is the energy of a molecule in the ith cell of the molecular phase space. For the entropy, combining Eqs. (2.7) and (2.11) of Chap. V, we have

$$S = -k \sum_i [\bar N_i \ln \bar N_i \mp (1 \pm \bar N_i) \ln (1 \pm \bar N_i)], \quad (3.5)$$

where the upper sign refers to Einstein-Bose statistics, the lower to Fermi-Dirac, and where we shall handle the Boltzmann statistics as the limiting case of low density. In this case, the quantity x is the number of molecules in the particular cells we have chosen, which we shall call N_0, so that

$$\bar N_i = \frac{x}{G} \approx \frac{N_0}{G}, \quad (3.6)$$

where N̄_i is the average number of molecules in one of the cells of our group of G, which we assume are so close together that the numbers N̄_i and energies ε_i are practically the same for all G cells. In terms of this notation, we have

$$U = N_0\epsilon_0 + \text{terms independent of } N_0, \quad (3.7)$$

and

$$S = -kN_0 \ln \frac{N_0}{G} \pm k(G \pm N_0)\ln\left(1 \pm \frac{N_0}{G}\right) + \text{terms independent of } N_0. \quad (3.8)$$

Then we have

$$\frac{\partial U}{\partial N_0} = \epsilon_0, \qquad \frac{\partial^2 U}{\partial N_0^2} = 0, \quad (3.9)$$

and

$$\frac{\partial S}{\partial N_0} = -k\ln\frac{N_0}{G} + k\ln\left(1 \pm \frac{N_0}{G}\right), \qquad \frac{\partial^2 S}{\partial N_0^2} = -k\left(\frac{1}{N_0} \mp \frac{1}{G \pm N_0}\right). \quad (3.10)$$

Substituting in Eq. (2.14), we have

$$\overline{(N_0 - \bar N_0)^2} = \bar N_0\left(1 \pm \frac{\bar N_0}{G}\right), \qquad \frac{\overline{(N_0 - \bar N_0)^2}}{\bar N_0^2} = \frac{1}{\bar N_0}\left(1 \pm \frac{\bar N_0}{G}\right), \quad (3.11)$$

in which we remember that the upper sign refers to the Einstein-Bose statistics, the lower to the Fermi-Dirac, and where we find the Boltzmann statistics in the limit where N̄_0/G approaches zero, so that the right side becomes merely 1/N̄_0. Thus, we see that in the Boltzmann statistics the absolute mean square fluctuation of the number of molecules in a volume of phase space equals the mean number in the volume, and the relative fluctuation is the reciprocal of the mean number, becoming very small if the number of molecules is large. These are important results, often used in many applications. We also see that the fluctuations in Einstein-Bose statistics are greater than in Boltzmann statistics, while in Fermi-Dirac statistics they are less, becoming zero in the limit N̄_0/G → 1, since in that limit all cells in the group of G are filled and no fluctuation is possible.

As a third and final illustration of Eq. (2.14), we consider the fluctuation of the density of an arbitrary substance, really a generalization of the result we have just obtained. Instead of the fluctuation of the density itself, we find that of the volume occupied by a certain group of molecules; the relative fluctuations will be the same in either case, since a given proportional increase in volume will give an equal proportional decrease in density. In this case, then, the quantity x is the volume V of a small mass of material selected from the whole mass. The derivatives in Eq. (2.14) are those of the whole internal energy and entropy of the system, as the volume of the small mass is changed. Since the part of the system exterior to the small mass is hardly changed by a change of its volume, we can assume that only the internal energy and entropy of the small mass itself are concerned in Eq. (2.14). Furthermore, we are interested merely in the fluctuation in density, neglecting any corresponding fluctuation in temperature, so that the derivatives of Eq. (2.14) are to be computed at constant temperature. Then we can rewrite Eq. (2.14) as

$$\overline{(V - V_0)^2} = \frac{kT}{\left(\dfrac{\partial^2 A}{\partial V^2}\right)_T}, \quad (3.12)$$

where A = U − TS. Using the thermodynamic formulas of Chap. II, we have

$$\left(\frac{\partial A}{\partial V}\right)_T = -P,$$

so that

$$\left(\frac{\partial^2 A}{\partial V^2}\right)_T = -\left(\frac{\partial P}{\partial V}\right)_T,$$

and

$$\frac{\overline{(V - V_0)^2}}{V_0^2} = \frac{kT}{V_0}\left[-\frac{1}{V_0}\left(\frac{\partial V}{\partial P}\right)_T\right]. \quad (3.13)$$

The quantity in brackets is the isothermal compressibility, which is independent of V_0. We see, then, that the relative mean square fluctuation of the volume is inversely as the volume itself, becoming small for large volumes. This is in accordance with the behavior of the other fluctuations we have found.

Let us check Eq. (3.13) by application to the perfect gas in the Boltzmann statistics. Using PV = NkT, we have

$$-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T = \frac{1}{P}.$$

Thus

$$\frac{\overline{(V - V_0)^2}}{V_0^2} = \frac{kT}{PV_0} = \frac{1}{N_0}, \quad (3.14)$$

where N_0 is the mean number of molecules in V_0. This value checks Eq. (3.11), for the case of the Boltzmann statistics, giving the same value for the relative fluctuation of volume which we have already found for the fluctuation of the number of molecules in a given volume, as we have seen should be the case. For substances other than perfect gases, the compressibility is ordinarily less than for a perfect gas, so that Eq. (3.13) predicts smaller relative fluctuations of density; a perfectly incompressible solid would have no density fluctuations. On the other hand, in some cases the compressibility can be greater than for a perfect gas. An example is an imperfect gas near the critical point, where the compressibility approaches infinity, a finite change in volume being associated with no change of pressure; here the density fluctuations are abnormally great, being visible in the phenomenon of opalescence, the irregular scattering of light, giving the material a milky appearance. Below the critical point, in the region where liquid and gas can coexist, it is well known that the material maintains the same pressure, the vapor pressure, through the whole range of volume from the volume of the liquid to that of the gas. Thus here again the compressibility is infinite. Formula (3.13) cannot be strictly applied in this case, but the fluctuations of density which it would indicate are easily understood physically. A given volume in this case can happen to contain vapor, or liquid drops, or both, and the fluctuation of density is such that the density can be anywhere between that of the liquid and the vapor. Such problems are hardly suitable for a fluctuation theory, however; we shall be able to handle them better when we take up equilibrium between phases of the same substance.

PART II

GASES, LIQUIDS, AND SOLIDS

CHAPTER VIII

THERMODYNAMIC AND STATISTICAL TREATMENT OF THE PERFECT GAS AND MIXTURES OF GASES

In Chap. IV, we learned some of the simpler properties of perfect gases obeying the Boltzmann statistics, using simple kinetic methods. We can go a good deal further, however, and in the present chapter we apply thermodynamics and statistical mechanics to the problem, seeing how far each can carry us. The results may seem rather formal and uninteresting to the reader. But we are laying the groundwork for a great many applications later on, and it will be found very much worth while to understand the fundamentals thoroughly before we begin to apply them to such problems as the specific heats of gases, the nature of imperfect gases, vapor pressure, chemical equilibrium, thermionic emission, electronic phenomena, and many other subjects depending directly on the properties of gases. For generality, we shall include a treatment of mixtures of perfect gases, a subject needed particularly in discussing chemical equilibrium.
We begin by seeing how much information thermodynamics alone, plus the definition of a perfect gas, will give us, and later introduce a model of the gas and statistical methods, obtaining by statistical mechanics some of the results found by kinetic theory in Chap. IV. 1. Thermodynamics of a Perfect Gas. By definition, a perfect gas in the Boltzmann statistics is one whose equation of state is PV = nRT, (l.l) which has already been discussed in Sec. 3 of Chap. IV. Furthermore, from the perfect gas law, using Eq. (6.2) of Chap. II, we can prove that a perfect gas obeys Joule's law that the internal energy is independent of the volume at constant temperature. For we have (6U\ _ (dP\ WA Wr ( ' This is a reversal of the argument of Chap. II, Sec. 6, where we used Joule's law as an experimental fact to prove that the gas scale of tempera- ture was identical with the thermodynamic temperature. Here instead we assume the temperature T in Eq. (1.1) to be the thermodynamic temperature, and then Joule's law follows as a thermodynamic conse- quence of the equation of state. 115 116 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. VIII No assumption is made thermodynamically about the specific heat of a perfect gas. Some results concerning it follow from thermodynamics and the equation of state, however. Wo have already seen that CV - C v (1.3) where Cp and Cv are the heat capacities of n moles of gas, at constant pressure and volume respectively. Furthermore, we can find thermo- dynamically how C P changes with pressure*, or C\ with volume, at con- stant temperature. We have jLjf*a\ dP\dT} P ^OF), dPOT dTdP (1.4) But from the Table of Thermodynamic Relations in Chap. II we have (dH^ Thus, we find 0. dT (1.5) (1.6) By an exactly analogous proof, substituting (7 for H, V for P, wo can prove f *a\ = T(^I . (i.7) Substituting the perfect gas law in Eqs. (1.6) and (1.7), we find at once for a perfect gas. 
That is, both specific heats are independent of pressure, or volume, at constant temperature, meaning that they are functions of the temperature only. Thermodynamics can state nothing regarding the variation of specific heat with temperature. It actually happens, however, that the heat capacities of all gases approach the values that we have found theoret- ically for monatomic gases in Eq. (3.18), Chap. IV, namely, f nR, C P = f nR t (1.9) SBC. 1] THE PERFECT GAS 117 at low enough temperatures. This part of the heat capacity, as we know from Chap. IV, arises from the translational motion of the molecules. The remaining heat capacity arises from rotations and vibrations of the molecules, electronic excitation, and in general from internal motions, and it falls to zero as the tomperaturo approaches the absolute zero, on account of applications of the quantum theory which wo shall make in later chapters. Sometimes it is useful to indicate this internal heat capacity per mole as C t , so that we write C v = %nR + nCi, CV - $nR, + nC,, (1. 10) where experimentally C\ goes to zero at the absolute zero, liquation (1.10) may be taken as a definition of C,. Next we take up the internal energy, entropy, Holmholtz free energy, and Gibbs free energy of a perfect gas From Joule', 1 - law, the internal energy is a function of the temperature alone, independent of volume. We let (/ n be the internal energy per mole* at the absolute zero, a quantity which cannot be determined uniquely since there is always an arbitrary additive constant in the energy, as we have pointed out in ('hap. I, Sec. 1. Then the change of internal energy from the absolute zero to temperature T is determined from the specific heat, and we have r = w r + ( T c v dT a/0 (1.11) We find the entropy first as a function of temperature and pressure, using the relations, following at once from the Table of Thermodynamic Relations of Chap. II, and the equation of state, (es\ = Cj. 
(*&\ = Jsv\ _ _UR \dTj f T' \dPj T \dTj P P (IAZ) Substituting for C P from Eq. (1.10) and integrating, we have r fT 7m S = nR In T - ntt In P + n\ C^ + const. (1.13) ^ Jo ' The constant of integration in Kq. (1.13) cannot bo determined by thermo- dynamics. It is of no practical importance when we are considering tho gas by itself, for in all cases we have to differentiate the entropy, or take differences, in our applications. But when we come to the equilibrium of different phases, as in the problem of vapor pressure, and to chemical equilibrium, we shall find that the constant in the entropy is of great importance. Thus it is worth while devoting a little attention to it here. There is one piece of information which we can find about it from thormo- 118 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. VIII dynamics: we may assume that it is proportional to the number of moles of gas. To see this, consider first two separate masses of gas, one of n\ moles, the other of H* moles, both at the same pressure and temperature, with a partition between their containers. The total entropy of the two masses is certainly the sum of the separate entropies of the two. Now remove the partition between them. This is a reversible process, involv- ing no heat flow, and hence no change of entropy, so long as the gases on the two sides of the partition are made of identical molecules; if there had been different gases on the two sides, of course diffusion would have occurred when the partition was removed, resulting in irreversibility. Thus the entropy of the combined mass of (n\ + n%) moles is the sum of the separate entropies of the masses of HI and n 2 moles. But this will be true only if the entropy is proportional to the number of moles of gas. We know that this is true with every term of Eq. (1.13) except the con- stant, and we see therefore that it must be true of the constant as well. 
Thus the constant is $n$ times a quantity independent of $n$, $P$, $T$, and hence depending only on the type of gas. This constant must have the dimensions of $R$, so that it must be $nR$ times a numerical factor. For reasons which we shall understand shortly, it is convenient to write it in the form $n(i + \tfrac{5}{2})R$, where $i$ is called the chemical constant of the gas. Thus we have

$$S = \tfrac{5}{2}nR\ln T - nR\ln P + n\int_0^T C_i\,\frac{dT}{T} + nR\left(i + \tfrac{5}{2}\right). \tag{1.14}$$

It is often useful as well to have the entropy as a function of temperature and volume. We can find this by integrating the equations

$$\left(\frac{\partial S}{\partial T}\right)_V = \frac{C_V}{T}, \qquad \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V = \frac{nR}{V}, \tag{1.15}$$

or by substituting for $P$ in terms of $T$ and $V$ from the perfect gas law in Eq. (1.14). The latter has the advantage of showing the connection between the arbitrary constants in the two equations for entropy, in terms of $T$ and $P$, and in terms of $T$ and $V$. Using this method, we have at once

$$S = \tfrac{3}{2}nR\ln T + nR\ln V + n\int_0^T C_i\,\frac{dT}{T} + nR\left(i + \tfrac{5}{2} - \ln R - \ln n\right). \tag{1.16}$$

From Eq. (1.16) we note that the additive constant in the entropy, in the form involving the temperature and volume, has a term $-nR\ln n$, which is not proportional to the number of moles. This is as we should expect, however, as we can see from rewriting Eq. (1.16) in the form

$$S = \tfrac{3}{2}nR\ln T - nR\ln\frac{n}{V} + n\int_0^T C_i\,\frac{dT}{T} + nR\left(i + \tfrac{5}{2} - \ln R\right). \tag{1.17}$$

In the form (1.17), each term is proportional to $n$, the only other dependence on $n$ being through the ratio $n/V$, the number of moles per unit volume, which is proportional to the density. We thus see from Eq. (1.17) that if two masses of gas of the same temperature and the same density are put in contact, the total entropy is independent of whether they have a partition between them or not. This statement is entirely analogous to the previous one about two masses at the same temperature and pressure. Next we find the Helmholtz free energy $A = U - TS$ as a function of temperature and volume. We can find this directly from Eqs.
(1.11) and (1.16) or (1.17). We have at once

$$A = n\left[U_0 - \tfrac{3}{2}RT\ln T - RT\ln\frac{V}{n} + \int_0^T C_i\,dT - T\int_0^T C_i\,\frac{dT}{T} - RT\left(i + 1 - \ln R\right)\right]. \tag{1.18}$$

The two terms depending on $C_i$ can be written in two other forms by integration by parts. These are

$$\int_0^T C_i\,dT - T\int_0^T C_i\,\frac{dT}{T} = -T\int_0^T \frac{dT'}{T'^2}\int_0^{T'} C_i\,dT'' \tag{1.19}$$

$$\int_0^T C_i\,dT - T\int_0^T C_i\,\frac{dT}{T} = -\int_0^T dT'\int_0^{T'} C_i\,\frac{dT''}{T''}. \tag{1.20}$$

To prove Eq. (1.19), we apply the formula $\int u\,dv = uv - \int v\,du$ to the right side, setting $u = \int_0^{T'} C_i\,dT''$, $du = C_i\,dT'$, $v = -1/T'$, $dv = dT'/T'^2$, and Eq. (1.19) follows at once. To prove Eq. (1.20), we note that both sides vanish at $T = 0$ and that both have the same derivative $-\int_0^T C_i\,dT/T$ with respect to $T$, from which Eq. (1.20) follows. Thus we have the alternative formulas

$$A = n\left[U_0 - \tfrac{3}{2}RT\ln T - RT\ln\frac{V}{n} - T\int_0^T\frac{dT'}{T'^2}\int_0^{T'}C_i\,dT'' - RT(i + 1 - \ln R)\right] \tag{1.21}$$

$$A = n\left[U_0 - \tfrac{3}{2}RT\ln T - RT\ln\frac{V}{n} - \int_0^T dT'\int_0^{T'}C_i\,\frac{dT''}{T''} - RT(i + 1 - \ln R)\right] \tag{1.22}$$

Finally, we wish the Gibbs free energy $G = U + PV - TS$ as a function of pressure and temperature. Using Eqs. (1.11), (1.14), (1.19), and (1.20), this has the alternative forms

$$G = n\left[U_0 - \tfrac{5}{2}RT\ln T + RT\ln P + \int_0^T C_i\,dT - T\int_0^T C_i\,\frac{dT}{T} - RTi\right] \tag{1.23}$$

$$G = n\left[U_0 - \tfrac{5}{2}RT\ln T + RT\ln P - T\int_0^T\frac{dT'}{T'^2}\int_0^{T'}C_i\,dT'' - RTi\right] \tag{1.24}$$

$$G = n\left[U_0 - \tfrac{5}{2}RT\ln T + RT\ln P - \int_0^T dT'\int_0^{T'}C_i\,\frac{dT''}{T''} - RTi\right] \tag{1.25}$$

We see that the term proportional to $T$ in Eqs. (1.23), (1.24), and (1.25), $-RTi$, has a particularly simple form. It is for this reason that the additive constant in the entropy, $nR(i + \tfrac{5}{2})$ in Eq. (1.14), is chosen in the particular form it is. For practical purposes, the appearance of this quantity in the Gibbs free energy is more important than its appearance in the entropy.

2. Thermodynamics of a Mixture of Perfect Gases. Suppose we have a mixture of $n_1$ moles of a gas 1, $n_2$ moles of gas 2, and so on, all in a container of volume $V$ at temperature $T$. First we define the fractional concentration $c_i$ of the $i$th substance as the ratio of the number of moles of this substance to the total number of moles of all substances present:
$$c_i = \frac{n_i}{n_1 + n_2 + \cdots}. \tag{2.1}$$

We also define the partial pressure $P_i$ of the $i$th substance as the pressure which it would exert if it alone occupied the volume $V$. That is, since all gases are assumed perfect,

$$P_i = n_i\frac{RT}{V}. \tag{2.2}$$

Then the equation of state of the mixture of gases proves experimentally to be just what we should calculate by the perfect gas law, using the total number of moles $(n_1 + n_2 + \cdots)$; that is,

$$P = (n_1 + n_2 + \cdots)\frac{RT}{V}. \tag{2.3}$$

Equation (2.3) may be considered as an experimental fact; it follows, however, at once from our kinetic derivation of the equation of state in Chap. IV, for that goes through without essential change if we have a mixture instead of a single gas. Then from Eqs. (2.1), (2.2), and (2.3), we have

$$c_i = \frac{P_i}{P}. \tag{2.4}$$

From Eq. (2.4), in other words, the fractional concentration of one gas equals the ratio of its partial pressure to the total pressure. Plainly, as corollaries of Eq. (2.4), we have

$$P_1 + P_2 + \cdots = P, \tag{2.5}$$

and

$$c_1 + c_2 + \cdots = 1. \tag{2.6}$$

Equation (2.5) expresses the fact that the sum of the partial pressures equals the total pressure.

We next consider the entropy, Helmholtz free energy, and Gibbs free energy of the mixture of gases. We start with the expression (1.14) for the entropy of a single gas. In a mixture of gases, it is now reasonable to suppose that the total entropy is the sum of the partial entropies of each gas, each one being given by Eq. (1.14) in terms of the partial pressure of the gas. If we have a mixture of $n_1$ moles of the first gas, $n_2$ of the second, and so on, the total entropy is then

$$S = \sum_j n_j\left[\tfrac{5}{2}R\ln T + \int_0^T C_j\,\frac{dT}{T} - R\ln P_j + \left(i_j + \tfrac{5}{2}\right)R\right], \tag{2.7}$$

where $C_j$ is the internal heat capacity per mole of the $j$th gas, $i_j$ its chemical constant. We can express Eq. (2.7) in terms of the total pressure, using $P_j = c_jP$. Then we have

$$S = \sum_j n_j\left[\tfrac{5}{2}R\ln T + \int_0^T C_j\,\frac{dT}{T} - R\ln P + \left(i_j + \tfrac{5}{2}\right)R\right] - R\sum_j n_j\ln c_j. \tag{2.8}$$

In Eq.
(2.8), the first summation is the sum of the entropies of the various gases, if each one were at the same pressure $P$. The second summation is an additional term, sometimes called the entropy of mixing. Since the $c_j$'s are necessarily fractions less than unity, the logarithms are negative, and the entropy of mixing is positive. It is such an important quantity that we shall examine it in more detail. Suppose the volume $V$ were divided into compartments, one of size $c_1V$, another $c_2V$, etc., and all the gas of the first sort were in the first compartment, that of the second sort in the second, and so on. Then each gas would have the pressure $P$, and the entropy of the whole system, being surely the sum of the entropies of the separate samples of gas, would be given by the first summation of Eq. (2.8). Now imagine the partitions between the compartments to be removed, so that the gases can irreversibly diffuse into each other. This diffusion, being an irreversible process, must result in an increase of entropy, and the second term of Eq. (2.8), the entropy of mixing, represents just this increase. To verify its correctness, we must find an alternative reversible path for getting from the initial state to the final one, and find $\int dQ/T$ for this reversible path. That will give the change of entropy, whether the actual path is reversible or not.

[Fig. VIII-1. Reversible mixing of two gases. Panels (a), (b), (c).]

We shall set up this reversible process by means of semipermeable membranes: membranes allowing molecules of one gas to pass through them, but impervious to the other gas. Such membranes actually exist in a few cases, as for instance heated palladium, which allows hydrogen to pass through it freely but holds back all other gases. There is no error involved in imagining such membranes in other cases as well. We simplify by considering only two gases.
Originally let the partition separating the compartments $c_1V$ and $c_2V$ be two semipermeable membranes in contact, one permeable to molecules of type 1 but not of type 2 [membrane (1)], the other permeable to type 2 but not type 1 [membrane (2)]. The two together will not allow any molecules to pass. Each of the membranes will be subjected to a one-sided pressure from the molecules that cannot pass through it. Thus, in Fig. VIII-1(a), membrane (1) is pushed to the left by gas 2, membrane (2) to the right by gas 1. Each of the membranes then really forms a piston, and if rods are attached to them as in Fig. VIII-1, they are capable of transmitting force and doing work outside the cylinder. Now let membrane (1) move slowly and reversibly to the left, as in (b), doing work on some outside device. If the expansion is isothermal, we know that the internal energy of the perfect gases is independent of volume, so that heat must flow in just equal to the work done. We can then find the heat flowing in during the process by integrating $P\,dV$ for membrane (1). The pressure exerted on it, when the volume to the right of it is $V'$, is $n_2RT/V'$, since only the molecules 2 exert a pressure on it. Thus the work done when this volume increases from $c_2V$ to $V$ is

$$\int_{c_2V}^{V} n_2\frac{RT}{V'}\,dV' = n_2RT\ln\left(\frac{V}{c_2V}\right) = -n_2RT\ln c_2. \tag{2.9}$$

This equals the heat flowing in. Since the corresponding increase of entropy is the heat divided by the temperature, it is

$$-Rn_2\ln c_2. \tag{2.10}$$

Now in a similar way we draw the membrane (2) to the right, extracting external work reversibly and letting heat flow in to keep the temperature constant. By similar arguments, the increase of entropy in this process is $-Rn_1\ln c_1$. And the total change of entropy in this reversible mixing is

$$\Delta S = -R(n_1\ln c_1 + n_2\ln c_2), \tag{2.11}$$

just the value given for the entropy of mixing in Eq. (2.8). It is interesting to see how the entropy of mixing of two gases depends on the concentrations.
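The two membrane strokes of Eqs. (2.9)–(2.11) can be checked by direct numerical quadrature. The sketch below is an illustration, assuming $n_1 = 1$ and $n_2 = 2$ moles at 300 K with $R$ in SI units (all values chosen for the example, not taken from the text); it integrates $P\,dV$ for each membrane and compares the resulting $\Delta S$ with the closed form of Eq. (2.11):

```python
import math

R, T = 8.314, 300.0  # gas constant J/(mol K), temperature K (illustrative)

def stroke_work(n_other, V_start, V_end, steps=200000):
    # Work done by one membrane against the one-sided pressure
    # n_other * R * T / V', integrated by the midpoint rule.
    dV = (V_end - V_start) / steps
    return sum(n_other * R * T / (V_start + (i + 0.5) * dV) * dV
               for i in range(steps))

n1, n2, V = 1.0, 2.0, 1.0
c1, c2 = n1 / (n1 + n2), n2 / (n1 + n2)

# Membrane (1) sweeps the volume seen by gas 2 from c2*V out to V;
# membrane (2) sweeps the volume seen by gas 1 from c1*V out to V.
heat_in = stroke_work(n2, c2 * V, V) + stroke_work(n1, c1 * V, V)
dS_numeric = heat_in / T                       # reversible path: dS = dQ/T
dS_formula = -R * (n1 * math.log(c1) + n2 * math.log(c2))
print(abs(dS_numeric - dS_formula) < 1e-3)     # True
```

Because the path is reversible and isothermal, the heat absorbed equals the work extracted, and dividing by $T$ reproduces the mixing entropy to within the quadrature error.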
Let $n = n_1 + n_2$ be the total number of moles of gas. Then, remembering that $c_1 + c_2 = 1$ [Eq. (2.6)], we have

$$\Delta S = -nR[c_1\ln c_1 + (1 - c_1)\ln(1 - c_1)]. \tag{2.12}$$

[Fig. VIII-2. Entropy of mixing of two gases, plotted as a function of $c_1$.]

In Fig. VIII-2 we plot $\Delta S$ from Eq. (2.12), as a function of $c_1$, which can range from zero to unity. We see that the entropy of mixing has its maximum value when $c_1 = \tfrac{1}{2}$, or with equal numbers of the two types of molecules. At this concentration its value is $-nR(\tfrac{1}{2}\ln\tfrac{1}{2} + \tfrac{1}{2}\ln\tfrac{1}{2}) = nR\ln 2 = 0.6931\,nR$, or about 1.38 cal. per degree for each mole of mixture.

Having found the entropy of a mixture of gases in Eq. (2.8), it is a simple thing to find the Gibbs free energy, from the relation $G = U + PV - TS$. We have

$$U = \sum_j n_j\left[U_j + \tfrac{3}{2}RT + \int_0^T C_j\,dT\right], \tag{2.13}$$

$$PV = \sum_j n_jRT, \tag{2.14}$$

so that

$$G = \sum_j n_jG_j + RT\sum_j n_j\ln c_j, \tag{2.15}$$

where

$$G_j = U_j - \tfrac{5}{2}RT\ln T + \int_0^T C_j\,dT - T\int_0^T C_j\,\frac{dT}{T} + RT\ln P - i_jRT = U_j - \tfrac{5}{2}RT\ln T - T\int_0^T\frac{dT'}{T'^2}\int_0^{T'}C_j\,dT'' + RT\ln P - i_jRT. \tag{2.16}$$

In Eqs. (2.15) and (2.16), $U_j$ represents the arbitrary constant in the energy for the $j$th type of molecule, corresponding to $U_0$ of Eq. (1.11), and the last step in Eq. (2.16) is made by the same integration by parts used in Eqs. (1.19) and (1.20). The quantity $G_j$ represents the Gibbs free energy per mole of the $j$th gas at temperature $T$ and pressure $P$. Thus Eq. (2.15) indicates that the Gibbs free energy of the mixture is the sum of the free energies of the constituents, at the final pressure and temperature, plus a mixing term which is always negative.

3. Statistical Mechanics of a Perfect Gas in Boltzmann Statistics. Since the internal energy of a perfect gas is independent of volume, by Joule's law, it is obvious that there can be no forces acting between the molecules, for if there were, they would result in an internal energy depending on the volume.
Thus the molecular model of a perfect gas, which we make the basis of our statistical treatment, is a collection of $N$ molecules, each of mass $m$, exerting no forces on each other. If the gas is monatomic, each molecule requires only three coordinates, the rectangular coordinates of its center of gravity, and the three conjugate momenta, to describe it completely, so that the phase space contains $6N$ dimensions. When the gas is polyatomic, additional coordinates are necessary to describe the orientation and relative distances of separation of the atoms in the molecules. We assume there are $s$ such coordinates and $s$ momenta, so that in all there are $(3 + s)$ coordinates and $(3 + s)$ momenta for each molecule, or $(6 + 2s)N$ dimensions in the general phase space. We shall call the coordinates of the $j$th molecule $x_j,\,y_j,\,z_j,\,q_{1j},\,\ldots,\,q_{sj}$, and the momenta $p_{xj},\,p_{yj},\,p_{zj},\,p_{1j},\,\ldots,\,p_{sj}$. Here $x_j,\,y_j,\,z_j$ are the coordinates of the center of gravity, and $p_{xj},\,p_{yj},\,p_{zj}$ the components of total momentum, of the molecule.

The first step in applying statistical mechanics to our gas is to compute the partition function $Z$, given by Eq. (5.17) or (5.22) of Chap. III. To do this, we must first know the energy of the gas, $E$, as a function of the coordinates and momenta. Since there are no forces between the molecules, this is a sum of separate terms, one for each molecule. Now it is a general theorem of mechanics that the energy of a structure like a molecule, composed of particles exerting forces on each other but not acted on by an external force field, is the sum of the kinetic energy of the structure as a whole, determined by the velocity of the center of gravity, and an additional term representing the energy of the internal motions. Thus, for the energy of the gas, we have

$$E = \sum_{j=1}^{N}\left[\frac{p_{xj}^2 + p_{yj}^2 + p_{zj}^2}{2m} + \epsilon'_j\right]. \tag{3.1}$$

In Eq. (3.1), $\epsilon'_j$ represents the energy of internal motions of a molecule. In evaluating the partition function, we must take $\exp(-E/kT)$, where $E$ is given in Eq. (3.1), and integrate over all coordinates and momenta.
We observe in the first place that, since $E$ is a sum of terms for each molecule, $\exp(-E/kT)$ will be a product of such terms, and the whole partition function will be a product of $N$ factors, one from each molecule, each giving an identical integral, which we can refer to as the partition function of a single molecule. We next observe that the partition function of a single molecule factors into terms depending on the center of gravity of the molecule and terms depending on the internal motion. Thus we have

$$Z = \left[\frac{1}{h^3}\int e^{-(p_x^2 + p_y^2 + p_z^2)/2mkT}\,dx\,dy\,dz\,dp_x\,dp_y\,dp_z \times Z_i\right]^N. \tag{3.2}$$

The integration over $x$, $y$, $z$ is to be carried over the volume of the container and gives simply a factor $V$. The integrations over $p_x$, $p_y$, $p_z$ are carried from $-\infty$ to $\infty$, and can be found by Eqs. (2.3) of Chap. IV. The integral depending on the internal coordinates and momenta will not be further discussed at present; we shall abbreviate it

$$Z_i = \frac{1}{h^s}\int\cdots\int e^{-\epsilon'/kT}\,dq_1\cdots dq_s\,dp_1\cdots dp_s = \sum e^{-\epsilon'/kT}. \tag{3.3}$$

In Eq. (3.3), $Z_i$ is the internal partition function of a single molecule. The second way of writing it, in terms of a summation, by analogy with Eq. (5.17) of Chap. III, refers to a summation over all cells in a $2s$-dimensional phase space in which $q_1 \ldots p_s$ are the dimensions. We note, for future reference, that the quantity $Z_i$ depends on the temperature, but not on the volume of the gas. Using the methods just described, Eq. (3.2) becomes

$$Z = \left[\frac{V}{h^3}(2\pi mkT)^{3/2}Z_i\right]^N. \tag{3.4}$$

There is one thing, however, which we have neglected in our derivation, and that is the fact that the gas really is governed by Fermi-Dirac or Einstein-Bose statistics, in the limit in which they lead to the Boltzmann statistics. As we have seen in Chap. V, Sec. 1, on account of the identity of the molecules, there are really $N!$ different cells of the general phase space corresponding to one state or complexion of the system. The reason is that there are $N!$
different permutations of the molecules, each of which would lead to the same number of molecules in each cell of the molecular phase space, and each of which therefore would correspond to the same complexion. In other words, by integrating or summing over all values of the coordinates and momenta of each molecule, we have counted each complexion $N!$ times, so that the expression (3.4) is $N!$ times as great as it should be, and we must divide by $N!$ to get the correct formula. Using Stirling's formula, $1/N! = (e/N)^N$ approximately, and multiplying by this factor, our amended partition function is

$$Z = \left[\frac{eV}{Nh^3}(2\pi mkT)^{3/2}Z_i\right]^N. \tag{3.5}$$

We shall use Eq. (3.5) as the basis for our future work. From the partition function (3.5), we can now find the Helmholtz free energy, entropy, and Gibbs free energy of our gas. Using the equation $A = -kT\ln Z$, we have

$$A = -\tfrac{3}{2}NkT\ln T - NkT\ln V - NkT\ln Z_i - NkT\left[\ln\frac{(2\pi mk)^{3/2}}{h^3} + 1 - \ln N\right]. \tag{3.6}$$

From $A$ we can find the pressure by the equation $P = -(\partial A/\partial V)_T$. We have at once

$$P = \frac{NkT}{V}, \qquad\text{or}\qquad PV = NkT. \tag{3.7}$$

Thus we derive the perfect gas law directly from statistical mechanics. We can also find the entropy, by the equation $S = -(\partial A/\partial T)_V$. Using the relation $Nk = nR$, we have

$$S = \tfrac{3}{2}nR\ln T + nR\ln V + nR\frac{d}{dT}(T\ln Z_i) + nR\left[\ln\frac{(2\pi mk)^{3/2}}{h^3} + \tfrac{5}{2} - \ln(nN_0)\right]. \tag{3.8}$$

From $S$ we can find the specific heat $C_V$ by the equation $C_V = T(\partial S/\partial T)_V$. We have

$$C_V = \tfrac{3}{2}nR + nRT\frac{d^2}{dT^2}(T\ln Z_i). \tag{3.9}$$

The specific heat given in Eq. (3.9) is of the form given in Eq. (1.10); by comparison we see that the internal heat capacity per mole, $C_i$, is given by

$$C_i = RT\frac{d^2}{dT^2}(T\ln Z_i), \tag{3.10}$$

similar to Eq. (5.21) of Chap. III. We shall use Eq. (3.10) in the next chapter to compute the specific heat of polyatomic gases. For monatomic gases, for which there are no internal coordinates, of course $C_i$ is zero. Using Eq. (3.10), we can rewrite $T\ln Z_i$ and its temperature derivatives in terms of $C_i$.
First, we consider the behavior of $T\ln Z_i$ and its derivatives at the absolute zero. Let there be $g_0$ cells of the lowest energy $\epsilon_0$ in the molecular phase space, $g_1$ of the next higher energy $\epsilon_1$, and so on. It is customary to call these $g$'s the a priori probabilities of the various energy levels, meaning merely the number of elementary cells which happen to have the same energy. Then we have, from Eq. (3.3),

$$Z_i = g_0e^{-\epsilon_0/kT} + g_1e^{-\epsilon_1/kT} + \cdots \to g_0e^{-\epsilon_0/kT} \quad\text{as}\quad T \to 0. \tag{3.11}$$

Hence $T\ln Z_i$ approaches $T\ln g_0 - \epsilon_0/k$ as $T$ approaches zero. In the derivative of $T\ln Z_i$ with respect to temperature, the only term which does not approach zero with an exponential variation is the term $\ln g_0$. Using these values together with Eq. (3.10), then, we have

$$\frac{d}{dT}(T\ln Z_i) = \ln g_0 + \frac{1}{R}\int_0^T C_i\,\frac{dT}{T}, \tag{3.12}$$

$$T\ln Z_i = -\frac{\epsilon_0}{k} + T\ln g_0 + \frac{1}{R}\int_0^T dT'\int_0^{T'} C_i\,\frac{dT''}{T''}. \tag{3.13}$$

Using Eq. (3.13), we can rewrite Eq. (3.6) as

$$A = n\left[U_0 - \tfrac{3}{2}RT\ln T - RT\ln\frac{V}{n} - T\int_0^T\frac{dT'}{T'^2}\int_0^{T'}C_i\,dT'' - RT(i + 1 - \ln R)\right], \tag{3.14}$$

where

$$U_0 = N_0\epsilon_0 \tag{3.15}$$

and

$$i = \ln\frac{(2\pi m)^{3/2}k^{5/2}g_0}{h^3}, \tag{3.16}$$

$N_0$ being Avogadro's number, the number of molecules in a mole, given in Eq. (3.10) of Chap. IV. We observe that Eq. (3.14) is exactly the same as (1.21), determined by thermodynamics, except that now we have found the quantities $U_0$, the arbitrary constant in the energy, and $i$, the chemical constant, in terms of atomic constants. Similarly, we can show that all the other formulas of Sec. 1 follow from our statistical mechanical methods, using Eqs. (3.15) and (3.16) for the constants which could not be evaluated from thermodynamics.

If we have a mixture of $N_1$ molecules of one gas, $N_2$ of another, and so on, the general phase space will first contain a group of coordinates and momenta for the molecules of the first gas, then a group for the second, and so on. The partition function will then be a product of terms like Eq. (3.5), one for each type of gas. The entropy will be a sum of terms like Eq. (1.14), with $n_j$ in place of $n$, and $P_j$, the partial pressure, in place of $P$.
But this is just the same expression for entropy in a mixture of gases which we have assumed thermodynamically in Eq. (2.7). Thus the results of Sec. 2 regarding the thermodynamic functions of a mixture of gases follow also from statistical mechanics.

It is worth noting that if we had not made the correction to our partition function on account of the identity of particles, and had used the incorrect function (3.4) instead of the correct one (3.5), we should not have found the entropy to be proportional to the number of molecules. We should then have found an entropy of mixing for two samples of the same kind of gas: the entropy of $(n_1 + n_2)$ moles would be greater than the sum of the entropies of $n_1$ moles and $n_2$ moles. It is not hard to show that the resulting entropy of mixing would be just the value found in Eq. (2.8) for the mixing of unlike gases. This is natural; if we forgot that the molecules were really alike, we should think that the diffusion of one sample of gas into another was really irreversible, since surely we cannot separate the gas again into two samples containing the identical molecules with which we started. But the molecules really are identical, and it is meaningless to ask whether the molecules we find in the final samples are the same ones we started with or not. Thus mixing two samples of unlike gases increases the entropy, while mixing two samples of like gases does not. It might seem paradoxical that these two results could be simultaneously true. For consider the mixing of two unlike gases, with increase of entropy, and then let the molecules of the two kinds of gas gradually approach each other in properties. When do they become sufficiently similar so that the process is no longer irreversible, and there no longer is an increase of entropy on mixing? This paradox is known as Gibbs's paradox, and it is removed by modern ideas of the structure of atoms and molecules, based on the quantum theory.
In the quantum theory there is a perfectly clear-cut distinction: either two particles are identical or they are not. There is no such thing as a gradual change from one to the other, for identical particles are things like electrons, of fixed properties, which we cannot change gradually at will. With this clear-cut distinction, it is no longer paradoxical that identical particles are to be handled differently in statistics from unlike particles.

CHAPTER IX

THE MOLECULAR STRUCTURE AND SPECIFIC HEAT OF POLYATOMIC GASES

We have seen in the preceding chapter that the equation of state of a perfect gas is independent of the nature of the molecule. This is not true, however, of the specific heat; the quantity $C_i$, which we called the internal specific heat, results from molecular rotations and vibrations and is different for different gases. For monatomic gases, where $C_i$ is zero, we have found $C_V = \tfrac{3}{2}nR$, $C_P = \tfrac{5}{2}nR$. Using the value $R = 1.987$ cal. per degree per mole, from Eq. (3.9) of Chap. IV, we have found the numerical values to be $C_V = 2.980$ cal. per degree per mole and $C_P = 4.968$ cal. per degree per mole, values which are correct within the limits of experimental error for the specific heats of He, Ne, A, Kr, Xe, and of monatomic vapors of metals, when extrapolated to zero pressure, so that they obey the perfect gas law. But for gases which are not monatomic, the additional term $C_i$ in the specific heat can only be found from a rather careful study of the structure of the molecule. This study, which we shall make in the present chapter, is useful in two ways, as many topics in this book will be. In the first place, it throws light on the specific problem of the heat capacity of gases. But in the second place, it leads to general and valuable information about molecular structure and to theories which can be checked from the experimentally determined heat capacities.

1. The Structure of Diatomic Molecules.
Many of the most important molecules are diatomic and furnish a natural beginning for our study. The atoms of a molecule are acted on by two types of forces, fundamentally electrical in origin, though too complicated for us to understand in detail without a wide knowledge of quantum theory. First, there are forces of attraction, the forces which are concerned in chemical binding, often called valence forces. We shall look into their nature much more closely in later chapters. These forces fall off rapidly as the distance $r$ between the atoms increases, and increase rapidly with decreasing $r$. Being attractions, they are negative forces, as shown in Fig. IX-1(a), curve I. Secondly, there are repulsive forces, quite negligible at large distances, but increasing even more rapidly than the attraction at small distances. These repulsions are just the mathematical formulation of the impenetrability of matter. If two atoms are pushed too closely into contact, they resist the push. The repulsion, a force of positive sign, is shown in curve II, Fig. IX-1(a). If the atoms were rigid spheres, this repulsion would be zero if $r$ were greater than the sum of the radii of the spheres, and would become infinite as $r$ became less than this sum of radii. The fact that it rises smoothly, not discontinuously, shows that the atoms do not really have sharp, hard boundaries; they begin to bump into each other gradually, though quite rapidly. Now when we add the attractive and repulsive forces, we get a curve like III of our figure. This represents a negative, attractive force at large distances, changing sign and becoming positive at small distances, where the repulsion begins to outweigh the attraction. At the distance $r_e$, where the force changes sign, there is a position of equilibrium. The attraction and repulsion just balance, and the atoms can remain at that distance apart indefinitely.
This, then, is the normal distance of separation of the atoms in the molecule.

[Fig. IX-1. Force (a) and energy (b) of interaction of two atoms in a molecule: I, attractive term; II, repulsive term; III, resultant curve.]

For small deviations of $r$ from $r_e$, the curve of force against distance can be approximated by a straight line: the force is given by $-k(r - r_e)$, a constant times the displacement, a force proportional to the displacement, the sort found in elastic distortion, and leading to simple harmonic motion. Under some circumstances, the atoms vibrate back and forth through this position of equilibrium, the amplitude increasing with temperature. At the same time, the molecule as a whole rotates about its center of gravity, with an angular velocity increasing with temperature, and of course finally it moves as a whole, the motion of the center of gravity being just as with a single particle.

Rather than using the force, as shown in Fig. IX-1(a), we more often need the potential energy of interaction, as shown in (b) of the same figure. Here we have shown the potential energy of the attractive force by I, that of the repulsive force by II, and the total potential energy by III. At the distance $r_e$, where the force is zero, the potential energy has a minimum; for we remember that the slope of the potential energy curve equals the negative of the force. The potential energy rises like a parabola on both sides of the minimum: if the force is $-k(r - r_e)$, where $k$ is a constant, the potential energy is $\tfrac{1}{2}k(r - r_e)^2$, plus a constant. It continues to rise indefinitely as $r$ decreases toward zero, since it requires infinite work to force the atoms into contact. At large values of $r$, however, it approaches an asymptotic value; it requires only a finite amount of work to pull the atoms entirely apart from each other. This amount of work, indicated by $D$ in the figure, is the work required to dissociate the molecule, and is important in thermodynamic applications.
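The balancing of attraction and repulsion described here can be illustrated with a toy force law; the inverse-power exponents and coefficients below are arbitrary choices for the illustration, not taken from the book. The summed force (curve III) changes sign at a single distance, which a simple bisection search locates:

```python
def force(r):
    # Toy model: repulsion A/r**13 (positive sign) plus attraction -B/r**7
    # (negative sign); with A = B the zero-crossing lands at r = 1 exactly.
    A, B = 1.0, 1.0
    return A / r**13 - B / r**7

# Bisection for the root of the total force, i.e. the equilibrium distance.
lo, hi = 0.5, 3.0   # force(lo) > 0 (repulsive), force(hi) < 0 (attractive)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if force(mid) > 0:   # still repulsive: equilibrium lies farther out
        lo = mid
    else:
        hi = mid
r_e = 0.5 * (lo + hi)
print(abs(r_e - 1.0) < 1e-9)  # True
```

At distances below the root the net force is positive (repulsive), above it negative (attractive), exactly the behavior of curve III in Fig. IX-1(a).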
In Table IX-1 we list values of $r_e$ and $D$ for a number of important diatomic molecules. A few of these, as the hydrides of carbon, nitrogen, and oxygen, do not ordinarily occur in chemistry, but they are formed in discharge tubes and are stable molecules.

TABLE IX-1. CONSTANTS OF DIATOMIC MOLECULES

| Substance | $D$ (kg-cal. per mole) | $D$ (electron volts) | $r_e$ (Å) | $a$ (Å$^{-1}$) |
|---|---|---|---|---|
| H$_2$ | 103 | 4.45 | 0.75 | 1.94 |
| CH | 81 | 3.5 | 1.12 | 1.99 |
| NH | 97 | 4.2 | 1.08 | 1.96 |
| OH | 102 | 4.4 | 0.96 | 2.34 |
| HCl | 102 | 4.40 | 1.27 | 1.91 |
| NO | 123 | 5.3 | 1.15 | 3.06 |
| O$_2$ | 117 | 5.09 | 1.20 | 2.68 |
| N$_2$ | 170 | 7.35 | 1.09 | 3.11 |
| CO | 223 | 9.6 | 1.13 | 2.48 |
| C$_2$ | 128 | 5.6 | 1.31 | 2.32 |
| Cl$_2$ | 57 | 2.47 | 1.98 | 2.05 |
| Br$_2$ | 45 | 1.96 | 2.28 | 1.97 |
| I$_2$ | 36 | 1.53 | 2.66 | 1.86 |
| Li$_2$ | 26 | 1.14 | 2.67 | 0.83 |
| Na$_2$ | 18 | 0.76 | 3.07 | 0.84 |
| K$_2$ | 12 | 0.51 | 3.91 | 0.78 |

The data are taken from Sponer, "Molekülspektren und ihre Anwendungen auf chemische Probleme," Springer, Berlin, 1935, which tabulates $D$ in electron volts, $r_e$, and vibrational frequencies. The values of $a$ in the table above are computed using Eq. (4.5) of the present chapter, solved for $a$ in terms of the vibrational frequency and $D$, as tabulated by Sponer. Thus a calculation of the vibrational frequency from data of the present table, using Eq. (4.5), will automatically give the right value. Sponer's data are taken from band spectra.

The values of $r_e$ are given in angstrom units (abbreviated Å), equal to $10^{-8}$ cm. The values of $D$ are given in kilogram-calories per gram mole, where we remember that 1 kg-cal. is 1000 cal., or $4.185 \times 10^{10}$ ergs. We also give $D$ in electron volts. One electron volt by definition is the energy acquired by an electron in falling through a difference of potential of one volt. This is the charge on the electron, $4.80 \times 10^{-10}$ e.s.u., times one volt, which is $1/300$ electrostatic unit of potential.
Thus we have 1 electron volt per molecule = 1.60 X 10~ 12 erg per molecule = 1.60 X 10- ia X 6.03 X 10 23 ergs per mole 1.60 X 10~ 12 X 6.03 X 10 23 , = 23.05 kg.-cal. per mole. (1.1) A very useful empirical approximation to the curves of Fig. IX-1 has been given by Morse, and it is often called a Morse curve. As a matter of fact, Fig. IX-1 was drawn from Morse's equation. This approximation is Force = Energy = C + D(e~^ r ~-^ - 2r n( > ->>). (1.2) Here C is a constant fixing the zero on the scale of ordinates and therefore arbitrary, since there is always an arbitrary additive constant in the energy. D is the energy of dissociation tabulated in Table IX-1, and finally a is a constant determining the curvature about the minimum of the curve, given in the last column of Table IX-1. Thus from the data given in Table IX-1 and the function (1.2), calculations can be made for the interatomic energy or force. In Eq. (1.2), the first term, the positive one, represents the repulsive part of the potential energy between the two particles, important at small distances, while the second, negative term represents the attraction at larger distances. While the Morse curve has no direct theoretical justification, still it proves to represent fairly accurately the curves which have been calculated in a few cases from quantum mechanics. Such calculations have shown that it is possible to explain in detail the interatomic energy curves, the magni- tudes of D, r ( , etc. Nevertheless, the explanations are so complicated that it is better simply to treat the constants of Table IX-1 as empirical constants, without trying to understand why some molecules have greater /)'s, some less, etc., in terms of any model. For future reference, how- ever, it is worth while pointing out that the smaller D is, the less energy is required to dissociate the molecule, and therefore the lower the tem- perature needed for dissociation. 
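Equation (1.2) is easy to evaluate from the constants of Table IX-1. The sketch below takes the H$_2$ entries ($D \approx 4.45$ electron volts, $r_e \approx 0.75$ Å, $a \approx 1.94$ Å$^{-1}$) and sets $C = 0$, a choice made here for convenience; it confirms that the Morse curve has its minimum of depth $D$ at $r = r_e$, that the force vanishes there, and that the energy approaches zero at large separation:

```python
import math

# Morse constants for H2, read from Table IX-1 (C chosen as 0 here)
D   = 4.45   # dissociation energy, electron volts
r_e = 0.75   # equilibrium separation, angstroms
a   = 1.94   # curvature constant, 1/angstrom

def morse_energy(r):
    """Eq. (1.2): E = C + D (exp(-2a(r-r_e)) - 2 exp(-a(r-r_e))), with C = 0."""
    x = r - r_e
    return D * (math.exp(-2 * a * x) - 2 * math.exp(-a * x))

def morse_force(r):
    """F = -dE/dr = 2 a D (exp(-2a(r-r_e)) - exp(-a(r-r_e)))."""
    x = r - r_e
    return 2 * a * D * (math.exp(-2 * a * x) - math.exp(-a * x))

print(abs(morse_energy(r_e) + D) < 1e-12)   # True: minimum has depth D
print(abs(morse_force(r_e)) < 1e-12)        # True: force vanishes at r_e
print(morse_energy(10.0) > -0.01)           # True: energy -> 0 at large r
```

The same two functions, with other rows of Table IX-1 substituted, give the interatomic energy and force for the other molecules listed.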
We shall later talk about thermal dissociation of molecules; from the table it is clear that the best molecules to use as examples, the ones which will dissociate at lowest temperatures, will be iodine and the alkali metals lithium, sodium, and potassium. Conversely, N$_2$ and CO require such a high energy for their dissociation that they do not dissociate under ordinary circumstances.

2. The Rotations of Diatomic Molecules. If molecules were governed by classical mechanics, the motions of their atoms would have the following nature. First, the molecules as a whole would have a uniform motion of translation, the mean kinetic energy being $\tfrac{3}{2}kT$, on account of the equipartition of energy, discussed in Chap. IV, Sec. 2. Secondly, the molecules would rotate with uniform angular momentum about an arbitrary axis passing through the center of gravity. Two coordinates are necessary to specify the rotation of the molecule; for instance, the latitude and longitude angles of the line joining the centers of the atoms. Thus, from equipartition, the mean kinetic energy of rotation would be $\tfrac{2}{2}kT = kT$. Finally, the atoms would vibrate back and forth along the line joining them. One coordinate, the interatomic distance $r$, determines this vibration. Thus, from equipartition, the mean kinetic energy of vibration would be $\tfrac{1}{2}kT$. At the same time, in simple harmonic motion, there is a mean potential energy equal to the mean kinetic energy and hence equal also to $\tfrac{1}{2}kT$, so that the oscillation as a whole would contribute $kT$ to the energy. We should then find a mean energy of rotation and vibration of $2kT$, with a contribution to the heat capacity per mole of $2N_0k = 2R$ cal. per degree. This would be the value of $C_i$, the heat capacity of internal motions mentioned in Chap. VIII, Sec. 1, if the gas obeyed classical mechanics.
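The classical count just made — three translational degrees of freedom at $\tfrac{1}{2}kT$ each, two rotational at $\tfrac{1}{2}kT$ each, and a vibration contributing $\tfrac{1}{2}kT$ kinetic plus $\tfrac{1}{2}kT$ potential — can be tallied mechanically. A small sketch, using $R = 1.987$ cal. per degree per mole as in the text:

```python
R = 1.987  # cal per degree per mole

# Each quadratic term in the classical energy contributes (1/2) R
# per mole to the heat capacity (equipartition).
quadratic_terms = {
    "translation (p_x, p_y, p_z)": 3,
    "rotation (two axes)": 2,
    "vibration, kinetic": 1,
    "vibration, potential": 1,
}
C_v_total = sum(quadratic_terms.values()) * 0.5 * R   # classical C_V per mole
C_internal = C_v_total - 1.5 * R                      # remove translation

print(abs(C_internal - 2 * R) < 1e-9)  # True: classical C_i = 2R, about 3.97 cal
```

The tally reproduces the $2R$ of the text; the next paragraphs explain why measured values fall below this classical figure.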
Actually, the observed values are less than this, increasing from small values at low temperatures to something approaching 2R at very high temperatures, and the discrepancies come from the fact that the quantum theory, rather than the classical theory, must be used. We have seen in Chap. IV, Sec. 2, that equipartition of energy is found only when the distribution of energy levels is so closely spaced as to be practically continuous. The translational levels of a gas are spaced as closely as this, as we have seen in Chap. IV, Sec. 1, so that we are perfectly justified in assuming equipartition for the translational motion, resulting in the heat capacity C_v = (3/2)nR. But the rotational and vibrational levels are not so closely spaced, and we must use the quantum theory to get even an approximately correct value for this part of the specific heat. Our first problem, then, is to find what the energy levels of a diatomic molecule really are. This can be done fairly accurately by quite elementary methods. To a good approximation we can treat the rotation and vibration separately, assuming that the total energy is the sum of a rotational and a vibrational term. We can treat the vibration as if the molecule were not rotating, and the rotation as if it were not vibrating, but as if the atoms were fixed at the interatomic distance r_e. Let us consider the rotation first. In Chap. III, Sec. 3, we have found that the energy of a rotating body of moment of inertia I, angular momentum p_θ, is p_θ^2/2I, which is equal to the familiar expression (1/2)Iω^2, where ω is the angular velocity. Furthermore, we have found that in the quantum theory the angular momentum is quantized: that is, it can take on only certain discrete values, p_θ = nh/2π, where n is an integer. Thus, the energy according to elementary methods can have only the values E_n = n^2 h^2/8π^2 I, as in Eq. (3.7), Chap. III. As a matter of fact, as we saw in Eq. (3.9), Chap.
III, we must modify this formula slightly. To agree with the usual notation used for molecular energy levels, we shall denote the quantum number by K, rather than n. Then it turns out that the energy, instead of being given by K^2 h^2/8π^2 I, is given by the slightly different formula

E_K = K(K + 1) h^2/8π^2 I, (2.1)

where K can take on the values 0, 1, 2, . . . . To evaluate these rotational energy levels, we need the moment of inertia I, in terms of quantities that we know. This is the moment of inertia for rotation about the center of gravity of the molecule. Let the masses of the two atoms be m_1, m_2, and let m_1 be at a distance r_1 from the center of gravity, m_2 at a distance r_2. Then we have

m_1 r_1 = m_2 r_2, r_1 + r_2 = r_e. (2.2)

Using Eq. (2.2), we find at once

r_1 = [m_2/(m_1 + m_2)] r_e, r_2 = [m_1/(m_1 + m_2)] r_e. (2.3)

But we have I = Σmr^2 = m_1 r_1^2 + m_2 r_2^2. Thus

I = μ r_e^2, (2.4)

where

μ = m_1 m_2/(m_1 + m_2), or 1/μ = 1/m_1 + 1/m_2. (2.5)

That is, the moment of inertia is that of a mass μ (sometimes called the reduced mass) at a distance r_e from the axis. Having found the rotational energy levels, we wish first to find how closely spaced they are, to see whether we can use the classical theory to compute the specific heat. The thing we are really interested in is the spacing of adjacent levels as compared with kT; if the spacing is small compared with kT, the summation in the partition function can be replaced by an integration and equipartition will hold. Let us consider the two lowest states, for K = 1 and 0, and find the energy difference between them. We have

E_1 - E_0 = (1 × 2 - 0 × 1) h^2/8π^2 I = h^2/4π^2 I. (2.6)

We wish to compare this quantity with kT; it is more convenient to define a quantity which we can call a characteristic temperature θ_rot by the equation

kθ_rot = h^2/4π^2 I, (2.7)

and then our condition for the applicability of the integration is T >> θ_rot. We now give, in Table IX-2, values for the characteristic temperatures
TABLE IX-2. CHARACTERISTIC TEMPERATURE FOR ROTATION, DIATOMIC MOLECULES

Substance    θ_rot, abs.
H2           171
CH           41.4
NH           44.1
OH           55
HCl          30.5
NO           4.93
O2           4.17
N2           5.78
CO           5.53
C2           4.70
Cl2          0.693
Br2          0.233
I2           0.108
Li2          1.96
Na2          0.447
K2           0.162

By Eq. (2.7), we have defined θ_rot by the relation that kθ_rot equals the energy difference between the two lowest rotational energy levels of the molecule. The method of calculation from the value of r_e in Table IX-1 is illustrated in the text.

for the same diatomic molecules listed in Table IX-1. These values are calculated, using Eq. (2.7), from the masses of the atoms, known from the atomic weights and Avogadro's number, and the values of r_e in Table IX-1. Thus, for instance, for H2 we find

θ_rot = h^2/4π^2 μ r_e^2 k = (6.61 × 10^-27)^2 (2)(6.03 × 10^23) / [4π^2 (1.008)(0.75 × 10^-8)^2 (1.379 × 10^-16)] = 171 abs.

Here μ = 1.008/(2 × 6.03 × 10^23) g, and r_e = 0.75 × 10^-8 cm. From Table IX-2, we see that the gases are divided distinctly into three types. In the first place, hydrogen stands entirely by itself, on account of its small mass. The characteristic temperature θ_rot, having the value 171 abs., is the only one at all comparable with room temperature. Next are the hydrides, with characteristic temperatures between 20 and 60 abs. Finally, the characteristic temperatures of all gases not containing hydrogen lie below 6 abs. Now it is not easy to calculate the specific heat of a rotating molecule in the quantum theory on account of mathematical difficulties, but the result is qualitatively simple. The specific heat rises from zero at low temperatures, comes to the classical value at high temperatures, and the range of temperature in which it is rising is in the neighborhood of the characteristic temperature θ_rot which we have tabulated in Table IX-2.
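The H2 calculation just made can be checked numerically. The sketch below is illustrative, not from the text; it uses the same constants quoted in the text: h = 6.61 × 10^-27 erg sec, k = 1.379 × 10^-16 erg per degree, and Avogadro's number N_0 = 6.03 × 10^23.

```python
import math

h  = 6.61e-27    # erg sec
k  = 1.379e-16   # erg per degree
N0 = 6.03e23     # Avogadro's number

def theta_rot(m1, m2, r_e):
    """Characteristic rotational temperature, Eqs. (2.4), (2.5), (2.7):
    k * theta_rot = h^2 / (4 pi^2 I), with I = mu * r_e^2."""
    mu = (m1 * m2) / (m1 + m2)     # reduced mass, grams
    I = mu * r_e ** 2              # moment of inertia, g cm^2
    return h ** 2 / (4 * math.pi ** 2 * I * k)

m_H = 1.008 / N0                   # mass of a hydrogen atom, grams
print(round(theta_rot(m_H, m_H, 0.75e-8)))   # 171, as in Table IX-2
```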
We may then infer from Table IX-2 that for molecules not containing hydrogen, the rotational specific heat will have attained its classical value at a very low temperature, so that we are entirely justified in using the classical value in our calculations. As an illustration, we give values computed for the specific heat of NO at low temperatures. We remember that the translational part of C_P is (5/2)R = 4.97 cal., whereas if the rotational specific heat is added we have (7/2)R = 6.96 cal. The specific heat is actually computed¹ to be 4.97 at 0.5 abs., 5.12 at 1.0, 6.91 at 5.0, and 6.95 at 10. Of course, NO at atmospheric pressure liquefies at a higher temperature than this, but at sufficiently reduced pressure the boiling point can be reduced as far as desired, so that there is nothing impossible about having the vapor at a temperature as low as desired. The hydrides have a decidedly higher temperature range in which the rotational specific heat is less than the classical value. And for hydrogen, the quantum theory value is appreciably less than the classical value even at room temperature. Thus, at 92 abs., we have the value 5.28 for C_P; at 197, 6.30; at 288, 6.78. The specific heat of hydrogen presents complications not occurring with any other substance. It turns out, for reasons which are too complicated to go into here, that in the energy levels of hydrogen and of other diatomic molecules made of two like atoms, we can make a rather sharp separation between the energy levels in which K is even, and those in which K is odd.

¹ For these values, and much other data relating to thermal properties of gases, see Landolt-Börnstein, "Physikalisch-chemische Tabellen," Dritter Ergänzungsband, Dritter Teil, pp. 2315-2364, Springer, 1936.
As a matter of fact, if a molecule is in a state with K even, for instance, almost no physical agency, such as collisions with other molecules, seems to have any tendency to transfer it to a state with K odd. It is almost as if the gas were a mixture of two gases, one with K even, the other with K odd. Actually names have been given to these gases, the case of K even being called parahydrogen, that of K odd being called orthohydrogen. At high temperatures, such as we ordinarily have, we have molecules of both types in thermal equilibrium. At first sight we should expect that there would be about equal numbers of molecules of both sorts, but an additionally complicating feature concerning the a priori probabilities of the states results in there being three times as many molecules of orthohydrogen as of parahydrogen at high temperatures. When specific heat measurements are made at low temperatures, it has always been the practice to start with hydrogen at room temperature and cool it down. On account of the slow rate of conversion of one type of hydrogen into the other, the two types of hydrogen appear in the same ratio of three to one when the low-temperature measurements are made. This is not the equilibrium distribution corresponding to low temperature. At the very lowest temperature, we should expect all the molecules to be in the lowest possible state, that of K = 0, a state of parahydrogen. Thus to compute the observed specific heat, we must assume a mixture of ortho- and parahydrogen in the ratio of three to one, find the specific heat of each separately, and add. When this is done, the result agrees with experiment. To get the true equilibrium mixture at low temperature, we must either wait a period of a number of days, or employ certain catalysts, which speed up the transformation from one form of hydrogen to the other.

3. The Partition Function for Rotation.
Though we shall not be able to find the rotational specific heat on account of mathematical difficulties, still it is worth while setting up the partition function for rotation and showing the limiting value which it approaches at high temperatures. To do this, we must sum exp(-E_rot/kT), where E_rot is given in Eq. (2.1), for all values of K. There is one point, however, which we have not yet considered. That is the fact that the energy levels are what is called degenerate: each level really consists of several stationary states and several cells in the phase space. The reason for this is what is called space quantization. We merely describe it, without giving the justification in terms of the quantum theory. It is natural that the angular momentum, Kh/2π, of the rotating molecule can be oriented in different directions in space. As a matter of fact, it turns out that in quantum theory there are just (2K + 1) allowed orientations, each corresponding to a different stationary state and a different cell. One simple way of describing these orientations is in terms of a vector model, as shown in Fig. IX-2. Here we have a vector of length Kh/2π. Then it can be shown that the projection of this vector along a fixed direction is allowed to have just the values Mh/2π, where M is an integer, corresponding to the various orientations shown in the figure. Obviously, the maximum value of M is K, coming when the angular momentum is oriented along the fixed direction, and the minimum is -K when it is opposite. But there are just (2K + 1) integers in the group K, K - 1, K - 2, . . . , -(K - 1), -K, justifying us in our statement that there are (2K + 1) allowed orientations. One says that the state is (2K + 1)-fold degenerate.
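The (2K + 1)-fold degeneracy can be verified by listing the allowed M values, and it is just this weight that enters the rotational partition function of Eq. (3.1). An illustrative sketch, not from the text; the value of θ_rot for CO is taken from Table IX-2, and by Eqs. (2.1) and (2.7) the levels are E_K = K(K + 1) kθ_rot/2:

```python
import math

def orientations(K):
    """Allowed values of M for angular momentum quantum number K (Fig. IX-2)."""
    return list(range(-K, K + 1))

assert orientations(2) == [-2, -1, 0, 1, 2]
assert all(len(orientations(K)) == 2 * K + 1 for K in range(20))

def z_rot(theta, T, kmax=1000):
    """Rotational partition function, Eq. (3.1).  By Eqs. (2.1) and (2.7),
    E_K = K(K+1) k*theta_rot/2, and each level carries its (2K+1) weight."""
    return sum((2 * K + 1) * math.exp(-K * (K + 1) * theta / (2 * T))
               for K in range(kmax))

# For T >> theta_rot the sum approaches the classical limit of Eq. (3.4),
# which in terms of theta_rot is 8 pi^2 I k T / h^2 = 2T / theta_rot.
theta_CO = 5.53   # Table IX-2
T = 300.0
print(z_rot(theta_CO, T) / (2 * T / theta_CO))   # close to 1
```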
Considering this degeneracy, we see that the term in the partition function corresponding to a given K must really be counted (2K + 1) times, since all these stationary states, corresponding merely to different orientations in space, obviously have the same energy. Thus, we have

Z_rot = Σ_K (2K + 1) e^(-K(K+1)h^2/8π^2 IkT), (3.1)

where Z_rot is the factor in the partition function of a single molecule, Z, of Eq. (3.3) of Chap. VIII, coming from rotation.

FIG. IX-2. Space quantization: the allowed projections Mh/2π, for M = K, K - 1, . . . , -K, of the angular momentum vector of length Kh/2π along a fixed direction.

It is this summation which unfortunately cannot be evaluated analytically. But we can handle it in the limit of high temperature, for then the terms corresponding to successive K's will differ so little that the summation can be replaced by an integration. We then have

lim Z_rot = ∫_0^∞ (2K + 1) e^(-K(K+1)h^2/8π^2 IkT) dK, T >> θ_rot. (3.2)

The bulk of the integral in Eq. (3.2) will come from high quantum numbers or high values of K. For these, we can neglect unity compared to K, obtaining

lim Z_rot = ∫_0^∞ 2K e^(-K^2 h^2/8π^2 IkT) dK, T >> θ_rot, (3.3)

= 8π^2 IkT/h^2, (3.4)

using the integrals (2.3) of Chap. IV. From the expression (3.4), we can in the first place find the rotational heat capacity, using Eq. (5.21) of Chap. III. This may be written

C_rot = N_0 ∂/∂T (kT^2 ∂ ln Z_rot/∂T), (3.5)

where we must multiply by N_0 because our quantity Z_rot refers to a single molecule. Thus we have, substituting Eq. (3.4) in Eq. (3.5),

C_rot = N_0 k, (3.6)

in accordance with equipartition. It is also interesting to compute the contribution of the rotation to the entropy, as given in Eq. (3.8) of Chap. VIII. From that equation, the contribution is

nR ∂/∂T (T ln Z_rot) = nR(ln 8π^2 Ik/h^2 + 1 + ln T). (3.7)

Thus, using Eq. (3.8) of Chap. VIII, the entropy in the temperature range where the rotation can be treated classically, but where the vibration is not excited enough to contribute appreciably to the entropy, is

S = (5/2)nR ln T + nR ln V + nR(i' + 1 - ln n), (3.8)

where

i' = ln [(2πm)^(3/2) k^(5/2)/h^3] + ln (8π^2 Ik/h^2). (3.9)

The quantity i' of Eq.
(3.9) can be considered as the chemical constant of a diatomic gas, in connection with the formula (3.8) for the entropy. We must remember, however, that Eq. (3.8) holds only in a restricted temperature range, as stated above; with some gases, the vibration begins to contribute to the entropy even at room temperature, as we shall see in the next section. It is sometimes useful to have the formula for the Gibbs free energy of a diatomic gas in the range where Eq. (3.8) is correct. This is easily found to be

G = n(U_0 - (7/2)RT ln T + RT ln P - RTi'), (3.10)

where i' is given in Eq. (3.9).

4. The Vibration of Diatomic Molecules. In addition to their rotation, we have seen that diatomic molecules can vibrate with simple harmonic motion if the amplitude is small enough. We shall use only this approximation of small amplitude, and our first step will be to calculate the frequency of vibration. To do this, we must first find the linear restoring force when the interatomic distance is displaced slightly from its equilibrium value r_e. We can get this from Eq. (1.2) by expanding the force in Taylor's series in (r - r_e). We have

Force = 2aD[1 - 2a(r - r_e) - 1 + a(r - r_e)] = -2a^2 D(r - r_e), (4.1)

neglecting higher terms. Now we can find the equations of motion for the two particles of mass m_1 and m_2, at distances r_1 and r_2 from the center of gravity, where r_1 + r_2 = r, under the action of the force (4.1). These are

m_1 d^2 r_1/dt^2 = -2a^2 D(r - r_e), m_2 d^2 r_2/dt^2 = -2a^2 D(r - r_e). (4.2)

We divide the first of these equations by m_1, the second by m_2, and add, obtaining

d^2 r/dt^2 = -2a^2 D(1/m_1 + 1/m_2)(r - r_e), or μ d^2 r/dt^2 = -2a^2 D(r - r_e), (4.3)

where μ is given by Eq. (2.5). The vibration, then, is like that of a particle of mass μ, with a force constant 2a^2 D. By elementary mechanics, we know that a particle of mass μ, acted on by a linear restoring force -kx, vibrates with a frequency

ν = (1/2π) √(k/μ). (4.4)

Thus the frequency of oscillation of the diatomic molecule is

ν = (1/2π) √(2a^2 D/μ). (4.5)

We have found in Eq. (3.8), Chap.
III, that the energy levels of an oscillator of frequency ν, in the quantum theory, are given by

E_vib = (v + 1/2)hν, (4.6)

where v is an integer (called n in Chap. III). The spacing of successive levels is hν. We may then expect, as with the case of rotation, that for temperatures T for which hν/kT is small, or temperatures large compared with a characteristic temperature

θ_vib = hν/k, (4.7)

the classical theory of specific heats, based on the use of the integration to find the partition function, is applicable, while for temperatures small compared with θ_vib we must use the quantum theory. To investigate this, we give in Table IX-3 the characteristic vibrational temperatures of the molecules we have been considering. The values of Table IX-3 can be found from D and a as tabulated in Table IX-1. Thus for H2 we have

θ_vib = [(6.61 × 10^-27)(1.95 × 10^8) / 2π(1.379 × 10^-16)] √[2(103)(4.185 × 10^10)/(1.008/2)] = 6140 abs. (4.8)

We see from Table IX-3 that for practically all the molecules the characteristic temperatures are large compared to room temperature, so that at all ordinary temperatures we must use the quantum theory of specific heat. We also note that in every case the characteristic temperature for vibration is very large compared with that for rotation. That is, the rotational energy levels are much more closely spaced than the vibrational levels. This is a characteristic feature of molecular energy levels, which is of great importance in the study of band spectra, the spectra of molecules.

TABLE IX-3. CHARACTERISTIC TEMPERATURE FOR VIBRATION, DIATOMIC MOLECULES

Substance    θ_vib, abs.
H2           6140
CH           4100
NH           4400
OH           5360
HCl          4300
NO           2740
O2           2260
N2           3380
CO           3120
C2           2370
Cl2          810
Br2          470
I2           310
Li2          500
Na2          230
K2           140

These values are calculated as in Eq. (4.8).

5. The Partition Function for Vibration.
First, we shall calculate the partition function and specific heat of our vibrating molecule by classical theory, though we know that this is not correct for ordinary temperatures. Using the expression (4.1) for the force, we have the potential energy given by

Potential energy = a^2 D(r - r_e)^2, (5.1)

and the kinetic energy is

Kinetic energy = p_r^2/2μ, (5.2)

where p_r is the momentum associated with r, equal to μ dr/dt. Then, by analogy with Eq. (5.22) of Chap. III, the vibrational partition function Z_vib can be computed classically as an integral,

Z_vib = (1/h) ∫∫ e^(-[p_r^2/2μ + a^2 D(r - r_e)^2]/kT) dp_r dr. (5.3)

In the integral over r, we can approximately replace the integral from 0 to ∞ by an integral from -∞ to ∞, for the exponential in the integrand is practically zero for negative values of r. If we do this, we have

Z_vib = kT/hν. (5.4)

The use of an equation analogous to Eq. (3.5) gives the value R for the vibrational contribution to the specific heat, as mentioned at the beginning of Sec. 2. Next we calculate the specific heat in the quantum theory. The partition function is

Z_vib = Σ_v e^(-(v + 1/2)hν/kT) = e^(-hν/2kT)(1 + e^(-hν/kT) + e^(-2hν/kT) + · · ·) = e^(-hν/2kT)/(1 - e^(-hν/kT)), (5.5)

using the formula for the sum of a geometric series,

1 + x + x^2 + · · · = 1/(1 - x). (5.6)

We note that at high temperatures, the numerator of Eq. (5.5) can be set equal to unity, and the denominator becomes

1 - (1 - hν/kT) = hν/kT,

so that the partition function reduces to kT/hν, in agreement with the classical value (5.4). Using Eqs. (5.5) and (3.5), we have for the vibrational specific heat per mole the value

C_vib = R (hν/kT)^2 e^(hν/kT)/(e^(hν/kT) - 1)^2. (5.7)

This result was first obtained by Einstein and is often called an Einstein function. Introducing the characteristic temperature from Eq. (4.7), we have

C_vib = R (θ_vib/T)^2 e^(θ_vib/T)/(e^(θ_vib/T) - 1)^2. (5.8)

It is also interesting to find the internal energy associated with the vibration; proceeding as in Eq. (5.20) of Chap. III, we see this at once to be

E_vib = N(hν/2) + Nhν/(e^(hν/kT) - 1). (5.9)

The average energy and heat capacity per oscillator, from Eqs.
(5.9) and (5.8), are plotted as functions of temperature in Fig. IX-3. It will be seen that the energy is (1/2)hν at the absolute zero and increases from this value quite slowly. The slow rise, with horizontal tangent of the energy curve at the absolute zero, is what leads to the vanishing specific heat at the absolute zero. At higher temperatures, however, the energy approaches the classical equipartition value kT, and the heat capacity approaches the classical value k.

FIG. IX-3. Average energy and heat capacity of an oscillator, according to the quantum theory.

As an example of the application of Eq. (5.8), we compute the vibrational specific heat for CO, in Table IX-4, then find the total specific heat by adding the vibrational heat to (7/2)R = 6.96 cal., which is the sum of (5/2)R for translation and R for rotation, and compare with the correct value. The agreement between our calculations and the "correct" values of C_P in Table IX-4 is good but not perfect. More accurate calculation agrees practically perfectly with experiment; in fact, calculation is in general a more accurate method than the best experiments for finding the specific heat of a gas, and the "correct" values of Table IX-4 are really simply the results of more exact and careful calculation than we have made. It is worth while discussing the errors in our calculation. In the first place, the frequency ν which we have found from the constants of the Morse curve is correct, for as a matter of fact the constants a in Table IX-1 were computed from the frequencies and values of D observed from band spectra, using Eq. (4.5) solved for a. But the Einstein specific heat formula (5.7) is not exactly correct in this case, for the actual interatomic potential does not correspond simply to a linear restoring force, as we have assumed in using the theory of the linear oscillator.
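Equations (4.5), (4.7), and (5.8) are easy to evaluate directly. The sketch below is illustrative, not from the text; it uses the H2 constants of Eq. (4.8), with D converted from kg.-cal. per mole to ergs per molecule, and θ_vib = 3120 abs. for CO from Table IX-3. At high temperatures the Einstein function approaches the classical value R, as it should.

```python
import math

h  = 6.61e-27     # erg sec
k  = 1.379e-16    # erg per degree
N0 = 6.03e23
R  = 1.987        # cal. per degree per mole

# Frequency and characteristic temperature for H2, Eqs. (4.5), (4.7), (4.8):
D  = 103.0 * 4.185e10 / N0    # dissociation energy, ergs per molecule
a  = 1.95e8                   # cm^-1
mu = (1.008 / N0) / 2.0       # reduced mass, grams

nu = (a / (2 * math.pi)) * math.sqrt(2 * D / mu)   # Eq. (4.5)
theta_vib_H2 = h * nu / k                          # Eq. (4.7)
print(round(theta_vib_H2))    # close to the 6140 abs. of Eq. (4.8)

# Einstein function, Eq. (5.8), applied to CO (theta_vib = 3120, Table IX-3):
def einstein_heat(theta, T):
    """C_vib = R (theta/T)^2 e^(theta/T) / (e^(theta/T) - 1)^2, cal./deg./mole."""
    x = theta / T
    return R * x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

for T in (500, 1000, 2000, 5000):
    print(T, round(einstein_heat(3120.0, T), 2))

# Limiting behavior: zero at low T, the classical value R at high T.
assert einstein_heat(3120.0, 50) < 1e-10
assert abs(einstein_heat(3120.0, 1e6) - R) < 1e-3
```

For a polyatomic molecule one simply adds one such Einstein term per normal mode, which is the procedure of Sec. 6 below.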
Not only that, but as we have mentioned, there is interaction between the vibration and the rotation of the molecule. These effects make a small correction, which can be calculated and which accounts for part of the discrepancy between the last two columns in Table IX-4. They do not account, however, for the fact that the correct specific heat rises above the value (9/2)R = 8.94 cal., for the quantum vibrational specific heat never rises above the classical value.

TABLE IX-4. COMPUTED SPECIFIC HEAT OF CO

T, abs.    Vibrational specific heat    Total specific heat C_P    C_P correct
500        0.18                         7.14                       7.12
1000       0.94                         7.90                       7.94
2000       1.63                         8.59                       8.67
3000       1.81                         8.77                       8.90
4000       1.89                         8.85                       9.02
5000       1.92                         8.88                       9.10

C_P is given in calories per mole. The calculation is made by Eq. (5.8). Values tabulated in the last column, "C_P correct," are from Landolt-Börnstein, "Physikalisch-chemische Tabellen," Dritter Ergänzungsband, Dritter Teil, p. 2324, Springer, 1936.

This effect comes in on account of a new feature, electronic excitation, which enters only at very high temperature. We can explain it briefly by stating that electrons, as well as linear oscillators, can exist in various stationary states, as a result of which they contribute to the specific heat. Their specific heat curves are somewhat similar to an Einstein curve, but with extremely high characteristic temperatures, so that even at 5000 abs. they are at the very low part of the curve and contribute only slightly to the specific heat. When these small contributions are computed and added to the values found from rotation and vibration, the final results agree very accurately with observation.

6. The Specific Heats of Polyatomic Gases. We shall now discuss the specific heats of polyatomic gases, without going into nearly the detail we have used for diatomic molecules. In the first place, the rotational kinetic energy is different from that in diatomic molecules.
It requires three, rather than two, coordinates to describe the orientation of a polyatomic molecule in space. Thus, imagine an axis rigidly fastened to the molecule. Two coordinates, say latitude and longitude angles, are enough to fix the orientation of this axis in space. But the molecule can still rotate about this axis, and an additional angle must be specified to determine the amount which it has rotated about the axis. These three coordinates all have their momenta and their terms in the kinetic energy. And when we find the mean kinetic energy of rotation, the new variable contributes its (1/2)kT, according to equipartition. Thus the total energy, translational and rotational, amounts to 3kT per molecule, or 3nRT for n moles, and the translational and rotational heat capacity is

C_V = 3nR = 5.96 cal. per degree per mole, C_P = 4nR = 7.95 cal. per degree per mole.

In addition to translational and rotational energy, the polyatomic molecules like the diatomic ones can have vibrational energy. As a matter of fact, they can have considerably more vibrational energy than a diatomic molecule, for they have more vibrational degrees of freedom. A diatomic molecule has only one mode of vibration, but a triatomic molecule has three. Thus the water molecule can vibrate in the three ways indicated by the arrows of Fig. IX-4.

FIG. IX-4. Modes of vibration of the H2O molecule: (a) 5170 abs., (b) 5400 abs., (c) 2290 abs. The arrows indicate the direction of vibration of each atom, for the normal mode whose characteristic temperature is indicated. For similar information on a variety of molecules, see H. Sponer, "Molekülspektren und ihre Anwendungen auf chemische Probleme," Springer, Berlin, 1935.

The arrows show the directions of displacement of the three atoms in the vibration.
In general, to find the number of vibrational degrees of freedom, it can be shown that one takes all the degrees of freedom of the atoms of the molecule, regarded as free. This is 3N, if there are N atoms, each having three rectangular coordinates. Then one subtracts from this total the number of other degrees of freedom: the translational degrees of freedom of the molecule as a whole, three, and the rotational degrees of freedom (none when N = 1, two when N = 2, three when N ≥ 3). Thus the number of vibrational degrees of freedom is 3(1) - 3 - 0 = 0 for a monatomic gas, 3(2) - 3 - 2 = 1 for a diatomic gas, 3(3) - 3 - 3 = 3 for a triatomic gas, and in general

3N - 6 for N ≥ 3. (6.1)

Each of the vibrational degrees of freedom given by Eq. (6.1) would have a mean kinetic and potential energy of kT, according to equipartition, and would contribute an amount R to the specific heat. As with diatomic molecules, however, the quantum theory tells us, and we find experimentally, that the vibrational specific heat is practically zero at low temperatures. We give a single example, the specific heat of water vapor, which will show what actually happens. Since this is a triatomic molecule, the specific heat should be C_P = 4nR = 7.95 cal. per degree per mole for translation and rotation, plus three Einstein terms, as given by Eq. (5.8), for characteristic temperatures which are 5170 abs., 5400 abs., and 2290 abs., for the modes of vibration (a), (b), and (c) respectively in Fig. IX-4.¹ Calculations are given in Table IX-5, where the columns

TABLE IX-5. COMPUTED SPECIFIC HEAT OF WATER VAPOR
T, abs.    (a)     (b)     (c)     C_P, calculated    C_P, correct
300        0.00    0.00    0.10    8.05               8.00
400        0.00    0.00    0.25    8.20               8.16
500        0.00    0.00    0.40    8.35               8.38
600        0.02    0.02    0.66    8.63               8.64
800        0.16    0.15    1.05    9.31               9.20
1000       0.31    0.30    1.30    9.86               9.80
1500       0.80    0.78    1.65    11.18              11.15
2000       1.18    1.15    1.79    12.07              12.09
3000       1.58    1.52    1.90    12.90              13.10

See comments under Table IX-4.

headed (a), (b), (c) are the vibrational heat capacities for the three modes of vibration, the next column gives the calculated C_P, and the last one the correct C_P, found from more accurate calculation and agreeing well with experiment. As with CO, the slight discrepancies remaining between our calculation and the correct values can be removed by more elaborate methods, including the interaction between vibration and rotation, and electronic excitation.

The calculation we have just made is based on the assumption that the vibrations of the molecule are simple harmonic, the force being proportional to the displacement and the potential energy to the square of the displacement. Ordinarily this is a fairly good approximation for the amplitudes of vibration met at ordinary temperatures, but there are some important cases where this is not true. An example is found in the so-called phenomenon of hindered rotation. There are some molecules, of which ethane, CH3-CH3, shown in Fig. IX-5, is an example, in which one part of the molecule is almost free to rotate with respect to another part. Thus, in this case, one CH3 group can rotate with respect to the other about the line joining the carbons as an axis. The rotation would be

¹ See Sponer, "Molekülspektren und ihre Anwendungen auf chemische Probleme," Vol. I, Springer, 1935, for vibrational frequencies of this and other molecules.
Then as far as this degree of freedom was concerned, there would be no potential energy, so that the mean energy on the classical theory would be ?kT, the kinetic energy, rather than kT, the sum of the kinetic and potential energies, as in an oscillator. Actually in such cases, however, there are slight torques, with a periodicity in the case of ethane of 120 in 6. These arise presumably from repulsions between the hydrogen atoms of the two methyl groups, suggesting that the potential energy might have a maximum value when the hydrogens in the two groups were opposite each other, and a minimum when the hydrogens of one group were opposite the spaces between hydro- gens in the other. In such a case, for small energies, the motion would Fio. IX-5 Tho ethane molecule, OII 3 -CH., be an oscillation about one of the minima of the potential energy curve, while for larger energies, greater than the maximum of the potential energy curve, the motion would be a rotation, but not with uniform angu- lar velocity. In such a case, in the classical theory, the mean kinetic energy would equal \kT in any case. The mean potential energy, how- ever, would increase as \kT for low temperatures, where the motion was oscillatory, but would approach a limiting value, equal to the mean potential energy over all angles, which it would practically reach at the temperatures at which most of the molecules were rotating rather than oscillating. Thus the heat capacity per molecule connected with this degree of freedom would be k at low temperatures, but would fall to ?k at higher temperatures where the rotation became more nearly free. With the quantum theory, of course the heat capacity would resemble that of an oscillator at low temperatures, starting from zero at the abso- lute zero, then rising to the neighborhood of &, but falling to $k at high temperatures as in the classical case. 
Measurements and calculations of specific heat and entropy of molecules which might be expected to show free rotation of one part with respect to another generally seem to indicate that at ordinary temperatures the rotations are really hindered by periodic torques in this way, the heat capacity being more like that of an oscillator than that of a rotator. It is clear that from measurements of the specific heat one can work backward and find useful information about the magnitude of the torques hindering the rotations, and hence about the interatomic forces.

CHAPTER X

CHEMICAL EQUILIBRIUM IN GASES

In Chap. VIII we treated mixtures of gases in which the concentrations were determined. Now we take up chemical equilibrium, or the problem of mixtures of gases which can react with each other, so that the main problem is to determine the concentrations of the various gases in equilibrium. In this problem, as in all cases of chemical reactions, there are two types of question that we may ask. In the first place, there is the rate of reaction. Given two gases capable of reacting and mixed together, how fast will the reaction occur, and how will this rate depend on pressure and temperature? In the second place, there is the question of equilibrium. To every reaction there is a reverse reaction, so that the final state of equilibrium will represent a balance between the direct and the reverse reactions, with definite proportions of all the substances in the equilibrium mixture. We may wish to know what these proportions are. The first type of problem, the rate of reaction, can be answered only by kinetic methods. Gas reactions take place only when the reacting molecules are in collision with each other, and only when the colliding molecules happen to have a good deal more than the average energy.
Thus to find the rate of reaction we must investigate collisions in detail and must know a great deal about the exact properties of the molecules. In almost no case do we know enough to calculate a rate of reaction directly from theory. We can, however, find how the rate of reaction depends on the concentrations of the various substances present in the gas, and even this small amount of information is useful. It allows us to use the kinetic method to find the concentration of substances in equilibrium, for we can simply apply the condition that the concentrations are such that they do not change with time, and this gives us equations leading to the so-called mass action law. The results we find in this way, however, are incomplete. They do not tell us how the equilibrium changes with temperature, a very important part of the problem. Fortunately, these questions of equilibrium can be answered completely by the method of thermodynamics and statistical mechanics. For in equilibrium, the Gibbs free energy of the mixed gas must have the minimum value possible, and this condition leads not merely to the mass action law but to complete information about the variation of the equilibrium with temperature. As usual, thermodynamics gives us more complete and satisfactory information, but about a more restricted problem, that of thermal equilibrium. In our discussion to follow, we shall start with the kinetic method, speaking about the mechanism of gas reactions and carrying the method as far as we can. Then we shall take up the thermodynamic treatment, deriving the conditions of equilibrium, and finding the interesting fact that the chemical constants of gases, introduced previously in connection with the entropy, are fundamental in the study of chemical reactions.

1. Rates of Reaction and the Mass Action Law.
Let us write a simple chemical equation; for instance,

2H2 + O2 ⇌ 2H2O, (1.1)

describing the combination of hydrogen and oxygen to form water, and the reverse, the dissociation of water into hydrogen and oxygen. The equation expresses the fact that when two molecules of H2 and one of O2 disappear, two of H2O appear; or vice versa. Now let us form the simplest kinetic picture of the reaction that we can. For the combination of two hydrogens and an oxygen to form two water molecules, we suppose in the first place that a triple collision of the two hydrogens and the one oxygen molecule is necessary; we suppose further that in a certain fraction of such collisions, a fraction which may depend on the temperature, the three molecules react. Thus the number of sets of molecules reacting per unit time will be proportional to the number of triple collisions per unit time. This number of collisions in turn will be proportional to the number of oxygen molecules per unit volume and to the square of the number of hydrogens per unit volume. For plainly if we double the number of oxygens, we double the chance that one will be found at the point where the collision will take place; while if we double the number of hydrogens, we double the chance that one hydrogen will be found at the location of the collision, and furthermore we double the chance that, if one is there, another will also be found on hand. Since, at a given temperature, the number of molecules per unit volume is proportional to the pressure, we find for the number of sets of molecules that react per second

C(T) P_H2² P_O2. (1.2)

Here C(T) is a coefficient depending on the size of the molecules, their velocities, the probability that if they collide they will react, etc. The quantities P_H2 and P_O2 are the partial pressures of H2 and O2; that is, they are the pressures which these gases would exert by themselves, if their molecules only were occupying the volume.
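The counting argument behind Eq. (1.2) can be checked in a couple of lines (the pressures here are arbitrary illustrative numbers, and C(T) is set to 1, since it drops out of ratios):

```python
# Toy check of the collision-counting argument of Eq. (1.2): the forward
# rate is proportional to P_H2**2 * P_O2, so doubling the hydrogen
# pressure quadruples the rate, while doubling the oxygen pressure
# only doubles it.
def forward_rate(p_h2, p_o2, c_of_t=1.0):
    return c_of_t * p_h2**2 * p_o2

base = forward_rate(1.0, 1.0)
print(forward_rate(2.0, 1.0) / base)  # 4.0: doubled hydrogen
print(forward_rate(1.0, 2.0) / base)  # 2.0: doubled oxygen
```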
It is the evaluation of C(T) as a function of temperature which, as we have previously suggested, is almost prohibitively difficult by purely theoretical methods. At the same time that direct reactions are taking place, there will be reverse reactions, dissociations of water molecules to produce hydrogens and oxygens. From the chemical equation (1.1) we see that two water molecules must be present in order to furnish the necessary atoms to break up into hydrogen and oxygen molecules. Thus, by the type of argument we have just used, the rate of the reverse reaction must be proportional to the square of the number of water molecules per unit volume, or to the square of the partial pressure of water; we may write it as

C′(T) P_H2O². (1.3)

Suppose we start with only hydrogen and oxygen in the container, with no water vapor. Reactions will occur at a rate given by Eq. (1.2), producing water. As this happens, the oxygen and hydrogen will be gradually used up, so that their partial pressures will decrease and the number of molecules reacting per unit time will diminish. At the same time, molecules of water will appear, so that the partial pressure of water will build up and with it the number of dissociation reactions given by Eq. (1.3), in which water dissociates into hydrogen and oxygen. This will tend to diminish the amount of water and increase the amount of hydrogen and oxygen, until finally an equilibrium will occur, with stationary amounts of the various gases, though individual molecules are reacting, changing from water vapor to oxygen and hydrogen and back again with great rapidity. In equilibrium, the number of reactions of type (1.2) must just equal the number of type (1.3) per unit time. Thus we must have

C(T) P_H2² P_O2 = C′(T) P_H2O²,

or

P_H2² P_O2 / P_H2O² = C′(T)/C(T) = K_P(T), (1.4)

where K_P(T) is a function of temperature, the subscript P indicating the fact that Eq. (1.4) is stated in terms of partial pressures (we shall presently state it in a slightly different way). Eq.
(1.4) expresses the law of mass action for the particular reaction in question. From Eq. (1.4) we can derive information about the effect of adding hydrogen or oxygen on the equilibrium. Thus suppose at a given temperature there is a certain amount of water vapor in equilibrium with a certain amount of hydrogen and oxygen. Now we add more hydrogen and ask what happens. In spite of adding hydrogen, the left side of Eq. (1.4) must stay constant. If the hydrogen did not combine with oxygen to form water, P_H2 would increase, the other P's would stay constant, and the expression (1.4) would increase. The only way to prevent this is for some of the added hydrogen to combine with some of the oxygen already present to form some additional water. This will decrease both terms in the numerator of Eq. (1.4), increase the denominator, and so bring back the expression to its original value. Information of this type, then, can be found directly from our kinetic derivation of the mass action law. But we should know a great deal more if we could calculate K_P(T), for then we could find the actual amount of dissociation and its variation with pressure and temperature. It is easy to formulate the mass action law in the general case, by analogy with what we have done for our illustrative reaction. In the first place, let us write our chemical equations in a standard form. Instead of Eq. (1.1), we write

2H2 + O2 − 2H2O = 0. (1.5)

We understand Eq. (1.5) to mean that two molecules (or moles) of hydrogen, one of oxygen, appear in the reaction, while two molecules (or moles) of water disappear. The reverse reaction, according to this convention, would be written with opposite sign. We write our general chemical equation by analogy with Eq.
(1.5), each symbol having an integral coefficient ν, giving the number of molecules (or moles) of the corresponding substance appearing in the reaction, negative ν's corresponding to the disappearance of a substance. Let there be a number of substances, denoted by 1, 2, . . . (as H2, O2, H2O in the example), with corresponding ν₁, ν₂, . . . , and partial pressures P₁, P₂, . . . Then it is clear by analogy with our example that the general mass action law can be stated

P₁^ν₁ P₂^ν₂ · · · = K_P(T). (1.6)

Here the terms with negative ν's automatically appear in the denominator, as they should from Eq. (1.4). It is often convenient to restate Eq. (1.6), not in terms of partial pressures, but in terms of the number of moles of each substance present, or in terms of fractional concentrations. Thus let there be n₁ moles of the first substance, n₂ of the second, etc. Then we have by the perfect gas law

P_j = n_j RT / V, (1.7)

where V is the volume occupied by the mixture of gases. Substituting in Eq. (1.6), we have

n₁^ν₁ n₂^ν₂ · · · = K_P(T) (V/RT)^(ν₁ + ν₂ + · · ·). (1.8)

Equation (1.8) is convenient in finding the effect of a change of volume on the equilibrium. For example, in our case of water, from Eq. (1.5), ν₁ + ν₂ + · · · = ν_H2 + ν_O2 + ν_H2O = 2 + 1 − 2 = 1. Thus we have

n_H2² n_O2 / n_H2O² = K_P(T) V/RT, (1.9)

a quantity proportional to the volume. Now let the volume be changed at constant temperature. If the volume increases, the numerator must increase, showing that there must be dissociation of water into hydrogen and oxygen. On the other hand, decrease of the volume produces recombination. This is a special case of the general rule which is seen to follow from Eq. (1.8): decrease of volume makes the reaction run in the direction to reduce the total number of moles of gas of all sorts. In our special case, if two moles of hydrogen and one of oxygen combine to give two moles of water vapor, there is one mole of gas less after the process than before.
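The volume rule can be made concrete with a small numerical sketch (the right-hand-side values below are arbitrary illustrative numbers, not physical constants). Starting from two moles of water vapor and letting a fraction x dissociate, so that n_H2O = 2(1 − x), n_H2 = 2x, n_O2 = x, Eq. (1.9) becomes x³/(1 − x)² = K_P V/RT; the left side increases steadily with x, so a larger volume at the same temperature forces a larger x:

```python
# Effect of volume on the dissociation 2H2O <-> 2H2 + O2, per Eq. (1.9).
# With n_H2O = 2(1-x), n_H2 = 2x, n_O2 = x, Eq. (1.9) reads
#     x**3 / (1 - x)**2 = K_P * V / (R * T),
# and the left side grows monotonically with x on (0, 1), so the
# equilibrium x can be found by bisection.

def dissociated_fraction(rhs, iters=200):
    """Solve x**3/(1-x)**2 = rhs for x in (0, 1) by bisection."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid**3 / (1.0 - mid)**2 < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_small = dissociated_fraction(1e-3)   # volume V (illustrative K_P V / RT)
x_large = dissociated_fraction(1e-2)   # volume 10 V, same temperature
print(x_small, x_large)                # x_large > x_small: more dissociation
```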
It seems reasonable that decrease of volume should force the equilibrium in this direction. It is also useful to write the mass action law in terms of the relative concentrations of the gases. From Eq. (1.6), using Eq. (2.4) of Chap. VIII, or P_j = c_j P, where c_j is the relative concentration of the jth gas, we have

c₁^ν₁ c₂^ν₂ · · · = K_P(T)/P^(ν₁ + ν₂ + · · ·) = K(P, T). (1.10)

Equation (1.10) is convenient for finding the effect of pressure on the equilibrium, as Eq. (1.8) was for finding the effect of volume. Thus in the case of the dissociation of water vapor, we have

c_H2² c_O2 / c_H2O² = K_P(T)/P,

showing that increasing pressure increases the concentration of water vapor. Of course, this is only a different form of stating the result (1.9), but is generally more useful.

2. The Equilibrium Constant, and Van't Hoff's Equation. In the preceding section we derived the mass action law, but have not evaluated the equilibrium constant K_P(T) or K(P, T). Now we shall carry out our thermodynamic discussion, leading to a derivation of this constant. The method is clear: we remember from Chap. II that the Gibbs free energy is a minimum for a system at constant pressure and temperature. Then we find the Gibbs free energy G of the mixture of gases, and vary the concentrations to make it a minimum. From Eq. (2.15) of Chap. VIII, we have

G = Σ_j n_j (G_j + RT ln c_j). (2.1)

For equilibrium, we must find the change of G when the numbers of moles of the various substances change, and set this change equal to zero. Using Eq. (2.1), we have

dG = Σ_j (G_j + RT ln c_j) dn_j + Σ_j n_j d(G_j + RT ln c_j) = 0. (2.2)

The second sum is zero. In the first place, the G_j's do not depend on the n_j's, so that they do not change when the n_j's are varied. For the concentrations, we have d ln c_j = dc_j/c_j. Hence the last term becomes Σ_j RT (n_j/c_j) dc_j. But by Eq. (2.1), Chap. VIII, n_j/c_j = n₁ + n₂ + · · · , independent of j, so that the summation is really RT(n₁ + n₂ + · · ·) Σ_j dc_j. Furthermore, by Eq. (2.6), Chap.
VIII, Σ_j c_j = 1, so that Σ_j dc_j = 0, being the change in a constant. Hence we have finally

dG = Σ_j (G_j + RT ln c_j) dn_j = 0 (2.3)

as the condition of equilibrium. But from the chemical equation we know that the numbers of molecules of the various substances appearing in an actual reaction must be proportional to ν_j, the coefficient appearing in the chemical equation. Hence the dn_j's must be proportional to the ν_j's, and we may rewrite Eq. (2.3) as

Σ_j ν_j (G_j + RT ln c_j) = 0. (2.4)

Taking the exponential and putting all terms involving the c's on the left, the others on the right, we have

c₁^ν₁ c₂^ν₂ · · · = K(P, T), where ln K(P, T) = −Σ_j ν_j G_j / RT. (2.5)

In Eq. (2.5) we have found the same mass action law as in Eq. (1.10), but with a complete evaluation of the equilibrium constant K. Using Eq. (2.16), Chap. VIII, for G_j, we verify at once that K(P, T) varies with P as in Eq. (1.10), and we find

ln K_P(T) = −Σ_j ν_j U_j/RT + (5/2)(Σ_j ν_j) ln T + (1/R) Σ_j ν_j ∫ (dT/T²) ∫ C_j dT + Σ_j ν_j i_j. (2.6)

In Eq. (2.6), U_j is the arbitrary additive constant giving the energy of the jth gas per mole at the absolute zero, C_j is the heat capacity per mole of the jth gas coming from rotations and vibrations, and i_j is the chemical constant of the jth gas. There is an important relation connecting the change of either K or K_P with temperature and a quantity called the heat of reaction. By definition, the heat of reaction is the heat absorbed, the increase of enthalpy ΔH, when the reaction proceeds reversibly, so that ν₁ moles of the first type of molecule are produced, ν₂ of the second, etc., at constant pressure and temperature. From Eqs. (2.13), (2.14) of Chap. VIII, this is at once seen to be

ΔH = Σ_j ν_j (U_j + (5/2)RT + ∫ C_j dT). (2.7)

Now let us find the change of ln K(P, T) or ln K_P(T) with temperature. Differentiating Eq. (2.6), we have

(∂ ln K_P/∂T) = ΔH/RT². (2.8)

Equation (2.8) is called Van't Hoff's equation and is a very important one in physical chemistry. It can be shown at once that the same equation holds for K(P, T).
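It is worth verifying that Eqs. (2.6), (2.7), and (2.8) fit together. A sketch of the check, differentiating the expression for ln K_P(T) with respect to T term by term:

\[
\frac{d \ln K_P}{dT}
= \frac{\sum_j \nu_j U_j}{RT^2}
+ \frac{5}{2}\,\frac{\sum_j \nu_j}{T}
+ \frac{1}{RT^2}\sum_j \nu_j \int_0^T C_j\, dT
= \frac{1}{RT^2}\sum_j \nu_j \Bigl(U_j + \tfrac{5}{2}RT + \int_0^T C_j\, dT\Bigr)
= \frac{\Delta H}{RT^2},
\]

which is Van't Hoff's equation, with ΔH given by Eq. (2.7); the constants i_j, being independent of temperature, drop out of the derivative.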
Van't Hoff's equation can be used in either of two ways. First, we may know the heat of reaction, from thermal measurements, and we may then use that to find the slope of the curve giving K(P, T) against temperature. Let us see which way this predicts that the equilibrium should be displaced by increasing temperature. Suppose that heat is absorbed in the chemical reaction, so that ΔH is positive. Then the constant K(P, T) will increase with temperature. That means that at high temperatures more of the material is in the form that requires heat to produce it. For instance, to dissociate water vapor into hydrogen and oxygen requires heat. Therefore increase of temperature increases the amount of dissociation. In the second place, we may use Van't Hoff's equation to find the heat of reaction, if the change of equilibrium constant with temperature is known. This, as a matter of fact, is one of the commonest ways of measuring heats of reaction in physical chemistry. The heat of reaction at the absolute zero, from Eq. (2.7), is

ΔH₀ = Σ_j ν_j U_j. (2.9)

It is interesting to see that this can be calculated from the quantities D of Sec. 1, Chap. IX. From Table IX-1 we know the value of D, the energy required to dissociate various diatomic molecules, and similar values can be given for polyatomic molecules. Thus let us consider our case of the dissociation of water vapor. To remove one hydrogen atom from an H2O molecule requires 118 kg.-cal. per mole (not given in the table), and to remove the second hydrogen from the remaining OH molecule requires 102, a total of 220 kg.-cal. In our reaction, there are two H2O molecules, requiring 440 kg.-cal. to dissociate them into atoms. That is, 440 kg.-cal. are absorbed in this process. But now imagine the four resulting hydrogen atoms to combine to form two H2 molecules and the two oxygens to combine to form O2. Each pair of hydrogens liberates 103 kg.-cal.
in recombining, a total of 206 kg.-cal., and the two oxygens liberate 117 kg.-cal., so that 206 + 117 = 323 kg.-cal. are liberated in this part of the process. The net result is an absorption of 440 − 323 = 117 kg.-cal., so that ΔH₀ is 117 kg.-cal. This is in fairly good agreement with the experimental value of about 113 kg.-cal. It is interesting to notice that the final result is the difference of two fairly large quantities, so that relatively small errors in the D's can result in a rather large error in the heat of reaction. The calculation which we have just made for ΔH₀ does not follow exactly the pattern of Eq. (2.9). To see just how that equation is to be interpreted, we must give values to the various U_j's. In general, since there is an undetermined constant in any potential energy, we can assign the U_j's at will. But there is a single relation between them, on account of the possibility of formation of water from hydrogen and oxygen. Let U_H2 be the energy per mole of hydrogen at the absolute zero, U_O2 of oxygen, U_H2O of water, all of course in the vapor state. Then from the last paragraph we know that the energy of two moles of hydrogen, plus that of one mole of oxygen, is 117 kg.-cal. greater than the energy of two moles of water vapor. That is,

2U_H2 + U_O2 = 2U_H2O + 117 kg.-cal. (2.10)

Statements like Eq. (2.10) are sometimes written in combination with the chemical equation, in a form like

2H2 + O2 = 2H2O + 117 kg.-cal. (2.11)

Aside from Eq. (2.10), the U_j's can be chosen freely; that is, any two of them can be chosen at will and then the third is determined. Now let us compute ΔH₀, using Eqs. (2.9) and (2.10). It is

ΔH₀ = 2U_H2 + U_O2 − 2U_H2O = 2U_H2O + 117 kg.-cal. − 2U_H2O = 117 kg.-cal., (2.12)
(2.9), leaving a uniquely determined value of A// . We have just seen that a knowledge of the heats of dissociation, or constants D, of the various molecules concerned in a reaction allows us to find A//o, the heat of reaction at the absolute zero. A further knowledge of the specific heats of the molecules gives us all the information we need to find the equilibrium constant K P (T), according to Kq. (2.6), except for the final constant Siv' ; . In other words, this knowledge is enough to find the rate of change of In K /('/') with temperature, according to Eq. (2.8), but not enough to determine the constant of integration of the integrated equation (2.6). But we have seen in Chap. VIII how to find the con- stants i theoretically, find later we shall see how to find them experimen- tally from vapor pressure measurements. We now see why these constants are so important and why they are railed chemical constants: they determine the constants of integration for problems of chemical equilibrium. For this reason, a great deal of attention has gone to finding accurate values for them. 3. Energies of Activation and the Kinetics of Reactions. A curious fact may have struck the reader in connection with the example which we have used, the equilibrium between water vapor and hydrogen and oxygen. Calculating the equilibrium, we find that at room temperature the amount of hydrogen and oxygen in equilibrium with water vapor is entirely negligible; even at several thousand degrees only a few per cent of the water vapor is dissociated. This certainly accords with our usual experience with steam, which does not dissociate into hydrogen and oxygen in steam engines. And yet if hydrogen and oxygen gases are mixed together in a container at room temperature, they will remain indefinitely without anything happening. 
A spark or other such disturbance is required to ignite them; as a result of ignition, of course, a violent explosion results, the hydrogen and oxygen being practically instantaneously converted into water vapor. The heat of combustion, which is the same thing as the heat of reaction of 117 kg.-cal. which we have just computed, is a very large one (one of the largest for any common reaction); since an explosion is an adiabatic process, this heat cannot escape, but will go into raising the temperature, and consequently the pressure, of the resulting water vapor enormously. It is this sudden rise of pressure and temperature that constitutes the explosion. But now we ask, why was the spark necessary? Why do not the hydrogen and oxygen combine immediately when they are placed in contact? Our first supposition might be that the triple collisions of two hydrogen molecules and one oxygen, which we have postulated as being necessary for the reaction, were rare events. But this is not the case. Calculation, taking into account the cross section of the molecules, shows that at ordinary temperatures and pressures there will be a tremendous number of such collisions per unit time. The only remaining hypothesis is that even when two hydrogen molecules and an oxygen are in the intimate contact of a collision, still it is such a rare thing for their atoms to rearrange themselves to form two water molecules that for all practical purposes it never happens. This is indeed the case. The proportion of all such triple collisions in which a reaction takes place is excessively small at ordinary temperatures, though it is finite; if we waited long enough, equilibrium would be attained, but it might take thousands of years. But the probability of a reacting collision increases enormously with the temperature, which is the reason why a spark, a localized region of exceedingly high temperature, can start the reaction.
Once it is started, the heat liberated by the reaction near the spark raises the gas in the neighborhood to such a high temperature that it in turn can react, liberating more heat and allowing gas still further away to react, and so on. In this way a sort of wave or front of reaction is propagated through the gas, with a very high velocity, and this is characteristic of explosion reactions. It is true in general that, given a collision of the suitable molecules for a reaction, the probability of reaction increases enormously rapidly with the temperature. When measurements of rates of reaction are made, it is found that the probability of reaction can be expressed approximately by the formula

Probability of reaction = const. × exp (−Q₁/kT), (3.1)

where Q₁ is a constant of the dimensions of an energy. Equation (3.1) suggests the following interpretation: suppose that out of all the collisions, only those in which the colliding molecules taken together have an energy (translational, rotational, and vibrational) of Q₁ or more can produce a reaction. By the Maxwell-Boltzmann distribution, the fraction of molecules having this energy will contain the factor exp (−Q₁/kT). (The fraction having an energy greater than Q₁ can be shown to contain this factor, as well as the fraction having an energy between Q₁ and Q₁ + dQ₁, which is what we usually consider.) Thus we can understand the variation of rate of reaction with temperature. We must next ask, why do the molecules need the extra energy Q₁ in order to react? This energy is ordinarily called the energy of activation, and we say that only the activated molecules, those which have an energy at least of this amount, can react. To understand why an energy of activation is required for a reaction, we may think about a hypothetical mechanism for the reaction. In our
X particular case of hydrogen and oxygen combining to form water, we imagine a collision in which two hydrogen molecules hit an oxygen molecule (this will be the way the collision will appear, for on account of their light mass the hydrogens will be traveling much faster than the oxygen in thermal equilibrium). During the collision, the atoms rear- range themselves to form two water molecules, which then fly apart with very groat energy (on account of the heat of reaction). Now, obviously, those particular collisions will be favored for reaction in which the atoms need the minimum rearrangement in the reaction, and a little reflection shows that the most favorable configuration is that shown in Fig. X-l (a), in which the velocities of the various atoms are shown by arrows. As we follow the successive sketches of Fig. X-l showing the progress of the collision, we see that the hydrogens approach the oxygens, attaining in (c) a shape very much like two water molecules. In the first part of the collision, (a) arid (6), the hydrogens have most of the kinetic energy. For a favorable reaction, however, the relations between the velocities of hydrogens and oxygons on collision must be such that the hydrogen gives up most of its kinotic energy to the oxygon. The condition for this can bo found from elementary considerations of conservation of momen- tum and kinetic energy on collision, and demands that the oxygen atoms be moving in the same direction as the hydrogens on collision but with considerably smaller velocity. That is, the oxygen molecule must have had considerable vibnition.il kinetic energy and the correct phase of vibration, while the hydrogens must have had large translational kinetic energy. Now in the second part of the collision, (W), (c), (/), and (g), the oxygens have most of the kinetic energy. They fly apart, carrying the hydrogens with them, and form the atoms into two water molecules. 
The hydrogens end up bound to the oxygens, but with some vibrational kinetic energy in the mode of vibration indicated by (g). We can now follow the energy relations in the reaction by drawing a suitable potential energy curve. The potential energy of the whole system, of course, depends on the positions of all the atoms and would have to be plotted as a function of many coordinates. We can simplify, however, by considering it as a function only of the distance r between the oxygen atoms. For each value of r, there will be a particular position for the hydrogen atoms that will correspond to a minimum of energy. Thus in (a), where the oxygen atoms are forming an oxygen molecule, the hydrogen molecules and the oxygen molecule will attract each other slightly, provided the oxygen-hydrogen distance is considerable, but will repel provided they come too close together, as we shall learn later when we consider intermolecular forces in imperfect gases. There will be a position of equilibrium, with the hydrogens a considerable distance, three or four angstroms, away from the oxygen, and with an energy of perhaps a fraction of a kilogram calorie lower than the energy at infinite separation of hydrogen from oxygen. Similarly in (g) the atoms are formed into two water molecules, and the minimum of energy of the hydrogens comes when they are at the distances and angles with respect to the oxygen which we find in a water molecule. We now show, in Fig. X-2, a sketch of the potential energy of the whole system, when the oxygens are at distance r, and the hydrogens are in their positions of minimum energy for each value of r.

Fig. X-2. Potential energy of 4H + 2O, as function of O-O distance r.

First, we ask how Fig. X-2 was constructed. When the oxygens are close together, forming an oxygen molecule, the energy of the hydrogens, being only intermolecular attraction, is small, and the curve is practically the interatomic energy for the oxygen molecule. This is the curve (a) of Fig. X-2, going to an energy at infinite separation which is greater by D (= 117 kg.-cal.) than at the minimum, which comes at 1.20 A. On the other hand, when the oxygens are far apart they form two water molecules. At infinite separation of these two molecules, the energy of the whole system is less by 117 kg.-cal. (= ΔH₀, which only happens to be equal to the D of the oxygen molecule by a coincidence) than when two hydrogen and one oxygen molecule are formed. The curve (b) shows the interaction between these water molecules. Starting with the asymptotic energy just mentioned, the curve rises with decreasing distance, because the two water molecules, set with their negative oxygen ions facing each other, repel each other on account of electrostatic repulsion of like charges. As the distance decreases, to something of the order of three angstroms, the molecules begin to hit each other, causing the curve (b) to rise steeply. Curves (a) and (b) both form limiting cases. For small r's, curve (a) must be correct, and for larger r's curve (b). The
That is, the repulsion which causes the rise in curve (6), Fig. X-2, is diminished and the actual curve does not continue to rise as curve (6) does. Now that we have the curve of Fig. X-2, we can apply it to the reac- tion as shown in Fig. X-l. The first part of the reaction, diagrams (a), (6), and (c) of Fig. X-l, cannot be represented directly on Fig. X-2, for in it the hydrogen molecules have a great deal of kinetic energy and aro by no means in the position of minimum potential energy. But by (r) of Fig. X-l the hydrogen atoms have given up most of their kinetic energy to the oxygens, and during the rest of the process the curve of Fig. X-2 applies fairly accurately. As far as the, first part of the process is concerned, we can interpret it as a process in which the oxygens had their vibrational energy increased from such a value as E } in Fig. X-2 to Et, symbolized by the arrow in the figure. When they had the energy K\ they simply vibrated back and forth for a short range about the distance r e of minimum energy. But with the energy E 2 the motion changes entirely: the oxygens fly apart, carrying the hydrogens with them to form water molecules and ending up with infinite separation of the molecules and a very high kinetic energy. And now we see the need of the energy of activation. From Fig. X-2 wo see that there is a maximum of potential energy between the minimum at i\ and the still lower value at infinite separation. For the water molecules to separate, the energy E% must lie higher than this maximum. But this energy is supplied, as we have seen, by the combined energy of all the colliding molecules before collision. We thus sec that the energy of activation Qi is to be interpreted as the height of this maximum above the minimum at r . A minimum of potential energy such as that at r e in Fig. 
X-2, separated by a maximum from a still lower region, is often met in atomic and molecular problems and is called a position of metastable equilibrium, or a metastable state. It is stable as far as small displacements are concerned, but a large displacement can push the system over the maximum, after which it does not return to the original position but to the entirely different configuration of really lowest potential energy. In all such cases, the rate of transition from the metastable to the really stable configuration, at temperature T, depends on a factor exp (−Q₁/kT), where Q₁ is the energy of activation, or height of the maximum above the minimum, for in all such cases it is only the molecules with energy greater than Q₁ that can react. Let us see how rapidly such a factor can depend on temperature. From Fig. X-2 it seems reasonable that in that case Q₁ could be of the order of magnitude of 40 kg.-cal. Then the factor exp (−Q₁/RT) will become exp (−40,000/1.98T) = exp (−20,000/T) approximately. For T = 300° abs., this factor is exp (−66.7) = 10⁻²⁹ approximately, while for T = 3000° abs. it is exp (−6.67) = 10⁻³ approximately. Thus an increase of temperature from room temperature to 3000° abs., which could easily be attained in a spark, might make a difference of 10²⁶ in the rate of reaction. A process that would take 10⁻¹⁶ sec. at the high temperature might take 10¹⁰ sec., or 3 × 10² yrs., at the low temperature, and would for all practical purposes never happen at all. This is an extreme but by no means an unreasonable example. We are now in a position to see why, though the energy of activation enters into the rate of reaction in such an important way, it does not affect the final equilibrium. The factor C(T) in Eq. (1.2), determining the rate of combination of hydrogen and oxygen molecules to form water, will contain a factor exp (−Q₁/kT), as we have seen. But a glance at Fig.
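The arithmetic of this estimate is easy to reproduce (a sketch using the values quoted in the text: Q₁ = 40,000 cal. per mole and R ≈ 1.98 cal. per mole per degree):

```python
import math

# Boltzmann factor exp(-Q1/RT) for the activation-energy estimate in the
# text: Q1 = 40,000 cal/mole, R = 1.98 cal/(mole deg), roughly exp(-20,000/T).
def activation_factor(t_abs, q1=40000.0, r=1.98):
    return math.exp(-q1 / (r * t_abs))

cold = activation_factor(300.0)    # room temperature: about 10**-29
hot = activation_factor(3000.0)    # spark temperature: about 10**-3
print(cold, hot)
print(hot / cold)                  # roughly a 10**26-fold increase in rate
```

The ratio of the two factors is the enormous acceleration quoted in the text, and it comes entirely from the exponential, not from any change in the collision frequency itself.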
X-2 shows that in the reverse reaction, in which two water molecules combine to form hydrogen and oxygen, the water molecules must have an energy at least equal to Q₁ + ΔH₀, so that they can climb over the maximum of potential energy and approach closely enough to form an oxygen molecule. Thus the probability of the reverse collision, given by Eq. (1.3), contains the factor C′(T) with the exponential exp [−(Q₁ + ΔH₀)/kT]. Finally in the coefficient K_P(T), given by Eq. (1.4), we must have C′(T)/C(T) with the factor exp {[−(Q₁ + ΔH₀) + Q₁]/kT}, which equals exp (−ΔH₀/kT), in agreement with Eqs. (2.6) and (2.9), the energy of activation canceling out. Of course, in this simple argument we have neglected such things as specific heats, so that we have not reproduced the whole form of Eq. (2.6) from a kinetic point of view, but this could be done if sufficient care were taken. There is one point about a reaction like the combination of two water molecules to form oxygen and hydrogen, which we have just mentioned, that is worth discussion. From Fig. X-2, if the molecules approach with energy E₂ sufficient to pass over the maximum of potential, they will not be trapped to form oxygen and hydrogen molecules unless the energy of the oxygens drops from E₂ to some value like E₁ during the collisions. This of course can happen by giving the excess energy to the hydrogen atoms, sending them shooting off as hydrogen molecules. But there are sometimes other reactions in which this cannot happen. For instance, consider the simple recombination reaction of two oxygen atoms to form an oxygen molecule, shown in Fig. X-3. Here if the atoms approach with energy E₂, there is nothing within the system itself able to absorb the necessary energy to make them fall down to the energy E₁, and be bound to form a molecule.

FIG. X-3. Recombination of atoms to form a molecule.
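The temperature sensitivity of the activation factor worked out above is easy to check numerically. The following sketch (Python, using the molar values of the text, Q₁ = 40,000 cal. per mole and R = 1.98 cal. per mole per degree) reproduces the estimates:

```python
import math

R = 1.98         # gas constant, cal / (mole deg)
Q1 = 40_000.0    # energy of activation, cal / mole (the 40 kg.-cal. of the text)

def activation_factor(T):
    """The Boltzmann factor exp(-Q1/RT): fraction of molecules able to react."""
    return math.exp(-Q1 / (R * T))

low = activation_factor(300.0)    # ~ 10**-29 at room temperature
high = activation_factor(3000.0)  # ~ 10**-3  at 3000 abs.
print(low, high, high / low)      # the ratio is of the order 10**26
```

The ratio of roughly 10²⁶ between the two factors is the enormous change in reaction rate quoted above.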
Such a recombination of two atoms can only occur if they happen to be in collision with a third body, atom or molecule, at the same time, which can absorb the excess energy and leave the scene of the collision with high velocity.

CHAPTER XI

THE EQUILIBRIUM OF SOLIDS, LIQUIDS, AND GASES

We have so far studied only perfect gases and have not taken up imperfect gases, liquids, and solids. Before we treat them, it is really necessary to understand what happens when two or more phases are in equilibrium with each other, and the familiar phenomena of melting, boiling, the critical point, and the continuity of the liquid and gaseous states. We shall now proceed to find the thermodynamic condition for the coexistence of two phases and shall apply it to a general discussion of the forms of the various thermodynamic functions for matter in all three states.

1. The Coexistence of Phases. It is a matter of common knowledge that at the melting point a solid and a liquid can exist in equilibrium with each other in any proportions, as can a liquid and vapor at the boiling point. There is no tendency for the relative proportions of the two phases, as they are called, to change with time. On the other hand, if we are not at the melting or boiling point, there is no such equilibrium. At 100°C., for instance, water vapor above atmospheric pressure will immediately start to condense, enough liquid forming so that the remaining vapor and the liquid will come to atmospheric pressure; while if water vapor at this temperature is below atmospheric pressure, enough liquid will evaporate or boil away to raise the pressure to one atmosphere, so that only at atmospheric pressure can the two coexist at 100°C. in arbitrary proportions. For each temperature the equilibrium takes place at a definite pressure; that is, we can give a curve, called the vapor pressure curve or in general the equilibrium curve, in the P-T plane, along which equilibrium occurs.
This curve separates those parts of the P-T plane where just one, or just the other, of the phases can exist. Thus in general, where a number of phases occur in different regions of the P-T plane, equilibrium lines separate the regions where each phase occurs separately. Along a line two phases can exist; where three lines join at a point, three phases can exist, and such a point is called a triple point. The resulting diagram is called a phase diagram. In the figures below, such a diagram is drawn for water, a familiar and in some ways a remarkable example. Figure XI-1 shows the diagram for a scale of pressures on which the critical point is represented by a reasonable value. The ordinary melting and boiling points, at 1 atm. and at 0°C. and 100°C. respectively, are easily found. We see that the boiling point rises rapidly to higher temperatures as the pressure is raised, until finally the critical point is reached, above which there is no longer any discontinuity between the phases. The melting point, on the other hand, is almost independent of pressure, decreasing as a matter of fact very slightly with increasing pressure. Figure XI-2 gives a different pressure scale, on which small fractions of an atmosphere can be noted. The triple point is immediately observed, corresponding to a low pressure and a temperature almost at 0°C., at which ice, water, and water vapor can exist at the same time, so that if a dish of water is cooled to this temperature in a suitable vacuum, a coating of ice will form and steam will bubble up from below the ice. Below this temperature, liquid water does not occur, but as we can see an equilibrium is possible between solid and gas.

FIG. XI-1. Phase diagram of water. Critical point: P_c = 218 atm., T_c = 374°C.
FIG. XI-2. Low-pressure phase diagram of water.

If the solid is reduced below the
pressure corresponding to equilibrium, it will evaporate directly into water vapor. (The triple point of Fig. XI-2 lies at P = 4.58 mm., T = 0.0075°C.) This is the way snow and ice disappear in weather below freezing; and it is a familiar fact that solid carbon dioxide, whose triple point lies at a pressure greater than atmospheric, evaporates by this method without passing through the liquid phase.

FIG. XI-3. High-pressure phase diagram of water.
FIG. XI-4. P-V-T surface for water.

In Fig. XI-3, the pressure scale is changed in the other direction, so that we show pressures up to 12,000 atm. Here the gaseous phase, which exists only for pressures up to a few hundred atmospheres, cannot be shown on account of the scale. On the other hand, a great deal of detail has appeared in the region of the solid. It appears that, in addition to the familiar form of ice, there are at least five other forms (the fifth exists at higher pressures than those shown in the figure). These forms, called polymorphic forms, presumably differ in crystal structure and in all their physical properties, such as density, specific heat, etc. The regions where these phases exist separately are divided by equilibrium lines, on which two of them can coexist in equilibrium, and a number of triple points are shown. Transitions from one phase to another along an equilibrium line are called polymorphic transitions. There has never been found any suggestion of a critical point, or termination of an equilibrium line with gradual coalescence of the two phases in properties, for any equilibrium between liquid and solid or between two solid phases. Critical points appear to exist only in the liquid-vapor transition.

2. The Equation of State.
The three figures that we have drawn give only part of the information about the phase equilibrium; for greater completeness, we should show the whole equation of state, the relation between pressure, temperature, and volume. This is done in Figs. XI-4 and XI-5, where P-V-T surfaces are shown in perspective, for the case of the liquid-vapor equilibrium and for the polymorphic forms in equilibrium with the liquid. A number of isothermals, lines of constant temperature, are drawn on each surface to make them easier to interpret.

FIG. XI-5. P-V-T surface for water, high pressure.

Some simple facts are immediately obvious from these surfaces. For instance, water is exceptional in that the solid, ice, has a larger volume than the liquid. As we see, this is true of ice I, the phase existing at low pressure, but it is not true of the other phases II, III, V, VI, all of which have smaller volumes than water at the corresponding pressure. Furthermore, we see that though water seems quite incompressible as far as the low-pressure surface in the first figure is concerned, the second figure shows that a pressure of 12,000 atm. produces a diminution of volume of about 20 per cent. Again, the melting point of ice I is hardly affected by the pressures indicated in the low-pressure surface, but the other surface shows that a pressure of about 2000 atm. lowers the melting point by more than 20°C. One interesting fact to notice is that a vertical line cutting either of the surfaces will cut it in just one point; that is, for a given volume and temperature, the pressure is uniquely determined. We shall see shortly that this can be shown to be true quite generally. As we see, the P-V-T surfaces are divided into a number of different regions with sharp edges separating them. In some of these regions one phase alone can exist, while in others two phases can coexist.
The regions of the second type, when projected onto the P-T plane, become the equilibrium lines that we have previously mentioned; thus they are ruled surfaces, the rulings being parallel to the volume axis. At a given pressure and temperature on an equilibrium line, in other words, the volume can have any value between two limiting values, the volumes of the two phases in question. The meaning of this is that there is a mixture of the two phases, so that the volume of the mixture depends on the relative concentrations of the two and is not really a property of either phase, but is a measure of the relative concentrations.

3. Entropy and Gibbs Free Energy. The equation of state does not alone determine the thermodynamic behavior of a substance; we must also know its specific heat, or its entropy or Gibbs free energy. We shall first give the entropy as a function of pressure and temperature. This can of course be determined entirely by experiment. We start with the solid at the absolute zero. There, according to the quantum theory, as we have seen in Chap. III, the entropy is zero. The entropy of the solid at a higher temperature can be found from the specific heat, for at constant pressure we have

S = ∫_0^T (C_P/T) dT.   (3.1)

Since, according to the quantum theory, the specific heat goes to zero at the absolute zero, the integral in Eq. (3.1) behaves properly at the absolute zero. By means of Eq. (3.1), we find the entropy of the solid at any temperature, at a given pressure; since the specific heat depends only slightly on pressure, this means practically that the entropy of the solid is a function only of temperature, not of pressure, on the scales used in Figs. XI-1 and XI-2, though not in Fig. XI-3. Next, we wish the entropy of the liquid and vapor. If the pressure is below that at the triple point, a horizontal line, or line of constant pressure, in the phase diagram will carry us from the region of solid into that of vapor. There
is a discontinuous change of entropy as we cross the line, equal to the heat absorbed (the latent heat of vaporization) divided by the temperature (the temperature of sublimation for the pressure in question). This change of entropy, which we may call the entropy of vaporization and denote by ΔS_v, is

ΔS_v = L_v/T_v,   (3.2)

where L_v, T_v are the latent heat and temperature of vaporization at the given pressure. Adding this change of entropy to the entropy of the solid (3.1) just below the sublimation point, we have the entropy of the vapor just above this point. Then, applying Eq. (3.1) to the vapor rather than the solid, we can follow to higher temperatures at constant pressure and find the entropy of the gas, as

S_g = ∫_0^{T_v} (C_Ps/T) dT + L_v/T_v + ∫_{T_v}^T (C_Pg/T) dT,   (3.3)

where C_Ps, C_Pl, C_Pg denote the specific heats of solid, liquid, and gas. In Eq. (3.3), the first term represents the entropy of the solid at the sublimation point, the second the increase of entropy on vaporizing, and the third the further increase of entropy from the sublimation point up to the desired temperature. If the pressure is above the triple point, the solid will first melt, then vaporize. In this case, we can proceed in a similar way. On melting, the entropy increases by the entropy of fusion, determined from the latent heat of fusion and temperature of fusion by the relation

ΔS_f = L_f/T_f,   (3.4)

analogous to Eq. (3.2). Then the entropy of the liquid at any temperature is

S_l = ∫_0^{T_f} (C_Ps/T) dT + L_f/T_f + ∫_{T_f}^T (C_Pl/T) dT.   (3.5)

As the temperature rises further, the liquid will vaporize and the entropy will increase by the entropy of vaporization. The gas above this temperature will then have the entropy

S_g = ∫_0^{T_f} (C_Ps/T) dT + L_f/T_f + ∫_{T_f}^{T_v} (C_Pl/T) dT + L_v/T_v + ∫_{T_v}^T (C_Pg/T) dT.   (3.6)

It is interesting to note that a relation between the latent heat of vaporization of the solid, the heat of fusion, and the heat of vaporization of the liquid, at the triple point, arises from the fact that Eqs.
(3.3) and (3.6) must give identical values for the entropy of the gas at the triple point. Since in this case T_f = T_v = T, the integrals involving the specific heats of the liquid and gas drop out, and we have at once the relation

L_v of solid = L_f + L_v of liquid, at the triple point.   (3.7)

In Fig. XI-6 we show the entropy of water in its three phases, as a function of pressure and temperature, computed as we have described above. We are struck by the resemblance of this figure to that giving the volume, Fig. XI-4; the entropy, like the volume, increases with increase of temperature or decrease of pressure. Lines of constant pressure are drawn in Fig. XI-6.

FIG. XI-6. Entropy of water as a function of pressure and temperature.

The regions of coexistence of phases are shown in Fig. XI-6 as in Fig. XI-4, and the latent heat is given by the length of the horizontal line lying in the region of coexistence, multiplied by the temperature. Graphs of the form of Fig. XI-6 (generally projected onto the T-S plane) are of considerable practical importance in problems involving thermodynamic cycles, as heat engines and refrigerators, on account of the fact that the isothermals are represented by lines of constant T and adiabatics by lines of constant S, so that the diagram of a Carnot cycle in such a plot is simply a rectangle. Furthermore, the area of a closed curve representing a cycle in the T-S diagram gives directly the work done in the cycle, just as it does in the P-V diagram. This is seen at once from the first and second laws in the form

T dS = dU + P dV.

Integrating around a closed cycle, we must have ∮ dU = 0, since U is a function of the state of the system. Hence

∮ T dS = ∮ P dV,   (3.8)

where ∮ indicates an integral taken about a complete cycle, and since ∮ P dV equals the work done, ∮ T dS must equal it also.
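The equality (3.8) of the two cycle areas can be verified directly for a perfect gas. The sketch below (Python) carries one mole of monatomic perfect gas around a Carnot cycle with hypothetical temperatures and volumes, integrates ∮ P dV numerically over the four legs, and compares the result with the rectangle area (T_h − T_c) ΔS in the T-S diagram:

```python
import math

R, n = 8.314, 1.0                       # gas constant J/(mole K); one mole
gamma = 5.0 / 3.0                       # monatomic perfect gas
Th, Tc = 600.0, 300.0                   # hypothetical reservoir temperatures, K
V1, V2 = 1.0, 2.0                       # volumes on the hot isothermal (illustrative)

r = (Th / Tc) ** (1.0 / (gamma - 1.0))  # volume ratio along an adiabatic
V3, V4 = V2 * r, V1 * r                 # volumes on the cold isothermal

def work(P, Va, Vb, steps=100_000):
    """Numerical integral of P(V) dV from Va to Vb (trapezoid rule)."""
    h = (Vb - Va) / steps
    s = 0.5 * (P(Va) + P(Vb)) + sum(P(Va + i * h) for i in range(1, steps))
    return s * h

iso = lambda T: (lambda V: n * R * T / V)                     # isothermal P(V)
adia = lambda T0, V0: (lambda V: n * R * T0 * V0 ** (gamma - 1) / V ** gamma)

W = (work(iso(Th), V1, V2) + work(adia(Th, V2), V2, V3)       # the four legs
     + work(iso(Tc), V3, V4) + work(adia(Tc, V4), V4, V1))

dS = n * R * math.log(V2 / V1)          # entropy absorbed on the hot isothermal
area_TS = (Th - Tc) * dS                # rectangle area in the T-S diagram
print(W, area_TS)                       # the two areas agree
```

The adiabatic legs contribute equal and opposite amounts of work, so the whole cycle reduces to the T-S rectangle, as Eq. (3.8) requires.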
FIG. XI-7. Gibbs free energy of water, as a function of pressure and temperature.

The Gibbs free energy G as a function of pressure and temperature is sketched in Fig. XI-7. It can also be found directly from experiment. At constant pressure, we have (∂G/∂T)_P = −S, or G = −∫ S dT; and since V = (∂G/∂P)_T, we have G = ∫ V dP at constant temperature; from a combination of these, the Gibbs free energy can be found from the equation of state and the specific heat. The surface of Fig. XI-7 looks quite different from those for volume and entropy; for while the volume and entropy change discontinuously with a change of phase, resulting in the ruled surfaces indicating coexistence of phases which are so characteristic of Figs. XI-4, XI-5, and XI-6, the Gibbs free energy must be equal for the two phases in equilibrium. This has already been discussed in Sec. 3, Chap. II, and follows from the fundamental property of the Gibbs free energy, that its value must be a minimum for equilibrium at given pressure and temperature. Thus, if there is equilibrium between two phases at a given pressure and temperature, a transfer of some material from one phase to the other cannot change the Gibbs free energy, so that the value of G must be the same for both phases. As we observe from Fig. XI-7, each phase has a different surface for G as a function of P and T, and the intersection of two of these surfaces gives the condition for equilibrium. It is interesting to notice the behavior of the surface near the critical point: the lines of constant pressure, which are drawn on the surface, have discontinuities of slope below the critical point but merely continuous changes of slope above this point. We shall see in a later chapter how such lines can come about mathematically.

4. The Latent Heats and Clapeyron's Equation.
There is a very important thermodynamic relation concerning the equilibrium between phases, called Clapeyron's equation, or sometimes the Clapeyron-Clausius equation. By way of illustration, let us consider the vaporization of water at constant temperature and pressure. On our P-V-T surface, the process we consider is that in which the system is carried along an isothermal on the ruled part of the surface, from the state where it is all liquid, with volume V_l, to the state where it is all gas, with volume V_g. As we go along this path, we wish to find the amount of heat absorbed. We can find this from one of Maxwell's relations, Eq. (4.12), Chap. II:

(∂S/∂V)_T = (∂P/∂T)_V.   (4.1)

The path is one of constant temperature, so that if we multiply by T this relation gives the amount of heat absorbed per unit increase of volume. But on account of the nature of the surface, (∂P/∂T)_V is the same for any point corresponding to the same temperature, no matter what the volume is; it is simply the slope of the equilibrium curve on the P-T diagram, which is often denoted simply by dP/dT (since in the P-T diagram there is only one independent variable, and we do not need partial derivatives). Then we can integrate and have the latent heat L given by

L = T (dP/dT) (V_g − V_l),   (4.2)

which is Clapeyron's equation. It is often written

dP/dT = L/[T(V_g − V_l)].   (4.3)

Clapeyron's equation holds, as we can see from its method of derivation, for any equilibrium between phases. In the general case, the difference of volumes on the right side of the equation is the volume after absorbing the latent heat L, minus the volume before absorbing it. There is another derivation of Clapeyron's equation which is very instructive. This is based on the use of the Gibbs free energy G.
In the last section we have seen that this quantity must be equal for two phases in equilibrium at the same pressure and temperature, and that if one phase has a lower value of G than another at given pressure and temperature, it is the stable phase and the other one is unstable. We can verify these results in an elementary way. We know that in going from liquid to vapor, the latent heat L is the difference in enthalpy between gas and liquid, or L = H_g − H_l. But if the change is carried out in equilibrium, the heat absorbed will also equal T ΔS, so that the latent heat will be T(S_g − S_l). Equating these values of the latent heat, we have

H_g − H_l = L = T(S_g − S_l), or H_g − T S_g = H_l − T S_l, G_g = G_l,   (4.4)

or our previous condition that the Gibbs free energy should be the same for the two phases in equilibrium. Since this must be true at each point of the vapor pressure line in the P-T plane, we can find the slope of the vapor pressure curve from the condition that, as P and T change in the same way for both phases, G_g and G_l must undergo equal changes. That is to say, we set

d(G_g − G_l) = 0 = −(S_g − S_l) dT + (V_g − V_l) dP = −L dT/T + (V_g − V_l) dP,

dP/dT = L/[(V_g − V_l)T],   (4.5)

which is Clapeyron's equation. In deriving Eq. (4.5), we used the relations (∂G/∂T)_P = −S, (∂G/∂P)_T = V, from Chap. II. Clapeyron's equation, as an exact result of thermodynamics, is useful in several ways. In the first place, we may have measurements of the equation of state but not of the latent heat. Then we can compute the latent heat. This is particularly useful for instance at high pressures, where measurements of volume and temperature are fairly easy, but where calorimetric measurements such as would be required to find the latent heat are very difficult.
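As a numerical illustration of this first use, the latent heat of vaporization of water at 100°C. can be computed from the equation of state alone. The sketch below (Python) uses rounded steam-table values, not data from this book, for the slope of the vapor pressure curve and the two volumes:

```python
T = 373.15        # boiling point at 1 atm., K
dP_dT = 3.61e3    # slope of vapor pressure curve near 100 C, Pa/K (steam-table value)
Vg = 1.673        # volume of saturated vapor, m^3/kg (steam-table value)
Vl = 1.04e-3      # volume of liquid water, m^3/kg

# Clapeyron's equation (4.2): L = T (dP/dT) (Vg - Vl)
L = T * dP_dT * (Vg - Vl)
print(L)          # ~ 2.25e6 J/kg, close to the calorimetric 2.26e6 J/kg
```

The agreement with the directly measured latent heat is the kind of consistency check described in the text.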
Or, in the second place, we may know the latent heat, and then we can find the slope of the equilibrium curve, and by integration we may find the whole curve. We shall discuss the application of this method to the vapor pressure curve in the next sections. Finally, we may have measurements of both the latent heat and the equilibrium curve, but may not be sure of their accuracy. We can test them, and perhaps improve them, by seeing whether the experimental values satisfy Clapeyron's equation exactly. If they do not, it is certain evidence that they are in error.

5. The Integration of Clapeyron's Equation and the Vapor Pressure Curve. The integration of Clapeyron's equation to get the vapor pressure curve over a liquid or solid from a measurement of the latent heat is one of its principal uses. We may write the integral of Eq. (4.3) in the form

P = ∫ [L/T(V_g − V_s)] dT.   (5.1)

This can be evaluated exactly if we know the latent heat L, the volume of the gas, and the volume of the solid, as functions of temperature. In many actual cases we know these only approximately, but we can use them to get an approximate vapor pressure curve. For instance, the simplest approximation is to assume that the latent heat is a constant, independent of temperature. Furthermore, in the case of low temperature, where the volume of the gas will be very large compared with the volume of the solid or liquid, we may neglect the latter and furthermore assume that the gas obeys the perfect gas law. Then V_g − V_s = nRT/P, approximately, and Eq. (4.3) becomes

dP/P = (L/nRT²) dT.   (5.2)

Equation (5.2) holds whether L is constant or not. Assuming it to be constant, we can integrate and find

ln P = −L/nRT + const., or P = const. e^(−L/nRT).   (5.3)

Equation (5.3), giving P in terms of T, gives a first approximation to a vapor pressure curve.
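Fixing the constant in Eq. (5.3) from a single measured point gives a working vapor pressure formula. The sketch below (Python) anchors the curve at the normal boiling point of water and uses a rounded molar heat of vaporization, L ≈ 40,700 joules per mole, an illustrative value not taken from this book; the prediction at 80°C. then falls within a few per cent of the measured 47.4 kPa:

```python
import math

R = 8.314                          # gas constant, J/(mole K)
L = 40_700.0                       # latent heat of vaporization, J/mole (assumed constant)
T_ref, P_ref = 373.15, 101_325.0   # one measured point: the normal boiling point

def vapor_pressure(T):
    """Eq. (5.3): P = const * exp(-L/RT), the constant fixed by the reference point."""
    return P_ref * math.exp(-(L / R) * (1.0 / T - 1.0 / T_ref))

print(vapor_pressure(353.15))      # ~ 4.8e4 Pa at 80 C
```

The small residual error reflects exactly the neglected variation of L with temperature that Eq. (5.8) below supplies.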
Plainly it approaches e^(−∞) = 0 at low temperature, while as the temperature increases the pressure rapidly increases, agreeing with the observed form of the curve. By making more elaborate assumptions, taking account of the variation of L with pressure and temperature and the deviation of the volume of the gas from that for a perfect gas, the equation of the vapor pressure curve can be obtained as accurately as we please. The formula (5.3) in particular becomes entirely unreliable near the critical point. Since the latent heat approaches zero as we approach the critical point, and the volumes of liquid and gas approach each other at the same place, the ratio dP/dT becomes indeterminate and more accurate work is necessary to find just what the slope of the vapor pressure curve is. In spite of this difficulty, the formula (5.3) is a very useful one at temperatures well below the critical point. The constant factor, of course, must be obtained, as far as thermodynamics is concerned, from a measurement of vapor pressure at one particular temperature. To find the correct vapor pressure equation, we shall determine the variation of latent heat with temperature. In introducing the enthalpy H = U + PV in Chap. II, we saw that the change in enthalpy in any process equalled the heat absorbed at constant pressure. Now the latent heat is absorbed at constant pressure; therefore it equals the change of the enthalpy between solid and gas. That is,

L = H_g − H_s.   (5.4)

Now we can find the change in L for an arbitrary change of pressure and temperature. We have

dL = [V_g − V_s − T(∂V_g/∂T)_P + T(∂V_s/∂T)_P] dP + (C_Pg − C_Ps) dT,   (5.5)

where we have used thermodynamic relations from the table in Chap. II. Now we assume, as in the last paragraph, that the volume of the solid can be neglected and that the gas obeys the perfect gas law. The gas law gives at once V_g − T(∂V_g/∂T)_P = 0. Thus the first bracket is zero, and we have

dL/dT = C_Pg − C_Ps.   (5.6)

In Chap.
VIII we have expressed the specific heat of a gas as

C_Pg = (5/2)nR + nC_i,   (5.7)

where C_i is the contribution of the internal motions of the molecules, per mole. Introducing this value into Eq. (5.6) and working with one mole, we can integrate Eq. (5.6) to get the latent heat L in terms of L_0, the latent heat of vaporization at the absolute zero:

L = L_0 + (5/2)RT + ∫_0^T (C_i − C_Ps) dT.   (5.8)

Dividing by RT² and integrating with respect to T, we have from Eq. (5.2)

ln P = −L_0/RT + (5/2) ln T + (1/R) ∫_0^T (dT/T²) ∫_0^T (C_i − C_Ps) dT + const.   (5.9)

This expression, or the corresponding one

P = const. T^(5/2) e^(−L_0/RT) exp [(1/R) ∫_0^T (dT/T²) ∫_0^T (C_i − C_Ps) dT],   (5.10)

is the complete formula for a vapor pressure curve, as obtained from thermodynamics, in the region where the vapor behaves like a perfect gas. We shall see in the next section that statistical mechanics can supply the one missing feature of Eqs. (5.9) and (5.10): it can give the explicit value of the undetermined constant. It is interesting to note the behavior of the latent heat of vaporization, as given by Eq. (5.8), through wide temperature ranges. At low temperatures, since C_Ps and C_i both are very small, the latent heat increases with temperature. This tendency is reversed, however, as the specific heat of the solid becomes greater than that of the gas, which it always does. The latent heat then begins to fall again. At the triple point, as we have stated, the latent heat of vaporization of the solid just below the triple point equals the sum of the latent heat of fusion and the latent heat of vaporization of the liquid, directly above the triple point. Above this temperature, Eq. (5.8) is to be replaced by one in which C_P of the liquid appears rather than that of the solid. This allows us to use the same sort of method for finding the vapor pressure over a liquid. Finally, as the temperature approaches the critical point, it is no longer correct to approximate the vapor by a perfect gas, so that neither Eq.
(5.2) nor (5.8) is applicable, though we already know that the latent heat approaches zero at the critical point, and of course Clapeyron's equation can be applied here as well as elsewhere.

6. Statistical Mechanics and the Vapor Pressure Curve. From statistical mechanics, we know how to write down the Gibbs free energy of the solid and of the perfect gas explicitly. All we need do, then, to find the complete equation of the vapor pressure curve is to equate these quantities. Thus, remembering that (∂G/∂T)_P = −S, we can write the Gibbs free energy of the solid

G_s = U_0 − ∫_0^T S dT = U_0 − ∫_0^T dT ∫_0^T (C_Ps/T) dT.   (6.1)

Using Eqs. (1.19) and (1.20) of Chap. VIII, this can be rewritten

G_s = U_0 − T ∫_0^T (dT/T²) ∫_0^T C_Ps dT.   (6.2)

Here U_0 is the internal energy, or free energy, at the absolute zero, a function of pressure only. Next we wish the Gibbs free energy of the gas. We use Eq. (1.25) of Chap. VIII. We note, however, that U_0 is used in that equation in a different sense from what it has been here; for there it means the internal potential energy of the gas at absolute zero, while here we have used it for the internal energy of the solid at absolute zero. It is plain that the energy of the gas at absolute zero must be greater than that of the solid by just the latent heat of vaporization at the absolute zero, or L_0. Using this fact, we have

G_g = U_0 + L_0 − (5/2)RT ln T + RT ln P − iRT − T ∫_0^T (dT/T²) ∫_0^T C_i dT.   (6.3)

Equating Eqs. (6.2) and (6.3), we have

ln P = −L_0/RT + (5/2) ln T + (1/R) ∫_0^T (dT/T²) ∫_0^T (C_i − C_Ps) dT + i.   (6.4)

Equation (6.4) is the general one for vapor pressure, and it shows that the undetermined constant in ln P, in Eq. (5.9), is just the chemical constant i that we have already determined in Eq. (3.16) of Chap. VIII. The simplest experimental method of finding the chemical constants is based on Eq.
(6.4): one measures the vapor pressure as a function of the temperature, finds the specific heats of solid and gas, so that one can calculate the term in the specific heats, and computes the quantity ln P − (5/2) ln T − (1/R) ∫_0^T (dT/T²) ∫_0^T (C_i − C_Ps) dT as a function of temperature. Plotting it as a function of 1/T, Eq. (6.4) says that the result should be a straight line, whose slope is −L_0/R and whose intercept on the axis 1/T = 0 should be the chemical constant i. This gives a very nice experimental way of checking our whole theory of vapor pressure and chemical equilibrium: the same chemical constants obtained from vapor pressure measurements should correctly predict the results of chemical equilibrium experiments. It is found that in fact they do, within the error of experiment.

7. Polymorphic Phases of Solids. Experimentally, polymorphism at high pressures is ordinarily observed by discontinuous changes of volume. As the pressure is changed at constant temperature, the volume changes smoothly as long as we are dealing with one phase only. At the equilibrium pressure, however, the volume suddenly changes discontinuously to another value, which of course is always smaller for the high-pressure modification. Then another smooth change continues from the transition pressure. The measurement thus gives not only the pressure and temperature of a point on the equilibrium line, but the change of volume as well. Clapeyron's equation of course applies to equilibrium lines between solids, and that means that from the observed slope of the transition line and the observed change of volume, we can find the latent heat of the transition, even though a direct thermal measurement of this latent heat might be very difficult. Thus we can find energy and entropy differences between phases. It is very hard to say anything of value theoretically about polymorphic transitions. The changes of internal energy and entropy between phases are ordinarily quite small.
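The determination of the latent heat of a polymorphic transition from the measured slope and volume change can be sketched in the same way as for vaporization. The numbers below are purely hypothetical, chosen only for the orders of magnitude typical of such transitions:

```python
T = 250.0       # temperature of the transition point, K (hypothetical)
dP_dT = 1.0e7   # slope of the solid-solid transition line, Pa/K (hypothetical)
dV = -2.0e-6    # change of volume on transition, m^3/mole (hypothetical)

# Clapeyron's equation applied to an equilibrium line between two solids:
L = T * dP_dT * dV   # latent heat of the transition, J/mole
S_diff = L / T       # entropy difference between the phases, J/(mole K)
print(L, S_diff)     # negative L: heat is given out in this transition
```

A few thousand joules per mole is indeed small compared with the heats of vaporization above, which is why, as the text remarks, a direct calorimetric measurement would be difficult while the volumetric one is easy.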
Any calculation that we should try to make of the thermodynamic properties of each phase separately would have errors in both these quantities, at least of the order of magnitude of the difference which is being sought. Thus it is almost impossible to predict theoretically which of two phases should be stable under given conditions, or where the equilibrium line between them should lie. Nature apparently is faced with essentially the same problem, for in many cases polymorphism seems to be a haphazard phenomenon. It has been impossible to make any generalizations or predictions as to what substances should be polymorphic and what should not, and in many cases substances that are similar chemically show quite different behavior as to polymorphism. We can, however, say a little from thermodynamics as to the stability of phases and the nature of equilibrium lines. We can think of two limiting sorts of transitions: one in which the transition always occurs at the same temperature independent of pressure, the other where it is always at the same pressure independent of temperature. These would correspond to vertical and horizontal lines respectively in Fig. XI-3. In Clapeyron's equation dP/dT = L/TΔV, these correspond to the cases dP/dT = ∞ or 0 respectively. Thus in the first case we must have ΔV = 0, or the two phases have the same volume, in which case pressure does not affect the transition. And in the second case L = 0, or ΔS = 0; there is no latent heat, or the two phases have the same entropy, in which case temperature does not affect the transition. Put differently, increase of pressure tends to favor the phase of small volume, increase of temperature favors the phase of large entropy. Of course, in actual cases we do not ordinarily find two phases with just the same volume or just the same entropy. On account of the parallelism between the entropy and the volume,
thore is a tendency for n phase of larger volume also to haveji Jarger_on topy Thus t.ho fonHpuny isjfor_the latent heat and theHiange of vojumMpJha/vo the same sign. so that byjClapeyron's equation dP/dT tends to be positive, or the equilib- rium lines tend to slope upward tcTflie " ngliTTTi the phase diagram. A statistical study of the phase diagrams of many substances shows that in fact this is the case, though of course there an 1 many exceptions. In fact, there is^evenji tendency joward a fairly definite slope dP/dT charac- teristic of many substances, which according; To Bridgman 1 is a change of somcfhing less ilian 12,000 atm. for a temperature range'oT 200. In each region of the P-T diagram then 1 Ts only one stable phase, except on equilibrium lines or at triple points where there arc 1 two or three respectively. But a phenomenon analogous to supercooling is very widespread in transitions between solids. Particularly at temperatures well below the melting point, transitions occur very slowly. A stable phase can often be carried into a region where it is unstable, by change of pressure or temperature, and it may take a very long time to change over to the phase stable at- that pressure and temperature. This makes it- very hard in many cases to determine equilibrium lines with great accu- racy, for near equilibrium the transitions tend to be slower than far from equilibrium. It also makes it hard to continue investigations of poly- morphism to low temperatures. In Fig. XI-3, for instance, the lines are continued about as far toward low temperatures as it is practicable to go. Sometimes these slow transitions can be of practical value, as in the case of alloys. It often happens that a modification stable at high tempera- ture, but unstable at room temperature, has properties that arc; desirable for ordinary use. In such a case the material can often be quenched and cooled very rapidly from the high temperature 1 at which the desired modi- fication is stable. 
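Clapeyron's equation dP/dT = L/TΔV, used above both to extract latent heats from measured transition lines and to classify the limiting cases, is simple enough to sketch directly; the numbers below are purely illustrative:

```python
def clapeyron_slope(latent_heat, T, dV):
    """Slope of an equilibrium line: dP/dT = L / (T * dV)."""
    return latent_heat / (T * dV)

def latent_heat_from_slope(dPdT, T, dV):
    """Invert Clapeyron's equation: L = T * dV * dP/dT."""
    return dPdT * T * dV

# Illustrative solid-solid transition: L = 500 J/mole, T = 300 K,
# volume change 1e-6 m^3/mole (both positive, so dP/dT > 0, the
# tendency noted in the text).
s = clapeyron_slope(500.0, 300.0, 1e-6)
```

The inversion is the experimentally useful direction: an observed slope and volume change give the latent heat without a thermal measurement.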
The material is almost instantly cooled so far below its melting point that the transition to the phase stable at room temperature is so slow as to be negligible for practical purposes. Thus the desired phase is made practically permanent at room temperature, though it may not be thermodynamically stable. The ordinary process of hardening or tempering steel by quenching is an example of this process. In some cases such unstable phases change over to the stable form in a period of years, but in the really valuable cases the rate is so slow that it can be disregarded even for many years. A moderate heating, however, can accelerate the process so much as to change the properties entirely, as a moderate heating can destroy the temper or hardness of steel.

¹ P. W. Bridgman, Proc. Am. Acad., 72, 45 (1937); see p. 129.

CHAPTER XII

VAN DER WAALS' EQUATION

Real gases do not satisfy the perfect gas law PV = nRT, though they approach it more and more closely as they become less and less dense. There is no simple substitute equation which describes them accurately. There is, however, an approximate equation called Van der Waals' equation, which holds fairly accurately for many gases and which is so simple and reasonable that it is used a great deal. This equation is not really one that can be exactly derived theoretically at all. Van der Waals, when he worked it out, thought he was giving a very general and correct deduction, but it has since been seen that his arguments were not conclusive. Nevertheless it is a plausible equation physically, and it is so simple and convenient that it is very valuable just as an empirical formula. We shall give, first, simply a qualitative argument justifying the equation, then show to what extent it really follows from statistical mechanics.
Being an equation of state, thermodynamics by itself can give no information about it; we remember that equations of state have to be introduced into thermodynamics as separate postulates. Only statistical mechanics can be of help in deriving it.

1. Van der Waals' Equation. Van der Waals argued that the perfect gas law needed revision for real gases on two accounts. In the first place, he considered that the molecules of real gases must attract each other, exerting forces on each other which are neglected in deriving the perfect gas law. The fact that gases condense to form liquids and solids shows this. Surely the only thing that could hold a liquid or solid together would be intermolecular attractions. These attractions he considered as pulling the gas together, just as an external pressure would push it together. There is, in other words, an internal pressure which can assist the external pressure. In a liquid or solid, the internal pressure is great enough so that even with no external pressure at all it can hold the material together in a compact form. In a gas the effect is not so great as this, but still it can decrease the volume compared to the corresponding volume of a perfect gas.

To find the way in which this internal pressure depends on the volume, Van der Waals argued in the following way. Consider a square centimeter of surface of the gas. The molecules near the surface will be pulled in toward the gas by the attractions of their neighbors. For, as we see in Fig. XII-1, these surface molecules are subjected to unbalanced attractions, while a molecule in the interior will have balanced forces from the molecules on all sides. Now the range of action of these intermolecular forces is found to be very small. Thus only the immediate neighbors will be pulling a given molecule to any extent.
To indicate this, we have drawn a thin layer of gas near the surface, including all the molecules exerting appreciable forces on the surface molecules. Now the total force on one surface molecule will be proportional to the number of molecules that pull it. That is, it will be proportional to the number of molecules per unit volume, times the volume close enough to the molecule to contribute appreciably to the attraction. The total force on all the molecules in a square centimeter of the surface layer will be proportional to the number of molecules in this square centimeter times the force on each, so that it will be proportional to the square of the number of molecules per unit volume, or to (N/V)², if there are N molecules in the volume V, or to (n/V)², where n is the number of moles. But the force on the molecules in a square centimeter of surface area is just the internal pressure, so that

Internal pressure = a(n/V)²,  (1.1)

where a is a constant characteristic of the gas.

FIG. XII-1. Intermolecular attractions.

The second correction which Van der Waals made was on account of the finite volume of the molecules. Suppose the actual molecules of a gas were rather large and that the density was such that they filled up a good part of the total volume. Then a single one of the molecules which we might consider, batting around among the other molecules, would really not have so large a space to move in as if the other molecules were not there. Instead of having the whole volume V at its disposal, it would move much more as if it were in a smaller volume. If there are n moles of molecules present, and the reduction in effective volume is b per mole, then it acts as if its effective volume were

Effective volume = V − nb.  (1.2)

If the volume were reduced by this amount, the pressure would be correspondingly increased, since the molecule would collide with any element of surface more often.
Making both these corrections, then, Van der Waals assumed that the equation of state of an imperfect gas was

(P + a(n/V)²)(V − nb) = nRT.  (1.3)

This is Van der Waals' equation. We shall later come to the question of how far it can be justified theoretically by statistical mechanics. First, however, we shall study its properties as an equation of state and see how useful it is in describing the equilibrium of phases.

2. Isothermals of Van der Waals' Equation. In Fig. XII-2 we give isothermals as computed by Van der Waals' equation. At first glance, they are entirely different from the actual isothermals of a gas, as shown in perspective in Fig. XI-4, because for low temperatures the isothermals show a maximum and minimum, the minimum corresponding in some cases to a negative pressure. But a little reflection shows that this situation is not alarming. We note that there is one isothermal at which the maximum and minimum coincide, so that there is a point of inflection of the curve here. This is the point marked C on Fig. XII-2. At every lower temperature, there are three separate volumes corresponding to pressures lying between the minimum and maximum of the isothermal. This is indicated by a horizontal line, corresponding to constant pressure, which is drawn in the figure and which intersects one of the isothermals at V₁, V₂, and V₃.

FIG. XII-2. Isothermals of Van der Waals' equation.

We may now ask, given the pressure and temperature as determined by this horizontal line and this isothermal respectively, which of the three volumes will the substance really have? This is a question to which there is a perfectly definite answer. Thermodynamics directs us to compute the Gibbs free energy of the material in each of the three possible states, and tells us that the one with the lowest Gibbs free energy will be the stable state. The material in one of the other states, if it existed, would change irreversibly to this stable state.
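The three volumes cut out by such a horizontal line can be found numerically. In the reduced variables P/P_c, V/V_c, T/T_c that appear later in Eq. (2.7), the isothermal takes the universal form p = 8t/(3v − 1) − 3/v², and a simple scan-and-bisect root finder exhibits the three intersections at a subcritical temperature (a sketch of the geometry, not a procedure from the text):

```python
def p_reduced(v, t):
    """Reduced Van der Waals isothermal: p = 8t/(3v - 1) - 3/v**2."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v ** 2

def volume_roots(p, t, vmin=0.35, vmax=6.0, n=6000):
    """All reduced volumes v with p_reduced(v, t) == p, by grid scan plus bisection."""
    f = lambda v: p_reduced(v, t) - p
    xs = [vmin + i * (vmax - vmin) / n for i in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) < 0.0:
            for _ in range(80):
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Below the critical temperature (here t = 0.9) a pressure between the
# minimum and maximum of the isothermal cuts it in three volumes.
roots = volume_roots(0.6, 0.9)
```

The three roots correspond to V₁, V₂, and V₃ in Fig. XII-2; which of them is actually realized is the free-energy question answered in the next sections.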
We shall actually compute the free energy in the next section, and shall find which state has the lowest value. The situation proves to be the following. Suppose we go along an isothermal, increasing the volume at constant temperature, and suppose the isothermal lies below the temperature corresponding to C. Then at first the smallest volume V₁ has the smallest Gibbs free energy. A pressure is reached, however, at which volumes V₁ and V₃ correspond to the same free energy. At still lower pressures, V₃ corresponds to the lowest free energy. The state V₂ has a higher free energy than either V₁ or V₃ under all conditions, so that it is never stable. We see, then, that above a certain pressure (below a certain volume) the state of smallest volume is stable, at a definite pressure this state and that of largest volume can exist together in equilibrium, and below this pressure only the phase of largest volume can exist. But this is exactly the behavior to be expected from experience with actual changes of phase.

The isothermals of Van der Waals' equation, then, correspond over part of their length to states that are not thermodynamically stable, in the sense that their free energy is greater than that of other states, also described by the same equation, at the same pressure and temperature. In Fig. XII-3 we give revised isothermals, taking this change of phase into account. In this figure, corresponding to each pressure and temperature, only the stable phase is shown. In the region where two phases are in equilibrium, we draw horizontal lines, as usual in such diagrams, indicating that the pressure and temperature are constant over the whole range of volumes between the two phases, in which the stable state of the system is a mixture of phases. The isothermals of Fig. XII-3 are plainly very similar to those of actual gases and liquids.

FIG. XII-3. Isothermals of Van der Waals' equation, showing equilibrium of liquid and gas.
From Fig. XII-3, it is plain that the critical point is the point C of Fig. XII-2, at which the maximum and the minimum of the isothermal coincide. We can easily find the pressure, volume, and temperature of the critical point in terms of the constants a and b, from this condition. The most convenient way to state the condition analytically is to demand that the first and second derivatives of P with respect to V for an isothermal vanish simultaneously at the critical point. Thus, denoting the critical pressure, volume, and temperature by P_c, V_c, T_c, we have

P_c = nRT_c/(V_c − nb) − n²a/V_c²,  (2.1)

nRT_c/(V_c − nb)² = 2n²a/V_c³,  (2.2)

2nRT_c/(V_c − nb)³ = 6n²a/V_c⁴.  (2.3)

We can solve Eqs. (2.1), (2.2), (2.3) simultaneously for P_c, T_c, and V_c. Dividing Eq. (2.3) by Eq. (2.2) we at once find V_c. Substituting this in Eq. (2.2), we can solve for T_c. Substituting both in Eq. (2.1), we find P_c. In this way we obtain

P_c = a/27b²,  V_c = 3nb,  RT_c = 8a/27b.  (2.4)

Equations (2.4) give the critical point in terms of a and b. Conversely, from any two of the Eqs. (2.4) we can solve for a and b in terms of the critical quantities. Thus, from the first and third, we have

a = 27R²T_c²/64P_c,  b = RT_c/8P_c.  (2.5)

These equations allow us to make a calculation of the critical volume:

V_c (Van der Waals) = 3nb = (3/8)(nRT_c/P_c).  (2.6)

If Van der Waals' equation were satisfied exactly by the gas, the critical volume determined in this way from the critical pressure and temperature should agree with the experimentally determined critical volume. That this is not the case will be shown in a later chapter. The real critical volume and (3/8)(nRT_c/P_c) are not far different, but the latter is larger. This is one of the simplest ways of checking the equation and seeing that it really does not hold accurately, though it is qualitatively reasonable.
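Equations (2.4)–(2.6) are easy to check numerically. Using rough critical data for carbon dioxide (T_c ≈ 304 K, P_c ≈ 7.38 × 10⁶ Pa; these numbers, and the measured critical volume of roughly 94 cm³/mole, are approximate literature values, not from the text), a and b follow from Eq. (2.5), and the predicted critical volume of Eq. (2.6) indeed comes out larger than the measured one:

```python
R = 8.314  # J/(mole K)

def vdw_constants(Tc, Pc):
    """Eq. (2.5): a = 27 R^2 Tc^2 / (64 Pc), b = R Tc / (8 Pc)."""
    a = 27.0 * R ** 2 * Tc ** 2 / (64.0 * Pc)
    b = R * Tc / (8.0 * Pc)
    return a, b

# Approximate critical data for CO2 (rounded).
Tc, Pc = 304.1, 7.38e6   # K, Pa
a, b = vdw_constants(Tc, Pc)

Vc_predicted = 3.0 * b                 # Eq. (2.6) with n = 1, equals (3/8) R Tc / Pc
Pc_back = a / (27.0 * b ** 2)          # consistency checks against Eq. (2.4)
Tc_back = 8.0 * a / (27.0 * R * b)
```

The predicted critical volume is about 128 cm³/mole against a measured value near 94 cm³/mole, the discrepancy the text uses as one of the simplest checks on the equation.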
Using the values of P_c, V_c, and T_c from Eq. (2.4), we can easily write Van der Waals' equation with a little manipulation in the form

(P/P_c + 3V_c²/V²)(3V/V_c − 1) = 8T/T_c.  (2.7)

This form of the equation is expressed in terms of the ratios P/P_c, V/V_c, T/T_c, showing that if the scales of pressure, volume, and temperature are adjusted to bring the critical points into coincidence, the Van der Waals equations for any two gases will agree. This is called the law of corresponding states. Real gases do not actually satisfy this condition at all accurately, so that this is another reason to doubt the accuracy of Van der Waals' equation.

3. Gibbs Free Energy and the Equilibrium of Phases for a Van der Waals Gas. We have seen that the equilibrium between the liquid and vapor phase is determined by setting the Gibbs free energy equal for the two phases. Let us carry out this calculation for Van der Waals' equation. From Eq. (4.2), Chap. II, we have dG = V dP − S dT. Thus we can calculate the Gibbs free energy by integrating this expression. We are interested only in comparing free energies at various points along an isothermal, however, and for constant temperature we can set the last term equal to zero, so that dG = V dP along an isothermal. This is not a convenient form for calculation, unfortunately, for Van der Waals' equation cannot be solved for the volume in terms of the pressure conveniently. It involves the solution of a cubic equation, and this can usually be avoided by some means or other. To avoid this difficulty, we shall instead compute the Helmholtz free energy A = G − PV, and then find the Gibbs free energy from it. We have dA = −P dV − S dT = −P dV for an isothermal process. Thus

A = −∫P dV + function of temperature
  = −∫[nRT/(V − nb) − n²a/V²] dV + function of temperature
  = −nRT ln (V − nb) − n²a/V + function of temperature,  (3.1)

and for the Gibbs free energy we have

G = A + PV = PV − nRT ln (V − nb) − n²a/V + function of temperature
  = nRTV/(V − nb) − 2n²a/V − nRT ln (V − nb) + function of temperature.  (3.2)

Equation (3.2) expresses G as a function of volume and temperature. We wish it as a function of pressure and temperature, and it cannot conveniently be put in this form in an analytic way, on account of the difficulty of solving Van der Waals' equation for the volume. It is an easy matter to compute a table of values, however. We plot curves, like Fig. XII-3, for pressure as a function of volume in Van der Waals' equation, compute values of G from Eq. (3.2) for a number of values of the volume, and read off the corresponding pressures from the curves of pressure against volume. In this way the curve of Fig. XII-4 was obtained. In this, the Gibbs free energy is plotted as a function of pressure, for a particular temperature (T = 0.95T_c in this particular case). It is seen that for a range of pressures, which in this case runs from about P = 0.74P_c to P = 0.84P_c, there are three values of G for each pressure, of which the lowest one represents the stable state. The lowest curves cross at about P = 0.82P_c, which therefore represents the vapor pressure or point of equilibrium between the phases, at this temperature. Comparison with Fig. XI-7 shows that Fig. XII-4 really represents the correct form of this function; it corresponds to a section of the solid shown in Fig. XI-7 cut at constant temperature with suitable rotation of axes.

FIG. XII-4. Gibbs free energy vs. pressure, at constant temperature, for Van der Waals' equation, at T = 0.95T_c.

FIG. XII-5. Vapor pressure by Van der Waals' equation compared with values for H₂O, CO₂.
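The graphical procedure just described — computing G along an isothermal from Eq. (3.2) and finding where the liquid and gas branches cross — can be carried out numerically. The sketch below works in reduced units (setting a = 3, b = 1/3, R = 8/3, so that P_c = V_c = T_c = 1) and bisects on the pressure until the smallest- and largest-volume states have equal Gibbs free energy; at T = 0.95T_c it reproduces a vapor pressure close to the 0.82P_c quoted above:

```python
import math

# Reduced units: with a = 3, b = 1/3, R = 8/3 the critical point sits at
# P_c = V_c = T_c = 1 (compare Eqs. (2.4) and (2.7)).
A_VDW, B_VDW, R_GAS = 3.0, 1.0 / 3.0, 8.0 / 3.0

def pressure(v, t):
    return R_GAS * t / (v - B_VDW) - A_VDW / v ** 2

def gibbs(v, t):
    """G = A + PV with A from Eq. (3.1); the 'function of temperature'
    is dropped, since it cancels when comparing points on one isothermal."""
    helmholtz = -R_GAS * t * math.log(v - B_VDW) - A_VDW / v
    return helmholtz + pressure(v, t) * v

def volume_roots(p, t, vmin=0.36, vmax=60.0, n=8000):
    """Volumes on the isothermal at pressure p: scan for sign changes, then bisect."""
    f = lambda v: pressure(v, t) - p
    xs = [vmin + i * (vmax - vmin) / n for i in range(n + 1)]
    roots = []
    for lo, hi in zip(xs, xs[1:]):
        if f(lo) * f(hi) < 0.0:
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

def vapor_pressure(t):
    """Pressure at which liquid and gas branches have equal Gibbs free energy."""
    # Three states exist only between the local min and max of the isothermal.
    xs = [0.36 + i * (30.0 - 0.36) / 8000 for i in range(8001)]
    ps = [pressure(v, t) for v in xs]
    plo = min(ps[i] for i in range(1, 8000) if ps[i] < ps[i - 1] and ps[i] < ps[i + 1])
    phi = max(ps[i] for i in range(1, 8000) if ps[i] > ps[i - 1] and ps[i] > ps[i + 1])
    for _ in range(40):
        p = 0.5 * (plo + phi)
        r = volume_roots(p, t)
        if gibbs(r[-1], t) < gibbs(r[0], t):   # gas branch lower: equilibrium is higher
            plo = p
        else:
            phi = p
    return 0.5 * (plo + phi)

p_vap = vapor_pressure(0.95)
v_roots = volume_roots(p_vap, 0.95)
```

The bracketing pressures found for the three-valued region also reproduce the text's range of roughly 0.74P_c to 0.84P_c at this temperature.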
We remember that dG = V dP at constant temperature. That is, (∂G/∂P)_T = V, or the slope of the curve in Fig. XII-4 measures the volume. Clearly at smaller pressure the stable state is that with greater slope or greater volume, while the state of smallest volume is stable at the high pressures. The figure makes it clear why the phase of intermediate volume (V₂ on Fig. XII-2) is not stable at any pressure, since its free energy is never lower than that of the other two phases. The discontinuity in slope of the Gibbs free energy at the point of equilibrium between phases measures the change of volume in vaporization. This discontinuity becomes less and less as the temperature approaches the critical point, and the small pointed loop in the G curve diminishes, until finally at the critical point it disappears entirely, and the curve becomes smooth.

In the way we have just described, we can find the Gibbs free energy for each temperature and determine the vapor pressure, and hence the correct horizontal line to draw on Fig. XII-2. This gives us the vapor pressure curve, and we show this directly in Fig. XII-5. For comparison, we have plotted on a reduced scale the vapor pressures of water and carbon dioxide. We see that while the general form of the curve predicted by Van der Waals' equation agrees with the observed curves, the actual gases show a vapor pressure which diminishes a good deal more rapidly with decreasing temperature than the Van der Waals gas.

Our methods also allow us to calculate the latent heat of vaporization from Van der Waals' equation. We have

L = U_g − U_l + P(V_g − V_l).

Furthermore, we can see by integrating the equation

(∂U/∂V)_T = T(∂P/∂T)_V − P

that U = −n²a/V + function of temperature for a Van der Waals gas. Thus

L = n²a(1/V_l − 1/V_g) + P(V_g − V_l).  (3.3)

FIG. XII-6. Latent heat as function of temperature for Van der Waals' equation and H₂O and CO₂.
(3.3) can be found when \\e have carried out the calculation above, for that gives us the volumes of gas and liquid. Thus we can compute the latent heat as function of temperature. To express it in terms of dimensionless quantities, we can write Eq. (3.3) in terms of P/Pc, etc., and find J_ _ '/ _^ \ I (3.4) showing that the latent heats of two gases at corresponding temperatures should be in the proportion of a/6 to each other. In Fig. XI 1-6 we plot the latent heat L/(o/6), as a function of temperature, as derived from Van der Waals' equation. For comparison we give the latent heats of water and carbon dioxide. Both Van der Waals' equation and experiment agree in showing that the latent heat decreases to zero at the critical point and the curves are of similar shape. However, the scale is quite different, Van der Waals' equation predicting much too small a value for the latent heat. 190 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XII We have now examined Van der Waals' equation enough to see that it is very useful as an empirical equation, even if it has no theoretical justification at all. As a matter of fact, it can be justified from statistical mechanics as a first approximation, though no further. In the next sections we shall take up this justification, considering the problem of the equation of state of a gas whose molecules attract each other at large distances but have a finite size, so that they repel each other if they are pushed too closely into contact. We begin by taking up by statistical mechanics the general case of a gas with arbitrary intermolecular forces, then specializing to agree with Van der Waals' assumptions about the nature of the forces. 4. Statistical Mechanics and the Second Virial Coefficient. The way to derive the thermodynamic properties of an imperfect gas theoretically is clear: we find the energy in terms of the coordinates and momenta, compute the partition function, and derive the equation of state and specific heat from it. 
The only trouble is that the calculation is almost impossibly difficult, beyond a first approximation. In this section we shall just derive that first approximation, which can be carried through without a great deal of trouble. To understand the nature of the approximation, we write the equation of state in a series form which is often useful experimentally. At infinite volume, we know that the gas will approach a perfect gas, with an equation of state PV = nRT, or PV/nRT = 1. At smaller volumes, the equation will begin to deviate from this. That is, we can expand the quantity PV/nRT in series in 1/V; the term independent of 1/V will be unity, but the other terms, which are different from zero for imperfect gases, will be functions of the temperature. Thus we can write

PV/nRT = 1 + B(T)(n/V) + C(T)(n/V)² + · · · .  (4.1)

Here the quantity PV/nRT is often called the virial; and the quantities 1, B(T), C(T), etc., the coefficients of its expansion in inverse powers of the volume per mole, V/n, are called the virial coefficients, so that B(T) is called the second virial coefficient, C(T) the third, etc. The experimental results for equations of state of imperfect gases are usually stated by giving B(T), C(T), etc., as tables of values or as power series in the temperature. It now proves possible to derive the second virial coefficient B(T) fairly simply from statistical mechanics.

The first thing we must know is the energy of the gas as a function of its coordinates and momenta. We use the same coordinates and momenta as in Chap. VIII, Sec. 3: the coordinates of the center of gravity of each molecule and other coordinates determining the orientation and vibration of the molecule. The difference between our present problem
and the previous one of the perfect gas is that now we must add a term in the potential energy depending on the relative positions of the molecules, coming from intermolecular attractions and repulsions. Strictly speaking, these forces depend on the orientations of the molecules as well as on their distances apart, as is at once obvious if the molecules are very unsymmetrical in shape, but we shall neglect that effect in our approximate treatment here. That allows us to write the energy, as before, as a sum of terms, the first depending only on the coordinates of the centers of gravity (the kinetic energy of the molecules as a whole, and the potential energy of intermolecular forces), and the second depending only on orientations and vibrations. Then the partition function will factor, the part connected with internal motions separating off as before and, since it is independent of volume, contributing only to the internal specific heat, and not affecting the equation of state. For our present purposes, then, we can neglect these internal motions, treating the gas as if it were monatomic and simply adding on the internal specific heat at the end, using the value computed for the perfect gas. We must remember, however, that this is only an approximation, neglecting the effect of orientation on intermolecular forces.

Neglecting orientation effects, then, we deal only with the centers of gravity of the molecules. We must now ask, how does the potential energy depend on these centers of gravity? We have seen the general nature of Van der Waals's answer to this question. For the moment, let us simply write the total potential energy of interaction between two molecules i and j, at a distance r_ij apart, as φ(r_ij). Then we may reasonably assume that the whole potential energy of the gas is

Σ_(pairs i,j) φ(r_ij).  (4.2)

We now adopt Eq. (5.22), Chap. III, for the partition function, but remember that, as in Sec. 3, Chap.
VIII, we must multiply by 1/N!, or approximately by (e/N)^N, in order to take account of the identity of molecules. We then have

Z = (e/N)^N ∫ · · · ∫ e^(−E/kT) dp₁ · · · dq_3N / h^(3N).  (4.3)

The energy E is like that of Eq. (3.1) of Chap. VIII, except that for simplicity we are leaving the internal part of the energy out of account, and we have our potential energy Eq. (4.2). The integral (4.3) still factors into a part depending on the momenta and another on the coordinates, however, and the part depending on the momenta is exactly as with a perfect gas and leads to the same result found in Chap. VIII. Thus we have

Z = (e/N)^N [(2πmkT)^(1/2)/h]^(3N) ∫ · · · ∫ e^(−Σφ(r_ij)/kT) dq₁ · · · dq_3N.  (4.4)

The integral over coordinates is the one that simply reduced to V^N in the case of the perfect gas. The variables dq₁ . . . can be written more explicitly as dx₁ dy₁ dz₁ . . . dx_N dy_N dz_N. The integration over the coordinates can be carried out in steps. First, we integrate over the coordinates of the Nth molecule. The quantity e^(−Σφ/kT) can be factored; it is equal to

e^(−Σ′φ(r_ij)/kT) · e^(−Σ_i φ(r_iN)/kT),  (4.5)

where Σ′ represents all those pairs that do not include the Nth molecule. The first factor then does not depend on the coordinates of the Nth molecule and may be taken outside the integration over its coordinates, leaving

∫∫∫ e^(−Σ_i φ(r_iN)/kT) dx_N dy_N dz_N.  (4.6)

We rewrite this as

∫∫∫ dx_N dy_N dz_N − ∫∫∫ (1 − e^(−Σ_i φ(r_iN)/kT)) dx_N dy_N dz_N = V − W,  (4.7)

where the first term is simply the volume, the second an integral to be evaluated, which vanishes for a perfect gas. To investigate W, imagine all the molecules except the Nth to be in definite positions. If the gas is rare, the chances are that they will be well separated from each other. Now if the point x_N y_N z_N is far from any of these molecules, the interatomic potentials φ(r_iN) will all be small, and the integrand will be practically 1 − e⁰ = 0. Thus we have contributions to this integral only from the immediate neighborhood of each molecule.
Each of these contributions will be equal to

w = ∫∫∫ (1 − e^(−φ(r)/kT)) dx dy dz.  (4.8)

For simplicity we put the ith molecule at the origin of coordinates and integrate to infinity instead of just through the container; the integrand becomes small so rapidly that this makes no difference in the answer. Then we have

w = ∫₀^∞ 4πr² (1 − e^(−φ(r)/kT)) dr.  (4.9)

In terms of this, we then have

W = (N − 1)w.  (4.10)

Now when we integrate over the coordinates of the (N − 1)st molecule, we have the same situation over again, except that there are only (N − 2) remaining molecules, and so on. Thus finally we have for the integral over coordinates in Eq. (4.4)

[V − (N − 1)w][V − (N − 2)w] · · · V.  (4.11)

To evaluate the quantity (4.11), we can most easily take its logarithm; that is,

Σ_(s=0)^(N−1) ln (V − sw) = N ln V + Σ_(s=0)^(N−1) ln (1 − sw/V).  (4.12)

Replacing the sum over s by an integral, this becomes

N ln V + ∫₀^N ln (1 − sw/V) ds = N ln V − (V/w)[(1 − Nw/V) ln (1 − Nw/V) + Nw/V].  (4.13)

Our assumptions are only accurate if Nw/V is small; for it is only in this case that we can assume that all molecules are well separated from each other. In this limit, we can expand the logarithm as

ln (1 − Nw/V) = −Nw/V − (1/2)(Nw/V)² − · · · .  (4.14)

Substituting in Eq. (4.13) and retaining only the leading term, we have

N ln V − N²w/2V.  (4.15)

The quantity (4.15) for the logarithm of the integral over coordinates in Eq. (4.4) can now be substituted in the expression for the Helmholtz free energy, giving at once

A = −kT ln Z = −(3/2)NkT ln T − NkT ln V + N²kTw/2V − NkT[ln ((2πmk)^(3/2)/h³) + 1 − ln N].  (4.16)

Equation (4.16) agrees exactly with Eq. (3.6), Chap. VIII, except for the internal partition function, which we are here neglecting for simplicity, and for the extra term N²kTw/2V. This represents the effect of interatomic forces and is characteristic of the imperfect gas. Differentiating A with respect to volume, we at once have for the equation of state

P = −(∂A/∂V)_T = NkT/V + N²kTw/2V²,  (4.17)

or, substituting Nk = nR, N = nN₀,

PV/nRT = 1 + (n/V)(N₀w/2).  (4.18)

Equation (4.18) is in the form of Eq. (4.1) and shows that the second virial coefficient is given by

B(T) = N₀w/2,  (4.19)

where w is given by Eq. (4.9). This deduction of the second virial coefficient is exact, in spite of the approximations we have made; if further terms are retained, they prove to affect only the third and higher virial coefficients. But the calculation of these higher coefficients is much harder than the treatment we have given here.

5. The Assumptions of Van der Waals' Equation. The formula (4.19) for the second virial coefficient, together with Eq. (4.9), furnishes a method for deriving this quantity directly from any assumed intermolecular potential function, though generally the integration is so difficult that it must be carried out numerically. With the assumptions of Van der Waals' equation, however, the problem is simplified enough so that we can treat Eq. (4.9) analytically at high temperatures. We assume that the molecules attract each other with a force increasing rapidly as the distance decreases, so long as they are not too close together. We assume, however, that the molecules act like rigid spheres of diameter r₀, so that if the intermolecular distance is greater than r₀ the attraction is felt, but if the distance r is equal to r₀ a repulsion sets in, which becomes infinitely great if the distance becomes less than r₀. Then e^(−φ/kT) is zero if r is less than r₀, so that Eq. (4.9) becomes

w = ∫₀^(r₀) 4πr² dr + ∫_(r₀)^∞ 4πr² (1 − e^(−φ/kT)) dr.  (5.1)

The first term is simply (4π/3)r₀³, the volume of a sphere of radius r₀, or eight times the volume of the sphere of diameter r₀ which represents a molecule. In the second integral, we may expand in power series, since φ/kT is relatively small.
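The key approximation made above — replacing the logarithm of the product (4.11) by Eq. (4.15), N ln V − N²w/2V — is easy to check numerically for a modest N with Nw/V small:

```python
import math

N = 1000
V = 1.0
w = 1.0e-6        # so Nw/V = 1e-3, small as the approximation requires

# Logarithm of the product (4.11), summed term by term
exact = sum(math.log(V - s * w) for s in range(N))

# Eq. (4.15)
approx = N * math.log(V) - N ** 2 * w / (2.0 * V)
```

The two agree to a fraction of a percent here; the discrepancy is of the order of the terms dropped in Eqs. (4.13)–(4.14), which affect only the higher virial coefficients.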
The bracket is

1 − e^(−φ/kT) = φ/kT − (1/2)(φ/kT)² + · · · ≈ φ/kT.

Thus the term is

∫_(r₀)^∞ 4πr² (φ/kT) dr,

and for the second virial coefficient we have

B(T) = N₀w/2 = (2π/3)N₀r₀³ + (N₀/2kT) ∫_(r₀)^∞ 4πr²φ dr.  (5.2)

We may write this

B(T) = b − a/RT,  where b = (2π/3)N₀r₀³,  a = −2πN₀² ∫_(r₀)^∞ φr² dr.  (5.3)

Here b is four times the volume of N₀ spheres of radius r₀/2, or four times the volume of all the molecules in a gram mole. Since the force represented by the potential φ is attractive, φ is negative and the quantity a is positive and measures the strength of the intermolecular attractions. It is found experimentally that the formula (5.3) for the second virial coefficient is fairly well obeyed for real gases, showing that the assumptions of Van der Waals are not greatly in error. This formula leads to the equation of state

PV/nRT = 1 + (n/V)(b − a/RT).  (5.4)

Equation (5.4) indicates that for high temperatures (where a/RT is less than b) the pressure should be greater than that calculated for a perfect gas, while at low temperatures (a/RT greater than b) the pressure should be less than for a perfect gas. The temperature

T_B = a/Rb,  (5.5)

at which the second virial coefficient is zero, so that Boyle's law is satisfied exactly as far as terms in 1/V are concerned, is called the Boyle temperature.
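Equations (4.9), (4.19), and (5.3) can be compared directly. The sketch below assumes a hard sphere of diameter r₀ with an attractive tail φ(r) = −ε(r₀/r)⁶ for r > r₀ (this particular tail, and all the numerical parameters, are illustrative assumptions, not from the text), integrates Eq. (4.9) numerically, and checks the high-temperature form b − a/RT; for this φ, Eq. (5.3) gives a = (2π/3)N₀²εr₀³:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N0 = 6.02214076e23    # Avogadro's number
R = N0 * K_B

# Assumed potential: hard core of diameter r0, tail phi(r) = -eps*(r0/r)**6.
r0 = 3.0e-10          # m (illustrative)
eps = 100.0 * K_B     # well depth; eps/kT = 0.1 at T = 1000 K, so the expansion applies
T = 1000.0

def phi(r):
    return -eps * (r0 / r) ** 6

# w of Eq. (4.9): the hard core contributes (4 pi/3) r0^3 exactly; the tail is
# integrated by the midpoint rule out to a cutoff where phi is negligible.
n, rmax = 100000, 20.0 * r0
h = (rmax - r0) / n
tail = sum(4.0 * math.pi * r * r * (1.0 - math.exp(-phi(r) / (K_B * T))) * h
           for r in (r0 + (i + 0.5) * h for i in range(n)))
w = (4.0 * math.pi / 3.0) * r0 ** 3 + tail

B_numeric = N0 * w / 2.0                            # Eq. (4.19), m^3/mole

# High-temperature form, Eq. (5.3), with a evaluated for the assumed tail
b = (2.0 * math.pi / 3.0) * N0 * r0 ** 3
a = (2.0 * math.pi / 3.0) * N0 ** 2 * eps * r0 ** 3
B_approx = b - a / (R * T)
```

At this temperature the expansion of the bracket is accurate to well under a percent, illustrating why Eq. (5.3) works at high temperatures and fails when φ/kT is no longer small.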
In other words, Van der Waals' equation is correct as far as the second virial coefficient is concerned but no further, as a theoretical equation of state for a gas whose molecules act on each other according to Van der Waals' assumptions.

6. The Joule-Thomson Effect and Deviations from the Perfect Gas Law. The deviations from the perfect gas law are rather hard to measure experimentally, since they represent small fractions of the total pressure at a given temperature and volume. For this reason, another method of detecting the departure from the perfect gas law, called the Joule-Thomson effect, is of a good deal of experimental importance. This effect is a slight variation on the Joule experiment. That experiment, it will be recalled, is one in which a gas, originally confined in a given volume, is allowed to expand irreversibly into a larger evacuated volume. If the gas is perfect, the final temperature of the expanded gas will equal the initial temperature, while if it is imperfect there will be slight heating or cooling. This experiment is almost impossible to carry out accurately, for during the expansion there are irreversible cooling effects, which complicate the process. The Joule-Thomson effect is a variation of the experiment which gives a continuous effect, and a steady state. Gas at a relatively high pressure is allowed to stream through some sort of throttling valve into a region of lower pressure in a continuous stream. The expansion through the throttling valve is irreversible, as in the Joule experiment, and the gas after emerging from the valve is in a state of turbulent flow. It soon comes to an equilibrium state at the lower pressure, however, and then it is found to have changed its temperature slightly. To make the approach to equilibrium as rapid as possible, the valve is usually replaced by some sort of porous plug, as a plug of glass wool, which removes all irregular currents from the gas before it emerges.
Then all one has to do is to get a steady flow and measure the difference of pressure and the difference of temperature on the two sides of the plug. If ΔP is the change of pressure, ΔT the change of temperature, on passing through the plug, the Joule-Thomson coefficient is defined to be ΔT/ΔP. It is zero for a perfect gas and can be either positive or negative for a real gas. We shall now evaluate the Joule-Thomson coefficient in terms of the equation of state. It is easy to show that the enthalpy of unit mass of gas is unchanged as it flows through the plug. Let a volume V₁ of gas be pushed into the pipe at pressure P₁; then, since P₁ is constant through this pipe, work ∫P dV = P₁V₁ is done on this sample of gas. After passing through the plug, the same mass has a volume V₂, and does work P₂V₂ in passing out of the pipe. Thus the external work done by the gas in the process is P₂V₂ − P₁V₁. It is assumed that no heat is absorbed, so that if U₁ is the internal energy when the gas enters, U₂ when it leaves, the first law gives

\[ U_1 + P_1V_1 = U_2 + P_2V_2, \qquad H_1 = H_2. \tag{6.1} \]

Thus the change is at constant H, and the Joule-Thomson coefficient is (∂T/∂P)_H. But this can be evaluated easily from our Table of Thermodynamic Relations in Chap. II. It is

\[ \left(\frac{\partial T}{\partial P}\right)_H = \frac{T\left(\dfrac{\partial V}{\partial T}\right)_P - V}{C_P}. \tag{6.2} \]

From Eq. (6.2), we see that for a perfect gas, for which V is proportional to T at constant P, the Joule-Thomson coefficient is zero. For an imperfect gas, we assume the equation of state (5.4). Solving it for the volume, we have approximately

\[ V = \frac{nRT}{P} + nb - \frac{na}{RT}, \tag{6.3} \]

where we have regarded (b − a/RT)(n/V) as a small quantity compared with unity, neglecting its square. Substituting, we then have

\[ \left(\frac{\partial T}{\partial P}\right)_H = \frac{n}{C_P}\left(\frac{2a}{RT} - b\right). \tag{6.4} \]

From Eq. (6.4), we see that the Joule-Thomson coefficient gives immediate information about a and b.
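Equation (6.4) is easy to explore numerically. A sketch, using rough Van der Waals constants for nitrogen (assumed here purely for illustration):

```python
# Rough Van der Waals constants for nitrogen (assumed, for illustration).
R = 8.314        # J/(mol K)
a = 0.137        # J m^3 / mol^2
b = 3.87e-5      # m^3 / mol

def jt_times_cp_over_n(T):
    """(C_P/n)(dT/dP)_H from Eq. (6.4): 2a/(RT) - b.
    Positive values mean the gas cools on throttling, since dP < 0."""
    return 2.0 * a / (R * T) - b

T_inversion = 2.0 * a / (R * b)   # temperature where the coefficient vanishes
T_boyle = a / (R * b)             # Eq. (5.5)

print(T_inversion, T_boyle)       # inversion temperature = twice the Boyle temperature
```

With these constants the inversion temperature comes out somewhat above 800 K; the measured value for nitrogen is lower, as might be expected from the crudeness of the equation of state (5.4).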
If we measure the coefficient and know C_P, so that we can calculate the quantity (C_P/n)(∂T/∂P)_H, we can plot the resulting function as a function of 1/T and should get a straight line, with intercept −b and slope 2a/R, so that both b and a can be found from measurements of the Joule-Thomson effect as a function of temperature. We notice that at high temperatures the coefficient is negative, at low temperatures positive. That is, since ΔP is negative in the experiment, corresponding to a decrease of pressure, the change of temperature is positive at high temperatures, leading to a heating of the gas, while it is negative at low temperatures, cooling the gas. The temperature 2a/Rb, where the effect is zero, is called the temperature of inversion; we see by comparison with Eq. (5.5) that, if our simple assumptions are correct, this should be twice the Boyle temperature. The Joule-Thomson effect is used practically in the Linde process for the liquefaction of gases. In this process, the gas is first cooled by some method below the temperature of inversion and then is allowed to expand through a throttling valve. The Joule-Thomson effect cools it further, and by a repetition of the process it can be cooled enough to liquefy it.

CHAPTER XIII

THE EQUATION OF STATE OF SOLIDS

Next to perfect gases, regular crystalline solids are the simplest form of matter to understand, being less complicated than imperfect gases near the critical point, or liquids. Unlike perfect gases, there is no simple analytic equation of state which always holds; we are forced either to use tables of values or graphs to represent the equation of state, or to expand in power series. But the theory is far enough advanced so that we can understand the simpler solids fairly completely.
As with gases, we shall start our discussion from a thermodynamic standpoint, asking how one can find information from experiment, and then later shall go on to the theory, seeing how far one can go by statistical mechanics in setting up a model of a solid and predicting its properties. Of course, it is obvious that in one respect the subject of solids is a much wider one than that of gases: there is tremendous variety among solids, whereas all gases act very much alike. This comes from the different types of forces holding the atoms together and the different crystal structures. We shall put off most of the discussion of the different types of solids until later in the book, when we take up chemical substances and their properties. When we come to that, we shall see to how large an extent the fundamental atomic and molecular properties of a substance are brought out in the behavior of its solid state.

1. Equation of State and Specific Heat of Solids. To know the equation of state of a solid, we should have its pressure as a function of volume and temperature. Really we should know more than this: a solid can support a more complicated stress than a pressure, and can have a more complicated strain than a mere change of volume. Thus for instance it can be sheared. And in general the "equation of state" is a set of relations giving the stress at every point of the solid as a function of the strains and the temperature. But we shall not concern ourselves with these general stresses and strains, though they are of great importance both practically and theoretically; we limit ourselves instead to the case of hydrostatic pressure, in which the volume and temperature are adequate independent variables. Let us consider what we find from experiment on the compression of solids to high pressures.
At zero pressure, the volume of a solid is finite, unlike a gas, and it changes with temperature, generally increasing as the temperature increases, as given by the thermal expansion. As the pressure is increased at a given temperature, the volume decreases, as given by the compressibility. Combining these pieces of information, we have a set of curves of constant temperature, or isothermals, as given in Fig. XIII-1. These are plainly very different from the isothermals of a perfect gas, which are hyperbolas, the pressure being inversely proportional to the volume. If we knew nothing experimentally but the thermal expansion and the compressibility, we should have to draw the lines as straight lines, with equal spacing for equal temperature changes. Fortunately the measurements are more extensive. The pressure is known as a function of volume over a wide pressure range, enough in most solids to change the volume by a per cent, and with very compressible solids by many per cent, and the volume is known as a function of temperature for wide ranges of temperature. The curves must stop experimentally at zero pressure, but we can imagine that they could be extrapolated to negative pressures, as indicated by the dotted lines in the figure.

FIG. XIII-1. Isothermals for a solid (sodium), giving pressure as a function of volume at constant temperature.

To carry out any calculations with the equation of state, we wish to approximate it in some analytic way. First, let us consider the most convenient variables to use. The results of experiment are usually expressed by giving the volume as a function of pressure and temperature. Thus the thermal expansion is investigated as a function of temperature at atmospheric pressure, and in measurements of compressibility the volume is found as a function of pressure at certain fixed temperatures.
On the other hand, for deriving results from statistical mechanics, it is convenient to find the Helmholtz free energy, and hence the pressure, as a function of volume and temperature. We shall express the equation of state in both forms, and shall find the relation between the two. We let V₀ be the volume of our solid at no pressure and at the absolute zero of temperature. Then we shall assume

\[ V = V_0\left[1 + a_0(T) - a_1(T)P + a_2(T)P^2 - \cdots\right], \tag{1.1} \]

where a₀, a₁, a₂, etc., are functions of temperature, the signs being chosen so that they are positive for normal materials. The meaning of the a's is easily found. Thus, first, at zero pressure (which for practical purposes is identical with atmospheric pressure, since the volume of a solid changes so slowly with pressure) the volume is V₀[1 + a₀(T)]. The coefficient of thermal expansion at zero pressure is then

\[ \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P = \frac{1}{1 + a_0}\frac{da_0}{dT} = \frac{da_0}{dT} \;\text{approximately}. \tag{1.2} \]

If the material had a constant thermal expansion, so that the change in volume were proportional to temperature, we should have approximately da₀/dT = α, where α is constant, leading to a₀(T) = αT. This is a special case, however; it is found that for real materials the coefficient of thermal expansion becomes smaller at low temperatures, approaching zero at the absolute zero; for this reason we prefer to leave a₀(T) as an undetermined function of the temperature, remembering only that it reduces to zero at the absolute zero (by the definition of V₀), and that it is very small compared to unity, since the temperature expansion of a solid is only a small fraction of its whole volume. The meaning of a₁ is simple: it is almost exactly equal to the compressibility. The compressibility κ is ordinarily defined as −(1/V)(∂V/∂P)_T, to be computed at zero pressure. From Eq.
(1.1), remembering that the volume at zero pressure is given by V₀[1 + a₀(T)], we have

\[ \kappa = \frac{a_1}{1 + a_0} = a_1 \;\text{approximately}, \tag{1.3} \]

where in the last form we have again neglected a₀ compared to unity. The compressibility ordinarily increases with increasing temperature, so that a₁(T) must increase with temperature, enough to produce a net increase in spite of the increase of the factor 1 + a₀ in the denominator of Eq. (1.3). The increase is not very great, however; most compressibilities do not change by more than 10 per cent or so between absolute zero and high temperatures. The quantity a₂ measures essentially the change of compressibility with pressure. Little is known experimentally about its temperature variation, though it presumably increases with temperature in some such way as a₁ does. The terms of the series in P written down in Eq. (1.1) represent all that are required for most materials and the available pressure range. Most measurements of solids at high pressures have been carried out by Bridgman,¹ who has measured changes of volume up to pressures of 12,000 atm. with many solids and to 45,000 atm. with a few solids. At these highest pressures, the most compressible solid, Cs, caesium, has its volume reduced to less than half the volume at atmospheric pressure, and the other alkali metals, Li, lithium, Na, sodium, K, potassium, and Rb, rubidium, have reductions in volume of from 20 to 50 per cent. To represent these large changes of volume accurately requires a considerable number of terms of such a series as (1.1). These are extreme cases, however; most solids are much less compressible, and changes of volume of only a few per cent can be produced with the available pressure, so that we can approximate quite accurately by a quadratic function of pressure, as in Eq. (1.1).

¹ See P. W. Bridgman, "Physics of High Pressures," Chap. VI, The Macmillan Company, 1931, and later papers.
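Given measured (P, V) pairs at one temperature, the coefficients of the quadratic form can be extracted by a least-squares fit of the relative compression against pressure. A sketch, with synthetic data generated from assumed coefficients rather than from Bridgman's actual tables:

```python
# Synthetic compression data from assumed coefficients (not Bridgman's values).
a1_true, a2_true = 1.0e-11, 5.0e-23             # 1/Pa, 1/Pa^2
pressures = [i * 2.0e8 for i in range(1, 11)]   # up to 2e9 Pa
compression = [a1_true * P - a2_true * P**2 for P in pressures]

# Least-squares fit of c = a1*P - a2*P^2 via the 2x2 normal equations.
s22 = sum(P**2 for P in pressures)
s23 = sum(P**3 for P in pressures)
s44 = sum(P**4 for P in pressures)
b2 = sum(c * P for c, P in zip(compression, pressures))
b3 = sum(c * P**2 for c, P in zip(compression, pressures))
# Solve [[s22, -s23], [s23, -s44]] (a1, a2) = (b2, b3) by Cramer's rule.
det = s22 * (-s44) - (-s23) * s23
a1_fit = (b2 * (-s44) - (-s23) * b3) / det
a2_fit = (s22 * b3 - s23 * b2) / det
print(a1_fit, a2_fit)
```

With noise-free synthetic data the fit reproduces the input coefficients; with real data the same normal equations give the best quadratic representation of the measured isothermal.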
The experimental results are usually stated by giving the relative change of volume as a power series in the pressure. That is, in our notation, we have

\[ \frac{V_0(1+a_0) - V}{V_0(1+a_0)} = \frac{a_1}{1+a_0}P - \frac{a_2}{1+a_0}P^2. \tag{1.4} \]

The constants a₁/(1 + a₀) and a₂/(1 + a₀) are given as the result of experiments on compressibility. If a₀ is known from measurements of thermal expansion, we can then find a₁ and a₂ directly from experiment. The equation of state (1.1) is expressed in terms of pressure and temperature as independent variables. We shall next express it in terms of volume and temperature. We shall do this in the form

\[ P = P_0(T) + P_1(T)\frac{V_0 - V}{V_0} + P_2(T)\left(\frac{V_0 - V}{V_0}\right)^2. \tag{1.5} \]

Here P₀(T), P₁(T), and P₂(T) are functions of temperature, again chosen to be positive. The meaning of P₀ is simple: it is the pressure that must be applied to the solid to reduce its volume to V₀, the volume which it would have at the absolute zero under no pressure. Obviously P₀ goes to zero at the absolute zero. At ordinary temperatures, while it represents a very considerable pressure, still it is small compared to the quantities P₁ and P₂, so that it can be treated as a small quantity in our calculations and its square can be neglected. We shall see in a moment that P₁ is approximately the reciprocal of the compressibility, or equals the pressure required to reduce the volume to zero, if the volume decreased linearly with increasing pressure (which of course it does not). Obviously this is much greater than the pressure required to reduce the volume to V₀. We shall now find the relations between the a's of Eq. (1.1) and the quantities P₀, P₁, P₂ of Eq. (1.5), assuming that we can neglect the squares and higher powers of a₀ and P₀. To do this, we write Eq. (1.1) in the form

\[ \frac{V_0 - V}{V_0} = -a_0 + a_1P - a_2P^2, \tag{1.6} \]

substitute in Eq. (1.5), and equate the coefficients of different powers of P. We have

\[ P = P_0 + P_1(-a_0 + a_1P - a_2P^2) + P_2(-a_0 + a_1P - a_2P^2)^2, \tag{1.7} \]

where we are to retain terms only through the second power of P.
Equating coefficients, we have the equations

\[ 0 = P_0 - P_1a_0 + P_2a_0^2, \qquad 1 = P_1a_1 - 2P_2a_0a_1, \qquad 0 = -P_1a_2 + 2P_2a_0a_2 + P_2a_1^2. \tag{1.8} \]

Solving for the a's, we have

\[ a_0 = \frac{P_0}{P_1} \;\text{approximately}, \qquad a_1 = \frac{1}{P_1 - 2P_2a_0}, \qquad a_2 = P_2a_1^3. \tag{1.9} \]

Similarly, solving for the P's, we have

\[ P_0 = \frac{a_0}{a_1}, \qquad P_1 = \frac{1}{a_1}\left(1 + \frac{2a_0a_2}{a_1^2}\right), \qquad P_2 = \frac{a_2}{a_1^3}. \tag{1.10} \]

Since we know how to find the a's from experiment, Eqs. (1.10) tell us how to find the P's. We observe from Eqs. (1.10) that, as mentioned before, P₁ is equal, apart from small terms proportional to a₀, to the reciprocal of the compressibility given in Eq. (1.3). In addition to the equation of state, we must find the specific heat from experiment. Ordinarily one finds the specific heat at constant pressure, C_P, at atmospheric pressure, or practically at zero pressure. We shall call this C_P⁰, to distinguish it from the general value of C_P, which can depend on pressure. Let us find the dependence on pressure. From Eq. (1.6), Chap. VIII, we have

\[ \left(\frac{\partial C_P}{\partial P}\right)_T = -T\left(\frac{\partial^2 V}{\partial T^2}\right)_P. \]

Substituting for V from Eq. (1.1) and integrating with respect to pressure from 0 to P, we have

\[ C_P = C_P^0 - V_0T\left[\frac{d^2a_0}{dT^2}P - \frac{1}{2}\frac{d^2a_1}{dT^2}P^2 + \frac{1}{3}\frac{d^2a_2}{dT^2}P^3\right]. \tag{1.11} \]

In case a₀, a₁, and a₂ can be approximated by linear functions of temperature, as we considered earlier for a₀, the second derivatives in Eq. (1.11) will be zero and C_P will be independent of pressure. Since da₀/dT is essentially the coefficient of thermal expansion, we see that the term in Eq. (1.11) linear in the pressure depends on the change of thermal expansion with the temperature. We have mentioned that the thermal expansion is zero at the absolute zero, increasing with temperature to an asymptotic value. Thus we may expect d²a₀/dT² to be positive, falling off to zero at high temperatures, so that from Eq. (1.11) the specific heat will decrease with increasing pressure, particularly at low temperature. For theoretical purposes, it is better to use the specific heat at constant volume, C_V, computed for the volume V₀ which the solid has at zero pressure and temperature. We shall call this C_V⁰.
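Since Eqs. (1.9) and (1.10) hold only to first order in the small quantities a₀ and P₀, a numerical round trip through them recovers the a's to within terms of that order. A sketch (the coefficients below are assumed, with magnitudes typical of a solid; a₁ plays the role of a compressibility in 1/Pa):

```python
# Illustrative (assumed) coefficients for Eq. (1.1).
a0 = 2.0e-2      # dimensionless thermal-expansion term
a1 = 1.0e-11     # ~ compressibility, 1/Pa
a2 = 5.0e-23     # change of compressibility with pressure, 1/Pa^2

# Eq. (1.10): the P's from the a's.
P2 = a2 / a1**3
P1 = (1.0 + 2.0 * a0 * a2 / a1**2) / a1
P0 = a0 / a1

# Round trip through Eq. (1.9): the a's back from the P's.
a0_back = P0 / P1
a1_back = 1.0 / (P1 - 2.0 * P2 * a0_back)
a2_back = P2 * a1_back**3

print(a0_back, a1_back, a2_back)
```

The recovered values differ from the inputs only by terms of the order of the neglected squares, which is the accuracy claimed for the relations themselves.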
C_V will depend on the volume as indicated by Eq. (1.7) of Chap. VIII:

\[ \left(\frac{\partial C_V}{\partial V}\right)_T = T\left(\frac{\partial^2 P}{\partial T^2}\right)_V. \]

Using Eq. (1.5) for the pressure, we obtain

\[ C_V = C_V^0 - V_0T\left[\frac{d^2P_0}{dT^2}\frac{V_0 - V}{V_0} + \frac{1}{2}\frac{d^2P_1}{dT^2}\left(\frac{V_0 - V}{V_0}\right)^2 + \frac{1}{3}\frac{d^2P_2}{dT^2}\left(\frac{V_0 - V}{V_0}\right)^3\right]. \tag{1.12} \]

From Eq. (1.9), P₀ is proportional to a₀, so that its second derivative will likewise be positive, and we find that C_V will decrease with decreasing volume or increasing pressure, just as we found for C_P. Since it is impracticable to find C_V, or C_V⁰, from direct experiment, it is important to be able to find these quantities from C_P. From Eq. (5.2), Chap. II, we know how to find C_P − C_V: it is given by the formula T(∂V/∂T)_P(∂P/∂T)_V. This gives the difference of specific heats at a given pressure and temperature. We are more interested, however, in the difference C_P⁰ − C_V⁰, in which C_P⁰ is computed at zero pressure, C_V⁰ at the volume V₀. To find this difference, let us carry out a calculation of C_V at zero pressure, from Eq. (1.12). Here we have V₀ − V = −V₀a₀, from Eq. (1.1). Then Eq. (1.12) gives us

\[ C_V = C_V^0 + V_0Ta_0\frac{d^2P_0}{dT^2} \;\text{approximately}. \]

Using this value and the equation for C_P − C_V, which we calculate for zero pressure, we have

\[ C_P^0 - C_V^0 = V_0T\frac{da_0}{dT}\frac{dP_0}{dT} - V_0Ta_0\frac{d^2P_0}{dT^2} = \frac{V_0T}{a_1}\left[\left(\frac{da_0}{dT}\right)^2 - a_0\frac{d^2a_0}{dT^2}\right]. \tag{1.13} \]

In the derivation of Eq. (1.13), we have neglected the variation of a₁ with temperature. In case the thermal expansion is constant, so that a₀ = αT, and the specific heat is independent of volume or pressure, Eq. (1.13) takes the simple form

\[ C_P^0 - C_V^0 = \frac{\alpha^2TV_0}{a_1}, \tag{1.14} \]

where we remember that α is the coefficient of thermal expansion, a₁ the compressibility, to a good approximation. When numerical values are substituted in Eq. (1.14), it is found that the difference of specific heats for a solid is much less than for a gas, so that no great error is committed if we use one in place of the other. This can be seen from the fact that the difference of specific heats depends on squares of small quantities like a₀ and its temperature derivative, as we see in Eq. (1.13), whereas elsewhere we have considered such quantities as being so small that their squares could be neglected.
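Equation (1.14) is easily evaluated numerically. The sketch below uses rough room-temperature values for copper (assumed here purely for scale) and shows that C_P − C_V for a solid is a small fraction of the gas constant R:

```python
# Rough room-temperature values for copper (assumed, for scale only).
alpha = 5.0e-5   # volume coefficient of thermal expansion, 1/K
a1 = 7.2e-12     # compressibility, 1/Pa
V0 = 7.1e-6      # molar volume, m^3/mol
T = 300.0        # K
R = 8.314        # J/(mol K); for a perfect gas Cp - Cv = R

# Eq. (1.14): Cp0 - Cv0 = alpha^2 T V0 / a1
diff = alpha**2 * T * V0 / a1
print(diff, diff / R)
```

The difference comes out well under 1 J/(mol K), an order of magnitude smaller than the value R for a gram mole of perfect gas, in line with the statement in the text.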
We have now discussed all features of the specific heats, except for the dependence of C_P⁰ or C_V⁰ themselves on temperature. Experimentally it is found that the specific heat is zero at the absolute zero and rises to an asymptotic value at high temperatures, much like the specific heat of an oscillator, as shown in Fig. IX-3. We shall see later that the thermal energy of a solid comes from the oscillations of the molecules, so that there is a fundamental reason for this behavior of the specific heat. We shall also find theoretical formulas later which express the specific heat with fairly good accuracy as a function of the temperature, formulas that differ in some essential respects from Eq. (5.8), Chap. IX, from which Fig. IX-3 was drawn. For the present, however, where we are discussing thermodynamics, we must simply assume that the specific heat is given by experiment and shall treat C_P⁰ and C_V⁰ as unknown functions of the temperature, which however always reduce to zero at the absolute zero of temperature.

2. Thermodynamic Functions for Solids. In the preceding section we have seen how to express the equation of state and specific heat of a solid as functions of pressure, or volume, and temperature. Now we shall investigate the other thermodynamic functions, the internal energy, entropy, Helmholtz free energy, and Gibbs free energy. For the internal energy as a function of volume and temperature, we have the relations (∂U/∂T)_V = C_V, (∂U/∂V)_T = T(∂P/∂T)_V − P. Let the energy of the solid at volume V₀ and zero temperature be U₀₀. Then we find the energy at any temperature and volume by starting at V₀ at the absolute zero, raising the temperature at volume V₀ to the desired temperature, and then changing the volume at this temperature. Using Eq. (1.5), we find at once

\[ U = U_{00} + \int_0^T C_V^0\,dT + \left(P_0 - T\frac{dP_0}{dT}\right)(V_0 - V) + \frac{1}{2}\left(P_1 - T\frac{dP_1}{dT}\right)\frac{(V_0 - V)^2}{V_0} + \frac{1}{3}\left(P_2 - T\frac{dP_2}{dT}\right)\frac{(V_0 - V)^3}{V_0^2}. \tag{2.1} \]

The internal energy of metallic sodium is shown as a function of volume in Fig. XIII-2, as an illustration.
On account of the large compression that can be attained with sodium, more terms of the power series must be retained than are given in Eq. (2.1), but it is easier to show the properties of this metal than of a less compressible one. Let us consider the behavior of the internal energy as a function of volume at fixed temperature. If the thermal expansion is independent of temperature, so that P₀ is proportional to the temperature and dP₀/dT is a constant, the coefficient of the term in (V₀ − V) in Eq. (2.1) is zero and the term in (V₀ − V)² is the principal term in U. In this term P₁, which is the reciprocal of the compressibility, is large compared to T dP₁/dT, so that the coefficient of (V₀ − V)² is positive and the internal energy has a minimum at V₀, just as it does at the absolute zero. If the thermal expansion depends on temperature, the term in (V₀ − V) will have a small coefficient different from zero, shifting the minimum slightly, ordinarily to smaller volumes.

FIG. XIII-2. Internal energy of a solid (sodium) as function of volume for various temperatures. The dotted line connects points at zero pressure.

There is an interesting consequence of the fact that the minimum of U is approximately at V₀. At ordinary temperatures, the volume of the solid at zero pressure, which as we have seen is V₀(1 + a₀), will be greater than V₀. Then on compressing the solid, its internal energy will decrease until we have reduced its volume approximately to V₀, when it will begin to increase again. Of course, work is constantly being done on the solid during the compression, but so much heat flows out to maintain the temperature constant that the total energy decreases, with moderate compressions. The internal
energy of course increases as the temperature is raised at constant volume, as we see from the obvious relation (∂U/∂T)_V = C_V, so that the curves corresponding to high temperatures lie above those for low temperature. Furthermore, since the specific heat is higher at large volume, as we saw from Eq. (1.12), the spacing of the curves is greater at large volume, resulting in the slight shift of the minimum to smaller volume with increasing temperature. The entropy is most easily determined as a function of volume and temperature from the equation (∂S/∂T)_V = C_V/T. At the absolute zero of temperature, the entropy of a solid is zero independent of its volume or pressure. The reason goes back to our fundamental definition of entropy in Chap. III, S = −k Σᵢ fᵢ ln fᵢ, where fᵢ represents the fraction of all systems of the assembly in the ith state. At the absolute zero, according to the canonical assembly, all the systems will be in the state of lowest energy, which will then have f = 1, all other states having f = 0. Thus automatically S = 0. We can then find the entropy at any temperature and volume as follows. First, at absolute zero, we change the volume to the required value, with no change of entropy. Then, at this constant volume, we raise the temperature, computing the change of entropy from ∫(C_V/T) dT. We can use Eq. (1.12) for the specific heat at arbitrary volume. Carrying out the integration from that equation, we have at once

\[ S = \int_0^T \frac{C_V^0}{T}\,dT - \frac{dP_0}{dT}(V_0 - V) - \frac{1}{2}\frac{dP_1}{dT}\frac{(V_0 - V)^2}{V_0} - \frac{1}{3}\frac{dP_2}{dT}\frac{(V_0 - V)^3}{V_0^2}. \tag{2.2} \]

The entropy of sodium, as computed from Eq. (2.2), is plotted in Fig. XIII-3 as a function of temperature, for several volumes. Starting from zero at the absolute zero, the entropy first rises slowly, since its slope, C_V/T, goes strongly to zero at the absolute zero.
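The integration ∫(C_V/T) dT can be carried out numerically for any empirical C_V(T). As a sketch, we use the oscillator specific heat of Chap. IX with an assumed characteristic temperature; the sodium curves of the figure are based on the actual specific heat, not on this illustrative form:

```python
import math

# Assumed characteristic temperature for the illustrative oscillator Cv.
R, theta = 8.314, 160.0      # J/(mol K); K

def cv(T):
    """Oscillator specific heat (the form of Chap. IX), used here only as a
    convenient Cv(T) that vanishes strongly as T -> 0."""
    x = theta / T
    if x > 50.0:             # exp(-x) utterly negligible; avoid overflow
        return 0.0
    e = math.exp(x)
    return 3.0 * R * x * x * e / (e - 1.0) ** 2

def entropy(T, n=20000):
    """Midpoint-rule evaluation of the integral of Cv/T from 0 to T."""
    dT = T / n
    s = 0.0
    for i in range(n):
        Tm = (i + 0.5) * dT
        s += cv(Tm) / Tm * dT
    return s

print(entropy(100.0), entropy(300.0))   # rises slowly at first, then like Cv ln T
```

For this C_V the integral also has a closed form, which can be used to check the numerical quadrature.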
As the temperature rises, the curve goes over into something more like the logarithmic form which it must have at high temperature, where C_V becomes constant, and S = ∫C_V dT/T = C_V ln T + const.

FIG. XIII-3. Entropy of a solid (sodium) as function of the temperature at constant volume.

From the curves, it is plain that the entropy increases with increasing volume, at constant temperature. This can be seen from Eq. (2.2), in which the leading term in the variation with volume can be written (dP₀/dT)(V − V₀), where from Eq. (1.10) we see that dP₀/dT is approximately the thermal expansion divided by the compressibility. It can also be seen from the thermodynamic relation

\[ \left(\frac{\partial S}{\partial V}\right)_T = \left(\frac{\partial P}{\partial T}\right)_V = -\frac{(\partial V/\partial T)_P}{(\partial V/\partial P)_T}, \]

which is seen, if we multiply and divide by 1/V, to be exactly the thermal expansion divided by the compressibility. The reason for the increase of entropy with increasing volume is simple: if the volume increased, or the pressure decreased, adiabatically, the material would cool; to keep the temperature constant heat must flow in, increasing the entropy. The Helmholtz free energy A = U − TS can be found from Eqs. (2.1) and (2.2) for U and S, or can be found by integration of the equations (∂A/∂T)_V = −S, (∂A/∂V)_T = −P. The latter method is perhaps a little more convenient. At the absolute zero and volume V₀, the Helmholtz free energy equals the internal energy and is given by U₀₀, as in Eq. (2.1). From that point we increase the temperature at volume V₀ to the desired temperature, and then change the volume at this temperature. We find at once

\[ A = U_{00} + \int_0^T C_V^0\,dT - T\int_0^T \frac{C_V^0}{T}\,dT + P_0(V_0 - V) + \frac{1}{2}P_1\frac{(V_0 - V)^2}{V_0} + \frac{1}{3}P_2\frac{(V_0 - V)^3}{V_0^2}. \tag{2.3} \]

In Fig. XIII-4 we show A as a function of volume for a number of temperatures. At the absolute zero, as we have mentioned above, the Helmholtz free energy equals the internal energy, as given in Fig. XIII-2.
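Since (∂A/∂V)_T = −P, the minimum of the curve (2.3) at fixed temperature lies at the zero-pressure volume, and its drift outward with temperature is the thermal expansion. A minimal numerical sketch, with assumed magnitudes for P₀(T), P₁, and P₂ (illustrative only):

```python
# Assumed magnitudes (illustrative only) for the coefficients of Eq. (1.5).
P1, P2 = 1.0e10, 3.0e10       # Pa

def P0(T):
    # P0 proportional to T corresponds to a constant thermal expansion.
    return 2.0e6 * T          # Pa

def pressure(V, V0, T):
    x = (V0 - V) / V0         # Eq. (1.5)
    return P0(T) + P1 * x + P2 * x * x

def zero_pressure_volume(V0, T):
    """Bisection for pressure = 0; this volume minimizes A(V, T)."""
    lo, hi = V0, 1.1 * V0     # pressure is positive at V0, negative at 1.1*V0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pressure(mid, V0, T) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v100 = zero_pressure_volume(1.0, 100.0)
v300 = zero_pressure_volume(1.0, 300.0)
print(v100, v300)             # the minimum moves outward as T rises
```

The two volumes both exceed V₀ and the higher-temperature one is the larger, reproducing the behavior of the minima in Fig. XIII-4.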
From the equation (∂A/∂V)_T = −P, we see that the negative slope of the Helmholtz free energy curve is the pressure, and the change of Helmholtz free energy between two volumes at constant temperature gives the external work done in changing the volume. It is for this reason, of course, that it is called the free energy. Thus the minimum of each curve corresponds to the volume where the pressure is zero. It is obvious from the graph that this minimum moves outward to larger volumes with increase of temperature; this represents the thermal expansion. In particular, it is plain that this shift of the minimum is very small for low temperatures, corresponding to the small thermal expansion at low temperatures. Since the slope of the free energy curve gives the negative pressure, it is only the part of the curve to the left of the minimum that corresponds to positive pressure and has physical significance.

FIG. XIII-4. Helmholtz free energy of a solid (sodium) as function of the volume at constant temperature.

Finally, we consider the Gibbs free energy, G = U + PV − TS = A + PV, as a function of pressure and temperature. This is most conveniently found from the relations (∂G/∂P)_T = V, (∂G/∂T)_P = −S. Starting at the absolute zero and zero pressure, where the value of G is U₀₀, we first increase the temperature at zero pressure, then increase the pressure at constant temperature, finding

\[ G = U_{00} + \int_0^T C_P^0\,dT - T\int_0^T \frac{C_P^0}{T}\,dT + PV_0(1 + a_0) - \frac{1}{2}a_1V_0P^2 + \frac{1}{3}a_2V_0P^3. \tag{2.4} \]

In Fig. XIII-5, we plot G as a function of pressure, for a number of temperatures. The term PV₀(1 + a₀) is by far the largest one in G, resulting approximately in straight lines proportional to P. The spacing of the curves is determined by the entropy: (∂G/∂T)_P = −S, showing that G decreases with increasing temperature at constant pressure and that the decrease is greater at low pressure (large volume) than at high pressure.
These details of the change of the Gibbs free energy with temperature are not well shown in Fig. XIII-5, however, on account of scale, and this sort of plot does not give a great deal of useful information. Before leaving it, it is worth while pointing out the resemblance to Fig. XII-4, where we plotted G as a function of pressure for a liquid and gas in equilibrium, as given by Van der Waals' equation, and found again almost straight lines.

FIG. XIII-5. Gibbs free energy of a solid (sodium) as function of the pressure at constant temperature.

The more useful way to give G graphically is to plot it as a function of temperature for constant pressure, as we do in Fig. XIII-6. The slope of these curves, being −S, is zero at the absolute zero, negative at all higher temperatures, so that the curves slope down. The Gibbs free energy decreases more slowly with temperature at high pressure, where the entropy is lower, than at zero pressure. At zero pressure, the term PV is zero, so that the Gibbs free energy G equals the Helmholtz free energy A. The difference between the two functions is small at low pressures, so that at pressures of a few atmospheres the two functions can be used interchangeably for solids. This of course does not hold for gases, for which the volume V is much greater, and the term PV is very large even at small pressures. As we can see from Chap. XI, Secs. 3 and 4, this diagram, of G as a function of T, is the important one in discussing the equilibrium of phases, since the condition of equilibrium is that the two phases should have the same Gibbs free energy at the same pressure and temperature. Thus if we draw G for each phase, as a function of temperature, for the pressure at which the experiment is carried out, the point of intersection will give the equilibrium temperature of the two phases at the pressure in question.
We have already shown, in Figs. XI-4, XI-5, XI-6, and XI-7, the equation of state, entropy, and Gibbs free energy of a substance in all of its three phases. Examination of the parts of those figures dealing with solids will show the similarity of those curves to the ones found in the present section in a more explicit and detailed way.

3. The Statistical Mechanics of Solids. The first step in discussing a solid according to statistical mechanics is to set up a model, describing its coordinates and momenta, finding its energy levels according to the quantum theory, and computing the partition function. This represents an extensive program, of which only the outline can be given in the present chapter. The typical solid is a crystal, a regular repeating structure composed of molecules, atoms, or ions. The repeating unit is called the unit cell.

FIG. XIII-6. Gibbs free energy of a solid (sodium) as function of temperature at constant pressure (curves labeled P = 10,000 atm and P = 20,000 atm).

The crystal is held together by forces between the molecules, atoms, or ions: forces that resemble those between atoms in diatomic molecules, as discussed in Chap. IX, in that they lead to attraction at large distances, repulsion at small distances, with equilibrium between. The interplay of the attractive and repulsive forces of all atoms of the crystal leads to a state of equilibrium in which each atom has a definite position, in which no forces act on it. At the absolute zero of temperature, the atoms will be found just in these positions of equilibrium. At higher temperatures, however, they will vibrate about the positions of equilibrium, to which they are held by forces proportional to the displacement, if the displacements are small. We shall divide our discussion of the model into two parts: first, the crystal at the absolute
zero with its atoms at rest in their equilibrium positions; secondly, the thermal vibrations of the atoms about these positions. Let us first consider the crystal at the absolute zero. The energy will depend on the state of strain of the crystal; as was mentioned in Sec. 1, we omit discussion of shearing strains and types of deformation other than change of volume. Thus we are interested simply in the dependence of energy on volume. As the volume is changed, of course each unit cell changes in the same proportion and the atoms change their positions in the crystal. The interatomic energies also change, and the change in energy of the whole crystal is simply the sum of the changes of the energies of interaction of the various atoms. We cannot say anything further about the energy as a function of volume, without investigating specific examples, as we shall do in later chapters. But at least we may assume that the energy of interaction of two atoms is most conveniently expressed as a function of the distance of separation, and if the whole energy is a sum of these energies of interaction, it also may be expected to be particularly simple when regarded as a function of a linear dimension of the crystal, rather than as a function of the volume. We shall, therefore, express the energy of the crystal at the absolute zero as a function of a quantity r, which may be a distance between atoms, a side of a unit cell, or some other linear dimension of the crystal, and shall find results that will later be useful to us, when we know more about the nature of the interatomic forces. Consider a crystal of volume V, containing N atoms or molecules. (We purposely leave the description slightly vague, so as to allow more generality in the result.) Then V/N is the volume per atom or molecule, a quantity which of course can be changed by application of external pressure.
We shall limit the present discussion to cubic crystals, in which only the volume, and not the shape, changes under pressure; many crystals do not have this property, but the ones that we shall discuss quantitatively happen to be cubic. Then V/N will be a numerical constant times r³, the volume of a cube of side r, since in a uniform compression the whole volume and the volume r³ will change in proportion. Thus let

\frac{V}{N} = cr^3,   (3.1)

where c will be a definite number for each structure, which we can easily evaluate. We define quantities r_0 and V_0, the values respectively of r and V when the crystal is under no pressure at the absolute zero. Thus we have

\frac{V_0 - V}{V_0} = \frac{r_0^3 - r^3}{r_0^3}.   (3.2)

We now take the expression (2.1) for the internal energy as a function of volume, set the temperature equal to zero, and use Eq. (3.2), finding the internal energy at absolute zero as a function of the linear dimensions. Calling this quantity U_0, we have

U_0 = U_{00} + V_0\left[\frac{P_1^0}{2}\left(\frac{V_0 - V}{V_0}\right)^2 + \frac{P_2^0}{3}\left(\frac{V_0 - V}{V_0}\right)^3\right],   (3.3)

where P_1^0, P_2^0 are the values of the quantities P_1, P_2 of Eq. (2.1) at the absolute zero of temperature. (We note that P_0 = 0 at the absolute zero.) Substituting from Eq. (3.2) and retaining terms only up to the third, we have

U_0 = U_{00} + Ncr_0^3\left[\frac{9P_1^0}{2}\left(\frac{r_0 - r}{r_0}\right)^2 - 9(P_1^0 - P_2^0)\left(\frac{r_0 - r}{r_0}\right)^3\right].   (3.4)

Equation (3.4) will later prove to be convenient, in cases where we have a theoretical way of calculating U_0 from assumed interatomic forces. In these cases, P_1^0 and P_2^0 can be found directly from the theory, using Eq. (3.4). Our next task is to consider the solid at a higher temperature than the absolute zero. The molecules and atoms will have kinetic energy and will vibrate. We can get a simple, but incorrect, picture of the vibrations by thinking of all the atoms but one as being fixed and asking how that one would move.
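As a numerical sanity check (illustrative, not from the text), the third-order expansion of Eq. (3.4) can be compared with the unexpanded form of Eq. (3.3); the residual should be of fourth order in (r_0 − r)/r_0. The coefficients P_1^0 = 2, P_2^0 = 0.7 and V_0 = 1 below are arbitrary illustrative numbers.

```python
# Check (illustrative) that the third-order expansion, Eq. (3.4), matches
# Eq. (3.3). With t = (r0 - r)/r0, Eq. (3.2) gives x = (V0 - V)/V0 = 1 - (1-t)**3.
P1, P2, V0 = 2.0, 0.7, 1.0   # arbitrary illustrative coefficients

for t in (1e-2, 1e-3):
    x = 1.0 - (1.0 - t)**3
    exact = V0 * (P1 * x**2 / 2.0 + P2 * x**3 / 3.0)      # Eq. (3.3) form
    series = V0 * ((9.0 * P1 / 2.0) * t**2
                   - 9.0 * (P1 - P2) * t**3)              # Eq. (3.4) form
    assert abs(exact - series) < 100.0 * t**4             # residual is O(t^4)
```

The check confirms that the quadratic and cubic coefficients 9P_1^0/2 and −9(P_1^0 − P_2^0) follow from substituting Eq. (3.2) and discarding fourth-order terms.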
It is in a position of stable equilibrium at the absolute zero, being held by its interactions with its neighbors in such a way that it is pushed back to its position of equilibrium with a force proportional to the displacement. Thus it will execute simple harmonic motion, with a certain frequency ν. To discuss the heat capacity of this oscillation, we may proceed exactly as in Chap. IX, Sec. 5, where we were talking about the heat capacity of molecular vibrations. Each atom can vibrate in any direction, so that its x, y, and z coordinates separately can execute simple harmonic motion. It is then found easily that the classical partition function for vibration for a single atom is (kT/hν)³, similar to Eq. (5.4), Chap. IX, but cubed on account of the three dimensions. This corresponds to a heat capacity of 3k per atom, or 3R per mole, if the material happens to be monatomic, with corresponding values for polyatomic substances. This law, that the heat capacity of a monatomic substance should be 3R or 5.96 cal. per mole, at constant volume, is called the law of Dulong and Petit. It is a law that holds fairly accurately at room temperature for a great many solids and has been known for over a hundred years. It was first found as an empirical law by Dulong and Petit. At lower temperatures, however, the specific heats of actual solids are less than the classical value, and decrease gradually to zero at the absolute zero. It was to explain these deviations from the law of Dulong and Petit that Einstein developed his theory of specific heats. He treated the vibrations of the separate atoms by quantum theory, just as we did in Sec. 5, Chap. IX, and derived the formula

C_V = 3Nk\left(\frac{h\nu}{kT}\right)^2 \frac{e^{h\nu/kT}}{\left(e^{h\nu/kT} - 1\right)^2},   (3.5)

where the characteristic temperature Θ is defined by

\Theta = \frac{h\nu}{k}.   (3.6)

Equation (3.5) is analogous to Eq. (5.7), Chap. IX, but is multiplied by 3 on account of the three degrees of freedom. As we have seen in Fig.
IX-3, this gives a specific heat rising from zero at the absolute zero to the classical value 3Nk at high temperatures. It is found that values of the frequency ν, or of the corresponding characteristic temperature Θ of Eq. (3.6), can be found, such that the Einstein specific heat formula (3.5) gives a fairly good approximation to the observed specific heats, except at very low temperatures. Close to the absolute zero, the Einstein formula predicts a specific heat falling very sharply to zero. The actual specific heats do not fall off so rapidly, but instead are approximately proportional to T³ at low temperatures. Thus, while Einstein's formula is certainly a step in the right direction, we cannot consider it to be correct. The feature that we must correct is the one in which we have already noted that this treatment is inadequate: the atoms are not really held to positions of equilibrium but merely to each other. In other words, we must treat the solid as a system of many atoms coupled to each other, and we must find the vibrations of these atoms. This is a complicated problem in vibration theory, something like the problems met in the vibrations of polyatomic molecules. We shall not take it up until the next chapter. In the meantime, however, there are certain general results that we can find regarding such vibrations, which are enough to allow us to make considerable progress toward understanding the equation of state of solids. A system of N particles, held together by elastic forces, has in general 3N − 6 vibrational degrees of freedom, as we saw in Eq. (6.1), Chap. IX, where we were talking about polyatomic molecules. Really a whole crystal, or solid, can be regarded as an enormous molecule, and for large values of N we can neglect the 6, saying merely that there are 3N vibrational degrees of freedom. In general, there will then be 3N different normal modes of vibration, as they are called.
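The Einstein formula (3.5) is simple to evaluate numerically. A minimal sketch, in units where Nk = 1 (so the Dulong-Petit limit is the value 3), with an illustrative characteristic temperature of 300 degrees:

```python
import math

def einstein_cv(T, theta):
    """Einstein specific heat, Eq. (3.5), in units of Nk, so the classical
    Dulong-Petit limit is 3. theta = h*nu/k is the characteristic
    temperature of Eq. (3.6)."""
    x = theta / T
    # written with exp(-x) to avoid overflow at very low temperature
    return 3.0 * x**2 * math.exp(-x) / (1.0 - math.exp(-x))**2

# approaches the Dulong-Petit value 3 well above theta...
print(einstein_cv(3000.0, 300.0))
# ...and falls exponentially toward zero far below theta, faster than the
# observed T^3 behavior mentioned in the text
print(einstein_cv(30.0, 300.0))
```

The exponential low-temperature falloff of this curve, compared with the observed T³ law, is exactly the failure of the Einstein model discussed above.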
Each normal mode consists of a vibration of all the atoms of the crystal, each with its own amplitude, direction, and phase, but all with the same frequency. Each atom then finds itself surrounded, not by a stationary group of neighbors, but by neighbors which are oscillating with the same frequency as its own motion. At each point of its path, it will always find the neighbors in definite locations, so that the forces exerted on it by its neighbors will depend only on its position; but the forces will not be the same as if the neighbors remained at rest, for the positions will be different. Thus the frequency will not be the same as assumed in Einstein's theory. Our problem in the next chapter will be to consider these 3N modes of vibration and to find their frequencies, which in general will all be different. For the present, however, we may simply assume the frequencies to be known, and equal to ν_1 . . . ν_{3N}. The most general motion of the atoms, of course, is not one of these normal modes but a superposition of all of them, with appropriate amplitudes. This has a simple and in fact a very fundamental analogy in the theory of sound. The normal modes of a vibrating string or other musical instrument are simply the different harmonic overtones in which it can vibrate. Each one consists of a purely sinusoidal vibration, in which the string is divided up by nodes into certain vibrating segments. The simplest type of vibration of the string is an excitation of only one of these overtones, so that it vibrates with a pure musical tone. But the more general and common type of vibration is a superposition of many overtones, each with an appropriate amplitude and phase; it is such a superposition which gives a sound of interesting musical quality.
As a matter of fact, if we ask about the 3N vibrations of a piece of matter, for studying its specific heat, we find that the vibrations of low frequency are exactly those acoustical vibrations which are considered in the theory of sound. As we go to higher and higher frequencies and shorter and shorter wave lengths, however, the vibrations begin to depart from the simple ones predicted by the ordinary theory of sound, and finally when the wave length begins to be comparable with the interatomic distance, the departure is very great. We shall investigate the nature of these vibrations, as well as their frequencies, in the next chapter.

4. Statistical Mechanics of a System of Oscillators. Dynamically, we have seen that a crystal can be approximated by a set of 3N vibrations, if there are N atoms in the crystal. These vibrations have frequencies which we may label ν_1 . . . ν_{3N}, varying through a wide range of frequencies. To the approximation to which the restoring forces can be treated as linear, these oscillations are independent of each other, each one corresponding to a simple harmonic oscillation whose frequency is independent of its own amplitude or of the amplitudes of other harmonics. This is only an approximation, but it is sufficient for most purposes. Then the energy is the sum of the energies of the various oscillators, and each of these is quantized. That is, the energy of the jth oscillator can take on the values (n_j + 1/2)hν_j, where n_j, an integer, is the quantum number associated with this oscillator. We see that 3N quantum numbers are necessary to describe the total energy and to define a stationary state. All these quantum numbers should then appear as subscripts of the energy, and we have the relation

E_{n_1 \cdots n_{3N}} = U_0' + \sum_{j=1}^{3N}\left(n_j + \frac{1}{2}\right)h\nu_j,   (4.1)

where U_0' is the energy which the lattice would have if the amplitudes of all oscillations were zero.
Actually, even at the absolute zero of temperature, each oscillation has a half quantum of energy. Thus we may write

E_{n_1 \cdots n_{3N}} = U_0 + \sum_{j=1}^{3N} n_j h\nu_j,   (4.2)

where

U_0 = U_0' + \sum_{j=1}^{3N} \frac{1}{2}h\nu_j.   (4.3)

The quantity U_0 is the same as that given in Eq. (3.3), representing the energy of the lattice, as a function of volume, at the absolute zero of temperature. The subscripts n_1 . . . n_{3N} take the place of the single index i which we ordinarily use in defining the partition function. Thus we find for the partition function

Z = \sum_i e^{-E_i/kT} = e^{-U_0/kT} \sum_{n_1} \cdots \sum_{n_{3N}} e^{-\sum_j n_j h\nu_j/kT}.   (4.4)

We can write the exponential as a product of terms each coming from a single value of j, and can carry out the summations separately, obtaining

Z = e^{-U_0/kT} \prod_{j=1}^{3N} \sum_{n_j} e^{-n_j h\nu_j/kT}.   (4.5)

Each of the summations in Eq. (4.5) is of the form already evaluated in Sec. 5, Chap. IX. Thus we have

Z = e^{-U_0/kT} \prod_{j=1}^{3N} \frac{1}{1 - e^{-h\nu_j/kT}},   (4.6)

a product of 3N terms. Taking the logarithm, we have at once

A = U_0 + kT \sum_{j=1}^{3N} \ln\left(1 - e^{-h\nu_j/kT}\right).   (4.7)

Differentiating with respect to T, we have

S = -\frac{\partial A}{\partial T} = -k \sum_{j=1}^{3N}\left[\ln\left(1 - e^{-h\nu_j/kT}\right) - \frac{h\nu_j/kT}{e^{h\nu_j/kT} - 1}\right].   (4.8)

Finally, from Eq. (5.20), Chap. III, we have

U = A + TS = U_0 + \sum_{j=1}^{3N} \frac{h\nu_j}{e^{h\nu_j/kT} - 1}.   (4.9)

By differentiating Eq. (4.9), we find the specific heat in agreement with the value previously found, Eq. (3.5), for the special case where all 3N frequencies are equal. Having found the Helmholtz free energy (4.7), we can find the pressure by differentiating with respect to volume. We have

P = -\left(\frac{\partial A}{\partial V}\right)_T = P_1^0\frac{V_0 - V}{V_0} + P_2^0\left(\frac{V_0 - V}{V_0}\right)^2 + \sum_{j=1}^{3N} \frac{\gamma_j}{V}\,\frac{h\nu_j}{e^{h\nu_j/kT} - 1},   (4.10)

where

\gamma_j = -\frac{d \ln \nu_j}{d \ln V}.   (4.11)

The first two terms in Eq. (4.10) are the pressure at the absolute zero of temperature, which we have already discussed. The summation represents the thermal pressure. It is different from zero only because the γ_j's are different from zero; that is, because the vibrational frequencies depend on volume. We naturally expect this dependence; as the crystal is compressed it becomes harder, the restoring forces become greater, and vibrational frequencies increase, so that the ν_j's increase with decreasing volume and the γ_j's are positive.
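The chain from Eq. (4.4) through Eq. (4.9) can be checked numerically: with A from Eq. (4.7), S from Eq. (4.8), and U from Eq. (4.9), the identity U = A + TS must hold exactly, and A must agree with a brute-force evaluation of the partition function. A minimal sketch in reduced units (k = 1, each oscillator given by its quantum hν_j, with arbitrary illustrative values):

```python
import math

# Reduced units: k = 1; each oscillator specified by its quantum h*nu_j.
hnus = [0.7, 1.3, 2.1]      # arbitrary illustrative frequencies
T = 1.5
U0 = 0.5 * sum(hnus)        # zero-point energy, Eq. (4.3), taking U0' = 0

# Helmholtz free energy, Eq. (4.7)
A = U0 + T * sum(math.log(1.0 - math.exp(-hv / T)) for hv in hnus)
# Entropy, Eq. (4.8)
S = -sum(math.log(1.0 - math.exp(-hv / T)) - (hv / T) / (math.exp(hv / T) - 1.0)
         for hv in hnus)
# Internal energy, Eq. (4.9)
U = U0 + sum(hv / (math.exp(hv / T) - 1.0) for hv in hnus)

assert abs(U - (A + T * S)) < 1e-12   # the identity U = A + TS

# Cross-check Eq. (4.7) against a brute-force partition function, Eq. (4.4):
Z = 1.0
for hv in hnus:
    Z *= sum(math.exp(-(n + 0.5) * hv / T) for n in range(400))
assert abs(A - (-T * math.log(Z))) < 1e-9
```

The truncation at 400 quanta per mode is harmless here because the Boltzmann factors fall off geometrically.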
If we consider the γ_j's to be independent of temperature, each term of the summation in Eq. (4.10) is proportional to the energy of the corresponding oscillator as a function of temperature, given by Eq. (4.9), divided by the volume V. At high temperatures, we know that the quantum expression for the energy of an oscillator, (1/2)hν_j + hν_j/(e^{hν_j/kT} − 1), approaches the classical value kT. Thus, at high temperature the thermal pressure approaches

\frac{kT}{V}\sum_{j=1}^{3N} \gamma_j - \sum_{j=1}^{3N} \frac{\gamma_j h\nu_j}{2V}.   (4.12)

The first term is an expression similar to the pressure of a perfect gas, NkT/V, except that the constant of proportionality is now Σ_j γ_j instead of N, the number of molecules. We shall find that the γ_j's are generally between 1 and 2, so that, since there are 3N terms in the summation over j, where N is the number of atoms, the thermal pressure as indicated by Eq. (4.12) has a term which is from three to six times as great as the corresponding pressure of a perfect gas of the same number of atoms, at the same temperature and volume as the solid. The second term of Eq. (4.12), coming from the zero point energy, gives a decrease of thermal pressure compared to this gas pressure which is independent of temperature, until we go down to low temperatures. There the thermal pressure does not decrease so rapidly with decreasing temperature as we should estimate from the gas law, but instead falls to the value zero at the absolute zero. The change of thermal pressure with temperature, involving the derivative of Eq. (4.10), approaches zero very strongly at the absolute zero, just as the specific heat does, and since this derivative enters the formula for thermal expansion, this quantity goes to zero at the absolute zero. In fact, as we shall see in the next paragraph, there is a close connection between the thermal expansion and the specific heat.
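The high-temperature limit quoted in Eq. (4.12) rests on the expansion hν/(e^{hν/kT} − 1) → kT − hν/2 + O(1/T). A short check in reduced units (k = 1, hν = 1, values chosen only for illustration):

```python
import math

# High-temperature limit used in Eq. (4.12):
#   h*nu / (exp(h*nu/kT) - 1)  ->  kT - h*nu/2,  with error ~ (h*nu)^2/(12 kT)
hnu = 1.0
for T in (10.0, 100.0, 1000.0):
    exact = hnu / (math.exp(hnu / T) - 1.0)
    approx = T - hnu / 2.0
    assert abs(exact - approx) < hnu**2 / (10.0 * T)   # error shrinks as 1/T
```

The constant −hν/2 in the limit is what produces the temperature-independent second term of Eq. (4.12), traceable to the zero-point energy.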
To get a complete equation of state from statistics, we need only take the expression (4.10) for P and expand the summation, the thermal pressure, as a power series in (V_0 − V)/V_0. This can be done if we can find the dependence of the ν_j's on volume, from the theory. Then we can identify the resulting equation with Eq. (1.5), equating coefficients, and find P_0, P_1, and P_2, which determine the pressure, in terms of the structure of the crystal. Knowing the meaning of P_0, P_1, and P_2 from our earlier discussion, this allows us to find the thermal expansion, compressibility, and change of compressibility with pressure, as functions of temperature. Since very few experiments are available dealing with the changes of these quantities with temperature, we shall confine our attention to the thermal expansion at zero pressure. From Eq. (1.2), this is approximately da_0/dT, where from Eq. (1.9) we have a_0 = P_0/P_1. Comparing with Eq. (4.10) above, we see that P_0 is the value of the summation when V = V_0. The quantity P_1, which we have seen to be the reciprocal of the compressibility, equals P_1^0 plus a small term coming from the summation, which we can neglect for a very rough discussion, though of course it would have to be considered for accurate work. Since P_1^0 is independent of temperature, this gives us

\text{Thermal expansion} = \chi \sum_{j=1}^{3N} \frac{\gamma_j k}{V}\left(\frac{h\nu_j}{kT}\right)^2 \frac{e^{h\nu_j/kT}}{\left(e^{h\nu_j/kT} - 1\right)^2},   (4.13)

where χ is the compressibility. Comparison of Eq. (4.13) with the formula for heat capacity of linear oscillators, for example Eq. (3.5), shows at once the close relation between the heat capacity and the thermal expansion. Each term in Eq. (4.13) is proportional to the term in the heat capacity arising from the same oscillator, so that the thermal expansion shows qualitatively the same sort of behavior, becoming constant at high temperatures but reducing to zero as the temperature approaches the absolute zero.
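Summed with a common γ for every mode, Eq. (4.13) gives the Grüneisen relation discussed in the next paragraph: thermal expansion = γχC_V/V, so γ = αV/(χC_V). A sketch solving for γ with rough room-temperature values for copper (the numerical values are approximations quoted only for illustration, not data from the text):

```python
# Empirical Grueneisen constant from thermal expansion = gamma*chi*Cv/V,
# i.e. gamma = alpha * V / (chi * Cv). Rough room-temperature values for
# copper, quoted for illustration only:
alpha = 4.95e-5   # volume expansion coefficient, 1/deg (3 x linear value)
chi = 7.3e-12     # compressibility, 1/Pa (bulk modulus roughly 137 GPa)
cv = 24.4         # heat capacity, J/(mol deg), about 3R (Dulong and Petit)
vol = 7.1e-6      # molar volume, m^3/mol

gamma = alpha * vol / (chi * cv)
assert 1.5 < gamma < 2.5   # "generally in the neighborhood of 2"
```

With these inputs γ comes out close to 2, in line with the empirical values quoted later in the section.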
While experimental data for thermal expansions are not nearly so extensive as those for specific heats, still they are sufficient to show that this is actually the observed behavior. To allow the construction of a simple theory of thermal expansions, Grüneisen assumed that the quantities γ_j were all equal to each other and to a constant γ, which he regarded as an empirical constant. To see the meaning of γ, we assume that the frequencies ν_j are given in terms of the volume by the relation

\nu_j = \frac{c_j}{V^\gamma},   (4.14)

where c_j is a constant, so that the frequencies are inversely proportional to the γth power of the volume. Since surely the ν_j's increase with decreasing volume, this is a reasonable form of dependence to assume. Then we find at once that

-\frac{d \ln \nu_j}{d \ln V} = \gamma,   (4.15)

so that the γ defined in Eq. (4.14) is the same as the γ_j of Eq. (4.11). We see that γ = 1 or γ = 2 respectively corresponds to the frequencies being inversely proportional to the volume or to the square of the volume. If we assume with Grüneisen that γ_j = γ, we then have from Eq. (4.13)

\text{Thermal expansion} = \frac{\gamma \chi C_V}{V}.   (4.16)

Equation (4.16) is a relation between the thermal expansion, compressibility, specific heat, volume, and the parameter γ. If we have an independent theoretical way of finding γ, we can use it to compute the thermal expansion. Otherwise, we can use measured values of thermal expansion, compressibility, specific heat, and volume, to find empirical values of γ. Both types of discussion will be given in later chapters, where we discuss specific types of solids. We shall find that the agreement between the various methods of finding γ is rather good, and that values for most ordinary materials are between 1 and 3, generally in the neighborhood of 2.

5. Polymorphic Transitions. It has been mentioned in Chap. XI, Sec.
7, that the transition lines in the P-T diagram between polymorphic phases of the same substance tend to have positive slope, a change of pressure of something less than 12,000 atm. corresponding to a change of temperature of about 200°. With our present knowledge of the equation of state of solids, we can attempt a theoretical explanation of this relation. To work out the slope, we note that Clapeyron's equation can be written dP/dT = ΔS/ΔV, where ΔS is the difference of entropy between one phase and the other, ΔV the difference of volume. We have seen in the present chapter that the entropy of a single phase increases when its volume increases, and we have found quantitative methods of calculating the amount of change. Let us, then, tentatively assume that the relation of change of entropy to change of volume in going from one phase to another is about the same as when we change the volume of a single phase. This is certainly a very crude assumption, but we shall find that it gives results of the right order of magnitude. The assumption we have just made amounts to replacing ΔS/ΔV by dS/dV, computed for a single phase. Now from Eq. (4.8) we can write the entropy of a substance per gram atom as

S = 3N_0 k\left[-\ln\left(1 - e^{-h\nu/kT}\right) + \frac{h\nu/kT}{e^{h\nu/kT} - 1}\right],   (5.1)

where N_0 is Avogadro's number, and we assume for simplicity that all frequencies ν_j are the same. We then have

\frac{dS}{dV} = \frac{dS}{d\nu}\,\frac{d\nu}{dV} = -\frac{\gamma\nu}{V}\,\frac{dS}{d\nu},   (5.2)

where γ has the same significance as in the last section. Differentiating Eq. (5.1), this leads to

\frac{dS}{dV} = \frac{3N_0 k\gamma}{V}\left(\frac{h\nu}{kT}\right)^2 \frac{e^{h\nu/kT}}{\left(e^{h\nu/kT} - 1\right)^2} = \frac{\gamma C_V}{V}.   (5.3)

Now C_V is about 3R per gram atom, and from the preceding section we see that γ is about 2 for most substances. Furthermore, examination of experimental values shows that V is of the order of magnitude of 10 cc. per gram atom, for most substances. Putting in these values and putting proper units in Eq. (5.3), we find that dS/dV is approximately 50 atm. per degree, or 10,000 atm.
for 200 degrees, just about the value that Bridgman finds to be most common experimentally. The fact that this calculation agrees so well with the average behavior of many materials is some justification for thinking that the major part of the entropy change from one polymorphic phase to another is simply that associated with the change of volume. The individual variations are so great, however, that no great claim for accuracy can be made for such a calculation as we have just made. In the present chapter, we have laid the foundations for a statistical study of the equation of state of solids, though we have not made any use of a model, and hence have not been able to compute the thermodynamic quantities we have been talking about. We proceed in the next chapter to a discussion of atomic vibrations in solids, with a view to finding more accurate information about specific heats and thermal expansion. Later, when we study different types of solids more in detail, we shall make comparisons with experiment for many special cases.

CHAPTER XIV

DEBYE'S THEORY OF SPECIFIC HEATS

We have seen in the last chapter that the essential step in investigating the specific heat and thermal expansion of solids is to find the frequencies of the normal modes of elastic vibration. We shall take this problem up, in its simplest form, in the present chapter. The vibrations of a solid are generally of two sorts: vibrations of the molecules as a whole and internal vibrations within a molecule. This distinction of course can be found only in molecular crystals and is lacking in a crystal, like that of a metal, where all the atoms are of the same sort. For this reason solids of the elements have simpler specific heats than compounds, and we take them up first, postponing discussion of compounds to the next chapter.
At first sight, on account of the large number of atoms in a crystal, it might seem to be impossibly hard to solve the problem of their elastic vibrations, but as a matter of fact it is just the large number of atoms that makes it possible to handle the problem. For the vibrations of a finite continuous piece of matter can be handled by the theory of elasticity, and for waves long compared to atomic dimensions this theory is correct. We start therefore by considering the elastic vibrations of a continuous solid, and later ask how the vibrations are affected by the fact that the solid is really made of atoms. We have already mentioned briefly in Chap. XIII, Sec. 3, the close relation between the normal modes of vibration of a solid composed of atoms and the harmonic or overtone vibrations of acoustics.

1. Elastic Vibrations of a Continuous Solid. It is well known that elastic waves can be propagated through a solid. The waves are of two sorts, longitudinal and transverse, having different velocities of propagation. The longitudinal waves are analogous to the sound waves in a fluid, while the transverse waves, which cannot exist in a fluid, also have many properties similar to sound waves and are ordinarily treated as a branch of acoustics. The velocities of both sorts of waves are determined by the elastic constants of the material and are independent of the frequency, or wave length, of the waves, within wide limits. The waves with which we are familiar have frequencies in the audible range, less than 10,000 or 15,000 cycles per second. The velocities of elastic waves in solids are of the order of magnitude of several thousand meters per second (something like ten times the velocity in air). Since we have
\lambda\nu = v,   (1.1)

where λ is the wave length, ν the frequency or number of vibrations per second, and v the velocity, the shortest sound waves with which we are familiar have a wave length of the order of magnitude of (5 × 10⁵)/10⁴ = 50 cm., taking the velocity to be 5000 m. per second, the frequency 10,000 cycles. By methods of supersonics, frequencies up to 100,000 cycles or more can be investigated, corresponding to waves of something like 5 cm. length. There is every reason to suppose, however, that this is not the limit for elastic waves. In fact, we have every reason to believe that waves of shorter and shorter wave length, and higher and higher frequency, are possible, up to the limit in which the wave length is comparable with the distance between atoms. It is obvious that the wave length cannot be appreciably shorter than interatomic distances. In fact, if the wave length were just the interatomic distance, successive atoms would be in the same phase of the vibration, and there would not really be a vibration of one atom with respect to another at all. The shortest wave which we can really have comes when successive atoms vibrate opposite to each other, so that the wave length is twice the distance between atoms. It is interesting to find the corresponding order of magnitude of the frequency of vibration. If we set λ = 5 × 10⁻⁸ cm., of the order of magnitude of twice an interatomic distance in a metal, we have

\nu = \frac{5 \times 10^5}{5 \times 10^{-8}} = 10^{13} \text{ cycles per second}.

This is a frequency of an order of magnitude of those found in the infrared vibrations of light waves. There is good experimental evidence that such frequencies really represent the maximum possible frequencies of acoustical vibrations. The situation, then, is that there is a natural upper limit set to frequencies, and lower limit to wave lengths, of elastic waves, by the atomic nature of matter.
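The two numerical estimates above follow directly from Eq. (1.1); a trivial check of the arithmetic, using the same assumed wave velocity of 5000 m. per second:

```python
# Both estimates come from lambda * nu = v, Eq. (1.1), with an assumed
# elastic-wave velocity of 5000 m/sec = 5e5 cm/sec.
v = 5.0e5                      # cm per second

lam_audible = v / 1.0e4        # wave length at 10,000 cycles per second
assert lam_audible == 50.0     # 50 cm, the shortest familiar sound wave

nu_max = v / 5.0e-8            # wave length of twice an interatomic distance
assert abs(nu_max / 1.0e13 - 1.0) < 1e-9   # ~1e13 cycles per second
```

The limiting frequency of about 10¹³ cycles per second is the infrared-range figure quoted in the text.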
It can be shown theoretically that as this limit is approached, the velocity of the waves no longer is independent of wave length. However, the change is not great; it changes by something not more than a factor of two. This change is the only important difference between a vibration theory based on the theory of elasticity and a theory based directly on interatomic forces, provided only that we recognize the lower limit to wave lengths. Our first approach to a theory, the one made by Debye, takes account of the lower limit of wave lengths but neglects the change of velocity with frequency. In a finite piece of solid, such for instance as a rod, standing waves are set up. These arise of course from constructive interference between direct and reflected waves, and they exist only when the wave length and the dimensions of the solid have certain relations to each other. For the transverse vibrations of a string, these relations are very familiar: the length of the string must be a whole number of half wave lengths. For other shapes of solid, the relations are similar though not so simple. Arranging the standing waves in order of decreasing wave length or increasing frequency, we have a series of frequencies of vibration, often called characteristic frequencies, or harmonics. For the string, these are simply a fundamental frequency and any integral multiple of this fundamental. The resulting harmonics or overtones form the basis of musical scales and chords. For other shapes of solids, the relations are not simple, and the overtones do not form pleasing musical relations with the fundamental. Now for all one knows in ordinary acoustical theory, the number of possible overtones is infinite, though of course few of them can be heard on account of the limitations of the ear.
Thus if we have a string, with frequencies which are integral multiples of a fundamental, there seems no reason why the integer cannot be as large as we please. This no longer holds, as we can immediately see, when we consider the atomic nature of the solid. For we have just mentioned that there is an upper limit to possible frequencies, or a lower limit to possible wave lengths, set by interatomic distances. The highest possible overtone will have a frequency of the order of this limiting frequency. That means that the solid has a finite number of possible overtone vibrations. And now we see the relation between our acoustical treatment and the vibration problem we started with: these overtone vibrations are just the normal modes of vibration of the atoms in the crystal, which we wanted to investigate. If there are N atoms, with 3N degrees of freedom, we have mentioned in Chap. XIII that we should expect 3N modes of oscillation; when we work out the number of overtones, we find in fact that there are just 3N allowed vibrations. The most general vibrational motion of our solid is one in which each overtone vibrates simultaneously, with an arbitrary amplitude and phase. But in thermal equilibrium at temperature T, the various vibrations will be excited to quite definite extents. It proves to be mathematically the case that each of the overtones behaves just like an independent oscillator, whose frequency is the acoustical frequency of the overtone. Thus we can make immediate connections with the theory of the specific heats of oscillators, as we have done in Chap. XIII, Sec. 4. If the atoms vibrated according to the classical theory, then we should have equipartition, and at temperature T each oscillation would have the mean energy kT. This means that each of the 3N overtones would have equal energy, on the average, so that the energy of all of them put together would be 3NkT, just as we found in Chap.
XIII, Sec. 3, by considering uncoupled oscillators. The fundamental and first few harmonics, which are in the audible range, would have the average energy kT, just like the harmonics of higher frequency. This does not mean that we should be able to hear the rod in thermal equilibrium, because kT is such a small energy that the amplitude of each overtone would be quite inappreciable. Of the 3N harmonics, by far the largest number come at extremely high frequencies, and it is here that the thermal energy is concentrated. The superposition of these high frequency overtone vibrations, each with energy proportional to the temperature, is just what we mean by temperature vibration, and the energy is the ordinary internal energy of the crystal. Actually the oscillations take place according to the quantum theory rather than the classical theory, and we have seen in Chap. XIII, Sec. 4, how to handle them. Each frequency ν_j can have a characteristic temperature Θ_j associated with it, according to the equation

\Theta_j = \frac{h\nu_j}{k}.   (1.2)

Then the heat capacity is

C_V = \sum_{j=1}^{3N} k\left(\frac{h\nu_j}{kT}\right)^2 \frac{e^{h\nu_j/kT}}{\left(e^{h\nu_j/kT} - 1\right)^2},   (1.3)

so that the heat capacity associated with each oscillator will be zero at temperatures much below Θ_j, rising to the classical value at temperatures considerably above Θ_j. For the lower harmonics, the characteristic temperatures are extremely low, so that these vibrations are excited in a classical manner at any reasonable temperature. The highest harmonics, however, have values of Θ_j in the neighborhood of room temperature, and since many of the harmonics come in this range, the specific heat does not attain its classical value until temperatures somewhat above room temperature are reached.

2. Vibrational Frequency Spectrum of a Continuous Solid. To find the specific heat, on the quantum theory, we must superpose Einstein specific heat curves for each natural frequency ν_j, as in Eq. (1.3). Before we can do this, we must find just what frequencies of vibration are allowed. Let us assume that our solid is of rectangular shape, bounded by the surfaces x = 0, x = X, y = 0, y = Y, z = 0, z = Z. The frequencies will depend on the shape and size of the solid, but this does not really affect the specific heat, for it is only the low frequencies that are very sensitive to the geometry of the solid. As a first step in investigating the vibrations, let us consider those particular waves that are propagated along the x axis.
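The superposition of Einstein curves prescribed by Eq. (1.3) can be sketched numerically. The mode spectrum below is a toy assumption, not a real solid: a few low harmonics plus many modes near a characteristic temperature of 300 degrees; units are chosen so each fully excited mode contributes k = 1.

```python
import math

def oscillator_cv(theta, T):
    """One term of Eq. (1.3): the Einstein heat capacity of a single mode with
    characteristic temperature theta = h*nu/k (Eq. 1.2), in units of k."""
    x = theta / T
    return x**2 * math.exp(-x) / (1.0 - math.exp(-x))**2

# A toy "solid" of 30 modes: a few low harmonics plus many high-frequency ones.
thetas = [1.0, 2.0, 5.0] + [300.0] * 27

def total_cv(T):
    return sum(oscillator_cv(th, T) for th in thetas)

# At 300 deg the low harmonics are fully classical (each contributes nearly k)
# while the high modes are only partly excited, so Cv lies below 30 k:
assert total_cv(300.0) < 30.0
# Well above every characteristic temperature, the classical value appears:
assert abs(total_cv(30000.0) - 30.0) < 0.1
```

This mirrors the statement above: the specific heat reaches its classical value only when the temperature stands well above the highest characteristic temperatures.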
Let us assume that our solid is of rectangular shape, bounded by the surfaces x = 0, x = X, y = 0, y = Y, z = 0, z = Z. The frequencies will depend on the shape and size of the solid, but this does not really affect the specific heat, for it is only the low frequencies that are very sensitive to the geometry of the solid. As a first step in investigating the vibrations, let us consider those particular waves that are propagated along the x axis.

INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XIV

It is a familiar fact that there are two sorts of waves, traveling and standing waves, and that a standing wave can be built up by superposing traveling waves in different directions. We start with a traveling wave propagated along the x axis. Let us suppose that the point which was located at x, y, z in the unstrained medium is displaced by the wave to the point x + ξ, y + η, z + ζ, so that ξ, η, ζ are the components of displacement. To describe the wave, we must know ξ, η, ζ as functions of x, y, z, t. For a wave propagated along the x axis, ξ represents a longitudinal displacement, η and ζ transverse displacements. For the sake of definiteness let us consider a longitudinal wave. Then the general expression for a longitudinal wave traveling along the x axis, with velocity v, frequency ν, amplitude A, and phase α, is

ξ = A sin [2πν(t − x/v) − α].  (2.1)

Rather than using the phase constant α, it is often convenient to use both sine and cosine terms, with independent amplitudes A and B, obtaining

ξ = A cos 2πν(t − x/v) + B sin 2πν(t − x/v),  (2.2)

an expression equivalent to Eq. (2.1) if the constants A and B of Eq. (2.2) have the proper relation to the A and α of Eq. (2.1). Still another way to write such a wave, this time using complex notation, is

ξ = A e^(2πiν(t − x/v)),  (2.3)

where the A of Eq. (2.3) is still another constant, which may be complex and so take care of the phase. In Eq.
(2.3) it is to be understood that the real part of the complex expression is the value to be used. By writing expressions in (t + x/v) analogous to Eqs. (2.1), (2.2), and (2.3), we get waves propagated along the negative x axis. Adding such a wave to the one along the positive x axis, we have a standing wave. As a simple example, we take the case of Eq. (2.2) and let B = 0. Then we have

ξ = A cos 2πν(t − x/v) + A cos 2πν(t + x/v)
  = A [cos 2πνt cos (2πνx/v) + sin 2πνt sin (2πνx/v) + cos 2πνt cos (2πνx/v) − sin 2πνt sin (2πνx/v)]
  = 2A cos 2πνt cos (2πνx/v).  (2.4)

By using different combinations of functions, we can get standing waves of the form cos 2πνt sin (2πνx/v), sin 2πνt cos (2πνx/v), and sin 2πνt sin (2πνx/v) as well. The particular characteristic of a standing wave is that the displacement is the product of a function of the time and a function of the position x. As a result of this, the shape, given by the function of x, is the same at any instant of time, only the magnitude of the displacement varying from instant to instant.

Certain boundary conditions must be satisfied at the surfaces of the solid. For instance, the surface may be held rigidly so that it cannot vibrate, or it may be in contact with the air so that it cannot develop a pressure at the surface. The allowed overtones will depend on the particular conditions we assume, but again this is important only for the low overtones and is immaterial for the high frequencies. To be specific, then, let us assume that the surface is held rigidly, so that the displacement is zero on the surface, or when x = 0, x = X. The first condition can be satisfied by using a standing wave containing the factor sin (2πνx/v) rather than cos (2πνx/v), since sin 0 = 0. Then for the second condition we must have

sin (2πνX/v) = 0.
(2.5)

Condition (2.5) can be satisfied in many ways, for we know that the sine of any integer times π is zero. Thus we satisfy our boundary condition if we make

2νX/v = s,  s = 0, 1, 2, . . . ,  (2.6)

where s is an integer. Using the relation ν/v = 1/λ, this can be written

λ = 2X/s,  or  s(λ/2) = X,  (2.7)

showing that a whole number of half wave lengths must be contained in the length of the solid. Equation (2.6) or (2.7) solves entirely the problem of the allowed vibrations of a continuous solid, so long as we limit ourselves to longitudinal waves propagated along the x direction. If we introduce the additional condition, demanded by the atomic nature of the medium, that the minimum wave length is twice the distance between atoms, we can immediately find the number of such possible overtones. Let there be N₀ atoms in a row in the length X. Then the distance between atoms, along the x axis, is X/N₀. Our condition for the maximum possible overtone is then λ/2 = X/N₀, or N₀λ/2 = X, showing that there are just N₀ overtones corresponding to propagation along the x axis. If we investigate transverse vibrations, but propagation along the x axis, we obtain exactly analogous results, but with η or ζ substituted for ξ. The allowed wave lengths for transverse vibrations are the same as for longitudinal ones, but on account of the fact that the velocity of transverse waves is different from that of longitudinal waves, the frequencies are different. There are N₀ possible vibrations for each of the two directions of transverse vibration, giving 3N₀ vibrations in all corresponding to propagation along the x axis.

We can now use the results that we have obtained as a guide to the general problem of waves propagated in an arbitrary direction. To describe the direction of propagation, imagine a unit vector along the wave normal. Let the x, y, and z components of this unit vector be l, m, n.
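Before going on to three dimensions, the one-dimensional counting just described can be made concrete. A minimal sketch (the chain length and velocity are illustrative values assumed for the example; the enumeration itself follows Eq. (2.7) and the cutoff λ_min = 2d):

```python
def allowed_longitudinal_modes(N0, X, v):
    """Overtones for waves along x in a row of N0 atoms of length X.

    Eq. (2.7): a whole number s of half wave lengths fits in X, so
    lambda = 2X/s; the atomic spacing d = X/N0 cuts the series off
    at lambda_min = 2d, i.e. at s = N0.
    """
    d = X / N0
    modes = []
    s = 1
    while True:
        lam = 2 * X / s
        if lam < 2 * d:          # shorter waves are meaningless atomically
            break
        modes.append((s, lam, v / lam))   # (integer, wave length, frequency)
        s += 1
    return modes

modes = allowed_longitudinal_modes(N0=8, X=8.0e-10, v=5000.0)
print(len(modes))      # just N0 = 8 overtones
print(modes[-1][1])    # minimum wave length 2d = 2e-10
```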
These quantities are often called direction cosines, for it is obvious that they are equal respectively to the cosines of the angles between the direction of the wave normal and the x, y, z axes. Then in place of the quantity sin 2πν(t − x/v), or similar expressions, appearing in Eqs. (2.1), (2.2), and (2.3), we must use the expression

sin 2πν[t − (lx + my + nz)/v].  (2.8)

Let us verify the fact that Eq. (2.8) represents the desired plane wave. At time t, the expression (2.8) is zero when

2πν[t − (lx + my + nz)/v] = π × an integer, or lx + my + nz = vt − s(λ/2), s an integer.  (2.9)

Now

lx + my + nz = a  (2.10)

is the equation of a plane whose normal is a vector with components proportional to l, m, n, and whose perpendicular distance from the origin, measured along the normal drawn through the origin, is a. Thus the surfaces given, by putting different integers in Eq. (2.9), are a series of equidistant parallel planes with normal l, m, n, the distance apart being v/2ν, and the distance from the origin increasing linearly with the time, with velocity v. This is what we should expect for the zeros of a traveling wave of wave length λ = v/ν, so that the zeros come half a wave length apart.

By superposing traveling waves of the nature of Eq. (2.8), we can set up the standing waves that we wish. We must superpose eight waves, having all eight possible combinations of signs for the three terms lx, my, nz. One of the many types of standing waves which we can set up in this way has the form

A sin 2πνt sin (2πνlx/v) sin (2πνmy/v) sin (2πνnz/v),  (2.11)

and this proves to be the one that we need. We impose the boundary condition that the displacement be zero when x = 0, x = X, y = 0, y = Y, z = 0, z = Z. The conditions at x = 0, y = 0, z = 0 are automatically satisfied by the function we have chosen in Eq. (2.11). To satisfy those at x = X, y = Y, z = Z, we must make
2νlX/v = s_x,  2νmY/v = s_y,  2νnZ/v = s_z,  (2.12)

where s_x, s_y, s_z are integers. From Eq. (2.12), we have

l = s_x λ/2X,  m = s_y λ/2Y,  n = s_z λ/2Z,  where λ = v/ν.  (2.13)

Since l, m, n are the components of a unit vector, we must have l² + m² + n² = 1, or

(λ/2)² [(s_x/X)² + (s_y/Y)² + (s_z/Z)²] = 1.  (2.14)

Equation (2.14) can be used to find the allowed wave lengths, in terms of the integers s_x, s_y, s_z:

1/λ = (1/2) √[(s_x/X)² + (s_y/Y)² + (s_z/Z)²],  (2.15)

or

λ = 2/√[(s_x/X)² + (s_y/Y)² + (s_z/Z)²].  (2.16)

We can now introduce the condition demanded by the atomic nature of the medium. We shall do this only for the simplest case of a simple cubic lattice, but similar results hold in general. Let the atoms be spaced with lattice spacing d, such that

X = N_x d,  Y = N_y d,  Z = N_z d,  and  N_x N_y N_z = N  (2.17)

is the total number of atoms in the crystal. We assume as the condition for the maximum overtone that the minimum distance between nodes, along any one of the three axes, is d. That is, considering for instance the x direction and referring to Eq. (2.11), we assume that increasing x by the amount d increases the argument of the sine, or 2πνlx/v, by π. Expressed otherwise, this states that for the maximum overtone we must have s_x = X/d, and similarly s_y = Y/d, s_z = Z/d. This means that all values of s_x, s_y, s_z are possible up to the values s_x = N_x, s_y = N_y, s_z = N_z. We can visualize the situation most easily by considering a three-dimensional space in which s_x, s_y, s_z are plotted as three rectangular coordinates. Then the integral values of s_x, s_y, s_z, which represent overtones, form a lattice of points, one per unit volume of the space. The overtones allowed for an atomic crystal are represented by the points lying between s_x = 0, s_x = N_x, s_y = 0, s_y = N_y, s_z = 0, s_z = N_z. The volume of space filled with allowed points is thus N_x N_y N_z = N, and since there is one point per unit volume there are N allowed overtones.
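The counting argument can be verified directly by enumerating the lattice of integer points. A minimal sketch (unit lattice spacing assumed for the example; Eq. (2.16) gives the wave length of each point):

```python
import itertools
import math

def overtone_count_and_min_wavelength(Nx, Ny, Nz, d):
    """Count the lattice points s = (sx, sy, sz) allowed by Eqs. (2.12)-(2.17).

    X = Nx*d, etc.; sx runs from 1 to Nx, so the number of points is
    Nx*Ny*Nz = N, and Eq. (2.16) gives each point's wave length.
    """
    X, Y, Z = Nx * d, Ny * d, Nz * d
    lams = []
    for sx, sy, sz in itertools.product(range(1, Nx + 1),
                                        range(1, Ny + 1),
                                        range(1, Nz + 1)):
        r = math.sqrt((sx / X) ** 2 + (sy / Y) ** 2 + (sz / Z) ** 2)
        lams.append(2.0 / r)          # Eq. (2.16): lambda = 2/r
    return len(lams), min(lams)

count, lam_min = overtone_count_and_min_wavelength(4, 4, 4, d=1.0)
print(count)     # N = 64 overtones per polarization
print(lam_min)   # 2d/sqrt(3), the corner point sx/X = sy/Y = sz/Z = 1/d
```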
In place of using this three-dimensional space, it is often more convenient to use what is called a reciprocal space. This is one in which s_x/X, s_y/Y, s_z/Z are plotted. The allowed points in the reciprocal space then form a lattice with spacings 1/X, 1/Y, 1/Z, so that there is one point in volume 1/XYZ = 1/V, if V = XYZ is the volume of the crystal. For the maximum overtone we have s_x/X = s_y/Y = s_z/Z = 1/d, so that the allowed overtones fill a cube of volume 1/d³, or the reciprocal of the volume of the unit cell in the crystal. The number of allowed overtones, given by the volume (1/d³) divided by the volume (1/V) per allowed overtone, is V/d³ = N, as before. It is plain why this space is called a reciprocal space, since distances, volumes, etc., in it are reciprocals of the corresponding quantities in ordinary space.

We have so far omitted discussion of the fact that we have both longitudinal and transverse vibrations. For a single traveling wave like Eq. (2.8), there are of course three possible modes of vibration, one longitudinal along the direction l, m, n, and two transverse in two directions at right angles to this direction. The longitudinal wave will travel with velocity v_l, the transverse ones with velocity v_t. We can superpose eight longitudinal progressive waves to form a longitudinal standing wave, and by superposing transverse progressive waves we can form two transverse standing waves. Three standing waves can be set up in this way for each set of integers s_x, s_y, s_z. These three waves will all correspond to the same wave length, according to Eq. (2.16), but to different frequencies, according to Eq. (1.1). Considering the three modes of vibration, there will be in all 3N allowed overtones, just the same as in the theories of Dulong-Petit and Einstein, discussed in Chap. XIII, Sec. 3. From Eqs.
(1.1) and (2.16), we can now set up the frequency distribution, or spectrum, as it is often called from the optical analogy, of our oscillations. We have at once

ν = v/λ = (v/2) √[(s_x/X)² + (s_y/Y)² + (s_z/Z)²].  (2.18)

In our reciprocal space, where s_x/X, s_y/Y, s_z/Z are the three coordinates, the quantity √[(s_x/X)² + (s_y/Y)² + (s_z/Z)²] is simply the radius vector, which we may call r. Thus we have

ν = vr/2,  (2.19)

the frequency being proportional to the distance from the center. Now we can easily find the number of overtones whose frequencies lie in the range dν of frequencies. For the points in the reciprocal space representing them must lie in the shell between r and r + dr, where r is given by Eq. (2.19). This shell has the volume 4πr² dr, or 32πν² dν/v³. Only positive values of the integers s_x, s_y, s_z are to be used, however, so that we have overtones only in the octant corresponding to all coordinates being positive. This means that the part of the shell containing allowed overtones is one eighth of the value above, or 4πν² dν/v³. We have seen that the number of overtones per unit volume in the reciprocal space is V. Thus the number of allowed overtones in dν, for one direction of vibration, is

dN = (4πν²V/v³) dν.  (2.20)

Considering the three directions of vibration, the number of allowed overtones in dν is

dN = 4πν² dν V (1/v_l³ + 2/v_t³).  (2.21)

Formulas (2.20) and (2.21) hold only when s_x/X, s_y/Y, s_z/Z are less than 1/d; that is, when the spherical shell lies entirely inside the cube extending out to s_x/X = 1/d, etc. For larger values of the frequency, the shell lies partly outside the cube, so that only part of it corresponds to allowed vibrations. It is a problem in solid geometry, which we shall not go into, to determine the fraction of the shell lying inside the cube. When this fraction is determined, we must multiply the formula (2.20) by the fraction to get the actual number of allowed states per unit frequency range. In Fig.
XIV-1 we plot the quantity (1/V)(dN/dν), the number of vibrations per unit volume per unit frequency range, computed in this way, for one direction of vibration. The curve starts up as a parabola,

(1/V)(dN/dν) = (4π/v³)ν²,

but starts down when ν = v/2d, the point where the sphere is inscribed in the cube, and reaches zero at ν = √3 v/2d, where the sphere is circumscribed about the cube. The area under the curve of Fig. XIV-1 of course is N/V, where N is the total number of atoms. When we consider both types of vibration, the longitudinal and the transverse, we must superpose two curves like Fig. XIV-1, with different scales on account of the difference between v_l and v_t. This results in a curve similar to Fig. XIV-2, having two peaks, the one at lower frequencies corresponding to the transverse vibration, which has a lower velocity than the longitudinal vibration.

FIG. XIV-1. Number of vibrations of one direction of polarization, per unit frequency range, in a simple cubic lattice with lattice spacing d, and constant velocity of propagation v.

In Fig. XIV-2 we have a representation of the frequencies of vibration of an elastic solid, under the assumption that the waves are propagated as in an isotropic solid, the velocity of propagation being independent of direction and wave length, but the number of overtones being limited by the conditions that the maximum values of s_x/X, s_y/Y, s_z/Z are 1/d, where d is the interatomic spacing. This is the condition appropriate to a simple cubic arrangement of atoms, an arrangement which does not actually exist in the real crystals of elements. It is not hard to make the changes in the conditions that are necessary for other types of structure, such as body-centered cubic, face-centered cubic, and hexagonal close-packed structures, which will be discussed in a later chapter. The general situation is not greatly changed.
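The parabolic rise of the distribution can be checked by brute-force counting of the lattice points. A minimal sketch (unit velocity and unit spacing assumed for the example; the direct count falls a few per cent below the smooth formula of Eq. (2.20) because of points lost along the octant boundaries, a surface correction that shrinks as the block grows):

```python
import itertools
import math

def mode_count_below(nu, v, N_side, d):
    """Directly count overtones (one polarization) with frequency < nu
    in an N_side**3 simple cubic block, using nu = v*r/2 of Eq. (2.19)."""
    X = N_side * d
    count = 0
    for s in itertools.product(range(1, N_side + 1), repeat=3):
        r = math.sqrt(sum((si / X) ** 2 for si in s))
        if v * r / 2.0 < nu:
            count += 1
    return count

v, d, N_side = 1.0, 1.0, 60
nu = 0.4 * v / (2 * d)          # well inside the inscribed sphere nu = v/2d
V = (N_side * d) ** 3
# Integrating Eq. (2.20) up to nu gives N(<nu) = (4*pi/3) * nu**3 * V / v**3:
predicted = (4 * math.pi / 3) * nu ** 3 * V / v ** 3
direct = mode_count_below(nu, v, N_side, d)
print(direct, round(predicted))   # direct count slightly below the formula
```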
We can still describe a wave by the three integers s_x, s_y, s_z, and the frequency is still given by Eq. (2.19), in terms of the radius vector in the reciprocal space. Thus the number of overtones in dν is still given by Eq. (2.20) or (2.21), provided the frequency is small enough so that the spherical shell lies entirely within the allowed region in the reciprocal space. The only difference comes in the shape of this allowed region. Instead of being a cube, it can be shown that the region takes the form of various regular polyhedra, depending on the crystal structure. These polyhedra, which are often called zones or Brillouin zones, are important in the theory of electronic conduction in metals as well as in elastic vibrations. The volumes of these zones in each case are such that they allow just N vibrations of each polarization. The zones for the three crystal structures mentioned resemble each other in that they are more nearly like a sphere than the cubical zone of the simple cubic structure. That is, the radii of the inscribed and circumscribed spheres are more nearly equal than for a cube. That means that the region in which the curve of (1/V)(dN/dν) is falling from its maximum to zero is more concentrated than in Fig. XIV-1, and corresponds to a higher maximum and more precipitate fall. If the zone were a sphere instead of a polyhedron, the fall would be perfectly sharp, as shown by the dotted line in Fig. XIV-2, the distribution being given by a parabola below a certain limiting frequency ν_max, and falling to zero above this frequency.

FIG. XIV-2. Number of vibrations per unit frequency range, in a simple cubic lattice with constant velocity of propagation. It is assumed that the velocity of the longitudinal wave is twice that of the transverse waves. Dotted curve indicates Debye's assumption.
The calculation which we have carried out in this section has been limited in accuracy by our assumption that the velocity of propagation of the elastic waves was independent of direction and of wave length. Actually neither of these assumptions is correct for a crystal. Even for a cubic crystal, the elastic properties are more complicated than for an isotropic solid, and the velocity of propagation depends on direction. And we have stated that on account of the atomic nature of the material the velocity depends on wave length, when the wave length becomes of atomic dimensions. These limitations mean that the frequency spectrum which we have so far found is not very close to the truth. Nevertheless, our model is good enough so that valuable results can be obtained from it, and we go on now to describe the approximations made by Debye, leading to the specific heat curve known by his name.

3. Debye's Theory of Specific Heats. Debye's approximation consists in replacing the actual spectral distribution of vibrations by the dotted line in Fig. XIV-2. That is, he assumed that dN is given by Eq. (2.21) as long as ν is less than a certain ν_max, and is zero for greater ν's. It is obvious that this is not a very good approximation. Nevertheless, it reproduces the form of the correct distribution curve at low frequencies and has the correct behavior of predicting no vibrations above a certain limit. To find the proper ν_max to use, Debye simply applied the condition that the area under his dotted curve, giving the total number of allowed overtones, must be 3N to agree with the correct curve. That is, he assumed

4π (1/v_l³ + 2/v_t³) V ∫₀^(ν_max) ν² dν = (4π/3)(1/v_l³ + 2/v_t³) V ν_max³ = 3N,

from which

ν_max³ = (9N/4πV) · 1/(1/v_l³ + 2/v_t³).  (3.1)

In terms of the assumed frequency distribution and the formula (1.3), we can now at once write down a formula for the specific heat. This is

C_V = ∫₀^(ν_max) k (hν/kT)² e^(hν/kT)/(e^(hν/kT) − 1)² · 4πν² V (1/v_l³ + 2/v_t³) dν.  (3.2)

The integration in Eq. (3.2) cannot be performed analytically.
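Equation (3.1) is easy to evaluate numerically once the two sound velocities and the atomic density are known. A minimal sketch (the velocities and density below are illustrative, roughly aluminum-like values assumed for the example, not taken from the text):

```python
import math

H = 6.626e-34   # Planck's constant, joule sec
K = 1.381e-23   # Boltzmann's constant, joule per degree

def debye_frequency(n, vl, vt):
    """nu_max from Eq. (3.1); n = N/V in atoms per cubic meter,
    vl, vt = longitudinal and transverse velocities in m/sec."""
    return (9.0 * n / (4.0 * math.pi
                       * (1.0 / vl ** 3 + 2.0 / vt ** 3))) ** (1.0 / 3.0)

# Illustrative, roughly aluminum-like numbers (assumed for the example):
n = 6.0e28                # atoms per cubic meter
vl, vt = 6400.0, 3100.0   # m/sec
nu_max = debye_frequency(n, vl, vt)
theta_D = H * nu_max / K  # Debye temperature, Eq. (3.5)
print(nu_max)             # of the order of 1e13 cycles per second
print(theta_D)            # comes out near 400 abs., of the right order for Al
```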
It is worth while, however, to rewrite the expression in terms of a variable

x = hν/kT,  with  x₀ = hν_max/kT.  (3.3)

We then have

C_V = 9Nk (1/x₀³) ∫₀^(x₀) x⁴ e^x/(e^x − 1)² dx.  (3.4)

It is customary to define a so-called Debye temperature θ_D by the equation

θ_D = hν_max/k.  (3.5)

Then we have

1/x₀ = T/θ_D,  (3.6)

so that Eq. (3.4) gives the specific heat in terms of T/θ_D, the ratio of the actual temperature to the Debye temperature. That means that the specific heat curve should be the same for all substances, at temperatures which are the same fraction of the corresponding Debye temperatures. When integrated numerically, the function (3.4) proves to be not unlike an Einstein specific heat curve, except at low temperatures. To facilitate calculations with the Debye function, we give in Table XIV-1 the specific heat per mole, calculated by Debye's theory, as a function of x₀. We also give in Fig. XIV-3 a graph of the Debye specific heat curve as a function of temperature.

FIG. XIV-3. Specific heat of a solid as a function of the temperature, according to Debye's theory.

TABLE XIV-1. SPECIFIC HEAT AS A FUNCTION OF x₀ = θ_D/T, ACCORDING TO DEBYE'S THEORY

x₀   C_V
0    5.955
1    5.670
2    4.918
3    3.948
4    2.996
5    2.197
6    1.582
7    1.137
8    0.823
9    0.604
10   0.452
11   0.345
12   0.267
13   0.211
14   0.169
15   0.137
16   0.113
17   0.0945
18   0.0796
19   0.0677
20   0.0581

A more extended table will be found in Nernst, "Die Grundlagen des neuen Wärmesatzes."

We can easily investigate the limit of low temperatures analytically. If T << θ_D, we have x₀ >> 1. Then, approximately, we can carry the integration in Eq. (3.4) from 0 to ∞, since the integrand becomes very small for large values of x anyway. It can be shown¹ that

∫₀^∞ x⁴ e^x/(e^x − 1)² dx = 4π⁴/15.  (3.7)

Then we have, for low temperatures,

C_V = (12π⁴/5) Nk (T/θ_D)³.  (3.8)

From Eq. (3.8), we see that the specific heat should be proportional to
the third power of the absolute temperature, for low temperatures. This feature of Debye's theory proves to be true for a variety of substances.

TABLE XIV-2. OBSERVED SPECIFIC HEAT OF ALUMINUM, COMPARED WITH DEBYE'S THEORY

T, abs.   C_P observed   C_V observed   C_V, Debye
54.8      1.129          1.127          1.11
70.0      1.856          1.851          1.88
84.0      2.457          2.446          2.51
112.4     3.533          3.502          3.54
141.0     4.239          4.183          4.23
186.2     4.932          4.833          4.87
257.5     5.558          5.382          5.35
278.9     5.698          5.499          5.42
296.3     5.741          5.526          5.48

Data for this table are taken from the article by Eucken, in "Handbuch der Experimentalphysik," Vol. 8, a useful reference for the theory and experimental discussion of specific heats.

¹ See Debye, Ann. Physik, 39, 789 (1912).

Considering the crudeness of the assumptions, Debye's theory works surprisingly well for a considerable number of elements. Thus we give in Table XIV-2 the observed specific heat of aluminum, and the values calculated from Debye's theory, using θ_D = 385 abs. In Table XIV-2, specific heats are given in calories per mole. The value of C_V observed is computed from C_P observed by the use of Eq. (1.14) of Chap. XIII. It is interesting to note how much less the difference C_P − C_V is for such a solid than the value R = 1.98 cal. per mole for a gas. Calculations for many other substances show agreement with experiment of about the accuracy of Table XIV-2. We have already pointed out the shortcomings of Debye's treatment, and the remarkable thing is that it agrees as well with experiment as it does.

TABLE XIV-3. DEBYE TEMPERATURES FOR ELEMENTS

Substance     θ_D, high temperature, abs.   θ_D (T³)   θ_D calculated
C (diamond)   1840                          .......    2230
Na            159                           .......    .......
Al            398                           385        399
K             99                            .......    .......
Fe            420                           428        467
Cu            315                           321        329
Zn            235                           .......    205
Mo            379                           .......    379
Ag            215                           .......    212
Cd            160                           129        168
Sn            160                           .......    127
Pt            225                           .......    226
Au            180                           .......    162
Pb            88                            .......    72

Data for this table, as for Table XIV-2, are from Eucken's article in Vol. 8 of the "Handbuch der Experimentalphysik."
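Table XIV-1 can be reproduced by carrying out the integration of Eq. (3.4) numerically. A minimal sketch (Simpson's rule, with the gas constant taken as R = 1.987 cal per mole per degree, so that the classical limit 3R comes out near 5.96):

```python
import math

R_CAL = 1.987  # gas constant, cal per mole per degree

def debye_heat_capacity(x0, n=4000):
    """C_V per mole from Eq. (3.4), with x0 = theta_D/T, by Simpson's rule."""
    if x0 == 0.0:
        return 3.0 * R_CAL          # classical Dulong-Petit limit
    def f(x):
        if x < 1e-8:
            return x * x            # limit of x**4 e**x/(e**x - 1)**2 as x -> 0
        ex = math.exp(x)
        return x ** 4 * ex / (ex - 1.0) ** 2
    h = x0 / n                      # n must be even for Simpson's rule
    s = f(0.0) + f(x0)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    integral = s * h / 3.0
    return 9.0 * R_CAL * integral / x0 ** 3

# Reproduces Table XIV-1: x0 = 1 -> ~5.67, x0 = 4 -> ~3.0, x0 = 10 -> ~0.45
for x0 in (1.0, 4.0, 10.0):
    print(x0, round(debye_heat_capacity(x0), 3))
```

At large x₀ the numerical result merges into the T³ law of Eq. (3.8), as the table's last entries show.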
In Table XIV-3, we give Debye temperatures for a number of elements for which the specific heat has been accurately determined. We give three columns, and the agreement of these three is a fair indication of the accuracy of Debye's theory. The first column, θ_D (high temperature), gives temperatures determined empirically from the specific heat, so as to make the agreement between theory and experiment as good as possible through the temperature range in the neighborhood of θ_D/2, where the specific heat is fairly large. The second column, θ_D (T³), is a Debye temperature determined to make the T³ part of the curve, at very low temperatures, fit as accurately as possible. If the Debye curve agreed perfectly with experiment, these two temperatures would of course be equal. Finally, in the third column we give θ_D (calc.), calculated from the elastic constants. To find these, Eq. (3.1) is used to find ν_max in terms of the velocity of propagation of longitudinal and transverse waves, and Eq. (3.5) to find θ_D in terms of ν_max. We shall not go into the theory of elasticity to find the velocity of propagation in terms of the elastic constants, but shall merely state the results, in terms of χ the volume compressibility, σ Poisson's ratio, and ρ the density. In terms of these quantities, it can be shown¹ that

v_l = √[3(1 − σ)/(1 + σ)χρ],  v_t = √[3(1 − 2σ)/2(1 + σ)χρ].  (3.9)

Using Eq. (3.9), the velocity can be found in terms of tabulated quantities. The agreement between the columns in Table XIV-3 is good enough so that it is plain that Debye's theory is a good approximation, but far from perfect. It is interesting to note from the table the inverse relation between compressibility and Debye temperature, which can be seen analytically from Eqs. (3.1), (3.5), and (3.9). Thus diamond, a substance with extremely low compressibility, has a very high Debye temperature, while lead, with very high compressibility, has a very low Debye temperature.
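The velocities of Eq. (3.9) are easily evaluated from tabulated quantities. A minimal sketch (the compressibility, Poisson's ratio, and density below are illustrative, roughly aluminum-like values assumed for the example; the second call shows the transverse velocity vanishing as σ approaches 1/2, the liquid value):

```python
import math

def wave_velocities(chi, sigma, rho):
    """Longitudinal and transverse sound velocities from Eq. (3.9).

    chi = volume compressibility, sigma = Poisson's ratio, rho = density,
    all in SI units; velocities are returned in m/sec.
    """
    vl = math.sqrt(3.0 * (1.0 - sigma) / ((1.0 + sigma) * chi * rho))
    vt = math.sqrt(3.0 * (1.0 - 2.0 * sigma) / (2.0 * (1.0 + sigma) * chi * rho))
    return vl, vt

# Illustrative, roughly aluminum-like values (assumed, not from the text):
# chi ~ 1.4e-11 per pascal, sigma ~ 0.34, rho ~ 2700 kg per cubic meter.
vl, vt = wave_velocities(1.4e-11, 0.34, 2700.0)
print(vl, vt)   # a few thousand m/sec, vl roughly twice vt

# As sigma -> 1/2 the transverse velocity tends to zero, as for a liquid:
print(wave_velocities(1.4e-11, 0.4999, 2700.0)[1])
```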
This means that at room temperature the specific heat of diamond is far below the Dulong-Petit value, while that of lead has almost exactly the classical value.

¹ For a derivation, see for instance, Slater and Frank, "Introduction to Theoretical Physics," McGraw-Hill Book Company, Inc., 1933. Combine results of paragraphs 109, 110 with result of Prob. 3, p. 183.

4. Debye's Theory and the Parameter γ. Debye's theory furnishes us with an approximation to the frequency spectrum of a solid, and we can use this approximation to find how the frequencies change with volume, and hence to find the parameter γ = −d ln ν/d ln V, which is important in the theory of thermal expansion, as we saw in Chap. XIII, Sec. 4. According to Debye's theory, the frequency spectrum is entirely determined by the limiting frequency ν_max, and if this frequency changes, all other oscillations change their frequencies in the same ratio. Thus Grüneisen's assumption that γ is the same for all frequencies is justified, and we can set

γ = −d ln ν_max/d ln V.  (4.1)

From Eq. (3.1) we see that the Debye frequency ν_max varies proportionally to the velocity of elastic waves, divided by the cube root of the volume, and from Eq. (3.9) we see that the velocity of either longitudinal or transverse waves varies inversely as the square root of the compressibility times the density, if we assume that Poisson's ratio is independent of the volume. As we shall see later, this assumption can hardly be justified; Poisson's ratio presumably increases as the volume increases. But for the moment we shall assume it to be constant. Then we have
(3.9) is that computed for the* actual volume of solid considered; that is, it is v( Yp ) , where V is \ \&r/7 the actual volume, rather than the volume at zero pressure. Thus In x = - In V - and / '' din V (4.5) To evaluate the derivative in Kq. (4.5), we express P as a function of V according to Eq. (1.5), Chap. XIII: = ?P2 ; from which, computing for V = This simple formula will be compared with experiment in later chapters, computing PI and P 2 both by theory from atomic models, and by experi- ment from measurements of compressibility. We may anticipate by saying that in general the agreement is fairly good, certainly as good as we 240 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XIV could expect from the crudity of the approximations made in deriving the equation. In the calculations we have just made, we have assumed that Poisson's ratio was independent of volume. We have mentioned, however, that actually Poisson's ratio increases with the volume. We recall the mean- ing of Poisson's ratio: it is the ratio of the relative contraction in diameter of a wire, to the relative increase in length when it is stretched. For most solids, it is of the order of magnitude of . For a liquid, however, it- equals \. One cannot see this directly, since a wire cannot be made out of a liquid, but the value $ indicates that a wire has no change of volume when it is stretched, and this is the situation approached by a solid as it becomes more and more nearly like a liquid. Now as the volume of a solid increases, either on account of heating or any other agency, it becomes more and more like a liquid, the atoms moving farther apart so that they can flow past each other more readily. Thus we may infer that Poisson's ratio increases, and experiments on the variation of Poisson's ratio with temperature indicate that this is in fact the case. If we look at Eq. (3.9), we see that an increase of Poisson's ratio decreases the velocities of both longitudinal and transverse waves. 
In fact, the change of v_t is so great that for a liquid, for which σ = 1/2, the velocity of transverse waves becomes zero, in agreement with our usual assumption that transverse waves cannot be propagated through a liquid. Thus, when we consider Poisson's ratio, we find that it provides an additional reason why the velocity of elastic waves and the Debye frequency should decrease with increasing volume. In other words, it tends to increase γ over the value found in Eq. (4.3). The exact amount of increase is impossible to calculate, since the available theories do not predict how Poisson's ratio should vary with volume, and there are not enough experimental data available to compute the variation from experiment.

In considering thermal expansion, we must remember that Debye's theory is but a rough approximation, and that really the elastic spectrum has a complicated form, as indicated in Fig. XIV-2. If it were not for the variation of Poisson's ratio with volume, our discussion would still indicate that the whole spectrum should shift together to higher frequencies with decrease of volume, since the velocities of both transverse and longitudinal waves would then vary in the same way with volume, according to Eq. (3.9). When we consider the Poisson ratio, however, we see that the velocity of the transverse waves should increase more rapidly than that of the longitudinal waves with decreasing volume, so that the shape of the spectrum would change. Thus Grüneisen's assumption, that the γ's should be the same for all frequencies, is not really justified, and we cannot expect a very satisfactory check between his theory and experiment.

CHAPTER XV

THE SPECIFIC HEAT OF COMPOUNDS

In the preceding chapter, where we have been considering the specific heat of elements, there was no need to speak of internal vibrations within a molecule. In considering compounds, however, this is essential.
A real treatment of the mathematical problem of the vibrations is far beyond the scope of this book. Nevertheless, we can take up a simple one-dimensional model of a molecular crystal, which can furnish a guide to the real situation. Suppose we have a one-dimensional chain of diatomic molecules. That is, we have an alternation of two sorts of atoms, with alternating spacings and restoring forces. The vibrations, transverse or longitudinal, of such a chain have analogies to the vibrations in a molecular crystal, and yet they form a simple enough problem so that we can carry it through completely. As a preliminary, we take up the simpler case of a chain of like atoms, equally spaced, analogous to the case of an elementary crystal. This preliminary problem in addition will give justification for the discussion of the preceding chapter, in which we have arbitrarily broken off the vibrations of a continuum at a given wave length and have said that that resulted from the atomic nature of the medium. Also it will allow us to investigate the change of velocity of propagation with wave length, which we have mentioned before but have not been able to discuss mathematically.

1. Wave Propagation in a One-dimensional Crystal Lattice. Let us consider N atoms, each of mass m, equally spaced along a line, with distance d between neighbors. Let the x axis be along the line of atoms. We may conveniently take the positions of the atoms to be at x = d, 2d, 3d, . . . Nd, with y = 0, z = 0 for all atoms. These are the equilibrium positions of the atoms. To study vibrations, we must assume that each atom is displaced from its position of equilibrium. Consider the jth atom, which normally has coordinates x = jd, y = z = 0, and assume that it is displaced to the position x = jd + ξ_j, y = η_j, z = ζ_j, so that ξ_j, η_j, ζ_j are the three components of the displacement of the atom.
If the neighboring atoms, the (j − 1)st and the (j + 1)st, are undisplaced, we assume that the force acting on the jth atom has the components

F_x = −aξ_j, F_y = −bη_j, F_z = −bζ_j, (1.1)

each component being proportional to the displacement in that direction and opposite to the displacement. We assume that the force constant a for displacement longitudinally, or along the line of atoms, is different from the constant b for displacement transversely, or at right angles to the line of atoms. This can well happen, for in a longitudinal displacement the jth atom changes its distance from its neighbors considerably, while in a transverse displacement it moves at right angles to the lines joining it to its neighbors and stays at an almost constant distance from the neighbors. Instead of assuming that the force (1.1) depends only on the position of the jth particle, we now assume that it is really the sum of two forces, exerted on the jth atom by its neighbors the (j − 1)st and (j + 1)st atoms. The force exerted by the (j − 1)st on the jth, when the (j − 1)st is at its position of equilibrium, has components (−a/2)ξ_j, (−b/2)η_j, (−b/2)ζ_j. But we must suppose that this force depends only on the relative positions of the two atoms in question, not on their absolute positions in space. Thus it must depend on the differences of coordinates of the jth and (j − 1)st atoms, so that the general expression for the force has components (−a/2)(ξ_j − ξ_{j−1}), (−b/2)(η_j − η_{j−1}), (−b/2)(ζ_j − ζ_{j−1}). Similarly, the force exerted on the jth atom by the (j + 1)st must have components (−a/2)(ξ_j − ξ_{j+1}), (−b/2)(η_j − η_{j+1}), (−b/2)(ζ_j − ζ_{j+1}). Combining, the total force acting on the jth atom is

F_x = −aξ_j + (a/2)(ξ_{j−1} + ξ_{j+1}),
F_y = −bη_j + (b/2)(η_{j−1} + η_{j+1}),
F_z = −bζ_j + (b/2)(ζ_{j−1} + ζ_{j+1}). (1.2)

Using the expressions (1.2) for the force, we can set up the equations of motion for the particles, using Newton's law that the force equals the mass times the acceleration.
Thus we have

m d²ξ_j/dt² = −aξ_j + (a/2)(ξ_{j−1} + ξ_{j+1}), (1.3)

with similar equations, with b in place of a, for the η's and ζ's. We now inquire whether we can solve the equations (1.3) by assuming that the displacements form a standing wave of the sort discussed in the preceding chapter. Let us consider a longitudinal vibration, for which ξ is different from zero, η and ζ equal to zero, so that only the equation (1.3) for ξ is significant, and let us assume

ξ = A sin 2πνt sin (2πx/λ), (1.4)

by analogy to the standing waves in a continuous medium. In particular, for the jth particle, whose undisplaced position x is equal to jd, we assume

ξ_j = A sin 2πνt sin (2πjd/λ). (1.5)

Then we have

ξ_{j−1} = A sin 2πνt sin 2π(j − 1)d/λ = A sin 2πνt [sin (2πjd/λ) cos (2πd/λ) − cos (2πjd/λ) sin (2πd/λ)],

and similarly for ξ_{j+1}, so that

ξ_{j−1} + ξ_{j+1} = A sin 2πνt [2 sin (2πjd/λ) cos (2πd/λ)]. (1.6)

Substituting Eqs. (1.5) and (1.6) in Eq. (1.3), we find that the factor A sin 2πνt sin (2πjd/λ) is common to each term. Canceling this common factor, we have

4π²ν²m = a(1 − cos 2πd/λ) = 2a sin² (πd/λ),

whence

ν = (1/2π) √(2a/m) sin (πd/λ). (1.7)

If the frequency ν and the wave length λ are related by Eq. (1.7), the values of ξ_j in Eq. (1.5) will satisfy the equations (1.3). If the velocity were constant, we should have ν = v/λ, the frequency being inversely proportional to the wave length. From Eq. (1.7) we can see that this is the case for long waves, or low frequencies, where we can approximate the sine by the angle. In that limit we have

ν = (d/λ) √(a/2m), v = νλ = d √(a/2m), (1.8)

a value that can easily be shown to agree with what we should find by elasticity theory. For higher frequencies, however, since sin πd/λ is less than πd/λ, we see that the velocity of propagation must be less than the value (1.8). This is the dependence of velocity on wave length which was mentioned in the preceding chapter.
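The dispersion relation (1.7) and its long-wave limit (1.8) are easy to check numerically. The following sketch uses arbitrary illustrative values of a, m, and d (not values from the text), and verifies that the phase velocity νλ approaches d√(a/2m) for long waves and falls below it for short ones:

```python
import math

def nu(lam, a=1.0, m=1.0, d=1.0):
    # Eq. (1.7): nu = (1/2 pi) sqrt(2a/m) sin(pi d / lambda)
    return (1.0 / (2.0 * math.pi)) * math.sqrt(2.0 * a / m) * abs(math.sin(math.pi * d / lam))

def phase_velocity(lam, a=1.0, m=1.0, d=1.0):
    # v = nu * lambda; constant only in the long-wave limit
    return nu(lam, a, m, d) * lam

v_long = 1.0 * math.sqrt(1.0 / 2.0)   # Eq. (1.8): v = d sqrt(a/2m), here with a = m = d = 1

print(phase_velocity(1000.0))  # ~0.7071, essentially the long-wave value
print(phase_velocity(4.0))     # ~0.6366, slower for short waves
```

For λ = 4d the velocity is already about 10 per cent below the elastic value, illustrating the dispersion discussed above.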
Next we must impose boundary conditions on our chain of atoms, as we did with the continuous solid. We are assuming N atoms, with undisplaced positions at x = d, 2d, . . . Nd. We shall assume that the chain is held at the ends, and to be precise we assume hypothetical atoms at x = 0, x = (N + 1)d, which are held fast. As with the continuous solid, the precise nature of the boundary conditions is without effect on the higher harmonics. In Eq. (1.3), then, we assume that the equations can be extended to include terms ξ_0 and ξ_{N+1}, but with the subsidiary conditions

ξ_0 = ξ_{N+1} = 0. (1.9)

The first of these is automatically satisfied by the assumption (1.5), setting j = 0. For the second, we have the additional condition

sin 2π(N + 1)d/λ = 0. (1.10)

Since sin πs = 0, where s is any integer, Eq. (1.10) can be satisfied by

2(N + 1)d/λ = s, where s is an integer. (1.11)

The significance of s is the same as in Eq. (2.7), Chap. XIV: s = 1 corresponds to the fundamental vibration, s = 2 to the first harmonic, etc. Introducing the condition (1.11), we may rewrite Eqs. (1.5) and (1.7) for displacement and frequency. For the vibration described by the integer s, we introduce a subscript s, obtaining

ξ_j,s = A_s sin 2πν_s t sin [πjs/(N + 1)],
ν_s = (1/2π) √(2a/m) sin [πs/2(N + 1)]. (1.12)

On examining Eq. (1.12), we see that both ξ_j,s and ν_s are periodic in s. Aside from a question of sign, which is trivial, ξ_j,s repeats itself when s increases by 2(N + 1) or any integral multiple of that quantity, and ν_s similarly repeats itself. That is, all the essentially different solutions are found in the range between s = 0 and s = N + 1. These two limiting values, by Eq. (1.12), correspond to all ξ's equal to zero, so that they are not really modes of vibration at all. The essentially different values then correspond to s = 1, 2, . . . N, just N possible overtones. This verifies
the statement of the previous chapter that the number of allowed overtones, for one direction of vibration, equals the number of atoms in the crystal. It is interesting to consider the periodicity in connection with the reciprocal space of Sec. 2, Chap. XIV. In that section, we imagined s/λ to be plotted as an independent variable. Here, since the length of the chain of atoms equals (N + 1)d, we should plot

2/λ = s/(N + 1)d

as variable. Rather than plotting ν_s as a function of this quantity, we prefer to plot its square, ν_s². This is done in Fig. XV-1, where the periodicity is clearly shown. We see, furthermore, that the region from 2/λ = 0 to 2/λ = 1/d includes all possible values of ν_s². This fundamental region corresponds to those described in Sec. 2, Chap. XIV. In addition to the sinusoidal curve giving ν², we plot a parabola ν² = v²/λ², where v is the velocity of propagation for long waves. This parabola is the curve which we should find if there were no dependence of velocity on wave length, or no dispersion of the waves.

FIG. XV-1. ν² vs. 2/λ, for one-dimensional crystal. Dotted parabola, case of no dispersion.

We see from Fig. XV-1 that the effect of dispersion is to reduce the frequencies of the highest overtones, compared to the values which we should find from the theory of vibrations of a continuum. While we cannot at once apply this result to the three-dimensional case, it is natural to suppose that, for instance in Fig. XIV-1 of the previous chapter, the effect will be to shift the peak of dN/dν, the number of overtones per unit frequency range, to lower frequencies, and at the same time to make the peak higher, so as to keep the number of overtones the same. This is a type of change that makes the curve resemble Debye's assumption (the dotted curve of Fig. XIV-2) more closely than before.
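The periodicity of ν_s and the count of just N distinct modes can be verified directly from Eq. (1.12). In this short check (N and the constants are arbitrary illustrative choices), the frequencies for s and 2(N + 1) − s coincide, the displacement patterns for those two values of s are equal and opposite, and the limiting values s = 0 and s = N + 1 give no displacement at all:

```python
import math

N = 8                 # number of atoms (illustrative)
a, m = 1.0, 1.0       # force constant and mass (illustrative units)

def nu_s(s):
    # Eq. (1.12): nu_s = (1/2 pi) sqrt(2a/m) sin[pi s / 2(N+1)]
    return (1.0 / (2.0 * math.pi)) * math.sqrt(2.0 * a / m) * abs(math.sin(math.pi * s / (2 * (N + 1))))

def pattern(s):
    # spatial factor sin[pi j s / (N+1)] at each atom j = 1 ... N
    return [math.sin(math.pi * j * s / (N + 1)) for j in range(1, N + 1)]

for s in range(1, N + 1):
    s_mirror = 2 * (N + 1) - s
    assert abs(nu_s(s) - nu_s(s_mirror)) < 1e-12          # same frequency
    assert all(abs(u + v) < 1e-12                          # opposite displacements
               for u, v in zip(pattern(s), pattern(s_mirror)))

# s = 0 and s = N + 1 give vanishing displacements: not modes at all
assert all(abs(u) < 1e-12 for u in pattern(N + 1))
print("distinct modes: s = 1 ...", N)
```

This is precisely the aliasing discussed in the text: values of s outside the fundamental region reproduce motions already counted.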
Very few actual calculations of specific heat have yet been made using the more exact frequency spectrum that we have found. In Fig. XV-1 we have seen graphically the way in which the square of the frequency, ν², is periodic in the reciprocal space. It is informing to see as well how the actual displacements ξ_j repeat themselves. From Eq. (1.12) we see that ξ_j,s contains the factor sin πjs/(N + 1). This of course equals zero for s = N + 1, and two values of s which are greater than N + 1 and less than N + 1 by the same amount respectively will have equal and opposite values of the sine. In other words, the values of the sine which we have as s goes from 0 to N + 1 repeat each other in opposite order as s goes from N + 1 to 2(N + 1).

FIG. XV-2. Displacements of atoms, for different overtones. (a) s small; (b) s = (N + 1)/2; (c) s almost N + 1; (d) s slightly greater than N + 1, displacements as in (c) with opposite sign; (e) s = 3(N + 1)/2, displacements as in (b) with opposite sign; (f) s almost 2(N + 1), displacements as in (a) with opposite sign.

As far as we can tell from the displacements of the particles, a value of the nominal 2/λ = s/(N + 1)d greater than 1/d, in other words, does not correspond to a wave length shorter than 2d but to a longer wave length, and when this quantity equals 2/d the actual wave length is not d but infinity, corresponding to no wave at all. These paradoxical results are illustrated in Fig. XV-2, where we show curves of sin πjs/(N + 1), indicated as functions of a continuous variable j, with the integral values of j shown by dots, for several values of s. It is clear from the figure that increase of s does not always mean decrease of the actual wave length, but that there is a definite minimum wave length for the actual disturbance, equal as we have previously stated to twice the interatomic spacing.
From the fact which we have pointed out, that the range of 2/λ from 1/d to 2/d repeats the range from 0 to 1/d in opposite order, we understand why the curve of ν² vs. 2/λ, in Fig. XV-1, has a maximum for 2/λ = 1/d, falling to zero again for 2/d.

2. Waves in a One-dimensional Diatomic Crystal. In the preceding section we have seen that the atomic nature of a one-dimensional monatomic crystal lattice leads to a dependence of the velocity of elastic waves on wave length, and to a limitation of the number of allowed overtones to the number of atoms in the crystal. Now we attack our real problem, the vibrations of a diatomic one-dimensional crystal, using analogous methods. We assume each molecule to have two atoms, one of mass m, the other of mass m′. Let there be N molecules, and in equilibrium let the atoms of mass m be at the positions x = d, 2d, . . . Nd, and those of mass m′ at x = d + d′, 2d + d′, . . . Nd + d′, where d′ is less than d. We formulate only the problem of longitudinal vibrations, understanding that the transverse vibrations can be handled in a similar way. By analogy with Sec. 1, we assume that the forces on each atom come from its two neighboring atoms. These neighboring atoms are both of the opposite type to the one we are considering, but are at different distances, one being at distance d′ (in the same molecule), the other at distance d − d′ (in an adjacent molecule). The arrangement of atoms will be clearer from Fig. XV-3.

FIG. XV-3. Arrangement of atoms in one-dimensional diatomic molecular lattice.

We assume two force constants: a for the interaction between atoms in different molecules, a′ for interaction between atoms in the same molecule. Thus the equations of motion are

m d²ξ_j/dt² = −a′(ξ_j − ξ′_j) − a(ξ_j − ξ′_{j−1}),
m′ d²ξ′_j/dt² = −a(ξ′_j − ξ_{j+1}) − a′(ξ′_j − ξ_j), (2.1)

where ξ_j, ξ′_j represent the displacements of the atoms of mass m and m′ respectively in the jth molecule. To solve Eq.
(2.1), we assume sinusoidal standing waves for both types of atoms, but with different phases:

ξ_j = A sin 2πνt sin (2πjd/λ),
ξ′_j = A′ sin 2πνt sin (2πjd/λ) + B′ sin 2πνt cos (2πjd/λ). (2.2)

Substituting Eq. (2.2) in Eq. (2.1), we find that the factor sin 2πνt cancels from all terms. The remaining equations are

[−4π²ν²m + (a + a′)] A sin (2πjd/λ) = a′[A′ sin (2πjd/λ) + B′ cos (2πjd/λ)] + a[A′ sin 2π(j − 1)d/λ + B′ cos 2π(j − 1)d/λ],
[−4π²ν²m′ + (a + a′)][A′ sin (2πjd/λ) + B′ cos (2πjd/λ)] = a′A sin (2πjd/λ) + aA sin 2π(j + 1)d/λ. (2.3)

In Eqs. (2.3) we expand the sines and cosines of 2π(j ∓ 1)d/λ by the formulas for the sines and cosines of sums and differences of angles, and collect terms in sin (2πjd/λ) and cos (2πjd/λ). Then Eqs. (2.3) become

{A[−4π²ν²m + (a + a′)] − A′(a cos 2πd/λ + a′) − B′ a sin 2πd/λ} sin (2πjd/λ) + {A′ a sin 2πd/λ − B′(a cos 2πd/λ + a′)} cos (2πjd/λ) = 0,
{A′[−4π²ν²m′ + (a + a′)] − A(a cos 2πd/λ + a′)} sin (2πjd/λ) + {B′[−4π²ν²m′ + (a + a′)] − A a sin 2πd/λ} cos (2πjd/λ) = 0. (2.4)

If our assumptions (2.2) are to furnish solutions of Eq. (2.1), we must have Eqs. (2.4) satisfied independent of j; that is, for each atom in the chain. The only way to do this is for each of the four coefficients of sin (2πjd/λ) or cos (2πjd/λ) in Eq. (2.4) to be equal to zero. We thus have four simultaneous equations for the four unknowns A, A′, B′, and ν. Really there are only three unknowns, however, for we can determine only the ratios A′/A, B′/A, and not the absolute values of the three amplitudes A, A′, B′. Thus we should not expect to find solutions for our equations, since there are more equations than unknowns, but it turns out that the equations are just so set up that they have solutions. We have the four equations

A[−4π²ν²m + (a + a′)] − A′(a cos 2πd/λ + a′) − B′ a sin 2πd/λ = 0,
A′ a sin 2πd/λ − B′(a cos 2πd/λ + a′) = 0,
A′[−4π²ν²m′ + (a + a′)] − A(a cos 2πd/λ + a′) = 0,
B′[−4π²ν²m′ + (a + a′)] − A a sin 2πd/λ = 0. (2.5)

To solve them, we first determine B′ in terms of A′ from the second.
Substituting in the other three, we have equations relating A and A′. We find, however, that the third and fourth equations lead to the same result, so that there are only two independent equations for A in terms of A′. It is this which makes solution possible. These two equations are at once found to be

A[−4π²ν²m + (a + a′)] − A′[(a cos 2πd/λ + a′)² + (a sin 2πd/λ)²]/(a cos 2πd/λ + a′) = 0,
A′[−4π²ν²m′ + (a + a′)] − A(a cos 2πd/λ + a′) = 0. (2.6)

From each of Eqs. (2.6) we can solve for the ratio A/A′. Equating these ratios, we get an equation for the frequency:

[(a cos 2πd/λ + a′)² + (a sin 2πd/λ)²]/{(a cos 2πd/λ + a′)[−4π²ν²m + (a + a′)]} = [−4π²ν²m′ + (a + a′)]/(a cos 2πd/λ + a′). (2.7)

From Eq. (2.7) we have at once

[4π²ν²m − (a + a′)][4π²ν²m′ − (a + a′)] = (a cos 2πd/λ + a′)² + (a sin 2πd/λ)²
= a² + a′² + 2aa′ cos 2πd/λ
= (a + a′)² − 2aa′(1 − cos 2πd/λ)
= (a + a′)² − 4aa′ sin² πd/λ. (2.8)

Expanding the left side of Eq. (2.8),

(2πν)⁴mm′ − (2πν)²(m + m′)(a + a′) + (a + a′)² = (a + a′)² − 4aa′ sin² πd/λ. (2.9)

Equation (2.9) is a quadratic in (2πν)². Solving it, we have

(2πν)² = [(m + m′)/mm′][(a + a′)/2][1 ± √(1 − K² sin² πd/λ)], (2.10)

where

K² = [4aa′/(a + a′)²][4mm′/(m + m′)²]. (2.11)

Equation (2.10) is the desired equation giving frequency in terms of wave length, for longitudinal vibrations of a chain of diatomic molecules. As in Eq. (1.7), we see that the frequency depends on the quantity sin πd/λ, so that we go through all possible values when 1/λ goes from zero to 1/2d. Thus there is the same sort of periodicity seen in Fig. XV-1. Furthermore, when we introduce boundary conditions for a crystal of N molecules, we find as before that there are N allowed overtones in this fundamental region of reciprocal space. In the present case, however, Eq. (2.10) has two solutions for each value of 1/λ, coming from the ± sign.
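The two roots of the quadratic can be evaluated numerically to exhibit the two branches. In this sketch the masses and force constants are arbitrary illustrative values; at infinite wave length the lower (acoustical) root goes to zero, while the upper (optical) root tends to (m + m′)(a + a′)/mm′:

```python
import math

def branches(lam, a=1.0, ap=4.0, m=1.0, mp=2.0, d=1.0):
    # Eqs. (2.10) and (2.11): the two roots of the quadratic (2.9) for (2 pi nu)^2
    S = (m + mp) * (a + ap) / (2.0 * m * mp)
    K2 = (4.0 * a * ap / (a + ap) ** 2) * (4.0 * m * mp / (m + mp) ** 2)
    s2 = math.sin(math.pi * d / lam) ** 2
    root = math.sqrt(1.0 - K2 * s2)
    return S * (1.0 - root), S * (1.0 + root)  # acoustical, optical

acoustical, optical = branches(1.0e9)   # essentially infinite wave length
print(acoustical)   # ~0: molecules moving together, zero frequency
print(optical)      # ~7.5 = (m+m')(a+a')/(m m') for these constants
```

The minus sign thus gives the branch that behaves like the monatomic chain at long wave lengths, and the plus sign the branch of finite frequency discussed below.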
That is, there are two branches to the curve, two allowed types of vibration for each wave length, and consequently 2N vibrations in all. This is natural, for while there are just N molecules, each of these has two atoms, so that there are 2N atoms and 2N degrees of freedom for longitudinal vibration. In Fig. XV-4 we give curves of ν² vs. 2/λ, for several different values of the constant K. From Eqs. (2.10) and (2.11) we see that except for scale the curve depends on only one constant, K. This constant equals unity when a = a′, m = m′. Under all other circumstances it is easy to show from its definition that it is less than unity; when a is very different from a′, or m very different from m′, or both, it is very small. Thus the limit K = 0 corresponds to very unlike atoms in the molecule, with very unlike forces between the atoms in the molecule and atoms in adjacent molecules. This is the case of strong molecular binding, with weak forces between the molecules. The limit K = 1 corresponds to the case where the atoms are similar and the binding within the molecule is almost the same as that between molecules. This limit, for instance, would be found approximately in the case of an ionic crystal like the alkali halides, where there is no molecular structure in the proper sense and where the two types of atom have approximately the same mass. In (a), Fig. XV-4, we have the case of strong molecular binding. Here one branch of the curve corresponds to low frequency vibrations, going to zero frequency in the limit of infinite wave length. These vibrations are those in which the molecules as a whole vibrate, and they are acoustical vibrations of the same sort we have discussed in the preceding chapter. The other branch, however, corresponds to a much higher frequency, even at infinite wave length.
These vibrations are vibrations of the two atoms in the molecule with respect to each other, almost exactly as the vibrations would occur in the isolated molecule. These are often called optical vibrations, for as we shall see later they can be observed in certain optical absorptions in the infrared.

FIG. XV-4. ν² vs. 2/λ, for diatomic molecular lattice. (a) K small; (b) K intermediate; (c) K slightly less than unity.

In (b), Fig. XV-4, there is an intermediate case between strong and weak binding. The forces within a molecule are here not much greater than those between molecules, and the result is that the optical branch of the spectrum is not at much greater frequency than the acoustical branch. Finally in (c) we show almost the limiting case of equal atoms and equal binding. The exactly equal case would correspond to a crystal with 2N equal atoms with a spacing of d/2. This is the case of Sec. 1, and we should expect the curve of ν² against 2/λ to correspond to Fig. XV-1, except that the first maximum of the curve should come at 2/d instead of 1/d, on account of the spacing of d/2. In Fig. XV-4 it is shown how this limit is approached. We have already pointed out that on account of periodicity we repeat all the overtones found between 2/λ = 0 and 2/λ = 1/d in the next period, from 1/d to 2/d. Let us then take the lower branch of the curve in the range 0 to 1/d, and the upper branch from 1/d to 2/d, as shown by the heavy line in (c), Fig. XV-4. If K were exactly unity, these two branches would join smoothly, and would give exactly the curve we expect. When K is slightly less than unity, so that the two types of atom are slightly different from each other or there is a slight tendency to form diatomic molecules, there is a slight discontinuity in the curve at 1/d, as shown in the figure, but still it is better to regard them as both parts of a single curve.
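The statement that K is at most unity, with equality only for a = a′ and m = m′, follows from Eq. (2.11), since 4xy/(x + y)² ≤ 1 for any positive x and y. A quick numerical illustration (all values hypothetical):

```python
def K(a, ap, m, mp):
    # Eq. (2.11): K^2 = [4 a a'/(a+a')^2][4 m m'/(m+m')^2]
    return ((4.0 * a * ap) ** 0.5 / (a + ap)) * ((4.0 * m * mp) ** 0.5 / (m + mp))

print(K(1.0, 1.0, 1.0, 1.0))      # 1.0: like atoms, like binding (alkali-halide-like)
print(K(1.0, 100.0, 1.0, 10.0))   # ~0.11: strong molecular binding, unlike masses
```

Each factor of K is the ratio of a geometric to an arithmetic mean, which is why the two limiting cases of the figure correspond to K near zero and K near one.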
It is clear from this discussion how the case of the present section reduces to that of Sec. 1 as the molecular lattice reduces to an atomic lattice.

3. Vibration Spectra and Specific Heats of Polyatomic Crystals. The simplified one-dimensional model which we have taken up in the last two sections is enough so that we can readily understand what happens in the case of real crystals. First, we consider definitely molecular crystals, with strong binding within the molecules, and relatively weak intermolecular forces. The vibration spectrum in this case breaks up definitely into two parts. First there is the acoustic spectrum, connected with vibrations of the molecules as a whole. This reduces to the ordinary elastic vibrations at very low frequencies, and extends to a high enough frequency in the infrared to include 3N modes of oscillation, where there are N molecules in the crystal. If the intermolecular forces are weak, so that the compressibility of the crystal is large, this limiting frequency will be low, in the far infrared. Then there is the optical spectrum, connected with vibrations within the molecules. As we see from Fig. XV-4 (a), these vibrations in different molecules are coupled together to some extent, so that their frequencies depend on the particular way in which the vibrations combine to form a standing wave in the crystal. Nevertheless, this dependence of frequency on wave length is relatively small, as the upper branch of Fig. XV-4 (a) shows. The important point is that the optical vibrations in molecular crystals come at a good deal higher frequency than the acoustical vibrations, lying in the part of the infrared near the visible. And these optical frequencies are only slightly different from what they would be in isolated molecules. This can be seen from our simple model. Thus in Eq. (2.10) let a′, the force constant for molecular vibration, be large compared to a, the force constant between molecules.
Then K will be very small, and if we neglect a compared to a′ we have (2πν)² = [(m + m′)/mm′]a′, just the value for a diatomic molecule. We may see this, for instance, from Chap. IX, Eqs. (2.5) and (4.4), where we obtained the same result. Of course, with complicated molecules, there will be many optical vibration frequencies of the isolated molecule, and all of these will appear in the spectrum of the crystal with slight distortions on account of molecular interaction. With a spectrum of this sort, it is clear how to handle the specific heat of such a molecular crystal. The acoustical vibrations can be handled by a Debye curve, using the elastic constants, or an empirical characteristic temperature, and using the number of molecules as N. Then we add a number of Einstein terms to take care of the molecular vibrations. These again can be found empirically, or they can be deduced from vibration frequencies observed in the spectrum. Those frequencies would be expected to be approximately the same as in the molecules, so that this part of the specific heat should agree with the vibrational part of the specific heat of the corresponding gas. In some cases, as we shall mention later, the vibration frequencies can be found directly by optical investigation of the solid. In addition to the molecular vibrations, corresponding to the Einstein terms, and the molecular translation, corresponding to the Debye terms, in the specific heat of the crystal, there must be something corresponding to the molecular rotation. In most solids, the molecules cannot rotate but are only free to change their orientation through a small angle, being held to a particular orientation by linear restoring forces. In their vibration spectrum, this will lead to vibrational terms like the upper branch of Fig. XV-4 (a), and there will be additional Einstein terms in the specific heat coming from this hindered rotation.
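The recipe just described, a Debye curve for the translational modes plus Einstein terms for the internal vibrations, can be sketched numerically. The characteristic temperatures below are hypothetical round numbers, not data from the text, and the Debye integral is evaluated by simple midpoint quadrature:

```python
import math

def einstein_C(T, theta_E):
    # Einstein heat capacity of one vibrational mode, in units of k
    x = theta_E / T
    return x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

def debye_C(T, theta_D, n=2000):
    # Debye heat capacity of the 3 translational modes per molecule, units of k
    xD = theta_D / T
    h = xD / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h   # midpoint rule for the Debye integral
        total += x ** 4 * math.exp(x) / (math.exp(x) - 1.0) ** 2 * h
    return 9.0 * total / xD ** 3

# Hypothetical diatomic molecular crystal, per molecule, in units of k:
# an acoustical (Debye) part near its classical limit of 3, plus one stiff
# internal (Einstein) vibration that is still mostly frozen out at 300 K.
C = debye_C(300.0, 150.0) + einstein_C(300.0, 1500.0)
print(C)
```

With these numbers the Debye part contributes nearly its full classical value while the Einstein term is small, illustrating why the internal vibrations only begin to show in the specific heat at high temperatures.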
These terms of course cannot be predicted from the properties of the separate molecules, but ordinarily must be found empirically to fit the observed specific heat curves. There are certain cases, on the other hand, in which the molecules at high temperatures really can rotate in crystals, though at low temperatures they cannot. In such a case, at high temperature, there would be a term in the specific heat of the solid much like the rotational term in a gas. At low temperatures where the rotation changes to vibration, the transition is more complicated than any that we have taken up so far and is really more like a change of phase. A crystal such as those of the alkali halides, formed from a succession of equally spaced ions of alternating sign, is quite different from the molecular crystals. The spectrum, as indicated in Fig. XV-4 (c), is much more like that of an element, the distinction between the two types of ions being unimportant. Thus we can treat it by methods of the preceding chapter, using only a Debye curve, but taking N to be the total number of atoms, not the total number of molecules. This is commonly done for the alkali halides, and it is found that one gets as good agreement between theory and experiment as for the metals. For more complicated ionic crystals, such as carbonates or nitrates, which are formed of positive metallic ions and negative carbonate or nitrate radicals, the situation is midway between the ionic and molecular cases. In CaCO3, for instance, we should expect a Debye curve coming from vibrations of the Ca++ and CO3-- ions as a whole, and also Einstein terms from the internal vibrations of the carbonate ions.

4. Infrared Optical Spectra of Crystals. Though we have said nothing about the interaction of molecules and light, we may mention the infrared optical spectra of crystals, related to their optical vibrations.
Light can be emitted or absorbed by an oscillating dipole; that is, by two particles of opposite charge oscillating with respect to each other. We should then expect that such vibrations as involve the relative motion of different charges could be observed in the spectrum. This is never the case with the acoustic branch of the vibration spectrum, for there we have molecules vibrating as a whole, and they are necessarily uncharged. But in the optical branches, for instance with the alkali halides, we have just the necessary circumstances. The case 1/λ = 0, in the optical branch of an alkali halide, corresponds to a rigid vibration of all the positive ions with respect to all the negative ions, so that each pair of positive and negative ions in the crystal can radiate light, and all these sources of radiation are in phase with each other. If such an oscillation were excited, then, the crystal would emit infrared radiation of the frequency of the vibration. Only the case 1/λ = 0 corresponds to radiation, for if 1/λ were different from zero, different parts of the crystal would be vibrating in different phases and the emitted radiation from the various atoms would cancel by interference. Ordinarily the radiation is not observed in emission but in absorption, for there is no available way to excite this type of vibration strongly. There is a general law of optics, however, stating that any frequency that can be emitted by a system can also be absorbed by the same system. Thus light of this particular infrared wave length can be absorbed by an alkali halide crystal. A still further optical fact is that light of a frequency which is very strongly absorbed is also strongly reflected. Hence alkali halide crystals have abnormally high reflecting power for this particular wave length. This is observed in the experiment to measure residual rays, or "Reststrahlen."
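The connection between a residual-ray wave length, the corresponding vibration frequency, and a characteristic temperature Θ = hν/k is a one-line computation. In this sketch the 52-micron wave length is merely a representative far-infrared figure of the order reported for NaCl, not a value quoted in the text:

```python
# Converting a residual-ray wave length to a vibration frequency and the
# temperature equivalent Theta = h nu / k. The 52-micron input below is a
# representative figure (of the order observed for NaCl), assumed for
# illustration only.
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def theta_from_wavelength(lam_microns):
    nu = c / (lam_microns * 1.0e-6)   # frequency in Hz
    return h * nu / k                  # temperature equivalent in K

print(theta_from_wavelength(52.0))   # ~277 K, of the order of a Debye temperature
```

That the result comes out of the order of a few hundred degrees is the numerical content of the agreement between residual-ray and Debye frequencies discussed next.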
In this experiment, infrared light with a continuous distribution of frequencies, as from a hot body, is reflected many times from the surfaces of alkali halide crystals. For most frequencies, the reflection coefficient is so low that practically all the light is lost after the multiple reflection. The characteristic frequencies have such high reflecting power, however, that a good deal of the light of these wave lengths is reflected, and the emergent beam is almost monochromatic, corresponding to the absorption frequency. These beams which are left over, called residual rays, form a convenient way of getting approximately monochromatic light in the far infrared. By measurement of the wave length of the residual rays, for instance with a diffraction grating, it is possible to get a direct measurement of the maximum frequency in the optical band of the vibration spectrum of an alkali halide. If we treat the spectrum by the Debye method, regarding the crystal as an atomic rather than a molecular crystal, this frequency should agree approximately with the Debye characteristic frequency, which can be found from the characteristic temperature Θ_D. Such an agreement is in fact found fairly accurately, as will be shown in a later chapter on ionic crystals, in which we shall make comparison with experiment. For molecular lattices, it is also possible to get residual ray frequencies, in case the molecules contain ions which can vibrate with respect to each other. These frequencies have no connection with the Debye frequencies, however, and they have been much less studied than in the case of ionic crystals.

CHAPTER XVI

THE LIQUID STATE AND FUSION

For several chapters we have been taking up the properties of solids. Earlier, in Chap. XI, we discussed the equilibrium of solids, liquids, and gases in a general way, but without using much information about the liquid or solid states. In Chap.
XII, discussing imperfect gases and Van der Waals' equation, we again touched on the properties of liquids, but again without much detailed study of them. Now that we understand solids better, we can again take up the problem of liquids and of melting. The liquid phase forms a sort of bridge between the solid and the gaseous phases. It is hard to treat, because it has no clear-cut limiting cases, such as the crystalline phase of the solid at the absolute zero and the perfect gas as a limiting case of the real gas. The best we can do is to handle it as an approximation to a gas or an approximation to a crystalline solid. The first is essentially the approach made in Van der Waals' equation, which we have already discussed. The second, the approach through the solid phase and through fusion, is the subject of the present chapter.

1. The Liquid Phase. We ordinarily deal with liquids at temperatures and pressures well below the critical point, and it is here that they resemble solids. When studied by x-ray diffraction methods, it is found that the immediate neighbors of a given atom or molecule in a liquid are arranged very much as they are in the crystal, but more distant neighbors do not have the same regular arrangement. We shall meet a particularly clear case of this later, in discussing glass. A glass is simply a supercooled liquid of a silicate, or other similar material, held together by bonds extending throughout the structure, just as in the crystalline form of the same substance. These materials supercool particularly easily, presumably because the atoms or ions of the liquid are so tightly bound together that they do not rearrange themselves easily to the positions suitable for the crystal. Thus we can observe their liquid phases at temperatures low enough so that they take on most of the elastic properties of solids. They acquire rigidity, resistance to torque. Nevertheless they never lose entirely their properties of fluidity.
A rod of glass at room temperature, subjected to a continuous stress which is not great enough to break it, will gradually deform or flow over long periods of time. The study of materials which behave in this way, showing both fluidity and elasticity, is called rheology, and it shows that such a combination of properties is very widespread. The fluidity of the glasses is a result of the fact that there is no unique arrangement of the atoms, as there is in a perfect crystal. Certain atoms may be in a situation where there are two possible positions of equilibrium, near to each other. That is, the atom is free to move from the position where it is to an adjacent hole in the structure, with only a small expenditure of energy. The maximum of potential energy between the minima is closely analogous to the energy of activation in chemical reactions, mentioned in Chap. X, Sec. 3. In Fig. XVI-1 (a) we show schematically how the potential energy acting on this atom might appear, as we pass from one position of equilibrium to the other. There will ordinarily not be much tendency for the atom to go from one position to the other. But if the material is under stress and one of the positions tends to relieve the stress, the other not, the energy relations will be shifted so that the position which relieves the stress, as shown in (1b), Fig. XVI-1, will have lower energy than (2b), which does not.

FIG. XVI-1. Schematic potential energy acting on an atom in a glass. (a) Unstressed material; (b) material under stress.

Then if the atom is in position (2), it will have a good chance of acquiring the energy ε₂, enough to pass over the potential hill and fall to the position (1), simply by thermal agitation. By the Maxwell-Boltzmann relation, we should expect the probability of finding an atom with this energy to be proportional to the quantity exp (−ε₂/kT), increasing rapidly with increasing temperature. On the other hand, the probability that an atom in (1) should have the energy ε₁ necessary to pass over the hill to position (2) would contain the much smaller factor exp (−ε₁/kT). The net result would be that atoms in positions like (2) would move to positions like (1), relieving the stress, but the opposite type of transition would not occur. This would amount to a flow of the material, if many such transitions took place. Furthermore, the rate of the process would be proportional to exp (−ε₂/kT), and this would be expected to be proportional to the rate of flow under fixed stress, or to the reciprocal of the coefficient of viscosity. The actual viscosities of glasses show a dependence on temperature of this general nature, the flow becoming very slow at low temperatures, so slow that it is ordinarily not observed at all. But there is no sudden change between a fluid and a solid state.
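The strong temperature dependence of the flow rate implied by the factor exp (−ε₂/kT) is easy to exhibit numerically. The 1-eV barrier height used here is purely hypothetical, chosen only to show the scale of the effect:

```python
import math

k = 1.381e-23    # Boltzmann's constant, J/K
eV = 1.602e-19   # one electron volt, J

def jump_factor(eps_eV, T):
    # Maxwell-Boltzmann factor exp(-eps / kT) governing jumps over a barrier
    return math.exp(-eps_eV * eV / (k * T))

# A hypothetical 1 eV barrier: the jump probability per attempt changes by
# roughly twelve orders of magnitude between room temperature and a
# glass-working temperature.
print(jump_factor(1.0, 300.0))    # ~1.6e-17
print(jump_factor(1.0, 1000.0))   # ~9e-6
```

This enormous ratio is why the flow of a glass rod at room temperature is all but unobservable, while the same material pours freely when hot.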
On the other hand, the probability that an atom in (1) should have the energy ε1 necessary to pass over the hill to position (2) would contain the much smaller factor exp(-ε1/kT). The net result would be that atoms in positions like (2) would move to positions like (1), relieving the stress, but the opposite type of transition would not occur. This would amount to a flow of the material, if many such transitions took place. Furthermore, the rate of the process would be proportional to exp(-ε2/kT), and this would be expected to be proportional to the rate of flow under fixed stress, or to the reciprocal of the coefficient of viscosity. The actual viscosities of glasses show a dependence on temperature of this general nature, the flow becoming very slow at low temperatures, so slow that it is ordinarily not observed at all. But there is no sudden change between a fluid and a solid state.

258 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XVI

The glasses are particularly informing fluids, because we can observe them over such wide temperature ranges. Other types of liquids do not supercool to any such extent, so that ordinarily they cannot be observed at temperatures low enough so that they have begun to lose their characteristically fluid properties. We may infer, however, that the process of flow in all liquids is similar to what we have described in the glasses. That is, the atoms surrounding a given atom are approximately in the arrangement correct for a crystal, but there are so many flaws in the structure that there are many atoms which can move almost freely from the position where they happen to be, to a nearby vacant place where they would fit into the structure almost as well.
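The temperature dependence just described is easily made concrete. The following short computation (in Python; the barrier heights ε1, ε2 and the attempt frequency are assumed round numbers for illustration, not values from the text) evaluates the Boltzmann factors and the net rate of stress-relieving jumps:

```python
import math

k_B = 1.381e-23   # Boltzmann's constant, J/K
eV = 1.602e-19    # joules per electron volt

def jump_rate(barrier_eV, T, attempt_freq=1e13):
    # Thermally activated jump rate over a barrier: nu * exp(-eps/kT).
    # The prefactor (attempt frequency) is an assumed typical lattice value.
    return attempt_freq * math.exp(-barrier_eV * eV / (k_B * T))

# Assumed barriers: under stress the well that relieves the stress lies lower,
# so the barrier eps2 out of position (2) is smaller than eps1 out of (1).
eps2, eps1 = 1.0, 1.2   # eV (hypothetical values)

for T in (300.0, 600.0, 900.0):
    net = jump_rate(eps2, T) - jump_rate(eps1, T)   # net stress-relieving flow
    print(f"T = {T:4.0f} K: net jump rate ~ {net:.2e} per second")
```

The net rate, and with it the fluidity, grows enormously between room temperature and a few hundred degrees higher, which is just the observed behavior of glass.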
One rather informing approximation to the liquid state treats it straightforwardly as a mixture of atoms and holes, the holes simply representing atoms missing from the structure, and it treats flow as the motion of atoms next the holes into the holes, leaving in turn empty holes from which they move. The essential point is that the structure is something like the solid, but like an extremely imperfect, disordered solid. And it is this disorder that gives the possibility of flow. A perfect crystal cannot flow, without becoming deformed and imperfect in the very process of flow.

2. The Latent Heat of Melting.-- With this general picture of the liquid state, let us ask what we expect to find for the thermodynamic properties of liquids. We shall consider two thermodynamic functions, the internal energy and the entropy, and shall ask how we expect these quantities to differ from the corresponding quantities for the solid. This information can be found experimentally from the latent heat of fusion L_m, which gives directly the change in internal energy between the two phases (for solids and liquids the small quantity P(V_l - V_s) by which this should be corrected is negligible), and from the melting point T_m, for the entropy of fusion is given by L_m/T_m. As a matter of general orientation, we first give in Table XVI-1 the necessary information about a number of materials. In the first column we give the latent heat of fusion. We shall find it interesting to compare it with the latent heat of vaporization; therefore we give that quantity in the next column, and the ratio of heat of fusion to heat of vaporization in the third. Next we tabulate the melting point and finally the entropy of fusion. Referring to Table XVI-1, let us first consider the latent heat of fusion. We observe that in practically every case it is but a small fraction of the heat of vaporization.
That is, the atoms or molecules are pulled apart only slightly in the liquid state compared with the solid, while in the vapor they are completely separated. Of course, this holds only for pressures low compared to the critical pressure; near the critical point, the heat of vaporization reduces to zero. To be more specific, we notice that in the metals the heat of fusion is generally three or four per cent of the heat of vaporization.

TABLE XVI-1.--DATA REGARDING MELTING POINT

                 L_m,        L_v     L_m/L_v    T_m,     ΔS_m, cal.
                 kg.-cal.                       abs.     per degree
                 per mole
Metals:
  Na              0.63       26.2     0.024      371       1.70
  Mg              1.16       34.4     0.034      923       1.26
  Al              2.55       67.6     0.038      932       2.73
  K               0.58       21.9     0.020      336       1.72
  Cr              3.93       89.4     0.044     1823       2.15
  Mn              3.45       69.7     0.050     1493       2.31
  Fe              3.56       96.5     0.037     1802       1.97
  Co              3.66       ....     ....      1763       2.08
  Ni              4.20       98.1     0.043     1725       2.44
  Cu              3.11       81.7     0.038     1357       2.29
  Zn              1.60       31.4     0.051      692       2.32
  Ga              1.34       ....     ....       303       4.42
  Se              1.22       ....     ....       490       2.49
  Rb              0.53       20.6     0.026      312       1.70
  Ag              2.70       69.4     0.039     1234       2.19
  Cd              1.46       27.0     0.054      594       2.46
  In              0.78       ....     ....       429       1.82
  Sn              1.72       68.5     0.025      505       3.40
  Sb              4.77       54.4     0.088      903       5.29
  Cs              0.50       18.7     0.027      302       1.66
  Pt              5.33      125       0.043     2028       2.63
  Au              3.03       90.7     0.033     1336       2.27
  Hg              0.58       15.5     0.037      234       2.48
  Tl              0.76       ....     ....       577       1.32
  Pb              1.22       ....     ....       600       2.03
  Bi              2.51       ....     ....       544       4.61
Ionic substances:
  NaF             7.81       ....     ....      1265       6.17
  NaCl            7.22       ....     ....      1073       6.72
  KF              6.28       ....     ....      1133       5.54
  KCl             6.41       ....     ....      1043       6.15
  AgCl            3.15       ....     ....       728       4.33
  AgBr            2.18       ....     ....       703       3.10
Molecular substances:
  H2              0.028      ....     ....        14       2.0
  NO              0.551      ....     ....       110       5.02
  H2O             1.43       ....     ....       273       5.25
  O2              0.096      ....     ....        54       1.78
  A               0.280      ....     ....        83       3.38
  NH3             1.84       ....     ....       198       9.30
  N2              0.218      1.69     0.13        63       3.46
  CO              0.200      1.90     0.11        68       2.94
  HCl             0.506      4.85     0.10       159       3.18
  CO2             1.99       6.44     0.31       217       9.16
  CH4             0.224      2.33     0.10        90       2.49
  HBr             0.620      5.79     0.11       187       3.31
  Cl2             1.63       7.43     0.22       170       9.59
  CCl4            0.577      8.2      0.07       250       2.30
  CH3OH           0.525      ....     ....       176       2.98
  C2H5OH          1.10      10.4      0.11       156       7.05
  CH3COOH         2.64      20.3      0.13       290       9.10
  C6H6            2.35       8.3      0.28       278       8.45

(Entries for the remaining ionic substances of the original table -- KBr, TlBr, the nitrates, NaClO3, NaOH, KOH, K2Cr2O7, BaCl2, CaCl2, and the lead and mercury halides -- are illegible in this reproduction.)

Data are from Landolt's Tables. The heats of vaporization tabulated for the alkali halides are the energies required to break the crystal up into ions, rather than into atoms.

This receives a ready explanation in terms of our model of a liquid as a crude approximation to a solid but with many holes or vacant places. Suppose there are three or four chances out of a hundred that there will be a vacant space instead of an atom at a given point in the imperfect lattice. Then the energy of the substance will be a corresponding amount less than if all points were occupied, for each atom will have a correspondingly smaller number of neighbors on the average, and the energy of the crystal comes from the attraction of neighboring atoms for each other. This is about the right order of magnitude to account for the latent heat of fusion, then, and at the same time it indicates a density three or four per cent less for the liquid than for the solid, which is about the right order of magnitude.
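The three-or-four-per-cent argument can be put in a line of arithmetic. The sketch below is a crude nearest-neighbor estimate of our own (not a calculation appearing in the text): a fraction a of vacant sites removes about the fraction a of the binding energy, and lowers the density by about the same fraction.

```python
def hole_model(a, L_v):
    # Crude hole model of the liquid: a fraction a of lattice sites is vacant.
    # Removing that fraction of atoms removes ~a of the neighbor bonds, hence
    # ~a of the binding energy L_v; the density deficit is also ~a.
    return a * L_v, a

# Sodium, with L_v = 26.2 kg.-cal. per mole from Table XVI-1 and a = 0.04:
L_m_est, deficit = hole_model(0.04, 26.2)
print(f"estimated L_m = {L_m_est:.2f} kg.-cal. per mole, "
      f"density deficit = {deficit:.0%}")
# Table XVI-1 gives the observed L_m for Na as 0.63: the same order of magnitude.
```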
From Table XVI-1 we see that the situation is about the same for the alkali halides (and presumably for other ionic crystals) as for the metals, if we understand by the latent heat of vaporization the energy required to break up the substance into a gas of ions. In the case of molecular substances, the latent heat of fusion is a larger fraction of the latent heat of vaporization, from 10 to 20 per cent or even higher, so that it seems clear that the liquid differs from the solid more in these cases than with metals and ionic substances. In many of the molecular crystals, the molecules are fitted together in a regular arrangement, whereas in the liquid there is presumably more tendency to rotation of the molecules, and they do not fit so perfectly. This tendency would make considerable change in the energy and in the volume, and would represent a feature which is absent with metals, where the atoms are effectively spherical. For example, in ice, as we shall see later, the triangular water molecules are arranged in a definite structure, each oxygen being surrounded by four others, with the hydrogens in such an arrangement that the dipoles of adjacent molecules attract each other. In liquid water, on the other hand, the arrangement is far less perfect and precise, the molecules are farther apart on centers, and one can understand the latent heat of fusion simply as the work necessary to increase the average distance between the dipoles, against their attractions. Presumably in all the molecular substances, it is more accurate to think simply of the increased interatomic distance as leading to the heat of fusion, rather than postulating holes in the structure as definitely as one does with a metal. However one looks at it, the liquid is a more open, less well-ordered structure than the solid, and the latent heat represents the work necessary to pull the atoms or molecules apart to this open structure.

3. The Entropy of Melting.--
When we look at the entropies of melting, in Table XVI-1, we see that there is a certain amount of regularity in the table. For most of the metals, the entropy of fusion is between two and three calories. For the diatomic ionic crystals it is about twice as much, so that if we figure the entropy per atom instead of per molecule it is about the same as for the metals. As a first step toward understanding the entropy of melting, we might use a rough argument similar to that of Chap. XIII, Sec. 5, where we discussed the entropy changes in polymorphic transitions. We were there interested in the slope of the transition curves between phases, but the calculation we made was one of ΔS/ΔV, the change of entropy between two phases divided by the change of volume, and we assumed that the change of entropy with volume in going from one phase to another was the same as in changing the volume of a single phase. In this case, using the thermodynamic relation

    (∂S/∂V)_T = (∂P/∂T)_V,                                        (3.1)

we have

    ΔS/ΔV = thermal expansion / compressibility.                  (3.2)

From the relation (3.2), and the observed change of volume on melting, we can compute the change of entropy. In Table XVI-2 we give values of the volume of the solid per mole (extrapolated from room temperature to the melting point by use of the thermal expansion), the volume of the liquid at the melting point per mole, the change of volume, the thermal expansion, and ΔS as computed from them by Eq. (3.2), for a number of solids for which the required quantities are known, and we compare with the values of ΔS as tabulated in Table XVI-1, which we repeat for convenience.

TABLE XVI-2.--CALCULATION OF ENTROPY OF MELTING

          Molecular    Molecular               Thermal      ΔS_m        ΔS_m
          volume of    volume of      ΔV     expansion   computed    observed
          solid, cc.   liquid, cc.             X 10^5
  Na         24.2         24.6        0.4       21.5        0.13        1.70
  Mg         14.6         15.2        0.6        7.3        0.36        1.26
  Al         10.6         11.0        0.4        6.8        0.49        2.73
  K          46.3         47.2        0.9       23.0        0.13        1.72
  Fe          7.50         8.12       0.6        3.36       0.86        1.97
  Ag         10.8         11.3        0.5        5.7        0.69        2.19
  Cd         13.4         14.0        0.6        9.3        0.93        2.46
  NaCl       29.6         37.7        8.1       12.1        5.6         6.72
  KCl        40.5         48.8        8.3       11          4.4         6.15
  KBr        45.0         56.0       11.0       12.6        4.9         4.65
  AgCl       27.0         29.6        2.6        9.9        2.6         4.33
  AgBr       30.4         33.6        3.2       10.4        3.2         3.10

Molecular volumes of the solid are calculated from observed densities at room temperature (as tabulated in Landolt's Tables), extrapolated to the melting point by using the thermal expansion. For the ionic crystals, data on densities of liquids and solids are taken from Lorenz and Herz, Z. anorg. allgem. Chem., 145, 88 (1925).

We see that the calculated entropies of melting are of the right order of magnitude, but that in most cases they are decidedly smaller than the observed ones. In other words, we must assume that the entropy actually increases in melting more than we should assume simply from the change of volume, though as a first approximation that gives a useful and partially correct picture of what happens. The fact that the ratio of change of entropy to change of volume is approximately given by this simple picture shows that the calculation of Chap. XIII, Sec. 5, on the slope of transition lines, will apply roughly to the slope of the melting curves, and this is found to be true experimentally. Of course, as with transitions between solids, there is great variation from one material to another, and occasional materials, of which water is the most conspicuous example, actually have a decrease of volume on melting, though their entropy increases, so that in such cases relation (3.2) is obviously entirely incorrect.
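Equation (3.2) is easily checked numerically. In the sketch below the thermal expansion and ΔV are taken from Table XVI-2, but the compressibility of aluminum is an assumed modern handbook value, since the table does not list compressibilities:

```python
CAL = 4.184  # joules per calorie

def entropy_of_melting(alpha, kappa, dV_cc):
    # Eq. (3.2): dS = (thermal expansion / compressibility) * dV.
    # alpha in 1/K, kappa in 1/Pa, dV in cc per mole; result in cal/deg/mole.
    return alpha / kappa * dV_cc * 1e-6 / CAL

# Al: alpha = 6.8e-5 per deg and dV = 0.4 cc (Table XVI-2);
# kappa ~ 1.33e-11 per Pa is an assumed value (bulk modulus ~75 GPa).
dS = entropy_of_melting(6.8e-5, 1.33e-11, 0.4)
print(f"Al: computed dS_m ~ {dS:.2f} cal/deg "
      f"(table: 0.49 computed, 2.73 observed)")
```

The computed value reproduces the table's 0.49 and, as the text remarks, falls well short of the observed entropy of fusion.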
The simple argument we have given so far does not give a very adequate interpretation of the entropy of melting, and as a matter of fact no very complete theory is available. We can analyze the problem a little better, however, by considering it more in detail. We can imagine that the entropy of the liquid should be greater than that of the solid for two reasons. First, at the absolute zero, the solid has zero entropy. If we could carry the liquid to the absolute zero by supercooling, however, we should imagine, at least by elementary arguments, that its entropy would be greater than zero. The reason is that the atoms or molecules of the liquid are arranged in a much more random way than in the crystal, and since entropy measures randomness, this must lead to a positive entropy for the liquid. We shall be able to estimate this contribution to the entropy in the next paragraph and shall see that while it is appreciable, it is not great enough to account for nearly all of the entropy of fusion. Secondly, there are good reasons for thinking that the specific heat of the liquid, at temperatures between the absolute zero and the melting point, would be greater than that of the solid. Thus the integral

    ∫ from 0 to T_m of (C/T) dT,

measuring the difference of entropy between absolute zero and the melting point, will be greater for the liquid than for the solid, giving an additional reason why the entropy of the liquid should be greater than that of the solid. It is reasonable to think that this effect is fairly large, and the whole entropy of fusion can be regarded as a combination of the two effects we have mentioned.
Let us assume N atoms and Na holes, where a is a small fraction of unity. We may consider that these form a lattice of N(l + a) points and may say very crudely that any arrangement of the N atoms and the Na holes on these N(\ + a) points will constitute a possible arrangement of the substance having the same, lowest energy Vo. By elementary probability theory, the number of ways of arranging N things of one sort, and Na of another, in N(l + ) spaces, one to a space, is [JVQ +)]! ,,,. W ~ Nl(Na)\" (At6) Using Stirling's formula, AM = (N/e) N approximately, the expression (3.3) becomes w _ the powers of N and e canceling in numerator and denominator. Now we can calculate the entropy by Boltzmann's relation S = k In W, of Eq. (1.3), Chapter III. We have immediately S = Nk[(l + a) In (1 + a) - a In a]. (3.5) In the expression (3.5) let us put a. 0.04, the value which we roughly estimated from the latent heat. Then calculation gives us at once S = O.l&SNk = 0.33 cal. per degree per mole. (3.6) The value (3.6), while appreciable compared with the values of Table XVI-1, which are of the order of magnitude of two or more calories per degree, is definitely less, so much less that it cannot possibly account for the whole entropy of fusion. Let us see what value of a we should have to take to get the whole entropy of fusion from this term. If we set a = 1, for instance, we have S = l.38Nk = 2.75 cals. per degree per mole, (3.7) about the right value. But this would correspond to an equal mixture of atoms and holes, a substance with a density only half that of the solid, which is clearly impossible. It is unlikely that the crudity of our calcula- tion could make a very large difference in the result, so that we may conclude that the effect of randomness on the entropy of fusion is impor- 264 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XVI tant, but not the only significant effect. 
In this connection it is interesting to note that in one or two cases supercooled liquids have been carried down practically to the absolute zero and their specific heats measured, so that the entropy could be determined at the absolute zero. To within an error of a few tenths of a unit, the entropy was found to be zero. This would not exclude an entropy at the absolute zero of the order of magnitude of Eq. (3.5), which seems possible, but it definitely would show that the entropy difference between solid and liquid comes mostly at temperatures above the absolute zero. It appears from the previous paragraph that the larger part of the entropy of fusion must arise because the liquid has a larger specific heat than the solid in the hypothetical state of supercooling, so that its entropy difference between absolute zero and the melting point is greater than for the solid. An indication as to why this should be true is seen in the preceding paragraphs of the present section, where we have discussed the change of entropy with volume. This is represented graphically in Fig. XIII-3, where the entropy is shown as a function of temperature, for several volumes. At larger volumes (as V = V_0 in Fig. XIII-3), the natural frequencies of molecular vibration are lower, so that the specific heat rises to its classical value at lower temperatures, and the specific heat, and consequently the entropy, are greater than at the smaller volume (V = 0.7V_0 in the figure, for instance). In the particular case shown in the figure, the entropy difference between the two volumes shown, which differ by 30 per cent, amounts to about three entropy units at temperatures above 300 deg. abs. Something of the same effect, though on a smaller scale, would be expected in comparing solids and liquids, as we have mentioned at the beginning of this section.
The liquid is n more open structure, having therefore lower frequencies of molecular vibration and a more nearly classical specific heat at low temperatures. Thus its entropy difference between the absolute zero zmd the melting point, if the liquid really could bo carried to absolute zero, would bo greater than for the solid. By itself, however, us we ran judge from Table XVI-2, it seems unlikely that this effect would be large 1 enough. If 30 per cent difference in volume amounts to throe entropy units, wo should need something like 15 per cent difference in volume to account for the approxi- mately 1.5 entropy units needed, when we take account of possible entropy of the liquid coming from randomness. And this is more than the actual difference in volume, in most cases. Nevertheless, there is an additional feature of difference between the liquid and solid that might lead to still higher specific heat and entropy for the liquid. In Fig. XVI-1 we have seen the type of potential to be expected for an appreciable number of atoms, those that are capable of shifting to a near-by position of equilibrium with small expenditure of energy. This is so far from the SBC. 4] THE LIQriD STATE AND FUSION 265 potential of a linear restoring force that our whole discussion of specific heats, which rests on simple harmonic motion, does not apply to it. As a matter of fact, tho energy levels in a potential of the type shown in Fig. XVI-1 lie closer together than we should suppose from our study of linear restoring forces. But in general, the closer together the energy levels of any problem are, the lower the temperature at which its specific heat becomes approximately classical. This reason to expect a high specific heat for a supercooled liquid, in addition to those already discussed, is probably enough to account for the entropy difference between the liquid and tho solid. 
It is very hard to get a satisfactory way of calculating the magnitude of this entropy difference, however, and we must remain content with a qualitative explanation of the values of Table XVI-1. One thing is clear from our discussion: it will hardly be possible to understand fusion without studying the liquid state as well as the solid state from the standpoint of the quantum theory, and this is a field that has hardly been explored at all. No treatment based purely on classical theory can be expected to be very good.

4. Statistical Mechanics and Melting.-- Objection might be made to our argument of the preceding section, in which we considered a hypothetical supercooled state of the liquid down to the absolute zero, on the ground that that state is not one of thermal equilibrium and that we cannot properly consider it by itself at all. A correct statistical treatment should yield the equilibrium state at any temperature; that is, below the melting point it should give the solid, above the melting point the liquid, with a discontinuous change in properties at that point. We shall now show by a simple example that the statistical treatment really will give such a discontinuous change, but that nevertheless our method of treatment was entirely justified. We shall calculate the partition function, and from it the free energy, of a simple model of solid and liquid, and shall show that the free energy as a function of temperature is a function with practically a discontinuous slope at a given temperature, the melting point, below which one phase, the solid, is stable, and above which another, the liquid, is the stable phase. To describe our model, we shall give its energy levels, so that we can calculate the partition function directly. The simplest model that shows the properties we wish is the following. We assume a single level, at energy NE_s, where N is the number of atoms, corresponding to the solid.
At a higher energy, NE_l, we assume a multiple level corresponding to the liquid. The energy is higher on account, for example, of the greater interatomic distance in liquids. The multiplicity of the level arises, for example, on account of the randomness of molecular arrangement. We assume that the multiple energy level at NE_l really consists of w^N coincident levels, where w is a constant. Then the partition function is

    Z = e^(-NE_s/kT) + w^N e^(-NE_l/kT),                          (4.1)

and the Helmholtz free energy, which is equal to the Gibbs free energy in this case where there is no pressure, is

    G = -kT ln (e^(-NE_s/kT) + w^N e^(-NE_l/kT)).                 (4.2)

If we had only the solid, or only the single level at NE_s, the partition function would have contained only the first term in Eq. (4.2), and the free energy would have been

    G_s = NE_s.                                                   (4.3)

If we had only the liquid, or the multiple level at NE_l, we should have had only the second term in Eq. (4.2), and the free energy would have been

    G_l = NE_l - (Nk ln w)T,                                      (4.4)

in which the first term, NE_l, is the internal energy, and Nk ln w is the entropy, exactly analogous to Eq. (1.3), Chap. III (the number of states is here w^N, so that the entropy should be k ln (w^N) = Nk ln w). For the free energy as a function of temperature we should then have the two straight lines of Fig. XVI-2, the horizontal one representing the solid, the sloping one the liquid.

FIG. XVI-2.--Gibbs free energy as function of temperature, for simplified model of solid and liquid, illustrating change of phase on melting.

The slope of the curve measures the negative of the entropy, as we see at once from Eqs. (4.3) and (4.4), where the solid has zero entropy, the liquid the positive entropy Nk ln w. This accords at once with the thermodynamic relation S = -(∂A/∂T)_V = -(∂G/∂T)_P. From Fig.
XVI-2, we see that the solid has the lower free energy at temperatures below the intersection, on account of its lower internal energy, while the liquid has lower free energy above the intersection, its greater entropy resulting in a downward slope which counteracts its greater internal energy. The melting point comes at the intersection, at the temperature where G_s = G_l, or at

    T_m = N(E_l - E_s) / (Nk ln w) = L_m / ΔS_m,                  (4.5)

where the latent heat of melting, L_m, equals N(E_l - E_s) and the entropy of melting, ΔS_m, equals Nk ln w. The calculation we have just made, considering the solid and liquid separately, drawing a free energy curve for each, for all temperatures, whether they are stable or not, and finding which free energy curve is lower at any given temperature, is analogous to the method used in this chapter to discuss fusion and also to the method used in Chap. XII, for example in Fig. XII-4, in discussing vaporization by Van der Waals' equation. Properly, however, we should have used directly the single free energy formula (4.2), and plotted it as a function of temperature. This almost precisely equals the function G_s when T < T_m, and G_l when T > T_m. For if T << T_m, the first term in the bracket of Eq. (4.2) is much larger than the second, and Eq. (4.3) is a good approximation to Eq. (4.2), while if T >> T_m the second term is much larger than the first, and Eq. (4.4) is the correct approximation. The formula (4.2), however, represents a curve which joins these two straight lines continuously, bending sharply but not discontinuously through a small range of temperatures, in which the two terms of Eq. (4.2) are of the same order of magnitude. For practical purposes, this range of temperatures is so small that it can be neglected. Let us compute it, by finding the temperature T at which the second term in the bracket of Eq. (4.2) has a certain ratio, say c, to the first term. That is, we have

    w^N e^(-NE_l/kT) = c e^(-NE_s/kT),

    ln c = N ln w - N(E_l - E_s)/kT = N ln w (1 - T_m/T),         (4.6)

using Eq. (4.5). Thus we have

    (T - T_m)/T = ln c / (N ln w).                                (4.7)

Here ln w is of the order of magnitude of unity. If we ask for the temperature at which the second term of Eq. (4.2) is, say, ten times the first, or one tenth the first, we have c equal respectively to 10 or 1/10, so that ln c is of the order of magnitude of unity, and positive or negative as the case may be. Thus (T - T_m)/T is of the order of 1/N, showing that the range of temperature in which the correct free energy departs from the lower of the two straight lines in Fig. XVI-2 is of the order of 1/N of the melting temperature, a wholly negligible range if N is of the order of the number of molecules in an ordinary sample. Thus, our procedure of finding the intersection of the two curves is entirely justified. The real situation, of course, is much more complicated than this. There are many stationary states for the solid, corresponding to different amounts of excitation of vibrational energy; when we compute the partition function considering all these states, and from it the free energy, we get a curve like those of Fig. XIII-6, curving down as the temperature increases to indicate an increasing entropy and specific heat. Similarly the liquid will have not merely one multiple level but a distribution of stationary states. There is even good reason, on the quantum theory, to doubt that the lowest level of the liquid will be multiple at all; it is much more likely that it will be spread out into a group of closely spaced, but not exactly coincident, levels, so that the entropy will really be zero at the absolute zero, but will rapidly increase to the value characteristic of the random arrangement as the temperature rises above absolute zero.
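The sharpness argument of Eqs. (4.2) and (4.7) can be exhibited numerically. The sketch below takes k = 1, E_s = 0, and E_l = T_m ln w per atom (assumed round parameters of our own), so that melting falls at T_m = 1000; a log-sum-exp guard keeps the exponentials of order N finite:

```python
import math

def g_per_atom(T, N, T_m=1000.0, ln_w=1.0):
    # G/N from Eq. (4.2), with k = 1, E_s = 0, E_l = ln_w * T_m.
    b = N * ln_w * (1.0 - T_m / T)          # ln(liquid term / solid term)
    m = max(0.0, b)
    return -(T / N) * (m + math.log(math.exp(-m) + math.exp(b - m)))

def crossover_width(N, T_m=1000.0, ln_w=1.0, c=10.0):
    # Eq. (4.7): the liquid term is c times the solid term when
    # (T - T_m)/T = ln c / (N ln w).
    x = math.log(c) / (N * ln_w)
    return T_m / (1.0 - x) - T_m

N = 1000
print(round(g_per_atom(900.0, N), 3))    # ~0: the solid line, G_s = 0
print(round(g_per_atom(1100.0, N), 3))   # ~-100: the liquid line, ln_w*(T_m - T)
for n in (100, 10_000, 6.0e23):
    print(f"N = {n:.0e}: rounding width ~ {crossover_width(n):.1e} deg")
```

For a few hundred atoms the rounding amounts to a degree or so, in agreement with the estimate made from Eq. (4.7) in the text; for a mole of atoms it is utterly negligible.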
However this may be, there will surely be a great many more levels for the liquid than for the solid, in a given range of energies, and the liquid levels will lie at definitely higher energies than those of the solid. This is all we need to have a partition function, like Eq. (4.1), consisting of two definite parts: a set of low-lying levels, which are alone of importance at low temperatures, and a very much larger set of higher levels, which are negligible at low temperatures on account of the small exponential Boltzmann factor but which become much more important than the others at high temperature, on account of their great number. In turn this will lead to a free energy curve of two separate segments, joined almost with discontinuous slope at the melting point. It is not impossible, as a matter of fact, to imagine that the two groups of levels, those for the solid and for the liquid, should merge partly continuously into each other. An intermediate state between solid and liquid would be a liquid with a great many extremely small particles of crystal in it, or a solid with many amorphous flaws in it that simulated the properties of the liquid. Such states are dynamically possible and would give a continuous series of energy levels between solid and liquid. If there were enough of these, they could affect our conclusions, in the direction of rounding off the curve more than we should otherwise expect, so that the melting would not be perfectly sharp. We can estimate very crudely the temperature range in which such a hypothetical gradual change might take place, from our formula (4.7). Suppose that instead of considering the melting of a large crystal, we consider an extremely small crystal containing only a few hundred atoms. Then, by Eq. (4.7), the temperature range in which the gradual change was taking place might be of the order of a fraction of a per cent of the melting temperature, or a degree or so.
Crystals of this size, in other words, would not have a perfectly sharp melting point, and if the material breaks up into as fine-grained a structure as this around the melting point, even a large crystal might melt smoothly instead of discontinuously. The fact, however, that observed melting points of pure materials are as sharp as they are, shows that this effect cannot be very important in a large way. In any case, it cannot affect the fundamental validity of the sort of calculation which we have made, finding the melting point by intersection of free energy curves for the two phases; for mathematically it simply amounts to an extremely small rounding off of the intersection. We shall come back to such questions later, in Chap. XVIII, where we shall show that in certain cases there can be a large rounding off of such intersections, with corresponding continuous change in entropy. This is not a situation to be expected to any extent, however, in the simple problem of melting.

CHAPTER XVII

PHASE EQUILIBRIUM IN BINARY SYSTEMS

In the preceding chapter we have been considering the equilibrium of two phases of the same substance. Some of the most important cases of equilibrium come, however, in binary systems, systems of two components, and we shall take them up in this chapter. We can best understand what is meant by this by some examples. The two components mean simply two substances, which may be atomic or molecular and which may mix with each other. For instance, they may be substances like sugar and water, one of which is soluble in the other. Then the study of phase equilibrium becomes the study of solubility, the limits of solubility, the effect of the solute on the vapor pressure, boiling point, melting point, etc., of the solvent. Or the components may be metals, like copper and zinc, for instance. Then we meet the study of alloys and the whole field of metallurgy. Of course, in metallurgy one often has to deal with alloys with more than two components (ternary alloys, for instance, with three components), but they are considerably more complicated, and we shall not deal with them. Binary systems can ordinarily exist in a number of phases. For instance, the sugar-water system can exist in the vapor phase (practically pure water vapor), the liquid phase (the solution), and two solid phases (pure solid sugar and ice). The copper-zinc system (the alloys that form brasses of various compositions) can exist in vapor, liquid, and five or more solid phases, each of which can exist over a range of compositions. Our problem will be to investigate the equilibrium between these phases. We notice in the first place that we now have three independent variables instead of the two which we have ordinarily had before. In addition to pressure and temperature, we have a third variable measuring the composition. We ordinarily take this to be the relative concentration of one or the other of the components, N_1/(N_1 + N_2) or N_2/(N_1 + N_2), as employed in Chap. VIII, Sec. 2; since these two quantities add to unity, only one of them is independent. Then we can express any thermodynamic function, as in particular the Gibbs free energy, as a function of the three independent variables pressure, temperature, and composition. We shall now ask, for any values of pressure, temperature, and composition, which phase is the stable one; that is, which phase has the smallest Gibbs free energy. In some cases we shall find that a single
Of course, in metallurgy one often has to deal with alloys with more than two components ternary alloys, for instance, with three components but they are considerably more complicated, and we shall not deal with them. Binary systems can ordinarily exist in a number of phases. For instance, the sugar-water system can exist in the vapor phase (practically pure water vapor), the liquid phase (the solution), and two solid phases (pure solid sugar and ice). The copper-zinc system (the alloys that form brasses of various compositions), can exist in vapor, liquid, and five or more solid phases, each of which can exist over a range of compositions. Our problem will be to investigate the equilibrium between these phases. We notice in the first place that we now have three independent variables instead of the two, which we have ordinarily had before. In addition to pressure and temperature, we have a third variable measuring the com- position. We ordinarily take this to be the relative concentration of ono or the other of the components, N\/(N\ + AT 2 ) or N 2 /(Ni + N 2 ), as employed in Chap. VIII, Sec. 2; since these two quantities add to unity, only one of them is independent. Then we can express any thermo- dynamic function, as in particular the Gibbs free energy, as a function of the three independent variables pressure, temperature, and composi- tion. We shall now ask, for any values of pressure, temperature, and composition, which phase is the stable one; that is, which phase has the smallest Gibbs free energy. In some cases we shall find that a single 270 SEC. 1] PHASE EQUILIBRIUM IN BINARY SYSTEMS 271 phase is stable, while in other cases a mixture of two phases is more stable than any single phase. Most phases are stable over only a limited range of compositions, as well as of pressure and temperature. For instance, in the sugar and water solution, the liquid is stable at a given temperature, only up to a certain maximum concentration of sugar. 
Above this concentration, the stable form is a mixture of saturated solution and solid sugar. The solid phases in this case, solid sugar and solid ice, are stable only for quite definite compositions; for any other composition, that is, for any mixture of sugar and water, the stable form of the solid is a mixture of sugar and ice. On the other hand, we have stated that the solid phases of brass are stable over a considerable range of compositions, though for intermediate compositions mixtures of two solid phases are stable. In studying these subjects, the first thing is to get a qualitative idea of the various sorts of phases that exist, and we proceed to that in the following section.

1. Types of Phases in Binary Systems. A two-component system, like a system with a single component, can exist in solid, liquid, and gaseous phases. The gas phase, of course, is perfectly simple: it is simply a mixture of the gas phases of the two components. Our treatment of chemical equilibrium in gases, in Chap. X, includes this as a special case. Any two gases can mix in any proportions in a stable way, so long as they cannot react chemically, and we shall assume only the simple case where the two components do not react in the gaseous phase. The liquid phase of a two-component system is an ordinary solution. The familiar solutions, like that of sugar in water, exist only when a relatively small amount of one of the components, called the solute (sugar in this case), is mixed with a relatively large amount of the other, the solvent. But this is the case mostly with components of very different physical properties, like sugar and water. Two similar components often form a liquid phase stable over large ranges of composition, or even for all compositions. Thus water and ethyl alcohol will form a solution in any proportion, from pure water to pure alcohol. And at suitable temperatures, almost any two metals will form a liquid mixture stable at any composition.
A liquid solution is similar in physical properties to any other liquid. We have seen that an atom or molecule in an ordinary liquid is surrounded by neighboring atoms or molecules much as it would be in a solid, but the ordered arrangement does not extend beyond nearest neighbors. When we have a mixture of components, it is obvious that each atom or molecule will be surrounded by some neighbors of its own component and some of the other component. If an atom or molecule of one sort attracts an unlike neighbor about as well as a like neighbor, and if atoms or molecules of both kinds fit together well, the solution may well have an energy as low as the liquids of the individual components. In addition, a solution, like a mixture of gases, has an entropy of mixing, so that the entropy of the solution will be greater than that of the components. In such a case, then, the Gibbs free energy of the solution will be lower than for the pure liquids, and it will be the stable phase. On the other hand, if the atoms or molecules of the two sorts fit together badly or do not attract each other, the energy of the solution may well be greater than that of the pure liquids, enough greater to make the free energy greater in spite of the entropy of mixing, so that the stable situation will be a mixture of the pure liquids, or a liquid and solid, depending on the temperature. Thus oil and water do not dissolve in each other to any extent. Their molecules are of very different sorts, and the energy is lower if the water molecules are all close together and if the oil molecules are congregated together in another region. That is, the stable situation will be a mixture of the two phases, forming separate drops of oil and water, or an emulsion or suspension. A very little oil will presumably be really dissolved in the water and a very little water in the oil, but the drops will be almost pure.
As we have just seen, the condition for the existence of a liquid phase stable for a wide range of concentrations (that is, for a large solubility of one substance in another) is that the forces acting between atoms or molecules of one component and those of the other should be of the same order of magnitude as the forces between pairs of atoms or molecules of either component, so that the internal energy of the solution will be at least as low as that of the mixture, and the entropy of mixing will make the free energy lower for the solution. Let us consider a few specific cases of high solubility. In the first place, we are all familiar with the large solubility of many ionic salts in water. The crystals break up into ions in solution, and these ions, being charged, orient the electrical dipoles of the water around them, a positive ion pulling the negatively charged oxygen end of the water molecule toward it, a negative ion pulling the positively charged hydrogen. This leads to a large electrical attraction, and a low energy and free energy. The resulting free energy will be lower than for the mixture of the solid salt and water, unless the salt is very strongly bound. Water is not the only solvent that can form ionic solutions in this way. Liquid ammonia, for instance, has a large dipole moment and a good many of the same properties, and the alcohols, also with considerable dipole moments, are fairly good solvents for some ionic crystals. Different dipole liquids, similarly, attract each others' molecules by suitable orientation of the dipoles and form stable solutions. We have already mentioned the case of alcohol and water. In ammonia and water, the interaction between neighboring ammonia and water molecules is so strong that they form the ammonium complex, leading to NH4OH if the composition is correct, and a solution of NH4OH in either water or ammonia if there is excess of either component.
The substance NH4OH is generally considered a chemical compound; but it is probably more correct simply to recognize that a neighboring water and ammonia molecule will organize themselves, whatever may be the composition of the solution, in such a way that one of the hydrogens from the water and the three hydrogens from the ammonia form a fairly regular tetrahedral arrangement about the nitrogen, as in the ammonium ion. There are no properties of the ammonia-water system which behave strikingly differently at the composition NH4OH from what they do at neighboring concentrations. Solutions of organic substances are almost invariably made in organic solvents, simply because here again the attractive forces between two different types of molecule are likely to be large if the molecules are similar. Different hydrocarbons, for instance, mix in all proportions, as one is familiar with in the mixtures forming kerosene, gasoline, etc. The forces between an organic solvent and its solute, as between the molecules of an organic liquid, are largely Van der Waals forces, though in some cases, as the alcohols, there are dipole forces as well. In the metals, the same type of interatomic force acts between atoms of different metals that acts between atoms of a single element. We have already stated that for this reason liquid solutions of many metals with each other exist in wide ranges of composition. There are many other cases in which two substances ordinarily solid at room temperature are soluble in each other when liquefied. Thus, a great variety of molten ionic crystals are soluble in each other. And among the silicates and other substances held by valence bonds, the liquid phase permits a wide range of compositions. This is familiar from the glasses, which can have a continuous variability of composition and which can then supercool to essentially solid form, still with quite arbitrary compositions, and yet perfectly homogeneous structure.
Solid phases of binary systems, like the liquid phases, are very commonly of variable composition. Here, as with the liquid, the stable range of composition is larger, the more similar the two components are. This of course is quite contrary to the chemists' notion of definite chemical composition, definite structural formulas, etc., but those notions are really of extremely limited application. It happens that the solid phases in the system of water and an ionic compound are often of rather definite composition, and it is largely from this rather special case that the idea of definite compositions in solids has become so firmly rooted. In such a system, there are normally two solid phases: ice and the crystalline ionic compound. Ice can take up practically none of any ionic compound, so that it has practically no range of compositions. And many ionic crystals take up practically no water in their crystalline form. But there are many ionic crystals which are said to have water of crystallization. Water molecules form definitely a part of the structure. And in some of these the proportion of water is not definitely fixed, so that they form mixed phases of variable composition. Water and ionic compounds are very different types of substances, and it is not unnatural that they do not form solids of variable composition. The reason why water solutions of ionic substances exist is that the water molecules can rotate so as to be attracted to the ions; this is not allowed in the solid, where the ice structure demands a fairly definite orientation of the molecule. But as soon as we think about solid phases of a mixture of similar components, we find that almost all the solid phases exist over quite a range. Such phases are often called by the chemists solid solutions, to distinguish them from chemical compounds.
This distinction is valid if we mean by a chemical compound a phase which really exists at only a quite definite composition. But the chemists, and particularly the metallurgists, are not always careful about making this distinction; for this reason the notation is misleading, and we shall not often use it. Solid phases of variable composition exist in the same cases that we have already discussed in connection with liquids. Thus mixtures of ionic substances can often form crystals with a range of composition. The conditions under which this range of composition is large are what we should naturally suppose: the ions of the two components should be of about the same size and valence, and the two components should be capable of existing in the same crystal structure. We shall meet many examples of such solids of variable composition later, when we come to study different types of materials. The best-explored range of solid compounds of variable composition comes in metallurgy. Here an atom can replace another of the same size quite freely, but not another of rather different size. Thus the copper and nickel atoms have about the same size; they form a phase stable in all proportions. On the other hand, calcium and magnesium have atoms of quite different sizes, normally existing in different crystal structures, and they cannot be expected to substitute for each other in a lattice. They form, as a matter of fact, as close an approach to phases of definite chemical composition as we find among the metals. They form three solid phases: pure magnesium, pure calcium, and a compound Ca3Mg4, and no one of these is soluble to any extent in any of the others; that is, each exists with almost precisely fixed composition. Most pairs of elements are intermediate between these.
They form several phases, each stable for a certain range of compositions, and often each will be centered definitely enough about some simple chemical composition so that it has been customary to consider them as being chemical compounds, though this is not really justified except in such a definite case as Ca3Mg4. Each of the phases in general has a different crystal structure. Of course, the crystal cannot be perfect, for ordinarily it contains atoms of the two components arranged at random positions on the lattice. It is the lattice that determines the phase, not the positions of the metallic atoms in it. But if the two types of atom in the lattice are very similar, they will not distort it much, so that it will be practically perfect. For compositions intermediate between those in which one of the phases can exist, the stable situation will be a mixture of the two phases. This occurs, in the solid, ordinarily as a mixture of tiny crystals of the two phases, commonly of microscopic size, with arbitrary arrangements and sizes. It is obvious that the properties of such a mixture will depend a great deal on the size and orientation of the crystal grains; these are things not considered in the thermodynamical theory at all.

2. Energy and Entropy of Phases of Variable Composition. The remarks we have just made about mixtures of crystalline phases raise the question, what is a single phase anyway? We have not so far answered this question, preferring to wait until we had some examples to consider. A single phase is a mixture that is homogeneous right down to atomic dimensions. If it has an arbitrary composition, it is obvious that really on the atomic scale it cannot be homogeneous, but if it is a single phase we assume that there is no tendency for the two types of atom to segregate into different patches, even patches of only a few atoms across.
On the other hand, a mixture of phases is supposed to be one in which the two types of atoms segregate into patches of microscopic or larger size. These two types of substance have quite different thermodynamic behavior, both as to internal energy and as to entropy. We shall consider this distinction, particularly for a metallic solid, but in a way which applies equally well to a liquid or other type of solid. Suppose our substance is made of constituents a and b. Let the relative concentration of a be c_a = x; of b, c_b = 1 - x. Assume the atoms are arranged on a lattice in such a way that each atom has s neighbors (s = 8 for the body-centered cubic structure, 12 for face-centered cubic and hexagonal close packed, etc.). In a real solid solution, or homogeneous phase, there will be a chance x of finding an atom a at any lattice point, a chance 1 - x of finding an atom b. We assume a really random arrangement, so that the chance of finding an atom at a given lattice point is independent of what happens to be at the neighboring points. This assumption will be examined more closely in the next chapter, where we take up questions of order and disorder in lattices. Then out of the s neighbors of any atom, on the average sx will be of type a, s(1 - x) of type b. If we consider all the pairs of neighbors in the crystal, we shall have

    Nsx^2/2          pairs of type aa,
    Ns(1 - x)^2/2    pairs of type bb,
    Nsx(1 - x)       pairs of type ab.        (2.1)

Here N is the total number of atoms of both sorts. The factors in Eq. (2.1) arise because each pair must be counted only once, not twice as we should if we said that the number of pairs of type aa equaled the number of atoms of type a (Nx) times the number of neighbors of type a which each one had (sx). Now we make a simple assumption regarding the energy of the crystal at the absolute zero.
We assume that the total energy can be written as a sum of terms, one for each pair of nearest neighbors. We shall be interested in this energy only at the normal distance of separation of atoms. At this distance, we shall assume that the energy of a pair aa is E_aa, of a pair bb is E_bb, and of a pair ab, E_ab. All these quantities will be negative, if we assume the zero state of energy is the state of infinite separation of the atoms (the most convenient assumption for the present purpose) and if all pairs of atoms attract each other. Then the total energy of the crystal, at the absolute zero, will be

    U_0 = (Nsx^2/2)E_aa + (Ns(1 - x)^2/2)E_bb + Nsx(1 - x)E_ab
        = (Ns/2){x E_aa + (1 - x)E_bb + 2x(1 - x)[E_ab - (E_aa + E_bb)/2]}.    (2.2)

According to our assumptions, the energy of a crystal wholly of a is (Ns/2)E_aa, and wholly of b is (Ns/2)E_bb. Thus the first two terms on the right side of Eq. (2.2) give the sum of the energies of a fraction x of the substance a, and a fraction (1 - x) of b. These two would give the whole energy in case we simply had a mixture of crystals of a and b. But the third term, involving x(1 - x), is an additional term arising from the mutual interactions of the two types of atoms. The function x(1 - x) is always positive for values of x between 0 and 1, being a parabolic function with maximum of 1/4 when x = 1/2, and going to zero at x = 0 or 1. Thus this last term has a sign which is the same as that of E_ab - (E_aa + E_bb)/2. If E_ab is more positive than the mean of E_aa and E_bb (that is, if atoms a and b attract each other less strongly than they attract atoms of their own kind), then the term is positive. In this case, the solution will have higher energy than the mixture of crystals, and if the entropy term in the free energy does not interfere, the mixture of crystals will be the more stable.
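The counting of Eq. (2.1), and the equivalence of the two forms of Eq. (2.2), can be checked numerically. The following is a minimal sketch, assuming for simplicity a one-dimensional ring lattice (so that s = 2) and arbitrary illustrative pair energies; the random-arrangement counts agree with Nsx^2/2, Ns(1-x)^2/2, and Nsx(1-x) to within statistical fluctuations.

```python
import math
import random

random.seed(1)

# Random binary occupancy of a ring of N sites: True -> atom a, False -> atom b.
N, x = 200_000, 0.3
sites = [random.random() < x for _ in range(N)]

# On a ring each site has s = 2 neighbors, so there are Ns/2 = N neighbor pairs.
s = 2
n_aa = n_bb = n_ab = 0
for i in range(N):
    p, q = sites[i], sites[(i + 1) % N]
    if p and q:
        n_aa += 1
    elif not p and not q:
        n_bb += 1
    else:
        n_ab += 1

print(n_aa, N * s * x**2 / 2)          # pairs of type aa vs. Nsx^2/2
print(n_bb, N * s * (1 - x)**2 / 2)    # pairs of type bb vs. Ns(1-x)^2/2
print(n_ab, N * s * x * (1 - x))       # pairs of type ab vs. Nsx(1-x)

# Eq. (2.2): the pair-sum form of the energy equals the rearranged form.
# The pair energies are illustrative negative numbers, not measured values.
E_aa, E_bb, E_ab = -1.0, -0.8, -0.85
U_pairs = (N*s*x**2/2)*E_aa + (N*s*(1 - x)**2/2)*E_bb + N*s*x*(1 - x)*E_ab
U_rearr = (N*s/2)*(x*E_aa + (1 - x)*E_bb
                   + 2*x*(1 - x)*(E_ab - (E_aa + E_bb)/2))
assert math.isclose(U_pairs, U_rearr)
```

The same counting argument goes through unchanged for s = 8 or 12; the ring merely keeps the bookkeeping short.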
On the other hand, if E_ab is more negative than the mean of E_aa and E_bb, so that atoms of the opposite sort attract more strongly than either one attracts its own kind, the term will be negative and the solution will have the lower energy. In order to get the actual internal energy at any temperature, of course we must add a specific heat term. We shall adopt the crude hypothesis that the specific heat is independent of composition. This will be approximately true with systems made of two similar components. Then in general we should have

    U = (Ns/2){x E_aa + (1 - x)E_bb + 2x(1 - x)[E_ab - (E_aa + E_bb)/2]} + \int_0^T C_P dT.    (2.3)

Next, let us consider the entropy of the homogeneous phase and compare it with the entropy of a mixture of two pure components. The entropy of the pure components will be just the part determined from the specific heat, or \int_0^T (C_P/T) dT. But in the solution there will be an additional term, the entropy of mixing. This, as a matter of fact, is just the same term found for gases in Eq. (2.12), Chap. VIII: it is

    ΔS = -Nk[x ln x + (1 - x) ln (1 - x)],

in the notation of the present chapter. We can, however, justify it directly without appealing to the theory of gases, which certainly cannot be expected to apply directly to the present case. We use a method like that used in deriving Eq. (3.5), Chap. XVI. We have a lattice with N points, accommodating Nx atoms of one sort, N(1 - x) of another sort. There are then

    W = N! / [(Nx)! (N(1 - x))!]    (2.4)

ways of arranging the atoms on the lattice points, almost all of which have approximately the same energy. Using Stirling's formula, this becomes

    ln W = -N[x ln x + (1 - x) ln (1 - x)].    (2.5)

By Boltzmann's relation S = k ln W, this gives for the entropy of mixing

    ΔS = -Nk[x ln x + (1 - x) ln (1 - x)].    (2.6)

The other thermodynamic functions are also easily found.
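The Stirling-formula step from Eq. (2.4) to Eq. (2.5) is easily checked numerically, evaluating ln W exactly through the log-gamma function:

```python
import math

def ln_W_exact(N, x):
    # ln of Eq. (2.4), W = N! / ((Nx)! (N(1-x))!), via lgamma(n+1) = ln n!
    Na = round(N * x)
    return math.lgamma(N + 1) - math.lgamma(Na + 1) - math.lgamma(N - Na + 1)

def ln_W_stirling(N, x):
    # Eq. (2.5): ln W = -N [x ln x + (1-x) ln(1-x)]
    return -N * (x * math.log(x) + (1 - x) * math.log(1 - x))

N, x = 10**6, 0.3
exact, approx = ln_W_exact(N, x), ln_W_stirling(N, x)
print(exact, approx)
```

For N of this size the two agree to better than a part in ten thousand; the neglected Stirling corrections grow only logarithmically with N, while ln W itself grows linearly.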
If we confine ourselves to low pressures, of the order of atmospheric pressure, we can neglect the term PV in the Gibbs free energy of liquid or solid. Then the Helmholtz and Gibbs free energies are approximately equal and are given by

    A = G = U + NkT[x ln x + (1 - x) ln (1 - x)] - T \int_0^T (C_P/T) dT,    (2.7)

where U is given in Eq. (2.3). We have now found approximate values for the thermodynamic functions of our homogeneous phase. The entropy is greater than that of the mixture by the term (2.6), and the internal energy is either greater or smaller, as we have seen. To illustrate our results, we give in Fig. XVII-1 a sketch of G as a function of x, for three cases: (a) E_ab - (E_aa + E_bb)/2 > 0; (b) E_ab - (E_aa + E_bb)/2 = 0; (c) E_ab - (E_aa + E_bb)/2 < 0. We observe that in each case the free energy begins to decrease sharply as x increases from zero or decreases from unity. This is on account of the logarithmic function in the entropy of mixing (2.6). But in case (a), where the atoms prefer to segregate rather than forming a homogeneous phase, the free energy then rises for intermediate concentrations, while in case (b), where the atoms are indifferent to their neighbors, or in (c), where they definitely prefer unlike neighbors, the free energy falls for intermediate concentrations, more strongly in case (c). In each case we have drawn a dotted line connecting the points x = 0 and x = 1. This represents the free energy of the mixture of crystals of the pure components. We see that in cases (b) and (c) the solution always has a lower free energy than the mixture of crystals, but in (a) there is a range where it does not.

FIG. XVII-1. Gibbs free energy of a binary system, as function of concentration, for the three cases (a), (b), (c).

However, we shall see in the next section that we must look a little more carefully into the situation before being sure what forms the stable state of the system.

3.
The Condition for Equilibrium between Phases. Suppose we have two homogeneous phases, one with composition x1 and free energy G1, the other with composition x2 and free energy G2. By mixing these two phases in suitable proportions, the resulting mixture can have a composition anywhere between x1 and x2. And the free energy, being simply the suitably weighted sum of the free energies of the two phases, is given on a G vs. x plot simply as a straight line joining the points G1, x1 and G2, x2. That is, for an intermediate composition corresponding to a mixture, the free energy has a proportional intermediate value between the free energies of the two phases being mixed. We saw a special case of this in Fig. XVII-1, where the dotted line represents the free energy of a mixture of the two phases with x = 0, x = 1 respectively. Now suppose we take the curve of Fig. XVII-1 (a) and ask whether by mixing two phases represented by different points on this curve, we can perhaps get a lower Gibbs free energy than for the homogeneous phase. It is obvious that we can, and that the lowest possible line connecting two points on the curve is the mutual tangent to the two minima of the curve, shown by G1G2 in Fig. XVII-1 (a). The point G1 represents the free energy of a homogeneous phase of composition x1, and the point G2 of composition x2. Between these compositions, a mixture of the two homogeneous phases represented by the dotted line will have lower Gibbs free energy than the homogeneous phase, and will represent the stable situation. For x less than x1, or greater than x2, the straight line is meaningless; for it would represent a mixture with more than 100 per cent of one phase, less than zero of the other. Thus for x less than x1, or greater than x2, the homogeneous phase is the stable one.
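The common-tangent construction can be carried out numerically. A minimal sketch, assuming the symmetric model free energy per atom g(x) = w x(1-x) + t [x ln x + (1-x) ln(1-x)] suggested by Eqs. (2.3) and (2.7), with w and t (playing kT) as illustrative values in arbitrary units: the lower convex hull of the curve follows the curve itself wherever the homogeneous phase is stable, and runs along the straight tangent segment G1G2 across the two-phase region.

```python
import math

# Case (a): w > 0, like neighbors preferred; t is small enough that the
# curve has two minima, as in Fig. XVII-1 (a).
w, t = 1.0, 0.3

def g(x):
    return w * x * (1 - x) + t * (x * math.log(x) + (1 - x) * math.log(1 - x))

xs = [i / 1000 for i in range(1, 1000)]
pts = [(x, g(x)) for x in xs]

# Lower convex hull (Andrew's monotone chain).  Wherever the hull skips
# grid points, the hull segment -- the common tangent -- lies below the
# curve, and a mixture of the two end compositions is the stable state.
hull = []
for p in pts:
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
            hull.pop()
        else:
            break
    hull.append(p)

# The widest skipped interval gives the tangent points x1, x2.
gaps = [(b[0] - a[0], a[0], b[0]) for a, b in zip(hull, hull[1:])]
gap, xa, xb = max(gaps)
print(xa, xb)   # the two-phase region; symmetric about x = 1/2 here
```

For this symmetric case the tangent is horizontal and the tangent points coincide with the two minima; for unsymmetric curves, or for two separate curves as in the next section, the same hull construction finds the tangent without that simplification.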
In other words, we have a case of a system that has only two restricted ranges of composition in which a single phase is stable, while between these ranges we can only have a mixture of phases. We can now apply the conditions just illustrated to some actual examples. Ordinarily we have two separate curves of G against x to represent the two phases; Fig. XVII-1 (a) was a special case in that a single curve had two minima. And in a region where a common tangent to the curves lies lower than either curve between the points of tangency, the mixture of the two phases represented by the points of tangency will be the stable phase. First we consider the equilibrium between liquid and solid, in a case where the components are soluble in all proportions, both in solid and liquid phases. For instance, we can take the case of melting of a copper-nickel alloy. To fix our ideas, let copper be constituent a, nickel constituent b, so that x = 0 corresponds to pure nickel, x = 1 to pure copper. We shall assume that in both liquid and solid the free energy has the form of Fig. XVII-1 (b), in which the bond between a copper and a nickel atom is just the mean of that between two coppers and two nickels. This represents properly the case with two such similar atoms. Such a curve departs from a straight line only by the entropy of mixing, which is definitely known, the same in liquid and solid. Thus it is determined if we know the free energy of liquid and solid copper and nickel, as functions of temperature. From our earlier work we know how to determine these, both experimentally and theoretically. In particular, we know that the free energy for liquid nickel is above that for solid nickel at temperatures below the melting point, 1702° abs., but below it at temperatures above the melting point, and that similar relations hold for copper, with melting point 1356° abs.
We further know that the rate of change, with temperature, of the difference between the free energies of liquid and solid nickel is the difference of their entropies, which at the melting point is the latent heat of fusion divided by the temperature. Thus we have enough information to draw at least approximately a set of curves like those shown in Fig. XVII-2.

FIG. XVII-2. Gibbs free energy for solid and liquid as function of concentration, for different temperatures, Ni-Cu system (five panels: T1 < 1356° abs., T2 = 1356° abs., 1356° < T3 < 1702°, T4 = 1702° abs., T5 > 1702° abs.).

Here we give G for the solid, G' for the liquid phase, as functions of composition, for five temperatures: T1 below 1356°, T2 at 1356°, T3 between 1356° and 1702°, T4 at 1702°, and T5 above 1702°. Below 1356°, the free energy for the solid is everywhere below that for the liquid, so that the former is the stable phase at any composition. At 1356°, the liquid curve touches the solid one, at 100 per cent copper, and above this temperature the curves cross, the solid curve lying below in systems rich in nickel, the liquid curve below in systems rich in copper. In this case, T3 in the figure, we can draw a common tangent to the curves, from G1 to G2, at concentrations x1 and x2. In this range, then, for concentrations of copper below x1, a solid solution is stable; above x2, a liquid solution is stable; while between x1 and x2 there is a mixture of solid of composition x1 and liquid of composition x2. These two phases, in other words, are in equilibrium with each other in any proportions. At 1702°, the range in which the liquid is stable has extended to the whole range of compositions, the curve of G for the liquid lying lower for all higher temperatures. The stability of the phases can be shown in a diagram like Fig. XVII-3, called a phase diagram.
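The compositions x1 and x2 of the coexisting phases can in fact be estimated from the melting points and latent heats alone, if we idealize both phases as the straight-line-plus-entropy-of-mixing solutions assumed above. Equating, for each component, the free energy of transfer between the two phases gives x_i(liquid)/x_i(solid) = exp[-(L_i/R)(1/T - 1/T_mi)], one equation per component; with the mole fractions in each phase adding to unity, the pair of compositions follows in closed form. In this sketch the melting points are those used in the text, while the latent heats are merely illustrative round numbers:

```python
import math

R = 8.314                        # gas constant, J/(mol K)
T_Ni, T_Cu = 1702.0, 1356.0      # melting points used in the text, deg abs.
L_Ni, L_Cu = 17_500.0, 13_300.0  # illustrative molar heats of fusion, J/mol

def solidus_liquidus(T):
    """Ideal-solution estimate of the solid and liquid compositions in
    equilibrium at temperature T, as mole fraction of nickel."""
    k_Ni = math.exp(-(L_Ni / R) * (1 / T - 1 / T_Ni))
    k_Cu = math.exp(-(L_Cu / R) * (1 / T - 1 / T_Cu))
    # x_Ni + x_Cu = 1 in each phase, with x_l = k x_s for each component:
    x_solid = (1 - k_Cu) / (k_Ni - k_Cu)
    return x_solid, k_Ni * x_solid

xs, xl = solidus_liquidus(1500.0)
print(xs, xl)   # the solid is richer in nickel than the liquid
```

Sweeping T from 1356° to 1702° and plotting the two compositions traces out precisely the solidus and liquidus of the phase diagram discussed next.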
In such a diagram, temperature is plotted as ordinate, composition as abscissa, and lines separate the regions in which the various phases are stable. Thus, at high temperature, the liquid is stable for all compositions. Running from 1702° to 1356° are two curves, one called the liquidus (the upper one) and the other called the solidus. For any T-x point lying between the liquidus and solidus, the stable state is a mixture of liquid and solid. Moreover, we can read off from the diagram the compositions of the liquid and solid in equilibrium at any temperature. The horizontal line drawn in Fig. XVII-3, at temperature T3 (see Fig. XVII-2), cuts the solidus at composition x1 and the liquidus at x2, agreeing with the compositions for T3 in Fig. XVII-2. Then x1 represents the composition of the solid, x2 of the liquid, in equilibrium with each other at this temperature. Finally, below the solidus the stable phase is always the solid.

FIG. XVII-3. Phase diagram for Ni-Cu system.

From the phase diagram we can draw information not only about equilibrium but about the process of solidification or melting. Suppose we have a melt of composition x2, at a temperature above the liquidus, and suppose we gradually cool the material. The composition, of course, will not change until we reach the liquidus, and solid begins to freeze out. But now the solid in equilibrium with liquid of composition x2 has the composition x1, much richer in nickel than the liquid. This will be frozen out, and as a result the remaining liquid will be deprived of nickel and will become richer in copper. Its concentration will then lie farther to the right in the diagram, so that it will intersect the liquidus at a lower temperature. As the temperature is decreased, then, some of this liquid, perhaps of composition x2', will have solid of composition x1' freeze from it, further enriching the liquid in copper. This process continues, more
and more liquid freezing out, until the temperature reaches 1356°, when the last portion of the liquid will freeze out as pure copper. There are two interesting results of this process. In the first place, the freezing point is not definite; material freezes out through a range of temperatures, all the way from the temperature corresponding to the point x2 on the graph to the freezing point of pure copper. In the second place, the material which has frozen out is by no means homogeneous. Ordinarily it will freeze out in tiny crystal grains. And we shall observe that the first grains freezing out are rich in nickel, while successive grains are more and more rich in copper, until the last material frozen is pure copper. The over-all composition of the solid finally left is of course the same as that of the original liquid, but it is not of homogeneous composition and hence is not a stable material. We can see this from Fig. XVII-2, where the curves of G vs. composition for the solid are convex downward. With such a curve, the G at a definite composition is necessarily lower than the average of the G's of two compositions, one richer and the other poorer in copper, which would contain the same total amount of each element. By an extension of this argument, the inhomogeneous material freezing out of the melt must have a higher free energy than homogeneous material of the same net composition, and on account of its thermodynamic instability it will gradually change over to the homogeneous form. This change can be greatly accelerated by raising the temperature, since, as mentioned in Sec. 1, Chap. XVI, the rate of such a process, involving the changing of place of atoms, depends on a factor exp(-ε/kT), increasing rapidly with temperature.
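The strength of this temperature dependence is easy to exhibit numerically; the activation energy below is an assumed, illustrative value, not one quoted in the text:

```python
import math

k_eV = 8.617e-5   # Boltzmann constant, eV per degree
eps = 2.0         # assumed activation energy for an atom changing place, eV

# The rate factor exp(-eps/kT) at room temperature, at an intermediate
# temperature, and near the melting range of the alloy:
for T in (300.0, 600.0, 1300.0):
    print(T, math.exp(-eps / (k_eV * T)))

# Holding the material near the melting point speeds the approach to the
# homogeneous equilibrium enormously compared with aging at 600 deg abs.:
speedup = math.exp(-eps / (k_eV * 1300.0)) / math.exp(-eps / (k_eV * 600.0))
print(speedup)
```

With these numbers the factor grows by many orders of magnitude between 600° and 1300° abs., which is why annealing near the melting point is effective while the same rearrangement at room temperature is negligibly slow.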
Accordingly, material of this kind is often annealed, held at a temperature slightly below the melting point for a considerable time, to allow thermodynamic equilibrium to be reached and at the same time to allow mechanical strains to be removed. The reverse process of fusion can be discussed much as we have considered solidification. Of course, if we take the solid just as it has solidified, without annealing, there will be crystal grains in it of many different compositions, which will melt at different temperatures, the liquids mixing. But if we start with the equilibrium solid, of a definite composition, it will begin to melt at a definite temperature. The liquid melting out will have a higher concentration of copper than the solid, however, leaving a nickel-rich material of higher melting point. The last solid to melt will be rich in nickel, of such a composition as to be in equilibrium with the liquid. It is interesting to notice that the process of melting which we have just described is not the exact reverse of solidification. This is natural when we recall that the solid produced directly in solidification is not in thermodynamic equilibrium, so that when the process is carried on with ordinary, finite velocity it is not a reversible process.

4. Phase Equilibrium between Mutually Insoluble Solids. In the preceding section we have considered phase equilibrium between solid and liquid, in the case where the components were soluble in each other in any proportions, in both liquid and solid. Now we shall consider the case where there is practically no solubility of one solid in the other; that is, there are two solid phases, each one stable in only a very narrow range of

FIG. XVII-4. Gibbs free energy for solids and liquid as function of concentration, for different temperatures, in a system with almost mutually insoluble solids.
The free energy for the solid will have much the form given in Fig. XVII-1(a). But we shall assume that the minima of the curve are extremely sharp, and we shall not assume that both minima belong to the same curve. In a case like this, it is most likely that the pure phase of one component will have a different crystal structure and other properties from the pure phase of the other, and there will be no sort of continuity between the phases, as in Fig. XVII-1(a). For the liquid we shall again assume the form of Fig. XVII-1(b). Then we give in Fig. XVII-4 a series of curves for G against x at increasing temperatures, and in Fig. XVII-5 the corresponding phase diagram.

284 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XVII

The method of construction will be plain by comparison with the methods used in Figs. XVII-2 and XVII-3. At low temperatures like T₁, there is a very small range of compositions from 0 to x₁, in which phase α is stable, and another small range, from x₂ to 1, in which phase β is stable. Here we have given the name α to the phase composed of pure a with a little b dissolved in it, and β to pure b with a little a dissolved in it. If the substances a and b are really mutually insoluble in the solid, the two curves representing G vs. x for solids α and β will have infinitely sharp minima in Fig. XVII-4, rising to great heights for x infinitesimally greater than zero, or infinitesimally less than 1. For the whole range of compositions between x₁ and x₂, at these low temperatures, the stable form will be a mechanical mixture of crystals of α and β. At a somewhat higher temperature, between T₁ and T₂, the G curve for the liquid will fall low enough to be tangent to the straight line representing the mixture of x₁ and x₂. This is for the composition denoted by x_eutectic in Fig. XVII-5 and will be discussed later.
At higher temperatures, as T₂, there is a range of compositions from x₂ to x₃, in which the liquid is the stable phase, while for compositions from x₁ to x₂ the stable form is a mixture of liquid and phase α, and from x₃ to x₄ it is a mixture of liquid and phase β.

FIG. XVII-5. Phase equilibrium diagram for a system with almost mutually insoluble solids, as given in Fig. XVII-4.

As the temperature rises to T₃, the melting point of pure material b, the phase β disappears, and at T₄, the melting point of pure a, the phase α disappears, leaving only the liquid as the stable phase above this temperature. The process of freezing is similar to the previous case of Fig. XVII-3. Suppose the liquid has a composition between x = 0 and x_eutectic. Then as it is cooled, it will follow along a line like the dotted line in Fig. XVII-5, which intersects the line marked x₂ in the figure at temperature T₃. At this temperature it will begin to freeze, but the material freezing out will be phase α with the composition x₁ appropriate to that temperature, very rich in component a. The liquid becomes enriched in b, so that it has a lower melting point, and we may say that the point representing the concentration and temperature of the liquid in Fig. XVII-5 follows down along the curve x₂. When the composition reaches the eutectic composition and the temperature is still further reduced, a liquid phase is no longer possible, and the remaining liquid freezes at a definite temperature and composition. It is to be noticed, however, that the resulting solid is still a mixture of phases α and β. With the usual methods of freezing, the two phases freeze out as alternating layers of platelike crystals. Such a solid is called a eutectic and is of importance in metallurgy.
It is interesting to observe that if the composition of the original liquid is just the eutectic composition, it will all freeze at a single temperature, which will be the lowest possible freezing point for any mixture of a and b. If the original composition is between x_eutectic and x = 1, the situation will be similar to what we have described, only now the point representing the liquid will move down curve x₃ to the eutectic composition, and the solid freezing out will be phase β, and the liquid will become enriched in component a until it reaches the eutectic composition, when it will all freeze as the eutectic mixture of α and β. The temperature where this freezing of the eutectic occurs, we notice, represents a triple point: phases α, β, and the liquid are all stable at this temperature in any proportions, corresponding to the fact that a single tangent can be drawn in the G-x diagram to the curves representing all three phases. For every pressure, there is a temperature at which there is such a triple point, in contrast to the situation with a one-component system, where triple points exist only for certain definite combinations of pressure and temperature. The difference arises because there are more independent variables, the composition as well as pressure and temperature. Familiar examples of the situation we have just described are found in the solubility of substances in water and other solvents. Thus in Fig. XVII-6 we give the phase diagram for the familiar system NaCl-water. This diagram is not carried to a very high concentration of salt, for then the curve corresponding to x₃ would rise to such high temperatures that we should be involved with the vaporization of the water, which we have not wished to discuss.
In this system, as we have already mentioned, the solid phases are practically completely insoluble in each other, the phase corresponding to α being pure ice, β being pure solid NaCl, combined with its water of crystallization at low temperature. Thus the curves x₁ and x₄ of Fig. XVII-5 do not appear in Fig. XVII-6 at all, coinciding practically with the lines x = 0 and x = 1. We can now find a number of interesting interpretations of Fig. XVII-6. In the first place, if water with a smaller percentage of salt than the eutectic mixture is cooled down, the freezing point will be below 0°C., the freezing point of pure water, showing that the dissolved material has lowered the freezing point. We shall calculate the amount of this lowering in the next section. At these compositions, the solid freezing out is pure ice. This is familiar from the fact that sea water, which has less salt than the eutectic mixture, forms ice of pure water without salt. On the other hand, if the liquid has a larger percentage of salt than the eutectic mixture, the solid freezing out as the temperature is lowered is pure salt. Under these circumstances we should ordinarily describe the situation differently. We should say that as the temperature was lowered, the solubility of the salt in water decreased enough so that salt precipitated out from solution. In other words, the curve separating the liquid region in Fig. XVII-6 from the region where liquid and NaCl are in equilibrium may be interpreted as the curve giving the percentage of salt in a saturated solution, or the solubility as a function of temperature. The rise to the right shows that the solubility increases rapidly with increasing temperature.

FIG. XVII-6. Equilibrium between NaCl and water. (Temperature in °C. against grams NaCl per 100 grams solution; regions marked Liquid, Ice + Liquid, NaCl, and NaCl·2H₂O + Liq.)

From Fig.
XVII-6 we can also understand the behavior of freezing mixtures of ice and salt. Suppose ice and salt, both at approximately 0°C., are mixed mechanically in approximately the right proportions to give the eutectic mixture. We see from Fig. XVII-6 that a solid of this composition is not in thermodynamic equilibrium at this temperature; the stable phase is the liquid, which has a lower free energy than the mixture of solids. Thus the material will spontaneously liquefy, the solid ice and salt dissolving each other at their surfaces of contact and forming brine. If the process were conducted isothermally, we should end up with a liquid. But in the process a good deal of heat would have to be absorbed, the latent heat of fusion of the material. Actually, in using a freezing mixture, the process is more nearly adiabatic than isothermal: heat can flow into the mixture from the system which is to be cooled, but that system has a small enough heat capacity so that its temperature is rapidly reduced in the process. In order to supply the necessary latent heat, in other words, the freezing mixture and external system will all cool down below 0°C., falling to lower and lower temperatures as more and more of the freezing mixture melts. The process can continue, if the proportion of ice to salt is just the eutectic proportion, down to the temperature -18°, the lowest temperature at which the liquid can exist. The most important examples of the phase diagrams we have discussed are found in metallurgy. There, in alloys of two metals with each other,

FIG. XVII-7. Phase equilibrium diagram for the system Cu-Mg, in which the two metals are insoluble in each other, forming intermetallic compounds of definite composition.

FIG. XVII-8.
Phase equilibrium diagram for the system Cu-Zn, in which a number of phases of variable composition are formed, mixtures of the phases being stable between the regions of stability of the pure phases. The phase α is face-centered cubic, as Cu is, β is body-centered, γ is a complicated structure, and ε and η are hexagonal. The transition between β and β′ is an order-disorder transition, β being disordered, and β′ ordered, as discussed in the following chapter.

we generally find much more complicated cases than those taken up so far, but still cases which can be handled by the same principles. Thus in Fig. XVII-7 we show the phase diagram for the system Cu-Mg, two metals that are almost entirely insoluble in each other. In this case there are four solid phases, each having its own crystal structure, and each stable in only an extremely narrow range, about the compositions Cu, MgCu₂, Mg₂Cu, and Mg. The free energy of each composition will then have an extremely sharp minimum, so that the construction necessary to derive the phase diagram will be similar to Fig. XVII-4, but with four sharp minima instead of two, so that there are three regions, rather than one, in which a mixture of two phases is the stable solid, and three eutectics. For contrast, we give in Fig. XVII-8 the phase diagram for the system Cu-Zn, or brass. In this case there are a number of phases, again each with its own crystal structure but each with a wide range of possible compositions. The free energy curves of the various phases in this case are then not sharp as in the case of Cu-Mg but have rather flat minima, more as in Fig. XVII-2. We shall not try to follow the construction of the phase diagram through in detail but shall merely state that it can be derived from hypothetical free energy curves according to the type of reasoning already used in this section and the preceding one.

5. Lowering of Melting Points of Solutions.
We have just seen that the lowering of the melting point of a solvent by action of the solute can easily be explained in terms of the phase diagram, and it is an easy matter to find a numerical value for this lowering.

FIG. XVII-9. Gibbs free energy as function of concentration, for lowering of freezing point.

In Fig. XVII-9 we have a diagram of G against x, appropriate to this case. The solid solvent corresponds to the point G_s, with x = 0, and the liquid is given by the curve. We wish to find the value of x at which a straight line through x = 0, G = G_s is tangent to the liquid curve. To do this, we must first find the equation of the liquid curve. We assume the liquid to correspond to case (b) of Fig. XVII-1, the internal energy being a linear function of concentration. Then, if G_l0 is the free energy of the liquid for x = 0, G_l1 for x = 1, we have

G_l = G_l0 + x(G_l1 - G_l0) + NkT[x ln x + (1 - x) ln (1 - x)],   (5.1)

where G_l is the free energy for the liquid. The desired tangent is now determined by the condition

G_l - x(∂G_l/∂x)_T = G_s,   (5.2)

the geometrical condition that the tangent to the curve G_l at the point G_l should pass through the point G_s when x = 0. Differentiating Eq. (5.1), this gives

G_l0 + NkT ln (1 - x) = G_s.   (5.3)

We can now find the difference G_l0 - G_s in terms of the latent heat of fusion of the solvent. From fundamental principles we have

(∂(G_l0 - G_s)/∂T)_P = -(S_l - S_s) = -L_m/T,   (5.4)

where S_l is the entropy of the liquid, S_s of the solid, and L_m the latent heat of fusion. Computed just at the melting point, the quantity in Eq. (5.4) becomes -L_m/T_m. Now we shall not use the result except for temperatures very close to the melting point, so that we may assume that (G_l0 - G_s) can be expanded as a linear function of temperature. Just at the melting point, by the fundamental principle of equilibrium, it is zero. Thus we have

G_l0 - G_s = (L_m/T_m)(T_m - T).
(5.5)

Inserting in Eq. (5.3), setting T = T_m approximately, and writing Nk = R, this gives us

-ln (1 - x) = (L_m/RT_m²)(T_m - T).   (5.6)

For dilute solutions, to which alone we shall apply our results, x is very small, and we may write ln (1 - x) = -x. Then we have

T_m - T = (RT_m²/L_m) x.   (5.7)

Equation (5.7) gives the lowering of the freezing point, T_m - T, by solution of another substance with relative concentration x. We note the important fact that the result is independent of the nature of the solute: all its properties have canceled out of the final answer. Thus the lowering of the freezing point can be used as a direct method of measuring x, the relative number of molecules of solute in solution. This is sometimes a very important thing to know. Suppose one knows the mass of a certain solute in solution but does not know its molecular weight. By measuring the depression of the freezing point, using Eq. (5.7), we can find the number of moles of it in solution. By division, we can find at once the mass per mole, or the molecular weight. This method is of practical value in finding the molecular weights of complicated substances. It is also of importance in cases where there is association or dissociation of a solute in solution. Some materials form clusters of two, three, or more molecules in solution, each cluster traveling around as a single molecule. Each cluster will count as a single molecule in the entropy of mixing, and consequently in the depression of the freezing point. Thus really there are fewer molecules than one would suppose from the known amount of material in solution and the usual molecular weight, so that the depression of the freezing point is smaller than we should suppose. On the contrary, in some cases substances have their molecules dissociated in solution.
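Equation (5.7) is easy to apply numerically. The sketch below is a minimal check using the values quoted in the text for water (T_m = 273°, L_m = 80 × 18 cal. per mole); the value of the gas constant R in calories is the only datum supplied from outside the text.

```python
R = 1.987          # gas constant, cal / (mol K)
T_m = 273.0        # melting point of water, degrees absolute
L_m = 80.0 * 18.0  # latent heat of fusion, cal/mol (80 cal/g x 18 g/mol)

# Coefficient in Eq. (5.7): the depression per unit mole fraction of solute.
coeff = R * T_m**2 / L_m
print(round(coeff))          # prints 103

# Normal solution: 1 mole of solute in 1000 g of water,
# i.e. x = 18/1000 mole of solute per mole of water.
x = 18.0 / 1000.0
depression = coeff * x
print(round(depression, 2))  # prints 1.85, close to the 1.86 C quoted
```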
The well-known case of this is ionic substances in water solution, in which the ions, rather than molecules, form the separate objects in the solution. In these cases there are more particles in solution than we should suppose, and the freezing point is depressed by an abnormally large amount. From Eq. (5.7) we can find at once the amount of depression of the freezing point of different solvents. Thus for water, T_m = 273° abs., L_m = 80 × 18 cal. per mole, giving T_m - T = 103° for x = 1. To get useful figures, we calculate for what the chemists denote as a normal solution, containing 1 mole of solute in 1000 gm. of water, or 18/1000 of a mole of solute per mole of water. Thus in a normal solution we expect a lowering of the freezing point of 103 × 0.018 = 1.86°C., provided the solute is neither associated nor dissociated.

CHAPTER XVIII

PHASE CHANGES OF THE SECOND ORDER

In an ordinary change of phase, there is a sharp transition temperature, for a given pressure, at which the properties change discontinuously from one phase to a second one. In particular, there is a discontinuous change of volume and a discontinuous change of entropy, resulting in a latent heat and allowing the application of Clapeyron's equation to the transition. In recent years, a number of cases have been recognized in which transitions occur which in most ways resemble real changes of phase, but in which the changes of volume and entropy, instead of being discontinuous, are merely very rapid. Volume and entropy change greatly within a few degrees' temperature range, with the result that there is an abnormally large specific heat in this neighborhood, but no latent heat. Often the specific heat rises to a peak, then discontinuously falls to a smaller value.
To distinguish these transitions from ordinary changes of phase, it has become customary to denote ordinary phase changes as phase changes of the first order, and these sudden but not discontinuous transitions as phase changes of the second order. Sometimes the discontinuity of the specific heat is regarded as the distinguishing feature of a phase change of the second order, but we shall not limit ourselves to cases having such discontinuities. There is one well-known phenomenon which might well be considered to be a phase change of the second order, though ordinarily it is not. This is the change from liquid to gas, at temperatures and pressures above the critical point. In this case, as the temperature is changed at constant pressure, we have a very rapid change of volume from a small volume characteristic of a liquidlike state to the larger volume characteristic of a gaslike state, yet there is no discontinuous change as there is below the critical point. And there is a very rapid change of entropy, from the small value characteristic of the liquid to the large value characteristic of the gas, as we can see from Fig. XI-6, resulting in a very abnormally high value of C_P at temperatures and pressures slightly above the critical point. At the critical point, where the curve of S vs. T becomes vertical, so that (∂S/∂T)_P is infinite, C_P becomes infinite. At this temperature and below, we cannot use the specific heat to find the change of entropy, but must use a latent heat instead, representing, so to speak, the finite integral under the infinitely high, but infinitely narrow, peak in the curve of T(∂S/∂T)_P vs. T. Although the liquid-gas transition above the critical point, as we have seen, has the proper characteristics for a phase change of the second order, that name is ordinarily used only for phase changes in solids.
Now it seems hardly possible that there could be a continuous transition from one solid phase to another one with different crystal structure. There have been some suggestions that such things are possible; that, for instance, ordinary equilibrium lines in polymorphic transitions, as shown in Fig. XI-3, might terminate in critical points, above which one could pass continuously from one phase to another. But no such critical points have been found experimentally, and there is no experimental indication, as from a decreasing discontinuity in volume and entropy between the two phases as we go to higher pressure and temperature, that such critical points would be reached if the available ranges of pressure and temperature could be increased. Thus it seems that our naïve supposition that two different crystal structures are definitely different, and that no continuous series of states can be imagined between them, is really correct, and that phase changes of the second order are impossible between phases of different structure and must be looked for only in changes within a single crystal structure. There are at least three types of change known which do not involve changes of crystal structure and which show the properties of phase changes of the second order. The best known one is the ferromagnetic change, between the magnetized state, for instance of iron or nickel, at low temperatures, and the unmagnetized state at high temperatures. There is no change of crystal structure associated with this transition, at least in pure metals, no discontinuous change of volume, and no latent heat. The magnetization decreases gradually to zero, instead of changing discontinuously, though there is a maximum temperature, called the Curie point, from P. Curie, who investigated it, at which it drops rather suddenly to zero.
And there is no latent heat, the entropy increasing rather rapidly as we approach the Curie point, but nowhere changing discontinuously, so that there is an anomalously large specific heat. This anomaly in the specific heat is sometimes concentrated in a small enough temperature range so that it almost seems like a latent heat to crude observation; the metallurgists, who are accustomed to determining phase changes by cooling curves, which essentially measure discontinuities or rapid changes in entropy, have sometimes classified these ferromagnetic changes as real phase changes. As a matter of fact, mathematical analysis shows that under some circumstances in alloys, it is possible for the ferromagnetic change to be associated with a change of crystal structure and a phase change of the first order, one phase being magnetic up to its transition point, above which a new nonferromagnetic phase is stable, but this is a complication not found in pure metals. Though this ferromagnetic change is the most familiar example of phase changes of the second order, we shall not discuss it here. A second type of phase change of the second order is found with certain crystals like NH₄Cl containing ions (NH₄⁺ in this case) which might be supposed capable of rotation at high temperature but not at low. The ammonium ion, being tetrahedrally symmetrical, is not far from spherical, and we can imagine it to rotate freely in the crystal if it is not packed too tightly. At low temperatures, however, it will fit into the lattice best in one particular orientation and will tend merely to oscillate about this orientation. The rotating state, it is found, has the higher entropy and is preferred at high temperatures. The change from one state to the other comes experimentally in a rather narrow temperature range, giving a specific heat anomaly but no latent heat, and forming again a phase change of the second order.
Unfortunately the theory is rather involved, and we shall not try to give it here. The third type of phase change of the second order is fortunately easy to treat theoretically, at least to an approximation, and it is the one which will be discussed in the present chapter. This is what is known as an order-disorder transition in an alloy, and can be better understood in terms of specific examples, which we shall mention in the next section.

1. Order-Disorder Transitions in Alloys. The best-known example of order-disorder transitions comes in the β phase of brass, Cu-Zn, a phase which is stable at compositions in the neighborhood of 50 per cent of each component. The crystal structure of this phase is body-centered cubic, an essential feature of the situation. In this type of lattice, the lattice points are definitely divided into two groups: half the points are at the corners of the cubes of a simple cubic lattice, the other half at the centers of the cubes. It is to be noticed that, though they are distinct, the centers and corners of the cubes are interchangeable. Now we can see the possibility of an ordered state of Cu-Zn in the neighborhood of 50 per cent composition: the copper atoms can be at the corners of the cubes, the zinc at the centers, or vice versa, giving an ordered structure in which each copper is surrounded by eight zincs, each zinc by eight coppers; whereas in the disordered state which we have previously considered, each lattice point would be equally likely to be occupied by either a copper or a zinc atom, so that each copper on the average would be surrounded by four coppers and four zincs. Just as the body-centered cubic structure can be considered as made of two interpenetrating simple cubic lattices, the face-centered cubic structure can be made of four simple cubic lattices. There are some interesting cases of ordered alloys with this crystal structure and ratios of approximately one to three of the two components. An example is found in the copper-gold system, where such a phase is found in the neighborhood of the composition Cu₃Au. Evidently the ordered phase is that in which the gold atoms are all on one of the four simple cubic lattices, the copper atoms occupying the other three. We shall now investigate phase equilibrium of the Cu-Zn type, starting with the simple case of equal numbers of copper and zinc atoms, later taking the general case of arbitrary composition. We shall make the same assumptions about internal energy that we have made in Sec. 2, Chap. XVII, so that the problem in computing the internal energy is to find the number of pairs of nearest neighbors having the types aa, ab, and bb, a and b being the two types of atoms. We assume that the only neighbors of a given atom to be considered are the eight atoms at the corners of a cube surrounding it, so that all the neighbors of an atom on one of the simple cubic lattices lie on the other simple cubic lattice. We shall now introduce a parameter w, which we shall call the degree of order. We shall define it so that w = 1 corresponds to having all the atoms a on one of the simple cubic lattices (which we may call the lattice α), all the atoms b on the other (which we call β); w = 0 will correspond to having equal numbers of atoms a and b on each lattice; w = -1 will correspond to having all the atoms b on lattice α, all the atoms a on lattice β. Thus w = 1 will correspond to perfect order, w = 0 to complete disorder. Let us now define w more completely, in terms of the number of atoms a and b on lattices α and β. Let there be N atoms, N/2 of each sort, and N lattice points, N/2 on each of the simple cubic lattices. Then we assume that

Number of a's on lattice α = (1 + w)N/4
Number of a's on lattice β = (1 - w)N/4
Number of b's on lattice α = (1 - w)N/4
Number of b's on lattice β = (1 + w)N/4.   (1.1)

Clearly the assumptions (1.1) reduce to the proper values in the cases w = 1, 0, -1, and furthermore they give w as a linear function of the various numbers of atoms. To find the energy, we must find the number of pairs of neighbors of types aa, ab, bb. The number of pairs of type aa equals the number of a's on lattice α, times 8/(N/2) times the number of a's on lattice β. This is on the assumption that the distribution of atoms on lattice β surrounding an atom a on lattice α is the same proportionally that it is in the whole lattice β, an assumption which is not really justified but which we make for simplification. Thus the number of pairs aa is (1 + w)N/4, times 8/(N/2), times (1 - w)N/4, or N(1 - w²). Similarly we have

Number of pairs aa = Number of pairs bb = N(1 - w²)
Number of pairs ab = N(1 + w)² + N(1 - w)² = 2N(1 + w²).   (1.2)

To find the internal energy at the absolute zero, we now proceed as in Sec. 2 of Chap. XVII, multiplying the number of pairs aa by E_aa, etc. Then we obtain

Energy = U₀ = N(1 - w²)(E_aa + E_bb) + 2N(1 + w²)E_ab.   (1.3)

This can be rewritten in the form

U₀ = 4N[xE_aa + (1 - x)E_bb + 2x(1 - x)(E_ab - (E_aa + E_bb)/2)] + 8Nx²w²(E_ab - (E_aa + E_bb)/2),   (1.4)

where x = 1/2 is the relative composition of the components. We use this form (1.4) because it turns out to be the correct one in the general case where x ≠ 1/2 and because it is analogous to Eq. (2.2), Chap. XVII. We note that for w = 0, the disordered state, Eq. (1.4) reduces exactly to Eq. (2.2), Chap. XVII, as it should. To find the energy at any temperature, we assume as in Chap. XVII that we add an amount ∫₀ᵀ C_P dT, where C_P is the specific heat of a completely disordered phase. The actual specific heat will be different from this C_P, because w in Eq. (1.4) will prove to depend on temperature, giving an additional term in the derivative of energy with respect to temperature. With these assumptions, we then have

U = U₀ + ∫₀ᵀ C_P dT,   (1.5)

where U₀ is given in Eq. (1.4).
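The bookkeeping in Eqs. (1.1) to (1.4) can be checked numerically. The sketch below (the particular values of N, w, and the pair energies are illustrative assumptions) verifies that the pair counts of Eq. (1.2) add up to the total of 4N nearest-neighbor pairs, and that the two forms (1.3) and (1.4) of the energy agree at x = 1/2.

```python
def pair_counts(N, w):
    """Pair counts of Eq. (1.2) for degree of order w."""
    n_aa = N * (1 - w**2)
    n_bb = N * (1 - w**2)
    n_ab = N * (1 + w)**2 + N * (1 - w)**2   # = 2N(1 + w^2)
    return n_aa, n_bb, n_ab

def U0_pairs(N, w, E_aa, E_bb, E_ab):
    """Energy of Eq. (1.3): pair counts times pair energies."""
    n_aa, n_bb, n_ab = pair_counts(N, w)
    return n_aa * E_aa + n_bb * E_bb + n_ab * E_ab

def U0_form(N, x, w, E_aa, E_bb, E_ab):
    """Rewritten form of Eq. (1.4)."""
    d = E_ab - (E_aa + E_bb) / 2
    return 4*N*(x*E_aa + (1 - x)*E_bb + 2*x*(1 - x)*d) + 8*N*x**2*w**2*d

# Illustrative (assumed) values.
N, E_aa, E_bb, E_ab = 1000, -1.0, -1.2, -1.5

for w in (0.0, 0.3, 0.7, 1.0):
    n_aa, n_bb, n_ab = pair_counts(N, w)
    # N atoms with 8 neighbors each give 8N/2 = 4N pairs in all.
    assert abs((n_aa + n_bb + n_ab) - 4*N) < 1e-9
    # Eqs. (1.3) and (1.4) agree at x = 1/2 for every degree of order.
    assert abs(U0_pairs(N, w, E_aa, E_bb, E_ab)
               - U0_form(N, 0.5, w, E_aa, E_bb, E_ab)) < 1e-9
```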
Next we consider the entropy. We have (1 + w)N/4 atoms a and (1 - w)N/4 atoms b on lattice α, and (1 - w)N/4 atoms a and (1 + w)N/4 atoms b on lattice β. The number of ways of arranging these is

W = [(N/2)! / (((1 + w)N/4)! ((1 - w)N/4)!)]²,   (1.6)

each of the lattices α and β furnishing an equal factor, resulting in the square in Eq. (1.6). Using Boltzmann's relation, this leads to an entropy

S = -Nk[((1 + w)/2) ln ((1 + w)/2) + ((1 - w)/2) ln ((1 - w)/2)] + ∫₀ᵀ (C_P/T) dT
  = Nk ln 2 - (Nk/2)[(1 + w) ln (1 + w) + (1 - w) ln (1 - w)] + ∫₀ᵀ (C_P/T) dT.   (1.7)

When w = 0, the second term of Eq. (1.7) reduces to zero, leaving S = Nk ln 2, agreeing with the value of Eq. (2.6), Chap. XVII, when we set x = 1/2, checking the correctness of Eq. (1.7) in this special case. When w = 1, however, the expression (1.7) reduces to zero, showing that the ordered state has zero entropy. This is as we should expect; there is only one arrangement of the atoms, all the a's being on one lattice, all the b's on the other, so that there is no randomness at all.

2. Equilibrium in Transitions of the Cu-Zn Type. Having found the internal energy and entropy as a function of the degree of order and the temperature, in Eqs. (1.4) and (1.7), we can at once set up the free energy, and find which value of the degree of order gives the stablest phase at any given temperature. In Fig. XVIII-1 we plot the Gibbs free energy G as a function of w, for various temperatures.

FIG. XVIII-1. Gibbs free energy as function of the degree of order, for various temperatures.

Of course, since equal positive and negative values of w really correspond to the same state, the curves are symmetrical about the line w = 0. The curves are drawn on the assumption that E_ab - (E_aa + E_bb)/2 is negative. This case, in which unlike atoms attract each other more than like atoms, case (c) of Fig. XVII-1, is the only one in which we may expect the ordered state to be more stable than the disordered one.
For if like atoms attract more than unlike, as in case (a), Fig. XVII-1, the case in which atoms tend to segregate into two separate phases, we shall surely find that the disordered state, in which each atom has on the average four neighbors of the same kind as well as four of the opposite kind, will be more stable than the ordered state, where all neighbors are of the opposite kind, even at the absolute zero. We see that at low temperatures the minimum of the G curve, giving the stable phase, comes at values of w different from zero, approaching w = 1 as the temperature approaches zero. As the temperature rises, the minima move inward toward w = 0, and at a certain temperature (T_c in the figure), there is a double minimum, with a very flat curve, at w = 0. Above this temperature there is a single minimum at w = 0. In other words, the degree of order gradually decreases from perfect order at T = 0, to complete disorder at and above a certain temperature T_c. This limiting temperature corresponds to the Curie temperature in ferromagnetism, and by analogy it is often referred to as the Curie temperature in this case as well. To get the minimum of the curve, the natural thing is to differentiate G with respect to w, keeping T constant. Then we have

0 = 4Nw(E_ab - (E_aa + E_bb)/2) + (NkT/2) ln [(1 + w)/(1 - w)].   (2.1)

Equation (2.1) is a transcendental equation for w and cannot be solved explicitly. We can easily solve it graphically, however, using the form

ln [(1 + w)/(1 - w)] = w(-8/kT)(E_ab - (E_aa + E_bb)/2).   (2.2)

We plot ln (1 + w)/(1 - w) as a function of w, and on the same graph draw the straight line w(-8/kT)[E_ab - (E_aa + E_bb)/2]. The intersections give the required value of w. As we see from Fig. XVIII-2, at low temperatures the straight line is steep and there will be three intersections, one at w = 0 (evidently corresponding to the maximum of the curve, as we see from Fig. XVIII-1) and two others, which we desire, at equal positive and negative values of w.
As the temperature increases and the slope of the straight line decreases, these intersections move toward w = 0 and finally coalesce when the slope of the straight line equals that of ln (1 + w)/(1 − w) at the origin. [FIG. XVIII-2. Graphical solution of Eq. (2.2).] Now ln (1 + w) − ln (1 − w) starts out from the origin like 2w, with a slope 2, so that for the Curie point we must have

$$-\frac{8}{kT_c}\left(E_{ab} - \frac{E_{aa} + E_{bb}}{2}\right) = 2, \qquad\text{or}\qquad kT_c = -4\left(E_{ab} - \frac{E_{aa} + E_{bb}}{2}\right). \tag{2.3}$$

In terms of this, the straight line in Fig. XVIII-2 is 2T_c w/T. By the graphical method of Eq. (2.2) and Fig. XVIII-2, the curve of Fig. XVIII-3 is obtained for the stable value of w, as a function of temperature. This shows the decrease of w from 1 to 0, first very gradual, then as the Curie point is approached very rapid, so that the curve actually has a vertical tangent at the Curie point. [FIG. XVIII-3. Degree of order as function of temperature.] The curve of Fig. XVIII-3 cannot be expressed analytically, though it can be approximated in the two limits of T = 0 and T = T_c. Having found the variation of w with temperature, we can find the specific heat anomaly, or the excess of specific heat over the value C_P characteristic of disorder. This excess is evidently

$$\frac{\partial U}{\partial w}\frac{dw}{dT} = 4Nw\left(E_{ab} - \frac{E_{aa} + E_{bb}}{2}\right)\frac{dw}{dT} = -\frac{NkT}{2}\ln\frac{1+w}{1-w}\,\frac{dw}{dT} = -NkT_c\,w\,\frac{dw}{dT}, \tag{2.4}$$

using Eqs. (1.4), (1.7), (2.2), and (2.3). In Eq. (2.4), it is understood that dw/dT is the slope of the curve of Fig. XVIII-3 and that it is to be determined graphically. Since the slope is negative, the excess specific heat is positive. We give the resulting curve for specific heat in Fig. XVIII-4, where we see that it comes to a sharp peak at the Curie point and above that point drops to zero. From the discussion we have given, it is plain that the change from the ordered to the disordered state occupies the whole temperature range from zero degrees to T_c, though it is largely localized at temperatures slightly below T_c.
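The graphical solution just described is easy to reproduce numerically. Using Eq. (2.3) to write the straight line of Eq. (2.2) as 2T_c w/T, the equilibrium condition is ln[(1 + w)/(1 − w)] = 2(T_c/T)w, and the nonzero root below the Curie point can be found by bisection. A minimal sketch (the reduced temperatures are chosen arbitrarily):

```python
from math import log

def order_parameter(t):
    """Stable degree of order w at reduced temperature t = T/T_c."""
    if t >= 1.0:
        return 0.0                      # above the Curie point only w = 0 remains
    f = lambda w: log((1 + w) / (1 - w)) - 2 * w / t
    lo, hi = 1e-9, 1 - 1e-12            # f < 0 near 0 and f > 0 near 1 when t < 1
    for _ in range(200):                # plain bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

for t in (0.25, 0.5, 0.75, 0.95, 1.1):
    print(t, order_parameter(t))        # w falls from near 1 to 0 as t approaches 1
```

Feeding the resulting w(T), with dw/dT taken by numerical differences, into Eq. (2.4) reproduces the specific-heat peak of Fig. XVIII-4.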
Thus this change, a gradual one occurring over a large temperature range, is just of the sort that we wish to call a phase change of the second order. We can make the situation clearer by plotting curves for G as a function of T. We do this for a number of values of w, ranging from zero to unity. In a sense, we may consider that we have a mixture of an infinite number of phases, corresponding to the continuous range of w, and at each temperature that particular phase (or particular w) will be stable whose curve of G against T lies lowest. The resulting curves are shown in Fig. XVIII-5. [FIG. XVIII-4. Excess specific heat arising from the ordered state, in units of Nk, as function of temperature.] [FIG. XVIII-5. Gibbs free energy as function of temperature, for different degrees of order, in the order-disorder transition. The envelope of the straight lines represents the free energy of the stable state.] To make them clearer, we leave out the terms coming from the specific heat C_P of the disordered state, which are common to the curves for all w's and do not affect the relative positions of the curves. When this is done, the curves become straight lines, since the internal energy and entropy are then independent of temperature. At the absolute zero, the lowest curve is the one with the lowest internal energy, or the ordered state. The disordered states have greater entropy, however, even at the absolute zero, so that their curves slope down more, and at higher temperatures their free energies lie lower than that of the ordered state. From Fig. XVIII-5 we see that there is an envelope to the curves, and this envelope represents the actual curve of G vs. T, whose slope is the negative entropy and whose second derivative gives the specific heat.
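The straight-line family and its envelope can be imitated numerically. With the C_P terms dropped, each fixed-w curve is the straight line G(w, T) = U(w) − T S(w); in reduced units (energies per NkT_c, temperatures as T/T_c, w-independent energy terms dropped), Eqs. (1.4), (1.7), and (2.3) give U(w) = −w²/2 and S(w) = ln 2 − ½[(1 + w)ln(1 + w) + (1 − w)ln(1 − w)]. A sketch picking the lowest line at each temperature (the grids are arbitrary):

```python
from math import log

def U(w):                                   # ordering energy, units of N k T_c
    return -0.5 * w * w

def S(w):                                   # configurational entropy, units of N k
    s = sum(x * log(x) for x in (1 + w, 1 - w) if x > 0)
    return log(2) - 0.5 * s

ws = [i / 1000 for i in range(1000)]        # degrees of order 0.000 ... 0.999
for t in (0.1, 0.5, 0.9, 1.2):              # reduced temperatures T/T_c
    w_star = min(ws, key=lambda w: U(w) - t * S(w))   # lowest line = envelope
    print(t, w_star)
```

The winning w at each temperature traces out the curve of Fig. XVIII-3: near 1 at low T, smaller as T rises, and exactly 0 above T_c.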
The particular value of w whose curve is tangent to the envelope at any temperature is the stable w at that temperature, as given by Fig. XVIII-3. Graphs like Fig. XVIII-5 show particularly plainly the difference between phase changes of the first and second order. We can readily imagine that, by slightly altering the mathematical details, the curves could be changed to the form of Fig. XVIII-6, in which, though we have a continuous set of phases from w = 0 to w = 1, the envelope lies above rather than below the axis of abscissas. [FIG. XVIII-6. Gibbs free energy as function of temperature, for different degrees of order, in a phase change of the first order, in which the ordered state is stable below a certain temperature, the completely disordered state above this temperature.] In this case the stable state is that with w = 1 up to a certain temperature, w = 0 from there on, all other values of w corresponding to states that are never stable. This would then be a phase change of the first order, as shown in Fig. XVI-2, with a discontinuity in the slope of the G vs. T curve, or the entropy, and hence with a latent heat. When we see the small geometrical difference between these two cases, we see that in some cases the distinction between phase changes of the first and the second order is not very fundamental. In this connection, it is interesting to note that the rotation-vibration transition in NH4Cl, which we mentioned in a preceding paragraph, is clearly a phase change of the second order, the change occurring through a considerable range of temperature or pressure. However, there is a similar transition in NH4Br, undoubtedly due to the same physical cause, which at least at high pressure takes place so suddenly that it certainly seems to be a phase change of the first order. This is probably a case where the distinction is no more significant than in Figs. XVIII-5 and XVIII-6.
We must not forget, however, that there is one real and definite distinction between most phase changes of the first order and all those of the second order: in every phase change of the second order, we must be able to imagine a continuous range of phases between the two extreme ones under discussion, while in a phase change of the first order this is not necessary (though, as we have seen in Fig. XVIII-6, it can sometimes happen), and in the great majority of cases it is not possible.

3. Transitions of the Cu-Zn Type with Arbitrary Composition. It is not much harder to discuss the general case of arbitrary composition than it is the simple case of 50 per cent concentration taken up in the two preceding sections. We assume that there are Nx atoms a, N(1 − x) b's, and we shall limit ourselves to the case where x is less than ½; the same formulas do not hold for x greater than ½, but to get this case we can merely interchange the names of substances a and b. As before, we let the degree of order be w. Then we assume

$$\begin{aligned} \text{Number of atoms } a \text{ on lattice } \alpha &= \tfrac{N}{2}(1+w)x \\ \text{Number of atoms } b \text{ on lattice } \alpha &= \tfrac{N}{2}[1-(1+w)x] \\ \text{Number of atoms } a \text{ on lattice } \beta &= \tfrac{N}{2}(1-w)x \\ \text{Number of atoms } b \text{ on lattice } \beta &= \tfrac{N}{2}[1-(1-w)x]. \end{aligned} \tag{3.1}$$

To justify these assumptions, we note that they lead to the correct numbers in the three cases w = 0, ±1, and that they give the numbers as linear functions of w, conditions which determine Eqs. (3.1) uniquely. Then for the numbers of pairs we find

$$\begin{aligned} \text{Number of pairs } aa&: \ 4Nx^2(1-w^2) \\ \text{Number of pairs } bb&: \ 4N[(1-x)^2 - x^2w^2] \\ \text{Number of pairs } ab&: \ 8N[x(1-x) + x^2w^2], \end{aligned} \tag{3.2}$$

and for the internal energy we have

$$U = 4N[xE_{aa} + (1-x)E_{bb}] + 8N[x(1-x) + x^2w^2]\left(E_{ab} - \frac{E_{aa}+E_{bb}}{2}\right) + \int_0^T C_P\,dT. \tag{3.3}$$

The steps in the derivation of Eqs. (3.2) and (3.3) have not been given above, but the principles used in their derivation are just like those used in Sec. 1. We note that Eq. (3.3) is the one already written in Eqs.
(1.4) and (1.5), but previously justified only for the case x = ½. The derivation of the entropy is also exactly analogous to that of Sec. 1, and the result is

$$S = -\frac{Nk}{2}\Big\{(1+w)x\ln{(1+w)x} + (1-w)x\ln{(1-w)x} + [1-(1+w)x]\ln{[1-(1+w)x]} + [1-(1-w)x]\ln{[1-(1-w)x]}\Big\} + \int_0^T \frac{C_P}{T}\,dT. \tag{3.4}$$

It is easy to verify that in the case x = ½ this leads to the value already found in Eq. (1.7). From Eqs. (3.3) and (3.4) we can find the free energy and carry out the same sort of discussion that we have above, but for any concentration. To find the value of w for the stable state, at any value of x, we differentiate with respect to w and set it equal to zero. Then we have

$$0 = 16Nx^2w\left(E_{ab} - \frac{E_{aa}+E_{bb}}{2}\right) + \frac{NkTx}{2}\ln\frac{(1+w)[1-(1-w)x]}{(1-w)[1-(1+w)x]}, \tag{3.5}$$

or

$$\ln\frac{(1+w)[1-(1-w)x]}{(1-w)[1-(1+w)x]} = -\frac{32xw}{kT}\left(E_{ab} - \frac{E_{aa}+E_{bb}}{2}\right) = \frac{8xT_cw}{T}, \tag{3.6}$$

where the T_c used in Eq. (3.6) is the one defined in Eq. (2.3), holding for the concentration x = ½. Equation (3.6) can be solved as in the special case x = ½, plotting the left side of Eq. (3.6) against w and finding the intersection with the straight line given by the right side. Qualitatively we find the same sort of result as in our previous case, the degree of order going from unity at absolute zero to zero at a Curie point. [FIG. XVIII-7. Gibbs free energy as function of w for different compositions, T = 0.8 T_c.] The Curie point, however, depends on concentration. We find it, as before, by letting the slope of the straight line representing the right side of Eq. (3.6) be the same as the slope of the left side at the origin, which is 2/(1 − x). Equating these, we have

$$T_{cx} = 4x(1-x)T_c, \tag{3.7}$$

where T_cx is the Curie temperature for concentration x, T_c for concentration x = ½. From Eq. (3.7) we see that T_cx is a parabolic function of
composition to indicate the transition, though thore is no real equilibrium of phases to be indicated by it. In a case like the Cu-Zn transition, this curve should theoretically have the form (3.7). The experimental data are hardly good enough to soo whether this is verified or not. w=08- 0.1 02 03 0.4 05 FIG. XVIII-8. Gibbs free energy as function of composition, for different decrees ot order, T = 0.8 T e . At a temperature below the Curie point T C1 it is plain from Eq. (3.7) that for concentrations nearer than a certain critical concentration the alloys will be below thoir Curie points and will be in partly ordered phases, while for x loss than this critical concentration they will bo above* their Curie points and will bo in the disordered state. This is indicated in Fig. XVIII-7, where wo show G as a function of w for different values of x, at a temperature of 0.8 T c . The critical concentration for this tem- perature is 0.277, as can bo found at once from Eq. (3.7); it is noted in Fig. XVIII-7 that the curves for x = 0.1 and 0.2 definitely have thoir minima at w = 0, indicating complete disorder, while that for x = 0.3 is very flat at the center, and those for 0.4 and 0.5 definitely have minima for w j 0, indicating a partly ordered state. Finally, in Fig. XVIII-8 we show G as a function of x, for different values of w, at this same tern- 304 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XVIII perature T = 0.8 T c . For compositions up to 0.277, as we have men- tioned, the curve for w lies the lowest. At higher concentrations, the other curves begin to cross it and the stable state corresponds to the envelope of these curves, the lowest w rising from w = to a maximum of about w = 0.80 at x = \. This envelope is of interest, for it is the curve of G vs. x which should really be used to represent the stable stato in such a system and which should bo usod in investigating the equilibrium between this phase and other phases, in the manner of Chap. XVII. 
We notice that this envelope is a smooth curve, convex downward, just as the curve for w is, and in fact it does not greatly differ from that for w = 0. Thus our discussion of phase equilibrium of the preceding chapter, where we entirely neglected the order-disorder transition, is not seriously in error for a phase in which such a transition is possible. The reason is that, though there is a considerable difference in energy and entropy separately between the ordered and disordered states, these make contributions of opposite sign in the free energy, so that it is only slightly affected by the degree of order. PART III ATOMS, MOLECULES, AND THE STRUCTURE OF MATTER CHAPTER XIX RADIATION AND MATTER In the development of quantum theory, light, or electromagnetic radiation of visible wave lengths, has had a very special place. It was the study of black-body radiation that first showed without question the inadequacy of classical mechanics, and that led Planck to the quantum theory. One of the first triumphs of quantum theory was Einstein's prediction of the law of photoelectric emission, a prediction which was beautifully verified by experiment. And in the development of the theory of atomic and molecular structure, the most complicated and involved test which has yet been given the quantum theory, the tool has been almost entirely optical, the spectrum, the light emitted and absorbed by matter. Some of the most difficult logical concepts of the quantum theory have come in the field of light. The difficulty of reconciling prob- lems like interference of light, which clearly indicate that it is an electro- magnetic wave motion, with problems like the photoelectric effect, which equally clearly* indicate that it is made of individual particles of energy, or photons, is well known. 
And these difficulties, indicating that light really has a sort of dual nature, gave the suggestion that matter might have a dual nature too, and that the particles with which we were familiar might also be associated with waves. This was the suggestion which led to wave mechanics and which raised the quantum theory from a rather arbitrary set of rules to a well-developed branch of mathematical physics. Throughout the development of modern ideas of light, black-body radiation has played an essential role. This is simply light in thermal equilibrium: the distribution of frequencies and intensities of light which is in equilibrium with matter at a given temperature. Our study in this chapter will be of black-body radiation, and we shall handle it by direct thermodynamic methods, using the quantum theory much as we did in the theory of specific heats. In the following chapter we shall take up the kinetics of radiation, the probabilities of emission and absorption of light by matter. This will lead us to a kinetic derivation of the laws of black-body radiation, and at the same time to a usable method of handling the kinetics of radiation problems out of equilibrium, which we very commonly meet in the laboratory.

1. Black-body Radiation and the Stefan-Boltzmann Law. Light is simply electromagnetic radiation, a wave motion in space, in which the electric and the magnetic fields oscillate rapidly with time. It can carry energy, just as sound or any other wave can carry energy. We are all familiar with this; most of the available energy on the earth was carried here from the sun, by electromagnetic radiation.
Like all waves, it can be analyzed into sinusoidal or monochromatic waves, in which the electromagnetic field oscillates sinusoidally with time, with a definite frequency ν; the possibility of such an analysis is a mathematical one, based on Fourier's series, and does not imply anything about the physics of radiation. The velocity of light, at least in empty space, is independent of the frequency of oscillation, and is ordinarily denoted by c, equal to 2.998 × 10^10 cm. per second. We can associate a wave length with each frequency of oscillation, by the equation

$$\lambda\nu = c, \tag{1.1}$$

where λ is the wave length. The mathematics of the light waves is essentially like that of sound waves, given in Sec. 2 of Chap. XIV, and we shall not repeat it here. In that section, however, we found that elastic waves were of two sorts, longitudinal and transverse. Light on the contrary is only transverse, with two possible planes of polarization, or directions for the electric or magnetic field, at right angles to the direction of propagation. We ordinarily deal, in discussions like the present, with fairly short wave lengths of light. The long waves, as found in radio transmission, are of small significance thermodynamically or in atomic structure. Among waves shorter, say, than a tenth of a millimeter, it is customary to speak of those longer than 7000 A as infrared or heat waves, those between 7000 and 4000 A as light (since the eye can see only these wave lengths), those between 4000 A and perhaps 50 A as ultraviolet, and those shorter than 50 A but longer than perhaps 0.01 A as x-rays. Waves shorter than this are hardly met in ordinary thermodynamic or atomic processes, though of course they are essential in nuclear processes and cosmic rays. Although there is this classification of wave lengths, it is purely a matter of convenience, and we shall not have to bother with it. For our purposes, we may consider as light any radiation from perhaps 1/2 mm.
to 1/10 A; it is only in this range that the radiations we consider are likely to have appreciable intensity. Ordinary bodies at any temperature above the absolute zero automatically emit radiation, and are capable of absorbing radiation falling on them. Thus an enclosure containing bodies at a temperature above the absolute zero cannot be in equilibrium unless it contains radiation as well. In fact, in equilibrium, there must be just enough radiation so that each square centimeter of surface of each body emits just as much radiation as it absorbs. It seems clear that there must be a definite sort of radiation in equilibrium with bodies at a definite temperature. For we
per second, and suppose a fraction a\ is absorbed, the remainder, or (1 a\), being reflected. Then a\ is called the absorptivity, and (1 a\) is called the reflectivity. Now consider the simple requirement for thermal equilibrium. We shall demand that, in each separate range of wave lengths, as much radiation is absorbed by our square centimeter in thermal equilibrium as is radiated by it. This assumption of balance in each range of wave lengths is a particular example of the principle of detailed balancing first introduced in Chap. VI, Sec. 2. Now let I\d\ be the amount of black-body radiation falling on 1 sq. cm. per second in the wave length range dX. This is a function of the wave length and temperature only, as we have mentioned above. Then we have the following relation, holding for 1 sq. cm. of surface: Energy emitted per second = e\d\ = energy absorbed per second = I\a\d\, or = I\ = universal function of X and T. ( 1.2) G&X Equation (1.2) expresses Kirchhoff s law: the ratio of the emissive power to the absorptivity of all bodies at the same wave length and temperature is the same. Put more simply, good radiators are good absorbers, poor radiators are poor absorbers. There are many familiar examples of this law. One, which is of particular importance in spectroscopy, is the following : if an atom or other system emits a particularly large amount of 310 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XIX radiation at one wave length, as it will do if it has a line spectrum, it must also have a particularly large absorptivity at the same wave length, so that a continuous spectrum of radiation, passing through the body, will have this wave length absorbed out and will show a dark line at that point. A black body is by definition one which absorbs all the light falling on it, so that none is reflected. That is, its absorptivity a\ is unity, for all wave lengths. Then it follows from Eq. 
(1 .2) that for a black body the emissive power ex is equal to /x, the amount of black-body radiation falling on 1 sq. cm. per second per unit range of wave length. We can understand the implications of this statement better if we consider what is called a hollow cavity. This is an enclosure, with perfectly opaque walls, so that no radiation can escape from it. It contains matter and radiation in equilibrium at a given temperature. Thus the radiation is black-body radiation characteristic of that temperature. Now suppose we make a very small opening in the enclosure, not big enough to disturb the equilibrium but big enough to let a little radiation out. We can approximate the situation in practice quite well by having a well insulated electric furnace for the cavity, with a small window for the opening. All the radiation falling on the opening gets out, so that if we look at what emerges, it represents exactly the black-body radiation falling on the area of the opening per second. Such a furnace makes in practice the moist convenient way of getting black-body radiation. But now by Kirchhoff 's law and the definition of a black body, we see that if we have a small piece of black body, of the shape of the opening in our cavity, and if we heat it to the temperature of the cavity, it will emit exactly the same sort of radiation as the opening in the cavity. This is the reason why our radia- tion from the cavity, radiation in thermal equilibrium, is also called black- body radiation. A black body is the only one which has this property of emitting the same sort of radiation as a cavity. Any other body will emit an amount a\I\ d\ per square centimeter per second in the range d\, and since a\ must by definition be less than or equal to unity, the other body will emit less light of each wave length than a black body. A body which has very small absorptivity may emit hardly anything. Thus quartz transmits practically all the light that falls on it, without absorp- tion. 
When it is heated to a temperature at which a metal, for instance, would be red or white hot and would emit a great deal of radiation, the quartz emits hardly any radiation at all, in comparison. Now that we understand the emissive power and absorptivity of bodies, we should consider /x, the universal function of wave length and temperature describing black-body radiation. It is a little more con- venient not to use this quantity, but a closely related one, u*. This represents, not the energy falling on 1 sq. cm. per second, but the energy contained in a cubic centimeter of volume, or what is called the energy SEC. 1] RADIATION AND MATTER 31 1 density. If there is energy in transit in a light beam, it is obvious that the energy must be located somewhere while it is traveling, and that we can talk about the amount of energy, or the number of ergs, per cubic centimeter. It is a simple geometrical matter to find the relation between the energy density and the intensity. If we consider light of a definite direction of propagation, then the amount of it which will strike unit cross section per second is the amount contained in a prism whose base is 1 sq. cm. and whose slant height along the direction of propagation is the velocity of light c. This amount is the volume of the prism (c cos 0, if is the angle between the direction of propagation and the normal to the surface), multiplied by the energy of the light wave per unit volume. Thus we can find very easily the amount of light of this definite direction of propagation falling on 1 sq. cm. per second, if we know the energy density, and by integration we can find the amount of light of all direc- tions falling on the surface. We shall not do it, since we shall not need the relation. In addition to this difference between u v and 7\, the former refers to frequency rather than wave length, so that u v dv by definition is the energy per unit volume in the frequency range from v to v + dv. 
In addition to energy, light can carry momentum. That means that if it falls on a surface and is absorbed, it transfers momentum to the surface, or exerts a force on it. This force is called radiation pressure. In ordinary laboratory experiments it is so small as to be very difficult to detect, but there are some astrophysical cases where, on account of the high density of radiation and the smallness of other forces, the radiation pressure is a very important effect. The pressure on a reflecting surface, at which the momentum of the radiation is reversed instead of just being reduced to zero, is twice that on an absorbing surface. Now the radiation pressure can be computed from electromagnetic theory, and it turns out that in isotropic radiation (radiation in which there are beams of light traveling in all directions, as in black-body radiation), the pressure against a reflecting wall is given by the simple relation

$$P = \tfrac{1}{3} \times \text{energy density} = \tfrac{1}{3}\int_0^\infty u_\nu\,d\nu. \tag{1.3}$$

From Eq. (1.3) we can easily prove a law called the Stefan-Boltzmann law relating the density of radiation to the temperature. Let us regard our radiation as a thermodynamic system, of pressure P, volume V, and internal energy U, given by

$$U = V\int_0^\infty u_\nu\,d\nu. \tag{1.4}$$

Then, from Eqs. (1.3) and (1.4), the equation of state of the radiation is

$$PV = \tfrac{1}{3}U, \tag{1.5}$$

which compares with PV = ⅔U for a perfect gas. There is the further fact, quite in contrast to a perfect gas, that the pressure depends only on the temperature, being independent of the volume. Then we have

$$\left(\frac{\partial U}{\partial V}\right)_T = 3P, \tag{1.6}$$

using the fact that P is independent of V. But by a simple thermodynamic relation we know that in general

$$\left(\frac{\partial U}{\partial V}\right)_T = T\left(\frac{\partial P}{\partial T}\right)_V - P. \tag{1.7}$$

Combining Eqs. (1.6) and (1.7), we have

$$T\left(\frac{\partial P}{\partial T}\right)_V = 4P. \tag{1.8}$$

The pressure in Eq. (1.8) is really a function of the temperature only, so that the partial derivative can be written as an ordinary derivative, and we can express the relation as

$$\frac{dP}{P} = 4\,\frac{dT}{T}, \tag{1.9}$$

which can be integrated to give ln P = 4 ln T + const., P = const.
× T^4. (1.10) Or, using Eq. (1.3),

$$\int_0^\infty u_\nu\,d\nu = \text{const.} \times T^4. \tag{1.11}$$

Equation (1.11), stating that the total energy per unit volume is proportional to the fourth power of the absolute temperature, is the Stefan-Boltzmann law. Since the intensity of radiation, the amount falling on a square centimeter in a second, is proportional to the energy per unit volume, we may also state the law in the form that the total intensity of black-body radiation is proportional to the fourth power of the absolute temperature. This law is important in the practical measurement of high temperatures by the total radiation pyrometer. This is an instrument which focuses light from a hot body onto a thermopile, which absorbs the radiation energy and measures it by finding the rise of temperature it produces. The pyrometer can be calibrated at low temperatures that can be measured by other means. Then, by Stefan's law, at higher temperatures the amount of radiation must go up as the fourth
There will be a discrete set of possible vibrations or overtones of a fundamental vibration, just as we had a discrete set of sound vibrations in a rectangular solid in Chap. XIV; only instead of having a finite number of overtones, as we did with the sound vibrations on account of the atomic nature of the material, the number of overtones here is infinite and stretches up to infinite fre- quencies. As with the sound vibrations, each overtone acts, as far as the quantum theory is concerned, like a linear oscillator. Its energy cannot take on any arbitrary value, but only certain quantized values, multiples of hv, where v is the frequency of that particular overtone. This leads at once to a calculation of the mean energy of each overtone, just as we found in our calculation of the specific heat of solids, and from that we can find the mean energy per unit volume in the frequency range dv, and so can find u v . First, let us find the number of overtone vibrations in the range dv. We follow Chap. XIV in detail and for that reason can omit a great deal of calculation. In Sec. 2 of that chapter, we found the number of over- tones in the range dp, in a problem of elastic vibration, in which the velocity of longitudinal waves was i>/, that of transverse waves v t . From Eqs. (2.20) and (2.21) of that chapter, the number of overtones of longi- tudinal vibration in the range di>, in a container of volume F, was dN = 4 di. (2.1) v i and of transverse vibrations dN = 87TJ/ 2 drt- (2.2) There was an upper, limiting frequency for the elastic vibrations, but as we have just stated there is not for optical vibrations. In our optical case, we can take over Eq. (2.2) without change. Light waves are only 314 INTRODUCTION TO CHEMICAL PHYSICS [HAP. XIX transverse, so that the overtones of Eq. (2.1) are not present but those of Eq. (2.2) are. Since the velocity of light is c, we have dN - 8 2 d (2.3) as the number of standing waves in volume V and the frequency range dv. 
Next we want to know the moan energy of each of these standing waves at the temperature T. Before the invention of the quantum theory, it was assumed that the oscillators followed classical statistics. Then, being linear oscillators, the mean energy would have to be kT at temperature T. From this it would follow at once that the energy density u, 9 which can be found by multiplying dN in Eq. (2.3) by the mean energy of an oscillator, and dividing by V and dv, is , A . u v = ^ - (2.4) Kquation (2.4) is the so-called Kayleigh-Jcans law of radiation. It was derived, essentially as we have done, from classical theory, and it is the only possible radiation law that can be found from classical theory. Yet it is obviously absuru, as was realized as soon as it was derived. For it indicates that u v increases continually with v. At any temperature, the farther out we go toward the ultraviolet, the more intense is the tempera- ture radiation, until finally it becomes infinitely strong as we go out through the x-rays to the gamma rays. This is ridiculous; at low tem- peratures the radiation has a maximum in the infrared, and has fallen practically to zero intensity by the time we go to the visible part of the spectrum, while even a white-hot body has a good deal of visible radiation (hence the "white heat"), but very little far ultraviolet, and practically no x-radiation. There must be some additional feature, missing in the classical theory, which will have as a result that the overtones of high frequency, the visible and even more the ultraviolet and x-ray frequencies, have much less energy at low temperatures than the equipartition value, and in fact at really low temperatures have practically no energy at all. But this is just what the quantum theory does, as we have seen by many examples. 
The examples, and the quantum theory itself, however, were not available when this problem first had to be discussed; for it was to remove this difficulty in the theory of black-body radiation that Planck first invented the quantum theory. In Chap. IX, Sec. 5, we found the average energy of a linear oscillator of frequency ν in the quantum theory, and found that it was

\text{Average energy} = \frac{h\nu}{2} + \frac{h\nu}{e^{h\nu/kT} - 1},   (2.5)

as in Eq. (5.9), Chap. IX.

SEC. 2] RADIATION AND MATTER

The term ½hν was the energy at the absolute zero of temperature; the other term represented the part of the energy that depended on temperature. The first term, sometimes called the zero-point energy, arose because we assumed that the quantum condition was E_n = (n + ½)hν, instead of E_n = nhν. We can now use the expression (2.5) for the average energy of an overtone, but we must leave out the zero-point energy. For, since the number of overtones is infinite, this would lead to an infinite energy density, even at a temperature of absolute zero. The reason for doing this is not very clear, even in the present state of the quantum theory. We can do it quite arbitrarily, or we can say that the quantum condition should be E_n = nhν (an assumption, however, for which there is no justification in wave mechanics), or we can say that the infinite zero-point energy is really there but, since it is independent of temperature, we do not observe it.

FIG. XIX-1. Energy density from Planck's distribution law, for four temperatures in the ratio 1:2:3:4.

No one of these reasons is very satisfactory. Unfortunately, though it was the branch of physics in which quantum theory originated, radiation theory still has more difficulties in it than any other part of quantum theory. We shall then simply apologize for leaving out the term ½hν in Eq. (2.5), and shall hope that at some future time the theory will be well enough understood so that we can justify it.
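The two limits of the temperature-dependent term of Eq. (2.5) can be checked numerically. This is a sketch using modern CGS values of h and k (assumptions, slightly different from the constants the text uses): at low frequency the mean energy approaches the classical value kT, while at high frequency it is exponentially suppressed.

```python
import math

h = 6.626e-27   # Planck's constant, erg s (modern value, an assumption here)
k = 1.381e-16   # Boltzmann's constant, erg/deg

def mean_energy(nu, T):
    """Temperature-dependent part of Eq. (2.5): h*nu / (exp(h*nu/kT) - 1)."""
    x = h * nu / (k * T)
    return h * nu / math.expm1(x)

T = 300.0
# Low frequency (radio): the classical equipartition value kT is recovered.
low = mean_energy(1e9, T)
print(low / (k * T))           # very close to 1

# High frequency (ultraviolet): practically no energy, as the text argues.
high = mean_energy(1e15, T)
print(high / (k * T))          # essentially 0
```

This is exactly the "additional feature" missing from the Rayleigh-Jeans argument: the high overtones are frozen out.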
If we assume the expression (2.5), without the zero-point energy, for the average energy of a standing wave, and take Eq. (2.3) for the number of standing waves in volume V and frequency range dν, we can at once derive u_ν, and we have

u_\nu = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1}.   (2.6)

Equation (2.6) is Planck's radiation law, and as far as available experiments show, it is the exactly correct law of black-body radiation. Curves of u_ν as a function of ν, for different temperatures, are shown in Fig. XIX-1. At low frequencies, even for room temperatures, the frequencies are so low that the energy of an oscillator has practically the classical value, and the Rayleigh-Jeans law is correct. At higher frequencies, however, this is not the case, and the curves, instead of rising indefinitely toward high frequencies, curve down again and go very sharply to negligible values. The maximum of the curve shifts to higher frequencies as the temperature rises, checking the experimental fact that bodies look red, then white, then blue, as their temperature rises. The area under the curve rises rapidly with temperature. It is, in fact, this area that must be proportional to the fourth power of the temperature, according to the Stefan-Boltzmann law. We can easily verify that Planck's law is in accordance with that law, and at the same time find the constant in Eq. (1.11), by integrating u_ν from Eq. (2.6). We have

\int_0^\infty u_\nu\,d\nu = \int_0^\infty \frac{8\pi h\nu^3}{c^3}\,\frac{d\nu}{e^{h\nu/kT} - 1} = \frac{8\pi k^4 T^4}{h^3 c^3}\int_0^\infty \frac{x^3}{e^x - 1}\,dx = \frac{48\pi k^4 a}{h^3 c^3}\,T^4,   (2.7)

where we have used the relation

a = 1 + \frac{1}{2^4} + \frac{1}{3^4} + \cdots = 1.0823,   (2.8)

together with \int_0^\infty x^3\,dx/(e^x - 1) = 6a.

3. Einstein's Hypothesis and the Interaction of Radiation and Matter. To explain the law of black-body radiation, Planck had to assume that the energy of a given standing wave of light of frequency ν could be only an integral multiple of the unit hν.
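The integral appearing in Eq. (2.7) can be checked numerically against the series of Eq. (2.8). This is a minimal sketch using a midpoint rule (the grid size and cutoff are arbitrary choices): the integral of x³/(e^x − 1) should come out equal to 6a = π⁴/15.

```python
import math

# Numerical check of the integral in Eq. (2.7):
#   integral_0^inf x^3/(e^x - 1) dx = 6*a,  a = 1 + 1/2^4 + 1/3^4 + ...  (Eq. 2.8)
def integrand(x):
    return x**3 / math.expm1(x)

# Midpoint rule; the integrand is utterly negligible beyond x ~ 50.
n, xmax = 200000, 50.0
dx = xmax / n
total = sum(integrand(dx * (i + 0.5)) for i in range(n)) * dx

a = sum(1.0 / m**4 for m in range(1, 10000))  # partial sum of Eq. (2.8)
print(total)       # ~6.4939
print(6 * a)       # ~6.4939, i.e. pi^4/15
```

The agreement confirms the T⁴ dependence: every factor of T in Eq. (2.7) comes out of the substitution x = hν/kT, and the remaining integral is a pure number.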
This carries with it a remarkable result: the energy of the light can change only by the quite finite amount hν, or a multiple of it. This is quite contrary to what the wave theory of light indicates. The theory of emission and absorption of energy has been thoroughly worked out on the wave theory. An oscillating electric charge has oscillating electric and magnetic fields, and at distant points these fields constitute the radiation field, or the light wave sent out from the charge. The field carries energy out at a uniform and continuous rate, and the charge loses energy at the same rate, as one can see from the principle of conservation of energy, and gradually comes to rest. To describe absorption, we assume a light wave, with its alternating electric field, to act on an electric charge which is capable of oscillation. The field exerts forces on the charge, gradually setting it into motion with greater and greater amplitude, so that it gradually and continuously absorbs energy. Both processes, emission and absorption, then, are continuous according to the wave theory, and yet the quantum theory assumes that the energy must change by finite amounts hν. Einstein, clearly understanding this conflict of theories, made an assumption that seemed extreme in 1905 when he made it, but which has later come to point the whole direction of development of quantum theory. He assumed that the energy of a radiation field could not be considered continuously distributed through space, as the wave theory indicated, but instead that it was carried by particles, then called light quanta, now more often called photons, each of energy hν. If this hypothesis is assumed, it becomes obvious that absorption or emission of light of frequency ν must consist of the absorption or emission of a photon, so that the energy of the atom or other system absorbing or emitting it must change by the amount hν.
Einstein's hypothesis, in other words, was the direct and straightforward consequence of Planck's assumption, and it received immediate and remarkable verification in the theory of the photoelectric effect. Metals can emit electrons into empty space at high temperatures by the thermionic effect, used in obtaining electron emission from filaments in vacuum tubes. But metals also can emit electrons at ordinary room temperature, if they are illuminated by the proper light; this is called the photoelectric effect. Not much was known about the laws of photoelectric emission in 1905, but Einstein applied his ideas of photons to the problem, with remarkable results that proved to be entirely correct. Einstein assumed that light of frequency ν, falling on a metal, could act only as photons hν were absorbed by the metal. If a photon was absorbed, it must transfer its whole energy to an electron. Then the electron in question would have a sudden increase of hν in its energy. Now it requires a certain amount of work to pull an electron out of a metal; if it did not, the electrons would continually leak out into empty space. The minimum amount of work, that required to pull out the most easily detachable electron, is by definition the work function φ. Then if hν was greater than the work function, the electron might be able to escape from the metal, and the maximum possible kinetic energy which it might have as it emerged would be

\text{Kinetic energy} = h\nu - \phi.   (3.1)

If the electron happened not to be the most easily detachable one, it would require more work than φ to pull it out, and it would have less kinetic energy than Eq. (3.1) when it emerged, so that that represents the maximum possible kinetic energy. Einstein's hypothesis, then, led to two definite predictions. In the first place, there should be a photoelectric threshold: frequencies less than a certain limit, equal to φ/h, should be incapable of ejecting photoelectrons from a metal.
This prediction proved to be verified experimentally, and with more and more accurate determinations of work function it continues to hold true. It is interesting to see where this threshold comes in the spectrum. For this purpose, it is more convenient to find the wave length λ = c/ν corresponding to the frequency φ/h. If we express φ in electron volts, as is commonly done (see Eq. (1.1), Chap. IX), we have the relation

\lambda = \frac{hc}{\phi} = \frac{hc \times 300}{4.80 \times 10^{-10}\,\phi} = \frac{12{,}400 \times 10^{-8}}{\phi}\ \text{cm}.   (3.2)

All wave lengths shorter than the threshold of Eq. (3.2) can eject photoelectrons. Thus a metal with a small work function of two volts (which certain alkali metals have) has a threshold in the red and will react photoelectrically to visible light, while a metal with a work function of six volts would have a threshold about 2000 A, and would be sensitive only in the rather far ultraviolet. Most real metals lie between these limits. The other prediction of Einstein's hypothesis was as to the maximum velocity of the photoelectrons, given by Eq. (3.1). This is also verified accurately by experiment. There is a remarkable feature connected with this: the energy of the electrons depends on the frequency, but not on the intensity, of the light ejecting them. Double the intensity, and the number of photoelectrons is doubled, but not the energy of each individual. This can be carried to limits which at first sight seem almost absurd, as the intensity of light is reduced. Thus let the intensity be so low that it will require some time, say half a minute, for the total energy falling on a piece of metal to equal the amount hν. The light is obviously distributed all over the piece of metal, and we should suppose that its energy would be continuously absorbed all over the surface. Yet that is not at all what happens.
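The threshold of Eq. (3.2) is easy to evaluate. This is a minimal sketch in SI units with modern constants (an assumption; the text works in CGS), reproducing the two examples in the paragraph above: a two-volt work function gives a red threshold, a six-volt one a far-ultraviolet threshold.

```python
h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
e = 1.602e-19    # one electron volt in joules

def threshold_angstroms(phi_volts):
    """Threshold wave length of Eq. (3.2), lambda = h*c/phi, phi in electron volts."""
    return h * c / (phi_volts * e) * 1e10   # meters -> angstroms

print(threshold_angstroms(2.0))   # ~6200 A: red light, alkali metals respond to visible light
print(threshold_angstroms(6.0))   # ~2070 A: sensitive only in the far ultraviolet
```

The useful rule of thumb buried in Eq. (3.2) is λ(Å) ≈ 12,400/φ(volts).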
About once every half minute, a single electron will be thrown off from one particular spot of the metal, with an energy which in order of magnitude is equal to all that has fallen on the whole plate for the last half minute. It is quite impossible, on any continuous theory like the wave theory, to understand how all this energy could have become concentrated in a single electron. Yet it is; photoelectric cells can actually be operated as we have just described. An example like this is the most direct sort of experimental evidence for Einstein's hypothesis, that the energy in light waves, at least when it is being emitted or absorbed, acts as if it were concentrated in photons. For a long time it was felt that there was an antagonism between wave theory and photons. Certainly the photoelectric effect and similar things are most easily explained by the theory of photons. On the other hand, interference, diffraction, and the whole of physical optics cannot be explained on any basis but the wave theory. How could these theories be simultaneously true? We can see what happens experimentally, in a case where we must think about both types of theories, by asking what would happen if the very weak beam of light, falling on the metal plate of the last paragraph, had previously gone through a narrow slit, so that there was actually a diffraction pattern of light and dark fringes on the plate, a pattern which can be explained only by the wave theory. We say that there is a diffraction pattern; this does not seem to mean anything with the very faint light, for there is no way to observe it. We mean only that if nothing is changed but the intensity of the light, and if that is raised far enough so that the beam can be observed by the eye, a diffraction pattern would be seen on the plate. But now even with the weak light, it really has meaning to speak of the diffraction pattern.
Suppose we marked off the light and dark fringes using an intense light, and then returned to our weak light and made a statistical study of the points on the plate from which electrons were ejected. We should find that the electrons were all emitted from what ought to be the bright fringes, none from the dark fringes. The wave theory tells us where photons will be absorbed, on the average. This can be seen even more easily if we replace the photoelectric plate by a photographic plate. This behaves in a very similar way: in weak light, occasionally a process takes place at one single spot of the plate, producing a blackened grain when the plate is developed, and the effect of increasing the intensity is simply to increase the number of developed grains, not to change the blackening of an individual grain. Then a weak diffraction pattern, falling on a plate for a long time, will result in many blackened grains in the bright fringes, none in the dark ones, so that the final picture will be almost exactly the same as if there had been a stronger light beam acting for a shorter length of time. Nature, in other words, does not seem to be worried about which is correct, the wave theory or the photon theory of light: it uses both, and both at the same time, as we have just seen. This is now being accepted as a fact, and the theories are used more or less in the following way. In any problem where light is concerned, an electromagnetic field, or light wave, is set up, according to classical types of theories. But this field is not supposed to carry energy, as a classical field does. Instead, its intensity at any point is supposed to determine the probability that a photon will be found at that point. It is assumed that there is no way at all of predicting exactly where any particular photon will go; we cannot say in any way whatever, in weak light, which electron of the metal will be ejected next.
But on the average, the wave theory allows us to predict. This type of statistical theory is quite different from any that has been used in physics before. Whenever a statistical element has been introduced, as in classical statistical mechanics, it has been simply to avoid the trouble of going into complete detail about a very complicated situation. But in quantum theory, as we have already mentioned in Chap. III, Sec. 3, we consider that it is impossible in principle to go into complete detail, and that the statistical theory is all that there is. When one meets wave mechanics, one finds that the laws governing the motion of ordinary particles, electrons, and atoms, are also wavelike laws, and that the intensity of the wave gives the probability of finding the particle in a particular spot, but that no law whatever seems to predict exactly where it is going. This is a curious state of affairs, according to our usual notions, but nature seems to be made that way, and the theory of radiation has been the first place to find it out.

CHAPTER XX

IONIZATION AND EXCITATION OF ATOMS

In the second part of this book, we have been concerned with the behavior of gases, liquids, and solids, and we have seen that this behavior is determined largely by the nature of the interatomic and intermolecular forces. These forces arise from the electrical structure of the atoms and molecules, and in this third part we shall consider that structure in a very elementary way, giving particular attention to the atomic and molecular binding in various types of substances. Most of the information which we have about atoms comes from spectroscopy, the interaction of atoms and light, and we must begin with a discussion of the excited and ionized states of atoms and molecules, and the relation between energy levels and the electrical properties of the atoms.

1. Bohr's Frequency Condition. We have seen in Chap. III, Sec.
3, that according to the quantum theory an atom or molecule can exist only in certain definite stationary states with definite energy levels. The spacing of these energy levels depends on the type of motion we are considering. For molecular vibrations they lie so far apart that their energy differences are large compared to kT at ordinary temperatures, as we saw in Chap. IX. For the rotation of molecules the levels are closer together, so that only at very low temperatures was it incorrect to treat the energy as being continuously variable. For molecular translation, as in a gas, we saw in Chap. IV, Sec. 1, that the levels came so close together that in all cases we could treat them as being continuous. Atoms and molecules can also have energy levels in which their electrons are excited to higher energies than those found at low temperatures. Ordinarily the energy difference between the lowest electronic state (called the normal state, or the ground state) and the states with electronic excitation is much greater than the energy differences concerned in molecular vibration. These differences, in fact, are so large that at ordinary temperatures no atoms at all are found in excited electronic levels, so that we do not have to consider them in thermal problems. The excited levels have an important bearing, however, on the problem of interatomic forces, and for that reason we must take them up here. Finally, an electron can be given so much energy that it is entirely removed from its parent atom, and the atom is ionized. Then the electron can wander freely through the volume containing the atom, like an atom of a perfect gas, and its energy levels are so closely spaced as to be continuous. The energy levels of an atom or molecule, in other words, include a set of discrete levels, and above those a continuum of levels associated with ionization.
Such a set of energy levels for an atom is shown schematically in Fig. XX-1. The energy difference between the normal state and the beginning of the continuum gives the work required to ionize the atom or molecule, or the ionization potential. The ionization potentials are ordinarily of the order of magnitude of a number of volts. The lowest excited energy level of an atom, called in some cases the resonance level (sometimes the word resonance level is used for an excited level somewhat higher than the lowest one, but one which is reached particularly easily from the lowest one by the absorption of radiation), is ordinarily several volts above the normal state, the energy difference being called the resonance potential. This verifies our statement that electrons will not be appreciably excited at ordinary temperatures. We can see this by finding the characteristic temperature associated with an energy difference of the order of magnitude of a volt. If we let kθ = energy of one electron volt, we have

\theta = \frac{4.80 \times 10^{-10}}{300 \times 1.379 \times 10^{-16}} = 11{,}600\ \text{abs.}

Thus ordinary temperatures are very small compared with such a characteristic temperature. If we consider the possibility of an electronic specific heat coming from the excitation of electrons to excited levels, we see that such a specific heat will be quite negligible, for ordinary substances, at temperatures below several thousand degrees. A few exceptional elements, however, such as some of the transition group metals, have excited energy levels only a few hundredths of a volt above the normal state, and these elements have appreciable electronic specific heat at ordinary temperatures. There are two principal mechanisms by which atoms or molecules can jump from one energy level to another. These are the emission and absorption of radiation, and collisions. For the moment we shall consider the first process.
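The contrast between an ordinary excitation energy of a volt and the few-hundredths-of-a-volt levels of the transition metals can be made quantitative with the Boltzmann factor. This is a sketch using the text's characteristic temperature of 11,600 degrees per electron volt (the 0.03-volt gap is an illustrative value, not a datum from the text).

```python
import math

theta = 11600.0   # characteristic temperature of one electron volt, deg abs (from the text)
T = 300.0         # room temperature

# Fraction of atoms excited across a 1-volt gap, by the Boltzmann factor exp(-E/kT):
fraction_1V = math.exp(-theta / T)
print(fraction_1V)        # ~1.6e-17: electronic excitation utterly negligible

# An excited level only 0.03 volt up (transition-metal case, illustrative value):
fraction_003V = math.exp(-0.03 * theta / T)
print(fraction_003V)      # ~0.3: appreciably populated, hence an electronic specific heat
```

A population of 10⁻¹⁷ contributes nothing to thermal properties, while a population of order unity does; this is the whole argument of the paragraph in numbers.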
An atom in a given energy level can have transitions to any higher energy level with absorption of energy, or to any lower level with emission, each transition meaning a quite definite energy difference. By the conservation of energy, the same quite definite energy must be converted into a photon if light is being emitted, or must have come from a photon if it is being absorbed.

FIG. XX-1. Schematic set of energy levels for an atom, showing the ionization potential and the resonance potential.

But by Einstein's hypothesis, the frequency of a photon is determined from its energy, by the relation energy = hν. Thus a transition between two atomic energy levels, with energies E₁ and E₂, must result in the emission or absorption of a photon of frequency ν, where

E_2 - E_1 = h\nu.   (1.1)

With sharp and discrete energy levels, then, we must have definite frequencies emitted and absorbed, or must have a sharp line spectrum. The relation (1.1), as applied to the spectrum, is due to Bohr, and is often called Bohr's frequency condition; it is really the foundation of the theory of spectroscopy. Regarded as an empirical fact, it states that the frequencies observed in any spectrum can be written as the differences of a set of numbers, called terms, which are simply the energy levels of the system, divided by h. Since with a given table of terms we can find a great many more differences than there are terms, this means that a given complicated spectrum can be greatly simplified if, instead of tabulating all the spectral frequencies, we tabulate only the much smaller number of term values. And the importance of Bohr's frequency condition goes much further than this. For by observing the frequencies in the spectrum and finding the terms, we can get the energy levels of the atom or molecule emitting the spectrum. We can use these directly, with no more theory, in such things as a calculation of the specific heat.
For instance, the energy levels of molecular vibration and rotation, used in finding the specific heat of molecules in Chap. IX, are the results of spectroscopic observation. Furthermore, we can use the observed energy levels to verify, in a very precise way, any theoretical calculation which we have made on the basis of the quantum conditions. The relation between the sharp lines observed in spectra, and the energy levels of the atoms or molecules making the spectra, has been the most important fact in the development of our knowledge of the structure of atoms and molecules. Bohr's frequency condition has one surprising feature. The frequency of emitted light is related, according to it, to the energy rather than the frequency of the motion in the atom that produces it. This is entirely contrary to classical theory. A vibrating charge, oscillating with a given frequency, in classical electromagnetic theory, sends out light of the frequency with which it vibrates. According to wave mechanics, however, there is really not a contradiction here. For in wave mechanics, the particles, ordinarily the electrons, which produce the light do not move according to classical theory, but the frequencies actually present in their average motions are those given by Bohr's frequency condition. Thus the relation between the motion of the particles, and the light they send out, is more nearly in accord with classical electromagnetic theory than we should suppose at first sight.

2. The Kinetics of Absorption and Emission of Radiation. With Bohr's picture of the relation between energy levels and discrete spectral lines in mind, Einstein gave a kinetic derivation of the law of black-body radiation, which is very instructive and which has had a great deal of influence.
Einstein considered two particular stationary states of an atom, say the ith and jth (where for definiteness we assume that the ith lies above the jth), and the radiation which could be emitted and absorbed in going between these two states, radiation of frequency ν_ij, where

h\nu_{ij} = E_i - E_j.   (2.1)

Suppose the atom, or group of atoms of the same sort, capable of existing in these stationary states, is in thermal equilibrium with radiation at temperature T. Then for equilibrium, using the principle of detailed balancing, the amount of energy of frequency ν_ij absorbed by the atoms per second in making the transition from state j to state i must equal the amount of the same frequency emitted per second in going from state i to state j. Einstein made definite assumptions as to the probability of making these two transitions. In the first place, consider absorption. The number of atoms absorbing photons per second must surely be proportional first to the number of atoms in the lower, jth energy level, which we shall call N_j, and to the intensity of radiation of the frequency ν_ij, which is u_{ν_ij}. Thus Einstein assumed that the number of atoms absorbing photons per second was

N_j B_{ji} u_{\nu_{ij}},   (2.2)

where B_{ji} is a constant characteristic of the transition. Next consider emission. Quite clearly an atom in an excited state can emit radiation and jump to a lower state without any outside action at all. This is called spontaneous emission, and the probability of it was assumed by Einstein to be a constant independent of the intensity of radiation. Thus he assumed the number of atoms falling spontaneously from the ith to the jth levels per second with emission of radiation was

N_i A_{ij},   (2.3)

where A_{ij} is another constant. But at the same time there must be another process of emission, as Einstein showed by considering very high temperatures, where u_{ν_ij} is very large.
In this limit, the term (2.2) is bound to be very large compared to the term (2.3), so that with just these two terms equilibrium is impossible. Guided by certain arguments based on classical theory, Einstein assumed that this additional probability of emission, generally called induced emission, was

N_i B_{ij} u_{\nu_{ij}},   (2.4)

proportional, as the absorption was, to the intensity of external radiation. For equilibrium, then, we must have equal numbers of photons emitted and absorbed per second. Thus we must have

N_i (A_{ij} + B_{ij} u_{\nu_{ij}}) = N_j B_{ji} u_{\nu_{ij}}.   (2.5)

But at the same time, if there is equilibrium, we know that the number of atoms in the ith and jth states must be determined by the Boltzmann factor. Thus we must have

N_i = \text{const.}\,e^{-E_i/kT}, \qquad N_j = \text{const.}\,e^{-E_j/kT}, \qquad \frac{N_i}{N_j} = e^{-(E_i - E_j)/kT} = e^{-h\nu_{ij}/kT}.   (2.6)

We can now solve Eqs. (2.5) and (2.6) to find u_{ν_ij}. We have

u_{\nu_{ij}} = \frac{A_{ij}}{B_{ji}\,e^{h\nu_{ij}/kT} - B_{ij}}.   (2.7)

The energy density (2.7) would be the Planck distribution law, if we had B_{ji} = B_{ij} and

\frac{A_{ij}}{B_{ij}} = \frac{8\pi h\nu_{ij}^3}{c^3},   (2.8)

as we see by comparison with Eq. (2.6), Chap. XIX. Einstein assumed that Eq. (2.8) was true, and in that way had a partial derivation of the Planck law. Einstein's derivation of the black-body radiation law is particularly important, for it gives us an insight into the kinetics of radiation processes. Being a kinetic method, it can be used even when we do not have thermal equilibrium. Thus if we know that radiation of a certain intensity is falling on atoms, we can find how many will be raised to the excited state per second, in terms of the coefficient B_{ji}. But this means that we can find the absorptivity of matter made of these atoms, at this particular wave length. Conversely, from measurements of absorptivity, we can deduce experimental values of B_{ji}. And from Eq. (2.8) we can find the rate of emission, or the emissive power, if we know the absorptivity.
Equation (2.8), saying that these two quantities are proportional to each other, is really very closely related to Kirchhoff's law, discussed in Chap. XIX, Sec. 1, and Einstein's whole method is closely related to the arguments of Kirchhoff. We can put Einstein's assumption of spontaneous and induced emission in an interesting light if we express Eq. (2.5), not in terms of the energy density of radiation, but in terms of the average number of photons N_ν in the standing wave of frequency ν. Let us see just what we mean by this. We are assuming that the energy of this standing wave is quantized, equal to nhν, as in Chap. XIX, Sec. 2; and by Einstein's hypothesis we are assuming that this means that there are really n (or N_ν) photons associated with this wave. In terms of this, we see that the energy density u_ν is determined by the relation

\text{Total energy in } d\nu = u_\nu V\,d\nu = \frac{8\pi\nu^2 V}{c^3}\,d\nu \times N_\nu \times h\nu,   (2.9)

that is, the number of waves in dν, times the number of photons in each wave, times the energy in each photon, using the result of Eq. (2.3), Chapter XIX. From this, we have

u_\nu = \frac{8\pi h\nu^3}{c^3}\,N_\nu.   (2.10)

Then we can rewrite Eq. (2.5), using Eq. (2.8), as

\text{Number of photons emitted per second} = A_{ij} N_i (N_\nu + 1) = \text{number of photons absorbed per second} = A_{ij} N_j N_\nu.   (2.11)

The interesting feature of Eq. (2.11) is that the induced and spontaneous emission combine into a factor as simple as N_ν + 1. This is strongly suggestive of the factors N_j + 1, which we met in the probability of transition in the Einstein-Bose statistics, Eq. (4.2), of Chap. VI. As a matter of fact, the Einstein-Bose statistics, in a slightly modified form, applies to photons. Since it does not really contribute further to our understanding of radiation, however, we shall not carry through a discussion of this relation, but merely mention its existence.

3. The Kinetics of Collision and Ionization.
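The balance expressed by Eq. (2.11) can be verified directly. This is a minimal sketch (the sample values of x = hν/kT are arbitrary): with the Boltzmann population ratio of Eq. (2.6) and the equilibrium photon number N_ν = 1/(e^{hν/kT} − 1), emission and absorption rates come out exactly equal, for every frequency and temperature.

```python
import math

# Check of Eq. (2.11): emission A_ij*N_i*(N_nu + 1) balances absorption
# A_ij*N_j*N_nu when the populations are Boltzmann and N_nu is the Planck
# photon number.  Rates are in units of A_ij; x = h*nu/kT.
for x in (0.1, 1.0, 5.0):
    Nj = 1.0                      # lower-state population (arbitrary scale)
    Ni = Nj * math.exp(-x)        # upper-state population, Eq. (2.6)
    N_nu = 1.0 / math.expm1(x)    # photons per standing wave at equilibrium
    emitted = Ni * (N_nu + 1)     # spontaneous + induced emission
    absorbed = Nj * N_nu
    print(x, emitted, absorbed)   # the two rates agree
```

Algebraically the balance is the identity e^{−x}(N_ν + 1) = N_ν, which holds precisely when N_ν = 1/(e^x − 1); dropping the induced term (the "+1") would force a Wien-type law instead of Planck's.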
In the last section we have been considering the emission and absorption of radiation as a mechanism for the transfer of atoms or molecules from one energy level to another. The other important mechanism of transfer is that of collisions with another atom, molecule, or more often with an electron. In such a collision, the colliding particles can change their energy levels, and at the same time change their translational kinetic energy, which is not considered in calculating their energy levels, by such an amount that the total energy is conserved. A collision in which only the translational kinetic energy changes, without change of the internal energy levels of the colliding particles, is called an elastic collision; this is the type of collision considered in Chap. VI, where we were finding the effect of collisions on the molecular distribution function. The type of collision that results in a change of energy level, however, involving either excitation or ionization, is called an inelastic collision, and it is in such collisions that we are particularly interested here. We consider the kinetics of such collisions in the present section, coming later to the treatment of thermal equilibrium, in particular the equilibrium between ionization and recombination, as treated by thermodynamics and kinetic theory. For generality, we begin by considering the general nature of collisions between atomic or electronic particles. The processes which we consider are collisions, and most of them are collisions of two particles, which separate again after their encounter. The probabilities of such collisions are described in terms of a quantity called a collision cross section, which we now proceed to define. First let us consider a simple mechanical collision. Suppose we have a small target, of area A (small compared to 1 sq. cm.).
Then suppose we fire many projectiles in its direction, but suppose they are not well aimed, so that they are equally likely to strike any point of the square centimeter containing the target. Then we ask, what is the chance that any one of the projectiles will strike the target? Plainly this chance will be the ratio of the area of the target, A, to the area of the whole square centimeter, which is unity. In other words, A, which we call the collision cross section in this particular case, is the fraction of all projectiles that hit the target. If instead of one target we had N in the region traversed by the beam of unit cross section, and if even the N targets filled only a small fraction of the square centimeter, so that there was small chance that one target lay behind another, then the chance that a particular projectile would have a collision would be NA, and to get the collision cross section of a single target we should have to take the fraction having collisions, and divide by N. In a similar way, in the atomic or molecular case, we allow a beam of colliding particles to strike the atoms or molecules that we wish to investigate. A certain number of the particles in the incident beam will pass by without collision, while a certain number will collide and be deflected. We count the fraction colliding, divide this fraction by the number of particles with which they could have collided, and the result is the collision cross section. This can plainly be made the basis of an experimental method of measuring collision cross sections. We start a beam of known intensity through a distribution of particles with which they may collide, and we measure the intensity of the beam after it has traversed different lengths of path. We observe the intensity to fall off exponentially with the distance, and from that can deduce the cross section in the following manner.
Let the beam have unit cross section, and let x be a coordinate measured along the beam. The intensity of the beam, at point x, is defined as the number of particles crossing the unit cross section at x per second. We shall call it I(x), and shall find how it varies with x. Consider the collisions in the thin sheet between x and x + dx. Let the number of particles per unit volume with which the beam is colliding be N/V. Then in the thin sheet between x and x + dx, with a volume dx, there will be N dx/V particles. Let each of these have collision cross section A. Then the fraction of particles colliding in the sheet will by definition be NA dx/V. This is, however, equal to the fractional decrease in intensity of the beam in this distance. That is,

dI/I = -(NA/V) dx.  (3.1)

Integrating, this gives

ln I = -(NA/V)x + const.,  (3.2)

or, if the intensity is I_0 when x = 0, we have

I = I_0 e^(-NAx/V).  (3.3)

From Eq. (3.3) we see the exponential decrease of intensity of which we have just spoken, and it is clear that by measuring the rate of exponential decrease we can find the collision cross section experimentally. The intensity of a beam falls to 1/e of its initial value, from Eq. (3.3), in a distance

L = V/(NA).  (3.4)

This distance is often called the mean free path. As we can see, it is inversely proportional to the number of particles per unit volume, or the density, and inversely proportional to the collision cross section. The mean free path is most commonly discussed for the ordinary elastic collisions of two molecules in a gas. For such collisions, the collision cross sections come out of the order of magnitude of the actual cross-sectional areas of the molecules; that is, of the order of magnitude of 10^-16 cm^2. In a gas at normal pressure and temperature, there are 2.70 × 10^19 molecules per unit volume. Thus the mean free path is of the order of 1/(2.70 × 10^19 × 10^-16) = 3.7 × 10^-4 cm.
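The attenuation law (3.3) and the mean free path (3.4) lend themselves to a simple numerical check; the following is a minimal Python sketch (the function names and variable names are ours, introduced for illustration, and the numbers are those quoted in the text):

```python
import math

def mean_free_path(n_per_cm3, cross_section_cm2):
    """Mean free path L = V/(NA), Eq. (3.4); n_per_cm3 is N/V."""
    return 1.0 / (n_per_cm3 * cross_section_cm2)

def beam_intensity(I0, x_cm, n_per_cm3, cross_section_cm2):
    """Exponential attenuation I = I0 exp(-NAx/V), Eq. (3.3)."""
    return I0 * math.exp(-n_per_cm3 * cross_section_cm2 * x_cm)

# Figures from the text: 2.70e19 molecules per cm^3 at normal pressure
# and temperature, cross section of order 1e-16 cm^2.
n = 2.70e19
A = 1.0e-16
L = mean_free_path(n, A)                  # about 3.7e-4 cm, as in the text
I_ratio = beam_intensity(1.0, L, n, A)    # intensity falls to 1/e in one L
```

After one mean free path the intensity has fallen to 1/e of its initial value, which is just the statement following Eq. (3.3).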
As a matter of fact, most values of A are several times this value, giving mean free paths smaller than the figure above. As the pressure is reduced, however, the mean free paths become quite long. Thus at 0°C. but a pressure of 10^-5 atm., the mean free paths become of the order of magnitude of 1 cm.; with pressures several thousand times smaller than this, which are easily realized in a high vacuum, the mean free path becomes many meters. In other words, the probability of collision in the dimensions of an ordinary vacuum tube becomes negligible, and molecules shoot from one side to the other without hindrance.

The collision cross section is closely related to the quantities A_ij^kl, which we introduced in discussing collisions in Sec. 1, Chap. VI. We were speaking there about a particular sort of collision, one in which one of the colliding particles before collision was in cell i of the phase space, the other in cell j, while after collision the first was in cell k, the second in cell l. The number of such collisions per unit time was assumed to be A_ij^kl N_i N_j. In the present case, we are treating all collisions of two molecules, one moving, the other at rest, irrespective of the velocities after collision. That is, the present case corresponds to the case where one of the two cells i or j corresponds to a molecule at rest, and where we sum over all cells k and l. Furthermore, there we were interested in the number of collisions per unit time, here in the number per unit distance of path. It is clear that if we knew the A_ij^kl's, we could compute from them the collision cross section of the sort we are now using. Our collision cross section gives less specific information, however.
We expect to find a different collision cross section for each velocity of the impinging particle, though our restriction that the particle with which it is colliding is at rest is not really a restriction at all, for it is an easy problem in mechanics to find what would happen if both particles were initially in motion, if we know the more restricted case where one is initially at rest. But the A_ij^kl's give additional information about the velocities of the two particles after collision. They assume that the total kinetic energy after collision equals the total kinetic energy before collision; that is, they assume an elastic collision. Then, as mentioned in Sec. 1, Chap. VI, there are two quantities which can be assigned at will in describing the collision, which may be taken to be the direction of one of the particles after collision. To give equivalent information to the A_ij^kl in the language of collision cross sections, we should give not merely the probability that a colliding particle of given velocity strike a fixed particle, but also the probability that after collision it travel off in a definite direction. This leads to what is called a collision cross section for scattering in a given direction: the probability that the colliding particle have a collision, and after collision that it travel in a direction lying within a unit solid angle around a particular direction in space. This cross section gives as much information as the set of A_ij^kl's, though in a different form, so that it requires a rather complicated mathematical analysis, which we shall not carry out, to pass from one to the other.

We are now ready to consider the collision cross sections for some of the processes concerned in excitation and ionization. First we consider the collision of an electron with a neutral atom. In the first place, there are two possible types of collision, elastic and inelastic.
If the energy of the incident electron is less than the resonance potential of the atom, then an inelastic collision is not possible, for the final kinetic energy of the two particles cannot be less than zero. Thus below the resonance potential all collisions are elastic. The cross section for elastic collision varies with the velocity of the impinging electron, sometimes in what seems a very erratic manner. Generally it increases as the velocity decreases to zero, but for some atoms, particularly the inert gas atoms, it goes through a maximum at a velocity associated with an energy of the order of 10 electron volts, then decreases again as the velocity is decreased, until it appears to become zero as the velocity goes to zero. This effect, meaning that extremely slow electrons have extremely long mean free paths in these particular gases, is called the Ramsauer effect, after its discoverer. The collision cross sections for elastic collision of electrons and atoms have been investigated experimentally for all the convenient atoms, and many molecules, disclosing a wide variety of behaviors. They can also be investigated theoretically by wave mechanics, involving methods which cannot be explained here, and the theoretical predictions agree very satisfactorily with the experiments, even to the extent of explaining the Ramsauer effect.

Above the resonance potential, an electron has the possibility of colliding inelastically with an atom, raising it to an excited level, as well as of colliding elastically. The probability of excitation, or the collision cross section for inelastic collision, starts up as the voltage is raised above the excitation potential, rising quite rapidly for some transitions, more slowly for others, then reaches a maximum, and finally begins to decrease if the electron is too fast.
Of course, atoms can be raised not merely to their resonance level, but to any other excited level, by an electron of suitable energy, and each one of these transitions has a collision cross section of the type we have mentioned, starting from zero just at the suitable excitation energy. The probabilities of excitation to high energy levels are small, however; by far the most important inelastic types of collision, at energies less than the ionization potential, are those in which the atom is raised to one of its lowest excited levels. (For further information about collisions, see Massey and Mott, "The Theory of Atomic Collisions," Oxford University Press, 1933.)

As soon as the energy of the impinging electron becomes greater than the ionization potential, inelastic collisions with ionization become possible. Here again the collision cross section starts rather rapidly from zero as the potential is raised above the ionization potential, reaches a maximum at the order of magnitude of two or three times the ionization potential, and gradually falls off with increasing energy. The reason for the falling off with rising energy is an elementary one: a fast electron spends less time in an atom, and consequently has less time to ionize it and less probability of producing the transition. A collision with ionization is of course different from an excitation, in that the ejected electron also leaves the scene of the collision, so that after the collision we have three particles, the ion and two electrons, instead of two as in the previous case. This fact is used in the experimental determination of resonance and ionization potentials. A beam of electrons, of carefully regulated voltage, is shot through a gas, and as the voltage is adjusted, it is observed that the mean free path shows sharp breaks as a function of voltage at certain points, decreasing sharply as certain critical voltages are passed.
This does not tell us whether the critical voltages are resonance or ionization potentials, but if the presence of additional electrons is also observed, an increase in these additional electrons is noticed at an ionization potential but not at a resonance potential.

Of course, each of these types of collision must have an inverse type, and the principle of microscopic reversibility, discussed in Chap. VI, shows that the probability, or collision cross section, for the inverse collision can be determined from that of the direct collision. The opposite or inverse to a collision with excitation is what is called a collision of the second kind (the ordinary one being called a collision of the first kind). In a collision of the second kind an electron of low energy collides with an excited atom or molecule, the atom has a transition to its normal state or some lower energy level than the one it is in, and the electron comes off with more kinetic energy than it had originally. The inverse to an ionization is a three-body collision: two electrons simultaneously strike an atom, one is bound to the atom, which falls to its normal state or some excited state, while the other electron, as in a collision of the second kind, is ejected with more kinetic energy than the two electrons together had before the collision. Such a process is called a recombination; and it is to be noticed that we never have a recombination just of an ion and an electron, for there would be no body to remove the extra energy.

In addition to the types of collision we have just considered, where an electron and an atom or molecule collide, one can have collisions of two atoms or molecules with each other, with excitation or ionization. It is perfectly possible to have two fast atoms collide with each other, originally in their normal states, and result in the excitation or ionization of one or both of the atoms.
The collision of the second kind, inverse to this, is that in which an excited atom and a normal one collide, the excited one falls to its normal state, and the atoms gain kinetic energy. Then one can have an exchange of excitation: an excited and a normal atom collide, and after the collision the excited one has fallen to its normal state, the normal one is excited, and the discrepancy in energy is made up in the kinetic energy of translation of the atoms. Or one can have an interchange of ionization: a neutral atom and a positive ion collide, and after collision the first one has become a positive ion, the second one is neutral. We shall consider this same process from the point of view of statistical mechanics in the next section. In these cases of collisions of atoms, it is very difficult to calculate the probabilities of the various processes, or the collision cross sections, and in most cases few measurements have been made. In general, however, it can be said that the probability of elastic collision, in the collision of two atoms, is much greater than the probability of any of the various types of inelastic collision.

In Chap. X, we have taken up the kinetics of chemical processes, the types of collisions between molecules which result in chemical reactions. There is no very fundamental distinction between those collisions and the type we have just considered. In ordinary chemical reactions, the colliding molecules are under all circumstances in their lowest electronic state; they are not excited or ionized. The reason is that excitation or ionization potentials, of molecules as of atoms, are ordinarily high enough so that the chance of excitation or ionization is negligible at the temperatures at which reactions ordinarily take place, or, what amounts to the same thing, the colliding molecules taking part in the reaction almost never have enough energy to excite or ionize each other.
This does not mean, however, that excitation and ionization do not sometimes occur, particularly in reactions at high temperature; undoubtedly in some cases they do. It is to be noted that in the case of colliding molecules, unlike colliding atoms, inelastic collisions are possible without electronic excitation: the molecules can lose some of their translational kinetic energy in the form of rotational or vibrational energy. In this sense, an ordinary chemical reaction, as explained in Sec. 3, Chap. X, is an extreme case of an inelastic collision without excitation. But such inelastic collisions with excitation of rotation and vibration are the mechanism by which equipartition is maintained between translational and rotational and vibrational energy, in changes of temperature of a gas. Sometimes they do not provide a mechanism efficient enough to result in equilibrium between these modes of motion. For example, in a sound wave, there are rapid alternations of pressure, produced by the translational motion of the gas, and these result in rapid alternations of the translational kinetic energy. If equilibrium between translation and rotation can take place fast enough, there will be an alternating temperature, related to the pressure by the adiabatic relation, and at each instant there will be thermal equilibrium. Actually, this holds for low frequencies of sound, but there is evidence that at very high frequencies the inelastic collisions are too slow to produce equilibrium, and the rotation does not partake of the fluctuations in energy.

Another interesting example is found in some cases of gas discharges in molecular gases. In an arc, there are ordinarily electrons of several volts' energy, since an electron must be accelerated up to the lowest resonance potential of the gases present before it can have an inelastic collision and reduce its energy again to a low value.
These electrons have a kinetic energy, then, which gas molecules would acquire only at temperatures of a good many thousand degrees. The electrons collide elastically with atoms, and in these collisions the electrons tend to lose energy, the atoms to gain it, for this is just the mechanism by which thermal equilibrium and equipartition tend to be brought about. If there are enough elastic collisions before the electrons are slowed down by an inelastic collision, the atoms or molecules will tend to get into thermal equilibrium, as far as their translation is concerned, corresponding to an extremely high temperature. That such an equilibrium is actually set up is observed by noticing that the fast electrons in an arc have a distribution of velocities approximating a Maxwellian distribution. But apparently the inelastic collisions between molecules, or between electrons and molecules, are not effective enough to give the molecules the amount of rotational energy suitable to equipartition, in the short length of time in which a molecule is in the arc, before it diffuses to the wall or otherwise can cool off. For the rotational energy can be observed by band spectrum observation, and in some cases it is found that it corresponds to rather cool gas, though the translational energy corresponds to a very high temperature.

4. The Equilibrium of Atoms and Electrons. From the cases we have taken up, we see that the kinetics of collisions forms a complicated and involved subject, just as the kinetics of chemical reactions does. Since this is so, it is fortunate that in cases of thermal equilibrium we can get results by thermodynamics which are independent of the precise mechanism, and depend only on ionization potentials and similarly easily measured quantities.
And as we have stated, thermodynamics, in the form of the principle of microscopic reversibility, allows us to get some information about the relation between the probability of a direct process and of its inverse, though we have not tried to make any such calculations. To see how it is possible, we need only notice that every equilibrium constant can be written as the ratio of the rates of two inverse reactions, as we saw from our kinetic derivation of the mass action law in Chap. X, so that if we know the equilibrium constant, from energy considerations, and if we have experimental or theoretical information about the rate of one of the reactions concerned, we can calculate the rate of the inverse without further hypothesis.

A mixture of electrons, ions, and atoms forms a system similar to that which we considered in Chap. X, dealing with chemical equilibrium in gases. Equilibrium is determined, as it was there, by the mass action law. This law can be derived by balancing the rates of direct and inverse collisions, but it can also be derived from thermodynamics, and the equilibrium constant can be found from the heat of reaction and the chemical constants of the various particles concerned. The heats of reaction can be found from the various ionization potentials, quantities susceptible of independent measurement, and the chemical constants are determined essentially as in Chap. VIII. Thus there are no new principles involved in studying the equilibrium of atoms, electrons, and ions, and we shall merely give a qualitative discussion in this section, the statements being equivalent to mathematical results which can be established immediately from the methods of Chap. X.

The simplest type of problem is the dissociation of an atom into a positive ion and an electron. By the methods of Chap.
X, we find for the partial pressures of positive ions, negative electrons, and neutral atoms the relation

P(+)P(-)/P(n) = K_P(T),  (4.1)

where P(+), P(-), P(n) are the pressures of positive ions, electrons, and neutral atoms respectively. From Eq. (2.6), Chap. X, we can find the equilibrium constant K_P explicitly. For the reaction in which one mole of neutral atoms disappears and one mole of positive ions and electrons appears, we have ν_+ = 1, ν_- = 1, ν_n = -1. Then the quantity Σ_j ν_j U_j becomes U_+ + U_- - U_n = I.P., where I.P. stands for the ionization potential, expressed in kilogram-calories per mole, or other thermal unit in which we also express RT. Then we have

K_P(T) = e^(i_+ + i_- - i_n) e^(-I.P./RT).  (4.2)

From Eq. (4.2), we see that the equilibrium constant is zero at the absolute zero, rising very slowly until the temperature becomes of the order of magnitude of the characteristic temperature I.P./R, which as we have seen is of the order of 10,000°. Thus at ordinary temperatures, from Eq. (4.1), there will be very little ionization in thermal equilibrium. This statement does not hold, however, at very low pressures. We can see this if we write our equilibrium relation in terms of concentrations, following Eq. (1.10), Chap. X. Then we have

c_+ c_- / c_n = K_P(T)/RT.  (4.3)

From Eq. (4.3), we see that as the pressure is reduced at constant temperature, the dissociation becomes greater, until finally at vanishing pressure the dissociation can become complete, even at ordinary temperatures. This is a result of importance in astrophysics, as has been pointed out by Saha. In the solar atmosphere, there is spectroscopic evidence of the existence of rather highly ionized elements, even though the temperature of the outer layers of the atmosphere is not high enough for us to expect such ionization at ordinary pressures. However, the pressure in these layers of the sun is extremely small, and for that reason the ionization is abnormally high.
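The pressure dependence just described can be made concrete. Writing the mass action law (4.1) in terms of the fraction x of atoms ionized at total pressure P gives x²P/(1 - x²) = K_P(T), so x = √(K_P/(K_P + P)). The following Python sketch (our own working, with a purely hypothetical value of K_P) shows Saha's point that lowering the pressure at fixed temperature drives the ionization toward completion:

```python
import math

def degree_of_ionization(K_p, P):
    """Fraction ionized for A <-> A+ + e- at total pressure P.

    With P(+) = P(-) = xP/(1+x) and P(n) = (1-x)P/(1+x), the
    mass-action law P(+)P(-)/P(n) = K_P gives x^2 P/(1 - x^2) = K_P,
    hence x = sqrt(K_P / (K_P + P)).
    """
    return math.sqrt(K_p / (K_p + P))

# Hypothetical equilibrium constant, in the same units as P; at fixed
# temperature K_P is fixed, and reducing P raises the ionized fraction.
K = 1e-6
for P in (1.0, 1e-3, 1e-6, 1e-9):
    print(P, degree_of_ionization(K, P))
```

At P comparable to K_P the gas is about half ionized; at vanishing pressure x approaches 1, even though K_P itself, and hence the temperature, is unchanged.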
Another example that can be handled by ordinary methods of chemical equilibrium is the equilibrium between an ion and a neutral atom of another substance, in which the more electropositive atom is the one forming the positive ion in equilibrium. Thus, consider the reaction Li + Ne+ ⇌ Li+ + Ne, in which Li has a much smaller ionization potential than Ne, or is more electropositive. The equilibrium will be given by

P(Li)P(Ne+) / [P(Li+)P(Ne)] = K_P(T),  (4.4)

the pressure canceling in this case. And the equilibrium constant K_P(T) is given by

K_P(T) = e^(i(Li) + i(Ne+) - i(Li+) - i(Ne)) e^((I.P.(Li) - I.P.(Ne))/RT).  (4.5)

Since the ionization potential of neon is greater than that of lithium, the equilibrium constant reduces to zero at the absolute zero, showing that at low temperatures the lithium is ionized, the neon un-ionized. In other words, the element of low ionization potential, or the electropositive element, tends to lose electrons to the more electronegative element, with high ionization potential. This tendency is complete at the absolute zero. At higher temperatures, however, as the mass action law shows, there will be an equilibrium with some of each element ionized.

CHAPTER XXI

ATOMS AND THE PERIODIC TABLE

Interatomic forces form the basis of molecular structure and chemistry, and we cannot understand them without knowing something about atomic structure. We shall for this reason give in this chapter a very brief discussion of the nuclear model of the atom, its treatment by the quantum theory, and the resulting explanation of the periodic table. There is of course not the slightest suggestion of completeness in our discussion; volumes can be written about our present knowledge of atomic structure and atomic spectra, and the student who wishes to understand chemical physics properly should study atomic structure independently.
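The dominant, temperature-dependent factor in Eq. (4.5) is the Boltzmann factor exp((I.P.(Li) - I.P.(Ne))/RT), and its behavior can be sketched numerically. This is our own illustration, not a calculation from the text: the ionization potentials are modern handbook values (about 5.4 eV for Li, 21.6 eV for Ne), and we use the text's conversion of 23.05 kg.-cal. per gram mole per electron volt:

```python
import math

EV_TO_KCAL_PER_MOLE = 23.05   # conversion quoted in Sec. 1 from Eq. (1.1), Chap. IX
R = 1.987e-3                  # gas constant, kg.-cal. per gram mole per degree

def boltzmann_factor(ip_li_ev, ip_ne_ev, T):
    """Temperature-dependent factor exp((I.P.(Li) - I.P.(Ne))/RT) of Eq. (4.5)."""
    delta = (ip_li_ev - ip_ne_ev) * EV_TO_KCAL_PER_MOLE   # negative: Li ionizes more easily
    return math.exp(delta / (R * T))

# Handbook ionization potentials (assumed, not from the text): Li 5.4 eV, Ne 21.6 eV.
for T in (300.0, 3000.0, 30000.0):
    print(T, boltzmann_factor(5.4, 21.6, T))
```

At room temperature the factor is vanishingly small, so essentially all the ionization resides on the lithium; only at temperatures of tens of thousands of degrees does it approach an appreciable fraction of unity, with some of each element ionized.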
Since, however, there are many excellent treatises available on the subject, we largely omit such a discussion here, mentioning only the few points that we shall specifically use.

1. The Nuclear Atom. An atom is an electrical structure, whose diameter is of the order of magnitude of 10^-8 cm., or 1 angstrom unit, and whose mass is of the order of magnitude of 10^-24 gm. More precisely, an atom of unit atomic weight would have a mass 1.66 × 10^-24 gm., and the mass of any atom is this unit, times its atomic weight. Almost all the mass of the atom is concentrated in a small central body called the nucleus, which determines the properties of the atom. The diameter of the nucleus is of the order of magnitude of 10^-13 cm., a quantity small enough so that it can be neglected in practically all processes of a chemical nature. The nucleus carries a positive charge of electricity. This charge is an integral multiple of a unit charge, generally denoted by the letter e, equal to 4.80 × 10^-10 e.s.u. of charge. The integer by which we must multiply this unit to get the charge on the nucleus is called the atomic number, and is often denoted by Z. This atomic number proves to be the ordinal number of the corresponding element in the periodic table of the elements, as used by the chemists. Thus for the first few elements we have hydrogen H, Z = 1; helium He, Z = 2; lithium Li, Z = 3; and so on, up to uranium U, Z = 92, the heaviest natural element. The electric charge of the nucleus, or the atomic number, is the determining feature of the atom chemically, rather than the atomic weight. In a large number of cases, there are several types of nuclei, all with the same atomic number but with different atomic weights. Such nuclei are called isotopes. They prove to have practically identical properties, since most properties depend on the nuclear charge, not its mass.
Almost the only property depending on the mass is the vibrational frequency, as observed in molecular vibrations, specific heat, etc. Thus, different isotopes have different characteristic temperatures and specific heats, but since the masses of different isotopes of the same element do not ordinarily differ greatly, these differences are not very important. Almost the only exception is hydrogen, where the heavy isotope has twice the mass of the light isotope, making the properties of heavy hydrogen, or deuterium, decidedly different from those of ordinary hydrogen. The atomic weights of isotopes are almost exactly whole-number multiples of the unit 1.66 × 10^-24 gm., but the atomic weight measured chemically is the weighted mean of those of its various isotopes, and hence is not a very fundamental quantity theoretically. For our purposes, which are largely chemical, we need not consider the possibility of a change in the properties of a nucleus. But many reactions are known, some spontaneous (natural radioactivity) and some artificial (artificial or induced radioactivity), by which nuclei can be changed, both as to their atomic weight and atomic number, and hence converted from the nuclei of one element to those of another. We shall assume that such nuclear transformations are not occurring in the processes we consider.

In addition to the nucleus, the atom contains a number of light, negatively charged particles, the electrons. An electron has a mass of 9.1 × 10^-28 gm., about 1/1840 of the mass of a nucleus of unit atomic weight. Its charge, of negative sign, has the magnitude of 4.80 × 10^-10 e.s.u., the unit mentioned above. There seem to be no experiments which give information about its radius, though there are some theoretical reasons, not very sound, for thinking it to be of the order of 10^-13 cm.
If the atom is electrically neutral, it must contain just as many electrons as the nucleus has unit charges; that is, the number of electrons equals the atomic number. But it is perfectly possible for the atom to exist with other numbers of electrons than this. If it loses electrons, becoming positively charged, it is a positive ion. It can lose any number of electrons from one up to its total number Z, and we say that it then forms a singly charged, doubly charged, etc., positive ion. A positive ion is a stable structure, like an atom, and can exist indefinitely, so long as it does not come in contact with electrons or matter containing electrons, by means of which it can neutralize itself electrically. On the other hand, an atom can sometimes attach one or more electrons to itself, becoming a singly or multiply charged negative ion. Such a structure tends to be inherently unstable, for it is negatively charged on the whole, repelling electrons and tending to expel its own extra electrons and become neutral again. It is doubtful if any multiply charged negative ions are really stable. On the other hand, a number of elements form stable, singly charged, negative ions. These are the so-called electronegative elements: the halogens F, Cl, Br, I, the divalent elements O and S, and perhaps a few others. These elements have slightly lower energy in the form of a negative ion, as F-, than in the dissociated form of the neutral atom, as F, and a removed electron. The energy difference between these two states is called the electron affinity; as we see, it is analogous to a heat of reaction, for a reaction like

F- ⇌ F + e.  (1.1)

The energy required to remove an electron from a neutral atom is its ionization potential; that required to remove the second electron from a singly charged positive ion is the second ionization potential; and so on.
In each case, the most easily removed electron is supposed to be the one considered, some electrons being much more easily detachable than others. Successive ionization potentials get rapidly larger and larger, for as the ion becomes more highly charged positively, an electron is more strongly held to it by electrostatic forces and requires more work to remove. Ionization potentials and electron affinities, as we have already mentioned, are commonly measured in electron volts, since electrical methods are commonly used to measure them. For the definition of the electron volt and its relation to thermodynamic units of energy, the reader is referred to Eq. (1.1), Chap. IX, where it is shown that one electron volt per atom is equivalent to 23.05 kg.-cal. per gram mole, so that ionization potentials of several electron volts represent heats of reaction, for the reaction in which a neutral atom dissociates into an electron and a positive ion, which are large as measured by thermodynamic standards, as mentioned in the preceding chapter.

2. Electronic Energy Levels of an Atom. The electrons in atoms are governed by the quantum theory and consequently have various stationary states and energy levels, which are intimately related to the excitation and ionization potentials and to the structure of the periodic table. We shall not attempt here to give a complete account of atomic structure, in terms of electronic levels, but shall mention only a few important features of the problem. A neutral atom, with atomic number Z, and Z electrons, each acted on by the other (Z - 1) electrons as well as by the nucleus, forms a dynamical problem which is too difficult to solve except by approximation, either in classical mechanics or in quantum theory.
The most useful approximation is to replace the force acting on an electron, depending as it does on the positions of all other electrons as well as on the one in question, by an averaged force, averaged over all the positions which the other electrons take up during their motion. This on the average is a central force; that is, it is an attraction pulling the electron toward the nucleus, the magnitude depending only on the distance from the nucleus. It is a smaller attraction than that of an electron for a bare nucleus, for the other electrons, distributed about the nucleus, exert a repulsion on the average. Nevertheless, so long as it is a central force, quantum theory can quite easily solve the problem of finding the energy levels and the average positions of the electrons.

An electron in a central field has three quantum numbers, connected with the three dimensions of space. One, called the azimuthal quantum number, is denoted by l, and measures the angular momentum of the electron, in units of h/2π. Just as in the simple rotator, discussed in Sec. 3, Chap. III, the angular momentum must be an integer times h/2π, and here the integer is l, taking on the values 0, 1, 2, . . . For each value of l, we have a series of terms or energy levels, given by integral values of a second quantum number, called the principal or total quantum number, denoted by n, and by convention taking on the values l + 1, l + 2, . . . Following spectroscopic notation, all the levels of a given l value are grouped together to form a series and are denoted by a letter. Thus l = 0 is denoted by s (for the spectroscopic Sharp series), l = 1 by p (for the Principal series), l = 2 by d (for the Diffuse series), l = 3 by f (for the Fundamental series), and l = 4, 5, 6, . . . by g, h, i, . . . , using further letters of the alphabet.
A given energy level of the electron is denoted by giving its value of n, and then the letter giving its l value; thus 3p is a level with n = 3, l = 1. The third quantum number is connected with space quantization, as discussed in Sec. 3, Chap. IX, and is denoted by m_l. Not only the angular momentum l is quantized, but also its component along a fixed direction in space, and this component is equal to m_l·h/2π. The integer m_l can range from +l (when the angular momentum points along the direction in question) to −l (when it is oppositely directed), resulting in 2l + 1 different orientations. Since this is a problem with spherical symmetry, the energy does not depend on the orientation of the angular momentum. Thus the 2l + 1 levels corresponding to a given n and l, but different orientations, all have the same energy, so that the problem is degenerate, and an s level has one sublevel, a p three, a d five, an f seven, etc. This number of levels is really doubled, however, by the electron spin. An electron has an intrinsic permanent magnetism, and associated with it a permanent angular momentum of magnitude (1/2)·h/2π. This can be oriented in either of two opposite directions, giving a component of ±(1/2)·h/2π along a fixed direction. This, as will be seen, is in harmony with the space quantization just described, for the special case l = 1/2. For each stationary state of an electron neglecting spin, we can have the two possible orientations of the spin, so that actually an s level has two sublevels, a p six, a d ten, an f fourteen. These numbers form the basis of the structure of the periodic table.

The energies of these levels can be given exactly only in the case of a single electron rotating about a nucleus of charge Z units, in the absence of other electrons to shield it. In this case, the energy is given by Bohr's formula

    E = −(2π²me⁴/h²)·(Z²/n²) = −Rhc·Z²/n².    (2.1)

In Eq.
(2.1), m is the mass of an electron (9.1 × 10⁻²⁸ gm.), e is its charge (4.80 × 10⁻¹⁰ e.s.u.), and R is the so-called Rydberg number, 109,737 cm.⁻¹, so that Rhc, where c is the velocity of light (3.00 × 10¹⁰ cm. per second), is an energy, equal to 2.17 × 10⁻¹¹ erg, or 13.56 electron volts, or 313 kg.-cal. per gram mole. The zero of energy is the state in which the electron reaches an infinite distance from the nucleus with zero kinetic energy. In all the stationary states, the energy is less than this, or is negative, so that the electron cannot be removed from the atom without supplying energy. The smaller the integer n, the lower the energy, so that the lowest states correspond to n = 1, 2, etc. At the same time, the lower the energy is, the more closely bound to the nucleus the electron is, so that the orbit, or the region occupied by the electron, is small for small values of n. The tightness of binding increases with the nuclear charge Z, as we should expect, and at the same time the size of the orbit decreases. We notice that for a single electron moving around a nucleus, the levels of different series, or different l values, all have the same energy provided they have the same principal quantum number n.

For a central field like that actually encountered in an atom, the energy levels are quite different from those given by Eq. (2.1). They are divided quite sharply into two sorts: low-lying levels corresponding to orbits wholly within the atom, and high levels corresponding to orbits partly or wholly outside the atom. For levels of the first type, the energy is given approximately by a formula of the type

    E = −Rhc·(Z − S)²/n².    (2.2)

In Eq. (2.2), S is called a shielding constant. It measures the effect of the other electrons in reducing the nuclear attraction for the electron in question. It is a function of n and l, increasing from practically zero for the lowest n values to a value only slightly less than Z for the outermost orbits within the atom.
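As a numerical illustration (ours, not the book's), Eqs. (2.1) and (2.2) are simple to evaluate; the sketch below uses the value Rhc = 13.56 electron volts quoted above.

```python
# A minimal sketch of Eq. (2.1), E = -Rhc Z^2/n^2, and its screened
# form Eq. (2.2); the shielding constant S defaults to zero.
RHC_EV = 13.56  # Rydberg energy Rhc in electron volts (the book's value)

def bohr_energy_ev(Z, n, S=0.0):
    """One-electron energy in electron volts, with shielding constant S."""
    return -RHC_EV * (Z - S) ** 2 / n ** 2

print(bohr_energy_ev(1, 1))  # hydrogen ground state: -13.56 eV
print(bohr_energy_ev(1, 2))  # first excited level: -3.39 eV
```

The energies scale as Z², which is why inner electrons of heavier atoms are bound by thousands of volts.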
For levels outside the atom, on the other hand, the energy is given approximately by

    E = −Rhc/(n − δ)².    (2.3)

Here δ is called a quantum defect. It depends strongly on l, but is approximately independent of n in a single series, that is, for a single l value. The value of δ decreases rapidly with increasing l; thus the s series may have a large quantum defect, the p series a considerably smaller one, and the d and higher series may have very small values of δ, for some particular atom. We may illustrate these formulas by Fig. XXI-1, in which the energies of an electron in a central field representing copper are shown as a function of n, on a logarithmic scale.

FIG. XXI-1. Energies of electrons in the copper atom, in Rydberg units, as a function of principal quantum number n. Energies are shown on a logarithmic scale. The energies in the hydrogen atom are shown for comparison.

The sharp break between the two types of energy levels is well shown; 1s, 2s, 2p, 3s, 3p, 3d belong very definitely to the orbits lying within the atom, while the others are outside and are governed approximately by Eq. (2.3). It has been mentioned that the region occupied by the electron's orbit increases in volume as the binding energy becomes less, or as the quantum number n increases. For our later use in studying the sizes of atoms, it is useful to know the size of the orbit quantitatively. These sizes are not definitely determined, for the electron is sometimes found at one point, sometimes at another, in a given stationary state, and all we can give is the distance from the nucleus at which there is the greatest probability of finding it.
This is not given by a simple formula, though it can be computed fairly accurately by wave mechanics; to an approximation, the radius r of maximum charge density or probability is given by

    r_max = a·n²/Z,    (2.4)

for the case of an electron moving about a bare nucleus of charge Z, in an orbit of quantum number n, where a = h²/4π²me² = 0.53 Å. This is the formula connected with the type of orbit whose energy is given by Eq. (2.1). We observe the increase of size with increasing n, and the decrease with increasing nuclear charge, which we have mentioned before. Similarly, if the energy is given by Eq. (2.2), the radius is given approximately by

    r_max = a·n²/(Z − S),    (2.5)

and if formula (2.3) holds for the energy, the radius is

    r_max = a(n − δ)².    (2.6)

We may expect that the radius of the atom, if that expression has a meaning, will be of the order of magnitude of the radius of the largest orbit ordinarily occupied by an electron in the neutral atom. In the case of copper this is the 4s orbit, while in the copper ion it is the 3d. In the next section we tabulate such quantities for the atoms, and in later chapters we shall find these radii of interest in connection with the dimensions of atoms as determined in other ways.

We can now use the energy levels of an electron in a central field in discussing the structure of the atom. At the outset, we must use a fundamental fact regarding electrons: they obey the Fermi-Dirac statistics. That is, no two electrons can occupy the same stationary state. The principle, stated in this form, is often called the Pauli exclusion principle, and it was originally developed to provide an explanation for the periodic table, which we shall discuss in the next section. As a result of the Pauli exclusion principle, there can be only two 1s electrons, two 2s's, six 2p's, etc. We can now describe what is called the configuration of an atom by giving the number of electrons in each quantum state.
In the usual notation, these numbers are written as exponents. Thus the symbol (1s)²(2s)²(2p)⁶(3s)²(3p)⁶(3d)¹⁰(4s) would indicate a state of an atom with two 1s electrons, two 2s, etc., the total number of electrons being 2 + 2 + 6 + 2 + 6 + 10 + 1 = 29, the number appropriate to the neutral copper atom. If all the electrons are in the lowest available energy levels, as they are in the case above, the configuration corresponds to the normal or ground state of the atom. If, on the other hand, some electrons are in higher levels than the lowest possible ones, the configuration corresponds to an excited state. In the simplest case, only one electron is excited; this would correspond to a configuration like (1s)²(2s)²(2p)⁶(3s)²(3p)⁶(3d)¹⁰(5p) for copper. To save writing, the two configurations indicated above would often be abbreviated simply as 4s and 5p, the inner electrons being omitted, since they are arranged as in the normal state. It is possible for more than one electron to be excited; for instance, we could have the configuration which would ordinarily be written as (3d)⁹(4p)(5s) (the 1s, 2s, 2p, 3s, 3p electrons being omitted), in which one of the 3d electrons is excited, say to the 4p level, and the 4s is excited to the 5s level, or in which the 3d is excited to the 5s level, the 4s to the 4p. (On account of the identity of electrons, implied in the Fermi-Dirac statistics, there is no physical distinction between these two ways of describing the excitation.) While more than two electrons can theoretically be excited at the same time, it is very unusual for this to occur. If one or more electrons are entirely removed, so that we have an ion, the remaining electrons will have a configuration that can be indicated by the same sort of symbol that would be used for a complete atom. For example, the normal state of the Cu⁺ ion has the configuration (1s)²(2s)²(2p)⁶(3s)²(3p)⁶(3d)¹⁰.
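The bookkeeping in this notation is easy to mechanize. The sketch below (an illustration of ours, not from the text) parses a configuration written with explicit exponents, totals the electrons, and checks each subshell against the Pauli limit 2(2l + 1):

```python
import re

L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def capacity(letter):
    # Pauli limit: 2l+1 orientations of m_l, times two spin directions
    return 2 * (2 * L_OF[letter] + 1)

def electron_count(config):
    """Total electrons in a configuration like '(1s)2(2s)2(2p)6...(4s)'."""
    total = 0
    for letter, num in re.findall(r'\(\d([spdf])\)(\d*)', config):
        n = int(num) if num else 1   # a bare (4s) means one electron
        assert n <= capacity(letter), "subshell exceeds the Pauli limit"
        total += n
    return total

cu = "(1s)2(2s)2(2p)6(3s)2(3p)6(3d)10(4s)"
print(electron_count(cu))  # 29, the neutral copper atom
```

The same count applied to a configuration ending in (3d)10 instead gives 28, the Cu⁺ ion of the text.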
The energy values which we most often wish are excitation and ionization potentials, the energies required to shift one or more electrons from one level to another, or the differences of energy between atoms or ions in different configurations. We can obtain good approximations to these from our one-electron energy values of Eqs. (2.2) and (2.3). The rule is simple: the energy required to shift an electron from one energy level to another in the atom is approximately equal to the difference of the corresponding one-electron energies. If two electrons are shifted, we simply add the energy differences for the two. This rule is only qualitatively correct, but it is very useful. In particular, since the absolute values of the quantities (2.2) and (2.3) represent the energies required to remove the corresponding electrons from the central field, these same quantities in turn are approximate values of the ionization potentials of the atom. An atom can be ionized by the removal of any one of its electrons. The ordinary ionization potential is the work required to remove the most loosely bound electron; in copper, for instance, the work required to remove the 4s electron from the neutral atom. But any other electron can be removed instead, though it requires more energy. If an atom is bombarded by a fast electron, the most likely type of ionization process is that in which an inner electron is removed, as for instance a 1s, 2s, 2p, etc. For such an ionization, in which the ionization potential may be many thousands of volts, the impinging electron must of course have an energy greater than the appropriate ionization potential. After an inner electron is knocked out in this way, a transition is likely to occur in which one of the outer electrons falls into the empty inner shell, the emitted energy coming off as radiation of very high frequency.
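As a rough numerical check of this rule (our illustration, not the book's), the energy of such radiation can be estimated from differences of the one-electron energies of Eq. (2.2). Taking the same shielding constant S = 1 for both levels is an assumption made here for simplicity, in the spirit of Moseley's law, not a value given in the text:

```python
RHC_EV = 13.56  # Rydberg energy in electron volts, as quoted earlier

def level_energy_ev(Z, n, S):
    # One-electron energy of Eq. (2.2): E = -Rhc (Z - S)^2 / n^2
    return -RHC_EV * (Z - S) ** 2 / n ** 2

def k_radiation_ev(Z, S=1.0):
    # Energy released when an n = 2 electron falls into an n = 1 hole;
    # using a single shielding constant S = 1 is an illustrative assumption.
    return level_energy_ev(Z, 2, S) - level_energy_ev(Z, 1, S)

print(round(k_radiation_ev(29) / 1000, 2))  # copper: roughly 8 keV radiation
```

An estimate of several thousand electron volts for copper is consistent with the "many thousands of volts" quoted above for inner-shell ionization.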
It is in this way that x-rays are produced, and on account of their part in x-ray emission, the inner energy levels are known by a notation derived from x-rays. Thus the 1s electrons are known as the K shell (since the x-rays emitted when an electron falls into the K shell are called the K series of x-rays), the 2s and 2p grouped together are the L shell, the 3s, 3p, and 3d together are the M shell, and so on. In contrast to the x-ray ionization, which is not often important in chemical problems, an impinging electron with only a few volts' energy is likely to excite or ionize the outermost electron. This electron has an energy given approximately by Eq. (2.3), which thus gives roughly the energies of the various excited and ionized levels of the atom. As a matter of fact, the real situation, with all but a few of the simplest atoms, is very much more complicated than would be indicated by Eq. (2.3), on account of certain interactions between the outer electrons of the atom, resulting in what are called multiplets. A given configuration of the electrons, instead of corresponding to a single stationary state of the atom, very often corresponds to a large number of energy levels, grouped more or less closely about the value given by our elementary approximation of one-electron energies. An understanding of this multiplet structure is essential to a real study of molecular structure, but we shall not follow the subject far enough to need it. One principle only will be of value: a closed shell of electrons, by which we mean a shell containing all the electrons it can hold consistent with the Pauli exclusion principle [in other words, a group like (1s)², (2p)⁶, etc.], contributes nothing to the multiplet structure or the complication of the energy levels. Thus an atom all of whose electrons are in closed shells (which, as we shall see in the next section, is an inert gas) has no multiplets, and its energy level is single.
And an atom consisting mostly of closed shells, but with one or two electrons outside them, has a multiplet structure characteristic only of the electrons outside closed shells. Thus the alkali metals, and copper, silver, and gold, all have one electron outside closed shells in their normal state (as we have found that copper has a 4s electron). As a result, all these elements have similar spectra.

3. The Periodic Table of the Elements. In Table XXI-1 we list the elements in order of their atomic numbers, which are given in addition to their symbols. The atoms in the table are arranged in rows and columns in such a way as to exhibit their periodic properties, the diagonal lines being drawn in such a way as to connect atoms of similar properties. Table XXI-2 gives the electron configurations of the normal states of the elements. Table XXI-3 gives the ionization potentials of the various electrons in the lighter atoms, in units of Rhc, the Rydberg energy, mentioned in the preceding section. And Table XXI-4 gives the radii of the various orbits, as computed by wave mechanics. We can now use these tables, and other information, to give a brief discussion of the properties of the elements, particularly in regard to their ability to form ions, which is fundamental in studying their chemical behavior. In this regard, we must remember that low ionization potentials correspond to easily removed electrons, high ionization potentials to tightly held electrons.

TABLE XXI-1. THE PERIODIC TABLE OF THE ELEMENTS

Li 3    Be 4    B 5     C 6     N 7     O 8     F 9     Ne 10
Na 11   Mg 12   Al 13   Si 14   P 15    S 16    Cl 17   A 18

In a way, the most distinctive elements are the inert gases, He, Ne, A, Kr, and Xe. As we see from Table XXI-2, all the electrons in these elements are in closed shells. They form no chemical compounds and have high ionization potentials, showing very small tendency to form ions.
The reason for their stability is fundamentally the fact that electrons in closed shells are difficult to remove, as is shown by an examination of ionization potentials throughout Table XXI-3. That is, closed shells form a very stable structure, difficult to deform in such a way as to form ions or molecules. To see why the inert gases appear where they do in the periodic table, we may imagine that we are building up the periodic table, adding more and more electrons to a nucleus. The first

TABLE XXI-2. ELECTRON CONFIGURATIONS OF THE ELEMENTS, NORMAL STATES

      1s   2s 2p   3s 3p 3d   4s 4p 4d 4f   5s 5p 5d   6s 6p 6d   7s
H      1
He     2
Li     2    1
Be     2    2
B      2    2  1
C      2    2  2
N      2    2  3
O      2    2  4
F      2    2  5
Ne     2    2  6
Na     2    2  6    1
Mg     2    2  6    2
Al     2    2  6    2  1
Si     2    2  6    2  2
P      2    2  6    2  3
S      2    2  6    2  4
Cl     2    2  6    2  5
A      2    2  6    2  6
K      2    2  6    2  6        1
Ca     2    2  6    2  6        2
Sc     2    2  6    2  6  1     2
Ti     2    2  6    2  6  2     2
V      2    2  6    2  6  3     2
Cr     2    2  6    2  6  5     1
Mn     2    2  6    2  6  5     2
Fe     2    2  6    2  6  6     2
Co     2    2  6    2  6  7     2
Ni     2    2  6    2  6  8     2
Cu     2    2  6    2  6 10     1
Zn     2    2  6    2  6 10     2
Ga     2    2  6    2  6 10     2  1
Ge     2    2  6    2  6 10     2  2
As     2    2  6    2  6 10     2  3
Se     2    2  6    2  6 10     2  4
Br     2    2  6    2  6 10     2  5
Kr     2    2  6    2  6 10     2  6
Rb     2    2  6    2  6 10     2  6           1
Sr     2    2  6    2  6 10     2  6           2
Y      2    2  6    2  6 10     2  6  1        2
Zr     2    2  6    2  6 10     2  6  2        2
Cb     2    2  6    2  6 10     2  6  4        1
Mo     2    2  6    2  6 10     2  6  5        1
Ma     2    2  6    2  6 10     2  6  6        1
Ru     2    2  6    2  6 10     2  6  7        1
Rh     2    2  6    2  6 10     2  6  8        1
Pd     2    2  6    2  6 10     2  6 10
TABLE XXI-2. ELECTRON CONFIGURATIONS OF THE ELEMENTS, NORMAL STATES (Continued)

      1s   2s 2p   3s 3p 3d   4s 4p 4d 4f   5s 5p 5d   6s 6p 6d   7s
Ag     2    2  6    2  6 10     2  6 10        1
Cd     2    2  6    2  6 10     2  6 10        2
In     2    2  6    2  6 10     2  6 10        2  1
Sn     2    2  6    2  6 10     2  6 10        2  2
Sb     2    2  6    2  6 10     2  6 10        2  3
Te     2    2  6    2  6 10     2  6 10        2  4
I      2    2  6    2  6 10     2  6 10        2  5
Xe     2    2  6    2  6 10     2  6 10        2  6
Cs     2    2  6    2  6 10     2  6 10        2  6        1
Ba     2    2  6    2  6 10     2  6 10        2  6        2
La     2    2  6    2  6 10     2  6 10        2  6  1     2
Ce     2    2  6    2  6 10     2  6 10  1     2  6  1     2
Pr     2    2  6    2  6 10     2  6 10  2     2  6  1     2
Nd     2    2  6    2  6 10     2  6 10  3     2  6  1     2
Il     2    2  6    2  6 10     2  6 10  4     2  6  1     2
Sa     2    2  6    2  6 10     2  6 10  5     2  6  1     2
Eu     2    2  6    2  6 10     2  6 10  6     2  6  1     2
Gd     2    2  6    2  6 10     2  6 10  7     2  6  1     2
Tb     2    2  6    2  6 10     2  6 10  8     2  6  1     2
Ds     2    2  6    2  6 10     2  6 10  9     2  6  1     2
Ho     2    2  6    2  6 10     2  6 10 10     2  6  1     2
Er     2    2  6    2  6 10     2  6 10 11     2  6  1     2
Tu     2    2  6    2  6 10     2  6 10 12     2  6  1     2
Yb     2    2  6    2  6 10     2  6 10 13     2  6  1     2
Lu     2    2  6    2  6 10     2  6 10 14     2  6  1     2
Hf     2    2  6    2  6 10     2  6 10 14     2  6  2     2
Ta     2    2  6    2  6 10     2  6 10 14     2  6  3     2
W      2    2  6    2  6 10     2  6 10 14     2  6  4     2
Re     2    2  6    2  6 10     2  6 10 14     2  6  5     2
Os     2    2  6    2  6 10     2  6 10 14     2  6  6     2
Ir     2    2  6    2  6 10     2  6 10 14     2  6  7     2
Pt     2    2  6    2  6 10     2  6 10 14     2  6  9     1
Au     2    2  6    2  6 10     2  6 10 14     2  6 10     1
Hg     2    2  6    2  6 10     2  6 10 14     2  6 10     2
Tl     2    2  6    2  6 10     2  6 10 14     2  6 10     2  1
Pb     2    2  6    2  6 10     2  6 10 14     2  6 10     2  2
Bi     2    2  6    2  6 10     2  6 10 14     2  6 10     2  3
Po     2    2  6    2  6 10     2  6 10 14     2  6 10     2  4
--     2    2  6    2  6 10     2  6 10 14     2  6 10     2  5
Rn     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6
--     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6        1
Ra     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6        2
Ac     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6  1     2
Th     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6  2     2
Pa     2    2  6    2  6 10     2  6 10 14     2  6 10     2  6  3     2
U      2    2  6    2  6 10     2  6 10 14     2  6 10     2  6  4     2

TABLE XXI-3.
IONIZATION POTENTIALS OF THE LIGHTER ELEMENTS, IN RYDBERGS

      1s      2s      2p      3s     3p     3d     4s     4p    4d     5s
H    1.00
He   1.81
Li   4.80    0.40
Be   (9.3)   0.69
B    (15.2)  1.29    0.61
C    (22.3)  1.51    0.83
N    (31.1)  1.91    1.07
O    (41.5)  2.10    1.00
F    (53.0)  2.87    1.37
Ne   (66.1)  3.56    1.59
Na   (80.9)  (5.10)  2.79    0.38
Mg   96.0    (6.96)  3.7     0.56
Al   114.8   (9.05)  5.3     0.78   0.44
Si   135.4   (11.5)  7.2     1.10   0.60
P    157.8   (14.2)  9.4     (1.40) (0.65)
S    181.9   (17.2)  11.9    1.48   0.76
Cl   207.9   (20.4)  14.8    1.81   0.96
A    235.7   (23.9)  (18.2)  2.14   1.15
K    265.6   (27.8)  21.5    (2.6)  1.2           0.32
Ca   297.4   (31.9)  25.5    (3.1)  1.9           0.45
Sc   331.2   (36.2)  30      (3.6)  2.7    0.54   0.50
Ti   365.8   (41.0)  33.6    (4.2)  2.6    0.51   0.50
V    402.7   (46.0)  37.9    (4.8)  3.0    0.50   0.52
Cr   441.1   (51.2)  42.3    (5.4)  3.1    0.61   0.50
Mn   481.9   (56.7)  47.4    (6.7)  3.8    0.68   0.55
Fe   523.9   62.5    52.2    6.9    4.1    0.60   0.58
Co   568.1   (68.5)  57.7    7.6    4.7    0.63   0.66
Ni   614.1   74.8    63.2    8.2    5.4    (0.68) 0.64
Cu   661.6   81      68.9    8.9    5.7    0.77   0.57
Zn   711.7   88.4    75.4    10.1   6.7    1.26   0.69
Ga   765.6   (96.0)  84.1    12.4   8.8    1.8    0.87   0.44
Ge   817.6   (104.0) 89.3    13.4   9.5    3.2    1.39   0.60
As   874.0   112.6   97.4    14.9   10.3   3.0    (1.0)  0.74
Se   932.0   (121.9) 108.4   16.7   11.6   3.9    (1.7)  0.70
Br   992.6   (131.5) 117.8   19.1   13.6   5.4    (1.9)  0.87
Kr   (1055)  (141.6) (127.2) (21.4) (15.4) (6.8)  (2.1)  1.03
Rb   1119.4  152.0   137.2   (23.7) 17.4   (8.3)  (2.3)  1.46         0.31
Sr   1186.0  162.9   147.6   26.2   19.6   9.7    2.5    (2.1)        0.42
Y    1256.1  175.8   159.9   30.3   23.3   13     4.7    2.9   0.48   0.49
Zr   1325.7  186.6   170.0   31.8   24.4   13.3   3.8    2.1   0.53   0.51
Cb   1398.5  198.9   181.7   34.7   26.9   15.2   4.3    2.5   (0.5)  (0.5)
Mo   1473.4  211.3   193.7   37.5   29.2   17.1   5.1    2.9   (0.5)  0.54

The ionization potentials tabulated represent in each case the least energy required to remove the electron in question from the atom, in units of the Rydberg energy Rhc (13.54 electron volts). Data for optical ionization are taken from Bacher and Goudsmit, "Atomic Energy States," McGraw-Hill Book Company, Inc., 1932. Those for x-ray ionization are from Siegbahn, "Spektroskopie der Röntgenstrahlen," Springer.
Intermediate figures are interpolated. Interpolated or estimated values are given in parentheses.

TABLE XXI-4. RADII OF ELECTRONIC ORBITS IN THE LIGHTER ELEMENTS
(Angstrom units)

      1s     2s     2p     3s    3p    3d    4s    4p
H    0.53
He   0.30
Li   0.20   1.50
Be   0.143  1.19
B    0.112  0.88   0.85
C    0.090  0.67   0.66
N    0.080  0.56   0.53
O    0.069  0.48   0.45
F    0.061  0.41   0.38
Ne   0.055  0.37   0.32
Na   0.050  0.32   0.28   1.55
Mg   0.046  0.30   0.25   1.32
Al   0.042  0.27   0.23   1.16  1.21
Si   0.040  0.24   0.21   0.98  1.06
P    0.037  0.23   0.19   0.88  0.92
S    0.035  0.21   0.18   0.78  0.82
Cl   0.032  0.20   0.16   0.72  0.75
A    0.031  0.19   0.155  0.66  0.67
K    0.029  0.18   0.145  0.60  0.63        2.20
Ca   0.028  0.16   0.133  0.55  0.58        2.03
Sc   0.026  0.16   0.127  0.52  0.54  0.61  1.80
Ti   0.025  0.150  0.122  0.48  0.50  0.55  1.66
V    0.024  0.143  0.117  0.46  0.47  0.49  1.52
Cr   0.023  0.138  0.112  0.43  0.44  0.45  1.41
Mn   0.022  0.133  0.106  0.40  0.41  0.42  1.31
Fe   0.021  0.127  0.101  0.39  0.39  0.39  1.22
Co   0.020  0.122  0.096  0.37  0.37  0.36  1.14
Ni   0.019  0.117  0.090  0.35  0.36  0.34  1.07
Cu   0.019  0.112  0.085  0.34  0.34  0.32  1.03
Zn   0.018  0.106  0.081  0.32  0.32  0.30  0.97
Ga   0.017  0.103  0.078  0.31  0.31  0.28  0.92  1.13
Ge   0.017  0.100  0.076  0.30  0.30  0.27  0.88  1.06
As   0.016  0.097  0.073  0.29  0.29  0.25  0.84  1.01
Se   0.016  0.095  0.071  0.28  0.28  0.24  0.81  0.95
Br   0.015  0.092  0.069  0.27  0.27  0.23  0.76  0.90
Kr   0.015  0.090  0.067  0.25  0.25  0.22  0.74  0.86

The radii tabulated represent the distance from the nucleus at which the radial charge density (the charge contained in a shell of unit thickness) is a maximum. They are computed from calculations of Hartree, in various papers in the Proceedings of the Royal Society and elsewhere. Since only a few atoms have been computed, most of the values tabulated are interpolated. The interpolation should be fairly accurate for the inner electrons of an atom, but unfortunately is quite inaccurate for the outer electrons, so that these values should not be taken as exact.
two electrons go into the K shell, resulting in He, a stable structure with just two electrons. The next electrons go into the L shell, with its subgroups of 2s and 2p electrons. These electrons are grouped together, for they are not very different from hydrogenlike electrons in their energy, and as we see from Eq. (2.1), the energy of a hydrogen wave function depends only on n, not on l, so that the 2s and 2p would have the same energy in that case. For the real wave functions, as we see from Fig. XXI-1, for example, the energies of 2s and 2p are not very different from each other. The L shell can hold two 2s and six 2p electrons, a total of eight, and is completed at neon, again a stable structure, with two electrons in its K shell and eight in its L shell. The next electrons must go into the still larger M shell. Of its three subgroups, 3s, 3p, and 3d, the 3s and 3p, with 2 + 6 = 8 electrons, have about the same energy, while the 3d is definitely more loosely bound. Thus the 3s and 3p electrons are completed with argon, with two K, eight L, and eight M electrons, again a stable structure and an inert gas. It is in this way that the periodicity with period of eight, which is such a feature of the lighter elements, is brought about. After argon, the order of adding electrons is somewhat peculiar. The next electrons added, in potassium and calcium, go into 4s states, which for those elements have a lower energy than the 3d. But with scandium, the element beyond calcium, the order of levels changes, the 3d becoming somewhat more tightly bound. In all the elements from scandium to copper the new electrons are being added to the 3d level, the normal state having either one or two 4s electrons. For all these elements, the 4s and 3d electrons have so nearly the same energy that the configurations with no 4s electrons, with one, and with two, have approximately the same energy, so that there are many energy levels near the normal state.
At copper, the 3d shell is filled, so that the M shell contains its full number of 2 + 6 + 10 = 18 electrons, and as we have seen from our earlier discussion, there is one 4s electron. The elements following copper add more and more 4s and 4p electrons, until the group of eight 4s and 4p's is filled, at krypton. This is again a stable configuration. After this, very much the same sort of situation is repeated in the atoms from rubidium and strontium through silver, which is similar to copper, and then through xenon, which has a complete M shell, and complete 4s, 4p, 4d, 5s, and 5p shells. Following this, the two electrons added in caesium and barium go into the 6s shell; but then, instead of the next electrons going into the 5d shell as we might expect by analogy with the two preceding groups of the periodic table, they go into the 4f shell, which at that point becomes the more tightly bound one. The fourteen elements in which the 4f shell is being filled are the rare earths, a group of extraordinarily similar elements differing only in the number of 4f electrons, which have such small orbits and are so deeply buried inside
The general rule is simple: atoms tend to gain or lose electrons enough so that the remaining electrons will have a stable structure, like one of the inert gases, or some other atom containing completed groups or subgroups of electrons. The reason is plain from Table XXI-3, at- least as far as the formation of positive ions is concerned: the electrons outside closed shells have much smaller ionization potentials than those in closed shells and are removed by a much smaller amount of energy. Thus the alkali metals, lithium, sodium, potassium, rubidium, and caesium, each have one easily removed electron outside an inert gas shell, and this electron is often lost in chemical processes, resulting in a positive ion. The alkaline earths, beryllium, magnesium, calcium, strontium, and barium, similarly have two easily removable electrons and become doubly charged positive ions. Boron and aluminum lose three electrons. Occasionally carbon and silicon lose four and nitrogen five, but these processes are certainly very rare and perhaps never occur. The electrons become too strongly bound as the shell fills up for them to be removed in any ordinary chemical process. But oxygen sometimes gains two electrons to form the stable neon structure, and fluorine often gains one, forming doubly and singly charged negative ions respectively. Similarly chlorine, bromine, and iodine often gain one electron, and possibly sulphur occasionally gains two. In the elements beyond potassium, the situation is somewhat different. Potassium and calcium tend to lose one and two electrons apiece, to simulate the argon structure. But the next group of elements, from scandium through nickel, ordinarily called the iron group, tend to lose only two or three electrons apiece, rather than losing enough to form a closed shell. 
Nickel contains a completed K, L, and M shell and is itself a rather stable structure, though not so much so as an inert gas; and the next few elements tend to lose enough electrons to have the nickel structure. Thus copper tends to lose one, zinc two, gallium three, and germanium four electrons, being analogous to a certain extent to sodium, magnesium, aluminum, and silicon. Coming to the end of this row, selenium tends to gain two electrons like oxygen and sulphur, and bromine to gain one. Similar situations are met in the remaining groups of the periodic table.

CHAPTER XXII

INTERATOMIC AND INTERMOLECULAR FORCES

One of the most fundamental problems of chemical physics is the study of the forces between atoms and molecules. We have seen in many preceding chapters that these forces are essential to the explanation of equations of state, specific heats, the equilibrium of phases, chemical equilibrium, and in fact all the problems we have taken up. The exact evaluation of these forces from atomic theory is one of the most difficult branches of quantum theory and wave mechanics. The general principles on which the evaluation is based, however, are relatively simple, and in this chapter we shall learn what these general principles are, and see at least qualitatively the sort of results they lead to. There is one general point of view regarding interatomic forces which is worth keeping constantly in mind. Our problem is really one of the simultaneous motion of the nuclei and electrons of the atomic or molecular system. But the electrons are very much lighter than the nuclei and move very much faster. Thus it forms a very good approximation to assume first that the nuclei are at rest, with the electrons moving around them. We then find the energy of the whole system as a function of the positions of the nuclei.
If this energy changes when a particular nucleus is moved, we conclude that there is a force on that nucleus, such that the force times the displacement equals the work done, or change of energy. This force can be used in discussing the motion of the nucleus, studying its translational or vibrational motion, as we have had occasion to do in previous chapters. Our fundamental problem, then, is to find how the energy of a system of atoms changes as the positions of the nuclei are changed. In other words, we must solve the problem of the motion of the electrons around the nuclei, assuming the nuclei are fixed in definite positions. The forces between electrons are essentially electrostatic; there are also magnetic forces, but they are ordinarily small enough that they can practically be neglected. The problem of solving for the motion of the electrons can then be separated into several parts. It is a little difficult to know where to start the discussion, for there is a sort of circular argument involved. Suppose we start by knowing how the electrons move. Then we can find their electrical charge distribution, and from that we can find the electrostatic field at any point of space. But this field is what determines the forces acting on the electrons. And those forces must lead to motions of the electrons which are just the ones we started with. An electric field of this type, leading to motions of the electrons such that the electrons themselves, together with the nuclei, can produce the original field, is sometimes called a self-consistent field. As a first attempt to solve the problem, let us assume that each atom is a rigid structure consisting of a nucleus and a swarm of electrons surrounding it, not affected by the presence of neighboring atoms.
This leads to a problem in pure electrostatics: the energy of the whole system, as a function of the positions of the nuclei, is simply the electrostatic energy of interaction between the charges of the various atoms. This electrostatic energy is sometimes called the Coulomb energy, since it follows directly from Coulomb's law stating that the force between two charges equals the product of the charges divided by the square of the distance between. This first approximation, however, is far from ade- quate, for really tho electrons of each atom will be displaced by the electric fields of neighboring atoms. We shall later, then, have to study this deformation of the atoms and to find tho forces bo t ween tho distorted atoms. 1. The Electrostatic Interactions between Rigid Atoms or Molecules at Large Distances. In this section, we are to find the forces between two atoms or ions or molecules, assuming that each can be represented by a rigid, undistortod distribution of charge. The discussion of these electrostatic, or Coulomb, forces is conveniently divided into two parts. First, we find the electric field of the first charge distribution at all points of space; then, we find the force on the second charge distribution in this field. By fundamental principles of electrostatics, tho force on tho second distribution exerted by the first is equal and opposite to the force on the first exerted by the second, if we make a corresponding calculation of the field exerted by the second on the first. Let us first consider, then, the field of a charge distribution consisting of a number of charges e, located at points with coordinates x lj y^ z t -. Rather than find the field, it is more convenient to compute the potential, the sum of the terms e t -/r t - for the charges, where r is the distance from the charge to the point x, y, z where the potential is being found. 
That is, r_i is the length of a vector whose components are x − x_i, y − y_i, z − z_i, so that we have

r_i² = (x − x_i)² + (y − y_i)² + (z − z_i)²,   (1.1)

and the potential is

φ = Σ_i e_i/r_i.   (1.2)

There is a very important way of expanding the potential (1.2), in case we wish its value at points far from the center of the charge distribution. This is the case which we wish in investigating the forces between two atoms or molecules at a considerable distance from each other. Let us assume, then, that all the charges e_i are located near a point which we may choose to be the origin, so that all the x_i's, y_i's, and z_i's are small, and let us assume that the point x, y, z where we are finding the potential is far off, so that r = √(x² + y² + z²) is large. Then we can expand the potential in power series in x_i, y_i, and z_i, regarded as small quantities. We have

φ = Σ_i e_i [1/r + x_i ∂(1/r_i)/∂x_i + y_i ∂(1/r_i)/∂y_i + z_i ∂(1/r_i)/∂z_i + ⋯].   (1.3)

The derivatives of (1/r_i) are to be computed when

x_i = y_i = z_i = 0.   (1.4)

But from Eq. (1.1) we have

∂(1/r_i)/∂x_i = (x − x_i)/r_i³.

When x_i = 0, this becomes

∂(1/r_i)/∂x_i = x/r³ = (1/r²)(x/r).   (1.5)

On the other hand, we have

∂(1/r)/∂x = −x/r³ = −(1/r²)(x/r).   (1.6)

Thus, comparing Eqs. (1.5) and (1.6), we can rewrite Eq. (1.3) as

φ = (Σe_i)/r − (Σe_i x_i) ∂(1/r)/∂x − (Σe_i y_i) ∂(1/r)/∂y − (Σe_i z_i) ∂(1/r)/∂z + ⋯.   (1.7)

From Eq. (1.7), the potential of the charge distribution depends on the quantities Σe_i, Σe_i x_i, Σe_i y_i, Σe_i z_i, and higher terms such as Σe_i x_i², etc., which we have not written. The quantity Σe_i is simply the total charge of the distribution, and the first term of Eq. (1.7) is the potential of the total charge at a distance r. This term, then, is just what we should have if the total charge were concentrated at the origin. The next three terms can be grouped together. The quantities Σe_i x_i, Σe_i y_i, Σe_i z_i form the three components of a vector, which is known as the dipole moment of the distribution. A dipole is a pair of equal and opposite charges, say of charge +q and −q, separated by a distance d.
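The expansion (1.7) is easy to verify numerically. Here is a minimal sketch (the charges and positions are arbitrary illustrations): far from the cluster, the monopole term plus the dipole term reproduce the exact potential Σe_i/r_i closely.

```python
import math

# Hypothetical cluster of charges e_i near the origin.
charges = [(1.0, (0.1, 0.0, 0.0)), (-0.4, (0.0, 0.05, 0.0)), (0.2, (-0.03, 0.0, 0.02))]

def exact_potential(x, y, z):
    """Sum of e_i / r_i, Eq. (1.2)."""
    return sum(e / math.sqrt((x - xi)**2 + (y - yi)**2 + (z - zi)**2)
               for e, (xi, yi, zi) in charges)

def expanded_potential(x, y, z):
    """Monopole plus dipole terms of Eq. (1.7)."""
    r = math.sqrt(x*x + y*y + z*z)
    q = sum(e for e, _ in charges)                    # total charge, sum of e_i
    px = sum(e * xi for e, (xi, _, _) in charges)     # dipole moment components
    py = sum(e * yi for e, (_, yi, _) in charges)
    pz = sum(e * zi for e, (_, _, zi) in charges)
    # Since -d(1/r)/dx = x/r**3, the dipole term is (p . r) / r**3.
    return q / r + (px * x + py * y + pz * z) / r**3

print(exact_potential(10.0, 5.0, 2.0), expanded_potential(10.0, 5.0, 2.0))
```

At this distance the two agree to a few parts in a hundred thousand; the leftover difference is the quadrupole and higher terms not written in Eq. (1.7).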
For the sake of argument let the charges be located along the x axis, at d/2 and −d/2. Then the three quantities above would be

Σe_i x_i = qd,  Σe_i y_i = Σe_i z_i = 0.

That is, the dipole moment is equal in this case to the product of the charge and the distance of separation, and it points along the axis of the dipole, from the negative to the positive end.

FIG. XXII-1. Lines of force and equipotentials of a dipole.

We now see that as far as the terms written in Eq. (1.7) are concerned, any two distributions with the same net charge and the same dipole moment will have the same potential. In the particular case mentioned above, the potential, using Eqs. (1.6) and (1.7), is (qd/r²)(x/r). Here x/r is simply the cosine of the angle between the radius r and the x axis, a factor depending on the direction but not the magnitude of r. As far as magnitude is concerned, then, the potential decreases as 1/r², in contrast to the potential of a point charge, which falls off as 1/r. Thus at large distances the potential of a dipole is unimportant compared to that of a point charge. In Fig. XXII-1, we show the equipotentials of this field of a dipole and the lines of force, which are at right angles to the equipotentials, and indicate the direction of the force on a point charge in the field. The lines of force are those familiar from magnetostatics, from the problem of the magnetic field of a bar magnet, which can be approximated by a magnetic dipole. In addition to the terms of the expression (1.7), there are terms involving higher powers of x_i, y_i, and z_i, and at the same time higher derivatives of 1/r, so that these terms fall off more rapidly with increasing distance. The next terms after the ones written, quadratic in the x_i's, and with a potential falling off as 1/r³, are called quadrupole terms, the corresponding moment being called a quadrupole moment.
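To make the 1/r² falloff concrete, the following sketch (q and d are arbitrary illustrative values) compares the exact potential of the two point charges with the dipole formula (qd/r²)(x/r):

```python
import math

# A hypothetical dipole: +q at (d/2, 0, 0) and -q at (-d/2, 0, 0).
q, d = 2.0, 0.01

def two_charge_potential(x, y, z):
    """Exact potential of the two point charges."""
    rp = math.sqrt((x - d/2)**2 + y*y + z*z)
    rm = math.sqrt((x + d/2)**2 + y*y + z*z)
    return q / rp - q / rm

def dipole_potential(x, y, z):
    """(qd / r**2) * (x / r): dipole moment qd times cos(theta) / r**2."""
    r = math.sqrt(x*x + y*y + z*z)
    return (q * d / r**2) * (x / r)

# Far away (r >> d) the two agree, and doubling r divides the potential by 4.
print(two_charge_potential(3.0, 4.0, 0.0), dipole_potential(3.0, 4.0, 0.0))
print(dipole_potential(6.0, 8.0, 0.0))
```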
We shall not have occasion to use quadrupole moments and hence shall not develop their theory here, though sometimes they are important. Now that we have found the nature of the potential of a charge distribution, we can ask what sorts of cases we are likely to find with real atoms, molecules, and ions. First we consider a neutral atom. Since there are as many electrons as are necessary to cancel the positive nuclear charge, Σe_i is zero, and there is no term in the potential falling off as 1/r. The atom at any instant will have a dipole moment, however; the electrons move rapidly from place to place, and it is unlikely that they would be so arranged at a given instant that the dipole moment was exactly zero, though it is likely to be small, since some electrons will be on one side of the nucleus, others on the other side. On account of the motion of the electrons, this dipole moment will be constantly fluctuating in magnitude and direction. It is not hard to show by wave mechanics that its average value must be zero. Now, for most purposes, we care only about the average dipole moment, for ordinarily we are interested only in the time average force between atoms or molecules, and the fluctuations will average to zero. Thus, generally, we treat the dipole moment of the atom as being zero. Only two important cases come up in which the fluctuating dipole moment is of importance. One does not concern interatomic forces at all: it is the problem of radiation. In Chap. XIX, Sec. 3, we have mentioned that an oscillating electric charge in classical theory radiates energy in the form of electromagnetic waves. It turns out that the oscillating dipole moment which we have mentioned here is closely connected with the radiation of light in the quantum theory.
The frequencies present in its oscillatory motion are those emitted, according to Bohr's frequency condition, and there is a close relation between the amplitude of any definite frequency in the oscillation and the intensity of the corresponding frequency of radiation. The other application of the fluctuating dipole moment comes in the calculation of Van der Waals forces, which we shall consider later. It appears that the fluctuating external field resulting from the fluctuating dipole moment can produce displacements of charge in neighboring atoms, in phase with the fluctuations. The force exerted by the fluctuating field on this displaced charge does not average to zero, on account of the phase relations, but instead results in a net attraction between the molecules, which as we shall see is the Van der Waals attraction. It is rather natural from what we have said that it is possible in wave mechanics to give a formula for the Van der Waals force between two atoms which depends on the probabilities of the various optical transitions which the atoms can make, though we shall not be able to state this formula since it involves too much application of quantum theory. As far as the time average is concerned, we have seen that an atom has no field coming from its net charge or from its dipole moment. As a matter of fact, in most important cases, an atom has no net field at all at external points. The reason is that atoms, at least in the special case where all their electrons are in closed shells, as in inert gas atoms, are spherically symmetrical in their average charge distributions. This can be proved from wave mechanics and is a property of closed shells. But it is a familiar theorem of electrostatics that a spherically symmetrical charge distribution has a field just equal to that which it would have if all its charge were placed at the center. Thus a neutral atom has no external field.
The reason is seen easily from Eq. (1.7). Each term of this expression after the first one depends on the angle between the radius vector and the axes. This is plain for the terms written, where we have seen that they vary as the cosines of the angles between the radius and the x, y, and z axes respectively, but it proves to be true also for the remaining terms. But a spherically symmetrical charge distribution must obviously have a spherically symmetrical potential, so that all these terms depending on angles must be zero. In other words, a spherically symmetrical distribution, like an atom, not only has no average dipole moment, but has no average quadrupole moment or moment of any higher order. Next after a neutral atom, we may consider a positive or negative ion of a single atom, such as Na+, Ba++, or Cl-. As we have seen in the preceding chapter, such an ion always has the configuration of an inert gas, and hence is always spherically symmetrical on the average. Thus an ion has no dipole or higher moments, and its potential and field are just as if its whole charge were concentrated at the nucleus. As a next more complicated example, we take a molecule, charged or uncharged, formed from two or more atoms or ions. If the molecule is charged, forming an ion like NH4+, OH-, NO3-, SO4--, etc., then in the first place it has a term in the potential varying as 1/r, determined by the total charge on the ion. In addition to this, the ion or molecule may have a dipole moment. When we come to discussing specific ions and molecules, in later chapters, we shall see which ones have dipole moments and which do not; in general, for there to be a dipole moment different from zero, the ion or molecule must be unsymmetrical in some way, with positive charge localized on one side, negative on the other.
The ions NH4+, NO3-, and SO4--, as we shall see, prove to be very symmetrical, and have no dipole moment, while OH- has a dipole moment, the negative charge being at the oxygen end, the positive at the hydrogen end. Similarly there are some unsymmetrical neutral molecules which have dipole moments. An example is HCl, in which the H end tends to be positive, the Cl negative. The dipole moments have been measured in many of these cases and are generally found to be much less than one would suppose from a crude ionic picture. One might at first think, for instance, that HCl was made of an H+ and a Cl- ion, joined together without distortion, so that each was spherically symmetrical. Then the resulting charge distribution would have a field at external points like a unit positive charge at the position of the hydrogen nucleus, and a unit negative charge at the chlorine nucleus, and the dipole moment would equal the product of the electronic charge and the internuclear distance. The measured dipole moment is only a small fraction of this, showing that there has been a large distortion of the electronic distribution in the process of forming the molecule. This is the sort of distortion that we must take up in a later section. We see, then, that at a considerable distance a single atom has no electric field, an ion consisting of a single charged atom has a field like a point charge concentrated at its center, and a molecule or ion consisting of several atoms or ions may have in addition a dipole moment, with its accompanying field, as well as having the field of its net charge, if it is an ion. In addition, the molecule or molecular ion may have quadrupole and higher moments. The effect of these is usually small compared to the others, but in the case of an uncharged molecule with no dipole moment, the quadrupole term would be the first important one in the expansion of the field.
Having found the nature of the field of an atom or ion, our next problem is to find the forces exerted by this field on another atom or ion, always assuming both to be rigid charge distributions. Fundamentally, the problem is very simple: the force exerted by the field of one atom or ion on each element of charge of the second atom or ion is simply the product of the field intensity and the charge, by definition, and we need merely treat the problem as one in statics, adding the forces vectorially to find the total force on the atom or ion, and adding their moments about the center of gravity to get the resultant moment or torque. Thus, let the potential of the electrostatic field be φ, and let the field strength have components E_x, E_y, E_z, where by well-known methods of electrostatics the field is the negative of the derivative of φ with respect to displacements along the axes, so that the product of the force and the displacement gives the work done, or the negative of the change of potential. That is,

E_x = −∂φ/∂x,  E_y = −∂φ/∂y,  E_z = −∂φ/∂z.   (1.8)

The components E_x, E_y, and E_z will be functions of position. Now assume that the ion or molecule on which the force acts has charges e_i at positions x_i, y_i, z_i, where the origin is chosen to be at the center of gravity of the ion or molecule. Then, for example, the x component of total force on the ion or molecule is the sum of the x components of force on all its charges, and if we write E_x at an arbitrary position by the Taylor expansion

E_x(x, y, z) = E_x(0) + (∂E_x/∂x)x + (∂E_x/∂y)y + (∂E_x/∂z)z + ⋯,   (1.9)

where E_x(0) and the derivatives are all to be computed at the origin, we have the following expression for the total x component of force on the ion or molecule:

F_x = (Σe_i)E_x(0) + (Σe_i x_i)(∂E_x/∂x) + (Σe_i y_i)(∂E_x/∂y) + (Σe_i z_i)(∂E_x/∂z) + ⋯.   (1.10)

The first term in Eq. (1.10) represents the field at the center of gravity, times the total charge. This term of course is zero if the molecule is uncharged.
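Here is a minimal numeric sketch of Eq. (1.10), and of the torque and energy relations for a dipole worked out just below. Everything is hypothetical and unit-free; the one-dimensional field E_x(x) = x² is an arbitrary illustration.

```python
import math

# (1) Net force on a small dipole in a non-uniform field, Eq. (1.10):
# charges +q and -q at x0 +/- d/2, in a hypothetical field E_x(x) = x**2.
def E_x(x):
    return x * x

q, d, x0 = 1.0, 1e-4, 3.0
net_force = q * E_x(x0 + d/2) - q * E_x(x0 - d/2)  # sum of the forces on the two charges
p = q * d                                          # dipole moment
print(net_force, p * (2 * x0))  # dipole moment times dE_x/dx; the two agree

# (2) Torque on a dipole at angle theta to a uniform field E: the energy
# U = -p*E*cos(theta) changes with angle at the rate p*E*sin(theta), the torque.
E, theta, h = 2.0, 0.7, 1e-6
U = lambda t: -p * E * math.cos(t)
numeric_torque = (U(theta + h) - U(theta - h)) / (2 * h)
print(numeric_torque, p * E * math.sin(theta))
```

In a uniform field the two charges feel equal and opposite forces and the first print would give zero; it is the gradient of the field that produces the net pull.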
The next three terms depend on the dipole moment and the rate of change of field strength with position. Their interpretation is very simple. If the field strength is independent of position, the electrostatic forces on the two poles of a dipole will be equal and opposite and will give no net force on the dipole as a whole. But if the field is stronger at one end than at the other, one charge will be pulled more strongly in one direction than the other one is in the other direction, and there will be a net pull on the dipole as a whole. This pull depends on the orientation of the dipole with respect to the external field; if the dipole is reversed in direction, so that each component of its dipole moment changes sign, the dipole terms in the force expression (1.10) change sign, showing that the force is reversed. A dipole is acted on not only by a force, but also by a torque, in an external field, and this torque is proportional to the field strength rather than to its rate of change with position. The x component of this torque, regarded as a vector, is seen to be

M_x = (Σe_i y_i)E_z − (Σe_i z_i)E_y,   (1.11)

showing that the torque is proportional to the dipole moment, the external field, and the sine of the angle between them. To see this in an elementary way, we show in Fig. XXII-2 a simple dipole consisting of charges ±q at a distance of separation d, the line of centers making an angle θ with the external field. Then we see that the field exerts a force of magnitude qE on each charge, with a lever arm (d/2) sin θ, so that the torque exerted on each charge is q(d/2)E sin θ, and the total torque is qdE sin θ, where qd is the dipole moment. The potential energy associated with this torque is

Potential energy = −qdE cos θ,   (1.12)

having a minimum when the dipole points along the direction of the electric field. That is, the field tends to swing the dipole around so that it is parallel to the field.

FIG. XXII-2. Illustrating the torque on a dipole in an external field.

We are now in a position to understand the forces between rigid ions or molecules at a distance from each other. With two ions, of course the largest term in the force is the Coulomb attraction or repulsion between the net charges of the ions: an attraction if the ions have unlike charges, a repulsion if they have like charges. If the molecules are uncharged, however, the largest term in the interaction comes from dipole-dipole interaction. Each dipole is acted on by a torque in the field of the other, and if we look into the situation, we see that these torques are in such directions as to tend to place the dipoles parallel to each other, the positive end of one being closest to the negative end of the other. Also, the dipoles exert a net force on each other, an attraction or repulsion depending on orientation. If the orientation is that of minimum potential energy, with the positive end of one dipole opposite the negative end of the other, the net force will be an attraction, for the attraction between the close unlike charges will more than balance the repulsion of the more distant like charges. We may anticipate by mentioning the sort of application we shall make later to the force between two dipole molecules in a gas. In this case, both dipoles will be rotating. If they rotated uniformly, they would be pointing in one direction just as often as in the opposite direction, so that the net force between them would cancel, since as we have seen this net force changes sign when the dipole reverses its direction. But they will really not rotate uniformly, for there are torques acting on them, tending to keep them in a parallel position. These torques will result in a potential energy term, of the nature of Eq.
(1.12), between them, and if we insert this term into the Maxwell-Boltzmann distribution law, we shall find that the dipoles will be oriented in the position of minimum potential energy more often than in other positions. Thus, on the average, the attractions between the dipoles will outweigh the repulsions, and the net effect of dipole-dipole interaction is an intermolecular attraction. As two molecules or ions get closer and closer together, higher terms in the expansion of the potential and the force become important, and we must consider quadrupoles and higher multipoles. The whole expansion in inverse powers of r, and direct powers of the x_i's, becomes badly convergent when the molecules approach to within a distance comparable to their own dimensions. When the charge distributions of two atoms or molecules really begin to overlap each other, the situation becomes entirely different and must be handled by different methods. We shall take up in the next section the electrostatic or Coulomb interaction of two rigid charge distributions representing atoms, when they approach so closely as to overlap.

2. The Electrostatic or Coulomb Interactions between Overlapping Rigid Atoms. We have seen in the preceding section that two neutral spherically symmetrical atoms exert no forces on each other, so long as they do not overlap and so long as we can treat their charge distributions as being rigid, so that they do not distort each other. Once they overlap, however, this conclusion no longer holds. A rigid neutral atom consists of a positive nucleus surrounded by a spherical negative distribution of charge, just great enough to balance the charge on the nucleus. Such a distribution exerts no electrostatic force at outside points.
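The spherical construction described in the next paragraph can be sketched numerically. This is a rough illustration only: the exponential electron density below is a hypothetical stand-in for a real atomic charge distribution, and everything is in unit-free Gaussian-style units.

```python
import math

# A neutral "atom": point nucleus of charge Z inside a spherically symmetric
# electron cloud with density proportional to exp(-s/a) (hypothetical model).
# By the spherical-shell rule, the field at radius r is (Z - N(r)) / r**2,
# where N(r) is the electronic charge enclosed within radius r.
Z, a = 10.0, 1.0

def enclosed_electrons(r, steps=4000):
    # Midpoint-rule integral of s**2 * exp(-s/a) from 0 to r,
    # normalized so that N(infinity) = Z.
    norm = 2.0 * a**3
    ds = r / steps
    total = sum(((i + 0.5) * ds)**2 * math.exp(-(i + 0.5) * ds / a)
                for i in range(steps)) * ds
    return Z * total / norm

def field(r):
    return (Z - enclosed_electrons(r)) / r**2

# Far outside the cloud the atom looks neutral (field essentially zero);
# near the nucleus the full nuclear charge Z is exposed.
print(field(20.0))
print(field(0.01) * 0.01**2)  # approaches Z
```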
At points within the charge distribution, however, it does exert a force, determined by a well-known rule of electrostatics: the electrostatic field at any point in a spherical distribution of charge is found by constructing a sphere, with center at the center of symmetry, passing through the point where the field is to be found. The charge within the sphere is imagined to be concentrated at the center; that outside the sphere is neglected. Then the electric field is that computed by the inverse square law from the charge concentrated at the center of the sphere, disregarding the outside charge. At a point outside the atom, this reduces to the same result already quoted: the net charge within the sphere is zero, so that there is no field. But as we get closer to the nucleus, we penetrate into the negative charge distribution, so that some of the negative charge lies outside our sphere and is to be disregarded. The charge within the sphere, which we are to imagine concentrated at the center, then has a net positive value, becoming equal to the charge on the nucleus as the sphere grows smaller and smaller. Thus the electric field approaches that of the positive nuclear charge, in the limit of small distances. It is correct to consider that the electrons shield the nucleus at external points, counteracting its field, but this shielding effect decreases as we penetrate the electron shells. At a given distance from the nucleus, the field is like that of a charge of (Z − Z_s) units concentrated at the center, where Z is the nuclear charge, Z_s a shielding constant representing the amount of electronic charge within the sphere, a quantity which decreases from Z to zero as we go from great distances in to the nucleus. This shielding constant Z_s is essentially the same as that introduced in Eq. (2.2), Chap. XXI, where we were considering the effect of electronic shielding on the motion of one of the electrons of the atom.

FIG. XXII-3. Schematic representation of the overlapping of two atoms, in three stages (a), (b), (c). The points represent the nuclei, the circles the regions occupied most densely by the negative electronic charge distributions.

It is now easy, at least in principle, to find the interatomic forces between two rigid atoms whose charge distributions penetrate each other. We simply find the force on each element of the charge of one atom, exerted on it by the field of the other. It is a difficult problem of integration actually to compute this force, but the results are qualitatively simple. Suppose the distributions have only penetrated slightly, as shown in (a), Fig. XXII-3. Then some negative charge of each atom is within the distribution of the other, and hence is attracted by part of the nuclear charge. Thus the first effect of overlapping is an attraction between the atoms. This effect begins to be counteracted in case (b) in the figure, however, when the nucleus of one atom begins to penetrate the charge distribution of the other. For the nucleus will be repelled, not attracted, by the other nucleus. Finally, in case (c), where the atoms practically coincide, there will be great repulsion. For the nuclei will repel very strongly, being very close together, and exposed to all of each other's field, while the electronic distribution of each atom is still at a considerable average distance from the nucleus of the other, and hence is not very strongly attracted. Furthermore, part of the electronic
XXII-4, with a minimum, corresponding to a position of equilibrium, and an infinitely high potential energy as the nuclei are brought into contact. It might be thought at first sight that the curve of Fig. XXII-4, which surely has close resemblance to the Morse curve of Fig. IX-1, would give an adequate explanation of the interatomic forces that hold atoms together into molecules. On closer examination, however, this proves not to be the case. The attractions of Fig. XXII-4 are not nearly strong enough to account for molecular binding, and the distances of separation Internuclecir distance Fio. XXII-4. Schematic representation of the electrostatic 01 Coulomb energy of inter- action of two overlapping rigid atoms, as shown in Fig. XXII-3. are not what they should be. The reason is that our assumption of rigid atoms breaks down completely when the electronic distributions begin to overlap. The charge distribution becomes greatly distorted, and this must be taken into account in calculating the energy and forces. We shall now pass on to a discussion of this distortion, first taking up the effect of polarization, the type of distortion met at large distances of separation, then the effect that is usually called exchange interaction, which is important when atoms overlap. 3. Polarization and Interatomic Forces. An atom or molecule in a uniform external electric field is polarized; that is, it acquires an induced dipole moment, parallel and proportional to the field. This is the phenomenon so well known from electrostatics, when a charge brought near a conductor induces a charge of opposite sign on the near-by parts of the conductor. It is not so marked with an insulator as with a conductor, but it always occurs. It is illustrated in Fig. XXII-5, where we show simply a sphere of matter in an external field, with induced positive charge on the right hand part of it and negative charge on the left. We see that the induced charge is similar to a dipole. 
The induced dipole moment, as we have stated, is proportional to the external field, and the constant of proportionality is called the polarizability and denoted by α, so that the induced dipole moment is α times the external field. The polarization can be brought about in either of two ways. In the first place, the electrons can be displaced in the direction opposite to the field, so that the electronic distribution is distorted or deformed. This is the only mechanism for polarization with atoms or symmetrical molecules. With dipole molecules, however, an additional form of polarization is possible; on account of the Maxwell-Boltzmann distribution, the dipoles can be oriented in such a way as to have a net dipole moment along the direction of the field. We can easily compute the net dipole moment on account of this effect. Let the permanent dipole moment of a molecule be μ. Then, as we saw in Eq. (1.12), its potential energy in a field E is −μE cos θ. Its component of dipole moment along the direction of the field is μ cos θ. If all orientations were equally likely, the average component along the field would be zero. But on account of the Maxwell-Boltzmann distribution, the probability of finding the axis of the dipole in unit solid angle about a given orientation is proportional to e^(μE cos θ/kT).

FIG. XXII-5. Induced polarization of a sphere in an external field.

Thus to find the mean moment, we multiply μ cos θ by the Boltzmann factor above and integrate over solid angles. The solid angle contained between θ and θ + dθ, or the fraction of the surface of a unit sphere between these angles, is 2π sin θ dθ. Thus we have

Mean dipole moment = [∫₀^π μ cos θ e^(μE cos θ/kT) 2π sin θ dθ] / [∫₀^π e^(μE cos θ/kT) 2π sin θ dθ].   (3.1)

The integrals in Eq.
(3.1) can be evaluated at once by substituting cos θ = x (so that −sin θ dθ = dx) and introducing the abbreviation μE/kT = y, from which at once we have

Mean dipole moment = μ [∫₋₁¹ x e^(yx) dx] / [∫₋₁¹ e^(yx) dx] = μ [(e^y + e^(−y))/(e^y − e^(−y)) − 1/y].   (3.2)

The function (3.2) is shown as a function of y, which is proportional to the external field, in Fig. XXII-6. We see from the figure that at low fields the mean dipole moment is proportional to the field, but at high fields there is a saturation, all the dipoles being parallel to the external field. It is only at low fields, where there is proportionality, that we can speak of a polarizability. To get the value of the polarizability, we should find the initial slope of the function (3.2). We easily find, by expanding in power series in y, that for small y the function (3.2) can be approximated by the straight line y/3. Thus, remembering the definition of y, we have the dipole moment at low fields equal to μ²E/3kT and the polarizability equal to μ²/3kT, decreasing with increasing temperature as we should expect.

FIG. XXII-6. The function (e^y + e^(−y))/(e^y − e^(−y)) − 1/y as a function of y = μE/kT, giving the mean dipole moment arising from the rotation of dipole molecules, in units of the dipole moment μ, where E is the field strength, according to Eq. (3.2).

If the part of the polarizability resulting from the electronic distortion is α₀, we then have

α = α₀ + μ²/3kT   (3.3)

as the total polarizability of a molecule. This quantity can be found experimentally, on account of its connection with the dielectric constant. The molecular theory of dielectrics shows that a substance having N molecules in a volume V, each having a polarizability α, will have a dielectric constant equal to

1 + 4πNα/V.   (3.4)

If we measure the dielectric constant as a function of temperature, then, it should be a linear function of 1/T, and from the constants of the curve we can find both the electronic polarizability α₀ and the dipole moment μ.¹

¹ See P. Debye, "Polare Molekeln," Hirzel, 1929, for further discussion of dielectric constants.

It is in this way that the dipole moments of a great many molecules have been determined. We now understand the polarization of a molecule in an external field. Next, we must ask about intermolecular forces resulting from this polarization. If a molecule has an induced dipole moment equal to αE in the direction of the field, then we see from Eq. (1.10) that it is acted on by a force equal to αE(dE/dx), if the x axis is chosen in the direction of the external field and of the dipole. This can be written

F = (α/2) d(E²)/dx,   (3.5)

showing that the force pulls the molecule in the direction in which the magnitude of the field increases most rapidly. In the type of problem we are considering, this means a force of attraction toward the other molecule. The attraction will depend on the nature of the field of the other molecule. Thus suppose we are considering the force between a polarizable molecule and an ion. The field of the ion varies as 1/r², so that E² varies as 1/r⁴; its derivative with respect to x (which in this case is r) is proportional to 1/r⁵, so that the force varies inversely as the fifth power of the distance, and the potential energy inversely as the fourth power. The commoner case, however, is that in which a dipole molecule produces a field that polarizes its neighbors, resulting in an attraction. A molecule of dipole moment μ′ produces a field proportional to μ′/r³ at another molecule. This field results, according to Eq. (3.5), in a force on the second molecule proportional to 3αμ′²/r⁷. The attractive energy will then be proportional to 1/r⁶.
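Two results of this section are easy to check numerically; the sketch below uses hypothetical unit-free values. The bracketed function of Eq. (3.2) is the function plotted in Fig. XXII-6 (often called the Langevin function), and the induced-dipole energy −(α/2)E² reproduces the 1/r⁴ and 1/r⁶ laws just derived.

```python
import math

# The bracketed function of Eq. (3.2): L(y) = coth(y) - 1/y, with y = mu*E/kT.
def langevin(y):
    return 1.0 / math.tanh(y) - 1.0 / y

# Low fields: L(y) ~ y/3, the linear regime giving the polarizability mu**2/3kT;
# high fields: saturation, L(y) -> 1, all dipoles parallel to the field.
print(langevin(0.01), 0.01 / 3)
print(langevin(50.0))

# Distance scaling of the induced-dipole energy -(alpha/2)*E**2 (alpha = 1 here):
def induced_energy(E):
    return -0.5 * E * E

ion_field = lambda r: 1.0 / r**2      # point-charge field: energy ~ 1/r**4
dipole_field = lambda r: 1.0 / r**3   # dipole field: energy ~ 1/r**6, Van der Waals form
print(induced_energy(ion_field(2.0)) / induced_energy(ion_field(1.0)))       # 1/16
print(induced_energy(dipole_field(2.0)) / induced_energy(dipole_field(1.0))) # 1/64
```

The angular factors of the dipole field are dropped here; they change the numerical coefficient but not the powers of r.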
This depends on the angle between the dipole moment of the first molecule and the line of centers of the two, and calculation shows that the average over all directions is given by¹

Energy = −αμ′²/r⁶.   (3.6)

¹ See for instance Slater and Frank, "Introduction to Theoretical Physics," Sec. 301, McGraw-Hill Book Company, Inc., and Pauling and Wilson, "Introduction to Quantum Mechanics," Sec. 47, McGraw-Hill Book Company, Inc., 1935.

Equation (3.6) is essentially the formula for Van der Waals attractions between molecules. There are two distinct cases: the attractions between molecules with or without permanent dipole moments. First let us consider molecules without permanent moments. Even in this case, we have seen in Sec. 1 that the molecule will have a fluctuating dipole moment, which will average to zero. Nevertheless it can polarize another molecule instantaneously, producing an attraction, and the net result, averaged over the fluctuations, will be an attraction given by Eq. (3.6), where α is the electronic polarizability of a molecule, and μ′² the mean square dipole moment. It is significant that it is the mean square moment that is concerned in the attraction, and this mean square is different from zero even when the mean moment vanishes. This attraction is the typical Van der Waals attraction, a force whose potential is inversely proportional to the sixth power of the interatomic distance and is independent of temperature. On the other hand, if we are considering the forces between two dipole molecules, there will be two changes. First, the mean square dipole moment μ′² of the first molecule will now include two terms: the one coming from electronic fluctuations, which we have already considered, and the one coming from the fixed average dipole moment. Thus, in the first place, the external field will be greater than before. Then, in the second place, the polarizability will be given by Eq.
(3.3), including both the electronic polarizability, and that coming from orientation of the fixed dipoles. In many cases the second term proves to be several times as large as the first, but it decreases with increasing temperature. Thus we may expect the Van der Waals attraction between molecules with permanent dipoles to be several times as large as that between similar molecules without the permanent dipoles, and furthermore we may expect the Van der Waals force to decrease with temperature in the dipole case. Both these predictions prove to be borne out by experiment, as we shall see in a later chapter where we take up Van der Waals forces numerically for a variety of molecules.

4. Exchange Interactions between Atoms and Molecules. In the preceding section, we have found how an atom or molecule is distorted in a uniform electric field, and have used this to discuss its distortion in the field of another atom or molecule. This is clearly an approximation, for the field of a molecule is not uniform, though it approaches uniformity at great distances. When two atoms or molecules approach closely, this type of approximation, using merely an induced dipole, becomes very inaccurate. We must consider in this section how the charge distributions of two atoms or molecules are really distorted when they come so close together that they touch or overlap. We shall find that there are two very different types of behavior possible: there may be forces of attraction between the atoms or molecules, tending to bind them together, or there may be forces of repulsion. The first case is that of valence binding, the second the case of the type of repulsion considered in Van der Waals constant b. We shall first take up the simplest case, the interaction of two atoms, then shall pass on to molecular interactions.
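The two predictions of the preceding paragraph can be illustrated numerically from Eq. (3.3), which gives the total polarizability as the electronic part plus an orientation part μ²/3kT. A sketch in SI units; the water-like dipole moment and electronic polarizability used here are illustrative values assumed for the example, not figures from the text:

```python
k = 1.380649e-23    # Boltzmann constant, J/K
mu = 6.2e-30        # permanent dipole moment, C*m (~1.85 debye; assumed, water-like)
alpha_e = 1.6e-40   # electronic polarizability, C*m^2/V (assumed, water-like)

def alpha_total(T):
    # Eq. (3.3): electronic term plus orientation term mu^2 / (3 k T)
    return alpha_e + mu**2 / (3 * k * T)

for T in (200, 300, 400):
    ratio = (alpha_total(T) - alpha_e) / alpha_e
    # the orientation term is many times the electronic one, and falls off as 1/T
    print(T, f"{ratio:.1f}")
```

For these values the orientation term is roughly an order of magnitude larger than the electronic one at room temperature, and halving the temperature doubles it, in line with the statement that the dipole contribution is "several times as large" and decreases with temperature.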
The problems we are now meeting are among the most complicated ones of the quantum theory, and we shall make no attempt at all to treat the analytical background of the theory. When one studies that background, one finds that there are two different approximate methods of calculation used in wave mechanics, sometimes called the Heitler-London method and the method of molecular orbitals respectively. These differ, not in their fundamentals, but in the precise nature of the analytical steps used. For that reason, we shall not discuss them or their differences. We shall rather try to focus our attention on the fundamental physical processes behind the intermolecular actions and shall find that we can understand them in terms of fundamental principles, without reference to exact methods of calculation. Let us, then, begin with the simplest possible problem, the interaction of two atoms. Unless they overlap, the only force between them will be the Van der Waals attraction, coming from the polarization of each atom by the fluctuating dipole moment of the other. This type of interaction persists even when the valence electrons of the atoms do overlap.

FIG. XXII-7. Potential energy of an electron in (a) the central field representing an atom; (b) the field representing two overlapping atoms.

It is simple to describe in words. When the valence electron of one atom is at a given point, the valence electron of the other atom, which of course is repelled by it, tends to stay away from it. Thus the electrons do not approach each other so closely as if the repulsion were absent, and as a result the interaction energy between them is lower than if we neglected this type of interaction.
This effect, tending to keep the electrons in the pair of atoms, or the molecule, away from each other, is sometimes called a correlation effect, since it depends on a correlation between the motions of the two electrons. It results in a lowering of energy or an increase of the strength of the binding between the atoms. As we see, it is the direct extrapolation of the Van der Waals attraction to the case of close approach of the atoms. But as we shall soon see, it is by no means the principal part of the interatomic force but rather forms a fairly small correction term. In discussing the Coulomb interactions between overlapping atoms, in Sec. 2, we saw that as the electronic charge of one atom begins to penetrate into the electron shells of the other, it becomes attracted to the nucleus of the other atom. That is, it is in a region of lower potential energy than it otherwise would be. This is illustrated in Fig. XXII-7. There, in part (a), we show the potential energy of an electron at different points within an atom, taking account of the decrease of shielding as we go closer to the nucleus. In (b), the potential curves of two overlapping atoms are superposed, the resulting curve showing the potential energy of an electron in the combined molecule. We see that the potential energy is lower in the region where they overlap than it would be in the corresponding part of either atom separately. This change of potential energy would mean that the electrons from both atoms were attracted to this region of overlapping, so that the tendency would be for extra electronic charge to concentrate itself in this region. This in turn would decrease the total energy, for it would mean the concentration of more charge in a region of lower potential energy. Thus this would result in an added attraction between the atoms.
We could regard this as an increase in the magnitude of the Coulomb attraction, if we chose, giving a much deeper minimum to the potential energy curve than one finds in Fig. XXII-4. This effect is different from the Van der Waals attraction or correlation effect, in that it depends on the average field rather than on the fluctuating field, distorting or polarizing the charge distribution and hence decreasing the energy. Even this effect, however, is not the whole story. For we have forgotten one essential fact: the electrons obey the Fermi-Dirac statistics, or the Pauli exclusion principle. Let us state the Fermi-Dirac statistics in a very simple form, remembering the existence of the electron spin. We set up a molecular, or rather an electronic, phase space, in which the coordinates of each electron are given by a point. This phase space has cells of volume h³ in it. Then the Fermi-Dirac statistics states that no complexion of the system is possible in which more than one electron, of a given spin orientation, is in the same cell. Since two orientations of the spin are possible, as we saw in Chap. XXI, Sec. 2, this means that at most we can have two electrons in a cell, one of each spin. There is, in other words, a maximum possible density of electrons in phase space. We can translate this statement into one regarding the maximum density of electrons in ordinary coordinate space: for a given range of momenta, there is a maximum possible density of electrons in coordinate space. If we wish to pack more electrons into a region of coordinate space, they must have a different momentum from those already there. If the electrons already present have a low kinetic energy, this means that any additional electrons must have higher kinetic energy, and hence higher total energy, in order not to have the same momenta as those already present.
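The cell counting above can be made quantitative. With two electrons per cell of volume h³, the number of electrons per unit volume of coordinate space whose momenta lie inside a sphere of radius p is n = (2/h³)(4πp³/3); inverting this gives the smallest top momentum that a density n forces on the electrons. A sketch, not in the text; the electron density used is an assumed, metal-like value:

```python
import math

h = 6.62607015e-34   # Planck's constant, J*s

def max_density(p):
    # maximum electrons per m^3 with momentum below p: two per cell of volume h^3
    return (2.0 / h**3) * (4.0 / 3.0) * math.pi * p**3

def min_momentum(n):
    # smallest top momentum (in kg*m/s) needed to hold a coordinate-space density n
    return (3.0 * n * h**3 / (8.0 * math.pi)) ** (1.0 / 3.0)

n = 8.5e28           # electrons per m^3, roughly a metal's conduction-electron density (assumed)
p = min_momentum(n)
print(f"{p:.2e}")    # any electrons added beyond density n must exceed this momentum
```

The two functions are exact inverses of each other, which is the content of the statement that packing more electrons into a region requires them to take higher momenta.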
The exclusion principle in this form can be used immediately to discuss the electronic interactions when two atoms begin to overlap. We have already talked about the lowering of the potential energy between two atoms when they begin to overlap, and have illustrated it in Fig. XXII-7. And we have stated that there is a tendency for electrons to be concentrated in this region, seeking a lower potential energy and hence decreasing the energy of the whole system. But this would involve an increasing density of electrons in the region between the atoms, and from what we have just said, this might well make difficulties with the Fermi-Dirac statistics. We now meet very different situations, depending on whether the electrons in the two atoms already are in closed shells, or not. First let us assume that they are in closed shells. This can be interpreted in terms of the Fermi-Dirac statistics: we choose our cells in the electronic phase space to coincide with the stationary states of the one-electron problem of an electron in a central field. When we state that the electrons are all in closed shells, we mean that there are two electrons, one of each spin, in each of the lowest cells or stationary states, while the higher stationary states are empty. All the region of space occupied by electrons of either atom, then, is filled to such a density with electrons that no additional charge can enter the region without having a higher kinetic energy and total energy than the charge already there. Now let us see how this affects the situation. If two atoms begin to overlap, we can certainly not have the charge shifting into the region between the atoms, for this would involve an increase of charge density. We cannot even have the charge of the two atoms overlap without redistribution, for the same reason.
For this would involve such a large density of charge between the atoms that the electrons would have to increase their kinetic energy, and hence their total energy, considerably. The thing that happens is that some of the charge actually shifts away from the region between the atoms, to the far sides of the nuclei. This involves some increase of kinetic energy, for the electrons must increase the charge density everywhere except between the atoms, to make up for the decrease of density there; it involves increase of potential energy, since electrons are moving away from the region of low potential energy between the nuclei. Nevertheless, it does not mean so much increase of total energy as if the electrons piled up between the atoms. The net effect of this redistribution of charge is an increase of energy and hence a repulsion between the atoms. The effect of which we have spoken, giving a repulsion between atoms all of whose electrons are in closed shells, is the origin of the impenetrability of atoms and of the correction to the perfect gas law made by Van der Waals constant b. It is illustrated in Fig. XXII-8, where in (a) we show the charge distribution surrounding two repelling atoms, by means of contour lines. It is clear that the charge has been forced out of the region between atoms, by the effect of the exclusion principle. As a matter of fact, we can get similar effects, even if the outer electrons are not all in closed shells. Thus, consider two atoms of hydrogen, or of an alkali metal, each with one valence electron. Suppose the electrons of both atoms have their spins oriented in the same way. Then as far as electrons of that spin are concerned, the shells are filled, although they are empty of electrons of the opposite spin. When the electrons begin to overlap, then, there is the same difficulty about an increase of charge density that there would be with really closed shells; the charge distribution becomes distorted as in Fig. XXII-8 (a) and the atoms repel.

FIG. XXII-8. Electronic charge density represented by contours, for (a) two repelling atoms, (b) two attracting atoms.

There really is a mode of interaction of two hydrogen or two alkaline atoms leading to a repulsion of this sort. We show it graphically in Fig. XXII-9 (a). In this figure, we plot the total energy of a pair of hydrogen atoms as a function of the distance of separation. At large distances, the energy is negative, on account of the Van der Waals attraction. But at small distances, when the atoms overlap, the curve (a), indicating the case where the spins of the two electrons are parallel, gives a repulsion.

FIG. XXII-9. Interaction energy of two hydrogen atoms, as a function of the distance of separation. (a) repulsive state, (b) attractive state, with molecular formation.

There is a minimum in this curve, leading to a position of equilibrium between the atoms, but it corresponds to a large interatomic distance and very small binding energy, and does not correspond in any way to the binding of the two atoms to form a molecule. The cases which we have taken up so far are those of repulsion between atoms, resulting from the exclusion principle. But this principle can also operate, in a somewhat less obvious way, to give an attraction between atoms which is even greater than the other forms of attraction previously considered. In the case of two hydrogen or alkaline atoms, which we have just discussed, it may be that the two electrons will have opposite spins. Then the exclusion principle does not operate directly; there is no obstacle in the way of the electrons from the two atoms overlapping as much as they please, since their spins are different.
The electrons are then free to pile up in the region between the nuclei, as we have mentioned before, thus decreasing the potential energy and leading to a binding between the atoms. But the exclusion principle, or Fermi-Dirac statistics, comes into wave mechanics in a more fundamental way than we have indicated, a way that can hardly be explained at all without going much further into wave mechanics than we can. The effect in this case is something of the following sort: if two electrons have the same spin, as we have seen, the exclusion principle prevents them from being in the same cell of phase space. But if they have opposite spins, it operates just in the opposite direction, making electrons tend to occupy the same cell rather than different cells. A hint as to why this should be so can be found from Chap. V, Sec. 6, where we discussed the Einstein-Bose statistics. We remember that that form of statistics resulted merely from the identity of molecules, and that it could be qualitatively described as a tendency for molecules to stick together or condense, as if there were attractive forces between them. If the exclusion principle is added to the principle of identity, the Fermi-Dirac statistics results, leading to something like a repulsion between electrons. Now two electrons of the same spin must obey the exclusion principle, and we have already seen the effect of the resulting repulsion. But two electrons of the opposite spin no longer need to satisfy an exclusion principle, as far as their coordinates and momenta are concerned, and yet they still satisfy a principle of identity. Hence, in essence, they obey the Einstein-Bose statistics and tend to crowd together as closely as possible. The effect of which we have just spoken is often called exchange, on account of a feature in the analytical calculation connected with it, in which the essential term relates to an exchange of electrons between the two atoms.
The exchange interaction results in an additional piling up of electrons in the region of lowest potential energy, between the nuclei, and hence in an addition to the strength of binding. This is indicated in Fig. XXII-8 (b), where the charge distribution for this case of attraction is shown, and in Fig. XXII-9 (b), where we show the potential energy of this type of interaction. The type of attraction which we have in this case is what is generally called homopolar valence attraction, the word "homopolar" meaning that the two atoms in question have the same polarity, rather than one being electropositive and one electronegative, as in attraction between a positive and a negative ion. We can now see the various features involved in it: there is a tendency, from pure electrostatics, for the outer or valence electrons of the atoms to concentrate in the region between the atoms; if the electrons have the same spin this is prevented by the exclusion principle, but if they have opposite spin the exclusion principle indirectly operates to enhance the concentration of charge; since this charge is concentrated in a region of low potential energy, the net result is a binding of the two atoms together; and finally, the correlation effect, analogous to the Van der Waals attraction, tends to keep the electrons out of each others' way, still further decreasing the energy. All these effects result in interatomic attraction at moderate distances of separation. As the distance is further decreased, however, two effects tend to produce repulsion. First, there is the simple effect of the Coulomb forces, discussed in Sec. 2: as the nucleus of one atom begins to penetrate inside the charge distribution of the other, the nuclei begin to repel each other, the repulsion growing stronger as they approach. But secondly, all atoms but the simplest ones have inner, closed shells.
It is only the outer electrons, which are not in closed shells, that can take part in valence attraction. When the atoms come close enough together so that the closed shells begin to overlap each other, the same sort of repulsion produced by the exclusion principle sets in which we have previously mentioned. In many cases this repulsion of closed shells is the major feature in producing the rise of the potential energy curve which is shown in Fig. XXII-9 (b). We have spoken of the effect of the exclusion principle and exchange on the forces between two atoms. Now we shall see what happens with more than two atoms. For the sake of illustration, let two hydrogen atoms be bound into a molecule by the action of exchange forces, and then ask what happens when a third hydrogen atom approaches the molecule. The two electrons of the first two atoms have cooperated to form the valence bond between them. One of these electrons has one spin, the other the other, and they have shifted into the region between the atoms, filling that region up to approximately the maximum density allowed by the exclusion principle, with electrons of both spins. Such a pair of electrons, shared between two atoms, is the picture furnished by the quantum theory for the electron-pair bond. But now imagine a third atom, with its electron, to approach. The spin of this third electron is bound to be the same as that of one of the two electrons of the electron-pair bond. Thus this third atom cannot enter into the attractive, exchange type of interaction with either of the atoms bound into the molecule. Instead, the exclusion principle will force its electron away from the other atoms, and there will be repulsion between them. This effect is what is called saturation of valence: two electrons, and no more, can enter into an electron-pair bond, and once such a bond is formed, the electrons concerned in it can form no more bonds.
It might have been, however, that one of the atoms concerned in the original molecule had two valence electrons which it could share. In that case, one of them would be used up in forming a valence bond with the first hydrogen atom, leaving the other one to form a bond with another hydrogen atom. In this way, we can have atoms capable of forming two or more valence bonds. If all the possible bonds are already formed, however, the structure will act as if all its electrons were in closed shells, and any additional atom or molecule approaching it will be repelled. There is just one case in which the formation of a bond by one electron does not prevent the same electron from taking part in another bond. This is the case of the metallic bond. If two sodium atoms approach, with opposite spins, their electrons form a valence bond between them. But if a third sodium atom approaches, it turns out that the first valence bond becomes partly broken, so that the valence electrons of the first two atoms spend only part of their time in the region between those two atoms and have part of their time left over to form bonds with the new atom. As more and more atoms are added, this effect can continue, the electrons forming bonds which are essentially homopolar in nature but spread out throughout the whole group of atoms, holding them all together. The reason why metallic atoms behave in this way, while nonmetallic ones do not, is probably largely the fact that the valence electrons of metals are less tightly held than in nonmetals, as we can see in Table XXI-3, giving the ionization potentials of the elements, and consequently their orbits are larger, as we see in Table XXI-4. Then the orbit of one atom overlaps other atoms more than in a nonmetal, and it is easier for a number of neighbors to share in valence attraction.
The conspicuous features of the metallic bond are two: first, there is no saturation of valence, so that any number of atoms can be held together, forming a crystal rather than a finite molecule; and secondly, the electron density is not so great as the maximum allowed by the exclusion principle. This second fact makes it possible for electrons to move from point to point without significant increase in their energy, whereas in a molecule held by valence bonds this is impossible, since the electron would have to acquire enough extra energy to rise to a higher quantum state, or more excited cell in phase space, before it could enter regions which already had their maximum density of electrons. This free motion of the electrons in a metal is what leads to its electrical conductivity and its typical metallic properties.

5. Types of Chemical Substances. In the preceding sections we have made a survey of the types of interatomic and intermolecular forces. Now we can correlate these by making a brief catalog of the important types of chemical substances and the sorts of forces found in each case. Following this, the remaining chapters of this book will take up each type of substance in more detail, making both qualitative and quantitative use of the laws of interatomic and intermolecular forces found in each particular case, and deriving the physical properties of the substances as far as possible from the laws of force. The simplest class of substances, in a way, is the class of inorganic salts, or ionic crystals. Familiar examples are NaCl, NaNO₃, BaSO₄. These substances are definitely constructed of ions, as Na⁺, Ba⁺⁺, Cl⁻, NO₃⁻, and SO₄⁻⁻. The ions act on each other by Coulomb attraction or repulsion, and an ion of one sign is always surrounded more closely by neighbors of the opposite sign than by others of the same sign, so that the attractions outweigh the repulsions and the structure holds together.
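The scale of these Coulomb attractions can be sketched numerically: the energy of a single pair of unit charges at a typical ionic separation, compared with thermal energy at room temperature. The separation used below (2.36 Å, roughly that of a gaseous NaCl molecule) is an illustrative value assumed for the example, not a figure from this section:

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
k = 1.380649e-23         # Boltzmann constant, J/K

def coulomb_eV(r):
    # attraction energy of a +e/-e pair at separation r (meters), expressed in eV
    return e**2 / (4 * math.pi * eps0 * r) / e

U = coulomb_eV(2.36e-10)   # assumed Na+ Cl- separation, ~2.36 angstroms
kT = k * 300 / e           # thermal energy at room temperature, in eV
print(f"{U:.1f} eV, {U / kT:.0f} x kT")
```

The pair energy comes out at several electron volts, a few hundred times kT at room temperature, which is why ionic crystals hold together strongly and melt and boil at high temperatures.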
As the distance between ions decreases and they begin to touch each other, they repel, on account of the exclusion principle; for as we have seen, the electrons in an ion form closed shells, and the repulsion between them is the typical repulsion of closed shells. Thus a stable structure can be formed, the electrostatic attractions balancing the repulsions. There is nothing of the nature of saturation of valence; even though two ions, as Na⁺ and Cl⁻, may be bound into a molecule, the electrostatic effect of their charges extends far away from them, since the molecule NaCl has a strong dipole moment. Thus further ions can be attracted, and the tendency is to form an extended structure. If the atoms in this structure are arranged regularly, it is a crystalline solid, the most characteristic form for ionic substances. On the other hand, at higher temperatures, where there is more irregularity, the solid can liquefy, and at high enough temperatures it vaporizes. It is only in the vapor state that we can say that the substance is composed of molecules; in the liquid or solid, each ion is surrounded at equal distances by a number of ions of the opposite sign and there is no tendency of the ions to pair off to form molecules. The electrostatic attractions met in ionic crystals are large, so that the materials are held together strongly, with rather high melting and boiling points. We shall see in a later chapter that we can account for their properties satisfactorily by quite simple mathematical methods. The ionic substances are those in which an electropositive element, as Na, and an electronegative one, as Cl, are held together by electrostatic forces. The other types of substances are compounds of two electronegative elements or of two electropositive ones. The first group, made of electronegative elements, is the group of homopolar compounds.
These are held together by homopolar valence bonds, coming from shared electron pairs, as we have described in the previous section. Since the bonds have the property of saturation, we ordinarily have molecular formation in such cases, the molecules being of quite definitely determined size. Two molecules attract each other only by Van der Waals forces, and if they are brought too closely into contact, they repel each other on account of their closed shells. The simpler compounds of this type, like H₂, O₂, CO, etc., are the materials most familiar as gases. The Van der Waals forces holding the molecules together are rather weak, while the interatomic forces holding the atoms together in the molecule are very strong. Thus a relatively low temperature suffices to pull the molecules apart from each other, or to vaporize the liquid or solid, while an extremely high temperature is necessary to dissociate the molecule or pull its atoms apart. As we go to more complicated cases of compounds held together by homopolar valence, we first meet the organic compounds. They arise on account of the tendency of the carbon atom to hold four other atoms by valence bonds, using its four available electrons in this way. The carbon atoms, on account of their many valences, can form chains and still have other valence bonds available for fastening other atoms to them; in this way the great variety of organic compounds is built up. Another very important class of materials held together by valence bonds contains the minerals, silicates, and glasses, and various refractory materials, like diamond, carborundum, or SiC, and so on. In these cases, the valence bonds hold the atoms into endless chains, sheets, or three-dimensional structures, so that the materials form crystals in their most characteristic form and are held together very tightly.
These materials have very high melting and boiling points, since all the bonds between atoms are the very strong valence bonds, rather than the weak Van der Waals forces as in the molecular substances. Finally we have the metals, made entirely of electropositive atoms. We have seen that these atoms are held together by the metallic bond, similar to the valence bonds but without the properties of saturation. Thus the metals, like the ionic crystals and the silicates, tend to form indefinitely large structures, crystals or liquids, and tend to have high melting and boiling points and great mechanical strength. We have already seen that the same peculiarity of the metallic bond which prevents the saturation of valence, and hence which makes crystal formation possible, also leads to metallic conduction or the existence of free electrons. With this brief summary, we have covered most of the important types of materials. In the next chapter we shall make a detailed study of ionic substances, and in succeeding chapters of the various other sorts of materials, interpreting their properties in terms of interatomic and intermolecular forces.

CHAPTER XXIII

IONIC CRYSTALS

The ionic compounds practically all form crystalline solids at ordinary temperatures, and it is in this form that they have been most extensively studied. The reason for this crystal formation has been seen in the preceding chapter: ionic attractions are long range forces, falling off only as the inverse square of the distance, so that more and more ions tend to be attracted to a minute crystal which has started to form, and the crystal grows to large size, held together by electrostatic attractions throughout its whole volume.
Above the melting point, the liquids of course are held together by the same type of force, and in the gaseous phase undoubtedly the same sort of thing occurs, one molecule tending to attract others, so that presumably there is a strong tendency for the formation of double and multiple molecules in the gas. Unfortunately, however, the liquids and gases of ionic materials have been greatly neglected experimentally, so that there is almost no empirical information with which to compare any theoretical deductions. For this reason, we shall be concerned in this chapter entirely with the solid, crystalline phase of these substances, but venture to express the suggestion that further experimental study of the liquid and gaseous phases would be very desirable. In considering the solids, the first step naturally is to examine the geometrical arrangement of the atoms, or the crystal structure. Then we shall go on to the forces between ions, and the mechanical and thermal properties, first taking up the case of the behavior at the absolute zero, then studying temperature vibrations and thermal effects.

1. Structure of Simple Binary Ionic Compounds. To be electrically neutral, every ionic compound must contain some positive and some negative ions. The simplest ones are those binary compounds that contain one positive and one negative ion. Obviously the positive ion must have lost the same number of electrons that the negative one has gained. Thus monovalent metals form such compounds with monovalent bases, divalent with divalent, and so on. In other words, this group includes compounds of Li⁺, Na⁺, K⁺, Rb⁺, Cs⁺, Cu⁺, Ag⁺, and Au⁺ with F⁻, Cl⁻, Br⁻, I⁻; of Be⁺⁺, Mg⁺⁺, Ca⁺⁺, Sr⁺⁺, Ba⁺⁺, Zn⁺⁺, Cd⁺⁺, Hg⁺⁺ with S⁻⁻, Se⁻⁻, Te⁻⁻; of B⁺⁺⁺, Al⁺⁺⁺, Ga⁺⁺⁺, In⁺⁺⁺, Tl⁺⁺⁺ with N⁻⁻⁻, P⁻⁻⁻, As⁻⁻⁻, Sb⁻⁻⁻, Bi⁻⁻⁻. Even this list contains some negative ions which ordinarily do not really exist, as As⁻⁻⁻, Sb⁻⁻⁻, Bi⁻⁻⁻. Nevertheless some of the compounds in question are found.
One can formally go even further and set up such compounds as carborundum, SiC, as if it were made of Si⁺⁺⁺⁺ and C⁻⁻⁻⁻, or C⁺⁺⁺⁺ and Si⁻⁻⁻⁻. But by the time an atom has gained or lost so many electrons, it turns out that the ionic description does not really apply very well, and we shall see later that such compounds are really better described as homopolar compounds. The positive ions which we have listed above by no means exhaust the list of possibilities, on account of the fact that most of the elements of the iron, palladium, and platinum groups are found as divalent or trivalent positive ions, and consequently form binary oxides, sulphides, selenides, and tellurides in their divalent form, and nitrides and phosphides in their trivalent form. In addition to these single ions, there are a few complex positive ions, which are so much like metallic ions that they can conveniently be grouped with them. Best known of these is the ammonium ion, NH₄⁺, and somewhat less familiar is the analogous phosphonium ion PH₄⁺. The ammonium ion has ten electrons: seven from nitrogen, one from each of the four hydrogens, less one because it is a positive ion. That is, it has just the same number as neon, or as Na⁺. Similarly the phosphonium ion has eighteen, like argon, or K⁺. The hydrogen ions are presumably imbedded in the distribution of negative charge, in a symmetrical tetrahedral arrangement, and do not greatly affect the structure, so that these ions act surprisingly like metallic ions. We shall group these compounds with the binary ones, though really the positive ion is complex. Most of the binary ionic compounds occur in one of four structures, and by far the commonest is the sodium chloride structure. This is shown in Fig. XXIII-1.

FIG. XXIII-1. The sodium chloride structure.
It can be described as a simple cubic lattice, in which alternate positions are occupied by the positive and negative ions. Each ion thus has six nearest neighbors of the opposite sign, and the electrostatic attraction between the ion and its oppositely charged neighbors holds the crystal together. This illustrates a principle which we mentioned in the preceding chapter and which obviously must hold for stability in an ionic crystal: each ion must be surrounded by as many ions of the opposite charge as possible, and the nearest ions of the same sign must be as far away as possible.

SEC. 1] IONIC CRYSTALS 379

The second common structure is the caesium chloride structure. In this structure, ions of one type are located at the corners of a cubic lattice and ions of the other sign at the centers of the cubes. In this structure each ion has eight neighbors of the opposite sign. This structure is shown in Fig. XXIII-2.

FIG. XXIII-2. The caesium chloride structure.

The third and fourth structures are sometimes called the zincblende and the wurtzite structures, on account of these two forms of ZnS. The zincblende structure is also often called the diamond structure, since it is found in diamond and some other crystals. The fundamental features of both structures are similar: each ion is tetrahedrally surrounded by four ions of the opposite sign, as in Fig. XXIII-3 (a). There are a number of ways of joining such tetrahedra to form regular crystals, however. The diamond, or zincblende, lattice is the simplest of these. In the first place, tetrahedra like Fig. XXIII-3 (a) can be formed into sheets like Fig. XXIII-3 (b) and (c), the latter looking straight down along the vertical leg of the tetrahedron, so that the ion at the center and that directly above it coincide in the figure. Then in the diamond structure, the next layer up is just like that shown, but shifted along so that atoms like a, b, c coincide with a', b', c'.
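The coordination numbers just quoted, six for the sodium chloride structure and eight for the caesium chloride structure, are easy to verify by direct enumeration. The following sketch counts nearest neighbors of opposite sign around one ion; the function name and cell conventions are our own, not from the text:

```python
from itertools import product

def coordination(structure):
    """Count nearest neighbors of opposite sign for an ion at the origin.

    Coordinates are in units of the simple cubic cell edge; this is an
    illustrative enumeration, not crystallographic code.
    """
    if structure == "NaCl":
        # Alternating simple cubic sites: the parity of n1+n2+n3 gives the
        # sign, so opposite-sign ions sit at odd-parity points.
        opposite = [n for n in product(range(-2, 3), repeat=3)
                    if sum(n) % 2 == 1]
    elif structure == "CsCl":
        # Ion at a cube corner; ions of the other sign occupy cube centers.
        opposite = [(i + 0.5, j + 0.5, k + 0.5)
                    for i, j, k in product(range(-2, 2), repeat=3)]
    d2 = [x * x + y * y + z * z for x, y, z in opposite]
    dmin = min(d2)
    return sum(1 for d in d2 if abs(d - dmin) < 1e-9)

print(coordination("NaCl"), coordination("CsCl"))  # 6 8
```

In the caesium chloride case the eight nearest centers lie at a distance of half the cube diagonal, in agreement with the description above.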
The wurtzite structure, on the other hand, has the next layer looking just like the one shown in (b), as far as its projection is concerned, but actually being a mirror image of it in a horizontal plane. When examined in three dimensions, the wurtzite structure proves to be less symmetrical than the diamond structure, in that the vertical direction stands out as a special axis in the crystal. For this reason, the length of the vertical distance between ions in the wurtzite structure does not have to equal the other three distances, while in the diamond structure all distances must be the same. The diamond, or zincblende, structure is shown in Fig. XXIII-3 (d), and the wurtzite structure in (e).

FIG. XXIII-3. The zincblende and wurtzite structures. (a) One ion tetrahedrally surrounded by four others. (b) and (c) Sheets formed from such tetrahedra, in perspective and plan. (d) The zincblende structure (which is identical with the diamond structure if all atoms are alike). (e) The wurtzite structure.

In addition to these two, a number of other structures built of tetrahedra can exist, in which the two types of planes, the one shown in (b) and its mirror image, are arranged in various regular ways through the crystal. Several of these structures are found, for instance, in different forms of carborundum, SiC. We shall now list in Table XXIII-1 some of the binary ionic compounds crystallizing in the four structures just discussed. Under each structure, we shall arrange the compounds according to their valence, starting with monovalent substances. We note that some compounds exist in several polymorphic forms, as ZnS in both zincblende and wurtzite structures. We tabulate not merely the substances crystallizing in each form, but also several quantities characterizing each substance. First we give the distance between nearest neighbors, r0, in angstroms.
This is necessarily the distance between an ion of either sign and the nearest ions of the opposite sign, of which there are six in the sodium chloride structure, eight in the caesium chloride, and four in the zincblende and wurtzite structures. This is followed by a calculated value of r0, discussed in Sec. 2. Finally we give the melting point where this is known. This is simply an indication of the tightness of binding, since a strongly bound material has a high melting point. We notice that the divalent substances consistently have a much higher melting point than the monovalent ones. This is a result of the tighter binding. Since each ion in a divalent crystal has twice as great a charge as in a similar monovalent one, the interionic forces are four times as great, with correspondingly large latent heats of fusion. Since the entropy of fusion is not very different for a divalent crystal from what it is for a monovalent one, this means that the melting point of a divalent crystal must be several times as large as for a monovalent one, as Table XXIII-1 shows it to be.

TABLE XXIII-1. LATTICE SPACINGS OF IONIC CRYSTALS

Sodium Chloride Structure

Material   r0 observed, angstroms   r0 calculated   Melting point, °C
LiF             2.01                   2.10              870
LiCl            2.57                   2.60              613
LiBr            2.75                   2.75              547
LiI             3.00                   3.00              446
NaF             2.31                   2.35              980
NaCl            2.81                   2.85              804
NaBr            2.98                   3.00              755
NaI             3.23                   3.25              651
KF              2.67                   2.65              880
KCl             3.14                   3.15              776
KBr             3.29                   3.30              730
KI              3.53                   3.55              773
RbF             2.82                   2.80              760
RbCl            3.27                   3.30              715
RbBr            3.43                   3.45              682
RbI             3.66                   3.70              642
CsF             3.00                   3.05              684
NH4Cl           3.27                   3.25
NH4Br           3.45                   3.40
NH4I            3.62                   3.65
AgF             2.46                   2.30              435
AgCl            2.77                   2.80              455
AgBr            2.88                   2.95
MgO             2.10                   2.15             2800
MgS             2.60                   2.60
MgSe            2.73                   2.70
CaO             2.40                   2.40             2572
CaS             2.84                   2.85
CaSe            2.96                   2.95
CaTe            3.17                   3.15
SrO             2.58                   2.60             2430
SrS             3.01                   3.05
SrSe            3.12                   3.15
SrTe            3.33                   3.35
BaO             2.77                   2.75             1923
BaS             3.19                   3.20
BaSe            3.30                   3.30
BaTe            3.50                   3.50

Caesium Chloride Structure

CsCl            3.56                   3.55              646
CsBr            3.71                   3.70              636
CsI             3.95                   3.95              621
NH4Cl           3.34                   3.25
NH4Br           3.51                   3.40
NH4I            3.78                   3.65
TlCl            3.33                                     430
TlBr            3.44                                     460

Zincblende Structure

CuCl            2.34                   2.30              422
CuBr            2.46                   2.45              504
CuI             2.62                   2.70              605
BeS             2.10                   2.10
BeSe            2.18                   2.20
BeTe            2.43                   2.40
ZnS             2.35                   2.35             1800
ZnSe            2.45                   2.45
ZnTe            2.64                   2.65
CdS             2.52                   2.50             1750
CdSe            2.62                   2.60
CdTe            2.80                   2.80
HgS             2.53                   2.50
HgSe            2.62                   2.60
HgTe            2.79                   2.80

Wurtzite Structure
(First distance is that to the neighbor along the axis, the second to the three neighbors in the same layer)

NH4F         2.63, 2.76                2.75
BeO          1.64, 1.60                1.65             2570
ZnO          1.94, 2.04                1.90
ZnS          2.36, 2.36                2.35             1850
CdS          2.52, 2.56                2.50             1750
CdSe         2.63, 2.64                2.60

Data regarding crystal structure, here and in other tables in this book, are taken from the "Strukturbericht," issued as a supplement to the Zeitschrift für Kristallographie in several volumes from 1931 onward. This is the standard reference for crystal structure data.

2. Ionic Radii. The first question that we naturally ask about the crystals is, what determines the lattice spacings? Examination of the experimental values shows that these spacings are very nearly additive; that is, we can assign values to radii of the various ions, such that the sums of the radii give the distances between the corresponding ions of the crystals. A possible set of ionic radii is given in Table XXIII-2. Using

TABLE XXIII-2.
IONIC RADII (Angstroms)

Be++  0.20
Mg++  0.70    Na+  1.05    F-   1.30    O--  1.45
Ca++  0.95    K+   1.35    Cl-  1.80    S--  1.90    Zn++  0.45    Cu+   0.50
Sr++  1.15    Rb+  1.50    Br-  1.95    Se-- 2.00    Cd++  0.60    Ag+   1.00
Ba++  1.30    Cs+  1.75    I-   2.20    Te-- 2.20    Hg++  0.60    NH4+  1.45

these radii, which as will be observed are smoothed off to 0 or 5 in the last place, the values r0 calculated of Table XXIII-1 are computed. The agreement between calculated and observed lattice spacings is surely rather remarkable. There have been a good many discussions seeking to show the reasons for the small errors in the table, the departures from additivity. In particular, there are good reasons for thinking that different radii should be used for the sodium chloride structure, where every ion is surrounded by six neighbors, from those used in the zincblende and wurtzite structures, where there are four neighbors. But the comparative success of the calculations of Table XXIII-1, where both types of structure are treated by means of the same radii, shows that these corrections are comparatively unimportant; the significant fact is that the agreement is as good as it is. There is one point in which our assumed values for ionic radii are not uniquely determined. The observed interionic spacings can determine only the sum of the radii for a positive and negative ion of the same valency. We can add any constant to all the radii of the positive ions of one valency, and subtract the same constant from the radii of the negative ions, without changing the computed results. On the other hand, the difference between the assumed radii of two ions of the same sign cannot be changed without destroying agreement with experiment. We have chosen this arbitrary additive constant in such a way as to make the positive ion, as K+, an appropriate amount smaller than the negative ion, such as Cl-, which contains the same number of electrons, considering the tendency for extra negative electrons to be repelled from an atom, making negative ions large, positive ones small. Our estimate is probably
We have chosen this arbitrary additive constant in such a way as to make the positive ion, as K+, an appropriate amount smaller than the negative ion, such as Cl~, which contains the same> number of electrons, considering the tendency for extra negative electrons to be repelled from an atom, making negative ions large, positive ones small. Our estimate is probably 384 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XXIII not reliable, however, and the absolute values should not be taken seri- ously as representing in any way the real radii of the ions. In particular, the ion of Be++ is pretty certainly not so small as its extremely small radius, 0.20 A, would suggest. It is interesting to compare the radii of Table XXIII-2 with those of Table XXI-4. It will be seen that though they are of the same order of magnitude, the ionic radii of Table XXIII-2 are several times the radii of the corresponding orbits in Table XXI-4. Remembering that Table XXI-4 gives the radius of maximum density in the shell, we see that the region occupied by electrons, given by the ionic radii, is several times the sphere whose radius is the radius of maximum density. This is surely a natural situation, since the charge density falls off rather slowly from its maximum. In Table XXIII-2, we can interpolate between the monovalent posi- tive and negative ions to get radii for the inert gas atoms. Thus we find approximately the following: No 1.1 A, A 1.5 A, Kr 1.7 A, Xe 1.95 A. It is interesting to compute the volumes of the inert gas atoms which we should get in this way, and compare with the volumes which we find for them from the constant b of Van der Waals' equation. For neon, for instance, the volume from Table XXIII-2 would be ^r(l.l) 3 = 5.53 X 10~ 24 cc. per atom = 5.53 X 10~ 24 X 6.03 X 10 23 cc. per mole = 3.33 cc. per mole. In Table XXIII-3 we give the volumes computed this way, the values of TABLE XXIII-3. 
VOLUMES OF INERT GAS ATOMS

        Volume from       b       b ÷ volume from    Volume of
        ionic radius              ionic radius       liquid
Ne          3.33         17.1          5.1             16.7
A           8.6          32.2          3.8             28.1
Kr         12.5          39.7          3.2             38.9
Xe         18.8          50.8          2.7             47.5

The volumes computed from ionic radii are interpolated as described in the text from the ionic radii of Table XXIII-2. Values of b are taken from Table XXIV-1. Volumes are in cubic centimeters per mole.

Van der Waals' b, computed from the critical pressure and temperature, the ratio of b to the volume computed from the ionic radius, and finally the molecular volume of the liquid. From the table we see that the values of b, and the volumes of the liquid, agree fairly closely with each other, and are three to five times the volume of the molecule as computed from Table XXIII-2. Since the liquid is a rather closely packed structure, this seems at first sight a little peculiar. The explanation, however, is not complicated. The molecules really are not hard, rigid things but are quite compressible. Thus when they are held together loosely, by small forces, they have fairly large volumes, while when they are squeezed tightly together they have much smaller volumes. Now the ions used in computing Table XXIII-2 are held together by strong electrostatic forces, equivalent to a great many atmospheres external pressure. Thus their atoms are greatly compressed, as we saw in the preceding paragraph; by interpolating, we find volumes for the inert gas atoms in a very compressed state. On the other hand, the real inert gas liquids are held together only by weak Van der Waals forces, which cannot squeeze the atoms to nearly such a compressed state. Thus it is reasonable that the volumes computed from the radii of Table XXIII-2 should be much smaller than the volumes of the liquids. These remarks lead to a conclusion regarding the divalent ions.
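Both the additivity rule of Sec. 2 and the molar-volume arithmetic behind Table XXIII-3 are easy to reproduce. The following is a minimal sketch, using a few radii from Table XXIII-2 and the Avogadro number as quoted in the text; the function names are our own:

```python
import math

# A few ionic radii from Table XXIII-2, in angstroms.
RADII = {"Na+": 1.05, "K+": 1.35, "F-": 1.30, "Cl-": 1.80, "Br-": 1.95}

def predicted_spacing(cation, anion):
    """Additivity rule: the interionic distance r0 as a sum of two radii."""
    return RADII[cation] + RADII[anion]

def molar_volume(radius_angstroms):
    """(4 pi / 3) r^3 per atom, converted to cubic centimeters per mole."""
    per_atom = (4.0 * math.pi / 3.0) * (radius_angstroms * 1e-8) ** 3
    return per_atom * 6.03e23   # Avogadro's number as used in the text

print(round(predicted_spacing("Na+", "Cl-"), 2))  # 2.85, against 2.81 observed
print(round(molar_volume(1.1), 2))                # about 3.4 cc per mole for neon
```

The small difference from the 3.33 cc per mole quoted in the text comes from the rounding of the intermediate value 5.53 × 10⁻²⁴ cc per atom.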
Being doubly charged, the ions in the divalent crystals exert forces much greater than those in the monovalent ones, as we have mentioned earlier, and the ions are correspondingly more squeezed. Thus the sizes of the divalent ions in Table XXIII-2 are really too small in comparison with the monovalent ones, and we cannot get a correct idea of the relative sizes of the ions by studying Table XXIII-2.

3. Energy and Equation of State of Simple Ionic Lattices at the Absolute Zero. The structure of the ionic lattices is so simple that we can make a good deal of progress toward explaining their equations of state theoretically. In this section we shall consider their behavior at the absolute zero of temperature. Then the thermodynamic properties can be derived entirely from the internal energy as a function of volume, as discussed in Chap. XIII, Sec. 3, and particularly in Eq. (3.4) of that chapter. The internal energy at the absolute zero is entirely potential energy, arising from the forces between ions. As we saw in the preceding chapter, these forces are of two sorts. In the first place, there are the electrostatic forces between the charged ions, repulsions between like ions and attractions between oppositely charged ones. The net effect of these forces is an attraction, for each ion is closer to neighbors of the opposite sign, which attract it, than to those of the same sign, which repel it. In addition to this electrostatic force, there are the repulsive forces between ions resulting from the exclusion principle, vanishingly small at large distances, but increasing so rapidly at small distances that they prevent too close an approach of any two ions. First we take up the electrostatic forces. We shall consider only the sodium chloride structure, though other types are not essentially more difficult to work out. Let the charge on an ion be ze, where z = 1 for monovalent crystals, 2 for divalent, etc. We shall now find the electrostatic potential energy of the crystal, by summing the terms ±z²e²/d for all pairs of ions in the crystal, where d is the distance between the ions. We start by choosing a certain ion, say a positive one, and
We shall now find the electrostatic potential energy of the crystal, by summing the terms z*e*/d for all pairs of ions in the crystal, where d is the distance between the ions. We start by choosing a certain ion, say a positive one, and 386 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XXIII summing for all pairs of which this is a member. Let the spacing between nearest neighbors of opposite sign be r. Assume the ion we are picking out is at the origin, and that the $, y, z axes point along the axes of the crystal. Then other ions will.be found for x = n\r, y = n 2 r, z = n 8 r, where n\, w 2 , w 3 are any positive or negative integers. It is easy to see that if n\ + n 2 + n$ is even the other ion will be positive, and if it is odd it will be negative; for this means that increasing one of the three integers by unity, which corresponds to a translation of r along one of the three axes, will change the sign of the ion, which is characteristic of the sodium chloride structure. The distance from the origin to the ion at nir, n 2 r, n$r is of course r\Ai? + n\ + n|, so that the potential energy arising from the pair of ions in question is z 2 e 2 /(rv / ^i + ! + 1)> where the sign is + if n\ + r? 2 + n 3 is even, if it is odd. Now there will be a number of ions at the same distance from the origin, and since the potential is a scalar quantity, these will all contribute equal amounts to the potential energy and can be grouped together. We can easily find how many such ions there are. They all arise from the same set of numerical values of Wi, ^2, fta, but arranged in the different possible orders, with all possible combinations of sign. If rii, n 2 , n 3 are all different and different from zero, there are six ways of arranging them, and each can have either sign, so there are eight possible combinations of signs, making 48 equal terms. 
If two out of the three n's are equal in magnitude, but not zero, there are only three arrangements, but eight sign combinations, making 24 equal terms. If all three arc equal in magnitude, there are still the eight com- binations of signs, making 8 terms. If one of the rz's is zero, there is no ambiguity of sign connected with it, so that there are only half as many possible terms, and if two of the n's arc zero there are only a quarter as many terms. By use of these rules, we can set up Table XXIII-4, giving TABLE XXIII-4. CALCULATION OP ELECTROSTATIC ENERGY, SODIUM CHLORIDE STRUCTURE n\n^n^ Number of terms Distance Contribution to potential energy 1 6 rVT (z*e*/r) X (-6/VT) = -6.00C 1 1 12 rV2 (z*e*/r) X (12/V2) 8.48* 1 1 1 8 r\/3 (zV/r) X (-8/\/3) = -4.62( 200 6 r\/4 <V/r) X (6/V4) 3. OCX 2 1 24 r\/5 (zW/r) X (-24/V6) = -10.73( 2 1 1 24 r\/6 (V/r) X (24/\/6) 9.80C 220 12 r\/8 frV/r) X (12A/8) 4.244 2 2 1 24 r\/9 (rfe/r) X (-24/V9) - -8.00C 222 8 r\/12 (iV/r) X (8A/I5) = 2.31C SBC. 3] IONIC CRYSTALS 387 the values of n\n^n^ arranged in order so that n\ : n* ^ ns; the number of neighbors associated with various combinations of these n's; the distance; and the total contribution of all these neighbors to the potential energy. This table can be easily extended by analogy to any desired distance. On examining Table XXIII-4, we see that the terms show no tendency to decrease as the distance gets greater, though they alternate in sign. It is plainly out of the question to find the total potential energy simply by adding terms, for the series would not converge. But we can adopt a device that brings very rapid convergence. As shown in Fig. XXIII-4, we set up a cube, its faces cutting through planes of atoms. Then if we count each ion on a face of the cube as being half in the cube, each one on an edge as being a quarter inside, and each one at a corner as being one-eighth inside, the total charge within the cube will be zero. 
Enlarging the cube by a distance d all around will then add a volume that contains a net charge of zero, and so contributes fairly little to the total potential at the origin. In other words, if we set up the total potential energy of interaction of the ion at the origin and all ions in such a cube, the result should converge fairly rapidly as the size of the cube is increased.

FIG. XXIII-4. Cube of ions taken from sodium chloride type of lattice.

This is in fact the case. If the cube extends from −r to +r along each axis, the points 100 will be counted as half in the cube; those at 110 will be one quarter in; and those at 111 one-eighth in. Thus the contribution of these terms to the potential energy will be

(z²e²/r)(−6.000/2 + 8.485/4 − 4.620/8) = −1.456 z²e²/r.

If it extends from −2r to +2r, the points 100, 110, 111 will be entirely inside the cube, those at 200, 210, 211 will be half inside, those at 220, 221 one quarter inside, and those at 222 one-eighth inside. Thus for the potential energy we have

(z²e²/r)[−6.000 + 8.485 − 4.620 + (3.000 − 10.733 + 9.798)/2 + (4.243 − 8.000)/4 + 2.310/8] = −1.753 z²e²/r.

The next approximation is found similarly to be −1.714, and successive approximations oscillate slightly but converge rapidly to −1.742 z²e²/r, the correct value, which as we see is very close to the value (3.2).

As we have just seen, the sum of potential energy terms between one positive ion and all its neighbors is

−1.742 z²e²/r.    (3.3)

The number 1.742 is often called Madelung's number, since it was first computed by Madelung.¹ We should have found just the same answer if we had started with a negative ion instead of a positive one, since the signs of all charges would have been changed, and each term involves the product of two charges. Now to get the total energy, we must sum over all pairs of neighbors. Let there be N molecules in the crystal (that is, N positive ions and N negative ions).
Then each of the positive ions contributes the amount (3.3) to the summation, and each of the negative ions contributes an equal amount, so that at first sight we should say that the total energy was 2N(−1.742 z²e²/r). This is incorrect, however, for in adding up the terms in this way we have counted each term twice. Each pair of ions, say ion a and ion b, has been counted once when ion a was the one at the origin, b at another point, and then again when b was at the origin. To correct for this, we must divide our result by two, obtaining

Electrostatic energy = −N × 1.742 z²e²/r.    (3.4)

Next we consider the repulsive forces between ions. The only thing we can say about these is that they are negligible for large interionic distance, and get very large as the distance becomes small. A simple function having this property is 1/d^m, where d is the distance between ions and m is a large integer. We shall tentatively use this function to represent the repulsions. To give results agreeing with experiment, it is found that m must be of the order of magnitude of 8 or 10 in most cases. Thus, let the potential energy between two positive ions at distance d apart be a₊₊/d^m, between two negatives a₋₋/d^m, and between a positive and a negative a₊₋/d^m. The coefficients a are all assumed to be positive, leading to repulsions. Then we can compute the total repulsive potential energy, just as we have computed the electrostatic attraction, only now the series converges so rapidly that we do not have to adopt any special methods of calculation. Thus for the sum of all pairs of ions in which one is a positive ion at the origin, we have

(1/r^m)[6a₊₋ + 12a₊₊/2^(m/2) + 8a₊₋/3^(m/2) + 6a₊₊/4^(m/2) + 24a₊₋/5^(m/2) + 24a₊₊/6^(m/2) + · · ·].    (3.5)

¹ For other methods of computation, see for instance M. Born, "Problems of Atomic Dynamics," Series II, Lecture 4, Massachusetts Institute of Technology, 1926.

To have a specific case to consider, let us take m = 9, which works fairly satisfactorily for most of the crystals.
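These repulsive lattice sums converge fast enough to check by brute-force summation over the lattice. The sketch below reproduces, for m = 9, the two numerical coefficients, near 6.075 and 0.550, that appear in the series of Eq. (3.6); the function name and cutoff are our own choices:

```python
from itertools import product

def repulsive_sums(m=9, L=8):
    """Direct sums of 1/rho^m over the sodium chloride lattice, rho being
    the distance in units of the nearest-neighbor spacing r.

    The parity rule of the text splits the shells: odd n1+n2+n3 means a
    neighbor of opposite sign, even means a neighbor of the same sign.
    """
    opposite = same = 0.0
    for n in product(range(-L, L + 1), repeat=3):
        if n == (0, 0, 0):
            continue
        rho_m = sum(c * c for c in n) ** (m / 2.0)
        if sum(n) % 2:
            opposite += 1.0 / rho_m
        else:
            same += 1.0 / rho_m
    return opposite, same

opp, same = repulsive_sums()
print(round(opp, 3), round(same, 3))   # near 6.075 and 0.550
```

The first shell alone contributes 6 of the 6.075; everything beyond the nearest neighbors is a correction of about one per cent, which is why no special summation device is needed here.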
Then the series (3.5) becomes

(1/r⁹)(6a₊₋ + 0.530a₊₊ + 0.0571a₊₋ + 0.0117a₊₊ + 0.0171a₊₋ + 0.0075a₊₊ + 0.0010a₊₊ + 0.0012a₊₋ + · · ·)
    = (1/r⁹)(6.075a₊₋ + 0.550a₊₊).    (3.6)

There is a similar formula for the sum of all pairs in which one ion is a negative one at the origin, with a₋₋ in place of a₊₊. Then, as before, we can get the total energy by multiplying each of these formulas by N/2. That is, the total repulsive energy is

Repulsive energy = (N/r⁹)[6.075a₊₋ + 0.275(a₊₊ + a₋₋)].    (3.7)

More generally,

Repulsive energy = NA/r^m,    (3.8)

where A is a constant. We can now combine the electrostatic energy from Eq. (3.4) and the repulsive energy from Eq. (3.8) to obtain the total internal energy at the absolute zero,

U₀ = −N 1.742 z²e²/r + NA/r^m.    (3.9)

To make connection with our discussion of the equation of state in Chap. XIII, we should expand Eq. (3.9) in power series about r₀, the value of r at which U₀ has its minimum value. First we have

dU₀/dr = N 1.742 z²e²/r² − mNA/r^(m+1) = 0 when r = r₀.    (3.10)

From Eq. (3.10) we can write A in terms of other quantities, finding

A = 1.742 z²e² r₀^(m−1)/m.    (3.11)

We then have

U₀ = −N 1.742 z²e²[1/r − r₀^(m−1)/(m r^m)].    (3.12)

Expanding Eq. (3.12) in power series in r − r₀, we have

U₀ = −N 1.742 (z²e²/r₀)(1 − 1/m) + N 1.742 z²e²(m − 1)(r − r₀)²/(2r₀³) − · · ·.    (3.13)

Equation (3.13) is of the form of Eq. (3.4), Chap. XIII,

U₀ = U₀₀ + Ncr₀³[(P₁⁰/2)((V − V₀)/V₀)² − (P₂⁰/3)((V − V₀)/V₀)³ + · · ·].    (3.14)

To see the significance of c in this equation, it will be remembered that the volume per molecule is given by

V/N = cr³.    (3.15)

In this case, consider a cube of edge r, with a positive or negative ion at each corner. There are 8 ions at the corners, each counting as if it were one-eighth inside the cube, so that the cube contains just one ion, or half a molecule. In other words, V/N = 2r³, or c = 2, for the sodium chloride structure. It will also be remembered that the quantities P₁⁰ and P₂⁰ in Eq. (3.14) are coefficients in the expansion of the pressure as a function of volume, at the absolute zero of temperature, as shown in Eq. (1.5) of Chap. XIII. These can be found from experiment by Eqs. (1.10) of Chap. XIII. Identifying Eqs. (3.13) and (3.14), we can then solve for U₀₀, the energy at zero pressure at the absolute zero, and for P₁⁰ and P₂⁰, finding

U₀₀ = −N 1.742 (z²e²/r₀)(1 − 1/m),    (3.16)

P₁⁰ = [1.742 z²e²/(9cr₀⁴)](m − 1),    (3.17)

P₂⁰ = [1.742 z²e²/(54cr₀⁴)](m − 1)(m + 10).    (3.18)

4. The Equation of State of the Alkali Halides. The alkali halides, the fluorides, chlorides, bromides, and iodides of lithium, sodium, potassium, rubidium, and caesium, have been more extensively studied experimentally than any other group of ionic crystals. For most of these materials, enough data are available to make a fairly satisfactory comparison between experiment and theory. The observations include the compressibility and its change with pressure, at room temperature, from which the quantities a₁(T), a₂(T) of Eq. (1.1), Chap. XIII, can be found
(1.5) of Chap. XIII. These can be found from experiment by Eqs. (1.10) of Chap. XIII. Identifying Eqs. (3.13) and (3.14), we can then solve for t/oo, the energy at zero pressure at the absolute zero, P? andPJj, finding (3.16) ^ = ^~r(m-l), (3.17) *-^%("-( + W- < 3 - 18 > 4. The Equation of State of the Alkali Halides. The alkali halides, the fluorides, chlorides, bromides, and iodides of lithium, sodium, potas- sium, rubidium, and caesium have been more extensively studied experi- mentally than any other group of ionic crystals. For most of these materials, enough data are available to make a fairly satisfactory com- parison between experiment and theory. The observations include the compressibility and its change with pressure, at room temperature, from which the quantities ai(T), a 2 (T) of Eq. (1.1), Chap. XIII, can be found t/oo = -tfl.742^Yl - A SEC. 4] IONIC CRYSTALS 391 for room temperature; very rough measurements of the change of com- pressibility with temperature, giving the derivative of ai with respect to temperature; the thermal expansion, giving the derivative of ao with respect to temperature; and the specific heat. There are two sorts of comparison between theory and experiment that can be given. In the first place, we have found a number of relations between experimental quantities, not involving the detailed theory of the last section, which we can check. Secondly, we can test the relations of the last section and see whether they are in agreement with experiment. The relations between experimental quantities mostly concern the temperature effects. First, let us consider the specific heat. In Chap. XV, Sec. 3, we have seen that it should be fairly accurate to use a Debye curve for the specific heat of an alkali halido, using the total number of ions in determining the number of characteristic frequencies in that theory. It is, in fact, found that the experimental values fit Debye curves accurately enough so that we shall not reproduce them. 
We can then determine the Debye temperatures from experiment, and in Table XXIII-5 we give these values for NaCl and KC1, the two alkali halides TABLE XXIII-5. DEBYE TEMPERATURES FOR ALKALI HALIDES NaCl, abs. KC1, abs. O/> from specific heat ... Qr> from elastic constants Bo from residual rays 281 305 277 230 227 227 Data are taken from the article by Schrodinger, "Spezifische Warme," in "Handbuch der Physik," Vol. X, Springer, 1926. This volume contains a number of articles bearing on topics taken up in this book and is useful for reference. which occur in a crystalline form in nature and for which most measure- ments have been made. But from Eqs. (3.1), (3.5), and (3.9), Chap. XIV, we have information from which the Debye temperature can be calculated from the elastic constants and the density. These constants are known for NaCl and KC1, and in the table we also give the calculated Debye temperature found from the elastic constants. Finally, in Chap. XV, Sec. 4, we have seen that the frequency of the residual rays should agree with the Debye frequency. In the table we give the observed frequency of the residual rays, in the form of a characteristic temperature. We see that the agreement between the three values of Debye temperature in Table XXIII-5 is remarkably good, indicating the general correctness of our analysis of the vibrational problem. As a matter of fact, the agree- ment is better than we could reasonably expect, on account of the approx- imations made in Debye's theory, and there are many other crystals for which it is not so good, so that we may lay the excellent agreement here partly to coincidence. ' 392 INTRODUCTION TO CHEMICAL PHYSICS [CHAP. XXIII Next we may consider the equation of state. From the compressi- bility and its change with pressure we have the quantities ai, a 2 of Eq. (1.1), Chap. XIII, as we have mentioned before, and from the thermal expansion we have the derivative of ao with respect to temperature, but not its value itself. 
Since the thermal expansion of most of the materials has not been measured as a function of temperature, we cannot integrate the derivative to find values of a₀. The quantity a₀ comes in only as a small correction term in applications, however, and if we are willing to assume Grüneisen's theory we can calculate it to a sufficiently good approximation. From Eq. (1.9) or (1.10) of Chap. XIII, we can find a₀ from the thermal pressure and the compressibility. The thermal pressure P₀, the pressure necessary to reduce the volume to the volume at the absolute zero, is given by Eq. (4.12) of Chap. XIII. For the present case, if there are N molecules, 2N atoms, and 6N degrees of freedom of the atoms, and if we assume according to Grüneisen that all the γⱼ are equal to γ, this equation gives

P₀ = (6Nkγ/V)(T − hν̄/2k),    (4.1)

where ν̄ is a suitable mean of the natural frequencies νⱼ. The Eq. (4.1) is the limiting case suitable for high temperatures, where the thermal expansion is constant, and can be regarded as the integral of Eq. (4.16), Chap. XIII. If we assume as a rough approximation that ν̄ can be replaced by the Debye frequency, Eq. (4.1) leads to

a₀ = a₁P₀ = (6Nkγχ/V)(T − 3Θ_D/8),    (4.2)

where 6Nk is the heat capacity at high temperatures, χ the compressibility, Θ_D the Debye temperature. Equation (4.2) should hold for temperatures considerably above half the Debye temperature and should be fairly accurate at temperatures as high as the Debye temperature, where the specific heat is fairly constant. From Table XXIII-5, we see that the Debye temperatures for these materials are of the order of magnitude of room temperature, so that we should expect Eq. (4.2) to be fairly accurate at room temperature, where the observations have been made. Using the approximation (4.2) and measured values of a₁ and a₂ at room temperature, we can use Eqs. (1.10), Chap. XIII, to find P₀, P₁, and P₂. We find, as a matter of fact, that the term in a₀, in Eq. (1.10) for P₁, is a small correction term, so that to a good approximation P₁ and P₂
(1.10) for Pi, is a small correction term, so that to a gojod approximation Pi and P 2 SEC. 4] IONIC CRYSTALS 393 can be found directly from the observed compressibility and its change with pressure. In Table XXIII-6, we give values of PI and P^ computed TABLE XXIIT-6. QUANTITIES CONCERNED IN EQUATION OF STATE OF ALKALI HALIDES Pi P 2 7 (Grttn- eisen) 7 (Eq. 4.3) m P 2 calculated 7 (Eq. 4.4) LiF 0.652 X 10 12 2.41 X 10 12 1.34 3.02 5.80 1.72 X 10 12 1.97 LiCl... . 0.293 815 1 52 2 11 6 75 819 2 12 LiBr 0.232 0.635 1.70 2.06 6.95 0.655 2.16 NaCl 0.238 0.600 1 63 1.85 7.66 0.700 2 27 NaBr 0.197 0.476 (1 56) 1.75 7.97 0.590 2.33 KF 0.302 885 1.45 2.26 7.90 0.900 2.32 KC1 0.178 402 1.60 1.59 8.75 0.557 2.46 KBr 149 341 1 68 1 62 8 82 468 2 47 KI 0.117 0.259 1.63 1.54 9.15 0.373 2.52 RbBr.. .. 0.126 0.268 (1.37) 1.46 8.82 0.395 2.47 Rbl . 104 226 (1.41) 1.50 9 37 335 2.56 Values of Pi and P* are computed from data of J. C. Slater, Phys. Rev., 23, 488 (1024). Values of y by Gruneisen's method are taken from the article by Gruneisen, "Zustand des festen Korpers," in "Handbuch der Physik," Vol. X, Springer, 1926. Values of m, Pz calculated and the two calculated values of 7, are found as described in the text. in this way, for those of the alkali halides for which suitable measurements have been made. We can now make a calculation that will serve to check Gruneisen's theory of thermal expansion. In the first place, from Eq. (4.16), Chapter XIII or from Eq. (4.2) above, we see that y can be com- puted from the thermal expansion, specific heat, compressibility, and density. But if we assume Debye's theory and neglect the variation of Poisson's ratio with volume, we have seen in Chap. XIV, Eq. (4.6), that we can write y in terms of P\ and P*: (4.3) This gives us two independent ways of computing 7, and if they agree with each other we can conclude that Gruneisen's theory is fairly accurate. 
In Table XXIII-6, we use the thermal expansion, specific heat, and volume per mole of the crystals at room temperature, in order to compute γ by Grüneisen's theory. In the next column we give γ computed by Eq. (4.3). It will be seen that the two sets of numbers agree in order of magnitude, and for most of the crystals they are in rather close quantitative agreement. Putting this in another way, if we knew merely the compressibility and its change with pressure, we could make a good calculation of the thermal expansion; or knowing the thermal expansion and compressibility, we could calculate the change of compressibility with pressure. The only serious discrepancies come with the fluorides, the most incompressible of the crystals, and it will be found in later chapters that this situation holds also for metals: the more incompressible the crystal, the poorer the agreement between the γ computed from the elastic constants and that found from the thermal expansion. Experimentally, the change of compressibility with pressure is greater than we should conclude from the thermal expansion, or conversely the thermal expansion is less than we should suppose from the change of compressibility with pressure. The reason for this discrepancy is not understood, but it presumably arises from inaccuracies in Grüneisen's assumptions, since there is no indication that the experimental error could be great enough to explain the lack of agreement between theory and experiment. The comparisons with experiment which we have made so far do not involve the assumptions of the preceding section about interatomic forces. We shall now see how far those assumptions are correct. From Eqs. (3.17) and (3.18) we can find values of P₁ and P₂ at the absolute zero, in terms of r₀, which we can take from experiment, and the one parameter m. For approximate purposes, we can replace the values of P₁ and P₂ at the absolute zero by the values at room temperature.
Then we can ask whether it is possible to find a single value of m that will reproduce both P₁ and P₂. To test this, we have used P₁ to find a value of m and then have substituted this in Eq. (3.18) to compute a value of P₂, comparing this computed value with experiment. These computed values are given in Table XXIII-6, and it is seen that the values agree as to order of magnitude but not in detail. In other words, our assumption that the repulsive potential varies inversely as a power of r is not very accurate, and to do better one would have to use a function with an extra disposable constant. In the table we give values of m, and it is seen that they are in the neighborhood of 9 for most of the crystals, as we have stated earlier. Using the values (3.17) and (3.18) for P₁ and P₂, and Eq. (4.3), we at once find

γ = m/6 + 1.  (4.4)

Values of γ computed in this way are tabulated in Table XXIII-6, and it is seen that the agreement with the value found from the thermal expansion is only moderately good, much poorer than that found with values computed by Eq. (4.3) from the experimental values of P₂/P₁. In other words, if we had a theoretical formula for the repulsive potential which gave a better value for the change of compressibility with pressure than the inverse power function, it would at the same time give a better value for the thermal expansion. In the preceding paragraph, we have seen that the potential energy curve (3.9) derived from theory gives qualitative but not very good quantitative agreement with experiment for the change of compressibility with pressure, and the thermal expansion. From Eq. (3.16), we can also use this curve to find the energy of the crystal at the absolute zero and zero pressure, U₀₀. The negative of this quantity is the heat of dissociation of such a crystal into ions, which we may call D. Of course, a crystal would not really dissociate in this way if it were heated.
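Equation (4.4) can likewise be checked against the tabulated repulsive exponents m; a short sketch (the values of m are those of Table XXIII-6):

```python
# Sketch: Eq. (4.4), gamma = m/6 + 1, applied to the repulsive exponents m
# of Table XXIII-6; compare with the "gamma (Eq. 4.4)" column there.

def gamma_eq44(m):
    return m / 6.0 + 1.0

for salt, m in [("LiF", 5.80), ("NaCl", 7.66), ("KI", 9.15)]:
    print(salt, round(gamma_eq44(m), 2))
```

The results reproduce the last column of the table to within rounding of the tabulated m's.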
It would dissociate into neutral molecules, for example of NaCl, or possibly into atoms of Na vapor and molecules of Cl₂, instead. Nevertheless, thermochemical measurements are available from which we can get experimental values for D. We may imagine that we go from the crystal to the ionized gas in the following steps, each of which is understood experimentally: (1) we vaporize the crystal, obtaining NaCl molecules, the necessary energy being found from the heat of vaporization, which has been measured; (2) we dissociate the NaCl molecules into atoms, the energy being the heat of dissociation of the diatomic molecule, as used in the Morse curve; (3) we ionize the Na atom to form a positive Na⁺ ion, the energy being the ionization potential; (4) we add the electron so obtained to the chlorine atom, obtaining a Cl⁻ ion, releasing an amount of energy that is called the electron affinity of the chlorine ion. By adding the amounts of energy required for all these processes, we find experimental values that

TABLE XXIII-7. LATTICE ENERGIES OF THE ALKALI HALIDES
(Kilogram-calories per mole)

        Observed   Calculated
LiF       240        238
LiCl      199        191
LiBr      188        180
NaCl      183        179
NaBr      175        169
KF        190        189
KCl       165        163
KBr       159        156
RbBr      154        149
RbI       145        141

The observed values are taken from Landolt-Börnstein's Tables, Dritter Ergänzungsband, Dritter Teil, p. 2870. Calculated values are found by Eq. (3.16), using numerical values from Tables XXIII-1 and XXIII-6.

should agree with the values calculated from Eq. (3.16). In Table XXIII-7 we give a number of these values and the calculated ones. The units are kilogram-calories per mole. The agreement between theory and experiment in these values is quite striking and is one of the most satisfactory results of the theory of ionic crystals. It is not hard to see why the agreement here is so much better than in the calculation of change of compressibility with pressure.
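The four steps just enumerated can be assembled in a few lines. The step energies below are rough modern literature values in kilogram-calories per mole, assumed here purely for illustration; they are not taken from this text.

```python
# Sketch of the four-step cycle described above, for NaCl.  All step
# energies are rough literature values (kcal/mole), assumed only to
# illustrate how D is assembled from measured quantities.

heat_of_vaporization = 55.0    # (1) crystal -> NaCl molecules (rough)
molecular_dissociation = 98.0  # (2) NaCl molecule -> Na + Cl atoms (rough)
ionization_potential = 118.5   # (3) Na -> Na+ + electron (about 5.14 eV)
electron_affinity = 83.3       # (4) Cl + electron -> Cl-, energy released

# D = energy to take the crystal apart into free Na+ and Cl- ions:
# steps (1)-(3) cost energy, step (4) gives some back.
D = (heat_of_vaporization + molecular_dissociation
     + ionization_potential - electron_affinity)
print(round(D))  # compare with the observed 183 kcal/mole of Table XXIII-7
```

With these rough inputs the cycle lands within a few per cent of the tabulated observed value, which is the kind of agreement the table exhibits.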
The heat of dissociation depends on the value of U₀ as a function of V, while the compressibility depends essentially on the second derivative of this curve, and the change of compressibility with pressure on the third derivative. It is a well-known fact that differentiating exaggerates the errors of a curve which is almost, but not quite, correct. It thus seems likely that the energy of Eq. (3.16) is really quite accurate, but that its second and third derivatives are slightly in error.

5. Other Types of Ionic Crystals. In the preceding sections we have been talking about simple binary crystals, formed from a positive and a negative ion of the same valency. Of course, there are many other types of ionic crystals, and we shall not take up the other sorts in nearly such detail. We shall, however, list a number of crystal structures, with the substances crystallizing in them. First we may mention the fluorite structure, named for fluorite, CaF₂, which crystallizes in it. This is one of the simplest crystals, having twice as many ions of one sort as of the other. The structure is shown in Fig. XXIII-5. It can be considered as a cube of calcium ions, with an ion at the center of each face of the cube as well as at the corners, and inside this a cube of 8 fluorine ions. It is really better, however, to consider the neighbors of each ion. As we see from the figure, each fluorine is surrounded tetrahedrally by 4 calciums. On the other hand, each calcium is at the center of a cube of 8 equally distant fluorine ions. Thus each calcium has twice as many fluorine neighbors as each fluorine has calciums, as the chemical formula demands. It is plain that molecules have no more independent existence in such a structure than they do in sodium chloride. In Table XXIII-8 we give the crystals that exist in the fluorite structure and the distances between nearest neighbors. In addition, we tabulate the sum of the ionic radii of Table XXIII-2.
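The geometry just described fixes the nearest-neighbor distance: each fluorine sits one quarter of the way along a body diagonal of the calcium cube, so the Ca-F distance is a√3/4, where a is the cube edge. As a sketch, taking a rough literature value a = 5.46 A for CaF₂ (not from this text):

```python
# Sketch: in the fluorite structure each fluorine sits at the center of a
# calcium tetrahedron, one quarter of the way along the cube body diagonal,
# so the nearest-neighbor distance is d = a*sqrt(3)/4.  The cube edge
# a = 5.46 A for CaF2 is a rough literature value, not from this text.
from math import sqrt

def fluorite_nn_distance(a):
    return a * sqrt(3) / 4

d = fluorite_nn_distance(5.46)
print(round(d, 2))  # compare with the 2.36 A of Table XXIII-8
```

The result agrees with the tabulated nearest-neighbor distance for CaF₂.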
Though these were computed from binary compounds of elements of equal valency, they give fairly good results for the interionic distances even in these rather different compounds.

FIG. XXIII-5. Fluorite structure.

TABLE XXIII-8. SUBSTANCES CRYSTALLIZING IN FLUORITE STRUCTURE

Substance   Distance, angstroms   Distance computed
CaF₂           2.36                  2.25
SrF₂           2.50                  2.42
SrCl₂          3.02                  2.92
BaF₂           2.68                  2.60
CdF₂           2.34
PbF₂           2.57
CeO₂           2.34
PrO₂           2.32
ZrO₂           2.20
ThO₂           2.41
Li₂O           2.00                  2.2
Li₂S           2.47                  2.70
Na₂S           2.83                  2.95
Cu₂S           2.42
Cu₂Se          2.49

In addition to the fluorite structure, there are a number of other structures assumed by similar compounds. We shall not attempt to enumerate or describe them. Some of them are considerably more complicated than the fluorite structure, but they resemble it in that there is no semblance of separate molecules. Each positive ion is surrounded by a number of negative ions and each negative by a number of positives, at equal distances, so that it is in no sense correct to say that one ion is bound to one or two neighbors more than to others.

FIG. XXIII-6. The calcite structure.

It is rather interesting to consider the crystal structure of substances containing more complicated negative ions. Simple examples are nitrates, sulphates, and carbonates. These are all similar to each other in that the negative ion exists as a structure by itself, like an ionized molecule, while the positive and negative ions are arranged in a lattice without suggestion of molecular structure, as in the other ionic crystals. Thus in the calcite structure, CaCO₃, the CO₃⁻⁻ ion exists as a triangular structure, with the carbon in the middle, the oxygens around the corners of the triangle. This structure is built of hexagonal units, as shown in Fig. XXIII-6, with a CO₃⁻⁻ ion at the center, surrounded by six Ca⁺⁺ ions. Units like that of Fig.
XXIII-6, and others which are the mirror image of it in a horizontal plane through the carbonate ion, are built up into a crystal. The substances which crystallize in this structure are tabulated in Table XXIII-9. Several distances are necessary to describe

TABLE XXIII-9. SUBSTANCES CRYSTALLIZING IN CALCITE STRUCTURE

Substance   C-O distance, angstroms   C-metal distance   O-metal distance
CaCO₃          1.24                      3.21               2.37
MgCO₃                                    2.92
ZnCO₃                                    2.93
MnCO₃          1.27                      3.00               2.14
FeCO₃          1.27                      3.01               2.18
NaNO₃          1.27                      3.25               2.40

the structure, and these cannot be found so accurately from x-ray methods as in the simpler crystals. In a few cases they are not known, and in any case they are not very certain. We tabulate the carbon to oxygen (or nitrogen to oxygen) distance, giving the size of the negative ion, and also the distances from positive ion to carbon and oxygen. We see that while the lattice spacing depends on the positive ion, the carbonate or nitrate ion is of almost the same size in each case, forming a practically independent unit. The sulphate ion, in sulphates, is a tetrahedral structure, with the sulphur in the center, the four oxygens at the corners of a regular tetrahedron surrounding it. The sulphur-oxygen distance is about 1.40 A in all the compounds. Examples are CaSO₄ and BaSO₄. These form different lattices, rather complicated, but as we should expect they are structures formed of positive metallic ions and negative sulphate ions, each ion being surrounded by a number of ions of the opposite sign. There are a number of other compounds crystallizing in the BaSO₄ structure: BaSO₄, SrSO₄, PbSO₄, (NH₄)ClO₄, KClO₄, RbClO₄, CsClO₄, TlClO₄, KMnO₄.

6. Polarizability and Unsymmetrical Structures. In discussing the energy of ionic crystals, we have assumed that the only forces acting were electrostatic attractions and repulsions, and the repulsions on account of the finite sizes of ions.
But under some circumstances there can also be forces and changes of energy arising from the polarizability of the ions. Of course, just as in Chap. XXII, we can have Van der Waals attractions between ions, but this is ordinarily a small effect compared to the electrostatic attraction and can be neglected. There can be other, larger effects of polarizability, however. We remember that according to Sec. 3 of Chap. XXII, an atom or ion in an electric field E acquires a dipole moment αE, where α is the polarizability of the ion. Furthermore, the force on the resulting dipole is equal to the dipole moment, times the rate of change of electric field with distance. This force and the resulting term in the energy can be large if a polarizable ion is in an external field, such as can arise from other ions. Now in most of the structures we have considered, this does not occur. In the sodium chloride, caesium chloride, zincblende, wurtzite, and fluorite structures, each ion is surrounded by ions of the opposite sign in such a symmetrical way that the electric field at each ion is zero, so that it is not polarized. But the calcite and barium sulphate structures are quite different. There the oxygens in the carbonate or sulphate ions are by no means surrounded symmetrically by other ions, and there is a strong field acting on them. Furthermore, they are very polarizable, and the result is a large added attraction between the parts of the complex ion. Adopting an ionic picture of the structure of the CO₃⁻⁻ and other ions, we should have it made of C⁺⁺⁺⁺ and three O⁻⁻'s, giving the net charge of two negative units. Similarly NO₃⁻ would be made of N⁺⁺⁺⁺⁺ and three O⁻⁻'s, and SO₄⁻⁻ of S⁺⁺⁺⁺⁺⁺ and four O⁻⁻'s. Each of the ions, in these cases, would form a closed shell, the carbon and nitrogen being like helium, the sulphur and oxygen like neon.
The central ion of the complex ion in each case would be very strongly positively charged and would polarize the oxygen very strongly, adding greatly to its attraction to the carbon, nitrogen, or sulphur. We shall not try to estimate the effect of this added attraction at present, but we can easily get evidence of it. Thus we have mentioned that the sulphur-oxygen distance in the sulphates is about 1.40 A. On the other hand, the O⁻⁻ radius, from Table XXIII-2, is about 1.45 A. Of course, S⁺⁺⁺⁺⁺⁺ would have an extremely small radius, but still we should expect that the sulphur-oxygen distance would be something like 1.50 A in the absence of extra attraction. Even more striking is the carbon-oxygen distance in the carbonates, about 1.27 A, well below the ionic radius of oxygen alone. These facts suggest that some additional attraction is acting between the ions in question, decreasing the distance of separation. One way of interpreting this added attraction is the polarization effect we have mentioned. In the next chapter, we shall see that another interpretation is to suppose that the ions are not really as highly charged as the ionic picture would suggest, but that in addition there are homopolar bonds between the atoms making up these complex ions. The homopolar binding would in this interpretation furnish the extra attractive force resulting in the small spacing between atoms. Thus we are not perhaps forced to think about polarizability at all in such a case. There are many cases, however, where it is definitely important, and the effect of polarization can be calculated easily from known polarizabilities and charge distributions.
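The rule quoted above, that the force on an induced dipole is the moment times the field gradient, can be checked numerically for the field of a point charge, where the energy of the induced dipole is −αE²/2. Everything in this sketch (the charge, separation, and polarizability) is an illustrative assumption in c.g.s. electrostatic units, not data from the text.

```python
# Sketch: for an ion of polarizability alpha in the field E = q/r^2 of a
# point charge, the induced moment is p = alpha*E and the energy is
# W = -alpha*E^2/2.  The force p*dE/dr should equal -dW/dr; we verify
# this numerically.  q, r, alpha are illustrative c.g.s. values only.
q = 4.8e-10      # one electronic charge, e.s.u.
alpha = 3.0e-24  # assumed polarizability, cm^3
r = 2.0e-8       # assumed separation, cm

def E(r):        # field of the point charge
    return q / r**2

def W(r):        # energy of the induced dipole in that field
    return -0.5 * alpha * E(r)**2

force_from_dipole = alpha * E(r) * (-2 * q / r**3)    # p * dE/dr
h = 1e-12
force_from_energy = -(W(r + h) - W(r - h)) / (2 * h)  # -dW/dr, numerically
print(force_from_dipole, force_from_energy)
```

The two forces agree, confirming that the gradient rule and the energy −αE²/2 are two statements of the same thing.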
CHAPTER XXIV

THE HOMOPOLAR BOND AND MOLECULAR COMPOUNDS

Ionic compounds, as we have seen in the preceding chapter, exist most characteristically in crystalline solids, for the electrostatic forces that hold them together extend out in all directions, binding the ions together into a structure that has no trace of molecular formation. Compounds held together by homopolar bonds, on the contrary, form definitely limited molecules, which are bound to each other only by the relatively weak Van der Waals forces. Thus their most characteristic form is the gaseous phase, in which the molecules have broken apart from each other entirely. The ordinary gases with which we are familiar, and the ordinary liquids, belong to this group of compounds. They are the only group to which the idea of the molecule, so common in chemistry, really applies. We shall begin our discussion by taking up some of the familiar molecular compounds and discussing the homopolar bonds which hold their atoms together, and the nature of homopolar valence. Then we shall go on to a discussion of the Van der Waals constants of these substances, as indicating their behavior in the gaseous and liquid phases, and finally we shall take up the solid forms of the homopolar substances.

1. The Homopolar Bond. The principal elements forming homopolar bonds are those which need only a few electrons to complete closed shells: C, Si; N, P; O, S, Se, Te; H, F, Cl, Br, I. In these groups of elements we must add four, three, two, or one electron respectively to form a closed shell. But if two such elements combine together, where are the extra electrons to come from? There is no positive ion losing electrons and ready to donate them to help form negative ions. The expedient which these elements adopt in their effort to form closed shells is the sharing of electrons, as we have discussed in Chap. XXII. An electron can sometimes be held by two atoms in common, spending part of its time on one, part on the other, and part in the region between; in so doing it helps fill up the shells of both atoms.
There is just one conspicuous rule that holds for almost all such bonds, and that is that ordinarily two electrons are shared in a similar way, the two together forming what is called a homopolar or electron-pair bond. The reason why two cooperate, as we saw in Chap. XXII, is essentially the electron spin in conjunction with the exclusion principle. In the first place, we can symbolize the process of forming a homopolar bond by a simple device used by G. N. Lewis, to whom many of the ideas of homopolar binding are due. Most of the elements forming this type of bond are trying to complete a shell, or subshell, of eight electrons, as we have explained in Chap. XXII. Lewis indicates the eight electrons by eight dots surrounding the symbol of the element. Thus, for instance, the neutral fluorine atom, which has only seven electrons in its outer shell, would be symbolized by the symbol F surrounded by its seven dots. This does not have a completed shell. But by combining two such atoms, we can form the structure :F:F:, containing fourteen electrons but sharing two of them in a homopolar bond, so that each atom in a sense has a completed shell. It is clear from this symbolization that the halogens, F, Cl, Br, I, can form one homopolar bond; the divalent elements O, S, Se, Te, can form two; and so on. In this symbolization, hydrogen takes a special place, for by adding one electron it forms a completed helium shell of two electrons. It thus forms one homopolar bond, and in this type of bonding it is in many respects analogous to a halogen. In such a way, for instance, we can indicate the structure of hydrogen chloride as H:Cl:, the electron pair shared between the two atoms helping to fill up the hydrogen shell of two, and the chlorine shell of eight. Similarly, and illustrating also the valences of O, N, and C, we may write water as H:O:H, ammonia as H:N:H with a third hydrogen attached to the nitrogen, and methane as H:C:H with hydrogens on all four sides of the carbon.
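The bookkeeping behind these dot symbols can be done mechanically: add up the outer-shell electrons that each atom brings to the molecule. A small sketch, using only the molecules and electron counts appearing in the text:

```python
# Sketch: count the outer-shell electrons supplied to a molecule, as in
# the Lewis dot symbols above.  Outer-electron numbers per element are
# those used in the text.
outer = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "Cl": 7}

def outer_electrons(atoms):
    return sum(outer[a] for a in atoms)

print(outer_electrons(["F", "F"]))                 # :F:F: -- fourteen
print(outer_electrons(["H", "Cl"]))                # H:Cl: -- eight
print(outer_electrons(["N", "H", "H", "H"]))       # ammonia -- eight
print(outer_electrons(["C", "H", "H", "H", "H"]))  # methane -- eight
```

The totals are exactly the dots one draws in each diagram, shared pairs counted once.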
In each of these compounds, we observe that the total number of electrons indicated is just the number furnished by the outer shells of the atoms entering into the molecule. Thus, in NH₃ the nitrogen furnishes five electrons, each of the three hydrogens one, making eight in all. Hydrogen is in a very special position, in that it forms a closed shell (heliumlike) by adding an electron, and also a closed shell (the nucleus without any electrons) by losing an electron. It can act, in the language of ions, like either a univalent positive or a univalent negative element. This gives the possibility of an ionic interpretation of most of the hydrogen compounds. Thus we may symbolize hydrogen chloride as H⁺(:Cl:)⁻, and water as H⁺(:O:)⁻⁻H⁺, or as H⁺(:O:H)⁻. We shall see later that there is good reason to think, however, that in most of these cases the homopolar way of writing the compound is nearer the truth than the ionic method. The elements carbon, nitrogen, and oxygen have a peculiarity rarely shown by other elements: they sometimes form what is called a double bond, and in the cases of carbon and nitrogen a triple bond. This means that two or three pairs of electrons, rather than one, may be shared between a pair of atoms. This is seen in its simplest form in the molecules O₂ and N₂. If two oxygens shared only one pair of electrons, they would not achieve closed shells; they must share two pairs, so that two electrons of each atom count in the shell of the other as well. Similarly two nitrogen atoms must share three pairs. We can symbolize these compounds by :O::O:, where both pairs of electrons between the O's are counted in each group of eight, and by :N:::N:. Compounds having double or triple bonds generally have a rather unsaturated nature; they tend to add more atoms, breaking down the bonds into single ones and using the valences left over in order to attach the other atoms.
A familiar example is the group of compounds acetylene C₂H₂, ethylene C₂H₄, and ethane C₂H₆, in which the last named is the most stable. These are symbolized by H:C:::C:H, H₂C::CH₂, and H₃C:CH₃, formed with triple, double, and single bonds respectively. One can derive a good deal of information about the three-dimensional structure of a molecule in space from the nature of the homopolar bonds, and it must not be supposed that the chemical formulas, written as we have been writing them in a plane, express the real shape of the molecule. We have written them in each case so as to approximate the shape as closely as possible, but in many cases have not succeeded very well. In general, the four pairs around an atom tend to be arranged in the only symmetrical way they can be, namely at the four corners of a regular tetrahedron. The vectors from the center to the corners of a tetrahedron form angles of 109.5° with each other, often called the tetrahedral angle, and in a great many cases it is found that when two or more atoms are attached to another atom by homopolar bonds, the lines of centers actually make approximately this angle with each other. We shall discuss this more in detail in the next section, in which we take up the structures of a number of homopolar molecules.

2. The Structure of Typical Homopolar Molecules. Many of the homopolar molecules are among the most familiar chemical substances. In this section we shall describe a few of them, discussing the nature of their valence binding and giving information about their shape and size. For the diatomic, and some of the polyatomic, molecules, this information is derived from band spectra. In other cases, it is found by x-ray diffraction studies of the solid, using the fact that homopolar molecules generally are very similar in the solid and gaseous phases, or by electron diffraction with the gas. We begin with some of the diatomic molecules listed in Table IX-1, including H₂, Cl₂, Br₂, I₂, NO, O₂, N₂, CO, HCl, and HBr.
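The tetrahedral angle of 109.5° quoted in the preceding section is arccos(−1/3), as a quick sketch shows: take two corners of a tetrahedron inscribed in a cube centered at the origin and compute the angle between the vectors to them.

```python
# Sketch: the angle between vectors from a tetrahedron's center to two of
# its corners, using corners (1,1,1) and (1,-1,-1) of a cube centered at
# the origin; it comes out arccos(-1/3) = 109.47 degrees.
from math import acos, degrees, sqrt

a = (1, 1, 1)
b = (1, -1, -1)
dot = sum(x * y for x, y in zip(a, b))                            # = -1
norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))  # = 3
angle = degrees(acos(dot / norm))
print(round(angle, 1))  # 109.5
```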
We begin with some of the diatomic molecules listed in Table IX-1, including H 2 , C1 2 , Br 2 , I 2 , NO, 2 , N 2 , CO, HC1, and HBr. SBC. 2] HOMOPOLAR BOND AND MOLECULAR COMPOUNDS 403 The first molecule in the list, and the simplest diatomic molecule, is hydrogen, H 2 . Its structure of course is H:H, the two electrons being shared to simulate a helium structure about each atom. The internuclear distance 1 0.75 A is the smallest internuclear distance known for any compound, as is natural from the small number of electrons in hydrogen. As we have seen, hydrogen acts a little like a halogen when it forms homo polar bonds, and we might consider next the molecules F 2 , C1 2 , Br 2 , I 2 . We have already mentioned their bonding, by a single electron pair bond, in the previous section. The internuclear distances in these molecules are large, being 1.98 A for C1 2 , 2.28 A for Br 2 , and 2.66 A for I 2 , as we saw in Table IX-1. It is interesting to compare these inter- nuclear distances with the ionic radii, from Table XXIII-2. There we found radii of 1.80 A, 1.95 A, and 2.20 A for C1-, Br~, and I~. If these radii represented the sizes of the atoms in the diatomic molecules, the internuclear distances would be twice the radii, or 3.60 A, 3.90 A, and 4.40 A, almost twice the observed distances. This is an illustration of the fact, which proves to be quite general, that interatomic distances in homopolar binding are decidedly less than in ionic binding. The reason is simple. While the sharing of a pair of electrons is in a sense a way of building up a closed shell of electrons, still the shell is really not filled to capacity. There is, so to speak, a soft place in the shell just where the bond is located, and the atoms tend to pull together closer than if the shell were really filled. The remaining molecules in our list are NO, 2 , N 2 , CO, HC1, and HBr. 
The first of these, NO, is the most peculiar compound in the list and one of the most peculiar of the known compounds. We note that nitrogen supplies five, and oxygen six, outer electrons to the compound, making a total of eleven, an odd number. It is quite obvious that an odd number of electrons cannot form closed shells, electron pairs, or anything else associated with stable molecules. As a matter of fact, out of all the enormous number of known chemical compounds, only a handful have an odd number of electrons, and NO is almost the only well-known one of these. We shall make no effort to explain it in terms of ordinary valence theory, for it is in every way an exception, though it can be understood in terms of atomic theory. Oxygen, with a double bond, and nitrogen, with a triple one, have already been discussed. The internuclear distances, from band spectra, are 1.20 A for oxygen, and 1.09 A for nitrogen. In line with what we have just said about the halogens, it is interesting to notice that the internuclear distance in oxygen is a great deal less than the double radius of O⁻⁻. That ionic radius was 1.45 A, so that if it represented the size of

¹ See Sponer, "Molekülspektren und ihre Anwendungen auf chemische Probleme," Vol. I, Springer, 1935, for interatomic distances of diatomic and polyatomic molecules in this chapter.

the oxygen in O₂, the internuclear distance would be 2.90 A, more than twice the actual distance. In the case of O₂, with its double bond, the tendency of the atoms to pull together in homopolar binding is particularly pronounced, for the shell is even less nearly filled than with a single bond and can be even more compressed by the interaction forces. In nitrogen, with its triple bond, the interatomic distance is even smaller, in line with this fact. The next molecule on the list, CO, does not fit in very well with our rules.
A clue to its structure is provided by the fact that it has 4 + 6 = 10 outer electrons, just like N₂, and that in many of its properties it strongly resembles N₂. We have stated that the internuclear distance in nitrogen was 1.09 A; in CO it is 1.13 A. The suggestion is very natural that in a sense a triple bond is formed in this case also, with the structure :C:::O:, though a triple bond is not usually formed by oxygen. We have already discussed the structure of the next two molecules, HCl and HBr, whose valence properties are indicated by the symbols H:Cl:, H:Br:. The internuclear distances are 1.27 A and 1.41 A respectively. We note, as before, how much smaller these are than the values given by ionic radii. We have no ionic radius for H, but for Cl⁻ we have 1.80 A, and for Br⁻ the distance is 1.95 A. The internuclear distances in these cases are actually less than the ionic radii. This is good evidence for the homopolar, rather than the ionic, nature of these compounds. Another reason comes from the magnitudes of the electric dipole moments of these two molecules, which are found to be 1.03 × 10⁻¹⁸ and 0.78 × 10⁻¹⁸ e.s.u.-cm. If the molecules were really ionic, we should expect that electrically they would consist of a unit positive charge on the hydrogen nucleus, and a net negative charge of one unit, spherically symmetrical, and therefore acting as if it were on the chlorine or bromine nucleus. That is, there would be charges of one electronic unit located 1.27 A and 1.41 A apart respectively. This would give dipole moments, equal to the product of charge and displacement, of (4.8 × 10⁻¹⁰) × (1.27 × 10⁻⁸) = 6.1 × 10⁻¹⁸, and of 6.8 × 10⁻¹⁸ units, respectively. The observed dipole moment of HCl, as we have seen, is only about one-sixth of this value, and of HBr about one-ninth of the value given by the polar model.
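The arithmetic of the fully ionic model just given can be packaged in a few lines (a sketch; charge in e.s.u., distances in cm, using the numbers quoted in the text):

```python
# Sketch: dipole moment of the fully ionic model, mu = e*d, compared with
# the observed moments of HCl and HBr quoted above (c.g.s. units).
e = 4.8e-10  # electronic charge, e.s.u.

def ionic_moment(d_cm):
    return e * d_cm

for name, d, observed in [("HCl", 1.27e-8, 1.03e-18),
                          ("HBr", 1.41e-8, 0.78e-18)]:
    mu = ionic_moment(d)
    # moment in units of 10^-18, and the observed fraction of the ionic value
    print(name, round(mu * 1e18, 1), round(observed / mu, 2))
```

The observed fractions come out roughly one-sixth and one-ninth, as stated above.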
To explain this, we must assume that the negative charge is not located symmetrically about the Cl or Br but is displaced toward the hydrogen. This is what we should expect if there is really a homopolar bond, for then the shared electrons would be in the neighborhood of the hydrogen, displaced in that direction from the halogen ion. The dipole moments then furnish arguments for the homopolar nature of the bond and for thinking that it is less polar in HBr than in HCl. There is still another way of regarding this situation: we may make use of the polarizability of the halide ion. A hydrogen nucleus close to a halide ion would produce an extremely strong electric field, which would polarize the ion, changing it into a dipole with the negative charge displaced slightly toward the positive hydrogen ion. This dipole moment would tend to cancel the moment produced by the two undistorted ions, so that the net moment would be less than the figure 6.1 × 10⁻¹⁸ calculated above. As a matter of fact, calculations by Debye¹ show that the resulting dipole moment calculated in this way would be of the right order of magnitude. We should not regard this calculation as indicating that our homopolar shared-electron theory is not accurate, however. For in this case the polarizability is simply a rather crude way of taking account of the shifting of electric charge which we can describe more precisely as electron sharing between the atoms. The situation, however we describe it, is essentially this: that the electrons, instead of being arranged in a spherically symmetrical way about the halogen nucleus, tend to be somewhat displaced toward the hydrogen, so that it also is partly surrounded by electrons, rather than acting as an entirely isolated ion. We have now completed our list of diatomic molecules. Next we might well take up various hydrides: H₂O, NH₃, CH₄, H₂S, PH₃, SiH₄.
We have already given structural formulas for H₂O, NH₃, and CH₄; the others are analogous, H₂S resembling H₂O, PH₃ being like NH₃, and SiH₄ like CH₄. The hydrogens are bound, on the homopolar theory, by single bonds, and the angles made by the radius vectors to different hydrogens are very closely the tetrahedral angle 109.5°. Thus H₂O is a triangular molecule, the hydrogen-oxygen distance being 0.96 A, and the H-O-H angle being 104.6°. NH₃ is pyramidal, the nitrogen-hydrogen distance being 1.01 A and the H-N-H angle 109°. CH₄ is tetrahedral, the carbon-hydrogen distance being 1.1 A. H₂S is triangular, like water, the sulphur-hydrogen distance 1.35 A and the angle 92.1°, rather smaller than we should have supposed. PH₃ is presumably pyramidal like NH₃, but the distances and angles do not seem to be known. SiH₄ is tetrahedral, but again the distances are not known. There are only a few other common inorganic molecules to be mentioned. CO₂ is a linear structure, with valences symbolized by :O::C::O:. That is, there are double bonds between the carbon and each oxygen. The C-O distance is 1.16 A, slightly greater than the value 1.13 A found in CO, where there is a triple bond. N₂O is also a linear molecule, presumably with the structure :N::N::O:, again formed with double bonds like CO₂, which has the same number of outer electrons.

¹ See Debye, "Polare Molekel," Sec. 14, Hirzel, 1929.

The distance between end atoms is 2.38 A, slightly greater than the value 2 × 1.16 = 2.32 A between end atoms in CO₂. Here again we see the resemblance between N₂ and CO, in that they form similar molecules when another oxygen atom is added. The molecule SO₂ is a triangular molecule shaped something like water. Its structure presumably is :O:S:O:.
This is the first example we have met of a case where an atom is not surrounded, even with the aid of shared electrons, by a closed shell: the sulphur has only six electrons around it. We shall come to other examples later, when we talk about inorganic radicals. The sulphur-oxygen distance in SO₂ is 1.37 A. Carbon disulphide CS₂ resembles CO₂ in being a linear molecule, with carbon-sulphur distance of 1.6 A, decidedly larger than the value 1.16 A in CO₂, in accordance with the fact that sulphur is a larger atom than oxygen. The molecules taken up so far are all very simple, composed of very few atoms. The more complicated examples of homopolar molecular compounds are found almost entirely in the field of organic chemistry. We shall postpone a discussion of organic compounds until the next chapter, since they form a field by themselves. Before closing our discussion, however, we shall take up a different sort of homopolar structure, namely, a few inorganic negative ions, formed very much like molecules. The most important ones are NO₃⁻, CO₃⁻⁻, SO₄⁻⁻, ClO₄⁻, mentioned in the preceding chapter. The first two, as stated in Chap. XXIII, Sec. 5, are triangular structures in a plane, the N or C being in the center, the oxygens at the corners. Each has 24 electrons (when we take account of the negative charge on the ion), so that it was possible in the last chapter to treat them as ionic structures, the oxygen having a closed shell of eight electrons, the nitrogen or carbon having no outer electrons. A structure much nearer the truth, however, is

      :O:              :O:
    :O:N:O:    and   :O:C:O:

These structures differ from the ionic one in that we have indicated two electrons from each oxygen as being shared with the central nitrogen or carbon. This case resembles that of SO₂ in the preceding paragraph, in that one of the atoms, in this case the central one, has only six rather than eight electrons surrounding it. Another molecule showing similar structure is SO₃.
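Incidentally, the tetrahedral angle 109.5° quoted for the hydrides above is pure geometry: the radius vectors from the center of a regular tetrahedron to two of its corners make an angle whose cosine is -1/3. A minimal check:

```python
import math

# Two corners of a cube, e.g. (1,1,1) and (1,-1,-1), are vertices of a
# regular tetrahedron inscribed in that cube, seen from the cube's center.
a = (1, 1, 1)
b = (1, -1, -1)

dot = sum(x * y for x, y in zip(a, b))          # = -1
norm_sq = sum(x * x for x in a)                 # = 3, so cos(theta) = -1/3
theta = math.degrees(math.acos(dot / norm_sq))

print(round(theta, 1))   # 109.5, the tetrahedral angle
```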
We have stated in the preceding chapter, Table XXIII-9, that the C-O or N-O distance in the carbonates and nitrates is about 1.27 A. This is decidedly greater than the C-O distance in CO, which is 1.13 A, and in CO₂, 1.16 A, but in the present case there is only a single bond, rather than the triple or double bonds found in those two

SEC. 3] HOMOPOLAR BOND AND MOLECULAR COMPOUNDS 407

compounds. The agreement is close enough so that it is quite plain that the bonds in these cases are homopolar and not ionic, as was stated in Chap. XXIII, Sec. 6. In the sulphate and perchlorate ions, SO₄⁻⁻ and ClO₄⁻, there are enough electrons, 32 outer ones per ion, to form complete shells around the central atoms. The valence can be symbolized

      :O:              :O:
    :O:S:O:    and   :O:Cl:O:
      :O:              :O:

and the compounds can be described as having single valence bonds between the central atom and each oxygen. They have a tetrahedral form, the sulphur-oxygen distance in the sulphates being about 1.40 A.

3. Gaseous and Liquid Phases of Homopolar Substances. We have already mentioned that the gaseous phase is the most characteristic one for molecular substances with homopolar binding. We shall begin, therefore, by examining the Van der Waals constants a and b for a considerable number of homopolar substances. The constant b will give us information about the dimensions of the molecules, information that we can correlate with the known interatomic distances, and a will lead to information about the strength of the Van der Waals attraction. In Table XXIV-1, we give Van der Waals constants for quite a series of gases, arranged in order of their b's. We include not merely the gases mentioned in the preceding section, but the inert gases, for comparison, and then a considerable number of organic substances, which as we have mentioned furnish the largest and most characteristic group of homopolar substances.
In the table, the units of a are (dynes per square centimeter) times (cubic centimeters per mole)². The units of b are cubic centimeters per mole. These constants are obtained from the critical pressure and temperature by Eq. (2.5), Chap. XII. They do not, therefore, have just the same significance as the a and b of Eq. (5.3), Chap. XII, for those are the constants that would lead to agreement between Van der Waals' equation and experiment at low density, while the values we use are suitable to pressures and temperatures around the critical point, which will disagree with the other ones unless Van der Waals' equation is really applicable over the whole range of pressures and temperatures. We note from Eq. (2.6) of Chap. XII, that if Van der Waals' equation were correct, we should have Vc/3 = b, where Vc is the observed critical volume. To test this relation, Table XXIV-1 lists observed values of Vc/3. We see that these values do not agree very closely with the values of b, as was stated in Chap. XII, though they are not widely different. In addition to these quantities, we also tabulate the molecular volume of the liquid, in cubic centimeters per mole, for the lowest temperature for which figures are available, and also the dipole moment for dipole molecules, in electrostatic units; as we have seen in Chap. XXII, this has a bearing on the Van der Waals attraction.

TABLE XXIV-1. VAN DER WAALS CONSTANTS FOR IMPERFECT GASES

Gas                  Formula        a (× 10¹²)   b      Vc/3   Liquid vol.  Moment (× 10⁻¹⁸)
Neon                 Ne             0.21         17.1   14.7   16.7         0
Helium               He             0.035        23.5   20.5   27.4         0
Hydrogen             H₂             0.25         26.5   21.6   26.4         0
Nitric oxide         NO             1.36         27.8   19.1   23.7
Water                H₂O            5.53         30.4   18.9   18.0         1.85
Oxygen               O₂             1.40         32.2   24.8   25.7         0
Argon                A              1.36         32.2   26.1   28.1         0
Ammonia              NH₃            4.22         36.9   24.2   24.5         1.44
Nitrogen             N₂             1.36         38.3   30     32.8         0
Carbon monoxide      CO             1.50         39.7   30     32.7         0.10
Krypton              Kr             2.35         39.7   36     38.9         0
Hydrogen chloride    HCl            3.72         40.7                       1.03
Nitrous oxide        N₂O            3.61         41.1   32.3   44           0.25
Carbon dioxide       CO₂            3.64         42.5   32.8   41.7         0
Methane              CH₄            2.28         42.6   32.9   49.5         0
Hydrogen sulphide    H₂S            4.49         42.7   35.4                0.93
Hydrogen bromide     HBr            4.51         44.1   37.5                0.78
Xenon                Xe             4.15         50.8   38     47.5         0
Acetylene            C₂H₂           4.43         51.3   37.5   50.2         0
Phosphine            PH₃            4.69         51.4   37.7   49.2         0.55
Chlorine             Cl₂            6.57         56     41     41.2         0
Sulphur dioxide      SO₂            6.80         56.1   41     43.8         1.61
Ethylene             C₂H₄           4.46         56.1   42.3   49.3         0
Silicon hydride      SiH₄           4.38         57.6   47
Methylamine          CH₃NH₂         7.23         59.6          44.5         1.31
Ethane               CH₃·CH₃        5.46         63.5   47.6   54.9         0
Methyl chloride      CH₃Cl          7.56         64.5   45.4   49.2         1.97
Methyl alcohol       CH₃OH          9.65         66.8   39.0   40.1         1.73
Methyl ether         (CH₃)₂O        8.17         72.2   67.5                1.29
Carbon bisulphide    CS₂            11.75        76.6          59           0
Dimethylamine        (CH₃)₂NH       9.77         79.6          66.2
Propylene            C₃H₆           8.49         82.4          69           0
Ethyl alcohol        C₂H₅OH         12.17        83.8          57.2         1.63
Propane              CH₃·CH₂·CH₃    8.77         84.1          75.3         0
Chloroform           CHCl₃          15.38        102    77.1   80.2         1.05
Acetic acid          CH₃COOH        17.81        106    56.1   57.0
Trimethylamine       (CH₃)₃N        13.20        108           89.3
iso-Butane           CH(CH₃)₃       13.10        114           96.3         0
Benzene              C₆H₆           18.92        120    85.5   86.7         0
n-Butane             CH₃(CH₂)₂CH₃   14.66        122    94     100          0
Ethyl ether          (C₂H₅)₂O       17.60        134    96.5                1.2
Triethylamine        (C₂H₅)₃N       27.5         183           139          0.69
                     C₁₀H—          40.3         193           112
n-Octane             CH₃(CH₂)₆CH₃   37.8         236    162    162          0
Decane               CH₃(CH₂)₈CH₃   49.1         289           195          0

The unit of pressure in the constants above is the dyne per square centimeter; the unit of volume is cubic centimeters per mole. The electric moments are expressed in absolute electrostatic units. Data for Van der Waals constants and volumes are taken from Landolt's Tables; for the electric moments from Debye, "Polare Molekeln," Leipzig, 1929.

The first thing which we shall consider in connection with the constants of Table XXIV-1 is the set of b values.
It will be remembered that b represents in some way the reduction in the free volume available to a molecule, on account of the other molecules of the gas. That is, it should bear considerable resemblance to the actual volume of the molecules. According to statistical mechanics, we have seen in Chap. XII, Sec. 5, that the b appropriate to the limit of low densities should be four times the volume of the molecules, but this prediction is not very accurately fulfilled by experiment. The reason no doubt is that that prediction was based on the assumption of rigid molecules, whereas we have seen in this chapter and the preceding one that molecular diameters really depend a great deal on the amount of compression produced by various forms of interatomic attraction. This was made particularly plain in Table XXIII-3, where we computed volumes of the inert gas atoms by using radii interpolated between the ionic radii of the neighboring positive and negative ions and compared these volumes with the b values and the volumes of the liquids. We found, as a matter of fact, that the b values were from three to five times the computed volumes of the molecules, in fair agreement with the prediction of statistical mechanics, but we found that the volumes of the liquids, in which the molecules are held together only by Van der Waals forces, were of the same order of magnitude as the b values, indicating that the Van der Waals forces, being very weak compared to ionic forces, cannot compress the molecules very much. To see if this situation is general, we give the molecular volume of the liquid for each gas in Table XXIV-1. A glance at the table will show the striking parallelism between the volume of the liquid and the constant b. For the smaller, lighter molecules, b is of the same order of magnitude as the volume of the liquid, as a rule somewhat larger, but not a great deal larger.
For the heavier molecules, there seems to be a definite tendency for b to be larger than the volume of the liquid, but even here it is not so great as twice as large. The actual magnitude of the constants b is of considerable interest. In the first place, we may ask how much of the volume of a gas, under ordinary conditions of pressure and temperature, is occupied by the molecules. One mole of a gas at atmospheric pressure and 0°C. occupies 22.4 l., or 22,400 cc. The molecules, under the same circumstances, occupy the volume tabulated in Table XXIV-1, in cubic centimeters. For the common gases, these volumes are of the order of 30 or 40 cc. This is the order of magnitude of two-tenths of 1 per cent of the actual volume, so that it is correct to say that most of the volume occupied by a gas is really empty. On the other hand, even under these circumstances, the molecules are not very far apart in proportion to their size. We may take an extreme case of helium, where at normal pressure and temperature the molecules would occupy some 23 cc., or about one one-thousandth of the volume. To get an idea of the spacing, we may imagine the atoms spaced out uniformly, each one in the center of a cube (though of course actually they will be distributed at random). Then a cube of the volume of the atom would have one one-thousandth the volume of one of these cubes containing an atom, but the side of the small cube would be one-tenth [= (1/1000)^⅓] the side of the large cube, meaning that the average distance between atoms is only about ten times the diameter of a single atom. This is for a small molecule; with the larger molecules in the table, the molecules are twice as large or more in diameter and are correspondingly closer together in proportion. We can imagine that under these circumstances real gases depart quite appreciably from perfect gas conditions.
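The helium estimate above can be repeated numerically; the cube-root step is the whole calculation:

```python
# Spacing of helium atoms in the gas at 0 C, 1 atm.
V_gas = 22400.0   # cc occupied by one mole of gas
V_mol = 23.0      # cc occupied by the helium molecules themselves (Table XXIV-1)

fraction = V_mol / V_gas                      # about one one-thousandth
spacing_ratio = (V_gas / V_mol) ** (1.0 / 3.0)

print(fraction)        # ~0.001: most of the gas volume is empty
print(spacing_ratio)   # ~10: atoms roughly ten diameters apart on the average
```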
Furthermore, it is natural that collisions between atoms are frequent and that they are of great importance in many phenomena. This is all, however, for one atmosphere pressure. A pressure of 10⁻⁶ mm. of mercury can easily be obtained in the laboratory. This is about 10⁻⁹ atm. and corresponds to atoms spaced a thousand times farther apart, or something like 10,000 atomic diameters apart. It is clear that a gas in this condition must be very much like a perfect gas and that deviations from the gas law and interactions between molecules can be neglected. It is interesting to ask how much space, on the average, is occupied by each atom or molecule. In a gram molecular weight, as we have said before, there are about 6.03 × 10²³ molecules. Thus if we divide b by this figure, we shall get the volume per molecule. It is obvious from the table that the molecules with many atoms have much larger volumes than those with few atoms, and it appears very roughly that the volume is something like 12 cc. per mole per atom, a figure which we could get by dividing the b for a particular molecule by the number of atoms in the molecule. Variations of more than 100 per cent from this figure are seen in the table, but still it is correct as to order of magnitude. This gives 12/(6.03 × 10²³) = 2 × 10⁻²³ cc. as the volume assigned to an atom. This is the volume of a cube 2.7 × 10⁻⁸ cm. on a side; this figure seems reasonable, being of the order of magnitude of the dimensions of most of the atoms, within a factor of two at most. From what we have said, the values of the Van der Waals constants b for the gases of our table look very reasonable. Next we can consider their a's. In Eq. (3.6) of Chap. XXII, we have seen that the Van der Waals interaction energy between molecules of polarizability α, mean square dipole moment μ̄², at a distance r, is

Energy = -2αμ̄²/r⁶.   (3.1)

In Eq. (5.3), Chap.
XII, we have seen that the Van der Waals a for molecules of radius r₀/2, with an intermolecular attractive potential of -ε(r₀/r)⁶, is

a = (2π/3)N₀²εr₀³,   (3.2)

where N₀ is Avogadro's number. Using Eqs. (3.1) and (3.2), we can now write an explicit formula for the Van der Waals a. It is

a = (4π/3)N₀²αμ̄²/r₀³.   (3.3)

The terms of Eq. (3.3) are all things that can be estimated. From Table XXIV-1, we see that the permanent dipole moments of dipole molecules are of the order of magnitude of 2 × 10⁻¹⁸ absolute units, and we may expect the root mean square fluctuating moments of other molecules to be of the same order of magnitude. The polarizability can be found from the measured dielectric constant, as we have explained in Chap. XXII, Sec. 3. In addition to these quantities, we need the volume of the sphere, (4/3)πr₀³, which fixes the r₀³ appearing in the denominator of Eq. (3.3). This will certainly be of the general order of magnitude of the molecular volume, and for the present very crude calculation we may take it to be the same as b, which we have tabulated in Table XXIV-1. (For air we use a value intermediate between oxygen and nitrogen.) We have now computed values of μ which, substituted into the formula (3.3), will give the correct value of a, and have tabulated these in Table XXIV-2. We see that the values of μ necessary to give the observed a values are of the order of magnitude which we expected to find.

TABLE XXIV-2. CALCULATIONS CONCERNING VAN DER WAALS ATTRACTIONS

Gas     Dielectric constant ε    α, cubic centimeters    μ
H₂      1.000264                 0.470                   1.7 × 10⁻¹⁸
Air     1.000590                 1.05                    3.1
CO      1.000690                 1.23                    3.1
CO₂     1.000985                 1.75                    4.2
CH₄     1.000944                 1.68                    3.4
C₂H₄    1.00131                  2.33                    4.7

The dielectric constant ε is given for gas at 0°C., one atmosphere pressure. Thus, since a mole of gas occupies 2.24 × 10⁴ cc. at this pressure and temperature, we have α = [(ε - 1)/4π] × 2.24 × 10⁴, using Eq. (3.4), Chap. XXII. The value of μ is calculated from Eq. (3.3), as described in the text, and represents the dipole moment necessary to explain the observed Van der Waals a.

As we should naturally expect, they increase as we go to larger and more complicated molecules. Direct calculations of μ, or rather of the whole Van der Waals force, have been made for a few of the simple gases like hydrogen and helium, with fairly close agreement with the experimental values. It seems likely, therefore, that our explanation of these attractions is quite close to the truth. The explanation just given for the magnitude of the Van der Waals attractions must be modified for strongly polar molecules, those having large dipole moments. From Chap. XXII, Sec. 3, we remember that in such cases there is an extra term in the polarizability, on account of the orientation of the molecule in an external field. We mentioned that this would increase the Van der Waals attraction, because pairs of molecules would tend to orient each other into the position of maximum attraction, and suggested that it might result in a Van der Waals attraction several times as great as for nonpolar molecules. Examination of Table XXIV-1 shows that, in fact, the strongly polar molecules have constants a which are much greater than those of nonpolar molecules near them in the list. Thus water has an a value about four times those of its neighbors, and ammonia about three times. We can understand in detail what happens by considering the most conspicuous case, water. The crystal structure of ice is well known and will be described in the next section. It is a molecular crystal, in which each oxygen is surrounded tetrahedrally by four other oxygens. Between each pair of oxygens is a hydrogen. Each oxygen thus has four hydrogens near to it. But two of these four hydrogens are close to the oxygen, forming with it a water molecule, with its two hydrogens at an angle, just like the water molecule in the gas. The other two hydrogens are attached to two of the four neighboring oxygens, forming part of their water molecules.
This structure puts each of the hydrogens of one molecule near the oxygen of another, so that their opposite electrical charges can attract each other, helping to hold the crystal together. This arrangement undoubtedly persists to a large extent in the liquid and even to some extent in the gas, though it undoubtedly decreases as the temperature is raised, for at high temperatures the molecules tend to rotate, spoiling any effect of orientation. And it is this extra attraction, on account of the particular orientations of the molecules, which results in the very large value of a for water. Similar explanations hold for the other molecules with large dipole moments, but examination of their structure shows that the others cannot form such tightly bound structures as water. There is another feature of Table XXIV-1 that bears out the unusually large forces between dipole molecules, and that is the molecular volumes of the liquids. If the polar molecules have unusually large attractive forces, we should expect that these forces, which after all hold the liquid together, would bind it particularly tightly, so that the liquids would be unusually dense. Consistent with this, we note that water and ammonia conspicuously, and some of the other polar liquids to a lesser extent, have molecular volumes for their liquids decidedly smaller than their b values, while the nonpolar molecules have molecular volumes rather closely equal to their b's. To put these facts in somewhat more striking form, if water had no dipole moment we should expect it to have a density only about two-thirds what it does, and we should expect the intermolecular forces to be so small that it would boil many degrees below zero, as its neighbors NO and O₂ in the table do, and to be known to us as a permanent gas difficult to liquefy.
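The contrast just described can be read off Table XXIV-1 directly: for the polar liquids the molar volume falls well below b, for the nonpolar ones it comes close to b. A sketch using a few of the table's entries (the numbers are as read from the table, which is hard to make out in places, so treat them as illustrative):

```python
# (b, molar volume of the liquid) in cc per mole, from Table XXIV-1.
entries = {
    "H2O (polar)":   (30.4, 18.0),
    "NH3 (polar)":   (36.9, 24.5),
    "N2 (nonpolar)": (38.3, 32.8),
    "A (nonpolar)":  (32.2, 28.1),
}

ratios = {}
for gas, (b, v_liquid) in entries.items():
    ratios[gas] = v_liquid / b
    print(gas, round(ratios[gas], 2))
# the polar liquids pack into roughly 0.6 of b; the nonpolar ones near 0.85
```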
The extra intermolecular force in water resulting from dipole attraction, which we have just discussed, is closely tied up with one of the most remarkable properties of water, its ability to dissolve and ionize a great many ionic compounds. We have seen in the last chapter that it requires a large amount of energy to pull a crystal of an ionic substance apart into separated ions. Such a process surely will not occur naturally; if we examined the equilibrium between the solid and the ionic gas, the heat of evaporation would be so enormous that at ordinary temperatures there would be practically no vapor pressure. If an ion is introduced into water, however, there is a strong binding between the water molecules and the ion, corresponding to a large negative term in the energy, with the result that the heat of solution, or the work required to break up the crystal into ions and dissolve the ions in water, is a small quantity. In other words, referring back to the type of argument met in Chap. XVII, an ion is about as strongly attracted to a water molecule as to the ions of opposite sign in the crystal to which it normally belongs; this is the necessary condition for solubility. We now ask, why is the ion so strongly bound to the water molecules? The reason is simply that, as we have seen, the hydrogens of the water molecule are positively charged, the oxygen negatively, so that a positive ion can locate itself near the oxygens of a number of neighboring water molecules, a negative ion near a number of hydrogens, which attract it electrostatically approximately as much as two ions would attract each other, or as the negative oxygen of one water molecule would attract the positive hydrogens of its neighboring water molecules.
This effect is particularly strong in water, for the same reason that the Van der Waals binding is strong in water; similar effects are found to a lesser extent in liquid ammonia, which is also a powerful ionizing solvent, with many of the same properties as water. We have mentioned several times that the characteristic of the molecular compounds is that the Van der Waals forces between molecules are small compared to the valence forces holding the atoms together to form a molecule. Thus the substances vaporize at a low temperature, whereas their molecules do not dissociate chemically to any extent except at very high temperatures. For instance, the dissociation H₂ ⇌ 2H is a typical example of chemical equilibrium, to be handled by the methods of Chap. X, and the forces holding the atoms together are the sort taken up in Chap. IX. We saw in Table IX-1 that the heat of dissociation of a hydrogen molecule was 103 kg.-cal. per gram mole, such a large value that the dissociation is almost negligible at any ordinary temperature. On the other hand, the latent heat of vaporization, the heat required to pull the molecules of the liquid or solid apart to form the gas, is only 0.220 kg.-cal. per gram mole in this case, so that a temperature far below 0°C. will vaporize hydrogen. To illustrate how general this situation is, Table XXIV-3 gives the latent heat of vaporization and the heat of dissociation of a few familiar gases.

TABLE XXIV-3. LATENT HEAT OF VAPORIZATION AND HEAT OF DISSOCIATION

Substance    Latent heat, kg.-cal. per gram mole    Heat of dissociation, kg.-cal. per gram mole
H₂           0.220                                  103
O₂           2.08                                   117
N₂           1.69                                   170
CO           1.90                                   223
CO₂          6.44
NH₃          7.14                                   90
HCl          4.85                                   102
H₂O          11.26                                  118

Latent heats are from Landolt's Tables, and in each case are for as low a temperature as possible, since the latent heat of vaporization decreases with increasing temperature, going to zero at the critical point.
Heats of dissociation are from Sponer, "Molekülspektren und ihre Anwendungen auf chemische Probleme," Berlin, 1935, some of them having been quoted in Table IX-1. The heat of dissociation in each case is the energy required to remove the most loosely bound atom from the molecule. We see that in each case the latent heat is only a few per cent of the heat of dissociation; water is a distinct exception on account of its high latent heat, ten per cent of the heat of dissociation, which of course is tied up with the large Van der Waals attraction arising from the dipole moments of the molecules.

4. Molecular Crystals. The molecules which we have been discussing in this chapter are tightly bound structures, held together by strong homopolar forces. Mathematically, these forces can be described approximately by Morse curves, as discussed in Sec. 1, Chap. IX. On the other hand, the forces holding one molecule to another are simply the Van der Waals forces, which we have spoken about in Chap. XXII, and which are very much weaker than homopolar forces, as we saw from Table XXIV-3. It thus comes about that the crystals of these materials consist of compact molecules, spaced rather widely apart. Since the forces between molecules are so weak, the crystals melt at low temperatures, are very compressible, and are easily deformed or broken, in contrast to the ionic crystals with their considerable mechanical strength, low compressibility, and high melting points.

SEC. 4] HOMOPOLAR BOND AND MOLECULAR COMPOUNDS 415

In this section we shall discuss the structure of a few of the molecular crystals. We start with the inert gases. The atoms of these substances are spherical, and we should naturally expect that their crystals would be formed simply by piling the spheres on top of each other in the closest manner possible. This is, in fact, the case. There are two alternative lattices, corresponding to the closest packing of spheres. Of these, the inert gases choose the type called the face-centered cubic structure. This structure is shown in Fig. XXIV-1. In the first place, we can regard the structure as arising from a simple cubic lattice, as shown in (a). There is an atom at each corner of the cube and in the center of each face. Comparison with Fig. XXIII-1, showing the sodium chloride structure, will show that in the latter type of structure the positive ions by themselves, or the negative ions by themselves, form a face-centered cubic structure.

FIG. XXIV-1. The face-centered cubic structure. (a) Atoms at the corners of a cube and the centers of the faces. (b) The same atoms connected up in planes perpendicular to the cube diagonal. (c) View of successive planes looking along the cube diagonal, illustrating the close-packed nature of the structure.

Another way of looking at the face-centered cubic structure can be understood from Fig. XXIV-1(b). In this, we have drawn the atoms just as in (a) but have connected them up differently, so as to form two parallel triangles of six atoms each, oppositely oriented, with two extra atoms. If we had considered, not simply the cube of (a) but the whole crystal, each of these triangles would have been part of a whole plane of atoms. These same six atoms are shown by the heavy circles in (c), where we now look down along the normal to the planes of the triangles; that is, along the body diagonal of the cube, shown dotted in (a). In (c), we have drawn the circles representing the atoms large enough so that they touch, as if they were closely packed spheres. The next layer of spheres is shown in (c) by dotted lines, and we see that it is a layer similar to the first but shifted along, atoms of the second layer fitting into every other one of the depressions between atoms in the first layer. The third layer, of which only one atom is shown in (b), fits on top of (c) in a similar manner, the one atom shown in (b) going in the common center of the triangles of (c). After these three layers, the structure repeats, the fourth layer being just like the one drawn in heavy lines in (c). From this description in terms of Fig. XXIV-1(c), it is clear that the face-centered structure is a possible arrangement for close-packed spheres. Each atom has twelve equally spaced neighbors, six in the same plane in (c), three each in the planes directly above and directly below. As we have stated, the inert gases crystallize in the face-centered cubic structure. The distances between nearest neighbors are given in Table XXIV-4. In this table we give also the volume of the crystal per mole, computed in a simple way from the crystal structure, and finally we give the volume of the liquid per mole, from Table XXIV-1, for comparison.

TABLE XXIV-4. CRYSTALS OF INERT GASES

     Interatomic distance, A    Volume, cc. per mole    Volume of liquid, cc. per mole
Ne   3.20                       14                      16.7
A    3.84                       24.3                    28.1
Kr   3.94                       26.4                    38.9
Xe   4.37                       35.8                    47.5

We see that the volumes of the crystals are somewhat but not a great deal less than the volumes of the liquids, as we should probably expect, since in the liquids the same atoms are packed in a less orderly arrangement. The interatomic distances in these crystals are much greater than interatomic distances in any cases where the atoms are held by either homopolar or ionic bonds. This has already been commented on in Sec. 3 and has been explained by saying that the large attractive forces of the homopolar or ionic bonds pull atoms together, essentially compressing them, so that they get much closer together than when held only by the weak Van der Waals forces. The inert gases are the only strictly spherical molecules, but a number of the other gases have molecules nearly enough spherical to pack together in similar ways.
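The "simple way" of computing the molar volumes in Table XXIV-4 can be made explicit: in a face-centered cubic lattice with nearest-neighbor distance d, the volume per atom is d³/√2, so the molar volume is N₀d³/√2. A sketch checking the table's entries:

```python
import math

N0 = 6.03e23   # Avogadro's number, as used in the text

def fcc_molar_volume(d_angstrom):
    """Molar volume (cc) of a face-centered cubic crystal whose
    nearest-neighbor distance is d; the volume per atom is d**3/sqrt(2)."""
    d_cm = d_angstrom * 1e-8
    return N0 * d_cm**3 / math.sqrt(2)

for gas, d in [("Ne", 3.20), ("A", 3.84), ("Kr", 3.94), ("Xe", 4.37)]:
    print(gas, round(fcc_molar_volume(d), 1))
# agrees with the table's 14, 24.3, 26.4, 35.8 cc per mole to within
# about one per cent
```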
The hydrogen molecule, except at the very lowest temperatures, is in continual rotation, so that while it is not spherical at any instant, still it fills up a spherical volume on the average, the volume swept out by its two atoms when they are pivoted at the midpoint of the line joining them and are free to rotate in any plane about this point. Hydrogen molecules, then, pack as rigid spheres, but they adopt the other method of close packing, the so-called hexagonal close-packed structure. This is shown in Fig. XXIV-2. It starts with the same layer of atoms shown by the heavy lines in Fig. XXIV-1(c), then has the dotted layer of (c), but after these two layers it has another layer like the first one, and so on, having just two alternating types of layer instead of three as in the face-centered cubic structure. Part of the structure is shown in perspective in Fig. XXIV-2.

FIG. XXIV-2. Hexagonal close-packed structure.

This indicates plainly the hexagonal unit from which the structure takes its name. As we have stated, hydrogen crystallizes in this form, a hydrogen molecule being at each lattice point. The distance between molecules, on centers, is 3.75 A, and the volume per mole is 21.7 cc., to be compared with 26.3 cc. in the liquid. The molecules N₂ and CO, though they are rather far from spherical, still crystallize approximately though not exactly like close-packed spheres. Their crystal is a slightly distorted face-centered cubic structure, one dumbbell-shaped molecule being located at each lattice point. The molecules of those substances do not rotate enough to simulate a spherical shape at ordinary temperature, but instead they oscillate about definite directions in space. These directions are determined for the various molecules in the crystal in a rather complicated though regular way, there being several different orientations for different molecules.
The molecules are spaced about 3.96 A apart on centers, resulting in a volume of 27.0 cc. per mole, both for N₂ and CO, compared to 32.8 cc. per mole in the liquid for N₂, and 33 cc. for CO. Many molecules that are not spherical still rotate enough at high temperatures so that they simulate spheres, like the hydrogen molecule. In some cases, such molecules oscillate about definite directions at low temperatures, as with N₂ and CO, but simulate spheres at higher temperatures when they are rotating with more energy. In such cases, the substance has two crystal forms, and there is a transition from one to the other at a definite temperature. The low-temperature phase is likely to be complicated in structure, with the molecules pointing in definite directions, while the high-temperature phase is one of the close-packed structures. Hydrogen chloride HCl is a case in point. Below 98° abs., the molecules are hindered from rotating and the structure is a complicated one which has not been completely worked out. At this temperature there is a transition, and above 98° the molecules rotate freely and the substance crystallizes in a face-centered cubic structure. In many cases, where we might expect such a transition, it does not occur in the available temperature range. Thus we should expect that hydrogen, which shows free rotation under ordinary conditions, might conceivably show a transition to another structure with hindered rotation at low enough temperatures, while CO and N₂, with hindered rotation at ordinary temperatures, might have a transition to a state of free rotation at high enough temperatures. But the necessary temperatures might be above the melting point, in which case the transition could not really be observed. The diatomic molecules which show hindered rotation in the solid generally have quite complicated molecular crystals. This is true, for instance, of the halogens.
Cl2 forms a crystal composed of molecules, each of interatomic distance 1.82 A (compared to 1.98 A in the gas), arranged in a complicated way which we shall not describe. Iodine I2 forms a layer lattice. In Fig. XXIV-3 we show one of the layers, with the diatomic molecules arranged in two sorts of rows.

FIG. XXIV-3. Layer of molecules in the I2 structure.

The spacing between atoms in a molecule is 2.70 A, compared to 2.66 A in the gas. The spacing between different molecules in the same row is 4.79 A; between rows, 4.89 A on centers; between layers, 3.62 A. This structure is typical of the sort that one finds in other cases. Among the hydrides, water has received more attention than the others and is fairly well understood. The hydrides are hard to analyze by x-ray diffraction methods, because the hydrogens are not shown by the x-ray photographs; we must use other evidence to find where they are located. As we have mentioned earlier, we find that each oxygen is tetrahedrally surrounded by four other oxygens. Between each pair of oxygens is a hydrogen, two of the four hydrogens near an oxygen being joined to it to form a water molecule, the other two being attached to two of the four neighboring oxygens to form part of their molecules. This structure, as we have mentioned in Sec. 3, puts each hydrogen of one molecule near the oxygen of another, so that their opposite electrical charges can attract each other, helping to hold the crystal together almost as if it were an ionic crystal. In Fig. XXIV-4 we show a layer of the crystal, indicating the oxygens by spheres, with vectors drawn out to the hydrogens. Three neighbors of the upper molecules in the layer are shown; the fourth lies directly above and is not shown. The molecules are spaced 2.76 A on centers.
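The looseness of the ice structure can be made quantitative. In a tetrahedrally coordinated, diamond-like network with nearest-neighbor distance d, the volume per molecule is 8 d^3/(3 sqrt(3)). A minimal sketch comparing that with what ideal close packing would give at the same 2.76 A spacing (the spacing is from the text; the tetrahedral figure comes out close to the 20.0 cc. per mole quoted for ice below, while the close-packed figure shows how much room the open network wastes):

```python
import math

AVOGADRO = 6.022e23
CC_PER_A3 = 1e-24  # 1 A^3 = 1e-24 cc

def tetrahedral_molar_volume(d):
    """Molar volume (cc) of a diamond-like tetrahedral network
    with nearest-neighbor spacing d (in A)."""
    return AVOGADRO * (8 * d**3 / (3 * math.sqrt(3))) * CC_PER_A3

def close_packed_molar_volume(d):
    """Molar volume (cc) of ideally close-packed spheres at spacing d."""
    return AVOGADRO * (d**3 / math.sqrt(2)) * CC_PER_A3

d = 2.76  # oxygen-oxygen spacing in ice, A
print(tetrahedral_molar_volume(d))   # about 19.5 cc per mole
print(close_packed_molar_volume(d))  # about 8.9 cc if close-packed instead
```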
It is interesting to notice that this is slightly less than twice the ionic radius 1.45 A of oxygen given in Table XXIII-2, showing that ice is not entirely different from an ionic crystal. The molecules have such a spacing that the volume of the solid is 20.0 cc. per mole, compared to 18 cc. per mole for the liquid, agreeing with the well-known fact that ice is less dense than water. This is because the molecules in ice are unusually loosely packed. In water the individual molecules are actually farther apart, about 2.90 A on centers compared to 2.76 A in ice, but they are packed so much more efficiently that there are more molecules in less space and a greater density.

FIG. XXIV-4. Layer of ice structure.

In addition to showing how each oxygen is surrounded by four others, Fig. XXIV-4 shows the hexagonal structure which is so characteristic of ice crystals and which is well known from the form of snowflakes.

CHAPTER XXV

ORGANIC MOLECULES AND THEIR CRYSTALS

In Chap. XXIV we have been talking about substances held together by homopolar bonds. We have had a rather small list of compounds to work on; but we have hardly touched the most fertile field for discussing the homopolar bond. Organic chemistry of course presents the best organized and most extensive field for the theory of homopolar valence. The carbon atom can form four single bonds, which tend to be oriented toward the four corners of a tetrahedron, and this furnishes the fundamental fact on which the chemistry of the aliphatic or chain compounds is based. The great difference between organic and inorganic chemistry is the way in which more and more carbons can be bonded together to form great chains, resulting in molecules of great complexity. These carbon chains form the framework of the organic compounds, the other atoms merely being attached to the carbons in most cases. In the first section, we discuss the ways in which carbons can be joined together.

1. Carbon Bonding in Aliphatic Molecules.
In the first place, two carbon atoms can join together, as for instance in ethane, by a single bond. Thus ethane has the structure H3C:CH3, each carbon carrying three hydrogens, as indicated in Sec. 1, Chap. XXIV. The carbon-carbon distance in this case is about 1.54 A, a value approximately correct for the carbon-carbon distance in all aliphatic molecules with single bonds. The hydrogens are arranged around the carbons so that the three hydrogens and one carbon surrounding either carbon have approximately tetrahedral symmetry. The carbon-hydrogen distance is presumably about 1.1 A, as in methane. Unfortunately this carbon-hydrogen distance is almost impossible to determine accurately, since the hydrogen atom represents too small a concentration of electrons to be shown in x-ray or electron diffraction pictures. Each of the CH3 groups is able to rotate almost freely about the axis joining the two carbons, as shown in Fig. XXV-1. Thus there is no fixed relation between the positions of the hydrogens on one carbon and those on another.

FIG. XXV-1. The ethane molecule, CH3-CH3.

If more than two carbon atoms join together to form a chain, they necessarily form a zigzag structure, on account of the tetrahedral angle between bonds. Thus in Fig. XXV-2 we show propane CH3CH2CH3 and butane CH3CH2CH2CH3. The carbon-carbon distances as before are about 1.54 A and the carbon-hydrogen distances about 1.1 A. On account of this zigzag nature, the chains with an even number of carbons act differently from those with an odd number, and there is an alternation in physical properties as we go up the series of chain compounds, the even-numbered compounds falling on one curve, the odd-numbered on another.

FIG. XXV-2. The molecules of propane, CH3CH2CH3, and butane, CH3CH2CH2CH3.

Chains of practically indefinite length can be built up, and it is interesting to see how the physical properties of the substances change
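The tetrahedral angle that fixes all of this geometry follows from symmetry alone: the four bond directions point from the center of a cube toward alternate corners, and the angle between any two of them is arccos(-1/3), about 109.5 degrees. A minimal sketch:

```python
import math

# Four alternate corners of a cube centered at the origin:
# these are the four tetrahedral bond directions from a carbon atom.
bonds = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_between(u, v):
    """Angle in degrees between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# Every pair of bond directions makes the same angle, arccos(-1/3):
for i in range(4):
    for j in range(i + 1, 4):
        print(round(angle_between(bonds[i], bonds[j]), 2))  # 109.47 each time
```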
as the chains get longer and longer. For example, in Fig. XXV-3 we show the melting point and boiling point of the chain compounds as a function of the number of carbon atoms in the chain. The alternation of which we have spoken is obvious in the melting points, though not in the boiling points.

FIG. XXV-3. Melting and boiling points of chain compounds, as a function of the number of carbon atoms in the chain.

This can be explained as follows. In the solids, as we shall mention later, the zigzag molecules are arranged with their carbons all in a plane, so that the line joining carbons to their neighbors makes a sort of saw-tooth shape. Then the molecules with an odd number of carbons, in which the lines joining the end carbons to their neighbors point in different directions at the two ends of the chain, are definitely different from those with an even number, in which the end lines point in the same direction at the two ends of the chain. Thus we can expect the solids to show an alternation in properties. But in the liquid, or the gas, the possibility of free rotation about a carbon-carbon bond results in a great flexibility of the chain. It can turn and twist, forming anything but an approximately straight chain, and the result is that the two ends will be oriented quite independently of each other. Thus there will be no average difference, in the liquid or gas, between the chains with even and odd numbers of carbons. Now the melting point measures the equilibrium between liquid and solid, so that the alternation in the properties of the solid will show in the curve, but the boiling point depends only on liquid and gas and will not show the alternation. In addition to this feature, Fig. XXV-3 is interesting in that it shows that the melting and boiling points of the chain hydrocarbons increase rapidly as the chain gets longer.
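The even-odd distinction in the planar zigzag can be made concrete with coordinates. Successive carbon-carbon bonds alternate between two directions inclined at 180 - 109.47 = 70.53 degrees to each other; a chain of n carbons has n - 1 bonds, so the first and last bonds are parallel when n is even and inclined when n is odd. A minimal sketch, using the 1.54 A bond length quoted above:

```python
import math

BOND = 1.54  # carbon-carbon bond length, A
HALF = math.radians((180.0 - 109.47) / 2)  # half-angle of the zigzag

# The two alternating bond directions of the planar zigzag chain.
UP = (BOND * math.cos(HALF), BOND * math.sin(HALF))
DOWN = (BOND * math.cos(HALF), -BOND * math.sin(HALF))

def terminal_bond_cosine(n_carbons):
    """Cosine of the angle between the first and last C-C bonds
    of a planar zigzag chain with n_carbons atoms."""
    first = UP
    # Bonds alternate UP, DOWN, UP, ...; the last bond has index n - 2.
    last = UP if (n_carbons - 2) % 2 == 0 else DOWN
    dot = first[0] * last[0] + first[1] * last[1]
    return dot / BOND**2

print(terminal_bond_cosine(4))  # butane (even): 1.0, end bonds parallel
print(terminal_bond_cosine(3))  # propane (odd): about 1/3, end bonds inclined
```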
Not only that, but the viscosity of the liquids goes up as carbons are added to the chain. These effects are qualitatively reasonable. Anything tending to hold molecules together tends to increase the melting and boiling points. Now a chain hydrocarbon, as far as the carbons are concerned, is much like a string of methane molecules fastened together like a string of beads. In a very rough way, the valence forces hold the carbons and their hydrogens together tightly, whereas the Van der Waals forces, the only ones operative in methane, hold that substance together only very loosely. It is only reasonable, then, that the boiling points of these long chain hydrocarbons should be higher than for the short ones. The increased viscosity of the liquids is also reasonable. After all, a liquid made of long flexible chains will certainly get tangled up, just as a mass of threads will get tangled and knotted. Quite literally, if the chains are long enough, the molecules will tie each other up in knots and prevent flow of one molecule past another.

FIG. XXV-4. Isomers of hexane, C6H14.

We have spoken of the simple chain hydrocarbons, in which there is a single chain of carbons, with their attached hydrogens. These arise when each carbon of the chain, except the end ones, is joined to only two other carbons and two hydrogens. But there is nothing to prevent a carbon being bonded to three or to four other carbons. In other words, branch chains can be formed. In this way new branching compounds are formed, which in general will have the same chemical composition as some one of the simple chains, but of course will be rather different in physical properties.
Two such compounds, having the same number of atoms of each element but with different arrangements, are called isomers. As an illustration, Fig. XXV-4 shows the formulas for the isomers of hexane. Of course, on account of the possibility of rotation about C-C bonds, these molecules can assume many complicated shapes. When we begin to get complicated side chains, however, the possibility of rotation is somewhat diminished, since different parts of the molecule can get into each other's way in some orientations. This effect is called steric hindrance, and it operates to stiffen the molecule to some extent. It can hardly stiffen a long chain to any great extent, however, and the various branches of a hydrocarbon with long branches are presumably very flexible. If the branching process extends very far, there is no reason why the extremity of one branch cannot join onto another, forming a closed loop. The simplest compound in which this occurs is cyclohexane, shown in Fig. XXV-5. A geometrical investigation, made most easily with a model, will show that with less than the six carbon atoms of cyclohexane it involves considerable distortion of the tetrahedral bond angles to form a closed ring, but that six carbons can join up with no distortion of the tetrahedral angles.

FIG. XXV-5. The cyclohexane molecule, (CH2)6. FIG. XXV-6. The diamond structure.

The atoms are rigidly held in position by their bonding, in this case, so that cyclohexane is a much more rigid molecule than the flexible chain hydrocarbons. The branching process and the formation of closed loops can continue much further than it is carried in cyclohexane. The ultimate in this line is the diamond. This is the structure obtained when every carbon is joined tetrahedrally to four other carbons. It makes a continuous lattice filling space, as shown in Fig. XXV-6, which is essentially the same as the zincblende structure of Fig.
XXIII-3, except that it is formed of only one type of atom. The type of rigidity present in cyclohexane is found here in its most extreme form. The structure is braced in every direction, and the result is that diamond is the hardest and most rigid material known. One can trace out hexagons like cyclohexane in the diamond crystal; one only has to replace the hydrogens in cyclohexane by carbons and continue the lattice indefinitely to get diamond. Not only the arrangement but the lattice spacing of diamond is the same as in the aliphatic chain compounds: the carbon-carbon distance in diamond is 1.54 A, just as in the chains. The possibility of carbon chains is what leads to the richness of organic chemistry. A diamond is really a molecule of visible dimensions, held together by just the same forces acting in small molecules. There is no reason why there cannot be all intermediate stages between the small molecules made of a few atoms which we usually think about and molecules of enormous size. Obviously carbon atoms can link themselves together in innumerable ways, if we only have enough of them. The organic chemists have discovered a very great number of kinds of molecules, but there seems no reason why they cannot go on forever without exhausting the possibilities. For by the time a structure is built up of carbon atoms, many atoms in length, one end can no longer be expected to know what the other end is doing. There is no reason why one molecule cannot add chains in one way, another in another, and form a continually increasing variety of new molecules. One gets to the point very easily where it hardly pays to speak about molecules of a single type at all, but where one may have chains of indefinite length and things of that sort. Such situations are presumably met in problems of living matter, where many molecules are of almost microscopic size.
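The 1.54 A carbon-carbon distance fixes the density of diamond completely. In the diamond lattice the nearest-neighbor distance d and the cubic cell edge a are related by d = a sqrt(3)/4, and the cubic cell contains eight atoms. A minimal sketch (atomic weight 12.01 for carbon; the result comes out close to the handbook density of diamond, about 3.5 g per cc):

```python
import math

AVOGADRO = 6.022e23
CARBON_MASS = 12.01  # grams per mole

def diamond_density(d_angstrom):
    """Density (g/cc) of the diamond structure with
    nearest-neighbor distance d (in A)."""
    a = 4 * d_angstrom / math.sqrt(3)   # cubic cell edge, A
    cell_volume_cc = a**3 * 1e-24       # cell volume in cc
    cell_mass = 8 * CARBON_MASS / AVOGADRO  # 8 atoms per cubic cell
    return cell_mass / cell_volume_cc

print(diamond_density(1.54))  # about 3.5 g/cc
```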
We shall meet one other field in which we have similar chain formation, and therefore a great variety of compounds: the silicates, which form a chain of alternating silicons and oxygens. As the carbon chain leads to the great variety of materials in organic chemistry and living matter, so we shall see that the silicon-oxygen chain leads to the great variety of materials in the field of mineralogy.

2. Organic Radicals. The carbon chains form the skeleton, so to speak, of aliphatic organic compounds. But in place of the hydrogens which are attached to them in the simple hydrocarbons, there are many organic radicals which can be bonded to the carbons in various positions of the molecule, thus greatly increasing the complexity of the possible compounds. We shall mention only a few of the simple radicals in this section. In the first place, a single electronegative atom can act like an organic radical, being substituted for a hydrogen. The monovalent halogens, F, Cl, Br, I, replace hydrogen freely. Like hydrogen, they can form a single homopolar bond with carbon. For instance, methyl chloride has the structure H3C:Cl. Like methane, it is tetrahedral. The carbon-hydrogen distance is presumably about 1.1 A, as in methane, and the carbon-chlorine distance is 1.77 A. Similarly two, three, or all four of the hydrogens can be replaced by a halogen atom, not necessarily all the same halogens. In these compounds, the carbon-halogen distances are always approximately the following:

Carbon-fluorine 1.36 A
Carbon-chlorine 1.77 A
Carbon-bromine 1.93 A
Carbon-iodine 2.28 A     (2.1)

These distances are not far from the ionic radii of Table XXIII-2, which were 1.30, 1.80, 1.95, 2.20 A respectively for F-, Cl-, Br-, I-.
Since the carbon certainly has nonvanishing dimensions, this means that in these bonds there is considerable shrinkage from the atomic distances in ionic crystals, but not so much shrinkage as in some other cases, so that we should not be surprised to find that the halogen atoms have quite a little of the properties of negative ions. As a matter of fact, these compounds have rather strong dipole moments: in CH3Cl, for instance, we see from Table XXIV-1 that the moment is 1.97 X 10^-18 absolute units, corresponding to about 0.23 of an electron at the distance 1.77 A. We may conclude, then, that the halogen atoms pull the electrons that they share with the methyl or other organic group rather strongly toward them, so that they have quite a little the structure of negative ions. In the matter of physical properties, we can see from Table XXIV-1 that replacing hydrogen by halogens increases both Van der Waals a and b, as we should expect from the fact that the halogen atoms are much bigger than hydrogen. Thus for the series CH4, CH3Cl, CH2Cl2, CHCl3, CCl4, we have the properties shown in Table XXV-1.

TABLE XXV-1. PROPERTIES OF SUBSTITUTED METHANES

              a                b       Boiling point, C
CH4        2.28 X 10^12      42.6         -161.4
CH3Cl      7.56              64.5          -23.7
CH2Cl2
CHCl3     15.38             102             61.2
CCl4      20.65             138             76

The b's increase fairly regularly as more chlorines are added, and the amount of increase per chlorine is not far from 28 cc. per mole, which is half the b value for Cl2 (56.2, from Table XXIV-1). The increase in the a's and b's leads to an increase in boiling points, as is shown, and as is natural with larger and heavier molecules. Divalent and trivalent as well as monovalent electronegative atoms can also attach themselves to organic carbon-hydrogen chains. Thus, in particular, oxygen plays a very important part in organic compounds.
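The figure of 0.23 of an electron quoted above is simply the ratio of the observed dipole moment to the moment of a full electronic charge separated by the bond length. A minimal sketch in cgs units (electronic charge 4.803 X 10^-10 esu):

```python
E_ESU = 4.803e-10  # electronic charge in esu

def fractional_charge(dipole_esu_cm, bond_length_cm):
    """Effective charge transfer, as a fraction of one electron,
    implied by a measured dipole moment over a given bond length."""
    return dipole_esu_cm / (E_ESU * bond_length_cm)

# CH3Cl: moment 1.97e-18 esu cm, C-Cl distance 1.77 A = 1.77e-8 cm
print(round(fractional_charge(1.97e-18, 1.77e-8), 2))  # 0.23
```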
If an oxygen atom attaches itself by a single bond to a carbon, it has another bond free, with which to attach itself to something else. This second bond may go to a hydrogen, in which case we have the organic OH group, forming an alcohol. Thus methyl alcohol is H3C:OH. The OH group, like the halogens, though it does not exist as a separate ion in the organic molecules, still has a considerable tendency to draw negative charge to it, pulling the shared electrons away from the methyl group. Thus the dipole moment of methyl alcohol is 1.73 X 10^-18, almost as large as that of methyl chloride CH3Cl. Instead of being bound to one organic group and one hydrogen, as in the alcohols, the oxygen may join to two simple organic groups, forming an ether, like dimethyl ether (CH3)O(CH3), diethyl ether (C2H5)O(C2H5), etc. Here the organic groups come off from the oxygen more or less at tetrahedral angles. The oxygen in the ethers also has some tendency to draw negative charge to itself, so that the ethers have a dipole moment, though not quite so large as in the alcohols. It is plain from this discussion that the alcohols and ethers can in a way be derived from water by replacing one or both of the hydrogens by methyl, ethyl, or more complicated groups. The carbon-oxygen distances in these compounds are about 1.44 A and the angles between the bonds are roughly the tetrahedral angle, though there are considerable variations both in distance and in angle from one compound to another. As the alcohols and ethers can be derived from water by replacing one or both of the hydrogens, so the amines come from ammonia by replacing one, two, or three of the hydrogens by organic groups. In Table XXIV-1, for instance, we give properties of methylamine CH3NH2, dimethylamine (CH3)2NH, and trimethylamine (CH3)3N. Evidently a complicated set of compounds can be built up in this way, using more and more complicated groups to tie to the nitrogen atom.
One of the most important organic radicals is the carboxyl group, COOH. This group attaches itself by a single bond to any carbon atom, forming an organic acid. For instance, acetic acid has the structure CH3COOH. That is, one oxygen is held to the carbon by a double bond, the other by a single bond, so that this latter can also attach itself to a hydrogen. Even simpler is formic acid, HCOOH. The conspicuous tendency of the carboxyl group is for it to lose the H as a positive ion, leaving the remainder of the molecule as a real negative ion, as for example (CH3COO)-. Then a metallic ion, for instance sodium, can attach itself, forming for instance sodium acetate, (CH3COO)-Na+. While we have written this as if it were a real ionic compound, to emphasize this quality more than in most organic compounds, still such a substance is not as definitely ionic as in the inorganic salts. For the sodium in this case furnishes a single electron, just like hydrogen, so that we can consider it as forming a homopolar bond with the oxygen. In sodium acetate