Marine Biological Laboratory Library, Woods Hole, Mass. Presented by Prentice-Hall, Inc., July 30, 1965.

PRENTICE-HALL BIOLOGICAL SCIENCE SERIES
William D. McElroy and Carl P. Swanson, Editors

CLASSIC PAPERS IN GENETICS, by James A. Peters
EXPERIMENTAL BIOLOGY, by Richard W. Van Norman
MECHANISMS OF BODY FUNCTIONS, by Dexter M. Easton
MILESTONES IN MICROBIOLOGY, by Thomas D. Brock
PRINCIPLES OF BIOLOGY, by Neal D. Buffaloe
A SYNTHESIS OF EVOLUTIONARY THEORY, by Herbert H. Ross

Concepts of Modern Biology Series
BEHAVIORAL ASPECTS OF ECOLOGY,* by Peter H. Klopfer

Foundations of Modern Biology Series
ADAPTATION, by Bruce Wallace and A. M. Srb
ANIMAL BEHAVIOR, by Vincent Dethier and Eliot Stellar
ANIMAL DIVERSITY, by Earl D. Hanson
ANIMAL GROWTH AND DEVELOPMENT, by Maurice Sussman
ANIMAL PHYSIOLOGY, by Knut Schmidt-Nielsen
THE CELL, by Carl P. Swanson
CELLULAR PHYSIOLOGY AND BIOCHEMISTRY, by William D. McElroy
HEREDITY, by David M. Bonner
THE LIFE OF THE GREEN PLANT, by Arthur W. Galston
MAN IN NATURE, by Marston Bates
THE PLANT KINGDOM, by Harold C. Bold

* This title is also in the Prentice-Hall International Series in Biological Science. Prentice-Hall, Inc.; Prentice-Hall International, United Kingdom and Eire; Prentice-Hall of Canada, Ltd., Canada; Berliner Union, West Germany and Austria.

PRENTICE-HALL INTERNATIONAL, London • Tokyo • Sydney • Paris
PRENTICE-HALL OF CANADA, LTD.
PRENTICE-HALL DE MEXICO, S.A.

Experimental Biology

Richard W. Van Norman
Associate Professor of Experimental Biology, University of Utah

Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1963

to D. K. V.

© 1963 by Prentice-Hall, Inc., Englewood Cliffs, N. J. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher.
Library of Congress Catalog Card Number 63-7313. Printed in the United States of America. 29447-C

Preface

Modern biology is the product of years of evolution. Many problems have been solved by the classical descriptive approach. More and more, the remaining questions are intimately concerned with the responses of the individual cell, or with subtle interrelationships. These problems remain because they are difficult to observe. It has become increasingly important to rely upon quantitative observations and upon the tools provided by chemistry, physics, and mathematics.

The great middle ground between the biological and physical sciences can be approached from either side. Biologists continue the attack using their own methods. Numerous competent physical scientists have been challenged by the intricacy of the living organism. Unfortunately, communications between these two groups are often difficult. The biologist considers the physical scientist with awe, and the physical scientist considers the biologist with either awe or disdain. Some of us are foolhardy enough to think that we can operate at the middle of this area and can arrive at answers most directly. We prefer to be called physically-oriented biologists rather than biophysicists or biochemists. But the experimental approach is the same.

A fraction of the failure of communications between the biological and physical sciences results from impressions or prejudices developed during the training period. The present work was undertaken in an attempt to introduce the biologist and the physical scientist to each other at an early age, before these preconceptions have a chance to develop. The chief hope is that young biologists can learn to appreciate the physical approach to problems and that the young physical scientist can learn some of the fascination of biological research. The book has grown out of a course that is offered at the University of Utah.
This course in Experimental Biology is itself an experimental introduction to biological research. Experience has taught us that undergraduates can apply the experimental approach to biological problems. This book was written to help make the learning processes faster and easier. In addition, some graduate students seem not to know some of the things they should have learned as undergraduates. But where and when were they supposed to learn these things? If this book can help to start their research with fewer mistakes, I shall have earned a bonus.

There is no intention of assembling a set of instructions which, if scrupulously followed, will guarantee success in research. Research must be a creative, personal process. In our investigations we learn by our failures as well as our successes. Therefore, the most effective training program is a number of years of experience. On the other hand, young minds are agile and imaginative and full of potential contributions. A general guide may help these minds to produce sooner, to avoid some of the failures, and to escape from the necessity of learning everything the hard way.

The book is intended as a starting point rather than an end in itself. With this thought in mind, most of the chapters have references to more advanced discussions. Some of the advanced discussions referred to will be very difficult for the usual bright college freshman or sophomore to read. By introducing the subject in simple terms, perhaps this book can make the more advanced discussions easier. Some readers may object that the level of writing is too low. In that case, I recommend skimming rapidly and proceeding immediately to the more advanced works referred to. Anyone who can start at a higher level should surely do so. If some statements seem ridiculously obvious, remember that they may not be obvious to everyone.

I have rearranged the Table of Contents, that is, the order of the chapters, several times.
The problem is that the various topics are so closely interrelated that it is difficult to speak of one subject without also mentioning the others. Probably Experimental Design should come first, for example, but the subject is difficult and can best be illustrated by examples which will be unfamiliar at the beginning. The best organization, apparently, would be to start in the middle and proceed in all directions. The final schedule of topics, then, is a compromise. The first few chapters form a general introduction to quantitative, experimental biology. The next major section is devoted more specifically to those techniques and instruments which are so important a part of the everyday work of the laboratory. Finally, the data obtainable by such techniques are treated mathematically and statistically, and converted into something meaningful and useful.

Many kind people have assisted in the preparation. A few sharp-eyed former students may recognize their own data in some of the examples. Many of these same students were among the best critics of most of the chapters. They rarely refrained from telling me about a section that was not clear. Dr. John D. Spikes has been helpful by providing facilities, suggestions, advice, and criticism. We share many opinions, but many of them I think I learned from him. To Mrs. Elizabeth T. Cole goes my gratitude for her ceaseless interest and encouragement, not to mention the fact that she typed most of the manuscript several times. My most generous and persistent helper was Dolores K. Van Norman, who contributed more than she knew. In addition to typing the final manuscript, she read, criticized, encouraged, and assisted in so many other ways that the final product is almost as much hers as mine.

Richard W. Van Norman
Salt Lake City, Utah

Contents

CHAPTER 1 • Science
What is science? What are the goals of scientific activity? What is research? What is a scientist? What are the methods of science?
Scientific method. Limitations of science. Selected references.

CHAPTER 2 • Research in Biology 12
Descriptive and experimental biology. Difficulties in biology. The biologist's assumptions. Biological problems. Selected references.

CHAPTER 3 • The Biological Literature 19
Laboratory records. Technical papers. Reviews. Monographs, symposia, books. Abstract services. Other literature. Searching the literature. Other sources of information. Reprints. Record systems. Selected references.

CHAPTER 4 • Measurements 30
Standards. Examples of measurement. Volumetric glassware. Theory of measurement. Selected references.

CHAPTER 5 • Selection of Techniques 53
Some questions worth asking. Instrument design. Assembly of components. Selected references.

CHAPTER 6 • Selection and Preparation of Organisms 61
Essential features of experimental organisms. Desirable features. General comments on choice of organism. Preparation of organisms for experiment. Preparation of parts of cells. Preparation of chloroplast fragments — an example. Selected references.

CHAPTER 7 • Centrifuges 78
Centrifugal force. Sedimentation. Types of centrifuges.

CHAPTER 8 • Microscopy 83
The compound microscope. Optical theory. Magnification and resolution. Aberrations. Preparation of materials. Use of the microscope. The phase contrast microscope. The polarizing microscope. The electron microscope. Selected references.

CHAPTER 9 • Colorimetry-Spectrophotometry 110
General considerations. Analytical instruments. Recording spectrophotometers. Ultraviolet and infrared spectrophotometers. Measurements of concentration. Use of the instruments. Fluorescence measurements. Flame photometry. Selected references.

CHAPTER 10 • Measurements of Gas Exchange 130
The manometric technique. The infrared gas analyzer. Magnetic oxygen analyzer. Selected references.

CHAPTER 11 • Chromatography 143
Separation of plant pigments. Physical principles involved in chromatography.
Practical chromatography. Paper electrophoresis. Gas chromatography. Selected references.

CHAPTER 12 • Isotopic Tracers 160
The tracer experiment. Radioactivity and radiation. The development of tracer experimentation. Selection of tracer isotopes. Commonly used tracers. Detection methods. Laboratory safety. Selected references.

CHAPTER 13 • Electrical Measurements 173
Electrical theory. Electronic systems. Input transducers. Output transducers. Power supplies. Amplifiers. Potentiometric techniques. Circuit diagrams. Selected references.

CHAPTER 14 • Calculation of Data 190
Amounts of biological material. Manipulations of raw data. Aids in calculation. Mathematical treatments.

CHAPTER 15 • Statistical Treatments 199
Probability. The normal curve. Parameters of samples. Tests of significance. Analysis of variance. Regression and correlation. Other statistics. The application of statistics. Selected references.

CHAPTER 16 • Experimental Design 212
Sampling and randomization. Some simple designs. More complex designs. Factorial experiments. Selected references.

CHAPTER 17 • The Manuscript 223
Organization of the paper. The job of writing. Presentation of results. Typing the manuscript. Title and abstract. Selected references.

Bibliography 232
Index 235

CHAPTER 1

Science

Biology has become a quantitative, experimental science. So much emphasis is now being placed upon the use of mathematical, physical, and chemical tools in the attempt to learn about living things that the beginner may be bewildered. It is the purpose of this book to provide a starting point from which modern research in biology can be explored.

Science surrounds us in modern America. Every day we are confronted with some "scientific" marvel. Today's children probably know more about space flight, radar, and viruses than science itself knew a generation ago. Science is prominent in all our lives, and it is the duty of any educated person to find out what science is about.
As a beginning, therefore, one should ask such questions as: What is science? What are the goals of scientific activity? What is research? Is all of science devoted to improving human comfort and convenience? What is the difference between the scientist and an "ordinary man"? By what methods are the goals of science achieved? Does science have any limitations? Let us briefly explore these and some other questions.

What is science?

Science has been with us for a very long time, and yet it is difficult to agree on a precise definition. According to the Latin origins, the word means "knowledge," or better, "systematized knowledge." Science then would be a set of facts, understandings, and explanations arranged in some orderly manner. But is this definition adequate? I have knowledge that I prefer Brahms over Bartok, and I can give a dozen systematically arranged reasons why. I "know" that the mountains are beautiful any time of year; given a year I could prove it to you. Does this "knowing" make me a scientist? Perhaps, but usually such matters of taste and judgment are excluded from science. Apparently we must specify the subject matter of science also.

To many people science is not "knowledge" at all; science is an activity. "Science is investigation"; "Science is discovering new knowledge"; "Science is what scientists do." All these definitions have been suggested by college students. The last leads to some difficulties in logic when the next question, "What is a scientist?" is answered, "A person who works in science." Probably the easiest way to resolve this problem is to refer to the work of a philosopher of science who has given considerable thought to this subject.
Although we have our choice of many of these, Lachman presents a concise yet comprehensive statement: "Science refers to those systematically organized bodies of accumulated knowledge concerning the finite universe which have been derived exclusively through techniques of direct objective observation." [1] This description implies a continuous activity of adding to the body of knowledge.

Inasmuch as science concerns itself with the whole finite universe, no one can hope to comprehend it all. We must recognize our human limitations; for this reason, science is subdivided in a number of ways. There are physical sciences, biological sciences, and behavioral or social sciences. There is pure or basic science and its partner, applied science or technology. Biological sciences are also subdivided — on several different bases — into specialties, examples of which are given here:

(1) On the basis of the kind of organisms studied:
Zoology — animals
Botany — plants
Entomology — insects
Bacteriology — bacteria
Protozoology — protozoa
Bryology — bryophytes

(2) On the basis of the approach or the features of the organisms:
Taxonomy — naming and classification
Morphology — structure and form
Physiology — functions or processes
Ecology — relationships to environment

(3) On the basis of the outcome of activity:
Basic Science — the outcome is knowledge
Applied Science — the outcome is the solution of human problems

(4) On the basis of the technique of investigation:
Descriptive — as in taxonomy or morphology
Experimental — as in physiology or genetics

[1] Sheldon J. Lachman, The Foundations of Science (Detroit: Hamilton Press, 1956), p. 15.

Obviously, these lists, intended only as examples, are not complete. Even so, there is a certain amount of overlapping. An individual might be a basic scientist working by experimental methods on the physiology of bacteria. However, no professional biologist can completely lack interest in all fields of science except his specialty.
In addition, certain biologists must be concerned with physical and chemical relationships. Should biophysicists and biochemists be called biologists or physical scientists?

What are the goals of scientific activity?

The public impression of the goal of science seems to be approximately the following: Science is working hard (1) to find ways of raising enough food to feed all the people, (2) to improve medicine so that the people can stay alive to eat the food, and then (3) to develop more terrible weapons to wipe out this larger, healthier population. These and other technological aspects of science make good news stories, even though they give a distorted picture of science.

Actually the goals of all scientific activities are the same: comprehension or understanding of the universe. Once an understanding of a particular phenomenon is gained, the solution of human problems may follow as a beneficial by-product. It is virtually impossible for one person to perform the whole system of activities involved in making some discovery about the universe and then to adapt this knowledge for the direct benefit of man. People specialize in one phase of activity or the other. Much of the scientific research introduced in this book is basic science, or an intellectual search for knowledge and comprehension of the living world. Ultimately, of course, all these understandings will be useful, but we leave it to others to find out just how. Fundamental biology is the foundation upon which a structure of applications useful to man is built by the applied biologists, chiefly those in medicine and agriculture.

The ultimate outcome of science would be a set of explanatory expressions or theories which would very nearly coincide with the "natural laws" that govern the universe. Once such a set of explanations is formulated, it will be easy to predict the results of any set of circumstances and then to solve any human problems.
Of course, this ideal need not be anticipated in the next few years.

What is research?

This term, which can be applied to any careful study or search for knowledge, is frequently used to indicate careful library study to learn what others have thought and known before. There is a certain amount of this type of study in science because progress is a matter of building on what has been learned earlier. Usually, however, the scientist uses "research" in a slightly different way. To him, research is seeking for what was previously unknown. It involves an organized program of observation and study, resulting in new knowledge. A piece of scientific research may be conducted in a laboratory or "in the field" by one person or by a group or "team." It may be motivated by pure curiosity or by a human problem that needs a solution. In the text to follow, "research" is used almost entirely in this latter sense as a program designed to add to our knowledge facts that were not known before.

What is a scientist?

If any one attribute could characterize a scientist, it would be his curiosity. I suspect that all of us have curiosity, but often it is lost somewhere in the process of growing up. A non-scientist may be content to notice an unusual event; the scientist is likely to follow through by asking, "How (or why) did this happen?"

Scientists, to be sure, have other attributes, such as reasonable intelligence, extensive training, the ability to organize ideas and information, an interest in the natural material world, and a little persistence. Fortunately for most of us, a scientist does not have to be a genius.

The personal philosophies of scientists are as individual as the scientists themselves. Each must find his own answers to a number of very serious questions. Science is supposed to be completely separated from morals and value judgments. But does this dichotomy mean that scientists can have no opinions about morals?
Is it right for a scientist to seek knowledge for its own sake, feeling no obligation to point out the possible values of his findings? Should a new discovery about some aspect of cell division be considered exciting for its own sake and not because of its possible relationship to cancer? In contrast, is it possible for a scientist to feel no guilt when the results of his work are used for the purposes of destruction? The scientist is in a difficult position philosophically, because it is both necessary and impossible to separate the scientist as a scientist from the scientist as a human being. Since each must work out his own answers to these weighty problems, it is not surprising that there is no archetype of all scientists.

What are the methods of science?

Science is unique among the fields of intellectual activity in that acquisitions of knowledge occur only through objective observation. Philosophy, literature, art, all these are dependent upon the creative ability of the philosophers, writers, and artists, and they may or may not have a direct relationship to reality. Science also depends on creative mental talent, because observations can be useful only if they become part of generalizations that explain, but all of science is directly related to the real world and is dependent on this real world for its subject matter. As Santayana says, "Science contains all trustworthy knowledge." Scientific information may be discussed, interpreted, reorganized, or rejected, but in the final analysis, the only source of new information is observation.

Observation is the act (or the result) of careful, attentive watching of a natural event. Any of the senses might be used in observation, even though we commonly think of it as a result of vision. Many observations have been made with the unaided senses, but now scientific observation more commonly requires the assistance of instruments of various kinds.
Magnification makes possible the observation of things too small to be seen by the eye alone. Electrical instruments allow us to observe events for which there are no senses. One of the principal aims of this book is to point out some of the methods by which we observe.

Early biology was scientific, just as modern biology is. The classical biologists observed and then formed explanations for their observations. Even today some aspects of biology are best observed in their natural state, with a minimum of interference from the observers. One pattern of observation, which might be called descriptive biology, simply studies and watches living things as they occur naturally and then describes these organisms. The pattern of distribution of a group of animals over a part of the earth's surface would be studied this way. The biologist, noting and recording most of the climatic, geological, and biological features of this area, would then be in a position to explain what factors influence the distribution of these animals. Or, as another example, the careful study, measurement, and description of a large number of individual plants could lead a botanist to an understanding of the natural kinship of various groups. Indeed, several hundred years of observations of this type have produced much of what is known as biology.

The zoologist who studies the distribution of animals by direct observation might ask himself, "What would happen to the mice if erosion removed the topsoil from this mountainside?" He can answer his question only by waiting for this disaster to occur, or, sometimes, by finding a similar mountain where it has already occurred. His observations are limited; he is helplessly dependent on the whims of nature.

In certain phases of science the experiment offers a means of avoiding this difficulty. An experiment is merely an observation in a contrived, artificial situation.
Imagine how much more efficient the process of observation becomes if the scientist asks himself, "What would happen if . . . ?" and then makes the "if" happen. What would happen if a fly with red eyes were to mate with a fly with white eyes? What would happen if a plant were to grow on a soil deficient in nitrogen? What would happen if a part were separated from a living cell? Would it continue its activity? The answers to these questions can be found quickly and efficiently by experiment. The observer has some control over his observations. He can prepare a set of circumstances of his own choosing and then observe the results at his convenience.

Some aspects of biology are not easily observed by experiment, but, more and more, modern biology is using this useful observational tool which was borrowed from the physical sciences. Most of our recently acquired knowledge about the activities of living cells, most of what we know of genetics and inheritance, and most of what we have learned about the organization and coordination in plants and animals was learned by experiment.

The ideal experiment requires imagination and careful planning on the part of the investigator. The results of the experiment must be explained, of course, and the "controlled experiment" is an application of logic which makes the explanation easier. Usually the "control" is another experiment, performed at the same time, in which most but not all of the conditions are the same. Any difference in the results of the two experiments can be attributed to these differences in the conditions. The "control" thus provides a frame of reference by which the results can be judged. More detailed information on the designing and planning of experiments is included in several later chapters.

Observation is the source of new scientific information, but this information is meaningful only if it is interpreted and meshed with what is already known.
The formation of explanatory statements is an equally important part of scientific activity. The results of any single observation or experiment represent a small bit of information and may be useless alone. The scientist is likely to suggest, however, that the results of the experiment are representative of a much larger body of information. If the leaves of a plant grown without nitrogen turn yellow, perhaps the leaves of all plants grown without nitrogen would turn yellow. Or, to put it another way, perhaps all yellow leaves are the result of growth without nitrogen. Either of these suggestions is a generalization founded on little evidence, but each is at least reasonable. Inductive reasoning has been used to arrive at an explanatory statement. Information from a particular situation has been applied to the more general case. Actually, one cannot have much confidence in these suggestions, because they result from only one experiment, and the two statements are rather different from each other. Is one of them correct? Are both correct? Each of these two suggestions must be called a hypothesis, a tentative explanation based on some evidence.

If the first hypothesis is correct, it should be reasonable to predict that plants raised without nitrogen, even under somewhat different conditions, should have yellow leaves. This type of reasoning is deductive; a general statement is used to predict what will happen in a particular set of circumstances. This new suggestion can be tested by performing another experiment. If the prediction turns out to be correct, a new, better generalization is in order, and further predictions can be made. You might have heard that science did not develop until Francis Bacon suggested inductive reasoning. This statement is probably true, but do not forget that science uses deductive reasoning too.

Scientific method

"Scientific method" is a term which has become distasteful to some scientists.
There are many "scientific methods," and the number of steps and the order of the steps will depend upon the author of the list. Conant has presented an interesting discussion of scientific method in Modern Science and Modern Man. Here he gives his own description, as follows:

Scientists collect their facts by carefully observing what is happening. They group them and try to interpret them in the light of other facts that are already known. Then a scientist sets up a theory or picture that will explain the newly discovered facts, and finally he tests out his theory by getting more data of a similar kind and comparing them with the facts he got through earlier experiments. When his theory does not quite fit the facts, he must modify it and at the same time verify the facts by getting more data.[2]

In addition, he has included a "scientific method" which he obtained from a biologist:

Recognize that an indeterminate situation exists. This is a conflicting or obscure situation demanding inquiry. Two, state the problem in specific terms. Three, formulate a working hypothesis. Four, devise a controlled method of investigation by observation ... or by experimentation or both. Five, gather and record the testimony or "raw data." Six, transform these raw data into a statement having meaning and significance. Seven, arrive at an assertion which appears to be warranted. If the assertion is correct, predictions may be made from it. Eight, unify the warranted assertion, if it proves to be new knowledge in science, with the body of knowledge already established.[3]

If this set of steps is studied carefully, it is found to include essentially the same operations or activities described by Conant. As I see it, scientific method is an alternation of two types of activities, the observational and the explanatory, or an alternation of inductive and deductive reasoning.
Scientific method is a cyclic method of accumulating knowledge; nowadays it never starts and never ends. Observation leads to a hypothetical explanatory induction which must be tested deductively by further observation. Each new observation produces new explanations, and each new explanation suggests new observations. At each turn of this cycle, the explanations become better; that is, they more nearly coincide with the natural laws. The hypotheses become less hypothetical as they are founded on more observations. Eventually, although it is hard to say just when, the explanations become sufficiently general, and, since the observations have been repeated by so many observers under so many conditions, widely accepted. When these statements are believed and agreed upon by the majority of competent authorities, they are often called theories, principles, or laws. Although these terms differ slightly in connotation, they are commonly used to indicate varying degrees of reliability and acceptability.

[2] James B. Conant, Modern Science and Modern Man (New York: Columbia University Press, 1952), p. 20.
[3] Ibid., p. 20.

Interestingly enough, this alternation of observation and explanation works in two directions. Some individual phenomenon becomes better understood, so that more is known about this little fragment of the universe. At the same time, the observations turn up new relationships, and we find that we also know more about broader, more general aspects of the universe.

Limitations of science

One of the obvious limitations of science is that it can include only what can be observed. Questions of human values, ethics, morals, and religion cannot be touched by science because they cannot be observed objectively. The scientist is free to form his own conclusions on these matters as long as they do not interfere with the objective and impersonal observation in his investigations.
Some scientists are atheists because they have not observed a god directly. Others call themselves agnostics; that is, they say "I do not know" because scientific methods cannot be used to investigate gods. Still others are deeply religious, believing that the order and consistency of the universe is evidence of a god. There is no inconsistency in any of these positions, and science does not force a man to believe anything that is outside of science. The only qualification is that his thinking on areas outside of science must not influence his thinking on scientific matters.

Another less obvious limitation on science depends upon the assumptions that must be made before starting any scientific activity. We may assume that something is true; we take it for granted, even if it cannot be tested by observation. There are several features of the universe that the scientist so commonly takes for granted that it may even be surprising to see them written down. These assumptions are so deeply entrenched in everyday thinking that it is difficult to conceive of their tentative nature. And yet none of these assumptions can be tested, and any of them might not be true.

We take it for granted that the universe is real. We usually do not question the reality of space, of matter, and of time. Space is there, it contains objects and matter, and things move through it. The matter can be seen and touched, and its movement takes place over a period of time. But is it possible that my whole idea of the universe is the product of the perceptions of my sense organs and of my mind? Is it possible that there is another universe, left-handed or inside-out, which occupies the same space, but which I cannot detect with my sense organs? Could time stop and with it all my mental activities, to start again only when time started again? The philosophers have debated the question, "What is reality?"
We scientists have become so accustomed to thinking of our universe as matter occupying space and changing in time that it is disturbing to think that it might not be real.

While we are assuming that matter is real, we also assume that this matter is present in some definite amount and that it can be measured. There is some total amount of matter. Imagine the confusion if a certain piece of material varied in amount according to no predictable pattern and for no apparent cause. Measuring instruments would be useless; it would be like measuring lengths with a rubber ruler.

We assume that the universe is consistent. We take it for granted that there is a set of natural laws, and that our scientific investigations produce theories that approximately explain these natural laws.

Another assumption is the cause and effect relationship. Every cause will bring about an effect, and every phenomenon is caused by some set of circumstances. If we understand the natural law, then, given a set of circumstances or a "cause," we should be able to predict the "effect."

A final assumption is that this real, consistent, deterministic universe can be comprehended by the mind of man. There is no secret in the natural laws that will not eventually be explained if enough observations are made. There is no great mystery, no whimsical maker and changer of laws, that is beyond the mental ability of man. This is a very necessary assumption, for without it all investigation is futile. If we cannot understand the universe anyway, and if there is no hope of solving our problems, why bother to try? A defeatist attitude is the natural consequence of the failure to make this assumption.

These assumptions are so much a part of our thinking that we are hardly aware that we take them for granted and that they cannot be tested. It is almost beyond reason to admit that any one of these might not truly represent the universe.
Many of the best science-fiction stories depend upon the failure of one or more of these assumptions.

Yet at least one of them is a little shaky. The cause-effect relationship works on the ordinary size level but must be slightly modified on the atomic or the cosmic level. Instead of predicting a definite effect from some cause, it is possible to give only a "most probable effect." Individual atoms and subatomic particles do not necessarily obey the law of cause and effect. The new theoretical physics, wave mechanics or statistical mechanics, has not yet exerted its full influence on science. It will be interesting to watch the revolution in thinking when it does.

SELECTED REFERENCES

Conant, James B., Modern Science and Modern Man. New York: Columbia University Press, 1952. This is one of a series of short books in which Conant discusses science and particularly the understanding of science by non-scientists.

Lachman, Sheldon J., The Foundations of Science. Detroit: Hamilton Press, 1956. Contains a clear description of science, its activities, goals, and conceptions. The writing is clear and easy to follow. In some cases it is deceptively simple; some of the paragraphs are more profound than they seem at first reading.

Polanyi, Michael, The Study of Man. London: Routledge and Kegan Paul, 1959. The Study of Man is a condensation and extension of ideas expressed in the longer book, Personal Knowledge. Polanyi is a chemist who has become concerned with how we know things. One may not agree with him, but he certainly stimulates thinking.

CHAPTER 2
Research in Biology

It would be very easy to suggest that a major revolution has occurred in biology since the end of World War II; certainly there are more biologists using physical and chemical methods. Much of the recent progress in biology can be traced to a shift in emphasis from descriptive to experimental biology, but to call this a major revolution would probably be going too far.
Of the important factors in the encouragement of experimental research, one has been the increased availability of funds from several sources. Certain instruments and techniques, such as isotopic tracers, optical devices, and chromatography, have become generally available, but none of these was really new in 1946. When money became available, more people could work in experimental laboratories, and the laboratories could be better equipped. Several important instruments could be placed in production economically. The modern experimental biology laboratory contains an array of sparkling, expensive instruments, a fair share of which would necessarily have been homemade only a generation or so ago.

In addition to a better financial situation, there is now a rather different body of concepts upon which to build. The discovery that nucleic acids can control the synthesis of enzymes has completely altered the course of both genetics and enzyme biochemistry. The knowledge that living cells can be broken and that the parts will continue some of their activities even when separated from the rest of the cell has had a profound influence in cellular physiology.

Human beings have been biologically minded for a very long time, since man could never have become civilized without being aware of the world of living things. Biology has been scientific in the usual sense, however, for only a few centuries. For a number of reasons, biology has always developed only after corresponding developments in the physical sciences. The earliest scientific biologists were descriptive scientists, concerned with naming and describing organisms. The invention of the microscope permitted the examination of smaller units, but the approach of the biologists remained about the same; that is, they still described what they saw. In time, it became possible to consider more abstract relationships.
Changes that take place over a period of time, as in the growth of an animal or plant, or as in the various physiological processes within an individual, obviously require a rather different kind of observation. If a biologist is to explain the interrelationships of the activities of various organs in an animal, a higher level of intellectual activity is required than if he merely describes their structure.

The experimental approach is especially valuable in these highly abstract phases of biology. In fact, about 1625 William Harvey used some of the first experiments in biology to demonstrate the continuous circulation of the blood. At about the same time, van Helmont's experiment showed that plants do not take all their food from the soil. About a century later, Stephen Hales performed many experiments on the pressures and the movement of liquids within plants and animals. His books, Vegetable Staticks (1727) and Haemastaticks (1732), demonstrate the ingenuity of the man, as well as providing very interesting reading. Physicists at the time were measuring pressure by observing the height to which a liquid would rise in a tube. Hales applied similar observations in biology. In one heroic experiment he measured the blood pressure of a horse by attaching a long glass tube to one of the large arteries in the horse's neck. Fortunately, this method of measuring blood pressure never became popular among physicians. It is enlightening to read a modern laboratory manual for plant physiology, to compare it with Vegetable Staticks, and to note how many of the usual experiments in present-day courses were actually designed by Stephen Hales.

One of the major philosophical battles in the field of biology was the mechanist-vitalist controversy near the end of the nineteenth century. The mechanists maintained that there is nothing about the living organism that cannot be explained in terms of physics, chemistry, and mathematics.
According to the vitalists, there was some vital force, some living being, above and beyond the physical laws. Progress in the biological sciences has depended on the assumption of the mechanistic interpretation. We take it for granted that life can be explained by physics and chemistry and that living things obey the physical laws. This assumption, although it cannot be tested, is very necessary, because the vitalist assumption amounts to an acknowledgment that life cannot be understood by science.

Even if we think and experiment as mechanists, a number of serious problems still exist. The goal of all biological research must be the explanation of life according to the physical laws. Therefore an understanding of living things can never be more complete than our comprehension of the physical laws. Biology must always lag behind the physical sciences, just as physics has lagged behind mathematics. The only alternative is to allow investigations in biology to contribute to the physical theories themselves.

Difficulties in biology

Biology offers several other challenging difficulties. The living system is far more complex than any physical entity. Biological chemicals are large molecules, and the arrangement of these molecules in a particular pattern is a necessary condition for life. No mere list of all the kinds of chemical compounds and the amounts of each will ever explain life; and yet, no picture showing the exact location of each molecule within a cell would be adequate either, because these locations change with time.

Altogether, the living cell is more complex than anything the physical scientist is used to working with. In fact, some modern scientists feel that the whole living organism is more than the sum of its parts. Although reminiscent of vitalism, this new approach is perhaps on better intellectual grounds and even suggests that findings in biology can contribute to understanding of the chemical and physical laws. The little book by E.
Schrödinger, the great theoretical physicist, called What is Life? gives an extremely stimulating discussion of this idea. Even though the ideas were first proposed in 1943, they are still pertinent.

Living things are more difficult to observe than nonliving things. Their extreme complexity is only a part of the difficulty. Living material is naturally variable. Its responses during an experiment may be influenced by a great variety of factors, and the experimenter may not be aware of some of these. Often a difference in the treatment of an organism before the experiment will cause changes in responses. Materials present in amounts so small that they almost defy microanalysis can have profound effects.

Ideally, the experimenter hopes that the experimental treatment will not influence the results, but this desire can never be fully realized. The best that can be hoped for is to reduce the interference to a minimum. Obviously, if the techniques of the experiment influence the organism, controlled experiments are essential before any conclusions can be drawn.

The biologist's assumptions

Although some of them have been mentioned previously, it is probably wise to set out the assumptions which, in addition to the general assumptions of science, are made by the biologist.

(1) Living organisms obey the laws of physics and chemistry. This item has already been discussed. The only step necessary to prevent the invalidation of this assumption is to incorporate any inconsistent discoveries in biology into the physical laws.

(2) The whole living organism is nothing more than the sum of its parts. This assumption is made with two different meanings. The first meaning is an outgrowth of the previous assumption, indicating that there is nothing about the life of the whole organism that cannot be explained by the physics and chemistry of the individual activities.
The second meaning refers to the use in experiments of parts of organisms, rather than whole organisms. Much of our current knowledge of cellular physiology has been derived from studies with isolated cells, or even with parts of cells like mitochondria, ribosomes, chloroplasts, and purified enzymes. The assumption is made that these parts respond the same when isolated as when contained in the intact organism, and that, once the activities of all the isolated parts are understood, it will be possible to integrate these understandings into an explanation of the life of the intact cell.

(3) The experimental treatment does not affect the process being investigated too much. This is the hardest of the assumptions to make with any confidence. The biologist often must use considerable ingenuity in the planning of experiments in order to feel confident in interpreting the results.

(4) Related organisms, or parts thereof, will behave identically or similarly under the same circumstances. This assumption is not always needed but is especially useful when dealing with human problems. Most experiments with human beings would be both unethical and immoral, and we must depend upon information from other animals.

Biological problems

Many of the most serious problems remaining for the biologist today are problems which can be solved only by experimental methods. Even the areas of biology which have classically used descriptive methods are now turning to experiments. The taxonomist who used to be concerned only with the sizes, shapes, and colors of various parts now performs experiments to learn the effect of environmental changes on these sizes, shapes, and colors. The embryologist who formerly described the embryo at the ages of one, two, three, . . . n days has become concerned with the reasons for the observed changes.
The major remaining problems in biology might be grouped into the following interrelated categories:

(1) Relationships of materials
(2) Energy relationships
(3) Control and integration phenomena

The relationships of materials, that is, the kinds of chemical compounds present and the chemical reactions that occur, make up the province of biochemistry. The general pattern of cellular biochemistry has evolved within about the last thirty years. Many details of cellular chemistry remain to be discovered, but since most of the techniques are available, it is difficult to visualize any major changes in concepts. Such statements are dangerous, however, because new concepts have a way of appearing without advance warning.

Energy relationships involve a more difficult problem. Energy is an abstract concept even in its simplest forms. Living organisms bring about transformations of chemical energy, heat, electricity, motion, and light, from one to almost any of the others, usually with high efficiency. The physics comprising these energy relationships is among the most interesting and challenging of biological problems.

The control and integrative systems constitute a series of almost purely biological problems. Certainly some man-made control systems operate on similar principles, but it might not stretch the imagination too much to call these systems biological, at least in origin. The questions of the regulation of all of the various processes within an animal, or the integration of the activities of all the individual plants or animals within a group, are among the most interesting but most frustrating. It has been exceedingly difficult to find means of investigating these elusive phenomena.

Any attempt at classification of problems or areas of inquiry is certain to lead to oversimplification.
A more realistic picture of the questions which still face the biologist can be gained by examining some problems as examples. In most of these, all three of the categories above are involved.

What controls the development of an organism? How is it possible that the fertilized egg of a chicken always develops into a chicken and never into a duck? How are the transformations of groups of cells into tissues and organs controlled? Nucleic acid is the hereditary material, but how does a nucleic acid molecule bring about such profound changes and differences? If nucleic acids control the synthesis of enzymes, what determines when a given enzyme is formed and when it is not?

How can the cell membrane exhibit such control over what enters and leaves the cell? How is energy utilized to move molecules or ions in what seems to be the wrong direction, that is, against diffusion gradients? How does a kidney cell sort molecules and ions? How can plant cells continue to absorb mineral ions even after they are more concentrated inside the cells than in the surrounding soil or water?

How do some cells or organisms measure temperature and time? How can the hypothalamus control temperature so precisely in mammals and birds? How does it measure temperature, and what does it use as a reference point? How do plants and animals measure the length of a day or night in order to respond to the seasons? If all the birds in a flock suddenly change their direction of flight, is it because they have communicated with each other or because all have detected the same minute change in environment?

How do cells convert light energy into useful chemical energy, as they do in photosynthesis and in vision? What chemical and physical processes are involved in color vision? What is the nature of the specific structure of the minute parts of cells? How are enzymes and other molecules arranged in space to allow sequences of chemical reactions to proceed in an orderly fashion?
Such a list of questions makes it seem that the biologist does not know very much. The magnitude of some of the questions should make anyone humble, but at least we now know some of the questions. In their attempts to answer these and other questions, biologists have used a great variety of approaches. Some have led to the great progress we have already made; some have ended in frustration. However, biologists are a persistent lot. They continue their attempts with whatever techniques are available. Certainly the attempt is worth the effort, because an understanding of these problems will be a truly noble achievement.

SELECTED REFERENCES

Beveridge, W. I. B., The Art of Scientific Investigation. Revised edition. New York: W. W. Norton & Co., 1957. This is a friendly and personal discussion, particularly related to the mental processes, creative activities, and responsibilities of the scientist. It is delightful reading, as it is sprinkled with anecdotes from current and historical biology.

Commoner, Barry, 1961. In defense of biology. Science 133:1745-1748. A pointed argument, attempting to put "life" back in biology.

Gabriel, M. L., and S. Fogel, eds., Great Experiments in Biology. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1955. A collection of papers from the original biological literature, reprinted from the periodicals in which they appeared. Each paper has been selected to demonstrate some quite significant finding. The collection as a whole is a group of specific examples of biological research.

Gerard, R. W., and R. B. Stevens, eds., Concepts of Biology. Washington, D.C.: National Academy of Sciences-National Research Council Publication 560, 1958. This is a transcription of several days of discussions held by a group of eminent biologists, including both conservatives and liberals.
Although they did not reach complete agreement on what should be the major concepts of biology, the discussions cover a great range of ideas.

Schrödinger, Erwin, What is Life? New York: The Macmillan Company, 1945. Every biologist should read this discussion of the relationship between the biological and physical sciences, even though it is several years old. The findings of some of the most active years of biological research do not detract from the stimulating discussion. The general theme is contained in the following quotation from pages 68 and 69: "Living matter, while not eluding the 'laws of physics' as established up to date, is likely to involve 'other laws of physics' hitherto unknown, which, however, once they have been revealed, will form just as integral a part of this science as the former."

CHAPTER 3
The Biological Literature

Progress in science builds upon previous progress in science, which, of course, is possible only because written records have been maintained for centuries. Scientific achievement depends only partly on brilliant experimental work and astute observation; the formulation of useful explanations depends upon the interlacing of current work with the observations and interpretations of the past.

Since a research program becomes meaningful only if it is interrelated with all the previous work on the subject, the need for maintaining permanent records is obvious. The "biological literature" comprises all that has been written on biological subjects in the last several hundred years. All this writing can be subdivided into two general types: (1) primary publication describing the original observations, and (2) secondary publication which summarizes, describes, or discusses these first reports. Almost all the primary reports now appear in periodical or serial publications, or "journals." Secondary discussions and summaries are considerably more varied.
Laboratory records

A part of the biological literature not generally accessible includes original laboratory records and notebooks. Everyone who performs experiments in the laboratory has an obligation to keep records of what was done and what happened.

The form in which these laboratory records are kept is a matter of personal preference. Some people like to write up each experiment individually on separate sheets of paper, describing the materials used, the methods, and the results, and then file this unit. Probably a more satisfactory procedure is to write everything into a bound notebook. A description of the reason for doing the experiment, the minute details of materials and methods, complete notes on the results (including even observations that may seem inconsequential), and, finally, calculations and discussions can all be included in a few succeeding pages of the notebook. Bound notebooks containing quadrille-ruled paper, that is, paper ruled in squares, are useful because preliminary graphs can be recorded directly with the results. Regardless of their form, these original laboratory records are the most important part of the permanent records, because it is upon these that the rest of the literature is built.

Technical papers

Once a piece of experimental research has reached a reasonable conclusion, the work is described in a technical "paper." Generally this is a brief report, describing the work in as much detail as will ever appear in the public literature. The organization of the report varies somewhat, but the paper usually includes a statement of the problem, a description of experimental methods, a summary of results, and a section on interpretation or conclusions. Many serial or periodical publications are devoted almost exclusively to these technical papers. Some, like the American Journal of Botany and the Journal of the American Chemical Society, cover an extremely wide range of subjects.
Others are restricted to a single group of organisms or even to a single technique. Table 3-1 lists as examples a number of journals which are valuable to the experimental biologist. Some of these are sponsored by organizations, societies, or institutions, and others are produced by commercial publishing firms.

Table 3-1. Examples of Periodicals Useful to the Experimental Biologist

Technical Journals:
American Journal of Botany
Archives of Biochemistry and Biophysics
Biochemical Journal
Biochimica et Biophysica Acta
Doklady Akad. Nauk S.S.S.R.
Journal of the American Chemical Society
Journal of Biological Chemistry
Journal of Biophysical and Biochemical Cytology
Journal of Cellular and Comparative Physiology
Journal of Molecular Biology
Plant Physiology
Zeitschrift für Naturforschung

Review Periodicals:
Advances in Enzymology
American Scientist
Annual Reviews of: Biochemistry, Microbiology, Physical Chemistry, Plant Physiology
Bacteriological Reviews
Biological Reviews of the Cambridge Philosophical Society
Botanical Reviews
Physiological Reviews
Scientific American

Since the primary technical report is the most detailed form in which a piece of work will ever be described in the literature, it is imperative that this description be adequate. The report should permit evaluation of the work and should allow repetition by persons in other laboratories. If the length of the report had no limitations, it would be easy to include more details than necessary, to take no chances of leaving out something essential. Unfortunately, papers must be short; there simply is not room in the journals to publish every paper at full length. The result, of course, is a compromise in which the writer assumes that the reader is familiar with ordinary laboratory techniques. The technical paper, then, is a report in which one research worker communicates with others like himself.
A technical paper could be written in almost any language, but most are now in English, German, French, or Russian. A little of the literature from Japan is in the Japanese language, but fortunately for us, most of the Japanese workers now write in English. Papers from the Scandinavian countries are likely to be written in English or in German. Most American biologists can read at least one other language, but almost none could write a paper in any language except English. Some Americans unrealistically solve the language problem by simply ignoring the foreign literature.

Reviews

So many technical papers appear each year that no one could hope to follow the whole literature. A valuable aid is the review article, which attempts to summarize and evaluate some small segment of the literature. For example, any one year might see the publication of several hundred technical papers all somehow related to muscle contraction. The author of a review article on muscle contraction examines all available papers on the subject in great depth and prepares a summary of current thinking in the field as he sees it. The review article thus helps to pull the literature together, but it is subject to the opinions and judgment of the author. This restriction is not altogether bad, because where there are two schools of thought, a member of the other school is likely to write the next review article on the subject.

The review paper typically contains no new data. Instead, selected tables or graphs from the previously published reports may be reproduced. The review article might cover one small area completely, reviewing all the work ever done on the subject. More commonly, each review paper covers the work of some brief recent period, such as the previous year. The writing and publication of the review take time, so the review will be one to three years behind the original work.

Papers in this category may be at any level of technicality.
Several examples of review publications are given in Table 3-1. Some are highly technical; others are written for the general scientific or educated public. Physiological Reviews, for example, is written for physiologists, but the review articles in Scientific American are aimed at the educated public. Many of the Scientific American articles succeed admirably.

Monographs, symposia, books

A monograph or book describing the work of one individual on one subject used to be the only way of publishing scientific information. Any monograph published today is usually a long review, covering one subject or the work of one laboratory. For example, taxonomic biology produces monographs describing one genus or one family of organisms.

One of the relatively recent innovations in the scientific literature is the symposium volume. Increased availability of funds and modern rapid transportation have made it possible for institutions or societies to call together the various persons working on some subject to discuss recent work and problems. This meeting, or symposium, may be highly formal or very loosely organized, but generally each contributor summarizes the work of his laboratory or comments at length on interpretations. The "minutes" of such meetings, whether as formal reports or as transcriptions of informal discussions, can be exceedingly valuable. By way of example, the Johns Hopkins University annually sponsors a symposium on some biological topic and then publishes the proceedings of this meeting. Their recent symposium on the various effects of light on organisms has been a particularly valuable summary of current thinking.

Abstract services

Since no one can hope to read every issue of every periodical, some system for keeping up with the literature becomes essential. The various abstract services perform this function quite well. An abstract is a one-paragraph summary of a technical paper.
It may be written by the author of the paper or by some other scientist familiar with the field who uses the technical paper as his source of information. So that any reader can find the original paper if he chooses, the abstract lists the title, the authors, the periodical with volume and page numbers, and the date of publication. Ideally the abstract should include all the information contained in the original paper, but to compress several or many pages into one paragraph without loss of information would take extremely careful writing. This ideal is approached but never attained.

The biologist depends upon Biological Abstracts, a periodical which presents abstracts of a very large segment of the world's new biological literature. Two issues appear each month, and each issue is subdivided into general topics. Another of the world's leading abstract services is Chemical Abstracts, published semimonthly by the American Chemical Society. Since much of the modern work in biology includes some chemistry, Chemical Abstracts is as useful as Biological Abstracts. Computer techniques and modern filing and communications procedures make it possible for Biological Abstracts and Chemical Abstracts to describe a paper a surprisingly short time after its original publication. They also cover many of the foreign periodicals which are not available in our libraries.

Other literature

A few periodicals do not seem to fit into any special category. Science, published by the American Association for the Advancement of Science, is such a general periodical. It contains some original technical reports but also two or three reviews in each weekly issue, along with editorials, news of science, news about scientists, book reviews, and advertisements. Nature is a British publication similar in coverage.

Letters to editors have become an important means of communication, and many periodicals reserve a section of each issue for such letters.
The usual technical paper requires a fairly long time for editing and publication and may not appear in print for six months or a year after the manuscript is submitted to the editor. If for any of several reasons an author wishes to have information published much faster, he can write a letter to the editor. This letter is not subjected to the same editorial scrutiny as the formal paper, so new data or interpretive discussions can be printed earlier than if a technical paper were written. Presumably, the letter to the editor will be followed at some later date by the complete technical report. Unfortunately, some people have a tendency to publish all their work as letters to editors and never get around to publishing the full details. This practice is not acceptable, of course, but it is very difficult to regulate.

Some other parts of the literature, not the work of individual scientists but useful as sources of information, will be described in a following section.

Searching the literature

Anyone working in a scientific field must keep himself aware of other work in that field. The scientist just entering a new area has a particularly difficult task, because the whole literature is unfamiliar to him. Anyone who does not bother to read the previous literature, but instead goes into the laboratory and starts experimenting, faces the very real likelihood of an unnecessary duplication of effort. He must realize that his mental processes are not unique and that someone else has had, or will have, ideas similar to his own.

No one can hope to read all the literature, even in a fairly small segment of science. In fact, new literature is appearing more rapidly than one individual can list it, let alone read it. Biological Abstracts is now presenting abstracts of almost 100,000 papers each year. Chemical Abstracts provides coverage of about 7000 journals.
Even reading these two abstract periodicals from cover to cover is impossible, or at least is a full-time job. A new field can best be approached by first getting a broad general picture. One simple means of accomplishing this objective is to learn it first-hand from someone familiar with the field. If this is impossible, a recent textbook provides a point of beginning. The textbook may be a dead-end source, but most now include at least a few references to more advanced literature. Several encyclopedias also are excellent starting points. Encyclopaedia Britannica and Encyclopedia Americana give quite detailed coverage of many biological topics, and many of the articles provide references. The McGraw-Hill Encyclopedia of Science is generally excellent. Several one-volume specialized encyclopedias have appeared recently, such as the Encyclopedia of Microscopy and the Encyclopedia of Spectroscopy. Eventually, references obtained from textbooks and from encyclopedias take one to the review articles, and references given in these lead quite directly to the original technical reports. By this time the reader will have a good idea of what is known in the specific field. Other papers can be found in the abstract journals. Chemical Abstracts, for example, publishes annually a subject index, an author index, and an index of chemical formulae. At ten-year intervals, they produce decennial indexes. If you wished to find all the papers on a subject covered by this service, you could trace the annual indexes back to the most recent ten-year index. The abstracts you find will tell you whether it is worth seeking the original paper. Somewhere you must stop your search through literature, lest it go on forever and keep you out of the laboratory. Just when to consider your knowledge of the field adequate is a matter of personal judgment. I think we learn primarily by experience.
Other sources of information

Handbooks of various sorts are just as indispensable as are dictionaries. The physical and chemical handbooks give tabulated data on almost any imaginable subject. The Handbook of Biological Data includes specifically biological information. The first edition is somewhat difficult to use until you become familiar with it, but the chances are good that if a fact about an organism is known it has been included. A source of ideas that is sometimes neglected is the collection of materials provided by the concerns that make or sell supplies and equipment. The catalogue of a glassware manufacturer, for example, might contain sketches of specialized pieces designed for petroleum testing that could be used for biological experiments. The exchange of ideas among the various sciences as a result of reading catalogues and advertisements is probably more important than many people realize. The successful biologist depends upon a variety of sources of information. One of his most valuable tools is the kind of library provided by a university, and one of the most valuable techniques he can learn is to use the library effectively.

Reprints

When a scientific periodical is printed, extra copies of each paper are usually run off. These are not bound with other papers, but are given or sold to the authors. These separate copies, or "reprints," are printed from the same plates, and therefore are identical to the published paper. The author distributes these reprints to persons he thinks would be interested or to other scientists who request them. Anyone working on cellular metabolism could request reprints from other writers on the subject and thus could have his own collection of papers for careful study. In turn, he provides reprints of his own papers to the other workers. Sometimes it is easier to obtain a reprint from an author than to obtain the journal in which the paper was published.
Record systems

Since the literature is voluminous (every year we read or become aware of a vast number of papers pertaining to our work), some system of listing these papers is essential. Most such record systems consist of filing cards of one sort or another. Each researcher must develop his own system for keeping track of the literature. For cards to be useful, there should be at least one card for each paper, and that card should give authors; title; citation to journal, volume, year, and pages; and preferably a brief summary of contents. The information on the cards is useless unless it can be recovered when needed. It must not be thought that any system is effortless; every system requires expense and work. Elaborate cross-references become necessary if the file of cards is large. You might choose to file the cards alphabetically by author, but many papers have more than one author, and the best-known author is not always listed first. The system described in the following paragraphs is included as an example only and cannot be recommended without qualification. It is doubtless as expensive and difficult to keep up as any other system, but it does offer the advantage of rapid retrieval of any card in the file.

Fig. 3-1. One form of edge-punched card.

Several firms produce edge-punched cards, of which Fig. 3-1 is an example. Many sizes are available, some of which have several rows of holes. The card shown is about the size of an IBM card, for which filing cabinets are easily available. The holes can be punched out, as several in the illustration are, so that when a needle is passed through a given hole in a stack of cards only those that are punched will fall off the needle. One corner is cut off the card so that the cards in a stack can be quickly sorted and placed in order, all right-side-up and facing forward.
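The needle-sorting step can be mimicked in a few lines of modern code (an illustrative sketch only, not part of the original card system; the card representation and the function name are my own):

```python
# A sketch of needle sorting: each "card" records which hole positions
# have been punched out. Passing the needle through one hole position
# drops exactly the cards punched at that position.

def needle_pass(cards, hole):
    """Return the cards that fall off the needle, i.e. those punched at `hole`."""
    return [card for card in cards if hole in card["punches"]]

cards = [
    {"author": "Smith", "punches": {3, 17}},
    {"author": "Jones", "punches": {17}},
    {"author": "Brown", "punches": {5}},
]
print([card["author"] for card in needle_pass(cards, 17)])  # ['Smith', 'Jones']
```

Repeated passes on different holes, as described in the text, narrow the stack down just as repeated list filters would.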
Each card is prepared by copying the authors' names, the title of the paper, citation, and an abstract on the face of the card. The holes in the card are then punched according to a code. Holes are arranged in "fields" of four, labeled 1, 2, 4, 7. By punching no more than two holes it is possible to arrive at any digit from 0 to 9, as 1, 2, 2 + 1, 4, 4 + 1, etc. If two fields are used, one field can be used for units, the second for tens. For example, 7 + 1 in the first field is 80, 4 + 2 in the second field is 6, and the two together give 86. The card illustrated has 19 fields, which means that 10^19 coded combinations can be stored. This is not fully efficient, because a set of four holes could stand for any number from 0 to 15 if the number of punches per field were not limited to two. Used that way, the card could store 2^76 (about 10^23) combinations. The type of information stored can be described briefly. Authors' names are given a code designation, a number from 00 to 99, by dividing the alphabet into 100 more-or-less equal categories and using two fields for the name of the first author on the paper. A second pair of fields gives the second or best-known author. The year of publication requires one pair of fields plus one extra hole for a few papers before 1900. Journals have been coded by a modified alphabetical scheme. If a straight numerical alphabetical code were used, some categories such as BIO and JOU would be overloaded; for this reason certain journals are given a separate numerical designation. Since most papers can be classified in more than one way, subject matter coding uses two sets of three fields each. A decimal system similar to that used in the library has been adopted. A paper on "the fluorescence of pigments" is likely to be coded under "fluorescence" in one set of three fields and under "pigments" in the other. The subject matter file is the most difficult to work out.
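The 1-2-4-7 field code described above can be sketched in code (a modern illustration; the table and function names are my own, and the digit-to-punch pairings beyond the book's examples follow the obvious sums):

```python
# Each field has four holes labeled 1, 2, 4, 7; at most two punches
# encode one decimal digit (e.g. 6 = 4 + 2, 8 = 7 + 1).
DIGIT_TO_PUNCHES = {
    0: (),      1: (1,),    2: (2,),    3: (1, 2),  4: (4,),
    5: (1, 4),  6: (2, 4),  7: (7,),    8: (1, 7),  9: (2, 7),
}

def encode_number(n, n_fields=2):
    """Encode n as one punch tuple per field, units field first."""
    fields = []
    for _ in range(n_fields):
        fields.append(DIGIT_TO_PUNCHES[n % 10])
        n //= 10
    return fields

# The book's example, 86: units digit 6 punches 4 + 2, tens digit 8
# punches 7 + 1.
print(encode_number(86))  # [(2, 4), (1, 7)]
```

Two punches per four-hole field give exactly the ten digit patterns listed, which is why each field holds one decimal digit.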
It is necessary to decide which are the broad general categories and then to decide what are the reasonable subdivisions of each. The subject matter file must be reasonably detailed and yet must be flexible enough to be modified when your ideas of what is important change, as they will in the future. In addition to author, subject, journal, and date, several other bits of information have been included as separate punches. One hole is punched if the card concerns a book or monograph, another if a reprint of the paper is on hand. A set of four punches indicates the language: no punch for English, first hole for German, second for French, third for Russian, and fourth for any other foreign language. The preparation of the cards requires a great deal of time, of course, but then so does the use of any other card system. Once the cards have been prepared, there is no need to file them in any particular order. From among, say, two thousand cards, all the papers by one author can be found with four or five passes of the needle, followed by hand sorting of an average of twenty cards in that alphabetical category. I reiterate, this system can work, but everyone must develop his own scheme of coding. Sorting by computer, as in the IBM system, would be even easier but a good deal more expensive. Sorting by hand from a set of cards arranged in alphabetical order might be faster, but the file must be kept in alphabetical order and cards must be duplicated for cross-reference.

SELECTED REFERENCES

Most of the reference works mentioned in this chapter are so generally useful that they are listed in the Bibliography at the end of the book.

Casey, Robert S., James W. Perry, Madeline M. Berry, and Allen Kent, Punched Cards, their Applications to Science and Industry, 2nd ed. New York: Reinhold Publishing Corporation, 1958.
A description of the cards and equipment available, descriptions of applications to research (including biology), and valuable suggestions on coding.

McElroy, William D., and Bentley Glass, eds., A Symposium on Light and Life. Baltimore: Johns Hopkins Press, 1961.

CHAPTER 4

Measurements

Classical or descriptive biology has developed a tremendous vocabulary for describing various aspects of organisms. Even a description of an organ of a plant or animal, however, must eventually depend upon some quantitative measurement. The fact that the tip of a leaf is more-or-less rounded is frequently less important than some estimate of the over-all size of the leaf. Experimentation in biology certainly could not function except in a quantitative manner. If materials are produced by cells they are produced in some quantity. If a part of an animal moves, it moves some distance. If some environmental factor is important in the behavior of an animal, the intensity of that environmental factor may have a profound influence. It is almost impossible to imagine experimentation in biology that did not depend upon measurement. In this respect we are completely dependent upon methods developed by the physical scientist.

Standards

Any measurement is a comparison of one thing with another. If we compare A with B and then later compare A with C, the information gained can be used to compare B with C. In this simple case A becomes a form of standard. Modern measurement uses a set of standards which have been selected by mutual agreement. The importance of the standards in science and in commerce should be obvious. Because we have become accustomed to certain standards, others sound strange to us. We can read that Goliath stood 6 cubits and a span and carried a spear weighing 600 shekels of iron. Was this Goliath one to inspire awe? Should his spear be feared? Not unless we have an evaluation of these standards.
It turns out that Goliath was about 9 feet tall and had a 25- or 30-pound spear. Cubits and shekels could be as useful to an ancient population as our units are to us. Would the average American know any more about Goliath if he were described as about 270 cm tall? It would be entirely possible for any laboratory to devise its own set of standards. As long as these standards were used consistently the laboratory could continue to function as an independent unit. There obviously could be very little communication between laboratories, however, and science could not have made the advances that it has. It is only because of the universal agreement upon one set of standards that a measurement made in the western United States will be the same as a comparable measurement made in western Germany. The metric system is in almost universal use in science, even though English-speaking countries continue to use another system in their commerce. Three quantities, length, mass, and time, were selected as a set of basic standards, and most other quantities are derived from these. For example, volume is length cubed, and velocity is length divided by time. Probably some other set of basic standards could be devised, but these three have worked well and have become so ingrained in our thinking that it is difficult to imagine another system. The principal advantage of the metric system is the ease of computation. Multiplying or dividing by ten is much easier than working with a system in which a pound contains 16 oz and a typical small length is 1/32 in. The system of pounds and yards was based originally on some arbitrary standards. The metric system, by contrast, was intended to represent certain natural features: the meter was a fraction of the earth's circumference, and the weight units were based upon a volume of water.
After the metric standards were established, methods of measurement improved, and it was necessary to admit that a meter was not precisely the value it was supposed to be. But the "standard meter," a unit accepted by international agreement, could still be used. Nearly all figures used in this book are expressed in metric units. Most of the measurements in biology are essentially the same as those used in the physical sciences. Occasionally, however, biological phenomena do not lend themselves to this simple, direct expression, and it is necessary for the biologist to develop some other kind of units. An antibiotic, for example, might be a mixture of several different chemical compounds. The effectiveness of the antibiotic in preventing the growth of bacteria depends upon the source of the preparation and upon the proportion of the different compounds in the mixture. As merely referring to a certain number of milligrams of the substance does not convey adequate information, some unit which expresses the activity and can be based upon a standardized laboratory test is much more useful. Instead of an absolute amount of material we refer to an amount which will cause a certain effect. Numerous units of this sort have been used in experimentation in biology because of the variability and complexity of biological materials.

Direct Measurements and Null Measurements: The simplest method of comparing one object with another is to place them side by side. Units of length are commonly measured in this way. A scale for weighing objects could be constructed from a simple steel spring. The heavier an object is, the more the spring will be deflected. An electrical quantity such as voltage can cause a certain deflection of the needle in a meter. The weight measured by the spring and the voltage measured by the meter are examples of determinations of the effect caused by the phenomenon being measured.
Many kinds of measurements are as simple and direct as these, but in certain instances it is advantageous to make a "null" measurement. Instead of measuring a value directly, we measure the amount of force required to oppose it. A balance used to measure mass consists of a pair of pans suspended on opposite ends of a beam; the material we wish to weigh exerts a force, and we add weights to counteract this force. It is easier to make precise weights than precise springs. Electrical quantities can be measured by means of a Wheatstone bridge, as shown in Fig. 4-1. A pair of matched resistors is connected, as shown, with another pair of resistors, and a voltage is applied from the battery. If the resistance of R3 is greater than R4, a current in one direction is indicated by the meter; or if R4 is greater than R3, current will flow in the opposite direction. The resistor R3 can be adjusted until there is no current, so that we can measure R4 in terms of R3. Because voltage, current, and resistance are precisely interrelated by Ohm's law, the basic bridge circuit can be adapted to a variety of measurements where a third "unknown" value is calculated from the other two. Greater precision is available here also, because it is easier to make and evaluate resistors than meters.

Simple and Derived Units: Measurements of length, mass, and time require simple units: either the basic units of the metric system or fractions or multiples of these. Certain other quantities may become exceedingly complex in that they involve two or more of the basic units. Force, for example, is mass times acceleration, but acceleration is change in velocity per unit time, while velocity is measured in length units per unit time. Some of the complex measurements necessary in biology are rather difficult to reduce to basic units.

Fig. 4-1. "Null measurement" of resistance is performed with this Wheatstone bridge. R1 = R2; R3 is adjustable and of known resistance; R4 is unknown.
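The balance condition of such a bridge is easy to state numerically (an illustrative sketch; the function name is my own):

```python
# Wheatstone bridge at null: no current flows through the meter when
# R1/R2 = R3/R4, so the unknown is R4 = R3 * R2 / R1. With the matched
# pair R1 = R2 (as in Fig. 4-1), R4 is simply read off as the setting
# of the adjustable resistor R3.
def unknown_resistance(r1, r2, r3_at_null):
    """Unknown resistance R4, given the bridge values at zero meter current."""
    return r3_at_null * r2 / r1

print(unknown_resistance(100.0, 100.0, 472.5))  # 472.5 ohms
```

The precision of the result rests on the known resistors, not on the meter, which only has to indicate zero.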
When R3 = R4, zero current is shown on the meter.

Effects of Measurement upon the System: One of the great difficulties in biological measurement is the response of the living material to the very act of measuring. It is theoretically impossible to measure anything without affecting it, but the physicist and chemist have learned that the effects caused by their instruments are small. Living organisms, however, respond to small changes in environment. Emotional disturbances in animals are well known, but even plants show differences in response upon handling. The experimental biologist must therefore specify the conditions under which measurements were made and be prepared to accept the fact that his figures are not normal. He can only hope that his numbers can be repeated in other measurements under similar conditions and that sets of measurements under a variety of conditions will provide him with information from which he can draw generalizations.

Examples of measurement

The units used in measurement may be the standard units (meters, kilograms, or seconds) or fractions or multiples thereof. Generally the fractions or multiples are chosen so that the numbers are of a convenient size. For example, 8 cm is a slightly more convenient expression for the length of my finger than 0.08 meters. Several of the units which have been given names of their own are listed in Table 4-1.

Table 4-1. Fractions and Multiples of Units

Fraction or Multiple    Prefix    Symbol
10^6                    mega-     M
10^3                    kilo-     k
10^0                    (unit)
10^-2                   centi-    c
10^-3                   milli-    m
10^-6                   micro-    μ
10^-9                   nano-     n
10^-12                  pico-     p

There are some exceptions in the use of the prefixes, but generally they apply to the fundamental units of length, mass, and time and to derived and electrical units, as in microwatt and kilocalorie.

Length: Measurements of length are comparisons with standards, which are copies derived from the standard meter. Some measurements of length can be made more accurately now than previously.
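The prefixes of Table 4-1 amount to nothing more than powers-of-ten scale factors, which a few lines of code make explicit (a modern sketch; "u" stands in for the micro symbol, the empty string for the unprefixed unit, and the names are my own):

```python
# Table 4-1 as a lookup table of powers of ten.
PREFIX = {
    "M": 1e6,    # mega-
    "k": 1e3,    # kilo-
    "":  1e0,    # the unit itself
    "c": 1e-2,   # centi-
    "m": 1e-3,   # milli-
    "u": 1e-6,   # micro-
    "n": 1e-9,   # nano-
    "p": 1e-12,  # pico-
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same unit."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(8, "c", ""))  # 0.08 (8 cm = 0.08 m, the example in the text)
```

The same table serves for meters, grams, seconds, watts, or calories, which is exactly the economy the prefix system was designed for.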
Until 1960, the standard meter was the distance between two lines scribed on a bar of platinum-iridium kept in the vault of the International Bureau of Weights and Measures in Paris. The standard meter is now defined as 1,650,763.73 times the wavelength of a specified orange-red line in the light emitted by krypton-86. The new standard is a value believed to be constant, and it can be reproduced in any well-equipped laboratory. By mutual agreement in 1959 among the countries using the English system, the international yard is defined as exactly 0.9144 meter. The inch (1/36 yd) is thus exactly 25.4 mm. A number of fractions or multiples of meters are commonly used in measurement. These units are chosen to permit the use of small whole numbers, rather than very large numbers or extremely small fractions. The centimeter and millimeter follow the prefix system described in Table 4-1, but smaller units have names of their own. One-thousandth of a millimeter (10^-6 m) is called a micron and is given the symbol μ. One-thousandth of a micron (10^-9 m) is the millimicron (mμ), and a tenth of this (10^-10 m) is the Angstrom unit (Å or A). Measurements of length are among the most familiar types of measurement. Differences between parts of organisms, or living cells themselves, frequently fall in a size range smaller than is convenient for the human senses. One method of measuring small objects is to magnify both the object and the scale with which it is compared. Measurement with a microscope is possible in this manner, but more commonly we use an "ocular micrometer," a scale engraved on a transparent disk which fits into the eyepiece of the microscope. The optics are arranged so that the scale and the object are seen at the same time. The ocular micrometer is calibrated; that is, definite values are assigned to the divisions of the scale by comparison with an accurately and finely divided scale engraved on a microscope slide.
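The arithmetic of that calibration is simple enough to sketch (the particular numbers and the function name are hypothetical, chosen only for illustration; stage micrometers are commonly ruled in divisions of 10 μ):

```python
# Ocular micrometer calibration: observe how many ocular divisions line
# up with how many divisions of the stage micrometer, whose divisions
# have a known value in microns.
def microns_per_ocular_division(stage_divs, ocular_divs, stage_div_microns=10.0):
    """Value of one ocular division, in microns."""
    return stage_divs * stage_div_microns / ocular_divs

# Suppose 25 ocular divisions coincide with 4 stage divisions (40 microns):
k = microns_per_ocular_division(4, 25)
print(k)      # 1.6 microns per ocular division
# A cell spanning 5 ocular divisions is then about 8 microns across:
print(5 * k)  # 8.0
```

The calibration holds only for the objective and tube length used when it was made; changing the magnification changes the value of each ocular division.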
Almost all measurements of the sizes of cells or parts of cells are accomplished in this manner. Measurements of even smaller units are exceedingly difficult indirect measurements and frequently are calculated from the known geometry of an optical system. Area and volume are derived directly from length, two or three measurements of which enable us to calculate these quantities. Several special units of area exist, such as the acre and the are or hectare, but only square centimeters (cm^2) and similarly derived units are common in the experimental laboratory. Volume can be expressed in terms of length units cubed (cm^3) if the volume is calculated geometrically from the dimensions. The liter, the volume (about 1000 cm^3) occupied by 1 kg of water under certain specified conditions, is the basic metric unit of fluid volume, while the gallon (231 in.^3 in the U. S.) is the commercial unit. Fractions of liters generally follow the system of Table 4-1. The microliter (μl) has sometimes been designated by the symbol lambda (λ). This practice seems to be less common now, μl being more usual, but the symbol λ does exist in the literature and should be interpreted as one microliter.

Mass: Mass is a measure of the amount of material, a concept derived from Newton's second law of motion, which says that force is directly proportional to acceleration. Expressed in this way, mass becomes a quantitative unit related to the qualitative idea of inertia. Weight is easily confused with mass, but weight is the force which gives a body the acceleration of gravity and thus should be expressed in units of force. Mass is a constant property of a body, while weight will vary from place to place as the acceleration of gravity varies. If we say that an object "weighs" 15 g, we really are referring to its mass. The standard unit of mass is the international kilogram, a cylinder of platinum-iridium kept in the vault in Paris.
It was originally intended to be equal to the mass of 1000 cm^3 of water at 4° C, but later more precise measurements have shown that this is not exactly correct. Thus the kilogram, like the "old" meter, becomes an arbitrary standard. Since July 1, 1959, the international pound has been defined as 0.45359237 kg. Multiples larger than the kilogram are rarely used in the laboratory, and, in fact, the gram (10^-3 kg) is probably the most commonly used unit. The prefixes and symbols of Table 4-1 apply quite directly to mass units. The microgram has sometimes been given the symbol gamma (γ), but this, like the use of λ for μl, is being discouraged. Measurements of mass are perhaps the easiest of all measurements to make. It is unfortunate that we speak of "weighing" for the comparison of unknown masses with standard masses. We now can use a variety of balances operating on slightly different principles. The simplest idea, probably, is to place the unknown and standard masses on the opposite and equal arms of a beam. Standard weights are added until there is no deflection of the beam. The usual analytical balance is of this type. Routine weighings of larger objects might be performed on a triple beam balance, where a set of standards counteracts the unknown mass, not by changing the amount of standard mass, but by changing the distance between the standard and the point at which the beam is suspended. From the simple law of the lever the scale can be graduated in mass units. The "trip scale" is a combination of the previous two types. The material to be weighed is placed on the left pan, known weights are placed on the right pan to the nearest gram, and then the fractions of grams are found by sliding a "rider" weight along a beam. A torsion balance contains a wire or band of metal which is twisted during the measurement. Within the range of the balance, the amount of twisting is proportional to the load.
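The mass-weight distinction drawn above can be made concrete with a short calculation (an illustrative sketch; the function name is my own, and the standard value of g is assumed):

```python
# Mass in kilograms is a constant property of a body; weight is the
# force m * g and varies with the local acceleration of gravity.
G_STANDARD = 9.80665  # standard acceleration of gravity, m/s^2

def weight_newtons(mass_kg, g=G_STANDARD):
    """Weight (a force, in newtons) of a body of the given mass."""
    return mass_kg * g

# The object that "weighs" 15 g has a mass of 0.015 kg. Its weight on
# Earth, and at a place where g is smaller (say 1.62 m/s^2), differs,
# while the mass does not:
print(round(weight_newtons(0.015), 3))        # 0.147
print(round(weight_newtons(0.015, 1.62), 3))  # 0.024
```

A beam balance is unaffected by this variation, since gravity pulls equally on the unknown and the standards; a spring scale is not.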
Most torsion balances are used as null instruments, being brought back to the undeflected position by adding weights to oppose the load or by moving riders. Several brands return the balance to the undeflected position by moving an arm which twists the wire in the opposite direction. The arm moves over a scale from which the unknown mass can be read. Several new analytical balances are now being made that may eventually replace the equal-arm analytical balance. One difficulty with placing the item of unknown mass and the standard masses on opposite sides of the balance point is that the two arms of the beam may not be identical. This possibility can be avoided by comparing the unknown and standard masses on the same arm against some inert material on the opposite arm. Followed literally, such a procedure would make weighing too cumbersome, but some of the new balances operate on this principle. A full set of standard weights is hung on one arm of the balance, counteracted by an equal mass on the opposite arm. The unknown material is placed on the same arm as the standards, and then standards are removed in various combinations until balance is restored. Recording the values of the standard weights removed gives the mass of the unknown object. These new balances fit into our mechanical world because the standard weights are removed by a system of hooks, arms, and gears, and this mechanical system is coupled to a dial which shows the mass of the unknown object directly. Those of us who grew up with the equal-arm analytical balance, however, are still a little suspicious of anything this easy.

Time: The units in which the scientist expresses time are the everyday units: seconds, minutes, etc. The second is the standard unit, defined formerly as 1/86,400 of a mean solar day, but since the international agreement in 1960 as 1/31,556,925.9747 of the tropical year 1900.
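The two definitions are easy to reconcile with a little arithmetic (a sketch; the constant names are my own):

```python
# The 1960 definition assigns 31,556,925.9747 seconds to the tropical
# year 1900. Dividing by the 86,400 seconds of a mean solar day gives
# the familiar length of the year in days.
SECONDS_PER_YEAR_1900 = 31_556_925.9747
SECONDS_PER_DAY = 86_400

print(round(SECONDS_PER_YEAR_1900 / SECONDS_PER_DAY, 4))  # 365.2422
```

The year was chosen over the day because the earth's rotation is slightly irregular, while the length of the year 1900 is a fixed quantity.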
This is probably a temporary standard, since it is likely that the period of vibration of some simple molecule will be adopted as a new fundamental standard of time. The new standard second is necessary for some purposes, as in astronomy, but for most practical purposes clocks are still good enough. Time is an especially important unit in biology because living material is always changing, and measurement of rates gives important information about mechanisms. Time can be measured with a clock or stopwatch, but where the human reaction time can contribute to errors some electrical or mechanical device may be used to start and stop the clock. Some exceedingly fast reactions are followed and timed by complex electronic equipment.

Energy: Of all the quantities that must be measured, the abstract collection known as energy is probably the most difficult to understand. Energy is commonly defined as the capacity or ability to do work, but work itself is of several kinds. Energy can be converted from one form to another, and the mathematical expressions denoting these transformations can become quite complicated. The term "heat" means many things to many people, but here we shall refer only to the quantity of thermal energy which a body contains. The amount of heat depends upon the mass of the body, and under a given set of conditions a given body of a certain material must contain the same amount of heat. Heat is rarely measured directly, but instead the amount of heat in a body is calculated from other measurements. Even the calorimeter, which comes close to measuring heat, actually measures changes in temperature of a known amount of water or other material. Heat measured in this way is expressed in calories, one calorie being the heat required to raise the temperature of one gram of water one Celsius degree.
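That calorimetric calculation can be written out directly (a sketch; the function name and sample figures are my own, and the specific heat of water is 1 cal per gram per Celsius degree by the definition just given):

```python
# Calorimetry: the heat absorbed by a known mass of water is computed
# from its measured temperature rise.
def heat_calories(mass_g, delta_t_celsius, specific_heat=1.0):
    """Heat in calories gained by mass_g grams warming by delta_t_celsius."""
    return mass_g * delta_t_celsius * specific_heat

# 500 g of water warming by 2.5 Celsius degrees has absorbed 1250 cal:
print(heat_calories(500, 2.5))  # 1250.0
```

For materials other than water, the appropriate specific heat is supplied in place of the default.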
Calculating heat in joules from electrical measurements is usually more precise than calorimetry. The joule, when used for heat, is the energy given off in one second when a current of one ampere flows through a resistance of one ohm. Thus heat can be related quite directly to electrical quantities. Temperature is a measure of the "concentration" of heat, that is, the amount of heat per unit of material. The temperature of a uniform object is independent of its size. The Celsius (centigrade) temperature scale is used almost exclusively in biology. Degrees on the Absolute or Kelvin scale are the same size as the centigrade degrees, but 0° K is about -273° C. If temperatures must be converted from Celsius to Fahrenheit, the easy way is to carry a diagram on which the scales are printed side by side, as in Fig. 4-2. Alternatively, remember a few reference points; e.g., 20° C = 68° F, 37° C = 98.6° F, 40° C = 104° F, and 5 Celsius degrees equal 9 Fahrenheit degrees. As a last resort, use the conversion formulas given with Fig. 4-2.

Fig. 4-2. Celsius (centigrade) and Fahrenheit temperature scales. t_F = (9/5)t_C + 32, and t_C = (5/9)(t_F - 32).

The familiar way of measuring temperature with a thermometer depends upon the thermal expansion of mercury. The amount of expansion per degree is constant over a wide range of temperatures. Temperatures below the range where mercury can be used require some other liquid, such as an alcohol or a hydrocarbon. Several electrical devices also measure temperature. A thermocouple is formed by joining wires of two different kinds. For example, a piece of copper wire could be joined to a piece of wire of the alloy called constantan to form a loop half copper and half constantan. If one junction is at a higher temperature than the other, a small but measurable voltage will exist in the circuit, the magnitude of the voltage depending on the temperature difference.
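The Celsius-Fahrenheit conversion formulas of Fig. 4-2 can be checked against the reference points quoted above (a sketch; the function names are my own):

```python
# t_F = (9/5) t_C + 32, and its inverse.
def c_to_f(tc):
    return 9.0 * tc / 5.0 + 32.0

def f_to_c(tf):
    return 5.0 * (tf - 32.0) / 9.0

# Reference points from the text:
print(c_to_f(37))  # 98.6
print(c_to_f(40))  # 104.0
print(f_to_c(68))  # 20.0
```

The factor 9/5 is simply the "5 Celsius degrees equal 9 Fahrenheit degrees" rule, and the 32 accounts for the offset of the freezing point of water.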
A resistance thermometer measures temperature by detecting changes in the resistance of a platinum wire. The platinum wire can be used as one of the resistors in a Wheatstone bridge (Fig. 4-1), and temperatures can be measured with great precision. A somewhat similar device called a thermistor depends upon changes in resistance of a semiconducting material. Because thermistors offer the advantage that the detecting element can be extremely small, they are particularly useful in some biological experiments. Thermistors have made it possible to measure temperatures in such improbable places as stomachs. Several thermistor probes are illustrated in Fig. 4-3.

Fig. 4-3. Several thermistor probes, each of which can measure temperature. Size can be estimated from the wires at the bottom, each about 3 mm in diameter. (Courtesy Yellow Springs Instrument Co.)

Energy exists in other forms: as electrical energy, as the kinetic energy of motion, as radiant energy, or in one of several forms of potential energy. The interrelationships of the various kinds of energy and transformations from one form to another will not be considered here. Measurements of energy depend very strongly upon electrical instruments of one kind or another. Electrical measurements will be taken up in Chapter 13, and measurements of radiant energy are dealt with in Chapters 12 and 13. Potential energy actually cannot be measured directly but instead is calculated from the work required to transform the energy to this storage form or from the energy released when the potential energy is transformed into one of the other kinds. Examples of potential energy are the energy possessed by a body held in an unstable position and the bonding energy of chemical compounds.
We might say that a molecule of glucose contains a certain amount of energy, but almost certainly we mean the energy that will be liberated if and when this molecule is converted chemically into some more stable, lower energy compound. Chemical potential energy is usually expressed in kilocalories per mole of reacting material.

Chemical Quantities and Concentrations: Chemical compounds exist as molecules, but molecules are much too small to be treated as units. A much larger unit, the mole, is Avogadro's number (6.02 × 10²³) of molecules. Avogadro's number (N) was originally devised in studies with gases when it was found that a given volume of any gas at a given temperature and pressure contains the same number of molecules. One mole of any gas at 0° C and a pressure of one atmosphere occupies 22.4 liters. Molecular weight is a relative figure, calculated by adding atomic weights, which in turn are determined relative to oxygen, which is given an atomic weight of 16. An amount of a compound with a mass equal to its molecular weight expressed in grams contains N molecules.

Most biological reactions occur in solutions; that is, the reacting molecules are dissolved in water or occasionally in some other solvent. Concentration is a measure of the amount of the dissolved substance (solute) in a unit volume of solution; several methods of expressing concentration are in common use. If we dissolve one gram-molecular weight of glucose (180 g) in enough water to make one liter of solution, the concentration is one mole per liter (1 mol/l), which is frequently contracted to one molar (1 M). The physical chemist uses an expression, molal, for a solution in which one mole of a material is dissolved in 1000 g of solvent.
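As a sketch of the arithmetic just described (the helper names are my own, not the book's), the glucose example works out like this:

```python
AVOGADRO = 6.02e23  # molecules per mole, as quoted above

def moles(mass_g, molecular_weight):
    # a mass equal to the molecular weight in grams contains N molecules
    return mass_g / molecular_weight

def molarity(mass_g, molecular_weight, volume_liters):
    # moles of solute per liter of solution
    return moles(mass_g, molecular_weight) / volume_liters

# 180 g of glucose (molecular weight 180) made up to one liter is 1 M...
assert molarity(180.0, 180.0, 1.0) == 1.0
# ...and that gram-molecular weight contains Avogadro's number of molecules.
assert moles(180.0, 180.0) * AVOGADRO == 6.02e23
```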
Solutions of acids which ionize to produce only one H⁺ per molecule can be labeled in molar concentrations, but if the acid should liberate two or more H⁺ ions, the behavior might make the acid solution seem two or more times as concentrated as it actually is. To meet this difficulty the chemist devised the notation, equivalents per liter (eq/l); a solution of 1 M H₂SO₄ contains 2 eq/l. A solution containing 1 eq/l is called a one normal (1 N) solution. Dilute molar, molal, or equivalent solutions may be designated as millimoles per liter (mM) or as microequivalents per liter (µeq/l). For certain purposes, concentrations expressed in percentages are adequate, particularly in the biology laboratory. Two kinds of percentage solutions are possible. A 5 per cent salt solution could be made by dissolving 5 g of NaCl in enough water to make 100 ml of solution. This is percentage by weight, and if there is likely to be any doubt, it should be so designated (w/v). A 5 per cent solution of alcohol (a liquid) contains 5 ml of alcohol made up to 100 ml with water and does not necessarily contain 5 g of solute. Such solutions must be designated as volume percentage solutions (v/v). Concentrations of some solutions are expressed as g/l or mg/l, and it will be seen that these are basically similar to percentage by weight. A solution expressed in milligrams per liter is sometimes spoken of as parts per million (p.p.m.), an expression which is awkward when dealing with liquid solutions but valuable when working with gases in air or other complex solutions.

There are several ways of expressing concentrations, and even the professional biologist becomes confused occasionally. If it is possible, we prefer to express concentrations in moles per liter. However, we do not always know the molecular weights of our material, or the reaction mixtures are so complex that it is no help to know the molar concentration of the various solutes.
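The equivalents and percentage conventions can be sketched the same way (again with hypothetical helper names of my own):

```python
def normality(molar, h_ions_per_molecule):
    # a 1 M acid liberating two H+ per molecule behaves as 2 eq/l, i.e. 2 N
    return molar * h_ions_per_molecule

def percent_wv(grams_solute, ml_solution):
    # weight-per-volume percentage: grams of solute per 100 ml of solution
    return 100.0 * grams_solute / ml_solution

assert normality(1.0, 2) == 2.0       # 1 M H2SO4 contains 2 eq/l, so it is 2 N
assert percent_wv(5.0, 100.0) == 5.0  # 5 g NaCl in 100 ml is 5 per cent (w/v)
```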
Many biologically active solutions are quite dilute, and it should be no surprise to find a solution described as 10⁻⁶ M.

The hydrogen ion (H⁺) concentration, or perhaps more properly the concentration of the hydrated ion (H₃O⁺), has a very profound influence on all kinds of biological reactions. Concentrations of H⁺ ions might vary from more than 1 M to 10⁻¹⁴ M or less, so we use the logarithmic pH scale to denote these concentrations. If [H⁺] is the concentration of these ions in moles per liter, then

pH = −log₁₀ [H⁺] = log₁₀ (1/[H⁺])

The notation pH 1 means 10⁻¹ M hydrogen ions, and pH 6.8 means the H⁺ concentration is 10⁻⁶·⁸ M. The pH of a solution might be measured by noting the effect on certain dyes or, more precisely, with an electrical pH meter (Chapter 13).

Pressure: The pressure exerted by a gas is expressed in a variety of terms. The atmosphere exerts on a unit area of the earth's surface a pressure which is equal to the weight of all the air vertically over that area. Under a set of standard conditions, this pressure would be called one atmosphere. Atmospheric pressure varies greatly with temperature, however, so some more easily measurable unit of pressure is desirable. A column of mercury could be arranged so that it exerted the same pressure as the standard atmosphere. Since mercury is so much more dense than air, this column is only 760 mm high. In other words, a layer of mercury 760 mm deep weighs the same as the whole thickness of the atmosphere. If we used water, the column would be about 34 ft high. In the laboratory, pressures usually are expressed in terms of the equivalent column of mercury (mm Hg). The mass of such a column of mercury depends on its cross-sectional area. A column of mercury 1 cm² in cross section and 760 mm high weighs about one kilogram. This is roughly equivalent to 14.7 lb/in².
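Both of the numerical relations above, the logarithmic pH scale and the mercury-column expression of pressure, can be sketched as small helper functions (the names are mine, not the book's):

```python
import math

def ph(h_conc_molar):
    # pH = -log10[H+] = log10(1 / [H+])
    return -math.log10(h_conc_molar)

def h_conc(ph_value):
    # inverting the definition: [H+] = 10^(-pH)
    return 10.0 ** (-ph_value)

def atm_to_mm_hg(atm):
    # one standard atmosphere supports a mercury column 760 mm high
    return atm * 760.0

# pH 1 means 10^-1 M hydrogen ions, as in the text
assert abs(ph(0.1) - 1.0) < 1e-9
# pH 6.8 means the H+ concentration is 10^-6.8 M
assert abs(h_conc(6.8) - 10.0 ** -6.8) < 1e-12
assert atm_to_mm_hg(1.0) == 760.0
```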
Even though most laboratory work designates pressures in mm Hg, the international unit of pressure is the bar, which is equal to 10⁶ dynes/cm², or 1.013 kg/cm², or 0.987 atm. Weathermen express pressure in millibars.

Volumetric glassware

It is possible to perform worthwhile biological experiments using nothing more than ordinary glassware, and even those experiments employing the most elegant instruments are likely also to require volumetric ware. This glassware is a special kind of laboratory equipment, designed for measuring the volumes of liquids. Burettes, pipettes, volumetric flasks, and graduated cylinders are the most commonly used pieces.

A burette is a glass tube, graduated in milliliters, with a stopcock at the lower end so that measured volumes of liquid can be drained off into another container. Some burettes have special stopcocks and reservoirs to make filling easier. When water or aqueous solutions are used in a burette, the upper surface of the liquid forms a curve or meniscus. The level of the liquid is measured by placing the bottom of this curve at the graduation on the burette. For very precise reading some dark material should be placed behind and below the meniscus so that the exact bottom of the curve will be easily identified. If the eye is placed slightly below the graduation line, the graduation ring looks like an ellipse. Placing the bottom of the meniscus at the center of this ellipse gives a precise reading and avoids errors as great as the thickness of the graduation line.

A pipette is really a miniature burette, consisting of a glass tube with one or more graduations engraved along its length. Measured quantities of liquid are transferred from one container to another by sucking up a pipetteful and then discharging the amount of liquid designated by the scale marks. A volumetric or transfer pipette (Fig.
4-4a), designed for transferring certain exact volumes of liquid, has only one mark on the tube. The pipette is filled to about 15 to 20 mm above the line, and then held vertically with the index finger on top controlling the flow of liquid. (Some beginners seem to prefer to use a thumb, but this is about as awkward as holding a fork as you would a hammer.) A slight rolling motion of the finger allows perfect control, and the excess liquid is drained off down to the mark. The liquid is then allowed to flow out at an unrestricted rate until the level has reached the bottom of the bulb of the pipette. Finally the tip is held against the wet side of the vessel until the liquid has stopped flowing. The last portion of a drop is ordinarily not blown out of a volumetric pipette.

Fig. 4-4. Three kinds of pipettes. (a) Volumetric or transfer. (b) Measuring. (c) Serological.

Measuring pipettes (Fig. 4-4b) are graduated in milliliters, with 0 at the top. The full capacity is contained between this mark and a mark near the bottom. The liquid is allowed to drain as rapidly as possible and still retain enough control to stop at the desired point. When the full quantity has been "delivered," touch the tip to the side of the receiving container and remove the pipette. Never blow out the last "drop" because this last "drop" may be a milliliter or more.

The serological pipette (Fig. 4-4c) is graduated to the very tip, and the last drop is to be blown out. Such pipettes are identified by a ground or etched ring at the top. Serological pipettes and measuring pipettes are superficially so similar that special care is necessary to avoid mistakes.

Biologists use pipettes more frequently than chemists do, probably for several reasons. The quantities of materials transferred are more likely to be about the right size for a pipette, and pipettes are faster than burettes.
Ordinarily, more precise control is achieved with a burette, but the variation in biological materials is usually much greater than any error introduced by pipetting. Finally, the chemist is much more likely to handle acids, caustic solutions, or strong poisons, and a pipette always involves the danger of drawing up a mouthful of the liquid.

Fig. 4-5. Several volumetric flasks. (Courtesy Corning Glass Works.)

Volumetric flasks (Fig. 4-5) are used primarily for making solutions or for diluting more concentrated solutions. A convenient method for dissolving many materials is to place the weighed quantity of solute in the flask and then to add distilled water to about ½ or ¾ of the capacity. Shake or swirl until the solute is dissolved and then add water up to the line on the neck. Finally, pour the solution into an Erlenmeyer flask for storage. Please do not store solutions for long periods in volumetric flasks.

The graduated cylinder (Fig. 4-6) is the convenient piece of glassware for rapid measurements of quantities of liquid. Since the diameter is greater, relatively, than that of any other volumetric item, an error in height of liquid means a greater volume error.

Fig. 4-6. Several graduated cylinders. (Courtesy Corning Glass Works.)

Calibration: This is a procedure for assigning values to the various graduations on the glassware. Ordinarily the calibration is performed by the manufacturer. Volumetric glassware is calibrated either "to contain" or "to deliver." The difference is the film of water that adheres to the glass after delivery. That is, a pipette might contain exactly 10.01 ml, but only 10.00 ml would flow out, or be delivered. Burettes and pipettes are almost always calibrated "to deliver." Volumetric flasks usually are calibrated "to contain." Graduated cylinders might be calibrated either way. The manufacturer indicates on the glassware how it was calibrated, either with words or with the abbreviations TC or TD.
Most calibrations are performed at 20° C. Several grades of volumetric glassware are available, differing chiefly in the tolerances of calibration. Class B tolerances are usually about twice as large as Class A tolerances, as indicated in Table 4-2. As expected, Class A glassware costs more. At an even higher cost, the manufacturers provide pieces tested individually, numbered with a serial number, and accompanied by a certificate from the manufacturer's standards laboratory.

Theory of measurement

Measurement, if you think of it as a process of applying numbers to a sequence of units, is just counting. Some variable quantities are discrete; that is, each unit occurs individually with no fractions of units. The number of apples in a bushel or the number of people in a population illustrates such discrete numbers. The number of cents in a certain number of dollars is also discrete, but the annual interest on one dollar at 4½ per cent is a fractional value. The length of a room is a number of meters plus a number of centimeters plus a number of millimeters, etc. Quantities which vary in this way are called continuous. Most measurements in the laboratory deal with continuous quantities, and the counting consists of applying numbers to units of a chosen size.

We might suppose that under a given set of conditions (temperature, humidity, etc.) a bench in the laboratory possesses some actual exact value of length. If we measure the bench, we obtain an estimate of this exact value. With a meter stick we find the length to the nearest centimeter, but the true length might be a millimeter or two longer or shorter than our estimate. Using a set of optical instruments, we measure (and calculate) the length to the nearest millimeter. The estimate of the true length is better than before, but still an estimate. If better and better techniques are used, the results approach the true value but never actually reach it.
[Table 4-2. Calibration tolerances (± ml) of Class A and Class B burettes, pipettes, volumetric flasks, and graduated cylinders at the common capacities.]

Let us now repeat our best measurements several times. Experience teaches us that we should not expect exact duplication of results. Slight human variation, small changes in the instruments, and other fluctuations, some of which are too small to be noticed, will combine to affect the final measurement. Our series of several numbers are close to each other but not identical. If only random variations affect the results, a graph of a large series of measurements tends to follow the "normal" curve in Fig. 4-7. The higher a point on the curve, the more frequently that value is found. A sharper and steeper curve indicates a more precise measurement. Precision of measurement refers to the closeness of agreement among the various values.

Fig. 4-7. The "normal curve" of error. Values being measured extend along the horizontal axis, and the height of the curve at any point is an indication of the frequency with which that value occurs.

If the measurement is accurate, the numbers obtained cluster around the real and true value. Some individual numbers will be larger, some smaller, but taken together the set of numbers gives a useful estimate of the true value. Measurements cannot be accurate without being reasonably precise.
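The purely random case can be simulated in a few lines (a sketch; the bench length and the size of the random error are hypothetical numbers of my own):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_LENGTH = 200.0   # cm, a hypothetical bench
SPREAD = 0.05         # cm, hypothetical random error of the technique

# one thousand repeated measurements, each perturbed only by random error
readings = [TRUE_LENGTH + random.gauss(0.0, SPREAD) for _ in range(1000)]
mean = sum(readings) / len(readings)

# Individual readings fall above and below the true value...
assert min(readings) < TRUE_LENGTH < max(readings)
# ...but the set taken together clusters closely around it.
assert abs(mean - TRUE_LENGTH) < 0.05
```

A histogram of such readings would follow the "normal" curve of Fig. 4-7; a smaller SPREAD would give the sharper, steeper curve that indicates a more precise measurement.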
If, in contrast, some systematic error exists in the measuring technique, the estimate can be precise without being accurate. If you failed to notice that someone who needed a piece of maple wood had cut it from the end of your meter stick, you might make precise but inaccurate measurements.

Measurements in physics strive for precision, which is necessary for accuracy. This is the goal in biology, also, but such precision is rarely achieved because biological materials are inherently variable. Even so, the variability of biological material is no adequate reason for not making quantitative measurements. The biologist simply must realize that his measurements can never be quite as precise and, therefore, never quite as accurate as the physicist's.

One might suggest that better instruments would detect smaller differences and be able to give more precise values. This is true only to a certain point. Eventually random fluctuations in atomic or molecular structure, kinetic activity of particles, or other such changes become as large as the differences we are trying to detect. The human ear, for example, is an extremely sensitive detector of slight variations in pressure. The natural kinetic or thermal movement of air molecules causes very small changes in pressure when the molecules strike the tympanic membrane. If the ear were only a trifle more sensitive it would detect the bombardment by random movement of single air molecules, and all sound would be superimposed on a steady rumble. This delicate instrument is the product of evolution, but the instruments built by man are subject to the same limitations. There is a limit beyond which attempts to refine measurements are pointless.

Significant Digits: The number of significant figures resulting from a measurement is an indication of the precision of the instrument. If we weigh a pebble on a triple beam balance we find that it weighs 4.7 g.
We realize that this number means that the actual mass is between 4.65 g and 4.74 g. When we use a torsion balance to weigh the same pebble, we find a weight of 4.72 g. This obviously represents a value between 4.715 and 4.724, but we cannot tell whether the pebble actually weighs slightly more or less than 4.72 g. On a good analytical balance, we might weigh to the nearest tenth of a milligram, expressing the result as 4.7208 g. These five figures are significant, revealing the precision of the measurement. It would be a mistake to weigh on the triple beam balance, obtain a weight of 7.2 g, and then write the weight as 7.2000 g. Zeroes placed after the decimal point are counted as significant figures indicative of precision. In the case of large numbers, like 563,000,000, only the first three figures have any meaning. It is interesting to speculate on the precision required in a financial institution like a bank or the steps required to preserve accuracy and precision in taking the decennial United States Census.

More significant digits become available only by improving the measuring technique. Calculation, especially multiplication and division, tends to increase the number of figures, but the final result can be no more precise than the least precise of the individual measurements. Measurements can never be more precise nor more accurate than the standards used in the measurement. I once saw a bottle of a standard acid labeled 0.100406281 N. I was told that this was prepared by weighing potassium acid phthalate to five significant figures. The solution prepared from this substance was used to standardize an alkaline solution, the alkali concentration being expressed to seven figures. The alkali was then used as a standard for the final acid solution, which gained yet another two figures. Possibly this acid was 0.10041 N, but certainly the extra numbers are meaningless.
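The idea of reporting a value only to its significant figures can be sketched with a small rounding helper (the helper is my own, not part of the text):

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures (hypothetical helper)."""
    if x == 0:
        return 0.0
    # position of the leading digit decides how many decimal places survive
    return round(x, sig - 1 - floor(log10(abs(x))))

# The pebble example: the same mass reported at different precisions.
assert round_sig(4.7208, 2) == 4.7    # triple beam balance
assert round_sig(4.7208, 3) == 4.72   # torsion balance
# In a number like 563,000,000 only the first three figures mean anything.
assert round_sig(563456789, 3) == 563000000
```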
In fact, the weight of the potassium acid phthalate could be no better than the set of balance weights, and the several titrations can be no better than the volumetric glassware used.

Certain kinds of measurement require all the precision available. Biological materials rarely demand such precision, however, so the extra effort is wasted. It is quite possible to spend too much time in careful measurement if, for example, the living material changes during the measurement. Beginners sometimes handle solutions so slowly and with such care that evaporation significantly changes the concentration. A reaction rate might be found by measuring the amount of one of the reactants, say every five minutes, but the estimate of the rate is not very good if it takes three minutes to obtain each value. Nearly always the largest error in any biological measurement is in the living material. Measurements should not introduce error larger than the biological error, but there is no point in using measurements a good deal more precise than needed.

Dimensions: Measurements of physical quantities are given in terms of a unit, and the label defining the unit is as important as the number. We commonly use the term dimension when referring to these labels. Admittedly, dimension usually refers to length, but it takes only a little imagination to think of seconds, degrees, and grams as dimensions also. When calculations are performed using dimensional values, that is, numbers together with their dimensions, the values will change, but frequently the dimensions change also. The dimensions of velocity are in terms of length units per unit of time, or cm/sec. This relationship could also be expressed exponentially as cm × sec⁻¹. If an object travels x cm in y sec, its velocity is x/y cm/sec. Here "cm/sec" is a new kind of dimension. Calculations can be shown by equations, as A = B, where the "equals" sign means that A and B are identical.
If A and B have dimensions, the dimensions must be identical as well as the accompanying numbers. Many measurements made in the laboratory lead to expressions of proportionality. For example, we find that a given material obeys the relationship "mass is proportional to volume." The relationship is true whether we know the units or not, but we cannot write an equation M g = V cm³, because grams are not cubic centimeters. We can say, however, M g = kV cm³, if we let k be a constant. Any increase in V brings a corresponding increase in M. Multiplication by the constant must bring about an identity in dimensions, and therefore k must have dimensions of its own:

M g = k g/cm³ × V cm³

This particular constant is called density. If M were expressed in pounds and V in gallons, the relationship between mass and volume would be the same, but k would have a different numerical value and different dimensions.

Some biological measurements must be expressed in rather complex dimensions. A measurement of rate of metabolism, for example, might be in terms of microliters of oxygen used per gram of cells per hour, or µl O₂/(g of cells × hr). It would be improper to write this as µl/g/hr because such expressions give even the mathematician fits.

The main word of caution to be offered here is to remember the dimensions. A measurement of millimeters of pressure change is not identical to µl of O₂ unless an appropriate correction is made. If you count scale divisions on a dial, the divisions have meaning only if you know what they stand for. Biological laboratories frequently use electrical recording devices, but the record produced is in chart paper divisions until the proper dimensional corrections are applied. Calculations can be properly interpreted only if the dimensions are changed along with the numbers.

Indirect Measurement: Probably even more in biology than in physics, certain measurements must be made indirectly.
You cannot very well measure the concentration of a substance inside a cell by ordinary chemistry without destroying the cell. Stephen Hales measured the blood pressure of a horse directly, but his techniques cannot be used on humans. Ingenuity on the part of the experimenter will often yield an indirect measurement of a quantity. A plant cell contains an amount of water which is not directly measurable, but generally the volume of the cell can be estimated under a microscope. It might be guessed that the amount of water is some function of the volume, W = f(V), where the relationship might be proportional, logarithmic, or some more complex function. If you have reason to suppose that water content is proportional to volume, then W = kV, and it becomes possible to compare two cells. Even if the value of k is unknown, the results give relative values of water content.

The reasoning demonstrated is generally applicable. If you know that A = kB, and if you know the value of the proportionality constant, it is possible to compute the value of B from measurements of A. Even if the value of the constant is unknown, relative values for B can be obtained by measuring A.

Micro-methods: Biologists and biochemists have found it necessary to develop a set of microanalytical methods. These methods, in general, follow the same physical and chemical principles as ordinary analytical procedures, but working with very small quantities may introduce special difficulties. Pipetting 10 ml of water is easy, but pipetting 10 µl is more difficult because the surface tension of the water has more influence in the smaller pipette. Certain micromethods are limited by the random fluctuations or movements of molecules. Obviously special care is required when working with extremely small quantities of materials. Some other micromethods depend upon physical and chemical principles which are not ordinarily used on a larger scale.
Several of the techniques described in the later chapters qualify as micromethods by dealing with small quantities and commonly used principles or by using principles not ordinarily used on larger amounts of material.

SELECTED REFERENCES

No separate publications specifically covering Measurement are recommended here. From the Bibliography:

Wilson, E. Bright, Jr., An Introduction to Scientific Research. Measurements and the execution of experiments are discussed in great detail.

Richards, James A., Francis Weston Sears, M. Russell Wehr, and Mark W. Zemansky, Modern University Physics. Several discussions of the theory and practice of physical measurement are included.

CHAPTER 5

Selection of Techniques

Most of the modern experiments in biology employ instruments, tools, or techniques assembled from the various physical sciences. Usually several different methods or instruments are available, and the investigator must make a choice. Sometimes it is not easy to choose. Of course the techniques to be used in a research project must be suitable and related to the problem. The most elaborate set of instruments is useless if it does not measure the right thing.

Physical science has placed many instruments at the disposal of the biologists, some simple, some extremely complex. Several features of these are noted in the following sections, and several general methods are described in detail in the chapters which follow. The method to be chosen is the one that gives the most precise and reliable information with the least difficulty and expense.

Many papers in the literature contain descriptions of complicated assemblies of parts put together for use in rather simple experiments. If such an instrument (or combination of instruments) is unique and expensive, or if it depends on principles not commonly used, it is likely to be called "elegant." Many of the elegant instruments are constructed in the laboratories where they are used.
Some people, who are born gadgeteers, do all their experimenting with such instruments. It is fun and stimulating to the imagination. Papers describing work done by such instruments are impressive, we must admit. Frequently these instruments offer the only way to make a certain kind of measurement.

Not all experimental research requires elaborate instrumentation, however. Even the most impassioned gadgeteer, with pliers for hands and vacuum tubes for brains, would agree that the most completely mechanized work is not necessarily the most "scientific." Some of the definitive solutions to biological problems have been achieved through the use of simple glassware and careful observation. Each investigator must choose for himself between the simple and the elaborate.

Another choice, not always so obvious, is the choice between a direct observation and an indirect observation. Living cells sometimes resist direct observation by the contemptible trick of dying. Even some biological chemicals can be observed only indirectly. For example, because most of the ordinary protein molecules are not visible, information about their structure must be obtained by indirect methods. It is possible to measure viscosity, density, mechanical properties, solubility in various solvents, and chemical make-up, but the picture pieced together from these bits of information will never be as complete as if the individual atoms and bonds could be seen.

Some questions worth asking

Probably the easiest way of organizing the evaluation of an instrument or technique is to ask a series of questions about it. The following set will illustrate some features that are usually worth considering.

(1) Purpose. For what purpose was the instrument designed? Is the technique adaptable to the present needs? Under what circumstances should this method be used? Is the instrument likely to be used for a purpose more exacting than originally intended?

(2) Theory.
What is the basic principle of the technique? What is really being measured? Which physical or chemical laws are the bases of this technique? Does it measure what we want to measure?

(3) Details. How is the instrument constructed to accomplish its purpose? What kind of equipment or supplies are required for this method? What does each knob control? Is a schematic diagram of the electrical or electronic system, the mechanical components, or the optical system available? Could the instrument be repaired or adjusted if necessary? The knowledge of these details increases the pleasure in the experiment, and the operator of the instrument becomes most competent when he is familiar with them. Any unusual behavior is quickly noticed and easily diagnosed. More than one graduate student has been embarrassed during the final examination covering his thesis when one of the examiners asked what was in the "black box."

(4) Precautions. What are the major sources of error? How can these errors be minimized? What kind of measurement should not be attempted? Does the technique give only one kind of information, or is it more versatile? Are there safety hazards?

The safety hazards deserve special mention. Because many modern instruments are electrical, severe shocks are possible. Any large capacitor might store its charge for some time after the instrument is turned off, and discharge through a human body may produce a flattening jolt. Many of the chemicals used in laboratory studies are more or less poisonous; some are extremely dangerous. Carbon monoxide is used in some metabolic experiments, and its presence in the air is not easily detected until it is too late. Poisonous materials are produced in some chemical reactions, one of which might occur as a side reaction along with the reaction being studied. Some pieces of glass apparatus are extraordinarily fragile.
Hard glass such as Pyrex breaks with especially sharp edges and may give nasty lacerations. If an instrument or technique involves safety hazards, it might be well to ask "Is there a safer way to do it?"

(5) Precision. Of what degree of precision is the technique capable? Is this precision adequate for the problem? Does this technique offer more precision than the problem requires? Would a less expensive, more convenient technique be as effective in providing answers to problems?

(6) Requirements of operation. What must the operator know about the instrument in order to make the measurements? Does the technique require any special dexterity or any difficult manipulations? Can the operator reach all the controls easily? Does the technique contribute to excessive fatigue by causing the operator to work in uncomfortable positions? Is there a possibility of eye strain? Is nervous tension a usual result of operating the instrument for long periods?

(7) Cost. Will the budget stand the original expense, as well as the cost of maintenance and operating supplies? Is the instrument adaptable to various kinds of measurements? Can a higher initial cost be justified because a versatile instrument can be used for many purposes?

Instrument design

Occasionally it becomes necessary to build an instrument in the laboratory, especially if a measurement is to be made for which no commercial instrument is available. Physical scientists are more likely to build their own instruments than are biologists. The biologist does encounter this situation at times, however, and quite frequently he needs to modify a commercial instrument.

The "design" of any object, as the term is used here, refers to the complete operation of planning and drawing specifications. A properly designed instrument should give desirable answers to all the pertinent questions in the previous section.
The design covers the purpose and theory; the materials to be used; the arrangement of the controls, meters, and mechanical parts; details of construction, including tolerances; and even the shape and color of the outer covering. A well-designed instrument performs the task for which it was built with accuracy, precision, and convenience. The instrument is "functionally designed" and has no "ruffles."

The design stage of instrument building might take a very long time and, if well done, often results in a superior instrument. Many homemade instruments, however, are needed immediately, so that six months or a year of designing would seem too long. Would it be better to build a "haywire" gadget that might work? Perhaps, but one must expect difficulties if theory and design are only surmised. As an example, a biologist once needed a small blower to provide a stream of air in an experimental chamber. He could find no commercial blower or pump with the proper qualifications, so he made a few rough sketches and started gathering materials to build a blower. Before actually starting construction, however, he thought it would be wise to look briefly into blower design. He found that the blower he had in mind would provide the air stream he needed, but only if it could turn almost a million revolutions per minute. In this case it was easier to redesign the whole experimental arrangement.

The question of whether to spend the effort required to build instruments is a serious one. Some biologists throw together the most outrageous assemblages of miscellaneous parts and produce equipment that performs beautifully. One biologist I know has achieved a reputation for building complicated devices that never work. Only experience and a certain natural knack enable one to decide whether to proceed to the construction stage. The actual construction may be done by the biologist himself or by a machinist, glass blower, or some other expert.
Some biologists enjoy doing their own work and gain the advantage of being able to modify the design as the work is in progress. It is desirable for the experimental biologist to know something about machine shop procedures, sheet metal work, carpentry, electronics, glass blowing, and assorted other specialties. The biologist cannot be expert in any of these fields, but at least he should know what can be done and what is impossible. Otherwise he might ask the machinist to bore a square hole, or he might ask the glass blower to repair a cracked lens without damaging the optical performance. Crude sketches assist the mechanic, but drawings prepared according to the practices of mechanical drawing, complete with all dimensions and tolerances, will make his work easier.

Assembly of components

Because there is almost no limit to the types of parts and materials available today, the assembly of parts or components is limited only by the imagination of the biologist. If parts are to be made, they might be constructed of metal, wood, plastics, glass, foam rubber or plastic foam, cork, or almost any other material. Plastic or rubber tubing; assorted wires; glass tubing and ground glass joints; metal tubing with connectors and valves; and round, square, or rectangular rods of several materials are easily available commercially. Distributors stock not only standardized items such as screws, but also ball bearings, nylon spheres, precision gears, and an infinite variety of other specialized parts.

The parts of the apparatus may be held in place in several different ways. Scientific supply houses furnish rods and fixtures for making sturdy frames. The aluminum alloy rods are strong enough to support rather heavy components, and the connectors available allow completely flexible arrangement. Usually it is advantageous to build such a frame on a sturdy table in such a way that both front and back are accessible.
Burette clamps, larger condenser clamps, and special devices for holding thermometers, heaters, and other equipment are standard parts.

Another type of supporting material consists of sheets of fiberboard with rows of holes. Parts can be attached with screws, wire, or specially made hooks. Such boards are often used to display tools, but they work as well to support laboratory apparatus. Much smaller pieces, with smaller holes more closely spaced, also are available. These small boards are useful as "bread boards" for temporary arrangements of parts.

A number of items built originally as toys have also been used in laboratories. You might arrange for a toy electric train to carry materials to inaccessible places. Children's steel construction sets provide sheets, bars, gears, and other parts that often are adequate for the purpose. Just use your imagination.

Construction of Parts and Properties of Materials: Once it has been decided to build equipment or component parts for larger assemblies, the choice of material becomes important. It is always worthwhile to study any properties of a possible material which will be important in the operations of construction and in the finished product.

Many metals and alloys are available. Brass, even though it is expensive, has been found to be one of the best metals for general use in constructing small parts. It is easy to cut and machine, it can be soldered easily, and it is resistant to corrosion. If organisms are to be kept in a brass container, however, toxic levels of Cu or Zn may leach out of the metal. Aluminum is less dense, usually cheaper, but somewhat harder to work and more likely to corrode by electrolysis.

One of the favorite materials is acrylic plastic (Plexiglas or Lucite), available in rods, tubes, or sheets of many sizes. It comes as the clear, transparent plastic, in a variety of colors or in opaque black.
It machines beautifully on the lathe or milling machine, and pieces can be cemented together with a chlorinated hydrocarbon like chloroethane or with a cement containing the monomer from which the plastic is polymerized. For many uses this plastic is unexcelled.

Glass Apparatus: Glass has a number of properties which make it a desirable laboratory material. It is chemically inert, transparent, available in many forms, cheap, not too difficult to work with, etc. Glass components can be assembled with connecting pieces of flexible tubing, but for many applications ground joints are superior. The glassware manufacturers provide tapered joints or ball joints in standard, interchangeable sizes. Both are illustrated in Fig. 5-1. Table 5-1 lists some of the features of Standard Taper joints, along with some representative sizes. The joints themselves can be fused to glass tubing of almost any size. Flasks, condensers, extractors, and many other glass items are available with ground joints, so that a great variety of combinations can be assembled. Stopcocks of many designs and sizes complete the assembly. A few minutes spent examining the pictures in a glassware catalogue can be quite instructive.

Fig. 5-1. Left, standard ball joint; right, standard taper joint. (Courtesy Corning Glass Works.)

Table 5-1. Standard Dimensions for Full-length Interchangeable Taper-ground Joints

Standard Size Number   Approximate Diameter   Approximate Length     Computed Diameter at Large End of
(T Designation)        at Small End (mm.)     of Ground Zone (mm.)   Ground Zone, Gaging Point (mm.)
   7/25                       5                      25                      7.5
  10/30                       7                      30                     10.0
  12/30                       9.5                    30                     12.5
  14/35                      11                      35                     14.5
  19/38                      15                      38                     18.8
  24/40                      20                      40                     24.0
  29/42                      25                      42                     29.2
  34/45                      30                      45                     34.5
  40/50                      35                      50                     40.0
  45/50                      40                      50                     45.0
  50/50                      45                      50                     50.0
  55/50                      50                      50                     55.0
  60/50                      55                      50                     60.0
  71/60                      65                      60                     71.0
 103/60                      97                      60                    103.0

Prepared by Kontes Glass Company from National Bureau of Standards Circulars. Used with permission.
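The last column of Table 5-1 is "computed" because Standard Taper joints are ground on a fixed 1:10 taper: the diameter increases by 1 mm for every 10 mm of joint length, so the large-end diameter is just the small-end diameter plus one tenth of the ground-zone length. A short sketch (Python, added here purely as illustration; the function name is ours, not the book's or the standard's) reproduces the table's last column from the other two:

```python
# Standard Taper ground-glass joints use a 1:10 taper:
# diameter grows 1 mm per 10 mm of joint length.
TAPER = 1 / 10

def large_end_diameter(small_end_mm, length_mm):
    """Computed diameter (mm) at the large end of the ground zone."""
    return small_end_mm + TAPER * length_mm

# A few rows from Table 5-1: (small end, ground-zone length)
for small, length in [(5, 25), (15, 38), (25, 42), (97, 60)]:
    large = large_end_diameter(small, length)
    print(f"small end {small} mm, length {length} mm -> large end {large:.1f} mm")
```

Checking any row of the table against this rule (e.g. 19/38: 15 + 38/10 = 18.8 mm) also explains the size designation itself: the number before the slash is the large-end diameter, the number after it the joint length.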
Most glass apparatus made in the laboratory is formed from Pyrex tubing or rod. Handling this glass requires a hot flame; usually a gas-oxygen torch is used. With a little practice, almost anyone can make simple items, and most laboratory biologists eventually become quite proficient. Sheets for the construction of parts of an apparatus can be cut from ordinary window glass. For very small animal or plant chambers, or for situations where optical properties are important, windows can be made from microscope slides or from the glass of 2 by 2 in. or 3¼ by 4 in. projection slides. Sheet Pyrex with good optical qualities is available and can be fused into all-glass apparatus, usually a job for a professional glass blower.

If we re-examine the information in this chapter we find a good deal of what could be called common sense. The selection of techniques or instruments, as well as the assembly of parts, becomes easier with experience. The beginner would do well to "waste" a fair amount of time thinking and making sketches before starting to build his own apparatus.

SELECTED REFERENCES

Strong, J., Procedures in Experimental Physics. New York: Prentice-Hall, Inc., 1938. A most fascinating book, just to read. More important, it tells how to do a variety of things, from blowing glass to grinding telescope lenses.

Review of Scientific Instruments, a periodical. This journal is devoted to scientific instrumentation, and anyone contemplating a new instrument will want to check current and older issues. If you need an instrument, the chances are good that someone else has needed and built a similar instrument. You can gain from their experience.

From the Bibliography: Wilson, E. Bright, Jr., An Introduction to Scientific Research. Has an excellent chapter on Design of Apparatus.
CHAPTER 6

Selection and Preparation of Organisms

The whole concept of experimentation in biology implies that animals or plants are the subjects of these experiments. Biologists no longer experiment on "just whatever is handy." Once, when biology was young, hundreds of isolated observations, on as many kinds of organisms, all contributed to the general store of information. As biology has become a more mature science, concepts and principles have become more important. The notion of a research program suggests the existence of problems or questions that need answering. Sometimes, "whatever is handy" is exactly the right organism to provide the experimental answers to a problem. Most often, however, several kinds of plants or animals would be available. The short time spent considering the desirable and undesirable features of experimental organisms usually is time well spent.

More than once the success or failure of a program of experimental research has been determined by the choice of experimental organism. In some cases, the choice of animal or plant seems to have been a lucky or unlucky accident. In other cases, an especially fortunate choice was made by an astute, experienced scientist. Several examples of fortunate choices of organisms are given, along with one unfortunate selection that led to a notable failure, apparently entirely as a result of the whims of chance.

T. H. Morgan was one of several individuals who helped found the science of genetics. In the course of some of his first experimental attempts to learn the pattern of heredity in animals, he chose the little fruit fly, Drosophila melanogaster, from among more than a million members of the animal kingdom.
Drosophila offers some immediately obvious advantages: it is small and easy to handle, it breeds rapidly (one generation about every 14 days), it produces fairly large numbers of offspring from one mating, and it was soon found to grow readily under the artificial conditions of the laboratory. Later Drosophila was used in experiments which explained the inheritance of sex determination, a result we now know would have been much more difficult to achieve with some other organism. Drosophila also was used to show the relationships between chromosomes and heredity, partly because this small fly possesses a special set of "giant chromosomes." It is difficult to give enough emphasis to the contributions of Drosophila to advances in genetics. The collection of Classic Papers in Genetics demonstrates quite clearly the importance of this fortunate choice of organism.

Another organism enjoying a great reputation in experimental research is Chlorella (mostly Chlorella pyrenoidosa). Otto Warburg, the German biochemist who was one of the first great contributors in the field of cell physiology and enzyme chemistry, describes in one of his papers his need for an organism for experiments on photosynthesis. He very deliberately searched for a plant that would be easy to handle, would grow easily under artificial conditions, would carry on photosynthesis actively, and would be free of a number of annoying complications. After a number of preliminary experiments, he finally settled on this single-celled green alga, Chlorella. Since 1920, Chlorella has been used in a fantastic number of experiments, some quite remote from photosynthesis. Chlorella even enjoyed a period of popularity as a possible solution to the world's food problems. Even today, this simple plant is probably used in photosynthesis research, including studies of life support systems for space craft, almost as often as all other kinds of plants combined.
An example of an unfortunate choice of organisms also comes from research on photosynthesis. One of the first demonstrations that parts of living cells, in this case chloroplasts, can continue at least some of their activity after being separated from the rest of the cell was given by R. Hill about 1938. He ground up spinach, separated the chloroplasts by centrifugation, and then showed that when illuminated under the right conditions these isolated chloroplasts gave off oxygen. Since Hill's original experiment, this ability has been demonstrated in chloroplasts from a variety of plants and under a variety of conditions. Many years earlier, between about 1910 and 1918, Willstätter and Stoll had tried without success to detect some activity in broken-up cells of sunflower, the ordinary geranium, and a few other plants. We now know that even the best of modern technique for some reason fails to provide active chloroplasts from these species. If Willstätter and Stoll had happened to try spinach, the whole course of research in cell metabolism might have been changed drastically.

Now, what does this prove? Nothing, of course. A few isolated, spectacular examples need not be convincing, and indeed, they were chosen partly because they are exceptional. It might be said that none of these investigators could foresee the value of a particular organism, nor the great effect it would have on future science. This is true, but Drosophila and Chlorella were chosen for a set of very good common-sense reasons. Several other available organisms had obvious disadvantages by comparison.

The choice of experimental organisms is an important one, and a number of considerations are well worth weighing. In addition to the strict scientific aspects of choosing the organism which will best answer the question, there is no harm in remembering the feelings of the general public.
Experiments on animals are very necessary, but they may seem cruel to the layman who has had little experience or training and who does not understand experimental research. Sometimes a careful choice of experimental animals can make the layman feel better, without in any way damaging the experiment.

Occasionally a group of do-gooders becomes fanatical about the use of animals in research. Picturing the biological scientist as a cruel, inhumane monster without a conscience, they even apply pressure to legislative bodies urging the passage of laws which would in fact stifle medical research; yet these same persons are as willing as anyone else to accept modern medicine's ability to keep them alive longer. Fortunately, the fanatics are a minority. Several organizations such as some Humane Societies and the Animal Welfare League, having biologists among their memberships, have reasonable and worthy aims. Actually, modern biologists have an appreciation of animal life that the fanatics can never achieve. They never condone needless cruelty. For personal reasons, as well as for the reason that it makes experimental results more reliable, the biologist treats his animals extremely well. We can ignore the fanatics, but a studied antagonism is neither necessary nor profitable. Within the limits imposed by the research problem itself, it is often possible to select a generally acceptable animal.

The following paragraphs describe some of the criteria which must be remembered in choosing experimental organisms. The list is divided into two groups: first, those features that are absolutely essential, and, second, those features that are desirable but do not make the experiment impossible if they are absent. Most of the essential features are probably quite obvious, but even these occasionally are forgotten. Forgetting to consider these features in advance might lead one to a rather embarrassing situation.
Essential features of experimental organisms

Compatibility With the Problem: If some special activity of living organisms is to be studied, then it is important to be sure that this special activity occurs in the organism. Often the problem or hypothesis to be investigated was suggested by previous work with some organism, and it is only natural to proceed with the same animal or plant. Especially when one is beginning in a new area, however, or with a new kind of organism, it is well to perform some preliminary experiments to make sure that the process one is interested in actually occurs in the organism chosen.

Compatibility With the Techniques of Investigation: Most experimental research employs chemical methods or physical instruments. Is it possible to use the available methods or instruments upon the organism? A negative answer to this question might be obviated in either of two ways: choose a different technique, or select a different organism. Sometimes an animal or plant is too large or too small. Often special structural features of the cells or higher organizational units make a particular organism impractical. If one kind of plant has cell walls so thick that it is difficult to release the cell contents, perhaps another species should be sought. If the cells of one animal contain so much of one amino acid that it interferes with the analysis of other amino acids, try another animal.

Availability: As a rule, organisms to be used in experiments should be easily obtainable. This may involve collecting the animals or plants in the vicinity of the laboratory. If the species being investigated is a rare one, the research may be seriously hindered. Difficulties along this line are being experienced in some of the marine laboratories. Extended studies on squid nerve cells have reduced the supply of squids in the vicinity of the Marine Biological Laboratory at Woods Hole, Massachusetts.
The worm Urechis, a longtime favorite at west coast laboratories, is difficult to find where it used to be plentiful. One means of avoiding these difficulties is to select an animal or plant that is a common item in commerce. If the pronghorn antelope and the sheep both show a feature which is to be studied, then use the sheep. If either wheat or buffalo grass would be suitable, choose the wheat because seeds are more readily available. The cost of the living material is to be considered also.

In some studies it is desirable to raise the experimental organisms within the research facilities. Many laboratories maintain colonies of mice, rats, or rabbits because they can produce more uniform animals at any desired time. Many plant investigations are carried out with plants that have been raised in a greenhouse, or better, in chambers where all growth conditions can be carefully controlled.

To summarize, then, an organism for experimental purposes must be easily collected in the vicinity of the lab, must be cheap, or must be susceptible to cultivation or maintenance under artificial conditions. Lest it be thought that this consideration is very obvious, let it be known that the present author was once involved in a study using marine seaweeds in a laboratory in Minnesota, more than a thousand miles from the nearest ocean.

Durability: Another less obvious essential feature of the experimental organism is durability. Some animals or plants are better able to stand the experimental treatments than others. If a certain organism is extremely susceptible to temperature changes, and it is impossible to maintain a constant temperature, perhaps another organism might serve as well. If a long term experiment is planned, the natural life span of the organism should be considered.
Organisms occasionally confound an experiment by dying anyway, and one would be foolish to increase the likelihood of this disaster by choosing a fragile species.

Desirable features

General Knowledge of the Species: If either a well-known species or a rare species can be used, the well-known species should be chosen. In this respect, there may be some economic or political advantage in using the well-known species. One of the reasons for using sugar beets in our own laboratories is that the sugar beet is an important crop in our area. It is somewhat easier to justify the expenditure of funds on sugar beets than on dandelions. There is no benefit to be gained from the use of an exotic species because it is exotic.

Background Information: Some species of animals and plants have been used for many studies in the past, and a great volume of information has been collected. This background information may offer an advantage over a less widely studied form. The laboratory rat and mouse, the rabbit, corn (maize), Chlorella, the bacterium Escherichia coli, and several other species are all covered by a tremendous amount of literature. If there is any question about the compatibility of a certain organism with the problem under investigation, a search through this literature may provide the answer.

Genetic Background: The genetic background of the animals or plants may be important. Usually genetic homogeneity is desirable in that it decreases the natural variation among the organisms. Because the rats and mice in common laboratory use have been inbred for many generations, there is relatively little genetic variation. Organisms with little genetic variability are not as strong as "wild types," however, and, in at least one of the strains of laboratory rats, "wild rats" are introduced at planned intervals to maintain vigorous stock. If records of genetic background or pedigrees are known, so much the better.
One of the largest groups of pedigreed dogs is maintained at the University of Utah for use in studies on the effects of radiation. Since each dog's ancestors are known for several generations, it is possible to relate inheritance with susceptibility to radiation damage.

Avoidance of Unnecessary Complications: Usually biological experiments are difficult enough without the addition of unnecessary complications. If it is possible at all, it is desirable to avoid these complications. Many of the early investigations on cell metabolism used yeast cells because they are simple, complete organisms. One of the advantages of Chlorella in photosynthesis studies is that each cell is a unit in contact with the experimental environment. There is almost no direct interaction between cells. In higher plants, and even more in higher animals, the internal organization is so complex and there is such close interrelationship between cells that the interpretation of experimental results is often difficult.

Representation of General Group: It is desirable that the experimental organism be representative of a general group. Rats are more-or-less typical mammals; sunflowers are typical flowering plants. Exceptional behavior may be interesting, but it may contribute little to our understanding of general biological principles. If we are to induce the general principles from the experimental results, the ordinary is preferable to the unique. As an example, a great to-do is raised about insectivorous plants although they are relatively rare (except in pictures in biology texts). Experiments have been performed on these plants, but they contribute only to our knowledge of insectivorous plants and very little to our understanding of general botanical principles. A process that can be studied in sunflower or soybeans, however, leads to much broader generalizations.
Even in pure, fundamental, basic research, where problems are investigated only to increase knowledge, most of the problems are "human problems." Any study of an unusual or unique organism may be justified as satisfying curiosity, but the results often stand alone as details and may never be incorporated into the general advance of science.

General comments on choice of organism

These several features, then, are necessary or desirable in experimental organisms. In the selection of an animal or plant for experimentation, a period of reflection on these features may save complications later. Some of these are considered without much thought, it is true; others are commonly neglected. Once the various essential and desirable features have been considered, there may still be a choice of organisms. In this event the experimental biologist usually chooses the organism that is the most convenient.

In the selection of organisms for experiment, it is commonly possible to consider the use of only part of an organism rather than the whole organism. In the study of muscle contraction, it is better to dissect out a muscle and throw away the rest of the frog because the rest of the frog only adds to the complication. If we are interested in the photosynthesis of tomato plants, separate leaves or parts of leaves may be used to avoid the necessity of maintaining all the other activities that normally go on at the same time as photosynthesis. The same general rules that govern the selection of species apply to the selection of parts of organisms, although there is now the added assumption that the removal of the part for study has not altered the activity being studied. It usually seems, however, that the separation of parts makes relatively little difference in the processes being measured. Common sense, good judgment, and experience can be used to advantage in deciding to use a part of an organism instead of the whole organism.
Preparation of organisms for experiment

This section includes a number of ideas important to the preparation of biological materials for experiments. The list is intended as a set of examples and certainly cannot describe in detail the methods that are in use. Often the investigator's ingenuity is required to prepare his own organism for his own experiment.

One very important consideration in the handling of experimental organisms is the treatment before the experiment. Failure to consider the previous treatment can lead to disastrous results. A favorite examination question of some professors describes two sets of organisms that show quite different behavior in apparently identical circumstances. The desired answer or explanation is that the previous history has influenced the responses. In many laboratories, plants are raised in special insulated chambers where temperature is controlled, humidity may be controlled, illumination is provided by a color-balanced set of fluorescent tubes, lights are turned off and on by a clock, and the plants are watered with a mineral solution of known composition. Plants raised under these conditions show much less natural variation than those raised in a greenhouse or out of doors.

Animals deserve and require elaborate handling. Certain minimum standards must be met in providing comfortable quarters, adequate sanitation, food and water, and cages that are large enough in order to maintain the animals in good health. Certainly, you would expect that one of the assumptions in animal experiments is that the animals are in good health. Diseased animals necessarily produce abnormal results. Several useful manuals and pamphlets on the general care of animals are included in the references at the end of this chapter.

Preparation of Plant Parts: The preparation of plant parts often is easier than comparable preparations of animal parts.
Many of the processes of plants proceed in each cell, and the degree of coordination is lower than in animals. In dissecting out a muscle we must worry about the coordination with the circulatory system and the nervous system. If a leaf is removed from a plant, however, there may be relatively little effect on either the plant or the leaf. A number of investigations have shown some differences, usually stimulations, in the respiration of isolated plant parts, but these are differences in degree rather than in quality. Young leaves and old leaves are never identical, but it is not difficult to select leaves of about the same morphological age.

Leaves or pieces of leaves are well adapted to measurements of metabolic activities. If we wish to express the rate of oxygen production per milligram of green tissue, we merely weigh the pieces of leaf tissue. The shape need not concern us. The measurements sometimes are more meaningful when related to leaf area. If we must know the area, some shapes are more desirable than others. A cork borer easily punches out uniform discs of leaf tissue. By carefully selecting a cork borer of 1.12 cm diameter, we get discs with an area of 1 cm². Determination of the total area of leaf tissue thus requires no calculation, only counting.

Sections of other tissues can be cut in the same way. Storage roots or tubers are often quite uniform in cellular composition and have been used to advantage in many experiments. A cork borer conveniently punches out cylindrical plugs of such tissues. The plugs may then be sliced to any desired thickness so that the volume of tissue is calculated easily. Alternatively the cork borer will cut through a stack of tissue slices to yield the same result. No extra effort is required, and the number of calculations is reduced. This, in turn, decreases the chance of computational errors.
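The 1.12-cm figure is ordinary circle arithmetic: a disc of diameter d has area πd²/4, and solving πd²/4 = 1 cm² gives d ≈ 1.128 cm, which the 1.12-cm borer approximates to within about 1.5 per cent. The same reasoning gives the plug volumes. A brief sketch (Python, added as illustration; the function names are ours):

```python
import math

def disc_area_cm2(diameter_cm):
    """Area of a circular leaf disc: pi * d^2 / 4."""
    return math.pi * diameter_cm ** 2 / 4

def plug_volume_cm3(diameter_cm, thickness_cm):
    """Volume of a cylindrical tissue plug: disc area times thickness."""
    return disc_area_cm2(diameter_cm) * thickness_cm

# A 1.12-cm borer gives discs of very nearly 1 cm^2, so the total
# leaf area in square centimeters is simply the number of discs.
print(f"disc area: {disc_area_cm2(1.12):.3f} cm^2")
print(f"2-mm slice of the same plug: {plug_volume_cm3(1.12, 0.2):.3f} cm^3")
```

The same convenience carries over to the sliced plugs: with a borer chosen this way, each 1-mm slice contributes very nearly 0.1 cm³, and counting again replaces calculation.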
If the process under investigation demands it, whole leaves, stems, roots, flowers, or fruits may be used. Stems, leaves, or roots of germinating seeds are useful because at this stage the plant is more actively performing a variety of functions than at any other time. Often there is a choice of several kinds of plants. If one species will provide the desired organs more conveniently than another species without any attendant disadvantage, then by all means select the convenient species.

Preparation of Animal Tissues: Usually more care must be used in the preparation of animal tissues because of the more highly coordinated organization of the animal. Changes in the acidity, in the concentration of oxygen or other gases, or in the osmotic balance in the vicinity of a tissue may alter its behavior markedly. With proper attention to such details, however, many different animal tissues will continue to function for some time after being removed from the animal.

Before any such dissections are attempted, it is imperative to immobilize the animal somehow. Rats, rabbits, and other mammals can be anesthetized with barbiturates or similar drugs. Frogs are usually immobilized by destroying the brain and spinal cord. The manner in which animals are handled is important because they can become emotionally excited.

Of all the various animal tissues, perhaps muscle has been used as commonly as any other for physiological experiments. Pigeon breast muscle was exceedingly important in unraveling the sequence of reactions in cellular respiration. Much of what we know of movement and reflex action was learned from experiments on isolated muscles. Liver has also been used very extensively, partly because this easily accessible organ provides a large mass of fairly uniform tissue, and partly because this organ is extremely active metabolically.
Nerve tissue, such as rat brain, may be desirable because its metabolic rates are very high. Under proper conditions, almost any animal tissue will continue to perform its functions for a while after removal from the animal. Since osmotic relationships are extremely important, the tissues are often bathed in a solution of various salts having approximately the same composition as the liquid which normally bathes them.

Microorganisms: An experiment can be made more convenient by eliminating other activities that proceed simultaneously, a purpose achieved by the use of a part of an animal or plant. The limit in simplification is reached when single-celled organisms, or microorganisms, are used. Here there can be no interaction between cells as in the higher organisms.

Algae, fungi, bacteria, and protozoa of various kinds may be very desirable experimental organisms. The contribution of Chlorella to photosynthesis was mentioned earlier. These single green cells grow rapidly in artificial nutrient solutions. Among the fungi, ordinary baker's yeast is undoubtedly the most convenient. Any grocery store sells yeast cakes, which are pure cultures of Saccharomyces cerevisiae. In any of the larger cities, the same yeast is available by the pound. If the yeast cake contains more starch and other substances than is desirable, the cells are washed by suspending them in water or a buffer solution and then centrifuging at low speed. The cells will settle, leaving most of the starch and the soluble materials in the liquid.

Amoeba, Paramecium, and several other single-celled animals have been used. One which has become increasingly popular in the past several years is Tetrahymena pyriformis. It is a tough little animal that grows well under artificial conditions and offers some decided advantages in experiments on metabolism.

Microorganisms are raised in the laboratory in "pure culture"; that is, the culture contains the one desired organism and no other species.
A pure culture of Chlorella contains the minute green spheres characteristic of this species and presumably no bacteria, no protozoa, and no other species of algae. The original separation of a pure culture is a tedious task because microorganisms almost never exist singly in nature. Once a pure culture is achieved, it must be maintained in such a way that it will not become contaminated with the foreign organisms which are present everywhere.

The techniques by which microorganisms are raised and maintained are so complex that they can only be learned at first hand. Every biologist should have at least one good laboratory course in bacteriology or microbiology. Only a brief, very general description can be provided here.

Every organism must be supplied with certain chemical materials in order to grow and multiply. Microorganisms commonly are grown on artificially prepared media which provide a source of carbon, water, nitrogen, several mineral salts, oxygen or other gases, and vitamins or any other required compounds. Sometimes this medium is a liquid; other times it is semi-solid because of the addition of gelatin or agar. If the microorganism is autotrophic and can make its own carbon compounds from carbon dioxide, it must be provided with light or another energy source. The medium is sterilized in an autoclave by heating under pressure to a temperature well above the boiling point at atmospheric pressure; some unwanted bacteria are able to withstand any less drastic sterilization procedure. After the solution has cooled in its carefully closed container, a small amount of the desired microorganism is introduced, or "inoculated." With reasonable skill and some luck, only cells of the desired type enter the nutrient medium, and in a few days they will have multiplied to produce enough cells for an experiment. Sterile handling and aseptic techniques require constant practice.
More than one experiment has been ruined or delayed by the presence of foreign organisms in a supposedly pure culture. This type of disaster becomes even more dangerous if the investigator is not aware of the invasion by the other species. A routine microscopic examination is usually worth the few minutes that it takes.

Occasionally experiments are performed with "enrichment cultures" of microorganisms or, as they are sometimes called, "almost pure cultures." Any such experiment is subject to some question, especially if the proportion of the various species is changed by growth during several transfers to new media, because the measured responses might be produced by the other organisms present.

I cannot urge strongly enough that every biologist learn the sterile techniques required in bacteriology. Of course they apply directly to microorganisms, but the general principles apply to the handling of the larger plants and animals as well.

Tissue Culture: Some kinds of cells or tissues can, if properly handled, continue to grow after being removed from the higher plant or animal. A solution or other growth medium is prepared which contains all the chemical compounds the cells normally require. Most of these solutions are very complex mixtures, often containing extracts of poorly known composition from living or dead cells. A representative solution for the growth of isolated plant roots would supply sugar, mineral salts, vitamins which normally are made in the leaves, and perhaps other compounds. The cultivation of animal tissues requires carefully prepared media, attention to supplies of oxygen, and controlled temperature. Most cells in higher animals have become so highly specialized that they have lost the power to divide and therefore are unsuitable for tissue culture. Embryonic cells or cells from malignant tumors are the most commonly used animal materials.
Certain advantages are offered by animal cells raised in an artificial medium. The cells usually do not become specialized, but remain "young" and metabolically active. Tissue culture cells can provide a convenient medium in which to raise viruses, which can be cultivated only in living cells. The exact causes of the differentiation and specialization of cells in animals are not understood; tissue culture cells, therefore, are a fertile field for investigation in their own right.

Plants normally grow at the tips of roots and stems as long as they are alive. These growing tips can be cut off and will continue to grow in the proper medium. Stem tips will produce new shoots and roots and become new plants, but roots continue to grow as roots almost indefinitely. P. R. White maintained a culture of tomato roots for several years, using them in a variety of experiments.

Plant or animal tissue culture is not easy and requires the same aseptic handling as the culture of microorganisms. Mutation is always possible in either microorganisms or tissue culture. Occasionally an experiment calls for special living materials, however, and tissue culture may be worth the effort.

Preparation of parts of cells

In present-day research, many of the important problems are those involving the processes associated with all cells. More and more experiments have been performed on isolated parts of cells. Until relatively recently, it was believed that cells died immediately, completely, and irreversibly upon being broken. It is now possible to isolate almost any of the parts of cells and keep them functional at least for a short time. Nuclei have been removed from Amoeba cells, leaving only the cytoplasm. The cytoplasm alone cannot grow or divide, but many of the vital activities continue. Nuclei and even chromosomes have been separated in fairly large quantities from a variety of plant and animal tissues.
Chloroplasts separated from green plant cells carry on an amazingly complex chemistry. Mitochondria have been isolated and continue to metabolize. Even the ribosomes, which are below the limits of visibility of the light microscope, continue to perform chemical reactions if properly treated. Several hundred enzymes have been purified from biological materials, and many of these purified enzymes can be handled like any other chemical materials.

A set of routine steps has evolved for the separation of almost any of these subcellular particulates (particles of cellular material below the cellular level of organization). The procedure used for the preparation of one particle obviously will not work for the preparation of another. Even the steps involved in the preparation of one entity, such as mitochondria, differ from one laboratory to another. As you might expect, particulates prepared by one method do not show exactly the same kinds of behavior as particulates prepared by another method. If experiments are to be repeated, the preparation of the parts of cells must follow the same routine as nearly as possible.

Nevertheless, all of the various procedures involve many of the same steps, including grinding and differential centrifugation. The differences between methods are differences in the composition of solutions, in the time and speed of centrifugation, and in the order in which operations are performed. It seems reasonable to present a general discussion of the various steps, followed by one specific technique as an example. References at the end of the chapter should make it possible to find techniques for preparing other subcellular particulates.

Temperature: Most of the preparative steps are conducted at low temperatures. The parts of cells are unstable in any case, breaking down very rapidly at room temperature.
The most convenient means of maintaining a temperature of 0 to 4° C is to work in a walk-in refrigerator. Most laboratories conducting cellular physiology and biochemistry experiments now have at least one such cold room.

Grinding: The tissue is usually broken up first by some mechanical method. A mortar and pestle offers a gentle means of grinding. The tissue is ground together with some water, buffer solution, or a complex solution of sugars, salts, etc. Sometimes sharp sand is added to speed the grinding. Another grinding device, almost as gentle, is the homogenizer, constructed of a hollow glass cylinder with a closely fitted cylindrical pestle. Several models are on the market. Larger quantities of material are chopped finely in the Waring Blendor or its equivalent. The high-speed blades can macerate almost any plant or animal material (including fingers) in a short time.

High-frequency sound waves, often called "supersonic" or "ultrasonic" vibrations, can be applied to some cells with reasonable success. An oscillator serves as a source of sound waves, which are transmitted through oil to the container in which the biological material is placed.

Separation of Particulates: Most of the grinding procedures result in a thin paste or fluid suspension containing a mixture of cell parts and other materials. Most such suspensions contain unbroken cells, cell walls, pieces of connective tissue, and other odds and ends. Straining through a coarse filter material removes much of this unwanted matter. Filter materials commonly used are muslin, pads of glass wool, cheesecloth, and facial tissue supported by cheesecloth.

Further separation of the particulates depends upon centrifugation. Spinning at a low speed will cause settling of particles more dense than those desired, leaving the desired particles in suspension. The liquid is poured off and centrifuged at a higher speed.
During this centrifugation the desired particles settle out, leaving any less dense particles suspended in the liquid. The sediment is resuspended in a clean solution, ready for use; or it can be "washed" by recentrifuging and then finally resuspended in the desired solution.

Preservation of Isolated Particulates: The procedures involved in the preparation of subcellular particles are usually quite time consuming. It is often impractical to prepare a new batch of particles for each experiment, and yet, under ordinary conditions, the particles may not survive from one day to the next. Fortunately, many of these materials can be preserved by freezing, so large amounts may be prepared at one time. Chloroplasts frozen and stored at −40° C have maintained their activity until the supply was exhausted, a period of many months.

The freeze-drying technique, sometimes called "lyophilization," freezes and dries at the same time. The material to be treated is placed in a container that can be evacuated. One wall of the chamber is in contact with solid carbon dioxide at about −60° C. The reduced pressure causes the evaporation of water from the biological material at the expense of heat from the material. This water is then trapped by freezing against the cold wall of the container. The resulting dry material can be stored for long periods and then reconstituted by adding water. Some materials preserved in this way retain most of their original activity.

Assay Method: In most isolation procedures it is necessary to make sure that the particles retain their chemical abilities by making measurements of activity at various stages of the preparation. If it is suspected that the particles will catalyze a certain chemical reaction, the assay consists of measuring the rate of that reaction.
After a routine procedure has been developed, the assay can be carried out at less frequent intervals or delayed until the end of the preparative steps.

Preparation of chloroplast fragments: an example

The following example is only one of many methods of preparing chloroplasts or chloroplast fragments. These chloroplasts are prepared by Dr. J. D. Spikes and his colleagues at the University of Utah principally for studying the light-absorbing phase of photosynthesis. Much of the other activity normally carried on in chloroplasts does not occur in these fragments. Although this inactivity would be unfortunate in some circumstances, it is advantageous here because it simplifies the experimental setup. Other chloroplast preparations can perform other activities, but the preparative steps, of course, are slightly different. If the chloroplasts are prepared according to one set of prescribed steps, they can carry on phosphorylation reactions in the light and may even fix some carbon dioxide.

In Dr. Spikes' method, sugar beet leaves are the usual source of chloroplasts. The sugar beet plants are raised in a special chamber under controlled conditions of light and temperature. The leaves are harvested and washed in cold water. In the cold room, approximately 50 g of washed leaves are placed in a Waring Blendor with 60 ml of a solution 0.5 M in sucrose and 0.01 M in KCl. The mixture is blended for two minutes, resulting in a thin paste which is filtered through four layers of cheesecloth. The green liquid is centrifuged for five minutes at 1000 × g. The supernatant fluid, consisting of a mixture of chloroplast fragments and other cytoplasmic materials, is separated from the sediment of whole cells, nuclei, and cell walls. The fluid is then centrifuged in a Spinco Model L Preparative Ultracentrifuge for fifteen minutes at 144,000 × g.
The supernatant fluid is discarded, and the sedimented chloroplast fragments are resuspended in the sucrose-KCl solution and centrifuged again. Several such washing steps are carried out, each of which removes much of any remaining cytoplasmic material other than chloroplast fragments. After the final centrifugation the chloroplast fragments are suspended in sucrose-KCl solution.

Chlorophyll in the chloroplast fragments is determined spectrophotometrically by comparison with a standard curve. The suspensions are diluted with sucrose-KCl solution until they contain approximately 500 mg of chlorophyll per liter. The final dark green suspension is stored in small test tubes in the freezer at −36° C. At this storage temperature the chloroplast fragments retain their photochemical activity for months. The small tubes are removed from the freezer and thawed for use in experiments as needed.

This, then, is one method. Others will be found in the following references.

SELECTED REFERENCES

Anonymous, Basic Care of Experimental Animals, rev. ed. Prepared by the Animal Welfare Institute, 22 East 17th Street, New York 3, New York, 1958. Useful information on housing, feeding, handling, etc., with special emphasis on humane treatment of the animals. Included are eight mammal species, as well as poultry, amphibians, turtles, and fish.

Arnon, D. I., "Cell-free photosynthesis and the energy conversion process," in A Symposium on Light and Life, William D. McElroy and Bentley Glass, eds. Baltimore: Johns Hopkins Press, 1961, pp. 489-569. A review which discusses (among other things) the problems in the use of isolated chloroplasts. Provides references to most of the methods of preparing chloroplasts.

Bonner, James, "Protein synthesis and the control of plant processes," American Journal of Botany 46:58-62, 1959.
Bonner comments on the use of isolated parts of cells, and discusses specifically the preparation and use of microsomes (now called ribosomes) from plants.

Brachet, Jean, and Alfred E. Mirsky, eds., The Cell: Biochemistry, Physiology, Morphology, vols. I-V. New York: Academic Press, 1959-61. A comprehensive, authoritative reference work: vol. I (1959), Methods and problems in biology; vol. II (1961), Cells and their component parts; vol. III (1961), Meiosis and mitosis; vols. IV and V (1960-61), Specialized cells.

Cass, Jules S., Irene R. Campbell, and Lilli Lange, A Guide to Production, Care and Use of Laboratory Animals: An Annotated Bibliography. Federation Proceedings 19(4), Part III, Supplement No. 6, 1960. A list of references to the literature, with abstracts of the papers. Information on normal animals, disease, nutrition, breeding programs, and most of the other topics that might concern the experimentalist.

Hayashi, Teru, ed., Subcellular Particles. New York: The Ronald Press Company, 1959. Twenty contributors discuss the relationships of structure and function, as demonstrated by modern techniques.

Ingle, Dwight J., Principles of Research in Biology and Medicine. Philadelphia: J. B. Lippincott Company, 1958. An interesting discussion of many aspects of biological research, with most of the examples taken from medical physiology. Scattered throughout are notes about the importance of the care and handling of experimental organisms.

Mercer, Frank, "The submicroscopic structure of the cell," Annual Review of Plant Physiology 11:1-24, 1960. An especially pertinent review on the preparation, characterization, experimental use, and interpretation of parts of cells.

Peters, James A., ed., Classic Papers in Genetics. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1959. As the title suggests, a collection of papers selected from the original genetics literature.
Many fine examples of reasoning from experimental data, and several instances of work with Drosophila.

Umbreit, W. W., R. H. Burris, and J. F. Stauffer, Manometric Techniques, 3rd ed. Minneapolis: Burgess Publishing Co., 1957. Includes an excellent chapter on the preparation of subcellular particulates.

White, Philip R., The Cultivation of Animal and Plant Cells. New York: The Ronald Press Company, 1954.

Worden, A. N., and W. Lane-Petter, eds., The U. F. A. W. Handbook on the Care and Management of Laboratory Animals, 2nd ed. London: Universities Federation for Animal Welfare, 1957. 951 pp. Published by this British society; the single most complete handbook on the subject.

CHAPTER 7

Centrifuges

A centrifuge is an instrument designed to separate materials of different density from each other by means of centrifugal force. Since centrifugal force is similar in its effects to gravity, most things that can be separated in a centrifuge would eventually settle because of gravity, but a very long time might be required. The centrifuge allows us to hasten this settling by applying a larger force.

In laboratory centrifugation, at least one of the components to be separated is a liquid. The other might be solid particles, another liquid, or, rarely, even bubbles of gas. The separation of parts of cells, as described in the previous chapter, is one common use of the centrifuge. So many other kinds of mixtures must be separated in the laboratory routine that a centrifuge is used almost daily. In addition to its use in preparing materials, the centrifuge is a valuable analytical tool.

The usual centrifuge consists of a rotor or head, driven by a motor. Several types of rotors are available. Some hold only a few very small tubes or vials of material, while others hold bottles with a total capacity of a liter or more. As the rotor turns, the liquid and its suspended material are subjected to the centrifugal force.
The various rotors in ordinary use fall into two general classes: those in which tubes of liquid are held firmly at some fixed angle (such as 35°), and those in which tubes or bottles are placed in metal buckets which swing out to a horizontal position as the rotor turns. Each type has certain advantages.

Centrifugal force

Any rotating body is subject to a constant acceleration inward, toward the center of the circle. A weight whirled on a string must be pulled inward constantly to prevent the weight from taking its natural course, that is, flying off at a tangent. This inward force, which accelerates the mass toward the center of the circle, is centripetal force. Equal and opposite to it is the outward centrifugal force. The centripetal force happens to be easier to calculate.

The magnitude of the force depends upon the speed of rotation and upon the radius of the circle. If a wheel is turning with an angular velocity of ω radians per second (a radian is the angle subtended by a portion of the circumference of a circle equal in length to the radius R), the velocity (v) of a point on the surface is v = ωR. The speed (in units of length per unit of time) does not change, but because the point on the surface of the circle is constantly changing direction, the point is subjected to an acceleration a = ω²R. The centripetal force Fc is the mass (m) times the acceleration, or Fc = mω²R. The angular velocity (ω) can be converted to revolutions per second because 2π radians is one full circle. The centripetal force then becomes

Fc = m(2πN)²R = 4π²mN²R

where N is revolutions per second. Centrifugal force is equal in magnitude.

The amount of force relative to gravity is a more useful figure than this absolute Fc. Usually a relative centrifugal force (RCF) is calculated by dividing by the force of gravity. The relative centrifugal force is expressed as "so many times g" or "so many g's."
The force of gravity is mass times the acceleration of gravity (980 cm/sec²), so

RCF = Fc/mg = 4π²N²R/g

If we know R and can measure N, we can calculate the g's. Any new centrifuge must be calibrated if we are to describe its performance adequately. We measure N with a tachometer or stroboscope. These instruments usually give revolutions per minute, so N = rpm/60. Figure 7-1 shows the results of the calibration of a centrifuge.

Fig. 7-1. Typical calibration curve for a laboratory centrifuge (RCF at the bottom of the tube versus dial setting; swinging bucket head of a clinical centrifuge carrying 15 ml polyethylene tubes of water); student data.

Angle Heads: The heads or rotors in which the tubes are held at a fixed angle develop a higher apparent centrifugal force than the swinging bucket rotors. As shown in Fig. 7-2, particles moving downward (away from the center) must move against the viscosity of the liquid in which they are suspended. If the tube is inclined, the distance the particles must move against this counterforce is only the distance across the tube instead of the full length. The particles in the angle head move across the tube and then slide down the wall against less resistance. The effect on the settling of particles is the same as increasing the relative centrifugal force. The initial settling is faster and more thorough. The final result of a long centrifugation, of course, must be the same as in the horizontal tube. The chief advantage is the reduction in time required for adequate separation. The apparent RCF can be calculated from a complex equation, but usually it is easier to take the manufacturer's word for it.

Sedimentation

The rate at which particles will settle or sediment in a centrifugal field can be calculated. The information thus gained can be extremely useful in two different ways.
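The RCF formula is easy to apply directly. The following Python sketch uses cgs units as in the text; the rotor radius and speed are hypothetical values chosen only for illustration.

```python
import math

G = 980.0  # acceleration of gravity in cm/sec^2, as given in the text

def rcf(rpm, radius_cm):
    """Relative centrifugal force: RCF = 4 * pi^2 * N^2 * R / g,
    where N is revolutions per second and R is the rotation radius in cm."""
    n = rpm / 60.0  # N = rpm/60, as measured with a tachometer or stroboscope
    return 4.0 * math.pi ** 2 * n ** 2 * radius_cm / G

# Hypothetical calibration point: 3000 rpm at a 10 cm rotation radius
print(round(rcf(3000, 10)))  # roughly 1000 x g
```

A calibration curve like Fig. 7-1 amounts to evaluating this function at each dial setting once the corresponding rpm has been measured.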
The length of time required to separate a particle of a given size from its suspending liquid can be computed, perhaps saving a good deal of trial-and-error work. In other cases, measurement of the rate of sedimentation provides information about the size, shape, density, or molecular weight of the particles. The molecular weights of many proteins, for example, have been determined in the analytical centrifuge.

In general, the rate at which a particle settles depends upon the relative centrifugal force, the size and shape of the particle, the viscosity of the liquid in which it is suspended, and the difference between the density of the particle and the density of the liquid. Let us call the rate of sedimentation s cm/sec, and assume a spherical particle of radius r and density δ, suspended in a liquid of density ρ and viscosity η. Then

s = π²N²Rr²(δ − ρ) / (1.125η)

where N is revolutions per second and R is the radius of rotation as before. This is a rather complex interrelationship of factors, but if we know all but one, we can calculate the last. In addition, if we measure all the factors, s, r, δ, etc., we can check on the validity of the assumption that the particles are spherical. If particles are rods instead of spheres, their sedimentation behavior will not obey this equation. Even the viscosity of protoplasm has been determined by measuring the rate at which particles sediment through it.

Fig. 7-2. Sedimentation in the angle head. Particles need not move the full length of the tube against the viscosity of the medium. (Courtesy Ivan Sorvall, Inc.)

Types of centrifuges

Such a variety of centrifuges is available that it is impossible to describe them all. Probably the easiest way to gain information about the types and sizes is to read catalogue descriptions and advertisements.
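The sedimentation equation can likewise be evaluated numerically. This Python sketch encodes the equation as given above; the particle and rotor values are hypothetical, chosen merely to illustrate the cgs units involved.

```python
import math

def sedimentation_rate(rpm, R_cm, r_cm, delta, rho, eta):
    """Settling rate s (cm/sec) of a spherical particle:
    s = pi^2 * N^2 * R * r^2 * (delta - rho) / (1.125 * eta),
    with N in rev/sec, R and r in cm, densities in g/cm^3, eta in poise."""
    n = rpm / 60.0  # revolutions per second
    return math.pi ** 2 * n ** 2 * R_cm * r_cm ** 2 * (delta - rho) / (1.125 * eta)

# Hypothetical particle: radius 0.5 micron (5e-5 cm), density 1.10 g/cm^3,
# suspended in water (rho = 1.00 g/cm^3, eta = 0.01 poise),
# spun at 20,000 rpm at a rotation radius of 10 cm.
s = sedimentation_rate(20000, 10, 5e-5, 1.10, 1.00, 0.01)
print(f"{s:.3f} cm/sec")  # a fraction of a cm/sec; larger or denser particles settle faster
```

Note how strongly the rate depends on particle radius (as r²) and on speed (as N²); this is what makes differential centrifugation at successively higher speeds work.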
Any attempt to classify the various types is necessarily arbitrary, but for the sake of discussion they can be divided into clinical centrifuges, "high-speed" centrifuges, and preparative and analytical ultracentrifuges. In common usage, any centrifuge which turns faster than about 20,000 rpm is now called an ultracentrifuge.

The clinical centrifuge is a small, portable model, easily used on a bench top. The capacity is usually not more than about 200 ml, and relative centrifugal forces up to about 2000 × g can be attained. A variety of swinging bucket and conical or angle heads can be used. Several special rotors are also available, such as a horizontal rotor for spinning the small tubes used in blood analysis.

The larger "high-speed" centrifuges exist in the greatest variety. Most modern units rotate at speeds up to about 20,000 rpm, developing centrifugal fields up to about 50,000 × g. As air friction generates considerable heat, most of these centrifuges are refrigerated. Because friction on the bearings is reduced to a minimum, the rotor would spin freely for some time after the motor is shut off if it were not for the magnetic brakes. The Servall KSA system provides for continuous flow of liquid through the instrument while the rotor is turning at high speed. Sediment is collected in centrifuge tubes, and the clarified liquid flows into a collecting ring and then into a separate container.

The Spinco Preparative Ultracentrifuge develops centrifugal fields up to about 150,000 × g. Refrigeration is provided. Air friction becomes so important at speeds up to 50,000 rpm that the rotor of this instrument turns in an evacuated chamber. The liquid to be centrifuged is placed in sealed tubes inside a sealed rotor. Several different rotors are available, one of which has a capacity of about 600 ml, so fairly large amounts of material can be handled.
Analytical ultracentrifuges usually spin very small samples of material, primarily for the purpose of determining sedimentation rates. The rotor must turn at very high speeds; thus compressed-air bearings, magnetic bearings, or other novel bearings are used. Sedimentation in the sample tube can be viewed or photographed through a special illumination and optical system. Flashes of light are synchronized with the rotation, so in effect the rotation is stopped. The rotor, of course, operates in a vacuum under refrigeration. Some commercial models are available, but some of the analytical ultracentrifuges are built in the laboratories where they are used.

Research on centrifugal principles still continues. Pea-sized rotors have been subjected to centrifugal forces exceeding five million × g, and much smaller rotors have been turned at 1.5 million revolutions per second, providing a force of over a billion × g. Under these conditions the rotor tears itself apart and explodes. An interesting illustrated article by Beams describes some of this work.¹

¹ Jesse W. Beams, "Ultrahigh-speed rotation," Scientific American 204:4 (April, 1961), pp. 134-147.

CHAPTER 8

Microscopy

If any instrument should be given credit for contributing more than any other to the development of biology as a science, that credit should go to the compound microscope. The history of detailed observation of living things parallels the development of magnification. Every improvement in the art of combining lenses made it possible to see smaller structural units. First the cells and later the minute structures inside cells were discovered. It is now possible to photograph biological structures even as small as single virus particles.

Although the microscope has contributed to biology, biology has contributed to microscopy as well. The need for better magnifying systems with which to examine parts of organisms has been the most important stimulus for the opticians.
The microscope was invented in the seventeenth century, and it did not take the original microscopists long to realize the importance of a number of factors, such as the quality of the lenses, the way they were combined, and the type of illumination. By the end of the nineteenth century, the microscope had become a precision instrument. The theoretical and practical ability of the great German scientists of the time produced microscopes almost as good as any available today. The formulation of the cell theory and the accumulation of a wealth of information on the cellular structure of plants and animals occurred at this time.

The microscope permitted the biologist to see a great variety of cells, combinations of cells, and parts of cells. Each living thing is slightly different from any other living thing; some are composed of more-or-less standard parts, like mass-produced machinery, whereas others contain very special individual components. One of the tasks of those biologists who were able for the first time to examine cells in detail was to describe what they saw. Descriptions require words, mostly nouns and adjectives. Because of the great diversity of living things, a tremendous vocabulary of strictly biological terms arose. Any beginning biologist at that time faced the initial barrier of learning this vocabulary. To a certain extent this is still true, but unfortunately, there are some who feel that this vocabulary is biology and consider the terminology more important than the concepts the words were invented to describe.

At any rate, the importance of the microscope in the development of the science of biology cannot be minimized. The convenient models available now are used almost daily in every laboratory. The materials used by the experimental biologist are made up of cells which are too small to be seen by the unaided eye.
The investigator interested in muscle contraction must consider changes in the appearance of muscle cells. Microorganisms are favorable experimental organisms, but they must be examined routinely. Because living cells are below the limit of visibility, the compound microscope will always be one of the most useful tools to either the descriptive or the experimental biologist.

The compound microscope

The compound microscope is so called because it contains two sets of lenses: an objective lens, which produces an image, and an eyepiece or ocular, which further magnifies the image. The magnification available is the product of the magnifications produced by the separate lens units. The objective and ocular systems of the compound microscope are elaborate combinations of individual components fitted together according to a formula which is intended to provide the best possible view of the object being examined.

Microscopes differ in details of construction, partly because of competition among producers, but all are constructed according to a general pattern. Figure 8-1 shows a typical microscope, from which the major parts can be identified. The two lens systems, the objective (A) and the ocular (B), are at opposite ends of a body tube (C). The object to be examined is placed on a glass slide on the stage (D) just below the objective lens. The object is illuminated from below through a hole in the stage, and the light may be focused by a substage condenser (E) and regulated by a diaphragm or other devices. The microscope is mounted on a strong supporting arm (F), which also serves as a handle. The body tube is moved upward or downward by a coarse (G) and a fine (H) focal adjustment in order to bring the object to focus at the eye point. Most microscopes have two, three, or four interchangeable objectives, all mounted on a revolving nosepiece (J). The slide and object may be moved about by a mechanical stage (K).

Fig. 8-1.
A typical compound microscope. For explanation of labels, see text. (Courtesy Bausch and Lomb Optical Company.)

To the conservative old timer, some of the newer models hardly look like microscopes. Figure 8-2 shows the Leitz Labolux IIIa. The body tube contains a set of prisms arranged to form the image at the eyepiece, as in the conventional microscope. The prism system, however, permits the tilting of the eyepiece for greater comfort and even allows the upper part of the body tube to be rotated. The eyepiece can point backward, more nearly like the conventional microscope, or forward as shown. In many of the new models, the prism system also divides the light between two ocular lenses so that both eyes can be used in viewing. Focusing is accomplished by raising and lowering the stage rather than the body tube. This arrangement is advantageous because the eye and head need not move upward and downward, but it may make the control of illumination from below slightly more difficult. Several of the newer models even incorporate a third ocular tube to which a camera is attached. The object can thus be located and examined and then photographed very conveniently. The object might be illuminated with daylight from a window by means of a substage mirror, but nowadays artificial light is used more commonly. Light from a lamp can be directed toward the mirror, or a small lamp can be placed beneath the condenser in place of the mirror.

Figure 8-3 is a diagram of image formation in a compound microscope. The objective lens produces an image of the object at the level of the eyepiece. Further magnification occurs when the eyepiece, with the help of the lens of the eye itself, focuses the image on the retina. The eye interprets what it sees as if the ocular lens produced an enlarged image, or virtual image, at a distance of about 250 mm from the eye.
Probably the ability to imagine the image at that distance requires some practice initially, but it comes quite naturally to anyone with a little experience.

Fig. 8-2. A recent model, high-grade microscope. (Courtesy E. Leitz, Inc.)

Optical theory

All optical instruments, including the eye, depend upon light. Light moves as vibratory energy in straight lines outward from a source. In any one medium it moves with constant speed, but the speed is different in a different material. Any point on a moving beam can be pictured as a new source of light from which new beams spread. The vibration occurs in all the planes perpendicular to the direction of movement. These statements will be elaborated in the following sections.

Fig. 8-3. Image formation in the compound microscope.

Visible light is but a small part of a much broader spectrum of electromagnetic energy. We commonly think of this electromagnetic energy as vibratory in nature, and its movement can be pictured as a wave (as shown in Fig. 8-4). The wavelength is the distance from any point on one wave to the same point on the next wave, as indicated in the diagram. Another feature of vibratory energy is a frequency, which is the number of vibrations in a unit of time. In the diagram, curve A has a low frequency, while curve B has a higher frequency.

Fig. 8-4. Wave representation of radiant energy. In curve A, the wavelength (λ) is twice as great as in curve B; in curve B the frequency is twice as great as in curve A.

The speed of light (c) is constant (3 × 10¹⁰ cm/sec in a vacuum), and the frequency and the wavelength multiplied by each other must equal this constant. In other words, the speed of light becomes a proportionality constant, and we arrive at this equation:

λ = c/ν, or λν = c

Lambda (λ) is the symbol commonly used for wavelength in centimeters or in fractions or multiples of centimeters.
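The relation between wavelength and frequency can be put to work numerically. The short Python sketch below is illustrative only (the function names are arbitrary, and the red-light figure of 700 millimicrons is simply a convenient value near the visible limit):

```python
# Speed of light, taken as 3 x 10^10 cm/sec as in the text.
C = 3e10  # cm/sec

def frequency(wavelength_cm):
    """Frequency nu (sec^-1) from wavelength, using lambda * nu = c."""
    return C / wavelength_cm

def wavelength(nu_per_sec):
    """Wavelength lambda (cm) from frequency, using lambda = c / nu."""
    return C / nu_per_sec

# Red light near 700 millimicrons (1 millimicron = 10^-7 cm):
nu_red = frequency(700e-7)
print(nu_red)  # roughly 4.3e14 vibrations per second
```

Note that either quantity determines the other: dividing the constant c by one always yields its partner.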
Frequency is given the symbol nu (ν), and is expressed in reciprocal seconds (sec⁻¹), or what amounts to vibrations per second. Another useful way of describing the frequency is in terms of a wave number, that is, the number of vibrations in a certain unit of length.

In the electromagnetic spectrum we find a continuous variation in wavelength and frequency from one end of the spectrum to the other. At one extreme, wavelengths are very long, and frequencies are very low. At the opposite end, wavelengths are very short, and frequencies are high. The visible region of the spectrum lies near the middle of the much broader spectrum. There is nothing unique about visible light; it merely corresponds to the ability of our eyes to perceive light. The red end of the visible spectrum consists of the longer wavelengths and lower frequencies, while the blue-violet end has shorter wavelengths and higher frequencies. Immediately beyond the visible spectrum and continuous with it are the infrared and the ultraviolet. It is difficult to draw a line between the red, visible light and the infrared, but usually this line falls around a wavelength of 700 millimicrons (mμ). The average human eye can see wavelengths as short as about 400 mμ; anything below this limit lies in the ultraviolet.

Refraction: When a light beam passes from one medium into another in which it travels more rapidly or more slowly, the direction of movement is changed. The moving beam can be imagined as a progression of "wavefronts" moving outward from a distant source, each front one wavelength ahead of the next. Near the source these fronts will be spheres, but, as the distance from the source increases, any small segment of the sphere behaves as a plane. If such a beam of light, moving through air, approaches the surface of a piece of glass (through which it moves more slowly) either of two events might occur.
The beam might be reflected, with an angle of reflection equal to the angle of incidence, or the light beam might enter the glass. However, since it moves more slowly in the glass, any oblique beam will change direction slightly upon entering the glass. The angle of bending depends upon the ratio of the speeds in the two media. Figure 8-5 shows a train of wavefronts approaching an air-glass surface. The angle of reflection is equal to the angle of incidence, but the direction through the glass is different. This bending, resulting from the entry into a different medium, is called refraction. The angles θ and θ′, which the wavefronts make with the surface, depend upon the speed of the light in the two media according to the relationship

sin θ / sin θ′ = v₁/v₂ = n₂/n₁

where v₁ and v₂ are the speeds in the two media, and n₂ and n₁ are the respective refractive indexes. The refractive index is a ratio comparing the speed of light in any medium with that in a vacuum. Because the refractive index of air is only very slightly greater than unity, for practical purposes, glass and other materials are commonly referred to air as a standard.

Although it cannot be seen from Fig. 8-5, the frequency of vibration of the light beam remains constant, the speed of propagation changing because the wavelength changes. The degree of change in wavelength is dependent upon the wavelength itself. For example, shorter wavelengths are retarded more upon passing from air to glass. For this reason, the refractive index of a material is expressed relative to a certain wavelength of light, usually the sodium line at 5893 Å (589.3 mμ).

Fig. 8-5. Refraction at an air-glass interface.

When light passes from glass to air, the same kind of bending will occur, but in the opposite direction. Thus a pair of parallel surfaces causes a jog in the light path, but it is not apparent because the entering and emerging beams move in the same direction.
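The refraction relationship lends itself to a brief numerical sketch. In this hypothetical example the index of 1.5 for glass is a typical assumed value, not one given in the text, and angles are measured from the normal to the surface:

```python
import math

def refraction_angle(incident_deg, n1=1.0, n2=1.5):
    """Direction of the refracted beam, degrees from the normal,
    from the relationship sin(theta)/sin(theta') = n2/n1.
    n1 is the index of the first medium (air, about 1.0);
    n2 is the index of the second medium (glass, assumed 1.5)."""
    sin_refracted = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(sin_refracted))

# A beam entering glass at 30 degrees is bent toward the normal:
print(round(refraction_angle(30.0), 1))  # about 19.5 degrees

# Passing from glass back to air reverses the bending, as the text notes:
print(round(refraction_angle(19.47, n1=1.5, n2=1.0), 1))  # about 30.0 degrees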
A lens, however, has opposite surfaces inclined at angles to each other, so that the emerging beam may have quite a different direction from the entering beam. In a microscope, every plane in the light path where light passes from one medium to any other must be given careful consideration.

Diffraction: According to Huygens' principle, every point on an advancing wavefront acts as a new source of light. This leads to an apparent bending of light around corners, known as diffraction. If two parallel wavetrains happen to be vibrating in the same direction, their amplitudes are additive, and the two waves are said to be in phase. If the two waves vibrate in opposite directions, they cancel each other, and interference results (see Fig. 8-6).

Fig. 8-6. Reinforcement and interference. In the upper set of curves, the waves are in phase, and their amplitudes are additive (A + A = 2A). In the lower set of curves, the waves are exactly out of phase, so they cancel each other (A − A = 0).

Imagine a beam of light of one wavelength approaching a barrier with two parallel slits, as in Fig. 8-7. Each slit acts as a new source of light. Point 0 on the screen is the same distance from a as from b, so light arriving from a and from b are in phase and reinforce each other, producing a bright spot on the screen. Point 1 on the screen is one wavelength farther from a than from b, but again the two beams are in phase. Midway between 0 and 1, light from a and from b are exactly out of phase, with the result that they cancel each other, giving a dark spot on the screen. The pattern seen on the screen, then, is a series of bright lines at 0, 1, 2, etc., and also on the other side of 0, as at 2′.

Fig. 8-7. Diffraction pattern produced by two slits. For explanation see text.

If another color of light with a longer wavelength is used, similar lines appear, but the distance between 0 and 1, and between all other lines, is greater.
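The dependence of line spacing on wavelength can be illustrated with the usual small-angle formula for a double slit. This formula is standard optics rather than something derived in the text, and the slit separation and screen distance below are arbitrary assumed values:

```python
def fringe_spacing(wavelength_cm, slit_separation_cm, screen_distance_cm):
    """Distance on the screen between adjacent bright lines
    (e.g. points 0 and 1), in the small-angle approximation:
    spacing = wavelength * screen_distance / slit_separation."""
    return wavelength_cm * screen_distance_cm / slit_separation_cm

# Blue (400 millimicrons) versus red (700 millimicrons) light,
# with slits 0.01 cm apart and a screen 100 cm away:
blue = fringe_spacing(400e-7, 0.01, 100.0)
red = fringe_spacing(700e-7, 0.01, 100.0)
print(blue, red)  # the red lines are spaced farther apart, as the text states
```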
If white light, with its mixture of all wavelengths, is used, the result is a dispersed spectrum with the blue end nearer the center (point 0) and the red end more remote. Each point on the screen (1, 2, etc.) becomes a complete spectrum.

A diffraction grating consists of a large number of parallel lines drawn on a surface. As long as the lines are almost perfectly parallel and the spacing of the lines is uniform, the large number of lines produces a diffraction pattern like that resulting from a single pair of slits. The grating disperses a beam of white light into a number of complete spectra, labeled first order, second order, and so on. The preparation of diffraction gratings for use in optical instruments is an elaborate process. Lines (actually grooves) are engraved on a glass blank with an optically flat surface by means of a very precise "ruling engine." Because the master grating is extremely expensive to make, practical and useful "replicas" are molded in a plastic material, using the master grating as a pattern or model. The replica grating may transmit light or it may be silvered for use as a reflection grating. Diffraction also occurs in other materials with regions of difference in light transmission or in refractive index, but irregular shapes and spacings are much harder to analyze.

Polarization: Light ordinarily vibrates in all possible planes. Imagine light descending vertically from the sun. It vibrates not only in the north-south plane but also in the east-west plane and at every point of the compass. In certain situations, as by reflection at a certain angle or by passage through special materials, the vibration is restricted to a single plane. Such light is polarized. The ability to polarize a beam, to permit the passage of a beam of polarized light, or to rotate the plane of vibrations all depend upon the nature of a material, either the molecular structure or the arrangement of molecules in some larger unit.
In ordinary microscopy, polarization is of no consequence, but the special techniques of polarization microscopy, described in a later section, can give valuable clues to the structure of materials.

Magnification and resolution

The magnification of a microscope is stated by the manufacturer. An objective with "43×" printed on the side might be used with an eyepiece labeled "10×." The magnification by the objective lens refers to the ratio of the size of the image produced to the size of the object at a specified body tube length. If the body tube is made longer than the customary 160 mm, the magnification by the objective is increased. The magnification by the eyepiece is calculated assuming the virtual image at 10 in. or 250 mm from the eye point. The total magnification, using 160 mm tube length and virtual image at 250 mm, is the product of the two stated figures, or 430× in this example.

Magnification is essential in order that images of individual spots on the object will fall on the retina of the eye sufficiently far apart from each other to excite nonadjacent cells. If the magnification is inadequate, two separate spots will strike the same or closely adjacent retinal cells and will be interpreted as a single spot. In addition, the object to be examined may occupy a very small portion of the field of view. If the magnification is too great the object may appear much larger than the field of view. The "zoom" microscope (Fig. 8-8) offers continuously variable magnification and can be adjusted to provide the most convenience in viewing.

Fig. 8-8. Top, a pair of objective lenses, to show the angle through which a lens can gather light. Here α is the half-angle. Angular aperture = 2α. Bottom, a "zoom" microscope. (Courtesy Bausch and Lomb, Inc., Rochester, New York.)

By building a microscope with a very long body tube, any measure of magnification could be achieved.
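The arithmetic of total magnification is simply the product of the two stated powers; a minimal sketch:

```python
def total_magnification(objective_power, eyepiece_power):
    """Total magnification at the standard 160 mm tube length,
    with the virtual image taken at 250 mm from the eye."""
    return objective_power * eyepiece_power

# The example from the text: a 43x objective with a 10x eyepiece.
print(total_magnification(43, 10))  # 430
```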
Given a bright light source, a very large image of a bacterial cell could be cast upon the moon. But magnification alone is not the most important quality in microscopy. The ability to distinguish or resolve small objects is much more important. Further magnification does not necessarily improve the ability to see details; only an increase in resolving power can do this.

The limit of resolution of any optical system is defined as the ability to separate adjacent objects. Under a microscope two objects or lines can be seen as separate and distinct. As two lines are moved closer together, they eventually seem to merge into a single line. The distance between the lines just before they can no longer be distinguished from each other is the limit of resolution. This limit, also called resolving power, for the human eye is about 0.1 mm. Since this is about the maximum size of ordinary cells, the microscope becomes very necessary in biology.

Fig. 8-9. In the figures, α is the angle between the axial ray and the first diffracted ray. On the left, axial lighting is used and the lens would capture the blue, but not the red diffracted light. Oblique illumination (right) increases the effective angle through which the lens gathers light.

The resolving power of a lens system depends upon the angle through which it can gather light. Figure 8-8 also shows a pair of objective lens systems. One has a long working distance and gathers light through a small angle. The other gathers light over a wider angle and would be expected to resolve smaller objects. The angle has been called the angular aperture (A.A.), and its size can vary from a few degrees to a theoretical maximum of 180°. The refractive indexes of the lens, of the slide and cover glass, and of the substance between the cover glass and the lens all influence the resolving power, however, so that angular aperture alone is not an adequate expression of the ability to resolve.
Another expression, the numerical aperture (N.A.), more nearly indicates the resolving ability:

N.A. = n × sin ½A.A.

n being the refractive index of the least refractive material between the object and the objective lens. The greater the value of N.A., the better the resolution. An objective lens with a working distance of 16 mm, labeled 10× by the manufacturer, has a numerical aperture of about 0.25. With other objectives the N.A. values range upward to a practical limit of about 1.4. Ernst Abbe, probably the greatest of the German opticians, proposed the value N.A. and developed the theory of resolving power of microscopes.

Perhaps at low magnification the image seen in the microscope depends upon transmission and absorption of light by the object. The examination and interpretation of very small units, however, probably depends upon diffraction phenomena. Each small object is accompanied by a series of interference bands which are responsible for the contrast observed. If the interference bands of two objects overlap, the two objects appear as one and cannot be resolved. Abbe reasoned that the limits of resolution could be found by examining a diffraction grating. If the lines of the grating are to be seen, the objective lens must include the direct or axial ray plus at least the first diffracted ray (Fig. 8-9). The angle (α) depends upon the wavelength (λ), upon the distance (d) between lines on the grating, and upon the refractive index of the medium, or n sin α = λ/d. However, n sin α = N.A., so that d = λ/N.A. This is the smallest value of d resolvable by an objective. If oblique illumination is used, as from an Abbe condenser, this limit is reduced to about half, or d = λ/2 N.A. This limit is unattainable for several geometric reasons; the theoretical limit of resolution is usually given as d = 1.2λ/2 N.A. If an object is illuminated with blue light (λ = 400 mμ) and examined with a lens system (N. A.
= 1.40), the limit of resolution is

d = 1.2 × 400 / (2 × 1.40) = 171 mμ

or about half the wavelength of the light used.

Fig. 8-10. Oil immersion microscopy.

Since the numerical aperture is limited by the refractive index of the least refractive medium in the light path, high-resolution microscopy uses oil-immersion lenses. A drop of a transparent "oil" with a refractive index of about 1.5 is placed between the cover glass and the objective lens and usually also between the condenser and the slide (see Fig. 8-10). The material to be examined is mounted between the slide and the cover glass in another medium of high refractive index. The numerical aperture is thus increased, reducing the limit of resolution.

Fig. 8-11. A rotary microtome. (Courtesy American Optical Company.)

An objective lens is moved upward or downward until the object is in focus on the retina. A slight deviation upward or downward produces an inferior image, but the eye might not notice the imperfections. Further deviation, however, produces an image which is obviously out of focus. The thickness of the object which seems to be in focus at any one time is known as the depth of field. The greater the resolving power, the thinner this depth will be. If great detail is to be seen, high numerical aperture is required, but visibility of details is achieved with a sacrifice of depth of field. Usually an object is examined first with a low power objective (greater depth) and later with the high power objective. The narrow depth of field in high power objectives is not entirely a disadvantage because it permits "optical sections" of cells. If the center plane of a cell is in focus, both the top and bottom will be invisible and thus will not confuse the image of that center plane. The net result is the same as taking a very thin slice out of the center of the cell.
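Abbe's expressions lend themselves to a short numerical sketch. The function names are arbitrary; the 400 mμ wavelength and the N.A. of 1.40 are the worked example from the text:

```python
import math

def numerical_aperture(n, angular_aperture_deg):
    """N.A. = n * sin(half the angular aperture)."""
    return n * math.sin(math.radians(angular_aperture_deg / 2.0))

def resolution_limit(wavelength_mmu, na):
    """Theoretical limit of resolution, d = 1.2 * lambda / (2 * N.A.),
    returned in the same units as the wavelength (here millimicrons)."""
    return 1.2 * wavelength_mmu / (2.0 * na)

# Blue light (400 millimicrons) with an N.A. 1.40 oil-immersion objective:
print(round(resolution_limit(400, 1.40)))  # 171, about half the wavelength
```

The first function also shows why oil immersion helps: raising n raises the attainable N.A., which lowers d.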
By successive examinations of various optical sections within the cell, a fairly complete three-dimensional picture can be built up.

Aberrations

High quality optical components for microscopes must incorporate corrections for certain "aberrations" if the image is to represent the object. Otherwise several aberrations inherent in optical systems can be extremely annoying.

Spherical Aberration: Lens elements with spherical curvature do not focus at a point because the edges produce relatively greater refraction than the center. An object cannot be brought to focus at a point but rather is spread over some distance along the optical axis. This spherical aberration could be corrected by using lenses with nonspherical curvature, but microscope components are so small that the sphere is about the only curvature that can be made practically. More commonly, spherical aberration is corrected by combining lens elements in such a manner that one lens compensates for the spherical aberration of another.

Chromatic Aberration: Because refraction depends on the wavelength of the light, a single lens does not focus all colors at the same point. The object seems to be surrounded by halos of different colors. This color discrepancy is called chromatic aberration. It can be corrected by using combinations of lens elements of different refractive index. Achromatic objectives composed of different kinds of glass achieve adequate correction for ordinary purposes. High-resolution microscopy demands more correction, however, and apochromatic objectives made up of several elements of glass and the mineral fluorite are used. Achromatic and apochromatic objectives correct for spherical as well as chromatic aberrations. Corrections for the other aberrations (astigmatism, coma, curvature of field) are of little concern to the microscopist since the manufacturers have nearly eliminated these problems.
Microscopes corrected for spherical and chromatic aberration are magnificent instruments, but they must be used properly. For example, correction for spherical aberration was designed for use with a cover glass 0.18 mm thick, and any other thickness introduces spherical aberrations.

Preparation of materials

Some biological materials can be placed on a slide with water, covered by a cover glass, and examined immediately. Many others, however, are too thick to transmit light or there is inadequate contrast between parts. The biological specialty known as microtechnique has evolved as a result of the need to prepare materials for examination.

Water mounts are satisfactory for some purposes, but the water tends to evaporate. If a specimen must be examined for a longer period of time, mounting in glycerol gives better results. If living materials must be kept alive during examination, either water or glycerol is satisfactory, and certain nonpoisonous dyes can be added. Neutral red, methylene blue, and Janus green are "nontoxic" and tend to collect mainly in certain parts of cells, increasing the contrast between these parts and the rest of the cell. If the material is too large or too thick, slices can be made freehand with a razor blade. With practice, a steady hand can cut slices only one or two cells thick.

Prepared slides or permanent mounts are made when more than a preliminary examination is required. The techniques are complex and variable, are frequently long and tedious, but usually involve four major steps: fixation, embedding, sectioning, and staining and mounting. The order of the steps may vary somewhat. Fixation is a method of killing the cells. Ideally, the living material is killed rapidly and in such a way that the various structures do not become disarranged with respect to each other. We should hope that the nucleus of a dead cell on a prepared slide looks about the same as the living nucleus.
Obviously changes do occur, but certain fixation procedures apparently bring about a minimum of change. The killing usually is accomplished by chemical fixatives, such as a combination of formaldehyde, acetic acid, and alcohol or a mixture of potassium dichromate, chromic acid, and osmic acid. Some fixatives tend to demineralize bone or to soften hard materials. Often water is withdrawn from the tissue, and another material substituted in its place.

Before a fixed and softened material can be sliced into thin sections it must be provided with some support. The simplest means of embedding is to pour melted paraffin around the object and then allow the paraffin to permeate the material and finally to harden. Paraffin melts at a fairly low temperature, so the biological material ordinarily need not be damaged by a high temperature treatment. More recently several synthetic plastic materials have been used. Embedding for electron microscopy employs a methacrylate polymer (similar to Lucite and Plexiglas) which is prepared from small monomer molecules during the embedding process.

Slices or "sections" of a block of embedded tissue are cut on a microtome, an example of which is shown in Fig. 8-11. The block of material is held in a clamp and moved mechanically over a very sharp blade where a thin slice is cut off. On the next stroke, the embedded tissue is advanced toward the blade by a distance equal to the desired thickness of the tissue slice. Each stroke thus advances the tissue, usually by means of a lead screw, and then cuts off the amount of the advance.

Fig. 8-12. Phase contrast optics. The ocular is not shown. Diffracted rays (dashed lines) take a different path through the instrument than direct rays (solid lines). The phase plate modifies and balances the light from the two sources.
The sections of tissue, surrounded by the embedding material, slide off the knife, one after another, in a more-or-less continuous ribbon. If a series of sections is prepared in this way, each slice can be studied separately, and the whole series allows the biologist to assemble a three-dimensional picture. The microtome is a rugged but precise instrument, permitting the slicing of sections thinner than a single cell.

Permanent prepared slides are made from sections cut on the microtome. The section of biological material is attached to the slide, and the embedding material is dissolved away. Further treatment involves soaking in one or more stains or dyes, chemical treatment to make the staining permanent (at least in certain parts of the cell), and washing to remove excess dye. Dyes are selected on the basis of their affinity for certain chemical substances in the cells. For example, one dye might be attracted only to lignin and thus would collect in the cell walls of plant cells, staining them, say, red. A second staining treatment might stain cytoplasmic proteins green. The finished slide would show individual cells with red cell walls and green cytoplasm, which certainly is more contrast than the original material furnished. Once the staining is completed and chemical treatment has withdrawn all water, a cover glass is cemented over the biological material with a natural or synthetic cement of high refractive index. The whole preparation might require several hours or days, but slides prepared in this manner last for years and can be examined under the microscope at any time. If the slide has been well made, there will be a minimum of disarrangement and swelling or shrinkage of the parts of cells.

Use of the microscope

The compound microscope is a familiar instrument to anyone who has taken a biology course, so only a few general pointers on the use of the microscope will be presented here. The first steps are preparatory.
Make sure that the lenses of the microscope are clean, using nothing but specially prepared lens paper, which will not scratch the glass. Several commercial solvents are available for removing stubborn spots that should never have been there if the microscope had had proper care. If fresh material is to be examined, it can be mounted in a drop of water on a slide, covered by a cover glass, and placed on the stage. A permanent slide might need a little cleaning but otherwise is ready for use. If the microscope is equipped with a mechanical stage, the slide fits into a rectangular opening and is held in place by a metal finger and spring. The controls of the mechanical stage are then turned until the object is more or less centered over the hole in the stage. Illumination may be provided by a substage lamp or by a mirror and separate lamp. Adjust the light source and focus the condenser until the object looks bright when viewed from the side. Further adjustment in lighting will have to be made later.

If the object is new and unfamiliar, start with the lowest power objective (10× or lower). It is even desirable to use a low power eyepiece, say 5×, because it gives a wider field. Watching from the side, turn the objective lens down with the coarse focal adjustment until the lens is about a centimeter from the slide. Look through the eyepiece and slowly raise the objective with the coarse adjustment until the object comes into focus. Make final focal adjustments with the fine adjustment knob. Move the slide around, varying the fine adjustment more-or-less continually, until you find the part of the specimen that you want to examine more closely. Then the 10× eyepiece can be used to give greater magnification (but not improved resolution). Before switching to the 43× objective, place the desired object in the center of the field because the higher power lenses have a much smaller field of view.
Turn the revolving nosepiece until the higher power objective snaps into place. The microscope is parfocal, which means that both lenses will be in focus at the same position. Only minor focusing with the fine adjustment may be necessary. Clear images depend upon optimum illumination, and focusing of the condenser and regulation of the iris diaphragm should be varied until the best image is obtained. Never focus downward with the coarse adjustment without watching from the side. More than one objective lens has been driven through a slide. The slide may be easily replaced, but a scratch on the objective lens is serious.

To use the oil-immersion lens, first center the object in the field of the 43× objective. Raise the body tube and place a drop of immersion oil on the center of the slide. Watching from the side, slowly lower the oil-immersion objective. As the lens approaches the drop of oil, the oil seems suddenly to jump across the gap, making a bridge between the cover glass and the lens. At this point, finish focusing with the fine adjustment. Increase the light if necessary. The slide can still be moved around, but only to a limited extent, so that the oil connection between the cover glass and the lens is not broken. When finished, be sure to clean the oil off the lens and the slide. Oil-immersion microscopy is not very satisfactory with fresh mounts in water. Focusing tends to move the cover glass, which often moves the specimen.

The microscope, a rugged instrument, will stand some abuse but will never perform ideally afterward. As a precision instrument it deserves the very best of care, partly by way of appreciation of the lensmakers' art. The practical limits and the theoretical limits of the microscope almost coincide; probably no other instrument has reached such a high level of development. The microscope is rarely called upon to deliver its full potentialities.
Practice is necessary, but almost anyone can learn to use the ordinary microscope effectively. The beginner may get lost behind great black shapes of objects too thick for light to penetrate, or he may spend some time focusing on the lovely air bubbles that are common in fresh mounts in water. Plant cells usually are more readily recognized than animal cells because of the prominent cell wall in plants. A little experience, however, enables one to navigate among even the most difficult of materials.

In the following sections, several microscopic methods are described which demand too much of the casual operator. Considerable training and skill are required, both to use the microscope and to interpret what is seen. Only a very general description will be given.

The phase contrast microscope

Some biological materials and some parts of cells, especially when alive, offer so little contrast in density, refractive index, or color that they are difficult to study with an ordinary microscope. Some of these can be studied more effectively through the techniques of phase microscopy.

In ordinary "bright field" microscopy, high resolution depends upon diffraction of light and the fringes that appear on the edges of structures. The diffracted light must be caught by the objective lens, and the image is formed from the interference pattern of the direct or axial light and the diffracted light. If there is sufficient contrast (difference in absorption, reflection, or refraction) at these edges, the image is sharp and objects are easily recognizable. Sometimes when the contrast is high, the bright or dark interference bands around small objects are so obvious that they become annoying. If the contrast is insufficient, the diffracted rays will be weak, the direct rays will seem too bright, and the object will be difficult or impossible to see.
Phase contrast is basically a means of balancing the direct and the diffracted rays to increase the visibility of the interference fringes.

Fig. 8-13. Hypothetical birefringent structure. Polarized light passes through such a structure more rapidly in some directions than in others.

As a rule, the direct rays and the diffracted rays travel to the image level by paths of different length and therefore arrive out of phase with respect to each other. One of the features of the phase contrast microscope is a modification of these phase relationships to provide a more favorable image. In addition, the direct rays are commonly reduced in intensity so that diffracted rays contribute relatively more to the image. The diagram in Fig. 8-12 shows one common means of achieving this balance. The ring-shaped opening (annulus) in the condenser causes the illumination of the object by a hollow cone of light. The undiffracted rays pass through the objective to the "phase plate" where they are modified. Diffracted rays pass through other parts of the objective lens and phase plate. Various types of plates are available to produce bright images against a dark background or dark images against a light background. Boundaries within the object that differ only slightly in refractive index might not be visible in the ordinary microscope but are seen easily with phase contrast.

Phase contrast microscopy requires a good deal of experience and study. Very faint boundaries or edges become readily visible, and it is too easy to see structures that are not real. A wrong choice of the type of phase-modifying plate could give a completely erroneous impression of the object. Skilled users, however, have seen and photographed parts of living cells that would be completely invisible without the phase contrast microscope.

The polarizing microscope

The use of the polarizing microscope yields information about materials with an ordered internal structure.
Chemists use this microscope to examine crystals because the crystal contains atoms or molecules arranged in a definite order. Certain biological materials seem to have similar definite arrangements, and information about this fine structure can be obtained by studying the materials under polarized light.

The Microscope: In the polarizing microscope, light is supplied through a disk of polarizing material, such as high-grade "Polaroid," so that the specimen is illuminated by light which vibrates in only one plane. The disk of polarizing material, known as the polarizer, is mounted in or adjacent to the condenser in such a manner that it can be rotated. A similar disk of polarizing material, called the analyzer, is mounted behind the objective or in the eyepiece and also can be rotated. Scales marked in degrees tell the direction of orientation of the polarizer and the analyzer. If the two are parallel to each other, the polarized light passes through the analyzer easily, and the field appears bright (with no specimen on the stage). If the analyzer is rotated 90°, it prevents the passage of the polarized light, and the field will be dark. This is the condition known as "crossed polarizers." Polarizing microscopes are also fitted with a rotating stage so that the object can be rotated within the polarized illuminating beam.

Effects of Materials Upon the Polarized Beam: Some kinds of materials have no effect upon polarized light regardless of the direction in which the light passes through. The polarizing microscope will behave the same as if the material were not in the light path. Such materials are optically isotropic. Other materials, notably crystals and colloidal aggregates, show a different behavior depending on the direction in which the light beam passes through them. Such materials exhibit one or more of several forms of optical anisotropy.
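The brightening and darkening of the field as the analyzer is rotated follows Malus's law, I = I₀ cos²θ, a standard optics relation that the text does not state explicitly. A minimal sketch:

```python
import math

def transmitted_intensity(i0, theta_deg):
    """Malus's law: intensity passed by an analyzer whose axis is
    rotated theta degrees from the polarizer's transmission axis."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Parallel polarizers give a bright field; "crossed polarizers"
# (analyzer rotated 90 degrees) give a dark field.
print(transmitted_intensity(1.0, 0))   # full brightness: 1.0
print(transmitted_intensity(1.0, 90))  # essentially zero
```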
Most optically anisotropic materials differ in refractive index in different directions, which means that the polarized light passes through the material more rapidly in one direction than in others. Such materials are doubly refractive or birefringent. The effect would be apparent in a material such as that shown in Fig. 8-13. Rods of one substance, with a characteristic refractive index, are aligned parallel to each other in a medium of different refractive index. The polarized light is propagated in the direction parallel to the fibers with a velocity different from the velocity across the fibers. If this birefringence is a property of the material itself and not of the medium in which it is mounted under the microscope, the material exhibits intrinsic birefringence. A similar effect occurs when particles with one refractive index are oriented in a medium of different refractive index, but this effect, known as form birefringence, might disappear if the medium is changed. Components of protoplasm seem to show both these effects under certain conditions.

Dichroism is dependent upon differences in absorption of polarized light passing through in different directions. Certain colors of light are absorbed when the material is oriented in a particular way, while another pattern of absorption appears if the material is rotated 90°. Thus the material will change color as it is rotated.

The analysis of materials with the polarizing microscope depends upon changes occurring when the polarizer, the material, or the analyzer is rotated. The observer may see alternations of light and dark fields, changes in color, or a series of color fringes reminiscent of spectra. Interpretations require experience, and only relatively few biologists have used the polarizing microscope. There is a body of literature including studies on protoplasm, however, and some of our knowledge of the orientation of materials in protoplasm has resulted from such studies.
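Birefringence is usually quantified as a retardation, the optical path difference accumulated between the fast and slow rays; this standard relation (retardation = thickness × Δn) is implied but not written out in the text, and the numbers below are purely illustrative:

```python
def retardation_nm(thickness_um, delta_n):
    """Optical path difference between the slow and fast rays of a
    birefringent specimen: retardation = thickness * (n_slow - n_fast).
    Thickness is in micrometers; the result is in nanometers."""
    return thickness_um * 1000.0 * delta_n

# A hypothetical 5-um-thick specimen with a refractive-index
# difference of 0.01 between the two directions:
print(retardation_nm(5.0, 0.01))  # 50.0 nm of path difference
```

Retardations of this order are what produce the alternations of light and dark and the interference colors seen between crossed polarizers.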
Another fairly simple effect of some materials on the beam of polarized light is a rotation of the plane of polarization. If the polarizer of the microscope is set so that the light is vibrating in the "twelve o'clock to six o'clock" plane, the material on the stage will rotate this plane to the right or left by an amount which depends on the nature of the material and its thickness. The degree of rotation can be determined by rotating the analyzer until the field looks bright. Ordinarily this effect is small, and the polarizing microscope would not be used to measure it. Instead, we would use an instrument called a polarimeter, which is similar in principle but has provision for a much thicker layer of the material to be examined.

The electron microscope

The electron microscope employs a beam of electrons instead of a beam of light. These electrons, which are produced by a heated filament, can be focused into a beam by an electrostatic or a magnetic field. The electrons behave as if they had a frequency and a wavelength, but this wavelength is much shorter than the wavelengths of visible light. The "optical" parts of the electron microscope are analogous to those in the light microscope, consisting of an electrostatic or magnetic "objective lens" and a similar "projector lens." The electron beam passes through the object, where electrons are either transmitted or scattered in various directions, depending on the nature of the material in the object. The transmitted electrons are brought to focus on a photographic plate in a pattern corresponding to regions of high transmission or high scattering in the object. Since the wavelength of the electrons is short, the resolution is greater than that available in the light microscope. Biological materials offer difficulties in electron microscopy, but the solution of these problems has permitted pictures showing exceedingly fine details of structure. Most recent biology books contain excellent examples.
Even though these show great detail, much is lost in the printing process; the original photographs are truly magnificent. The electron beam must operate in a vacuum, which means that the biological material must be dry (and therefore dead). Exceedingly thin layers of material must be used, and usually, since biological materials are relatively ineffective in electron scattering, atoms of metal are added to increase contrast.

Earlier electronmicrographs were prepared by drying a thin film of biological material and then "shadowing" with metal atoms. In a vacuum chamber the metal atoms are "sputtered" off a heated coil from a position above and to the side of the biological material. The metal atoms form a thick layer on one side of any raised places in the biological material and a thin "shadow" behind these spots. The resulting electronmicrograph provides a three-dimensional effect more or less like an aerial photograph, with alternations in light and shadow.

More recently it has been possible to photograph internal details of biological structures. The tissue to be examined is chopped and then fixed in a solution of osmic acid (osmium tetroxide). The cells are killed, and, at the same time, metallic osmium atoms are deposited in certain parts of the cells. The fixed material is embedded in plastic and sectioned on a special microtome. The slices produced must be thinner (less than 1 μ) than those used for light microscopy. These thin slices are supported on a screen analogous to the glass slide and placed in the electron microscope. Direct viewing is possible if the electron beam is focused on a fluorescent screen, or a photograph can be made. The theoretical limit of resolution of the electron microscope is in the vicinity of 2 × 10⁻¹⁰ cm, compared to a theoretical limit for the light microscope of about 2 × 10⁻⁵ cm. This calculation, however, assumes an electron "lens" of high N. A.
In practice, magnetic and electrostatic lenses have large aberrations, and only low values of N. A. can be used. The practical limit of resolution for the electron microscope is about 1 mμ, or 10⁻⁷ cm. This limit is still a considerable improvement over that found in the light microscope.

The interpretation of electronmicrographs requires some caution. The biological material has been subjected to a variety of treatments and of course is no longer alive. It is entirely possible that fixing, staining with metal, and sectioning have introduced "artifacts," or apparent features which are not real. The tissue will have been sliced in many different planes, and the selection of a particular spot to photograph depends on the microscopist's idea of what the preparation ought to look like. Nonetheless, so many fine pictures showing exquisite detail have appeared recently that the electron microscope has had a profound influence upon biological research.

SELECTED REFERENCES

Chamot, Emile Monnin, and Clyde Walter Mason, Handbook of Chemical Microscopy, 3rd ed., vol. I. New York: John Wiley & Sons, Inc., 1958. This volume presents a better discussion of general microscopy and special methods than some of the books on general microscopy. An especially detailed treatment of polarization microscopy is included.

Clark, George L., ed., The Encyclopedia of Microscopy. New York: Reinhold Publishing Corporation, 1961. Complete coverage of all the various topics of microscopy.

Frey-Wyssling, A., Submicroscopic Morphology of Protoplasm, 2nd English ed. Amsterdam: Elsevier Publishing Company, 1953. Includes an excellent description of the application of polarization microscopy to biological materials.

"Microscope." Encyclopaedia Britannica. 1956 edition. XV:434-443B. This article is one of the best of the brief discussions even though the subject is approached from a viewpoint somewhat different from that presented in this chapter.
Several excellent photographs are included.

CHAPTER 9

Colorimetry-Spectrophotometry

The colorimetric procedure, one of the most common methods used in analytical chemistry today, finds application in biology also. The method depends upon those physical principles which are related to the color of various substances. In its simplest form, colorimetry measures the amount of a material by measuring the intensity of its color. The greater the concentration, the more highly colored the solution. An extension and refinement of the technique is commonly given the term spectrophotometry. Spectrophotometry also can be used to determine concentrations, but has the added advantage that it can be used to identify materials and measure rates of reactions. Any material that has color, or more properly, any material that absorbs radiant energy in the visible region, in the ultraviolet, or in the infrared, is adaptable to measurement by this procedure.

General considerations

Measurement of the color of materials depends upon the nature of light itself. The wave nature of light was described in Chapter 8. The conception of light as wave motion, however, does not entirely and adequately describe its behavior. Although seemingly incongruous at first, it is now relatively easy to accept a different nature for light. It can be shown mathematically that when it interacts with matter, light behaves as if it were made up of corpuscles or packets of energy. These packets of energy are commonly known as quanta (singular: quantum) or occasionally as photons. A quantum is simply a certain amount of energy, whose energy equivalent could be measured in calories, electron-volts, or ergs. The size of the quantum increases as the frequency increases.
The extremely long wavelength and low frequency radio waves contain relatively little energy per quantum, while gamma radiation, with its short wavelength and very high frequency, contains a great deal of energy per quantum. Planck was able to show that the relationship is in strict proportion and was able to arrive at a value for the proportionality constant (h) of the equation

E = hν     (9-1)

If we know the frequency, we can use Planck's constant, approximately 6 × 10⁻²⁷ erg-sec, and arrive at the energy (E) in ergs per quantum. Since a single quantum is an extremely small amount of energy, we usually deal in terms of N (Avogadro's number) quanta. By way of illustration it can be pointed out that N quanta of red light are equivalent to about 40 kilocalories of energy, while N quanta of blue light are equivalent to about 70 kilocalories.

Colorimetry and spectrophotometry depend upon the interaction between light and matter. Almost every material will absorb radiant energy of some wavelengths. If these wavelengths happen to be within the visible region, we say that the material has color because the eye sees only those parts of the spectrum which are not absorbed but are reflected or transmitted. If the absorbed wavelengths happened to be in the infrared or ultraviolet, and relatively little of the visible light were absorbed, the material would be white or colorless. If nearly all the visible light were absorbed, we would call the material black.

When a molecule absorbs light energy it must absorb one and only one full quantum. It cannot absorb parts of quanta or several at a time. When the atom or molecule accepts this bundle of energy, so that it contains more than the usual amount of energy, it goes into an "excited state." The most likely explanation on the basis of atomic structure is that one of the orbital electrons is lifted from a stable configuration into a less probable, higher energy state.
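The kilocalorie figures just quoted can be checked directly from Eq. (9-1), using E = hν = hc/λ and multiplying by Avogadro's number. A sketch (the wavelengths chosen for "red" and "blue" are illustrative, not from the text):

```python
H = 6.626e-27         # Planck's constant, erg-sec
C = 2.998e10          # speed of light, cm/sec
N_AVOGADRO = 6.022e23
ERG_PER_KCAL = 4.184e10

def kcal_per_mole_quanta(wavelength_nm):
    """Energy of N (Avogadro's number) quanta, in kilocalories,
    for light of the given wavelength: E = h*nu = h*c/lambda."""
    wavelength_cm = wavelength_nm * 1e-7
    ergs_per_quantum = H * C / wavelength_cm
    return ergs_per_quantum * N_AVOGADRO / ERG_PER_KCAL

print(round(kcal_per_mole_quanta(700)))  # red light: about 41 kcal
print(round(kcal_per_mole_quanta(420)))  # blue light: about 68 kcal
```

The results agree with the "about 40" and "about 70" kilocalorie values given above.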
The amount of energy required for this transformation is exactly the amount in the particular quantum. In any kind of atom or molecule there may be several possible ways in which the different electrons can be displaced. Each of these would require its own characteristic amount of energy. Therefore the atom or molecule could absorb several different wavelengths of radiant energy. If we examine the material by means of the spectroscope we see dark lines wherever light is absorbed; the pattern of these dark lines is characteristic of the material being examined.

The absorption of light depends upon the displacement of electrons, and the phenomenon should also occur in reverse. If a molecule is heated to a sufficiently high temperature, electrons can be displaced. As they fall back to their normal levels they emit the excess energy as light. If we examine this light with a spectroscope we find that the bright lines, or emission lines, correspond at various regions in the spectrum with the dark absorption lines in our previous spectrum.

The molecule in the excited state is unstable, and the energy must be dissipated in a small fraction of a second. There are several possible fates for this excitation energy. Far more common than any of the others is the conversion of this energy into heat, which is transferred to other atoms and molecules in the vicinity. The displaced electron falls back to its normal energy level. In some cases, the energy may be used to drive a chemical reaction. The whole field of photochemistry is founded upon this ability. The absorbing molecule itself may be changed in the photochemical reaction or it may serve as a sensitizer for some other reaction. A third possible fate for the quantum of absorbed energy is its re-emission as light in the process of fluorescence.
Since, during the lifetime of the excited state, a small fraction of a second, a portion of the energy will be dissipated as heat, the quantum re-emitted must be smaller and therefore of a longer wavelength than the quantum absorbed. In certain kinds of molecules a fourth fate is possible. The excited molecule may change from the unstable excited state into a longer-lived excited state. After a period of seconds, or even minutes or hours, the quantum is re-emitted as light, this time of a much longer wavelength. This phenomenon, known as phosphorescence, probably occurs only rarely in biological materials.

Analytical instruments

The simplest kind of colorimetric instrument uses the human eye as a light detector. The technique consists of comparing the color of the "unknown" solution with the color of a set of solutions of known concentration. The human eye is an amazingly sensitive instrument, and quite small differences in concentration can be detected. In the ordinary practice the "unknown" is held between two of the "known" solutions, and by trial and error we find that pair of standard solutions between which the unknown fits. Various devices have been developed to facilitate this comparison, to reduce stray light, and otherwise to increase the accuracy of the measurement. For the qualitative identification of materials, by locating the black absorption bands it is possible to use a direct vision spectroscope. This is an art which has not been properly developed in the United States but is much more popular among the European workers, particularly several at the University of Cambridge in England.

In the United States we are much more likely to depend upon instruments which contain photoelectric cells to measure light intensities. In order to determine the concentration of a colored material, it is necessary to measure the amount of light actually absorbed by that material.
The greatest response is obtained if measurements are made with light of a color strongly absorbed by the molecules. The typical colorimetric and spectrophotometric instruments isolate a band of wavelengths in the vicinity of this maximum absorption. This may be done by a system of colored glass filters or by the so-called interference filters. For routine measurements these filter systems are extremely convenient and rapid to use. One merely places the pure solvent in the light path and measures the amount of light striking the photocell. This measurement is often used to establish an electrical zero point. The solvent is then replaced by the colored solution and the diminution in light intensity is noted. The concentration of the solution and the reduction in the light transmitted through it are related, as will be described later. Such an instrument would be called a colorimeter.

Fig. 9-1. A simplified spectrophotometer, set to allow a beam of green light to pass through the exit slit. The wavelength is adjustable in any of three ways: by rotating the light source through the arc at A, by rotating the prism (B), and by sliding the slit and detector apparatus along the plane at C. The output of the photocell is measured by an appropriate electrical circuit.

Spectrophotometers: Other instruments known as spectrophotometers depend upon a monochromator which uses a prism or a diffraction grating to resolve a beam of white light into its spectrum. The spectrum is allowed to fall upon an adjustable slit which allows light of only one color to pass through. A simplified system is diagrammed in Fig. 9-1. Theoretically it makes no difference whether the colored solution is placed in front of or behind the prism. In practice it is usually placed as shown in the diagram.
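The relation between concentration and transmitted light that the text defers ("as will be described later") is conventionally expressed through the Beer-Lambert law, A = εlc. A hedged sketch of the two-reading procedure, with purely illustrative numbers:

```python
import math

def absorbance(i_solvent, i_sample):
    """Absorbance computed from the two colorimeter readings:
    light transmitted by the pure solvent (the zero point) and
    by the colored solution."""
    return math.log10(i_solvent / i_sample)

def concentration(a, molar_absorptivity, path_cm):
    """Beer-Lambert law A = e*l*c, solved for the concentration c."""
    return a / (molar_absorptivity * path_cm)

# Illustrative readings: the solution transmits 10% of the
# solvent reading, giving an absorbance of 1.0.
a = absorbance(100.0, 10.0)
print(a)                              # 1.0
print(concentration(a, 5000.0, 1.0))  # 2e-4 molar (hypothetical e)
```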
For comparison and to show some of the operating details, the optical system of the Beckman Model DU Spectrophotometer is shown in Fig. 9-2. It will be seen that the principle is the same. Many models of spectrophotometers are available on the commercial market. They differ chiefly in the quality of the optical system and in the electrical measuring system. The smaller and less expensive instruments typically use a direct reading galvanometer of some sort. The deflection of this meter is proportional to the output of the photocell. The more refined instruments use a Wheatstone bridge as a null measuring instrument, as was described in Chapter 4.

Fig. 9-2. Diagram of the Beckman Model DU optical system. Light from the tungsten lamp is focused by the condensing mirror and directed in a beam to the diagonal slit entrance mirror. The entrance mirror deflects the light through the entrance slit and into the monochromator to the collimating mirror. Light falling on the collimating mirror is rendered parallel and reflected to the quartz prism where it undergoes refraction. The back surface of the prism is aluminized so that light refracted at the first surface is reflected back through the prism, undergoing further refraction as it emerges from the prism. The desired wavelength of light is selected by rotating the Wavelength Selector, which adjusts the position of the prism. The spectrum is directed back to the collimating mirror, which centers the chosen wavelength on the exit slit and sample. Light passing through the sample strikes the phototube, causing a current gain. The current gain is amplified and registered on the null meter. (Courtesy Beckman Instrument Co.)

The design and construction of spectrophotometers is an extremely complex subject, so we can discuss only the most gross aspects.
The useful range, accuracy, precision, and convenience of a spectrophotometer depend upon the source of radiant energy, the optical system for focusing beams and for dispersion of the spectrum, the light-detecting devices, and the electrical system.

Energy Sources: The source of radiant energy must produce a continuous spectrum of illumination over the entire range of wavelengths to be used. Ideally, the brightness of the source should be the same, regardless of color. Since this ideal is impossible to achieve, the rest of the spectrophotometer must compensate for differences in the output of the energy source. The lamp must be bright enough to allow the isolation of a narrow band of one color with enough energy to actuate the light-detecting device even after all the reduction in intensity caused by the various elements in the light path. The lamp must be operated from a power supply which does not vary appreciably, so that the energy output of the lamp does not change in time. Usually a tungsten filament lamp is used for the visible region of the spectrum. The lamp is powered by batteries or by electronically-regulated power supplies. A tungsten lamp operating at a different temperature, or an electrically-heated block of metal, may be used as a source of infrared radiation. A gas discharge lamp, usually charged with hydrogen, is used for the ultraviolet region of the spectrum. Hydrogen, when excited by an electric arc, emits a fairly broad, continuous spectrum of ultraviolet radiation. The hydrogen lamp must be encased in a material other than glass since ordinary glass does not transmit much ultraviolet radiation.

Monochromator: The optical system focuses a beam of radiant energy from the source, disperses the spectrum of this energy, and focuses a monochromatic beam through the sample to the light-detecting device.
A system of lenses can be used for focusing if the reduction in light intensity by the lenses themselves is not objectionable. If the losses are too great, a system of concave mirrors can be used to focus light beams. The spectrum can be dispersed by a prism or by a diffraction grating, each of which has some advantages and some disadvantages. The prism is efficient in that a high proportion of the energy entering the prism is recovered in the single dispersed spectrum. Only one spectrum is produced, but the dispersion is nonlinear; that is, the short wavelengths are separated from each other more than the long wavelengths. The prism is slightly sensitive to temperature, but this is usually unimportant. The diffraction grating produces spectra by the interference principle (Chapter 8) and may operate by either transmission or reflection. Several spectra appear, designated first order, second order, etc. The short wavelength end of the second order spectrum may overlap the long wavelength end of the first order spectrum, necessitating special precautions to eliminate stray light of undesirable colors. The grating, however, is advantageous in producing linear dispersion, which simplifies the mechanical system used to control the wavelength setting of the instrument. Some commercial instruments use prisms; others employ gratings; the Cary Model 14 has one of each.

A set of slits is an integral part of the optical system. A first slit passes a beam of light from the lamp to the prism or grating and blocks out light moving in other directions. A second slit isolates a band of light from the dispersed spectrum. These slits may be adjustable in width. If the instrument is to be used only in the visible region, glass optical parts are satisfactory. If the instrument is to be useful in the ultraviolet, any elements (lenses, prism) which transmit energy are made of quartz.
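The order-overlap problem just mentioned follows from the grating equation, mλ = d sin θ (a standard relation, not written out in the text): light of wavelength λ in the second order leaves the grating at the same angle as light of 2λ in the first order. A small sketch, with an arbitrary groove spacing:

```python
import math

def diffraction_angle_deg(wavelength_nm, order, groove_spacing_nm):
    """Diffraction angle from the grating equation
    m * lambda = d * sin(theta), assuming normal incidence."""
    return math.degrees(
        math.asin(order * wavelength_nm / groove_spacing_nm))

D = 2000.0  # hypothetical groove spacing, nm
# 400-nm light in the second order emerges at the same angle as
# 800-nm light in the first order, so the two spectra overlap there.
print(diffraction_angle_deg(400, 2, D))
print(diffraction_angle_deg(800, 1, D))  # same angle as above
```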
Radiation Detectors: The devices used to detect radiant energy are of several types. Any of these responds more strongly in some regions of the spectrum than in others. Commercial instruments commonly employ two or more detectors of different spectral sensitivity so that the entire spectral range can be covered. The light-sensing device must be able to produce an electrical signal adequate to drive the electrical measuring system. Relatively insensitive detecting and measuring systems are easy to construct and use and are adequate for some purposes. The precision of measurement is limited, since the system will not be able to measure small amounts of energy or small differences in energy. A more sensitive, more precise combination may be desirable, but is usually accompanied by difficulties in operation, noise, delicacy of control, and more elaborate, expensive circuitry.

Fig. 9-3. Effect of slit width and of spectral band width upon the energy passing through the exit slit.

Any spectrophotometer must be a properly balanced collection of components. Obviously there is no point in coupling a very sensitive detector to a crude optical system. As a rule, the better the instrument, the narrower the spectral width of the band of monochromatic light that may be used. Imagine a spectrophotometer which produces a spectrum whose energy distribution is shown by the curve in Fig. 9-3. The actual response of the photocell will depend upon the fraction of the area under the curve which passes through the exit slit. For example, the shaded portion between λ₁ and λ₄ would produce a certain electrical response. If the slit is made narrower, a band extending from λ₂ to λ₃ will pass through, and the electrical output is reduced by approximately one-half.
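The area argument of Fig. 9-3 can be imitated numerically; here a Gaussian curve stands in for the dispersed energy distribution (the curve shape and all numbers are assumptions for illustration, not from the text):

```python
import math

def band_energy(center, width, n=10000):
    """Fraction of a unit-area Gaussian spectrum (mean 0, sigma 1)
    that passes through a slit of the given spectral width centered
    at `center`, computed by the trapezoidal rule."""
    total = 0.0
    step = width / n
    x = center - width / 2
    for i in range(n + 1):
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        x += step
    return total * step

wide = band_energy(0.0, 2.0)    # slit spanning lambda1..lambda4
narrow = band_energy(0.0, 1.0)  # slit narrowed to lambda2..lambda3
print(wide, narrow)  # halving the slit roughly halves the energy
```

For this assumed curve the narrower band passes about 56 percent of what the wider band does, in line with the "approximately one-half" estimate above.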
As the slit becomes narrower and narrower, the amount of energy falling on the photocell becomes smaller and smaller until eventually there is insufficient energy to produce a usable electrical output. The width of the usable spectral band can be reduced by increasing the responsiveness of the detecting system. The minimum usable spectral band width varies from one part of the spectrum to another, and there is considerable variation among the different commercial instruments. Usually the lower the price, the wider the band widths must be. The effect of spectral band width upon actual measurement is illustrated in Fig. 9-9 on page 124.

Recording spectrophotometers

Perhaps the ultimate in convenience is attained in the recording spectrophotometers. These instruments contain a motor-driven mechanism that will scan the entire spectrum; that is, the entire spectrum is swept across the exit slit. At the same time the amount of light falling on the photocell is recorded by a moving pen on paper. Several instruments are available on the market, and each uses slightly different principles in its operation. Figures 9-4 and 9-5 compare two of the available instruments. Each uses a double monochromator; that is, the beam of monochromatic light produced by one monochromator is passed through a second monochromator to bring about a further reduction in stray light. The Perkin-Elmer Model 350 operates in the visible and ultraviolet and employs two similar quartz prisms. Radiant energy from either the tungsten lamp or the hydrogen lamp enters the monochromator where it is dispersed, and the narrow beam emerges through the exit slit. In the Cary Model 14 radiant energy from either the tungsten source or the hydrogen lamp follows a somewhat similar path, except that the second monochromator employs a grating instead of a prism. The Cary instrument is also equipped to operate in the near infrared.
Because of the possibility of producing stray infrared in the measuring system, the infrared beam is produced by a second tungsten bulb and passes backward through the double monochromator. Any stray energy is thus removed.

Fig. 9-4. Optical schematic diagram of the Perkin-Elmer Model 350. Note the double monochromator and the use of two matched phototubes as detectors. (Courtesy Perkin-Elmer Corporation.)

In any spectrophotometer it is necessary to standardize the instrument at every wavelength with the pure solvent in the light path because the response of the instrument varies with wavelength. In the recording instruments, the comparison of the sample with the reference solvent is accomplished automatically since the entire spectrum is swept across the exit slit and the wavelength changes rapidly. Reference to Figs. 9-4 and 9-5 will show two different means of performing this operation. In the Perkin-Elmer instrument the light beam emerging from the monochromator is chopped by a rotating shutter. This shutter provides pulses of light to the phototubes, yielding a pulsating output which can be amplified. The beam of light is then divided so that half the energy passes through the sample and half through the reference tube, each half-beam arriving ultimately at a separate detector. The outputs from the two detectors are compared electronically, and the difference between the two is plotted on a chart. This arrangement requires a reliable beam-splitter and a pair of carefully matched phototubes but allows the phototubes to be placed close to the sample and reference containers. In the Cary spectrophotometer, a motor-driven shutter-beam-splitter combination provides a time sequence of "dark interval, through reference, dark interval, through sample, dark interval, through reference, etc."
Later these two beams are recombined and directed to the same phototube. The two beams are compared by means of an electronic system synchronized with the rotating mirror which splits the beams. This arrangement avoids the possible mismatch of two phototubes but moves the detector farther away from the sample and reference containers.

Fig. 9-5. Optical schematic diagram of the Cary Model 14 Recording Spectrophotometer, for comparison with Fig. 9-4. Note that the split beam, after passing through the reference and sample, is recombined in a single phototube. (Courtesy Applied Physics Corp.)

Ultraviolet and infrared spectrophotometers

Measurements in the ultraviolet region require a special source of energy and an optical system which will transmit ultraviolet. Usually quartz optics are used, and the cuvettes or tubes which hold the solutions are made of quartz or fused silica. Otherwise operation in the ultraviolet is the same in principle as operation in the visible region. Ultraviolet spectrophotometers are important in biology because a number of important colorless compounds (proteins, nucleic acids, vitamins, hormones) absorb strongly in the ultraviolet.

Infrared spectrophotometry requires special detectors (often lead sulfide) but an otherwise similar optical system. Absorption in the infrared depends upon different principles than absorption of visible and ultraviolet radiation. Infrared quanta are generally too small to bring about transitions in electronic energy levels. Instead, the quantum size corresponds to the energy differences of the various vibrational states of certain chemical bonds. Most important of these are the heteroatomic bonds, such as O-H, C-H, and N-H. Any molecule which possesses such bonds will absorb infrared radiation at a series of wavelengths.
Since the actual collection of absorption bands depends upon the number and kinds of bonds, the infrared spectrum is a characteristic property of a certain molecular structure.

Fig. 9-6. Effect of concentration of a colored solution upon the transmittancy.

Measurements of concentration

We can observe visually that the more concentrated a solution of a colored material, the more light is absorbed and the more intense its color. More formally, however, it is possible to express these relationships by means of equations. Imagine a spectrophotometer which is emitting a beam of monochromatic light. We place a container of the solvent in the light path and measure the output of the photocell, or use this amount of light to set the instrument at "100 per cent light transmission." In effect, we are setting the instrument so that it appears that all of the light passes through the pure solvent. With the colored solution in the light path we find that only a fraction of the light strikes the photocell, and the response is accordingly lower. If we refer to the light passing through the solvent as the initial incident light, I0, and the light passing through the colored solution as I, we arrive at the following relationship:

I/I0 = T    (9-2)

where T is the fraction of light transmitted through the colored solution. The value T arrived at in this way (expressed as a decimal fraction or as a percentage) is commonly called transmittancy. It is related to the concentration of the solution according to an equation of the general form T = f(conc.). A series of measurements is shown in Fig. 9-6. It would be possible to use this curve in measuring concentrations of unknown solutions. We construct the curve for any particular material at a given wavelength and then, by measuring the transmittancy of an unknown solution and referring to the curve, compute its concentration.
This is difficult, however, because of the very curvature of this relationship. A relatively simple transformation of the equation leads to a much more convenient curve, shown in Fig. 9-7. This transformation is shown in the following equation:

A = -log10 T = log10 (1/T) = log10 (I0/I)    (9-3)

The value A has been given various names. It is directly related to the amount of light absorbed and therefore is commonly called absorbancy. It is also a measure of the extinction of the light and is sometimes called extinction. It may also be referred to as optical density. All these terms are in relatively common use but mean essentially the same thing. Determining concentrations in this manner is much more convenient because absorbancy is directly proportional to concentration:

A = kbc    (9-4)

where k is a constant, b is the length of the light path through the solution, and c is the concentration of the solution. We make a measurement of the light absorbed by a set of standard solutions and then determine the straight line shown in Fig. 9-7.

The amount of light absorbed by a given molecule at any wavelength is a function of the molecular structure. If we measure the amount of light absorbed at any particular wavelength, this can be related to concentration by any of several absorption coefficients (k, above). One of these, the specific absorption coefficient (a), tells us the amount of light absorption to be expected from a solution whose concentration is measured in grams per liter. The molar extinction coefficient (ε), on the other hand, tells us the amount of light absorption to be expected from a solution whose concentration is expressed in moles per liter. The value b in the equation must be included because the thicker the solution the more light it will absorb.

Fig. 9-7. Relationship between absorbancy and concentration.
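The arithmetic of equations 9-2 through 9-4 fits in a few lines of a program. The following Python sketch converts a transmittancy reading to absorbancy and then to concentration; the 25 per cent reading and the absorption coefficient of 0.30 liter per gram-centimeter are invented for illustration, and a one-centimeter light path is assumed.

```python
import math

def absorbancy(T):
    """Equation 9-3: A = -log10(T), where T = I/I0 (equation 9-2)."""
    return -math.log10(T)

def concentration(A, k, b=1.0):
    """Invert equation 9-4, A = k*b*c, for the concentration c."""
    return A / (k * b)

# Hypothetical reading: 25 per cent of the light is transmitted,
# with an assumed absorption coefficient k = 0.30 liter/(g cm).
T = 0.25
A = absorbancy(T)                      # about 0.602
c = concentration(A, k=0.30, b=1.0)    # about 2.0 g/liter
```

With a one-centimeter light path, b drops out of the arithmetic numerically, just as noted in the text below.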
In many of the commonly used instruments the glass tubes or cuvettes have a light path of one centimeter, and b simply drops out of the equation (but should not be forgotten). Further descriptions of the use of these measurements are given in the following section.

Use of the instruments

The spectrophotometer is commonly used for the qualitative identification of materials. Since the amount of light absorbed depends upon the electronic displacements possible within the molecule, the various colors of light absorbed are a strict function of molecular structure. Many metals or their vapors produce sharp-line spectra, but most colored organic compounds are complicated in structure, and the absorption bands are spread out laterally. If we wished to know whether two similar-appearing solutions were actually identical, we could measure the absorption of different colors of light by each. The absorption spectrum is a measurement of the absorption by the colored solution over a range of wavelengths. Wherever the electronic transitions are possible the molecule will absorb light, and where they are impossible relatively little will be absorbed. The actual positions of the maxima and minima depend on the type of molecule. Figure 9-8 shows curves for two pink pigments extracted from different species of algae. Both solutions were pink although there was a noticeable difference in the shade of pink.

Fig. 9-8. Absorption spectra of pink pigments extracted from two different species of algae, traced from a record measured by the Cary Model 14 recording spectrophotometer.

These two pigments must be similar in structure because two of the peaks correspond. The third peak in one is completely missing from the other, however, so the molecules cannot be identical. Absorption spectra for many kinds of materials appear in the literature.
It is possible to identify materials by comparison with these published curves.

The quality of the spectrophotometer affects the absorption spectrum of a solution in a way that is surprising at first. Narrow spectral band widths produce sharper spectra, as can be seen from Fig. 9-9. One curve was determined by using quite broad bands of monochromatic light. Although the instrument may seem to be set at a certain wavelength, what the photocell sees is really that wavelength plus a band on either side. Thus the apparent absorption at this wavelength is really the average absorption over a wider band. If the actual curve happens to be passing through a maximum at this point, the measured absorption (average) is lower than it should be. The maximum absorption seems to be reduced, and minima are increased.

Fig. 9-9. Absorption spectra of extract like that shown in Fig. 9-8, measured on two different instruments differing principally in effective spectral band width. (Less concentrated solution than in Fig. 9-8.) Student data.

In measurement of concentrations of materials it is usual to set the instrument at a point of maximum absorption. At this wavelength the reduction in light intensity caused by the absorption by the pigment is considerably higher than the reduction in light intensity from other possible causes, such as scattering. In addition, at these maxima a difference of one or two millimicrons in the setting of the wavelength on the spectrophotometer makes relatively little difference in the amount of light absorbed. If, alternatively, the wavelengths were set on a steep portion of the absorption curve, then one or two millimicrons could make considerable difference in the absorption.
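The band-width effect described above can be imitated with a deliberately simplified calculation. The Python sketch below invents a Gaussian absorption band (all numbers are illustrative, not data from Fig. 9-9) and averages it over a narrow and a broad spectral band centered on the maximum; the broad-band average comes out lower, just as the measured peak does. A real instrument averages transmitted energy rather than absorbancy, so this is only a caricature of the effect.

```python
import math

def true_absorbancy(wl):
    # An invented Gaussian absorption band peaking at 550 mμ
    return 0.35 * math.exp(-((wl - 550.0) / 10.0) ** 2)

def measured_absorbancy(setting, half_band):
    # What the photocell "sees": the average over the whole spectral
    # band (sampled at 1-mμ steps), not the value at the nominal setting
    samples = [true_absorbancy(setting + d)
               for d in range(-half_band, half_band + 1)]
    return sum(samples) / len(samples)

narrow = measured_absorbancy(550, 1)    # very close to the true maximum, 0.35
broad = measured_absorbancy(550, 20)    # noticeably lower than 0.35
```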
For routine measurements of concentration it is common practice to make up a standard curve from freshly prepared solutions of known concentrations. A curve such as that shown in Fig. 9-7 is drawn, from which the concentration of unknown solutions can be determined. If suitable care is taken in the measurements of the standard solution, it is possible to calculate the specific or molar absorption coefficient from the slope of the curve. If these values are known with precision, then it is not necessary to draw the curve. One merely measures the light absorption, uses the proper absorption coefficient, and calculates the concentration of the solution.

Fig. 9-10. Absorption spectra of a pair of hypothetical pigments, and the spectrum to be expected from a solution containing equal concentrations of the two pigments.

One of the assumptions involved in the development of spectrophotometry was that the amount of light absorbed by a colored material would be independent of the presence of other materials. Each molecule would behave as an individual and would not be influenced by other molecules. Because of the apparent validity of this assumption, it is sometimes possible to measure the concentrations of two components in the solution by making measurements at two wavelengths. As an example, consider the pair of hypothetical solutions in Fig. 9-10. Pigment P absorbs strongly at 600 mμ, while pigment Q absorbs strongly at 650 mμ. The absorption by pigment P is low at 650 mμ, as is that of pigment Q at 600 mμ. The dotted curve in this graph shows the absorption spectrum that would be found in the case of a mixed solution containing equal concentrations of these two pigments.
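Measuring a two-pigment mixture at two wavelengths amounts to solving two simultaneous linear equations for [P] and [Q]. The Python sketch below carries out that solution; all of the absorption coefficients and absorbancies are invented for illustration.

```python
def two_pigment_concentrations(A600, A650, KP600, KP650, KQ600, KQ650, b=1.0):
    """Solve  A600 = KP600*b*[P] + KQ600*b*[Q]
    and       A650 = KP650*b*[P] + KQ650*b*[Q]  for [P] and [Q]."""
    det = KP600 * KQ650 - KP650 * KQ600
    P = (KQ650 * A600 - KQ600 * A650) / (b * det)
    Q = (KP600 * A650 - KP650 * A600) / (b * det)
    return P, Q

# Invented coefficients: P absorbs strongly at 600 mμ, Q at 650 mμ
P, Q = two_pigment_concentrations(A600=0.50, A650=0.40,
                                  KP600=0.90, KP650=0.10,
                                  KQ600=0.15, KQ650=0.80)
# Substituting [P] and [Q] back into either equation returns the
# measured absorbancy, which is a convenient check on the arithmetic.
```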
At wavelength 600 mμ, a large portion of the absorption results from pigment P. Pigment Q contributes only a small amount to the total absorption. At 650 mμ, the relationships are reversed. If we know the absorption coefficients, it is possible to calculate the concentration of both components in this mixture by the following set of transformations. Once the calculations have been made the final pair of equations can be used directly in analyses.

A600 = Measured absorbancy at 600 mμ.
K^P600 = Absorption coefficient for pigment P at 600 mμ.
[P] = Concentration of pigment P.
Other symbols follow the same system.

A600 = K^P600 b [P] + K^Q600 b [Q]    (9-5)
A650 = K^P650 b [P] + K^Q650 b [Q]    (9-6)

Solving for [P], from equation 9-5,

[P] = (A600 - K^Q600 b [Q]) / (K^P600 b)    (9-7)

Substituting in equation 9-6 and solving for [Q],

[Q] = (1/b) (K^P600 A650 - K^P650 A600) / (K^P600 K^Q650 - K^P650 K^Q600)    (9-8)

Substituting [Q] in equation 9-7 and simplifying,

[P] = (1/b) (K^Q650 A600 - K^Q600 A650) / (K^P600 K^Q650 - K^P650 K^Q600)    (9-9)

When the numerical values for the various constants are known, these equations can be simplified.

Another common use of the spectrophotometer is in the determination of reaction rates. In this case we allow the chemical reaction to proceed directly in the cuvettes. In the reaction

A + B → C + D

suppose component C has a maximum absorption at 365 mμ and none of the other components in the mixture, A, B, or D, has any appreciable absorption at this wavelength. If the starting materials A and B are placed in the cuvette in the spectrophotometer and the wavelength is set at 365 mμ, the measured increase in optical density will be a measure of the appearance of material C. By carefully timing the reaction we have an adequate and convenient measure of the rate of the reaction.

Fluorescence measurements

Fluorescence is the ability of certain molecules to re-emit absorbed light.
Since the molecule remains in the excited state for a short time, the emitted light is of a longer wavelength than that absorbed. With many kinds of compounds, especially at low concentrations, the fraction of the absorbed quanta which appear as fluorescence is constant, or nearly so. This means that solutions of greater concentration will absorb more light and therefore will fluoresce more intensively. In certain situations it is advantageous to measure the fluorescence rather than the absorption. For example, riboflavin absorbs light at the blue end of the spectrum and shows a fairly strong yellow-orange fluorescence. The concentration of a solution could be determined by measuring either absorption or fluorescence. It happens, however, that several nonfluorescent materials commonly present in solutions of this vitamin also absorb appreciably at the same wavelengths. In such a mixed solution it is difficult to ascribe any particular portion of the total absorption to the riboflavin. Since it is the only component with marked fluorescence, measurement of the fluorescence gives a more convenient measure of the concentration.

Several fluorimeters are available commercially. Generally the light beams are handled according to some modification of the diagram in Fig. 9-11. The exciting beam enters the container from one side. The fluorescent light passes through a filter, which cuts out the exciting light. It then is measured by a photocell placed at right angles to the exciting beam.

Fig. 9-11. A simplified diagram of a system for measuring the fluorescence of solutions.

Several of the commercially available instruments also have photocells behind the cuvette so that they can measure the light absorption by the sample as well as the fluorescence. Fluorescence spectra are characteristic of the fluorescing molecule.
The instrument used for measuring a fluorescence spectrum must, of course, be more complicated than that for measuring concentration. The fluorescent light is collected and resolved by a monochromator. The intensity of this light at various wavelengths is measured photoelectrically.

Flame photometry

The atoms or ions of metals characteristically absorb and emit light with sharp-line spectra. The flame photometer is used for determining the concentrations of such metals in solutions. The solution to be analyzed is sprayed into a gas flame where the metal atoms are heated until they glow in their characteristic color. Sodium ions, for example, change the colorless gas flame to a brilliant yellow-orange by emitting a strong double line at about 589 mμ. The light from the flame is collected and passed through a filter system or monochromator, which passes a band of wavelengths including the bright emission lines of the metal being analyzed. Other wavelengths are removed by the monochromator or filter so that only light emitted by this particular metallic atom is allowed to fall on the light-detecting device. Under otherwise identical conditions, a more concentrated solution of the metallic ions produces a brighter light, so that measurement of the light passing through the monochromator indicates the concentration of the solution. Instruments are available which are used exclusively as flame photometers, or a gas-burner attachment is used as a substitute for the usual energy source in one of the regular spectrophotometers. Flame photometry is specific; that is, it analyzes one metal even in the presence of others. It is also able to detect very small quantities of a metallic ion (quantities in the general range of milligrams per liter).

SELECTED REFERENCES

Clark, George L., ed., Encyclopedia of Spectroscopy. New York: Reinhold Publishing Corporation, 1961.
A complete reference work with articles written by competent authorities.

From the bibliography:

Strobel, Howard A., Chemical Instrumentation.

Willard, H. H., L. L. Merritt, Jr., and John A. Dean, Instrumental Methods of Analysis. Almost one-third of this book is devoted to various optical and photometric methods of analysis and the instruments in common use.

CHAPTER 10

Measurements of Gas Exchange

Frequently it is desirable to follow the progress of some biological reaction in which gases are produced or used. The best-known of these reactions, of course, are respiration and photosynthesis. Carbon dioxide and oxygen are exchanged in both of these processes.

Respiration occurs in all living cells and converts chemical energy to a form usable by the cell. Some form of food, commonly the carbohydrate glucose, is degraded to carbon dioxide which contains less potential energy than the carbohydrate. In a sense, the production of carbon dioxide is incidental to the more important energy release. When oxygen is available, most cells use oxygen and oxidize the sugar completely to carbon dioxide and water. We might use the imaginary but useful expression {CH2O} to represent one carbon-atom's-worth of carbohydrate. Then a summary equation for respiration is as follows:

{CH2O} + O2 → CO2 + H2O

If we wish to follow the progress of this reaction we could measure any of the materials in the equation, at least in theory. However, measuring the disappearance of carbohydrate is difficult, largely because the common techniques interfere with the respiratory reaction itself. Measuring the production of water is also unsatisfactory because the cell contains a very large amount of water. The water produced in respiration represents a small change in a large amount of water already present. Generally, the most satisfactory method is to measure the carbon dioxide or oxygen or both of these gases. The same general principles apply to photosynthesis.
Green cells, when exposed to light, perform this reaction which is the reverse of respiration as far as net results are concerned:

CO2 + H2O → {CH2O} + O2

In photosynthesis the very same gases are exchanged and either of these, or both, can be measured to trace the progress of the reaction.

Respiration and photosynthesis as described here represent only summaries of complicated sequences of single reactions. Most of the individual reactions are enzyme-controlled. Many of the enzymes can be isolated from the cell and can catalyze the same individual reaction under artificial conditions. If one of these separate reactions involved the production or use of carbon dioxide or oxygen, there is no reason that we could not use the same method of measurement used for the whole process.

Gases like carbon dioxide and oxygen can be measured in a variety of ways. Some methods may depend entirely upon chemical principles, others upon physical principles. As an example of a chemical procedure, the carbon dioxide produced by cells may be analyzed by absorbing it in a solution of alkali to produce a carbonate:

CO2 + 2 KOH → K2CO3 + H2O

By titrating with a standard acid we can determine the amount of alkali neutralized, and from this amount we can calculate the amount of carbon dioxide. A slight modification of this method depends on the fact that divalent bases like calcium and barium form insoluble carbonate precipitates. From the weight of the precipitate, the amount of carbon dioxide can be calculated. However, these methods frequently are cumbersome and may not be easily adaptable to continuous measurements of gas exchange. Sometimes the chemical techniques destroy the gas being measured, and this destruction might be undesirable in certain types of experiments. There also are several chemical methods for analyzing oxygen, but these are even less convenient than the carbon dioxide analyses.
The manometric technique

The manometric method of measuring rates of metabolic gas exchange is used in almost every cell physiology laboratory in the world. Instruments similar in principle were developed by Barcroft and Haldane about 1902, but Otto Warburg, the noted German biochemist, demonstrated the general applicability of these principles to respiration and photosynthesis. He was largely responsible for the wide adoption of the manometric technique, and the method now bears his name. One should not be surprised to hear, "How did you measure oxygen?" "Warburg." Although we might expect more detailed communication between scientists, this brief answer has conveyed sufficient information.

The principle of manometry is relatively simple. We merely follow the increase in pressure in a closed container as a gas is produced. The behavior of the gas obeys the physicist's gas law, which is conventionally expressed by the following equation:

PV = nRT    (10-1)

Here P is the pressure, V is the volume, n is the number of moles of gas, and T is the absolute temperature (°K). R is the "gas constant," which specifies the relationships of the other items. If the amount of gas (n) does not change, the equation becomes equivalent to Charles' or Gay-Lussac's law,

PV/T = nR = constant    (10-2)

It tells us that if the amount of a gas and the temperature remain constant, then an increase in pressure must be accompanied by a decrease in volume. Or if the temperature and volume remain constant, a decrease in the amount of gas will be accompanied by a decrease in pressure. This latter case is used in the manometric method. Temperature and volume are held constant, and the pressure is allowed to vary as the gas is produced or consumed. In practice, we place the living cells or other experimental material in a small glass container or vessel that can be coupled to a glass U-tube, the manometer (see Fig. 10-1).
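The bookkeeping behind equation 10-1 can be put into a few lines of Python. In this sketch the vessel volume and the two pressures are hypothetical; at constant V and T, the drop in the amount of gas n shows up directly as a drop in the pressure P.

```python
R = 0.08206  # the gas constant, liter atm / (mole °K)

def moles_of_gas(P_atm, V_liters, T_kelvin):
    """n = PV / RT, rearranged from equation 10-1."""
    return P_atm * V_liters / (R * T_kelvin)

# Hypothetical closed vessel: a 15-ml gas space at 25° C (298° K)
V, T = 0.015, 298.0
n_start = moles_of_gas(1.000, V, T)
n_later = moles_of_gas(0.990, V, T)   # the pressure has fallen
consumed = n_start - n_later          # moles of gas used by the cells
```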
A change in pressure will be indicated by a difference in the height of fluid in the two arms of the U-tube. The glass tubing is graduated in millimeters so that we can read the height of the fluid. A glass stopcock in the manometer allows us to leave the vessel open to the atmosphere until we are ready to start the measurements, which means that the pressures inside and outside the container will be equal. When we close the stopcock the living cells will be in a closed space on one side of the manometer. The other side of the manometer is open to the air. Thus we compare the pressure inside the vessel with that outside. As the experiment progresses, the pressure inside the vessel becomes greater than, or less than, that outside. A reservoir of manometer fluid in a short piece of rubber tubing at the bottom of the U-tube permits the adjustment of the fluid level and allows us to bring the internal volume back to its original level. If we allow the volume to increase, or decrease, it becomes very difficult to calculate just how much gas has been produced or used. Imagine the complexity of calculation in the equation above if P, V, and n all change rapidly. At each reading, the fluid level is returned to the starting point on the closed arm, and the pressure difference is read from the open arm of the U-tube.

Fig. 10-1. Manometer and vessel. A flexible reservoir of fluid is attached at the bottom of the U-tube.

The manometer and vessel are conveniently mounted on a supporting rack. The temperature in the vessel is maintained at a constant value by immersing the vessel in a thermostatically controlled water bath. The water bath is usually the most expensive part of the entire apparatus. A sensitive mercury "thermoregulator" operates through relays to turn on heaters if the temperature falls. Often a refrigeration system is included. The cooling system of course is essential for experiments conducted below room temperature.
The thermoregulator balances the heaters against the cooling system to keep the temperature constant. A stirring device agitates the water vigorously so that the temperature is uniform throughout the bath. Water baths differ in detail (see Fig. 10-2), but all of them maintain the temperature within a very narrow range, perhaps at ±0.05° C. The water bath system also includes a means of holding the manometer racks and a means of shaking the vessels while they are submerged. The shaking mixes the biological material thoroughly and assures the even distribution of gases within the vessel.

About the time we start to make measurements, a complication becomes obvious. The calculations above assume ideal conditions. Remember that one arm of the U-tube is open to the atmosphere so that we can compare the pressure inside the vessel with the pressure at the beginning of the experiment, that is, the atmospheric pressure in the room. What happens if the pressure in the room changes? Must we abandon our experiment if a thunderstorm approaches and the barometer falls? No, we simply use an extra manometer containing no living material. This manometer can be used to correct for any changes in atmospheric pressure or any slight variations in temperature. This thermobarometer, as it is called, is placed on the water bath along with the experimental manometers. If some variation in the conditions causes a change in the thermobarometer, we simply assume that the other manometers would be affected in the same way. For example, if in one ten-minute interval the thermobarometer rises 2 mm, then each of the other manometers would rise 2 mm in the same time, even without living material.
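The thermobarometer correction is simple bookkeeping, as the following Python sketch shows. The readings are invented: if the thermobarometer rose 2 mm over the interval, that same 2 mm is assumed to have appeared in every experimental manometer and is therefore subtracted from each.

```python
def corrected_changes(experimental_mm, thermobarometer_mm):
    """Remove the thermobarometer's drift from each experimental
    manometer's pressure change over the same interval.
    All readings are in mm of manometer fluid."""
    return [h - thermobarometer_mm for h in experimental_mm]

# Hypothetical ten-minute readings from three experimental manometers,
# while the thermobarometer (no living material) rose 2 mm
observed = [10.0, -6.0, 3.5]
corrected = corrected_changes(observed, 2.0)   # [8.0, -8.0, 1.5]
```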
Since the thermobarometer is measuring a change in pressure, it is legitimate to correct the pressure changes of the experimental manometers by adding or subtracting the number of millimeters observed on the thermobarometer. Thinking about it for a few seconds tells us in which direction the correction must be applied.

In an actual series of measurements by the manometric method, the results we obtain will be in millimeters of manometer fluid. The measurements would be more useful if they could be expressed as a real amount of a particular gas. "Microliters of oxygen" and "moles of carbon dioxide" have more meaning than "millimeters of manometer fluid." Fortunately, because there is a direct relationship between the pressure change and the change in the amount of gas, we can make the calculations easily. The manometric method is used for small quantities of biological material and measures small quantities of gas. Therefore, the most convenient unit for the gas is the microliter (μl). If we let h represent the number of millimeters of manometer fluid (corrected by the thermobarometer reading) and X the actual amount of gas in microliters, then

X = kh    (10-3)

or X is directly proportional to h. It can be seen from the gas equation (10-1) that the relationship will take this form if V and T are constant. If we know the value of the constant k, we can easily find the amount of gas corresponding to any change on the manometer. The value of k will depend upon the conditions under which the experiment is performed and upon the characteristics of the manometer-vessel combination. If the vessel is small, a small change in the amount of gas will make a large change in the pressure and in the reading of the manometer.

Fig. 10-2. Two types of constant temperature water baths. Top, courtesy American Instrument Co., Inc. Bottom, courtesy Bronwill Scientific Division, Will Corporation.
In contrast, if the vessel is larger, the same amount of gas will make a smaller change in the pressure. In order to compare the results of different experiments, we express the amount of gas used or produced with reference to a set of standard conditions, 0° C (273° K) and atmospheric pressure (760 mm Hg). Chemists very often express amounts of gas in these terms because they know that one mole of any gas, under these conditions, will occupy 22.4 liters, or any given volume of gas under these conditions will contain a specific number of moles. Although it may seem difficult to transform manometer readings to volume of gas under "standard conditions," we can incorporate these corrections during the computation of the constant k. An equation has been developed such that the constant k transforms h millimeters of manometer fluid directly into microliters (at 0° C and 760 mm Hg) of the gas being measured. The complete equation follows:

k = (Vg (273°/T) + Vf α) / P0    (10-4)

Vg = volume (in μl) of the gas space in the particular vessel-manometer combination.
T = absolute temperature, or Celsius temperature + 273°.
Vf = volume (in μl) of liquid in which the living cells are suspended.
α = solubility of the gas in this liquid at this temperature.
P0 = the number of mm of this manometer fluid that would exert the same pressure as 760 mm of Hg. In other words, this is the "standard pressure" in terms of millimeters of manometer fluid.

For any separate vessel-manometer combination we must measure the internal volume. The easiest accurate means of finding this volume is to find how much liquid the vessel will hold. Usually we fill the gas space with mercury and then weigh the mercury. The density of mercury is known quite precisely. Since it is great, a small difference in volume brings about a large easily-measured change in weight.
The solubility factor, α, may be found in physical tables, or more easily, in the Umbreit, Burris, and Stauffer book, Manometric Techniques. It must be included in the equation because a certain amount of the gas we wish to measure will remain dissolved in the liquid.

In biological experiments, Po is usually 10,000 mm of manometer fluid. This round number is no coincidence, because the manometer fluid is a specially prepared solution whose density will be 760/10,000 of the density of mercury. The fluid most commonly used is Brodie's solution, 23 g of sodium chloride in 500 ml of water. A small amount of detergent is added to prevent the liquid's sticking to the glass tube. The solution is colored with a dye for ease in reading.

Suppose that we wish to measure oxygen exchange at 25° C in one of our manometers. We have found that the volume of the internal gas space is 18.36 ml (18,360 μl). For convenience we prefer to use 3 ml (3000 μl) of fluid. Thus Vf = 3000 μl, and Vg is the total volume minus Vf, or 18,360 − 3000 = 15,360 μl. For oxygen in water at 25° C, α is found to be 0.028. If we use Brodie's solution, Po is 10,000 mm of manometer fluid. The arithmetic is not difficult. We find that the constant for oxygen in this vessel at this temperature is 1.42 μl of O2/mm. This means that each millimeter on the manometer corresponds to 1.42 microliters of oxygen. If we keep our slide rule handy, it is an easy matter to find the actual amount of oxygen, even while the experiment is progressing. If we wished to measure CO2, we could find another constant by substituting the proper solubility figure in the equation. The example given here is a reasonably sized constant. Most of the vessels in common use today are of such a size that the constants for oxygen are between 1.00 and 2.00 μl O2/mm.

The actual measurement is accomplished by placing the necessary materials in the vessels and placing the vessels on the manometers.
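The worked example above, together with the conversion of equation (10-3), can be checked with a short script. The equations are the book's; the function names, and the 20 mm reading at the end, are ours and purely illustrative:

```python
def vessel_constant(v_gas_ul, temp_celsius, v_fluid_ul, alpha, p0_mm=10000.0):
    """Equation (10-4): k = (Vg * 273/T + Vf * alpha) / Po, giving
    microliters of gas (at 0 C and 760 mm Hg) per mm of manometer fluid."""
    t_absolute = temp_celsius + 273.0
    return (v_gas_ul * 273.0 / t_absolute + v_fluid_ul * alpha) / p0_mm

def gas_microliters(h_mm, k):
    """Equation (10-3): X = k * h, where h is the manometer change in mm,
    already corrected by the thermobarometer reading."""
    return k * h_mm

# The example from the text: oxygen at 25 C, Vg = 15,360 ul, Vf = 3000 ul,
# alpha = 0.028, and Brodie's solution, so Po = 10,000 mm.
k = vessel_constant(15360.0, 25.0, 3000.0, 0.028)
print(round(k, 2))                         # 1.42 ul of O2 per mm

# A corrected reading of 20 mm (an invented figure) would then mean:
print(round(gas_microliters(20.0, k), 1))  # 28.3 ul of O2
```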
The manometers are placed on the water bath with the vessels hanging into the water. With the manometer stopcocks open, a period of 15 to 20 minutes is allowed for temperature equilibration. Then, as rapidly as possible, the manometers are closed, and the initial pressure and starting time are recorded. The number of millimeters of pressure change is recorded at five- or ten-minute intervals. The final treatment of data from manometric measurements is used as an example in Chapter 14.

In principle, the production or use of any gas could be measured by this technique. Actually carbon dioxide and oxygen are the gases most frequently involved in biological reactions. Unfortunately, in respiration and in photosynthesis one of these gases is utilized while the other is produced. There is no net change, and the manometer will show no pressure change. Obviously a solution for this complication must be found. Some quite simple remedies are in common use, as well as some that are fairly complex.

Perhaps the most obvious answer to the problem is to prevent the change in one of the two gases. If we wish to know the rate of the whole respiratory reaction, we may be satisfied to measure only oxygen. We must assume that respiration is the only cell process that uses or produces any gas, but often this assumption is safe enough. We can place the living material in the main part of the vessel and then, in a separate compartment, place a small amount of alkali solution. The carbon dioxide produced in respiration diffuses through the gas space to the alkali where it is absorbed. Thus the only gas exchange that will affect the pressure is the change in oxygen. In many cases the amount of carbon dioxide produced is about the same as the amount of oxygen used. In photosynthesis another method is necessary because carbon dioxide is needed in the reaction.
Ordinarily this measurement is performed by suspending the green cells in a bicarbonate solution. As the cells use carbon dioxide, more of this gas is provided by the bicarbonate. Since the carbon dioxide concentration in the gas phase does not change, we measure only the production of oxygen. This technique is used quite commonly even though the high pH of the bicarbonate solution may influence the activity of the cells. It might seem convenient to absorb the oxygen, just as the carbon dioxide was absorbed in respiration. However, oxygen absorbers are expensive, dangerous or difficult to handle, or react quite slowly. If we wish to avoid the alkaline conditions of the bicarbonate solution, we must use one of the more complicated tricks of the manometric technique.

Anyone who wants to use the manometric technique extensively should consult the standard reference book by Umbreit, Burris, and Stauffer, Manometric Techniques. It contains complete discussions of many applications of manometry and a number of variations that are possible. The 1957 edition also incorporates information on several other useful laboratory methods.

This description has been restricted to carbon dioxide and oxygen. However, exchange of other gases may result from some of the metabolic reactions of cells. Some algae, for example, can perform a reaction in which molecular hydrogen is involved. Ammonia, nitrogen, methane, hydrogen sulfide, and perhaps other gases might appear or disappear in certain specialized biological processes. One of the great advantages of the manometric method is that it could be used for any of these.

Fig. 10-3. Some examples of special manometer vessels. The rectangular vessel (a) is often used in photosynthesis studies because a greater surface can be exposed to light from below. The sidearms, as in b, permit the addition of materials while the measurement is in progress.
Alkali for trapping CO2 might be placed in the center well, as in b or d, in a concentric trough, as in c, or added after the experiment is in progress (e). Number f has a reduced volume to permit greater sensitivity of measurement.

The techniques of manometry are not especially difficult to learn. The glassware is fragile but easy to handle, and a variety of kinds of vessels have been developed, including some specialized for certain unusual measurements. Figure 10-3 shows some sample types. Only a little practice is required to read the manometer scales, even while the manometers are moving. In a short time it is possible to learn to recognize the behavior of the manometers when something is wrong. Occasionally, one of the ground glass joints will leak, causing pressure changes that appear too low. Sometimes the temperature of the bath will drift slightly, leading to large thermobarometer corrections. Rarely, the temperature in the bath is not uniform from place to place. In this case, the thermobarometer correction may not truly represent the change that would have occurred in the experimental manometers if they had had no living material.

These difficulties are less bothersome than the annoying problems that are associated with many other laboratory instruments. The manometric technique has proved its advantage over many years. The literature contains only a few papers where manometry has been pushed beyond its limitations, a fact which also attests to the value of the method. Several new physical methods of measuring various gases have been developed recently. Some of these are much more convenient for certain measurements, and some provide a specific analysis for a particular gas. Although they will be used to advantage, it is doubtful that they will ever completely replace the old reliable manometers.
The infrared gas analyzer

One of the more popular of the recently developed instruments for gas analysis is the infrared analyzer. Most organic compounds have infrared absorption bands, arising from the vibrations of the C—O, C—H, O—H, and other bonds. Any individual molecule has a definite collection of such heteroatomic bonds and, consequently, a defined spectrum of absorption in the infrared region. This principle is the basis of the infrared spectrophotometry mentioned in Chapter 9.

The Beckman InfraRed Gas Analyzer is essentially a colorimeter designed to operate in the infrared region. It can be used for any of several gases, including carbon dioxide, water, and methane. The instrument responds to the amount of a particular gas in a stream of air, providing a continuous record of its concentration.

The essential features and the operating principle can be seen by referring to Fig. 10-4, a simplified diagram of the analysis unit. The air or gas to be analyzed is drawn continuously through the sample tube. Infrared from the two sources (S) passes through the sample tube and the reference tube to the detector chambers. The windows (W) transmit infrared but prohibit the free movement of gas. The two detector chambers are filled with the gas to be analyzed, for example, CO2. As the CO2 absorbs infrared energy it tends to expand, exerting pressure on the diaphragm (D). With no CO2 in the sample tube, the instrument is adjusted to indicate equal pressures on the two sides of the diaphragm. Thereafter, any CO2 in the sample tube will reduce the amount of energy reaching the lower detector chamber, and the resulting pressure differential moves the diaphragm. The diaphragm and the stationary electrode (E) form a condenser whose capacitance changes as the distance between the plates changes.
A radio frequency voltage is applied across this capacitor, and a very small movement of the diaphragm produces a usable signal which is amplified and employed to drive a recorder.

Fig. 10-4. Simplified diagram of the detection system in the Beckman InfraRed Gas Analyzer. (Courtesy Beckman Instruments, Inc.)

The fact that the gas to be analyzed is used in the detector makes the instrument absolutely specific for this gas. Of course, as the reduction of energy in the sample tube depends upon the number of molecules of gas in the light path, the instrument is sensitive to pressure and temperature changes. Temperature control is provided in the instrument, and it is not too difficult to prevent pressure surges in the external gas stream. This gas analyzer responds extremely rapidly to very small variations in CO2. The instrument in our laboratories has been used to monitor the air in a room by pumping room air through the sample tube. The analyzer will detect one more person walking into the room in less than a minute. Analyzer tubes of several different lengths are available to permit the use of the instrument for several different ranges of gas concentration. If the detector chambers are charged with water vapor, we have a useful indicator of humidity. Various industries have similarly used the infrared analyzer to detect methane, ethane, and other gases.

Magnetic oxygen analyzer

Another instrument offering a specific test for a certain gas is the magnetic oxygen analyzer. Oxygen is about the only gas which is to any extent paramagnetic; that is, it is attracted into a magnetic field. Carbon dioxide, water vapor, and most other gases likely to be encountered in biological experiments do not respond appreciably to a magnetic field. Several models of oxygen analyzers are available. The instruments consist of a chamber surrounded by a magnetic field.
A stream of the air to be analyzed flows past this chamber, but only oxygen is drawn in. The pressure in the chamber is related to the amount of oxygen in the gas stream. In the Beckman instrument, changes in amounts of oxygen cause a rotation of a small dumbbell suspended on a fine wire. The rotation of the dumbbell is measured directly in some models or indirectly, through a null-point optical and electrical system, in other models. Another instrument, made by Siemens in Germany, determines oxygen concentration by measuring heat conductivity in the chamber. Changes in the amount of oxygen affect the conduction of heat away from a wire, which in turn influences the electrical resistance of the wire. In our experience, the Beckman Magnetic Oxygen Analyzer has proved to be slightly slow to respond because the gas must diffuse into the magnetic chamber. Otherwise, the instrument is very reliable, yielding reproducible measurements of oxygen concentration very conveniently.

SELECTED REFERENCES

Dixon, Malcolm, Manometric Methods as Applied to the Measurement of Cell Respiration and Other Processes, 3rd ed. New York: Cambridge University Press, 1951. An especially valuable discussion of the theory of manometry.

Umbreit, W. W., R. H. Burris, and J. F. Stauffer, Manometric Techniques, 3rd ed. Minneapolis: Burgess Publishing Company, 1957. The standard laboratory handbook on the subject.

CHAPTER 11

Chromatography

Chromatography is a laboratory technique for separating mixtures of similar materials from each other. A solution of the mixture is allowed to flow with its solvent over the surface of a finely-divided or porous solid material. The different components in the mixture flow at different rates, eventually becoming separated from each other. Chromatography has become popular in biological work because extracts from living cells contain different materials of similar chemical nature.
Often the only way to analyze those mixtures is to separate the components from each other. For example, we might break down a protein into the amino acids of which it was constructed. Since all the various amino acids are similar in chemical and physical properties, it may be difficult to study them by ordinary chemical techniques. Chromatography is the most convenient way of separating these amino acids from each other. As a second example, some of the work on the metabolism of cells involves mixtures containing simple sugars and sugar derivatives. Such mixtures also are separated by chromatography. A third example is a separation of the chloroplast pigments of plants. This mixture has been chosen for detailed description because these materials are highly colored, and it is easy to follow the progress of the separation. Several techniques for separating these pigments are described in detail. In a later section, other applications of chromatography are described more generally.

Chromatography means literally "writing with color." The name was chosen because the method was first developed to separate mixtures of colored compounds. Several different physical principles are involved, but a discussion of these principles is better delayed until after a description of the method. The technique of chromatographic separation seems to be an art, and a certain amount of "feel" is developed after some practice. This feature is to be kept in mind in going through the following discussions.

Separation of plant pigments

Chloroplasts of higher plants contain two chlorophylls, a and b, as well as two types of yellow pigments, the carotenes and the carotenols (also called xanthophylls). The chemical properties of all these pigments are similar, and they have similar solubilities in organic solvents such as acetone, petroleum ether, and ethyl ether.
Classical chemical methods for separating these pigments are difficult, but chromatography is relatively easy.

Preparatory Steps: A sample of leaves (2 to 3 g) of a higher plant such as spinach is ground up with a mortar and pestle with a small quantity (a pinch) of CaCO3 (to neutralize the acids of the leaf cells) and about 40 to 50 ml of acetone. The dark green acetone solution is filtered through coarse filter paper or cloth to remove cellular debris. More acetone may be added to the filter to extract more of the pigments.

A separatory funnel is set up and about 10 to 25 ml of petroleum ether (a mixture of hydrocarbons, mostly pentanes and hexanes) is placed in the bottom. About 10 to 15 ml of the acetone solution is added and mixed by gentle swirling. If the mixture separates into an upper green layer and a lower, faintly yellow layer, the lower layer is drawn off and discarded. If there is no such separation, water is added a few drops at a time with gentle swirling after each addition. When enough water has been added, the separation into a petroleum ether layer and an acetone-water layer will occur. (Petroleum ether and acetone are soluble in each other; acetone and water are soluble in each other; petroleum ether and water are almost completely insoluble in each other.) After the lower yellowish layer, containing chiefly carotenols, is drawn off, more green acetone solution can be added, and the same steps can be repeated.

After four or five additions of acetone solution, it is well to add water to the petroleum ether solution (by now dark green) to remove excess acetone. Care must be taken at this point not to agitate the water-petroleum ether mixture, or an undesirable emulsion may form. Addition of the green acetone extract and washing with water may be repeated until the pigments from the entire original acetone solution have been transferred to petroleum ether.
After the last addition of acetone, the petroleum ether solution is washed several times with water to remove the last traces of acetone. The almost black solution can then be placed in a flask with a few crystals of anhydrous sodium sulfate to remove any remaining water. This solution is now ready for the chromatographic separation.

The transfer of the pigments requires a certain amount of judgment during the operation. The behavior of one preparation may be quite different from the behavior of another. The water content of different lots of leaves may vary considerably, and this variation leads to difficulty in specifying amounts of various solvents to add. Occasionally an emulsion will form in the separatory funnel, and it is sometimes more economical to start over than to try to break it up.

Column Chromatography: Chromatographic separations can be performed with a finely powdered adsorbing material in glass tubes of various sizes and types. A very convenient glass tube is fitted at one end with a ground glass joint and a fritted glass disk. The ground joint permits the easy removal of the solid material when the separation is complete. The fritted disk holds the adsorbing material in the tube while it allows solvents to pass through and drip out at the bottom of the tube. The following description will be easier to follow by referring to Fig. 11-1. This set of glass tubes represents a series of stages in a chromatographic separation.

Preparing a column requires a certain knack, and the beginner should not be discouraged if the first few columns do not work well. Several different methods of packing columns have been described, but the following has produced good results in our laboratories. The bottom of the tube is closed off so that liquid will not leak out. Petroleum ether is poured in until the tube is approximately half full. Grocery store powdered sugar, dried in an oven if the weather is damp, is sifted into the liquid and stirred with a glass rod to make a very thin paste. Alternate additions of powdered sugar and petroleum ether can be continued until the tube is almost full of the thin paste. When the stopper is removed from the bottom of the glass tube, petroleum ether will drip out and the sugar will settle into a uniform column. Usually the sugar settles faster than the solvent escapes, so a layer of clear petroleum ether appears at the top of the column. Occasionally the stirring rod is used to redistribute any uneven settling. After a short time the sugar will have settled as much as it will, and the remaining petroleum ether will percolate downward through the sugar and out the bottom of the tube.

Fig. 11-1. A series of chromatograms at different stages of development. In a, the adsorbent (Ad) has settled, leaving a layer of solvent (S) on top. A little later (tube b) this layer of solvent has percolated through the adsorbing material. In tube c, a mixture of four colored materials has been added, and has begun to move into the column. In a few minutes (d), developing solvent has been added, and solute 1 is moving away from the other materials. The finished chromatogram (e) shows four separate bands which may be recovered by slicing the column along the dashed lines. Tube f shows one possible result of an unevenly packed column.

When almost all of the petroleum ether has disappeared from the top of the sugar, the pigment mixture prepared earlier is added by allowing it to flow slowly down the side of the tube from a pipette. It is unwise to add too much of the mixture, and usually a layer about 1 cm thick is about right. The mixed pigments and solvent will percolate into the upper layers of the sugar column.
When most of the pigment mixture has moved into the top of the sugar column, the column can be "developed" as follows. More petroleum ether is added to the tube, a process also accomplished by allowing the liquid to flow slowly down the side of the tube from a pipette. The upper layer of the sugar has been stained green by the mixture of pigments, and the addition of petroleum ether causes a yellow band to move downward through the sugar. The green band is left behind at the top of the column. More petroleum ether can be added to push the yellow band farther downward. When the yellow band of carotenes is fairly well separated from the green band, we add more petroleum ether containing about 1 per cent acetone or ethyl ether. Another colored band will move out of the upper colored layer. As we gradually increase the amount of acetone or ethyl ether in each addition of the solvent mixture, various colored bands will move downward through the sugar one after another. One of these bands will be the bluish-green chlorophyll a, followed by the green (or slightly yellowish-green) chlorophyll b. Yellow bands of carotenols (xanthophylls) will appear in various orders, depending on the particular forms present in the mixture and on the exact composition of the solvent mixture used for developing the column.

The different pigments may be recovered in either of two ways. They may be allowed to move downward until they drip out the bottom of the tube. Each pigment can be caught in a separate container. In this case we are "washing" the pigments off the sugar by the addition of solvent, a process commonly known as elution. It is usually faster, however, to allow the pigments to become well separated from each other on the sugar, and then push the plug of sugar from the tube. This plug of sugar can be sliced into the different-colored bands.
Each slice is placed in a separate container, and acetone or ethyl ether is added to liberate the pigment from the sugar. We now have several different solutions, each containing a different pigment.

Pretty chromatograms are possible only on evenly packed columns. The technique of column packing apparently must be learned through several trials. We have found, however, that columns prepared as described percolate more rapidly and evenly than columns in which sugar is first packed in the dry state and then saturated with petroleum ether. When petroleum ether is used as the developing solvent, it is advisable to work in the cold because at room temperature bubbles of solvent vapor often disrupt the column. The gradual increase in the amount of acetone or ethyl ether added to the developing solvent requires some judgment. If it is increased too rapidly, several pigments may be pushed downward together. If it is increased too slowly, a chromatogram may take all day to develop. It is usually wise to start slowly and then increase the amount of acetone or ethyl ether in larger jumps if it seems warranted. It is extremely difficult to predict the behavior of the development, apparently because of variations in the amount of water dissolved in the solvents or in the amount of moisture bound to the sugar particles.

Paper Chromatography: In paper chromatography, the solid material is ordinary filter paper. Various combinations of solvents can be used to separate different kinds of mixtures. If we should wish to obtain a quick estimate of just which pigments were contained in a mixed solution from leaves, a paper chromatogram would allow a more rapid determination than would the preparation of a column. The chief disadvantage of the paper technique is that only small amounts of materials can be separated. The following description includes several variations of the use of filter paper for chromatography of plant pigments.
(1) A strip of filter paper about 2 cm wide is cut to fit into a large test tube. A hook in a cork stopper suspends the paper in the tube so that the bottom of the paper strip almost touches the bottom of the tube (see Fig. 11-2). A small amount of the solution of mixed pigments is placed along a pencil line near the bottom of the paper and then dried. A mixture of about 5 per cent acetone and 95 per cent petroleum ether is placed in the bottom of the test tube. The paper and cork stopper are fitted into the tube with the bottom of the paper dipping into the liquid and the spot of pigment mixture about a centimeter above the surface of the liquid. As the liquid rises in the filter paper by capillary action, various pigments are carried along the paper strip. Some pigments move more rapidly than others, and after a few minutes several distinct bands of color become visible.

(2) Larger amounts of the mixture can be separated, or several different mixtures can be tested simultaneously, by using a large beaker or battery jar instead of a test tube. A square of filter paper is prepared by placing a line of the mixture near one edge or by placing spots of several mixtures near the edge. The paper is then rolled and fastened into a cylinder with the spots of mixtures at one end. This cylinder will stand upright in the beaker or battery jar (Fig. 11-3). If solvent is placed in the bottom of the glass container, it will rise in the paper just as in the previous example. The same kind of separation of pigments occurs.

(3) Sometimes the paper chromatogram will develop better if the liquid is allowed to descend over the paper instead of rising by capillarity. A spot of the mixture is placed near the top of a strip of paper. This end is placed in a container of the solvent, with the rest of the paper hanging down over the side. As the solvent moves downward on the suspended paper strip, the various pigments are carried along at different rates.
This operation is conducted inside a closed container, where the atmosphere is saturated with the solvent vapor.

Fig. 11-2. Ascending paper chromatography. The mixture of materials was placed along the line (M) and the paper was dried. It was then placed in the test tube, and the solvent ascended by capillary action, having reached point S at the present time. Components 1, 2, 3, and 4 have been carried upward at different rates.

(4) Solvent is placed in the bottom of a round glass dish. A cone of filter paper is prepared and placed point-up in the center of the dish so that the point of the cone stands just higher than the edge of the dish. Now a disk of filter paper slightly larger than the diameter of the dish is prepared by placing a ring of the mixture to be separated around the center of the paper. After drying, the paper is placed over the dish, with the point of the cone piercing the center of the paper disk. An identical glass dish is placed over the paper to serve as a cover. The paper cone acts as a wick to deliver solvent to the center of the paper disk. From this point the solvent moves outward into the paper in all directions. The finished chromatogram will be a circular disk of paper with concentric rings of the various separated pigments.

(5) Paper chromatographic separations can even be performed in two dimensions. We place a spot of the mixture of materials at one corner of a square of paper. Then, using one solvent mixture, we allow the various materials to move upward along one edge of the paper. Now we rotate the paper so that the separated spots are at the bottom and run the separation again, this time using a different solvent mixture (see Fig. 11-4). The finished two-dimensional chromatogram will have spots distributed in various locations all over the paper because of the different rates with which the individual pigments move in different solvent combinations.

Fig. 11-3. A square of filter paper was rolled into a cylinder for this chromatogram. The edge of the paper could be fastened together with staples, adhesive strips, or with thread. Spots of a mixture were placed at the circles marked M, and the solvent has risen to point S.

Physical principles involved in chromatography

There is no general agreement on the physical principles which cause the separation of materials. The chances are reasonably good that the disagreement results from the fact that different principles are involved in the different kinds of chromatography. The following physical principles certainly are involved at one time or another, although it may not be easy to tell which is operating in any particular case.

Differential Adsorption: Adsorption is a phenomenon which occurs at the surfaces or interfaces between two different kinds of material, as between a liquid and a solid or between a gas and a liquid. A certain amount of energy will be associated with this interface, as can be seen from the unequal energy distribution which causes the surface tension of water in contact with air. Some dissolved materials tend to reduce the amount of interfacial energy, in which case, they collect at the interface. Charcoal is useful for decolorizing solutions, because the colored materials are adsorbed on the charcoal, or for purifying air, because certain gases are adsorbed. Adsorption certainly is involved in at least some forms of chromatography.

Fig. 11-4. Two-dimensional paper chromatography. A spot of the mixture was placed at M, and solvent was allowed to ascend to point S, separating the components of the mixture (a). In step b, the paper was rotated and another solvent was allowed to ascend to level S', bringing about a further separation of the components.
In the separation of chloroplast pigments on the column, the various pigments are bound (adsorbed) on the surfaces of the sugar particles, but different pigments are held with different degrees of tenacity. Each of the pigments has its own characteristic solubility in the solvent also. When the mixture of pigments is placed on the top of the column, the molecules are bound by the adsorbing material (sugar). A fraction of the molecules escape from this binding, however, and are carried downward with the solvent, only to be bound again in another location. The binding strength and the solubility in the solvent determine how far the average molecule travels before being bound again. This combination of properties would be different for each component of the mixture. When the solvent is modified by adding acetone, these relationships are changed slightly by providing a solvent mixture in which certain pigments are more soluble.

Liquid-liquid Partition: Whenever a material is placed in a mixture of two solvents which are insoluble in each other, the dissolved material distributes itself in the two solvents according to a definite relationship. For example, consider a solute X which will dissolve in solvent A or in solvent B. If solvents A and B, immiscible in each other, are placed in a separatory funnel and subsequently a small amount of X is added, part of the X will go into each solvent. An equilibrium will be reached when a certain constant fraction is contained in each solvent. The ratio of the concentrations in the two solvents is called the partition coefficient:

K = (conc. in solvent A) / (conc. in solvent B)

Suppose that this ratio is large, meaning that most of the solute goes into solvent A. Given a solution of X in solvent B, the material X could be transferred to solvent A by successive additions of fresh solvent A.
Examples of this behavior include the transfer of pigments from acetone to petroleum ether, after which the acetone is washed out with water. In another instance, imagine that X distributes itself 5 per cent in water and 95 per cent in petroleum ether. Another material, Y, reaches its equilibrium when 90 per cent is in water and 10 per cent in petroleum ether. Given an aqueous solution containing both X and Y, the two solutes could be separated from each other by mixing the solution with petroleum ether. At equilibrium, the water would contain only 5 per cent of the X but 90 per cent of the Y. If this water layer is drawn off and mixed with fresh petroleum ether, a new equilibrium will be established. The water then will contain only 0.05 × 0.05 = 0.0025 of the X and 0.90 × 0.90 = 0.81 of the Y. After another step or two, X virtually disappears from the water, but only a little of the Y is lost. If the components in a mixture differ greatly in their partition in the two solvents, this is a practical method of separation.

If the components of a mixture differ only slightly in their concentrations in two immiscible solvents, or if there are several different solutes in the mixture, this method becomes impractical. If one of the solvents is bound on the surfaces of a solid material, however, and the other solvent is allowed to flow over it, even relatively similar solutes may separate from each other. In most of the systems of paper chromatography one of the individual solvents in the mixture is likely to adhere as a film on the paper, while the other solvent flows over this film. The paper merely serves as a base on which the natural distribution or partition can occur. In a petroleum ether plus acetone-water chromatogram, the acetone-water forms a film on the paper, and the petroleum ether remains as a separate mobile liquid.
Petroleum ether moves over the paper more rapidly than the other solvent, carrying certain pigments with it. Other pigments, more likely to remain in the acetone-water film, move less rapidly. In the descriptions given earlier, no water was added to the solvent mixture, but there nearly always is a small amount of water adsorbed on the paper. Since chromatography of chloroplast pigments can occur on dry paper with dry solvents, apparently adsorption is also involved.

Ion Exchange Chromatography: Materials which have the ability to dissociate into positively and negatively charged ions can sometimes be separated by virtue of the electrical charge. A few natural minerals and a large number of synthetic resins bind ions on the surfaces of particles. For example, a resin might consist of a substance containing a number of acidic groups as part of the molecular structure. When the resin is in water, these acidic groups ionize, leaving negatively charged spots on the resin particle. Cations will be held at these spots, some more tightly than others. If a solution containing a mixture of cations is poured over this resin, these positively charged ions, such as Na⁺, K⁺, or Ca⁺⁺, will displace hydrogen ions from the resin. In effect, the resin exchanges its H⁺ ions for metallic ions. If a mixture of positively charged ions is allowed to flow continuously over such a resin in a long column, some ions will be carried along with the water faster than others. Ions which form a strong electrical bond with the negatively charged radicals on the resin will move very slowly. If the column is long enough, the various kinds of ions will emerge at the end of the column one at a time. A fairly complex set of equilibria exists in an ion exchange column. Imagine a synthetic resin which binds Ca⁺⁺ ions more strongly than Na⁺ ions and Na⁺ more strongly than H⁺ ions.
If the resin exists almost entirely in the acid form, that is, holding H⁺ ions, and a solution of Na⁺ ions is added, the Na⁺ ions displace hydrogen ions from the resin. Later, the addition of Ca⁺⁺ will cause the removal of Na⁺ and the binding of Ca⁺⁺. The binding is not permanent, however, and a large excess of hydrogen ions, as from HCl, could cause displacement of the Ca⁺⁺ ions and thus bring about the regeneration of the original resin. Other resins, themselves positively charged, attract negatively charged ions. A mixture of organic acids might be separated on such a resin. The theoretical treatment of the elution of materials from ion exchange columns becomes quite complex, and there is no complete agreement on the principles involved. The synthetic resins are extremely useful in preparing ion-free water and in separating mixtures of amino acids, nucleic acids or nucleotides, and carbohydrate derivatives. The trade-names Dowex and Amberlite have become very familiar terms in the modern laboratory.

Practical chromatography

A great variety of mixtures may be separated by chromatography. All that is required is a combination of solvents and solid materials such that there is a difference in the physical properties of the various components of the mixture. Whenever a new or untried mixture is to be chromatographed, the investigator must choose from among paper, adsorbent columns, and ion exchange resins. The filter paper technique, easiest and quickest, is usually tried first. The next question concerns the type of filter paper to use. The manufacturers produce a variety of papers, some of which may work better than others in any particular chromatographic separation. It is convenient to keep samples of various types in the laboratory, and a few preliminary trials usually will show that one paper excels the others in speed or completeness of separation. The solvent to be used for development must also be chosen carefully.
Each mixture of materials has its own set of properties, a fact which has an effect on the choice of solvents. If the general class of compounds to which the mixture belongs is known, the literature or experience will suggest several solvents. The materials to be separated must not be too soluble in the solvent or they will move together almost as rapidly as the solvent moves. If, in contrast, the solubility is too low, they will remain at the spot where they were originally placed. Another problem that was not apparent in the earlier description of pigment chromatography arises when colorless mixtures are separated. Amino acids, sugars, and many other colorless compounds can be separated on paper, but the individual spots or bands must be found later. Sometimes the paper is treated with a reagent that reacts with the separated materials to yield a colored product. Some materials are fluorescent under ultraviolet illumination, while others absorb ultraviolet and appear as dark spots on a slightly fluorescent paper. If some components in the mixture are labeled with radioactive tracers, the paper chromatogram will "take its own picture" if clamped against a sheet of photographic film according to the technique known as radioautography. To find the location of any particular component of a mixture, two chromatograms can be run simultaneously under identical conditions. The mixture is placed on one paper; a sample of the compound whose location is sought is placed on the other paper. This compound should go to the same location on both chromatograms. A numerical value, Rf, is frequently used in locating certain compounds:

    Rf = (rate of movement of solute) / (rate of movement of solvent)

On paper chromatograms, different components of a mixture move at different speeds. The "front" of each compound moves at a certain fraction of the speed at which the solvent moves.
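In practice Rf is computed from distances measured on the dried paper, since both solute and solvent front move for the same length of time. A short sketch (the distances are hypothetical, chosen to give the Rf values used as examples in the text):

```python
# Rf from distances measured on the paper, both taken from the origin
# spot where the mixture was applied.
def rf_value(solute_distance_cm, solvent_front_cm):
    return solute_distance_cm / solvent_front_cm

print(rf_value(8.5, 10.0))  # a fast-moving compound, Rf = 0.85
print(rf_value(1.5, 10.0))  # a slow-moving compound, Rf = 0.15
```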
If one amino acid moves very rapidly in a certain solvent mixture, its Rf value will be high, perhaps 0.85. Another amino acid might move very slowly in the same system and have an Rf value of 0.15. On columns and on paper, any moving component commonly moves as a mass of molecules with a "tail" trailing out behind. This tail often will be mixed with the front of the next component. The second component in a mixture is thus harder to purify than the first. Sometimes this difficulty can be overcome by changing solvents and reversing the order of the two compounds. It is frequently possible to determine quantitatively the amounts of various materials on a paper chromatogram. If the spots or bands are colored, the absorption of light by the materials will be related to their concentrations in the spot. If this method is not practical because of great differences in colors of materials or because they are uncolored, they may be stained with a suitable dye and then scanned with a modified colorimeter or spectrophotometer.

Fraction Collectors: When columns of adsorbents or ion exchange materials are used, one means of separating components is to collect them in separate containers as they drip out the bottom of the column. First the most rapidly moving component appears, followed at a later time by the other components. Automatic fraction collectors remove most of the work from this form of chromatography. Containers, usually test tubes, are held in a large wheel which rotates in steps. A small amount of liquid is drained into one test tube, and the wheel rotates to place the next test tube under the column. The stepwise rotation of the wheel can be controlled by a timer, allowing liquid to drip into each test tube for a pre-set time interval. In other fraction collectors, a siphon arrangement measures a certain volume of liquid and delivers this volume to the test tube. The most elegant fraction collectors employ phototubes to count drops.
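The timer-controlled collector just described reduces to simple arithmetic, sketched below (an illustration not in the original text; the flow rate and time settings are hypothetical):

```python
# Effluent dripping at a steady rate is split into tubes, each tube
# collecting for a fixed pre-set interval; the last tube gets whatever
# remains of the run.
def collect_fractions(total_min, min_per_tube, ml_per_min):
    volumes = []
    elapsed = 0
    while elapsed < total_min:
        interval = min(min_per_tube, total_min - elapsed)
        volumes.append(interval * ml_per_min)
        elapsed += interval
    return volumes

print(collect_fractions(10, 3, 2.0))  # → [6.0, 6.0, 6.0, 2.0]
```

A siphon-type collector would instead fix the volume per tube and let the time per tube vary with the flow rate.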
Amino Acids: Proteins are among the most important biological compounds, and frequently the analysis of the amino acids composing a protein gives important information. The group of about twenty-five amino acids that result from the hydrolysis of a protein could be separated by any of the chromatographic methods, but ion exchange columns and paper partition chromatography are most commonly used. Amino acids possess both acidic and basic groups in the molecules, and most have side chains that may be acidic, basic, or neutral. The individual molecules may have a net acidity or basicity and thus can be separated on ion exchange columns. Passage through a resin such as Dowex-50 might separate the acidic from the basic amino acids. Often a complete mixture of amino acids is passed through several columns in succession, each column containing a different resin. Careful control of the pH of the developing solvent permits the elution of virtually all the various amino acids as individual bands. Several models of instruments have been fabricated which will perform such a separation, determine the concentration of each constituent, and plot the results on a strip-chart recorder, all quite automatically! For filter paper partition chromatography of amino acids, Whatman No. 1 seems to be among the more popular papers. Each investigator has his own preference, however, and for some mixtures of amino acids another paper would be superior. Of all the different solvent mixtures that have been used, only a few have become popular: a mixture of phenol and water; a mixture of collidine, lutidine, and water; and a mixture of one of the butanols, with acetic or propionic acid, and water. Changes in operating conditions, modification of the proportions of the solvent materials, or use of two different mixtures makes two-dimensional chromatography possible.
The resolved spots usually are detected, and frequently determined quantitatively, by treating the developed chromatogram with ninhydrin (triketohydrindene hydrate). This treatment produces spots in various shades of pink or lavender.

Carbohydrates: The sugars and sugar derivatives and the more elaborate polysaccharides are an extremely complex group. The biologist who must deal with these compounds is wise to enlist the help of an organic chemist. Carbohydrates often are modified chemically before chromatography and then can be separated by adsorption, ion exchange, or partition chromatography. Such a range of techniques is in use that it is difficult to say that one method is more commonly used than any other. Paper chromatography of simple sugars is performed on Whatman No. 1 or Schleicher and Schuell No. 589 White Ribbon paper, using the same solvents as for amino acids, or slight modifications thereof. The general reference books listed at the end of the chapter give fairly detailed descriptions of various methods and numerous citations of the original literature.

Lipids: The lipids include fats, waxes, and other materials that dissolve in any of a set of nonaqueous solvents. In the broad sense, the chloroplast pigments are lipids, and, in fact, the chromatographic methods used for lipids in general are only slight modifications of those used for the chloroplast pigments.

Proteins: The chromatographic separation of proteins, including the enzymes, has been difficult because the molecules are large and do not move freely, the molecular surfaces possess a wide range of chemical and electrical properties, and the proteins themselves are relatively unstable. Ion exchange methods seem to have produced the best results, but the behavior even on ion exchange columns probably involves some adsorption. Columns of tricalcium phosphate gel and of several Dowex and Amberlite resins are satisfactory for some proteins.
A number of materials prepared by chemical treatment of cellulose have been remarkably successful in the separation of a variety of proteins. These include carboxymethyl-cellulose and DEAE-cellulose (diethylaminoethyl). ECTEOLA-cellulose (from epichlorohydrin and triethanolamine) is not as favorable for proteins but is a superior material for separating nucleic acids. Most of these materials are now available from biochemical supply houses.

Paper electrophoresis

If an electrical field is imposed across a liquid containing charged particles, these particles will migrate in the liquid, negatively charged particles moving toward the positive electrode and positively charged particles toward the negative electrode. This behavior is the basis of a very popular method, known as electrophoresis, for separating materials. Proteins were among the first compounds to be separated this way, and more recently other materials have been found to be adaptable to the method. A chamber for electrophoresis might consist of a simple tube with an electrode at each end, as in Fig. 11-5. Many elaborations may be introduced into the structure of the chamber, such as a means of removing components after separation, or an ingenious method of sliding sections of tubing together so that individual components in the mixture move as zones.

Fig. 11-5. A simple chamber for electrophoresis.

Electrophoresis is also conducted very conveniently on filter paper or other supporting materials such as starch gel. A strip of filter paper moistened (but not sopping wet) with a solvent is hung on a rack with the two ends dipping into separate containers of the solvent. The solvent is a salt solution, so that it will conduct an electrical current, and usually its pH is controlled by the inclusion of a suitable buffer. The two electrodes, leading from a direct current power supply, are placed in the two containers of solvent.
A spot of the mixture to be separated is placed on the paper strip, the circuit is closed, and the various components move with the current. Positively charged molecules move in one direction, negatively charged molecules in the other. All positively charged molecules do not necessarily move at the same rate, because of differences in molecular weight, particle shape, magnitude of charge, viscosity of solvent, and several other physical properties. Proteins are readily separated by electrophoresis because of the nature of the protein molecule. It is composed of a large number of amino acids, some of which possess side chains having an acidic -COOH group, others of which contain basic groups of one kind or another. Since the dissociation of these side-chain groups depends on pH, -COOH groups could exist as -COOH or as -COO⁻. Basic groups, such as -NH₂, exist in this form at high pH but as -NH₃⁺ in acidic solutions of low pH. The ratio of acidic to basic side chains in the molecule and the pH of the solution determine its net charge. Two distinct protein types, even if the net electrical charge is of the same sign, would move at rates depending on the magnitude of the charge.

Gas chromatography

The most recent major development is gas chromatography, a separation of materials in the vapor phase. A tube is packed with a solid material which is then coated with a selected solvent. In another system, the solvent forms a thin coating on the inner walls of a long, very fine capillary tube. In either case, a mixture of gases is allowed to pass through the tube. Those components which are least attracted to the solvent appear earliest at the far end of the tube, the other components appearing later. As the individual gases emerge they are detected by a device that measures thermal conductivity or some similar physical property. The amount of each component is recorded electrically.
So far, gas chromatography has been more useful in chemistry and in industry than in biology. Probably it will be possible to modify the method for the separation of a wider range of biological compounds.

SELECTED REFERENCES

Block, Richard J., Emmett L. Durrum, and Gunter Zweig, A Manual of Paper Chromatography and Paper Electrophoresis, 2nd ed. New York: Academic Press Inc., 1958. A comprehensive manual describing theoretical aspects and practical methods. In many cases, sufficient detail is given to allow the reader to perform the operations without further study. Methods are included for almost any conceivable mixture of biological materials. Several thousand references to the original literature.

Heftmann, Erich, ed., Chromatography. New York: Reinhold Publishing Corporation, 1961. An encyclopedic reference work containing articles on theoretical and practical chromatography written by experts in the field.

Lederer, Edgar, and Michael Lederer, Chromatography: A Review of Principles and Applications, 2nd ed. Amsterdam: Elsevier Publishing Company, distr. in U.S. by D. Van Nostrand Company, Inc., 1957. Descriptions of general techniques of column chromatography (with adsorbents and ion exchange materials) and of partition chromatography are given, followed by detailed discussion of separations of various classes of compounds. This is a valuable guide to the literature.

Strain, Harold H., Chromatographic Adsorption Analysis. New York: Interscience Publishers, Inc., 1945.

CHAPTER 12

Isotopic Tracers

Probably the single most valuable technique to become available to the biologist in recent years is the isotopic tracer method. Increased understanding in a number of important areas can be attributed directly to these materials, which came into general use shortly after the end of World War II. The use of tracer isotopes is such a broad and highly technical subject that no attempt at complete coverage will be made here.
Because the use of radioactive tracers, particularly, has potential dangers, I recommend that no one should amuse himself by simply playing with radioactive materials but instead should undertake tracer experiments only after competent instruction in the laboratory. Fortunately, although it is impossible to give more than a summary here, a number of excellent publications are available for the reader who is interested in pursuing the subject.

Several kinds of problems, otherwise unsolvable, are experimentally easy if tracers are used. If a material moves from one place to another within an organism but several different pathways are possible, the tracer can identify the pathway taken. For example, mineral ions move from the roots, where they are absorbed, upward to the leaves of plants. They might move through either xylem or phloem; the proper application of tracer experimentation tells which tissue is the actual path. In an animal, a certain material might move from place to place through blood or lymph, and a tracer could be used here also. The other major kind of problem is the chemical problem. We know that A is converted to Z, but paper chemistry tells us that any of several sequences of intermediates could be involved. A series of tracer experiments can delineate the chemical pathway. As a less extensive modification, a tracer can tell us whether a particular reaction occurs at all. If we provide cells with the suspected substrate of a reaction labeled with a tracer and then allow time for the reaction to occur, we should be able to recover the product of the reaction, now labeled with the tracer.

The tracer experiment

The isotopic tracer experiment consists of substituting an unusual or uncommon isotope of an element for the more abundant form. For example, radioactive carbon (C¹⁴) can be substituted for ordinary carbon (C¹²) and will go through the same chemical reactions.
For the purposes of tracing, it is not at all necessary that every atom should be the unusual isotope. A small fraction of the tracer isotope serves to label the whole amount of a compound. The tracer experiment, then, requires an element or compound in which a small fraction of the molecules contain the unusual isotope of an element. The other chief requirement in the experiment is some method for detecting the uncommon isotope before, during, and after the process being investigated. The fraction of the atoms which must be labeled depends upon the ability of the detecting method to distinguish the tracer atoms.

"Isotope" means literally "same place" and refers to atoms that occur at the same place in the periodic table; therefore, they contain the same number of protons and electrons, which determine the chemical activity, but, because they contain different numbers of neutrons, they differ in mass. The hydrogen series illustrates the point reasonably well. Ordinary hydrogen is ordinary hydrogen because it makes up about 99.98 per cent of the naturally occurring hydrogen. It is the simplest possible atom, consisting of one proton and one electron. About 0.02 per cent of the naturally occurring hydrogen atoms contain a neutron in addition to the proton and electron. This neutron doubles the mass of the atom without appreciably changing its chemical properties. A third type, present as a trace in nature but manufactured artificially in reactors, contains a second neutron and therefore has three times the mass of ordinary hydrogen. These three isotopes of hydrogen are designated ₁H¹, ₁H², and ₁H³. The subscript 1 indicates the atomic number and thus the chemical identity of the element. The superscript (1, 2, 3) indicates the mass. Generally the symbol of the element gives adequate information, so the subscript atomic numbers are omitted. H² is frequently called deuterium and is sometimes given the symbol D, whereas H³ is called tritium (T).
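The notation just described can be summarized in a small table in code (an illustration for modern readers, not from the original text): the subscript is the proton count, and the superscript, the mass number, is protons plus neutrons.

```python
# The three hydrogen isotopes from the text: same atomic number (one
# proton each), different neutron counts, hence mass numbers 1, 2, 3.
ISOTOPES = {
    "H1 (ordinary hydrogen)": (1, 0),   # (protons, neutrons)
    "H2 (deuterium, D)":      (1, 1),
    "H3 (tritium, T)":        (1, 2),
}

def mass_number(protons, neutrons):
    return protons + neutrons

for name, (p, n) in ISOTOPES.items():
    print(name, "mass number =", mass_number(p, n))
```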
Radioactivity and radiation

The nuclear combination of one proton and one neutron is stable; that is, there is no tendency for this nucleus to decompose. The tritium nucleus, however, is unstable and undergoes spontaneous degradation to a more stable form. The excess energy of the unstable form is given off as a radioactive emission, in this case β⁻ particles or electrons, as one of the neutrons changes to a proton and an electron. The atom becomes ₂He³, and the electron or β⁻ particle is accelerated through space. Other radioactive materials disintegrate in this manner or in other patterns which yield α particles (2 protons + 2 neutrons), β⁻ (electrons) or β⁺ (positrons), γ quanta (electromagnetic radiation), or some combination of these. Any isotope disintegrates at a characteristic rate in a characteristic way.

The α, β, and γ emanations possess great energy. As they move through air or other materials they produce pairs of ions. They share this quality with X rays, which are produced by a different principle. Quanta of visible light and ultraviolet are basically similar to γ rays and X rays, but these quanta possess insufficient energy to produce ionization. For this reason, only X rays and γ rays are classified as ionizing radiation. Amounts of radioactive materials are measured in curies, one curie being the amount of a radioactive material such that 3.7 × 10¹⁰ atoms disintegrate per second. A curie is a large amount of radioactive material, and biologists are more likely to work with millicuries (mc) or microcuries (μc). The basic unit of ionizing radiation is the roentgen (r). The roentgen is an amount of X or γ radiation sufficient to produce about 2 × 10⁹ ion pairs in 1 cm³ of dry air. Several other units of radiation have been developed for use in studies on the biological effects of radiation.
These include the rep (radiation equivalent, physical), used in measuring radiation absorbed by soft tissue; the rad (radiation, absorbed dose), the amount absorbed in any medium; and the rem (radiation equivalent, man), which applies specific corrections for man.

The development of tracer experimentation

Tracer experiments were used for the first time by Hevesy in 1923. He used several naturally occurring radioactive isotopes of lead to trace the path by which materials moved from one place to another within plants. In 1934 the famous Curies demonstrated the possibility of producing artificial radioisotopes (a contraction of "radioactive isotopes"), and even before the beginning of World War II several isotopes had been produced artificially. Most of the potential tracers were not yet available in adequate quantities, however, so only a few isotopic tracer experiments were performed. After World War II great quantities of many different isotopes became quite readily available, and laboratories all over the world adopted this new tool. Some difficulties were encountered, and, in fact, some problems not particularly amenable to solution by the use of tracers were investigated. By now the "fad" has passed, and we have settled down to a judicious use of this most valuable technique. Tracer materials are now available in almost any conceivable form, and the instruments used for detection have reached a high state of development.

Selection of tracer isotopes

So many isotopes have been produced artificially and are potentially available that the investigator is faced with a choice of isotopes. For example, there are six isotopes of carbon, some stable, some radioactive. If a convenient radioactive isotope is available it usually is chosen as a tracer, because the detection of radioactivity is easier than the detection of stable isotopes.
Two principles are considered in selecting from among several possible radioisotopes: the rate of disintegration, and the type of disintegration and radioactive emission. Any radioactive isotope disintegrates at a characteristic rate. Any single atom has a certain probability of decomposing, regardless of how many similar atoms are in the vicinity. This probability means that in a unit of time a certain constant fraction of the total will disappear. Thus, dN/dt = -KN, where N is the number of atoms, t is time, and K is a constant representing the fraction of the total number of atoms disintegrating in a unit of time. From this equation it can be seen that half of the atoms will disintegrate in some certain time. The same length of time is required for half of the remaining atoms to disintegrate. This disintegration rate can be seen more easily in Fig. 12-1. The time required for half the atoms to disintegrate is called the half-life of the isotope, designated as t½. Since the rate of disintegration of any isotope is a distinctive property of that isotope, each isotope has a characteristic half-life. Biologists have become so accustomed to the term that they now facetiously speak of the half-life of their paychecks and of other items to which the reasoning is not strictly applicable.

Fig. 12-1. Curve describing decay of a radioactive isotope.

The half-lives of isotopes influence the selection of tracer isotopes. Obviously the half-life must be some convenient period of time. If half the isotope disintegrates in 0.01 sec, any experiment must be done in an impossibly short time. The only radioisotopes in common use as tracers have half-lives of at least several days; C¹⁴ decays so slowly that half of it still remains after almost 6000 years. Thus in case of a choice among several radioisotopes of the same element, some always have more favorable half-lives than others.
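The decay law can be put to work numerically. Integrating dN/dt = -KN gives N(t) = N₀e^(-Kt), and setting N/N₀ = 1/2 shows that K = ln 2 / t½. A short sketch (not from the original text; the 5700-year figure for C¹⁴ is the one the text uses):

```python
import math

# dN/dt = -KN integrates to N(t) = N0 * exp(-K t); setting N/N0 = 1/2
# gives K = ln 2 / t_half.
def fraction_remaining(t, t_half):
    k = math.log(2) / t_half
    return math.exp(-k * t)

print(fraction_remaining(5700, 5700))      # one half-life:  ~0.5
print(fraction_remaining(2 * 5700, 5700))  # two half-lives: ~0.25
```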
A second major consideration in choosing radioisotopes is largely a matter of laboratory safety. Alpha-emitters are not commonly used as biological tracers. There is often a choice, however, between β- and γ-emitters, and in this case the β-emitter is likely to be chosen because the less-penetrating and less-energetic β particles are less dangerous to personnel. As an example, let us examine the six isotopes of carbon: C¹⁰, C¹¹, C¹², C¹³, C¹⁴, and C¹⁵. C¹² is the abundant or normal isotope; C¹³ is stable and occurs naturally (about 1 per cent); C¹⁴ is radioactive, a β⁻ emitter, and occurs naturally as traces. The radioisotopes C¹⁰, C¹¹, C¹⁴, and C¹⁵ have all been produced artificially and have the characteristics listed in Table 12-1. Carbon, of course, is the biological element, if any element can so qualify. C¹⁴ was the obvious choice as a tracer because of its long half-life and because of its relatively safe production of β⁻ particles. The first radiocarbon tracer experiments were performed with C¹¹ because it was produced earlier, but as soon as C¹⁴ became abundant it was immediately adopted.

Table 12-1. Radioactive Isotopes of Carbon

    Isotope    Half-life     Type of Emanation
    C¹⁰        19 sec        β⁺, γ
    C¹¹        20 min        β⁺
    C¹⁴        5700 years    β⁻
    C¹⁵        2.4 sec       β⁻, γ

Commonly used tracers

Biologists usually deal with relatively few of the elements, chiefly those in the lower one-third to one-half of the periodic table. Most of these elements have at least one convenient radioisotope. The important exceptions are nitrogen and oxygen, both of which are extremely important in biology. Table 12-2 lists the isotopes most commonly used as tracers in biology. In addition to these commonly used tracers, several other radioisotopes are important in biology. Co⁶⁰ is used frequently as a γ source for experiments on the effects of ionizing radiation.
Masses of Co⁶⁰ up to several hundred curies can be used in properly shielded pieces of apparatus into which biological or chemical materials can be introduced for varying lengths of time. A cesium isotope (Cs¹³⁷) is used for similar purposes, offering easier handling but somewhat less energetic γ radiation. Radioactive rubidium (Rb⁸⁶) is sometimes used as a tracer, not for Rb but for potassium, which it resembles chemically. Some cells accumulate Rb almost as readily as K, so the Rb can be valuable in studies on cell membrane permeability. Sr⁹⁰ is also important biologically because of its chemical similarity to another element, calcium. Several biological mechanisms tend to concentrate the Sr⁹⁰ which might occur in fall-out from nuclear weapons. Iodine collects in the thyroid; therefore I¹³¹ has been used, not only as a tracer, but for radiation therapy in disorders of the thyroid.

Table 12-2. Some Commonly-used Tracer Isotopes

    "Normal" Isotope    Common Tracer    Half-life       Emanation
    ₁H¹                 H²               stable
                        H³               12.4 yr         β
    ₆C¹²                C¹³              stable
                        C¹⁴              5700 yr         β
    ₇N¹⁴                N¹⁵              stable
    ₈O¹⁶                O¹⁸              stable
    ₁₁Na²³              Na²²             2.6 yr          β, γ
                        Na²⁴             15 hr           β, γ
    ₁₂Mg²⁴              Mg²⁷             9 min           β, γ
    ₁₅P³¹               P³²              14 days         β
    ₁₆S³²               S³⁵              87 days         β
    ₁₇Cl³⁵,³⁷           Cl³⁶             4 × 10⁵ yr      β
    ₁₉K³⁹               K⁴⁰              1.2 × 10⁹ yr    β, γ
    ₂₀Ca⁴⁰              Ca⁴⁵             152 days        β
    ₂₆Fe⁵⁶              Fe⁵⁹             45 days         β, γ
    ₂₇Co⁵⁹              Co⁶⁰             5.2 yr          β, γ
    ₅₃I¹²⁷              I¹³¹             8 days          β, γ

Available Forms of Tracer Materials: Radioisotopes formerly were available chiefly as some salt containing the tracer element. C¹⁴, for example, was delivered as BaC¹⁴O₃. The carbonate could be converted to C¹⁴O₂ by treatment with acid and ultimately converted into any of several chemical compounds. Now producers of chemical and biochemical compounds offer long lists of organic chemicals labeled with any of several tracer isotopes. It is even possible to specify which atom (or atoms) will be labeled. Acetic acid, for example, can be purchased as C¹⁴H₃COOH, CH₃C¹⁴OOH, or C¹⁴H₃C¹⁴OOH.
Most of the sugars, most of the amino acids, and a variety of other compounds are available, labeled with H², H³, C¹³, C¹⁴, N¹⁵, O¹⁸, or assorted other tracers. The availability of these compounds considerably simplifies the execution of tracer experiments.

Detection methods

Tracers, of course, are useless unless they can be detected after the experiment is complete. Since very small amounts of tracer materials are used, special methods are necessary to recover the tracer. A number of methods have been used to prepare materials for examination. If a physical movement from place to place is being investigated, nothing more than a dissection of the animal or plant may be required. Chemical experiments offer more difficulty. The suspected product of a reaction must be separated from other compounds in the cells or reaction mixtures; nowadays chromatography is used for this separation.

One simple method of detecting radioactive materials is to place the plant or animal parts, or the chromatogram, against a sheet of photographic film. The radioisotopes "take their own picture" or produce a "radioautograph" because irradiated areas of the film show up on development.

The G-M Tube: A number of electrical instruments are employed for quantitative determination of radioactivity. The Geiger-Müller (G-M) tube is probably still most commonly used. It consists of a hollow metal tube, filled with a gas mixture, with a wire extending along its center for most of its length. The circuitry is diagrammed in Fig. 12-2. The behavior of the tube depends on the applied voltage. Over the commonly used range of voltages, the so-called G-M region, the tube responds to incoming radiation in a characteristic way. Beta particles enter through the window, gamma quanta through the window or walls, and produce ions in the gas in the tube. At this voltage, negatively charged ions migrate toward the center wire, producing other ions as they travel.
Eventually a "cloud" of ions strikes the center wire as a pulse of charged particles, setting up a momentary electric current in the wire. Positively charged ions move in the opposite direction, producing the same effect.

Fig. 12-2. A Geiger-Müller tube. The window is frequently of mica.

Electric neutrality is re-established in the gas, and the tube is ready to accept another particle or quantum. The pulse of current crosses the capacitor to be registered on a meter or on a scaling or counting circuit. The whole operation is completed in about a microsecond or so. If a radioactive sample is placed under the tube, some constant fraction of the radioactivity will enter the tube to be counted, the exact fraction depending on the geometry of the system. Amounts of radioactivity measured this way are usually expressed as counts per minute (c/m).

The Scintillation Counter: This instrument is more easily adapted to special measurements. The scintillator is a material which responds to radioactivity by producing flashes of light. These individual flashes are counted by a multiplier phototube (Chapter 13); the electrical output of the phototube is counted by a scaling circuit. The scintillating phosphor may be dissolved in a liquid, which means that the counting chamber can take almost any shape. Whole-body counters have been constructed as hollow cylinders in which dogs or other animals are placed. The scintillation counter surrounds the body and detects any radioactivity emitted from the animal's body.

Scaling Circuits: The output from a G-M tube or a scintillation counter is a series of electric pulses. The rate at which these pulses are produced can be measured with an ammeter calibrated in amperes or in c/m. Alternatively, each pulse can be counted singly with a digital computing circuit known as a scaler.
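The "constant fraction" idea can be put in arithmetic form. A minimal sketch in Python; the function name and the ten-per-cent geometry figure are illustrative assumptions, not values from the text:

```python
def observed_rate(disintegration_rate, geometry_fraction, efficiency=1.0):
    """Counting rate registered by the tube, in counts per minute (c/m).

    Only the fixed fraction of emissions that reaches the tube is counted;
    that fraction depends on the geometry of the system, and not every
    particle entering the tube need produce a pulse (the efficiency factor).
    """
    return disintegration_rate * geometry_fraction * efficiency

# A sample disintegrating 10,000 times per minute, with 10 per cent of
# its emissions reaching the tube, registers about 1000 c/m.
rate = observed_rate(10_000, 0.10)
```

So long as the geometry is not disturbed between samples, the factor cancels out of comparisons, which is why counts are usually reported simply in c/m.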
The electronic scaler can be arranged in a variety of ways to indicate a total number of counts in a certain length of time or the length of time required to reach some predetermined number of counts.

Counts of radioactive materials always are accompanied by determinations of "background," that is, pulses produced from uncontrollable sources such as cosmic rays or γ-emitters in the general vicinity of the counting tube. The background count, assumed to be constant during the measurement, is subtracted from the total number of counts. Since background varies from day to day, each measurement must be corrected for background. Background can be reduced by shielding the counting tube with lead or by rather intricate electronic correction, but it can never be eliminated.

Accessory Equipment: Samples of materials to be counted are placed in small metal dishes and placed beneath the window of the G-M tube. (The scintillation counter, of course, is more flexible, and the sample can take any shape.) The samples are counted for a period of time adequate to obtain a good measurement of the average rate of radioactive disintegration, and then another sample can be counted. Several automatic sample changers are available. These devices place one sample under the tube, count to a predetermined number of counts, print the time required, and then place another sample under the G-M tube. Thirty-six or so samples can be counted in sequence over a period of several hours without any attention from the operator. Radioactive materials on paper-strip chromatograms are counted by strip scanners which automatically feed the strip of paper under the counting tube and then print or record the radioactivity as a function of distance along the paper strip.

A G-M tube usually counts all pulses, whether they were produced by α, β, or γ radiation and regardless of the energy of the radiation.
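The background correction amounts to a subtraction of rates, and the reliability of the "good measurement of average rate" follows Poisson counting statistics (the √N rule is our addition, not the text's). A sketch in Python with hypothetical numbers:

```python
import math

def net_rate(gross_counts, count_minutes, background_rate):
    """Background-corrected counting rate and a rough uncertainty, in c/m.

    The gross rate is total counts divided by counting time; the constant
    background rate is then subtracted. A count of N events carries a
    statistical uncertainty of about sqrt(N), so longer counts give a
    steadier average rate.
    """
    gross_rate = gross_counts / count_minutes
    uncertainty = math.sqrt(gross_counts) / count_minutes
    return gross_rate - background_rate, uncertainty

# Hypothetical sample: 4500 counts in 10 minutes against a 30 c/m background.
rate, err = net_rate(4500, 10.0, 30.0)
```

Doubling the counting time halves neither the background nor the signal, but it does tighten the uncertainty of both rates, which is why automatic changers often count to a predetermined number of counts rather than for a fixed time.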
A scintillation counter, or a G-M tube operating in another voltage range, produces a response related to the energy of the particles or quanta. A discriminating circuit can be set to allow only pulses above a certain size to be counted. Since the setting is adjustable, a whole spectrum of radiation could be determined by a series of counting operations. Gamma quanta vary in energy, depending on the emitting isotope, so a mixture of two or more γ-emitters could be counted, yielding information about amounts of the different isotopes in the mixture.

Detection of Stable Isotopes: As stable isotopes differ from each other only in mass, any measurement of these isotopes must depend on this property. A mass spectrometer is an instrument which separates molecules on the basis of mass and determines the amounts of each kind of molecule. The mass spectrometer consists of an evacuated tube across which a high voltage is applied. The gas to be analyzed is admitted at one end of the tube, ionized by a stream of electrons, and accelerated toward the opposite end of the tube. Figure 12-3 shows one such instrument, and Fig. 12-4 shows another which separates materials on a different principle.

In the "time-of-flight" mass spectrometer (Fig. 12-3), ions traverse the length of the tube, but the greater the mass, the longer this passage will take. As the ions arrive at the negative electrode they set up a momentary current whose magnitude is dependent upon the number of such ions. If the tube is long enough, ions of different masses arrive at distinctly different times. Once all the desired masses have been measured, a new quantity of gas can be admitted and the operation repeated. The repetitions can be extremely rapid (up to several thousand cycles per second), so that the "mass spectrum" can be displayed on a cathode-ray oscilloscope.

Fig. 12-3.
A time-of-flight mass spectrometer. Gas is admitted, ionized by a stream of electrons, and accelerated toward the ion collector by the accelerating electrodes. Current on the ion collector is registered on the oscilloscope.

The other basic type of mass spectrometer (Fig. 12-4) separates ions of different mass in a magnetic field. The tube has a 60°, 90°, or 180° bend surrounded by the poles of a large magnet. As the ions traverse the tube they are deflected by the magnetic field through an angle dependent on their momentum. The accelerating voltage and the magnetic field can be adjusted to focus ions of a selected mass on the target electrode. By varying the accelerating voltage or the magnetic field or both, the whole spectrum of ions can be swept across the target. Current in this electrode will vary according to the number of ions, just as in the time-of-flight instrument. If the "mass spectrum" is scanned automatically, the instrument will record the relative amount of each component in the mixture.

The actual operations required in a tracer experiment using stable isotopes become somewhat complex. We might perform a tracer experiment in which N¹⁵ is used to follow the production of ammonia (NH₃). N¹⁴H₃ has a mass of 17, but N¹⁵H₃ has a mass of 18, and the two could be separated on the mass spectrometer. Ordinary water also has a mass of 18, however, and H₂O would obscure the N¹⁵H₃. In this case it would be preferable to convert the nitrogen to some other form with unambiguous mass numbers. If we measure C¹²O₂ (mass 44) and C¹³O₂ (mass 45), it becomes necessary to correct for O¹⁷ (C¹²O¹⁶O¹⁷: mass 45). Except for these technical details which must be considered, the mass spectrometer permits tracer experiments with stable isotopes, experiments which can be just as effective as radioisotope experiments.

Fig. 12-4. A magnetic field mass spectrometer.
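The statement that the greater the mass, the longer the passage will take can be made quantitative. Accelerating an ion of charge q through a voltage V gives it kinetic energy qV = ½mv², so the drift time down a tube of length L is t = L·√(m/2qV). A sketch in Python; the one-meter tube and 1000-volt figures are illustrative assumptions, not values from the text:

```python
import math

AMU = 1.6605e-27      # kilograms per atomic mass unit
E_CHARGE = 1.602e-19  # electronic charge, in coulombs

def flight_time(mass_amu, tube_length_m, accel_volts, charge_units=1):
    """Drift time of an ion down a time-of-flight tube, in seconds.

    From qV = (1/2) m v**2, the terminal speed is sqrt(2qV/m), so
    t = L * sqrt(m / (2qV)): heavier ions arrive later.
    """
    m = mass_amu * AMU
    q = charge_units * E_CHARGE
    return tube_length_m * math.sqrt(m / (2 * q * accel_volts))

# Mass 17 (N14-H3) against mass 18 (N15-H3 or ordinary water), singly
# ionized, in a 1-m tube at 1000 volts: both arrive within microseconds,
# but the mass-18 ion arrives measurably later.
t17 = flight_time(17, 1.0, 1000.0)
t18 = flight_time(18, 1.0, 1000.0)
```

The arrival times scale as the square root of the mass, so adjacent masses are separated by only a few per cent in time, which is why a long tube or fast electronics is needed.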
Ionized gas is resolved by the magnetic field into a spectrum of particles of different masses.

Laboratory safety

Most radioactive materials used as tracers are by-products of reactor operations controlled by the United States Atomic Energy Commission. The AEC specifies the conditions under which radioactive materials may be handled. Very small quantities (µc or fractions of µc) of a large number of isotopes can be purchased by any citizen under a "General License." Usually these amounts are so small that serious radiation danger is unlikely even with somewhat careless handling. The amounts are adequate for certain kinds of experiments, however.

Larger amounts of isotopes, as would be used in a typical research laboratory, must be procured under a special license granted to the laboratory by the AEC. The license specifies the kinds and amounts of isotopes that can be held in the laboratory at any one time. One person is usually designated as Radiation Safety Officer, and he has the responsibility for maintaining careful records, supervising handling and disposal of radioactive materials, and protecting laboratory personnel against harmful exposures to radiation. No person or laboratory is granted such a special license unless the AEC is given the assurance that only persons trained by course work or "on-the-job" experience will be responsible for the handling of radioactive materials.

SELECTED REFERENCES

Aronoff, Sam, Techniques of Radiobiochemistry. Ames: The Iowa State College Press, 1956. An advanced treatment of the subject.

Chase, Grafton, Principles of Radioisotope Methodology. Minneapolis: Burgess Publishing Company, 1959. A series of laboratory exercises.

Division of Radiological Health, eds., Radiological Health Handbook. Revised edition. U. S. Public Health Service, available through the Office of Technical Services, Washington, D. C., 1960. Detailed data on most of the known isotopes.
In one section, the compilers set out to include everything anyone might want to know about physical constants, symbols and abbreviations, and interrelationships of units. They have almost succeeded.

Kamen, Martin, Isotopic Tracers in Biology, 3rd ed. New York: Academic Press, Inc., 1957.

RCA Service Company, Inc., Camden, N. J., Atomic Radiation, 1957, and Atomic Radiation, Part II, 1960. This pair of booklets provides an introduction to the whole field of radiation, including the theory of radioactivity, radiation, and nuclear physics, followed by practical aspects of hazards, handling, treatment of injuries, etc.

CHAPTER 13

Electrical Measurements

Many biologists prefer to make their measurements electrically whenever it is possible. Much of the information obtained in biological experiments is indirect information anyway. If the possibility exists of converting this indirect information into an electrical signal of some sort, several advantages accrue. The measurement can be made more-or-less automatically, which often reduces the chances of human error. The electrical devices can produce permanent records, as on strips of chart paper, and these records can be examined and re-examined as needed. The electrical instruments usually respond very rapidly, so some responses too fast for the human senses can be detected easily. Electrical quantities are rather easily converted from one form to another.

Certainly there are disadvantages, too. Electrical and electronic equipment can fail, and sometimes does at the most awkward moments. The operator must be able to recognize faulty performance and, ideally, should know what to do about it. The electrical instrument measures only indirectly, necessitating the assumption that the electrical signal is some unvarying mathematical function of the biological response. Electrical instruments usually are expensive and require maintenance. In many kinds of experiments, however, the advantages outweigh the disadvantages.
Electrical theory

Current electricity, as opposed to static electricity, is used almost exclusively in electrical instrumentation. This fact somewhat simplifies the explanations. A current can be thought of as a stream of electrons moving through a conductor, even if it is unlikely that any one electron travels very far. Materials such as metals, in which certain electrons are rather loosely bound to the atom, make good conductors because the electrons can move rather easily from one atom to another. A variety of other materials in which the electrons have almost no freedom to move conduct very poorly. Intermediate between these two extremes is a group of "semiconducting" materials.

If electrons are to move primarily in one direction, and not at random, some force must be applied. This electromotive force is a difference in electrical potential, measured in volts. The ability of a material to conduct electrons is usually expressed by the inverse property, or resistance. The resistance of a wire depends upon its cross-sectional area, its length, and the metal of which it is made. Voltage (E), current (I), and resistance (R) are related through Ohm's law, I = E/R. If we know any two of these quantities, we can calculate the third. Another quantity, electrical power, is the product of voltage and current (W = IE) and is measured in watts.

A current of electrons sometimes flows through a conductor in a single direction. In these direct current (d-c) situations, Ohm's law applies without modification. Alternating currents, to the contrary, change direction with regularity and periodicity. The sine curve is derived from the path traced by a point on the circumference of a circle. At any one instant the voltage is positive, negative, or zero; current will flow in a circuit in a forward direction, in the backward direction, or not at all.
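Ohm's law and the power relation lend themselves to a two-line check. A minimal sketch in Python; the 120-volt, 60-ohm figures are ours, chosen only for round numbers:

```python
def current(volts, ohms):
    """Ohm's law: I = E / R, in amperes."""
    return volts / ohms

def power(volts, amps):
    """Electrical power: W = I * E, in watts."""
    return volts * amps

# A 120-volt source across 60 ohms drives 2 amperes and dissipates
# 240 watts; knowing any two of E, I, and R fixes the third.
i = current(120.0, 60.0)
w = power(120.0, i)
```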
Over one cycle, the voltage rises from zero to a maximal positive value, falls to zero, then rises to a maximal negative value, and again falls to zero (see Fig. 13-1). Alternating currents can perform work, just as direct currents can, and are characterized by a frequency.

Fig. 13-1. Relationship of the sine curve to the circle.

The alternation of the current introduces some interesting and valuable complexities into the consideration of electricity. Any conductor carrying a current is surrounded by a magnetic field. When the circuit is broken this field collapses, only to expand again when the current starts flowing. Any d-c conductor, then, is surrounded by a steady magnetic field which goes through transitory changes only when a switch is opened or closed. The a-c conductor, however, is surrounded by a magnetic field which is continually expanding or collapsing. If this expanding and contracting magnetic field moves through a second wire, a current will be induced in this second wire. Now imagine a coil of wire conducting an alternating current. The expanding magnetic field around one turn of the coil cuts across the next turn in the coil, inducing a current in this second turn of the coil in the opposite direction. The net effect throughout the coil is an induced current of opposite sign, which tends to impede the original current. Impedance from this source occurs in addition to the regular resistance of the wire in the coil. Thus in any treatment of an a-c circuit which contains such coils we must consider the inductance as well as the resistance.

Another device produces a more-or-less opposite effect. A capacitor (condenser) consists of a pair of plates of conducting material separated from each other by a dielectric (insulating) material.
If the two plates are connected by wires to a d-c circuit, no current will flow through the dielectric material, but an excess of electrons accumulates in one of the plates and a deficiency in the other. The capacitor becomes charged, one plate positively, the other negatively. In an alternating circuit, electrons pour freely into one plate, forcing electrons out of the other plate. When the sign of the current is reversed, the events occur in the opposite direction. The capacitor seems to conduct a-c. The current flows so freely into the large plates of the capacitor, however, that the resistance seems smaller than if the capacitor were not present.

The inductive effect of a coil and the capacitive effect of the condenser become very important in any a-c circuit in which they occur. Instead of ordinary resistance, we speak of impedance (Z), and I = E/Z. The inductance (L) of a coil (in henries) affects the current in a manner which is dependent on the alternating frequency (f). The opposition to the current is called inductive reactance, X_L = 2πfL. The condenser of capacitance C (in farads) produces an opposite effect, so the capacitive reactance is X_C = 1/(2πfC). The impedance Z is the vectorial sum of the resistance R and the net reactance:

Z = √[R² + (2πfL − 1/(2πfC))²]

In any a-c circuit containing both inductance and capacitance the actual current flowing under a given voltage could be calculated from this equation. Impedance defined in this manner becomes particularly important in communications, where frequencies are high. The ideas are used in biological instrumentation, of course, and should not be neglected even in measuring electrical behavior of tissues. Living materials possess electrical resistance, but there is also likely to be a measurable capacitance within the living material, and another capacitance often exists in the connection between the electronic instrument and the living material.
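The impedance formula can be exercised numerically. A sketch in Python; the component values are illustrative assumptions, not from the text:

```python
import math

def impedance(r_ohms, l_henries, c_farads, f_cycles):
    """Impedance of a series circuit containing R, L, and C.

    Inductive reactance X_L = 2*pi*f*L opposes the current, capacitive
    reactance X_C = 1/(2*pi*f*C) opposes it in the other sense, and the
    impedance is the vectorial sum: Z = sqrt(R**2 + (X_L - X_C)**2).
    """
    x_l = 2 * math.pi * f_cycles * l_henries
    x_c = 1 / (2 * math.pi * f_cycles * c_farads)
    return math.sqrt(r_ohms**2 + (x_l - x_c)**2)

# Illustrative values: 100 ohms, 0.1 henry, 10 microfarads, 60 cycles/sec.
z = impedance(100.0, 0.1, 10e-6, 60.0)
i = 120.0 / z   # I = E/Z for a 120-volt alternating supply
```

When X_L equals X_C the two reactances cancel and Z reduces to R alone, the resonance condition exploited in tuned circuits.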
Vacuum Tubes: Vacuum tubes are still used in a majority of instruments for a variety of purposes. A very large number of different tubes is available commercially, but in general they all depend on the same principles. If a conductor with its loosely bound electrons is heated, some of these electrons will escape from the surface. In a sense, they are "boiled off." In an ordinary conductor, however, the electrons will be immediately recaptured by the positive charges left on the surface of the metal. At any instant, the hot surface will be surrounded by a "space charge," a cloud of free electrons. If such a hot electrode is placed in a vacuum, together with another "cold" positively charged electrode, electrons leave the hot surface, travel across the intervening space, and are captured by the positively charged electrode (called the plate or anode). A "diode" operates in this way. (Fig. 13-2a shows such a tube.) Obviously the tube must be connected into a proper external circuit so that the hot cathode does not become positively charged by the loss of electrons. Also obviously, the stream of electrons can move in only one direction. Such a device could be used to rectify, that is, to change alternating current into direct current.

Fig. 13-2. Diode (a) and triode (b) vacuum tubes.

If a grid or screen of fine wires is placed between the cathode and the plate (as in Fig. 13-2b) and this grid is made slightly positive or negative, the small charge will interfere with the free movement of electrons across the space. Variations in the grid voltage are reflected in the current passing through the vacuum tube to the plate. The cathode-to-plate voltage is much higher than the grid voltage, so any "signal" or variation in grid voltage is thus amplified. The "triode" amplifier tube, with all the variations and improvements which have been made, is responsible almost by itself for our whole electronic world.
Sometimes other electrodes are introduced, as in tetrodes and pentodes, but usually these are included to improve the performance of the basic triode.

Semiconducting devices, or transistors, can perform some operations similar to those performed by vacuum tubes. These solid-state materials differ only in degree from conducting materials (metals) or from insulating materials. Crystals of germanium or silicon possess a definitely ordered structure in which pairs of electrons are shared by adjacent atoms. Traces of impurities may fit into the crystal structure but introduce extra electrons which are not needed in the crystal bonds and thus are free to migrate. Other impurities may fit the crystal pattern, but with deficiencies of electrons or "holes." Such "slightly impure" crystals become the basis of transistors. A crystal of germanium (4 valence electrons) with a trace of arsenic or phosphorus (5 valence electrons) possesses an excess of electrons and is called N-type (for negative) material. A similar crystal containing traces of gallium or indium (3 valence electrons) would be called P-type from its excess of "holes" or positive charges.

If a piece of N-type and a piece of P-type crystal are joined and a voltage is then applied, electrons flow in one direction more easily than in the other. "Holes" flow more easily in the opposite direction. This N-P junction becomes a rectifier (see Fig. 13-3a). A combination of N-P-N layers is a junction transistor, which can best be explained with the help of Fig. 13-3b. The size of the current flowing across the N-P and P-N junctions depends upon a number of factors. Electrons cross easily from the negatively-charged emitter to the P-type base, filling some of the holes in the base. The positively-charged collector withdraws electrons from the base, across the P-N junction, creating new holes. These new holes migrate across the base, eventually being filled with electrons from the emitter.
The number of new holes formed, and therefore the current flowing in the base-collector half of the transistor, depends upon the number of electrons injected into the base from the emitter. The transistor can thus accept a signal and control the current in a second circuit. If the voltage in the base-collector circuit is larger than that in the emitter-base circuit, the control amounts to an amplification.

Fig. 13-3. Solid state (semiconducting) devices: (a), a junction diode; (b), an N-P-N transistor. For explanation, see text.

In practice, the N-P-N transistor can be used in a circuit in any of several ways. The P-N-P transistor is the same kind of device except that the polarity is reversed, positive poles becoming negative and vice versa, so that the current carriers through the base are electrons rather than holes. Transistors have several decided advantages over vacuum tubes. Since they do not depend upon heaters, they can be operated with very little applied power. They also can be made very tiny, which helps to reduce the size and weight of instruments. Generally, they have a much longer life than vacuum tubes. Other advantages, and some disadvantages, will be discussed later under Amplifiers.

Electronic systems

An electronic system, as the term is used here, is a device or combination of devices that responds to some change in the environment in a characteristic way to produce an electrical change in a measuring instrument or to bring about some control over the environmental change. Systems of this sort are adaptable to measurement of a variety of biological phenomena. Some biological reactions produce voltages directly, and these are relatively easy to measure. In other cases, the biological process can lead to a change in resistance or capacitance or some other property of a circuit.
Movement can be detected by a mechanical coupling to an instrument which produces an electrical "signal." The modifications, interconversions, and variations are almost unlimited.

Fig. 13-4. A typical electronic system.

The biological response, or the response of a physical instrument used in a biological experiment, must be converted into some kind of electrical signal. Any device which does this is called an input transducer. It accepts information and converts it into some usable electrical form. In the simplest system the input transducer feeds directly into an output transducer, such as a meter. More commonly, the electrical signal is amplified. The system then becomes more complex because the amplifier, and perhaps the other components, require a power supply to furnish a number of d-c and a-c voltages. The output of the system may be recorded, or it may be used to control, or both. A complete system is diagrammed in Fig. 13-4. Each major component is considered in more detail in a succeeding section.

Input transducers

Input transducers which respond to a variety of signals are available. These are admirably covered in the book by Lion.¹ It is a shame that the other components of systems are not yet treated so well.

Mechanical Transducers: Input transducers in this class respond to movements, changes in pressure, or other mechanical changes. A simple kind of motion detector is a variable resistor. Any movement of the movable contact along the "slidewire" changes the resistance in the circuit, and this change becomes the electrical signal. Figure 13-5 shows several such devices which respond to linear motion, rotary motion, and pressure.

¹ Kurt S. Lion, Instrumentation in Scientific Research: Electrical Input Transducers (New York: McGraw-Hill Book Company, Inc., 1959).
Fig. 13-5. Mechanical transducers in which resistance varies in response to linear motion (left), rotary motion of shaft (center), or pressure (right).

A mechanical transducer could also produce changes in inductance or in capacitance. For example, the capacitance of a condenser depends, among other things, on the distance between the two plates. The Beckman InfraRed Analyzer (Chapter 10) employs a variable capacitance detector. Some record-player pickup arms use variable inductances. In both inductance and capacitance transducers, a moderately high-frequency "carrier" current is varied by the signal because both inductances and capacitances have their greatest effect at higher frequencies. The variations in "carrier" current then can be amplified.

Certain crystalline materials, quartz for example, exhibit the "piezoelectric" effect. If a voltage is impressed across a wafer of the crystal, the crystal changes its shape. If an alternating current is used, the crystal vibrates, vibrating most strongly when the frequency corresponds to the natural period of the crystal. This effect is used most commonly in regulating the frequencies in communications circuits. The reaction is reversible, however, making it possible to use such crystals as transducers. If the crystal is compressed or vibrated, it will generate an alternating voltage. This signal alternates with a frequency corresponding to the vibration of the crystal, so the device can be used to detect small motions.

Probably the most spectacular of the mechanical transducers are the "strain gages." These devices depend upon the fact that the resistance of a wire changes a tiny amount when the wire is stretched. The wire (or a flat strip of a conducting metal) is arranged so that the pulling force is exerted on it, and at the same time, the wire is part of a Wheatstone bridge.
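The bridge arrangement matters because it converts a tiny resistance change into a voltage that can be amplified. A minimal sketch in Python, assuming a simple four-arm bridge; the 120-ohm gage and the amount of stretch are hypothetical figures of ours:

```python
def bridge_output(supply_volts, r1, r2, r3, r4):
    """Output voltage of a Wheatstone bridge.

    r1-r2 and r3-r4 form two voltage-divider arms; the output is the
    difference between the two midpoints. The bridge is balanced (zero
    output) when r1/r2 equals r3/r4, so a small change in one arm, such
    as a stretched strain-gage wire, produces a small but measurable
    off-balance voltage.
    """
    return supply_volts * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Balanced bridge of 120-ohm arms, then the gage stretched to 120.12 ohms
# (a 0.1 per cent change): the output moves from zero to a few millivolts.
balanced = bridge_output(10.0, 120.0, 120.0, 120.0, 120.0)
strained = bridge_output(10.0, 120.12, 120.0, 120.0, 120.0)
```

Because the output sits near zero at balance, even a large amplification does not overload the recorder, which is the chief virtue of the bridge connection.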
Changes in resistance can be amplified and used to drive a recorder.

Temperature Transducers: The thermocouple, the resistance thermometer, and the thermistor were discussed in Chapter 4. Each of these is properly called an "input transducer." The thermocouple produces a d-c voltage directly, while the other two instruments give variations in resistance. In either case, it is not difficult to fit the transducer into an electrical system.

Radiation Transducers: Transducers which respond to radiant energy probably occur in the greatest variety. This is true partly because light is energy, and within the visible and ultraviolet range the quanta are large enough to cause measurable electrical or chemical effects. The photovoltaic cell, or "barrier-layer cell," is a semiconducting device. Quanta of light displace electrons within the crystal. With a proper arrangement of semiconducting and conducting materials, the continued displacement of electrons becomes a small current which can be measured. Voltages always are low, but the currents are large enough to be measured on sensitive meters. Nearly all photoelectric exposure meters used in photography are barrier-layer cells.

A phototube is a vacuum tube in which the cathode is sensitive to light. Quanta of radiant energy impinging on the cathode release electrons which are drawn to the plate or anode. Since the current across the tube is dependent on the number of quanta striking the photoemissive surface in a unit time, the output of the tube is proportional to the light intensity.

A multiplier phototube is a phototube with a built-in amplifier called an electron multiplier. A series of dynodes (electrodes with increasingly positive charge) is arranged so that electrons liberated from the photoemissive cathode are drawn to the first dynode. Each electron causes the emission of additional electrons from the dynode surface. As shown in Fig.
13-6, a combination of eight or nine dynodes can result in a con- siderable multiplication of the current. Multiplier phototubes respond rapidly to very dim light or to quite small changes in light intensity. These tubes have been incorporated in a number of standard laboratory instruments such as spectrophotometers. Several semiconducting materials can be used effectively as light de- tectors because of a large decrease in resistance upon illumination. A dozen or more materials are available. Lead sulfide is used to measure infrared radiation in several of the commercial spectrophotometers. A 182 ELECTRICAL MEASUREMENTS typical cadmium selenide photoconductive cell is about the size of a small pea. These transducers have some disadvantages, but for certain applications in experimental research they can be very useful. + I00v 4-300V +500v ^^^. Photocathode +200v +400v +600v Fig. 13-6. Multiplier Phototube. A quantum of radiant energy causes the emission of an electron from the photocathode. At the first dynode (+100v) and at succeeding dynodes, additional electrons are liberated by secondary emission. Combinations: In many instances, some combination of transducers can be used. Although a very small movement might be detected directly w^ith a strain gage, it might sometimes be more convenient to measure such a movement by observing its effect on a beam of light. A sensitive galvanometer responds to a small current by moving a suspended coil of wire within a magnetic field. A small mirror, rotated along with the coil, reflects a beam of light which moves across the scale as the coil turns. The small mirror has less inertia than the needle in an ordinary meter, so the galvanometer responds to smaller currents. Thermocouples respond to temperature differences, but if one of the junctions is black- ened it will absorb light and become warmer and thus will measure ra- diant energy. 
A scintillation counter is a derived instrument because ionizing radiation causes flashes of light in the phosphor, and these flashes are detected by the multiplier phototube.

Output transducers

Of the many types of output transducers, the galvanometer was the earliest used. If a coil of wire is suspended between the poles of a magnet, any current in the wire tends to deflect the coil transversely across the magnetic field. A typical measuring meter consists of a coil mounted on jeweled pivots surrounded by the magnet. The torque or rotating force in any given meter is proportional to the current through the coil. An attached pointer moves across a scale, so current can be read directly in amperes, milliamperes (ma), or microamperes (μa). The same meter can be used to measure voltage if an unvarying resistor is connected in series with the coil. Current through the resistor (and the coil) depends directly on the voltage. If the resistor is built into the instrument, the meter scale can be graduated in volts. A dry cell (or other constant voltage source) and a meter can be used to measure resistance, once again because I = E/R.

Speakers: The use of speakers as output transducers is sometimes advantageous. In a complex instrumental arrangement, for example, the operator's eyes might be too busy to watch a meter. His ears, then, could detect changes in volume or pitch from the speaker. In some instruments speakers are used chiefly for demonstration purposes, while in others the speaker gives a warning when some misuse of the instrument or some other disaster is imminent. Speakers can also be used in conjunction with other output transducers.

Recording Instruments: The ultimate in convenience comes with the use of a recording instrument as the output transducer. A pen scribes a permanent record on a moving paper. Most frequently used are the various strip-chart recorders, in which paper from a roll is fed under the pen.
The pen moves across the paper by an amount proportional to the strength of the electrical signal. The recorder often contains a galvanometer and thus will measure current or, with an appropriate resistor, voltage. A pen is attached to the moving coil of the meter. Variations in the electrical signal are recorded as a curve, varying distances from some zero line. Since the paper moves with constant speed, the curve is plotted as a function of time.

The potentiometer recorder is more complicated but can respond more reliably to smaller fluctuations. The instrument is a null measuring device: the signal creates an imbalance in the instrument, and the instrument responds to this imbalance by actuating a motor which increases or decreases the resistance in the circuit, thus restoring the balance. The amount of motor movement required to return the instrument to the balanced condition is recorded on the paper. In several of the instruments the pen is attached to the same motor by a gear or pulley arrangement. The Brown Electronik Potentiometer Recorder is one of the favored instruments in laboratories everywhere. Several others may be as good or even better for some purposes.

Oscilloscopes: For relatively fast, high frequency responses, the cathode ray oscilloscope is used. This instrument is a vacuum tube in which one end is coated with a phosphor that glows under the impact of a beam of electrons. At the other end of the tube an electron "gun" beams electrons toward the face of the tube. Along the electron path two pairs of charged plates deflect the electron beam in the vertical or horizontal direction. A potential difference between the vertical control electrodes moves the electron beam upward or downward. A similar pair of electrodes controls movement in the horizontal direction.
In the laboratory oscilloscope the horizontal control electrodes are set to make the electron beam sweep across the face of the tube, return, and sweep again. The sweep frequency can be a few hundred to many thousands of cycles per second. The signal to be observed controls the vertical movement of the electron beam. The phosphor on the tube face glows for a short time after the passage of the electron beam, so that in effect the face of the tube shows a continuous curve, horizontally across the face, varying in height as the signal varies. The cathode ray oscilloscope is the standard test instrument in the electronics laboratory and can be used for a variety of other purposes.

Power supplies

The electricity delivered to a building is almost always alternating current. In the United States it is 60-cycle alternating current, usually 110 to 130 volts or some multiple thereof. Except in special circumstances, instruments must operate with this source of power. Vacuum tube amplifiers require 100 to 300 volts d-c between the cathode and the anode; means of providing these voltages must be available.

A diode vacuum tube can function as a rectifier, or for some applications a semiconducting device like a selenium rectifier is preferred. In either case, the resulting direct current pulsates with the same frequency as the original a-c. Figure 13-7 shows a half-wave vacuum tube rectifier and a full-wave rectifier which offers some improvement. Even full-wave rectification results in a pulsating d-c voltage, however. A filter composed of inductances and capacitances or of resistances and capacitances is used to smooth out the fluctuations. The inductance or resistance tends to oppose any increase in the voltage, while the capacitors tend to oppose any decrease in voltage. The result is a direct current with some "ripples," but one considerably smoother than the original pulsating d-c.
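The smoothing action of such a filter can be imitated numerically. This is only a sketch: the RC time constant and the simulation length are arbitrary illustrative values, and the filter is modeled as a simple first-order lag rather than a real circuit.

```python
import math

# Full-wave rectification turns a 60-cycle sine wave into pulsating
# d-c; a first-order RC filter then smooths the pulsations.
dt = 0.0001        # time step, seconds
rc = 0.05          # assumed RC time constant, seconds
steps = 6000       # 0.6 second of simulated time

v_out = 0.0
trace = []
for i in range(steps):
    t = i * dt
    v_rect = abs(math.sin(2 * math.pi * 60 * t))   # full-wave rectified
    # The capacitor opposes any change in the output voltage
    # (first-order lag toward the instantaneous rectified input).
    v_out += (v_rect - v_out) * dt / rc
    trace.append(v_out)

# After the start-up transient, the output settles near the average of
# the rectified wave (2/pi, about 0.64) with only a small ripple.
tail = trace[-1000:]
ripple = max(tail) - min(tail)
```

The pulsating input swings between 0 and 1 on every half-cycle, while the filtered output retains only a ripple of a few hundredths of that amplitude.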
If even better control is desired, an electronic regulating system can be used. Such a circuit employs gas-filled tubes, in which the voltage across the tube is independent of current, for reference. The voltage from the filter is compared continuously with the gas-filled tube, and any tendency to increase or decrease is counteracted. There are several means of achieving this control, some of which become quite elaborate.

Fig. 13-7. Rectifier circuits. Top: half-wave, using one diode. Bottom: full-wave, using two diodes (commonly enclosed in the same glass shell).

In addition to the high voltage for the amplifier tubes, the power supply is likely to provide a set of low voltage a-c or d-c supplies for heaters in vacuum tubes or for other purposes.

Amplifiers

The basic unit of the vacuum tube amplifier is the triode. The signal to be amplified is fed into the grid of the tube, where it influences the passage of electrons between the cathode and the anode. Signal fluctuations appear on the anode amplified 30 to 100 times. Several stages are frequently used in amplifiers, arranged so that the output of one stage is further amplified by the next stage. Total amplification may be 10,000 or more times.

The easiest amplifiers to build are those for alternating signals. The successive stages are connected through capacitors so that the alternating signal can pass, but any direct current cannot. A direct-coupled amplifier is more useful in biology because it will amplify direct currents as well as alternating currents. It is unfortunate that so similar an abbreviation, D.C., is used for direct-coupled as for direct current. Direct-coupled amplifiers come with built-in problems, and their design is best left to the electronics engineer. The problems are so serious that once they are solved the result is a very fine amplifier indeed. Biological amplifiers have some special requirements.
The "hi-fi" amplifier is constructed so that it provides the same amplification regardless of frequency over a wide range. The "addict" is proud to state that his amplifier is "flat" from 5 to 50,000 cycles per second. Biological signals are direct current, or very low frequency alternations, so this fine frequency response is wasted. If a response like a nerve potential is to be measured, then somehow the nerve cell must be connected to the grid of the first amplifier stage. A small voltage (bias) must be applied to the grid, however, and this bias is likely to have "unbiological" consequences in the nerve cell. Or, to put somewhat the same idea in a different way, if a pair of electrodes is attached to a cell or tissue, and the impedance through the external circuit is lower than the impedance of the cells, then any electrical potential in the cells results in a current in the external circuit instead of in the cells where it belongs. Therefore, some form of high impedance input circuit must be used with biological systems in order to prevent the instrument from influencing the cells.

Transistor amplifiers offer several advantages in biological experimentation by partially overcoming some of these problems. One of the difficulties in the use of a conventional amplifier for measuring biological reactions is the set of electrical properties at the junctions between the electrodes and the cells. Transistor amplifiers permit the use of electrode-tissue junctions with less tendency to "damp" or obscure the biological signal. Another advantage is that transistor amplifiers give their best performance in the range of frequencies encountered in biological experiments. Finally, the small size and low power requirements permit the use of transistor amplifiers in situations where a vacuum-tube amplifier would be impractical.
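The loading problem described above is essentially a voltage-divider relationship: the cell's potential divides between the cell's own impedance and the impedance of the measuring circuit. A sketch with invented impedance values (the one-megohm cell and the two input impedances are illustrative assumptions, not figures from the text):

```python
def measured_fraction(cell_impedance, input_impedance):
    """Fraction of the cell's potential that the instrument sees.

    The cell and the amplifier input form a voltage divider: the
    potential divides in proportion to the two impedances in series.
    """
    return input_impedance / (cell_impedance + input_impedance)

# Assumed illustrative value: a cell with 1 megohm internal impedance.
cell = 1_000_000

# A low-impedance input draws current through the external circuit and
# sees only a small fraction of the true potential...
low = measured_fraction(cell, 100_000)        # about 0.09

# ...while a high-impedance input leaves the potential in the cell
# "where it belongs" and sees nearly all of it.
high = measured_fraction(cell, 100_000_000)   # about 0.99
```

This is why a high impedance input circuit is required: only then does the instrument read the cell's potential without materially disturbing it.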
Transistor engineering is very complex, so even the biologist who is quite competent in electronics does not try to design his own circuits.

Potentiometric techniques

A number of laboratory instruments measure electrical potentials (or changes in potentials) which result from chemical reactions. Consider, for example, a piece of copper wire with one end in a solution of a copper salt. Copper atoms exist in an equilibrium, Cu ⇌ Cu++ + 2e−. Copper atoms will be deposited on the wire and removed from the wire at random. Once the equilibrium is established there will be no net increase or decrease in the amount of copper metal or Cu++. Any increase in the concentration of Cu++ or e− leads to a deposition of Cu metal on the wire. Now, consider a similar system containing another metal, say zinc, in a solution of its salt. The same kind of equilibrium will be established, Zn ⇌ Zn++ + 2e−, but the proportion of metal and ions is different. If these two solutions are placed in the same container and the two wires are connected outside the solution, interesting things start happening. Zinc atoms are more likely than copper to exist as ions. If we compare the two equations above, one is more likely to run forward, the other backward. At the zinc wire, Zn++ ions depart into the solution, leaving behind two electrons. At the copper wire, Cu++ ions pick up electrons from the wire and become Cu metal. As long as the wires are connected, electrons flow through the external circuit from one electrode to the other. The pair of chemical reactions, then, has generated a voltage which produces a current in the external wire. Almost any metal would work as one member of such a pair. In fact it need not even be a metal, because hydrogen gas and hydrogen ions reach the same kind of equilibrium. The hydrogen electrode is used as a reference point in potentiometric measurements.
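The direction of electron flow in the copper-zinc pair can be predicted from tabulated standard electrode potentials, referred to the hydrogen electrode as zero. The +0.34 and −0.76 volt figures below are standard-table values at 25° C, supplied here for illustration rather than taken from the text:

```python
# Standard reduction potentials in volts at 25 deg C, from standard
# electrochemical tables (not part of the original discussion).
STANDARD_POTENTIALS = {
    "Cu++/Cu": +0.34,
    "Zn++/Zn": -0.76,
    "H+/H2":    0.00,   # the hydrogen electrode: the reference point
}

def cell_voltage(cathode_couple, anode_couple):
    """Voltage of a pair of half-cells.

    A positive result means electrons flow through the external wire
    from the anode metal (which dissolves) to the cathode (where metal
    is deposited)."""
    return (STANDARD_POTENTIALS[cathode_couple]
            - STANDARD_POTENTIALS[anode_couple])

# Zinc dissolves and copper is deposited, just as described above.
emf = cell_voltage("Cu++/Cu", "Zn++/Zn")   # 1.10 volts
```

Because zinc's equilibrium lies farther toward the ionic side than copper's, the zinc couple sits lower in the table, and the pair develops about 1.1 volts under standard conditions.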
A number of kinds of biological reactions produce similar effects, directly or indirectly. These reactions can be followed by measuring the potentials developed. In practice one of the electrodes is commonly the calomel electrode (Hg2Cl2·KCl), and the other is a platinum wire which can absorb electrons from an oxidation-reduction reaction occurring in the solution. The potential developed in such a system is measured with a potentiometric circuit. The voltage to be measured is compared to some standard voltage by means of a bridge-like arrangement of resistors.

The electrical measurement of pH depends on a similar electrode reaction. A solution of known hydrogen ion concentration is contained in a tube of glass (the glass electrode) in contact with a silver-silver chloride couple. A calomel electrode is used as the other "half-cell." If the glass electrode is placed in a solution containing hydrogen ions, as all aqueous solutions do, hydrogen ions move through the glass membrane. They move inward or outward, depending on the pH of the solution; the result is a potential which can be measured. Within the useful range, the potential developed is a remarkably consistent function of the pH of the solution.

Fig. 13-8. Commonly used symbols: resistors (R, in ohms), capacitors (C, in farads, μfd, or μμf), inductances (L, in henries), transformers, connected and crossing conductors or wires, ground, switches and double-throw switches, fuses, cells and batteries of cells, potentiometers and voltage dividers, variable resistors, RC and LC filters, and the bridge. [The symbol drawings themselves are not reproduced here.]

Circuit diagrams

Most electronic instruments are accompanied by instruction books containing the complete circuit diagrams. These diagrams increase the likelihood that the operator can maintain the instrument and make necessary repairs and adjustments. Symbols have become more-or-less standardized, which makes the task easier.
Figure 13-8 is arranged to show some commonly used symbols as well as some representative circuit components. An interesting and very instructive exercise is to draw a circuit diagram by tracing the parts and connections of a finished electronic device, even a radio.

SELECTED REFERENCES

Anonymous, The Radio Amateur's Handbook, 38th ed. American Radio Relay League, 1960. The various editions of this handbook have been among the most useful references for those who work with electronics.

Lion, Kurt S., Instrumentation in Scientific Research: Electrical Input Transducers. New York: McGraw-Hill Book Company, Inc., 1959. A truly remarkable coverage of a difficult subject: comprehensive, detailed, factual, and yet readable.

Lurch, E. Norman, Fundamentals of Electronics. New York: John Wiley & Sons, Inc., 1960. Deliberately written for people who are not engineers.

Stacy, Ralph W., Biological and Medical Electronics. New York: McGraw-Hill Book Company, Inc., 1960.

Whitfield, I. C., An Introduction to Electronics for Physiological Workers, 2nd ed. London: Macmillan & Co. Ltd., 1960. Whitfield uses the British terminology (for example, he refers to vacuum tubes as "valves"), but some of his explanations are the most lucid to be found anywhere.

CHAPTER 14

Calculation of Data

Experimental data are meaningful only if they can be compared to some standard of reference and if general interpretations can be drawn from them. In all the sciences it has become customary to express data in certain sets of standard terms, just as the chemist expresses an amount of gas as the volume it would occupy at 0° C and 760 mm Hg. Biologists, whose materials are nonstandard, naturally have greater difficulty standardizing their figures. A chemist can speak of concentrations of solutions in moles per liter, but what is the concentration in moles per liter of the potassium ions in a single cell?
Or what is the concentration in moles per liter of the Chlorella cells in a Warburg manometer flask? Biologists must carefully specify the exact conditions under which a measurement was made if there is to be any hope of repeating the measurement. This can be accomplished by carefully detailed descriptions, but it can be done somewhat more simply by a careful choice of units in which to express results.

Amounts of biological material

If a 20-Kg dog eats 1 Kg of food a day, two 20-Kg dogs should eat about 2 Kg. In many instances such relationships hold reasonably well. Most of the computations expressed in this section are based on the assumption that the magnitude of an effect is directly related to the amount of biological material. Unfortunately, a 40-Kg dog is not likely to eat the same amount as the two smaller dogs, for a variety of reasons. The assumption made above must be used with considerable care.

A direct comparison of two experiments is possible only if the same amount of living material was used in both cases. An indirect comparison can be made by recalculating both sets of results in terms of some standard amount of material. Simply weighing the cells in both experiments makes such a comparison possible, since the results can be expressed in terms of amount of change per gram of cells. Cells or tissues weighed in the living condition yield a "fresh weight." Suppose we were measuring the amount of a certain ion absorbed by slices of potato tuber, and we wished to compare two batches of potatoes. The absorption of ions could be expressed as a number of grams (or milligrams) per gram (fresh weight) of tissue. Because one batch of potatoes might contain a much larger amount of water, relatively, than the other, a direct comparison on the basis of fresh weight could be misleading.
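The misleading comparison can be made concrete with a small computation. All the numbers here are invented for illustration:

```python
# Hypothetical data: two batches of potato slices absorb the same
# amount of ion, but batch B holds more water, which inflates its
# fresh weight without adding any absorbing tissue.
batches = {
    "batch A": {"mg_ion": 1.2, "fresh_g": 10.0},
    "batch B": {"mg_ion": 1.2, "fresh_g": 14.0},   # extra water only
}

def per_gram_fresh(batch):
    """Absorption expressed per gram of fresh-weight tissue."""
    return batch["mg_ion"] / batch["fresh_g"]

a = per_gram_fresh(batches["batch A"])   # 0.12 mg per g fresh weight
b = per_gram_fresh(batches["batch B"])   # about 0.086 mg per g

# The two batches absorbed identically, yet on the fresh-weight basis
# batch B appears less active -- the comparison is misleading.
```

The water content has entered the denominator of the comparison even though it took no part in the absorption.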
A more realistic measure of the amount of potato tissue, in this case, is the dry weight obtained by drying the tissue in an oven after the experiment is completed. Alternatively, one sample of each batch of potatoes could be used in the experiment and another sample of each, equal in fresh weight and volume, could be dried. Even this method is not entirely adequate. The absorption of ions sometimes depends upon metabolic activity, which in turn depends upon the relative concentration of enzymes present. If one kind of potatoes has a large dry weight, but most of this weight is metabolically inactive starch, the results expressed on the dry weight basis are not very useful. A better comparison would be based on the amount of protein nitrogen per unit of potato tissue. The amount of protein would indicate the amount of enzymes present; thus, results expressed as milligrams of ions absorbed per unit of nitrogen are more realistic. This example illustrates a rather common dilemma in biological experiments. As the experimental results become more directly comparable, it becomes necessary to make two sets of measurements in each experiment: measurements of the amount of living material, as well as the measurement of the phenomenon being studied.

Manipulations of raw data

The purpose of an experiment is to answer a hypothetical question, but the results are just a set of numbers. The question cannot be answered unless the numbers are in a form directly related to the form in which the question was stated. Several kinds of manipulations of the raw data are sometimes necessary. One type is the transformation of dimensional units described in the preceding section. Another type is a conversion of the data produced by an instrument into other terms. For example, a strip-chart recorder might record millivolts when we really want to know a change in chemical composition.
In the properly designed instrument, the actual output will be related to the information being sought by some unvarying mathematical relationship or "transfer function." In the ideal instrument, the output is directly proportional to the response, the transfer function is linear, and the only correction required is a multiplication by a constant.

We must put up with random errors or variations of measurement. We can minimize such errors or estimate the size of the variations, but we cannot eliminate this source of error and we cannot correct for it. A systematic error, however, leads to an inaccuracy of measurement which results from some defect in the standards used for comparison. The too-short meter stick was used as an example in Chapter 4. We can estimate the magnitude of errors of this sort and then correct for them. If we know, for example, how much too short our meter stick is, a simple computation gives us corrected values.

The following extensive example illustrates several kinds of manipulations of data. A set of measurements of rate of photosynthesis was made using the manometric method (Chapter 10). The data were recorded on a printed form (one suggested by Umbreit, Burris, and Stauffer¹), and some of the computations were performed on the same sheet. Figure 14-1 shows such a record sheet in an abbreviated form. The thermobarometer vessel contained only water. The other manometer contained Chlorella cells suspended in KHCO3 solution so that the only gas exchange affecting the pressure was the production of oxygen. The constant, K(O2) = 1.32 μl O2/mm of manometer fluid, is a transfer function. The "raw data" include the time, the pressure on the thermobarometer, and the pressure on the experimental manometer. The second column under the thermobarometer and also under the experimental manometer is the amount of change since the beginning of the experiment. In the third column under the experimental manometer the first correction is made.
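The two operations on the record sheet amount to a subtraction and a multiplication per reading. A minimal sketch (the readings are hypothetical, but the constant 1.32 μl O2/mm is the one quoted above):

```python
K_O2 = 1.32  # microliters of O2 per mm of manometer fluid (from the text)

def oxygen_evolved(tb_change_mm, exp_change_mm):
    """Correct an experimental manometer change by the thermobarometer
    change (the systematic error from room-pressure drift), then apply
    the transfer function to convert corrected millimeters of manometer
    fluid into microliters of oxygen at standard conditions."""
    corrected_mm = exp_change_mm - tb_change_mm
    return corrected_mm * K_O2

# Hypothetical readings for one interval: the experimental manometer
# rose 38 mm while the thermobarometer rose 2 mm.
ul_O2 = oxygen_evolved(2, 38)   # 36 mm corrected, about 47.5 ul O2
```

Because the transfer function is linear, the whole reduction is a constant multiplication once the systematic correction has been subtracted out.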
The pressure change exhibited by the thermobarometer represents a systematic error, that is, a change in room air pressure since the beginning of the measurement. Application of this correction changes the experimental values slightly. The numbers in the last column under the experimental manometer were derived by multiplying by the constant, an operation which converts (corrected) millimeters of manometer fluid to microliters of oxygen at standard conditions.

¹ W. W. Umbreit, R. H. Burris, and J. F. Stauffer, Manometric Techniques, 3rd ed. (Minneapolis: Burgess Publishing Company, 1957).

Fig. 14-1. Record of a manometric measurement of photosynthesis; student data. [The handwritten record sheet, not legibly reproduced here, shows readings taken every five minutes for thirty minutes, with columns for the thermobarometer reading and its change since the beginning, and for the experimental manometer reading, its change since the beginning, the change corrected by the thermobarometer, and the corrected change multiplied by the constant K(O2) = 1.32 μl O2/mm; the average change was 24.3 μl O2 per five-minute interval.]

Further computations can be made. We know the amount of oxygen produced in thirty minutes by this lot of Chlorella cells. The average rate of photosynthesis (in μl O2/hr) is given by multiplication. We know the volume of the cells and the amount of chlorophyll they contain, so we can calculate the rate in any terms we choose. In some experiments it is possible to combine all the corrections and computations in a single equation. Using the number of millimeters of manometer fluid (corrected by thermobarometer reading) from Fig.
14-1, we can compute the rate of photosynthesis in μl O2/μg chlorophyll × hr by the following equation:

(111 mm manometer fluid / 30 min × 3 ml cell suspension) × (1.32 μl O2 / 1 mm manometer fluid) × (60 min / 1 hr) × (1 ml cell suspension / 1 mm³ cells) × (1 mm³ cells / 1.7 μg chlorophyll) = 57 μl O2 / hr × μg chlorophyll

Some of the dimensions and some of the numbers cancel, giving us the numerical rate in the desired terms. Arranging all corrections and conversions in this manner provides a dimensional check. If we performed each conversion separately, we might end with a rate in terms of oxygen production per square milligram of cells, or another such ridiculous unit.

Aids in calculation

Computations are likely to be long and involved in any extensive experiment. If an aid to calculation can be used without sacrifice of accuracy, it certainly should be employed. The figures resulting from biological experiments are subject to natural variation, and there is no need to maintain precision to twelve significant figures. Since three significant figures are usually adequate, slide rule accuracy is good enough. The biologist is likely to use his slide rule for multiplication and division and for finding squares, square roots, and logarithms. The trigonometric scales probably are used only rarely. Even this limited use, however, makes the slide rule a valuable instrument.

For problems requiring more significant figures, or for sequences involving additions and subtractions which cannot be done on a slide rule, the electrical calculators are a great help. These machines can add, subtract, multiply, and divide automatically. With more or less ease, square roots can be determined. A competent operator can make calculations very rapidly with few mistakes. Electrical calculators are used extensively in statistical computations.

Mathematical treatments

Physical theories are most desirably expressed as simple mathematical equations.
Biologists would like to present their theories in the same way, but so far nothing has appeared in biology with the beautiful simplicity and profound generality of E = mc². Nonetheless, on a lower level, experimental results and theoretical interpretations of the data can fit simple equations quite well.

Analytic geometry presents a variety of equations for geometrical figures. For example, if a graph is plotted on rectangular coordinate paper, the values of y on the vertical axis bear some natural relationship to the values of x on the horizontal axis. If y = mx + b, the graph is a straight line, m is the constant slope, and intercept b is the value of y when x is zero. The slope can be positive or negative.

The straight line or linear relationship is very common in the laboratory. A verbal expression indicating the same relationship is "y is directly proportional to x." The data in Fig. 14-1 can be used as an example. At zero time, the manometers contained some amount, b, of oxygen, although we were not concerned about this amount. After some time, it was apparent that the amount of oxygen produced in each five-minute interval was about the same. The slope, then, is the average change per five minutes, or 24.3 μl O2/5 min. The total amount of oxygen in the vessel at the end of the measurement is

(24.3 μl O2 / 5 min interval) × 6 (5 min intervals) + 17 μl O2 at start = 146 + 17 μl O2

m × x + b = y

In our previous calculations, we assumed that b was unimportant and measured only the change since the beginning. This assumption does not change the equation; it merely assigns a value of zero to b, so y = mx + 0. Sometimes the actual expressions for which x, y, m, and b stand are exceedingly complex. A part of the genius of the theoretically-minded biologist lies in the ability to recognize simple equations in these complex expressions or to convert some more complex relationship into a straight line.
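The linear relation y = mx + b can be checked against the oxygen figures quoted above: a slope of 24.3 μl O2 per five-minute interval and 17 μl present at zero time. The code itself is only an illustrative sketch of that arithmetic:

```python
def straight_line(m, x, b):
    """The linear relationship y = m*x + b."""
    return m * x + b

slope = 24.3      # ul O2 per 5-minute interval (from the text)
intercept = 17    # ul O2 present at zero time (from the text)

# Total oxygen in the vessel after six 5-minute intervals:
total = straight_line(slope, 6, intercept)   # 145.8 + 17 = 162.8 ul O2

# Assigning b = 0 measures only the change since the beginning,
# exactly as was done in the earlier calculations:
change_only = straight_line(slope, 6, 0)     # 145.8 ul O2
```

Dropping the intercept shifts the whole line but leaves the slope, and therefore the rate, untouched.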
The straight line is probably the most common relationship, partly because many experiments are set up to test for such a relationship. Perhaps the next most common equation is that for the hyperbola. The basic equation for the hyperbola,

x²/a² − y²/b² = 1

gives a curve like that in Fig. 14-2. Rotation of the axes yields another expression, xy = k, which is graphically represented in Fig. 14-3. This expression is quite commonly represented in the results of experiments because it indicates that y is inversely proportional to x, or y = k/x.

Fig. 14-2. x²/a² − y²/b² = 1

Fig. 14-3. xy = k

Still another transformation, involving rotation and translation of the axes, yields the curve in Fig. 14-4, with the equation

Hx + Ky − xy = 0

where K is a constant with a small negative value and H is a constant with a larger positive value. Such a curve might result in an experiment where, for example, an over-all rate is controlled and limited by two separate factors. When x is small, the rate is limited by x, so that the rate increases rapidly as x increases. When x is large, some other factor limits the rate, so further increases in x make little difference in the rate.

Exponential and logarithmic expressions also occur frequently in the treatment of experimental results. The general exponential equation, x = a^y (where commonly a = 2, e, or 10), can be written in the form y = log_a x. This equation produces the curve of Fig. 14-5. Because of its curvature, this graph is difficult to interpret. Notice the similarity to Fig. 14-4. The logarithmic curve is easily converted to a straight line, however. If we set z = log_a x, then y = z, which is an equation for a straight line (Fig. 14-6). If it is suspected that the data fit a logarithmic curve, plotting log x against y tests this idea by either producing a straight line or failing to do so.
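That test can be carried out numerically as well as graphically. A sketch, using invented data that happen to follow y = log10 x:

```python
import math

# Suspected logarithmic data (invented for illustration): y = log10(x).
xs = [1, 10, 100, 1000, 10000]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]

# Substitute z = log10(x); if the suspicion is correct, y plotted
# against z is a straight line (here, simply y = z).
zs = [math.log10(x) for x in xs]

# A straight line has a constant slope, so the slope computed between
# each pair of successive points should be the same throughout.
slopes = [(ys[i + 1] - ys[i]) / (zs[i + 1] - zs[i])
          for i in range(len(zs) - 1)]
is_straight = max(slopes) - min(slopes) < 1e-9
```

If the transformed points failed to give a constant slope, the logarithmic hypothesis would be rejected, just as a curved trace on semilog paper would reject it graphically.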
This graph is easiest to construct on semilog paper, a special graph paper on which the subdivisions on one axis are logarithmic rather than equally spaced. Log-log paper, in which both axes have logarithmic scales, is sometimes useful; it is almost a laboratory joke, however, that log-log paper will make a straight line out of any set of data. The logarithmic relationship could, of course, exist in the opposite form, log y = x.

Fig. 14-4. Hx + Ky − xy = 0

Fig. 14-5. y = log x

Fig. 14-6. y = z, where z = log x

Differential equations handle all the considerations above a good deal more rapidly than algebraic treatment. Most experimental biologists eventually take courses in differential and integral calculus so that they can take advantage of this useful mathematical tool.

CHAPTER 15

Statistical Treatments

A formal mathematical analysis of the results of experimental research is almost a necessity if the data are to be interpreted in terms of some hypothesis. This analysis is needed because of the natural variation in the numbers obtained from all measurements. Most hypotheses cannot be answered with an unquestionable "yes" or "no"; the variability of the results makes us say "probably." The formal system known as statistics, or more specifically as biometry, allows us to say "probably" with some estimate of how closely this "probably" approaches a definite "yes" or "no."

Biometry is now a method for the analysis of the quantitative data resulting from biological research. Although it is a branch of the division of mathematics known as statistics, biometry includes and uses almost all of statistics. "Statistics" originally were numbers, particularly numbers applied to populations of people. Thus "vital statistics" include such figures as the number of births, deaths, marriages, the incidence of diseases, and the number of persons employed by the automobile industry.
Modern statistics includes a great deal more than measurements or records concerning people. Even the grammatical usage of the word has changed. When "statistics" is construed as a plural noun, each statistic is a number or bit of information. "Statistics" used as a singular noun, however, refers to a field of mathematics. Usually we can judge the meaning from the context.

In addition to their usefulness in analyzing data, biometric principles are useful as a guide in planning experiments. If we know in advance what form of statistical analysis will be required, we can be sure to take enough measurements to assure meaningful results, without taking a wastefully large number of measurements. More important, perhaps, we can be sure to take the right kind of measurements.

Probability

Statistics, a field based upon the laws of probability, treats events that occur at random. Most people have an intuitive notion about simple probability and agree readily that an honest coin comes up heads half the time and that seven is most likely to appear in a roll of a pair of dice. People are gamblers, not because they understand probability, but because they do not. The professional gambler stays in business because he knows that any individual event has a certain probability of occurring, regardless of what has happened before. In the long run, the distribution of events follows the individual probabilities. If a million coins are tossed, very nearly half a million come up heads.

Truly random events occur according to probabilities that are inherent in the events themselves and are not influenced by outside conditions or time sequences. Any individual atom of a radioactive isotope has a certain probability of decaying within the next second. This likelihood is not influenced by the presence of other atoms of the same kind. Random events are completely unpredictable on an individual basis.
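The long-run regularity described above is easy to demonstrate by simulation, a convenience of modern computing rather than anything available when this account was written. The fixed seed below is an arbitrary choice, made only so that the run is repeatable:

```python
import random

# Toss a million "honest coins": each toss is an independent random
# event with probability 1/2 of coming up heads, uninfluenced by
# whatever happened before.
rng = random.Random(42)   # arbitrary fixed seed for repeatability
tosses = 1_000_000
heads = sum(rng.random() < 0.5 for _ in range(tosses))

fraction = heads / tosses   # very nearly one half
```

Any single toss is unpredictable, yet the fraction of heads in a million tosses deviates from one half by only a small fraction of one percent.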
If measurement is influenced only by random variations, it is just as likely that the measurement will be a little too large as a little too small.

In theory, events occur at random because they are not influenced by outside conditions. In practice, it is too easy for events to be affected by bias of some sort, and therefore not to occur at random. In the analysis of experimental results, procedures are used which assume that the errors occur at random. The error or variability of measurement can be treated statistically only if the variability is random. For this reason, special steps must be taken in planning the experiment to assure the randomness of the errors. Laboratory biologists do not commonly carry out formal randomization steps but, instead, hope to obtain experimental results that answer the hypothesis even without statistical treatment of the data. Unfortunately, too many of them perform statistical analyses, even without prior planning. If the assumption of true randomness is ignored, the statistical analysis is not only meaningless but deceptive as well.

Statistics ideally treats populations of things or events. By a population we mean all the possible things in a particular class, just as the human population includes all the people. There would be an infinite number of possible repetitions of a certain measurement, and these together would constitute a population. A population has certain characteristics, including variability, that can be described in detail. Obviously experimental work with populations is impractical; usually we work with samples of populations. A sample of the human population might include all the people in a city or all the people in a room. The smallest sample is one person. The features of a sample are similar to the features of a population but obviously cannot be identical. If a sample is large, it more truly represents the population.
Statistical procedures have been developed for dealing with populations and with large and small samples.

The normal curve

One of the characteristics of a population, or even of a sample, is a natural variability. In general, the individual values in the population of numbers tend to clump around a certain average value, but some separate values will be very much larger or very much smaller. There are several ways of describing patterns of variation, but when dealing with populations, all these methods approach the bell-shaped "normal curve." A population of numbers is likely to be normally distributed, that is, to vary according to the normal curve.

Several "parameters" can be used to describe this normal distribution within populations, but two of these are most important. The value around which the other values seem to be centered is the average or mean. When speaking of populations it is given the symbol m. The mean is found by adding all the values and dividing by the number (N) of values. This process is expressed mathematically by calling each value or variate x1, x2, x3, . . . , xN. The symbol xi stands for any value of x: x1, then x2, then x3, and so on. The capital sigma (Σ) is used to indicate summation, so Σxi, with i running from 1 to N, means "the sum of all the x values from x1 to xN." The arithmetic mean then is

m = (Σxi)/N

In the normal curve, the total of all deviations above the mean is equal to the total of all deviations below the mean. If there is likely to be no question about the meaning, the simpler expression, m = Σx/N, can be used in place of the one above.

Fig. 15-1. A pair of normal curves having the same mean (m) but different standard deviations (σ).

Another characteristic of the normal curve is an amount of variability. Figure 15-1 illustrates two normal curves with the same mean but different variability. The deviations from the mean are much greater and more numerous in the lower curve.
This relationship is made quantitative by calculating the variance. Each variate differs from the mean by an amount x − m, and those values lying below the mean are negative. The negative signs are eliminated by squaring each deviation. The sum of the squared deviations, divided by the number of such values, yields the variance (σ²).

σ² = Σ(x − m)²/N

The square root of the variance is called the standard deviation σ. The mean and the variance provide enough information to determine the shape of the normal curve.

Parameters of samples

Because the mean m and the variance σ² refer to the normal distribution of the population, they are rarely known precisely. They can be estimated from the characteristics of a sample or a series of samples. The better the sampling procedure, the better the estimate of m and σ². Each sample, even if it includes only one variate, has a sample mean, which might be nearly the same as the population mean or quite different. If a large number of samples is taken, the average of all the sample means provides a good estimate of the population mean. The sample mean is calculated in the same way as the population mean but is given the symbol x̄.

x̄ = Σx/n

where n is the number in the sample.

If n is large, the variance of the sample (s²) can be calculated in the same way as the population variance. The sample variance only approximates the variance of the population, however, and tends to underestimate the population variance by a factor of (n − 1)/n. For this reason, sample variance is calculated by

s² = Σ(x − x̄)²/(n − 1)

The standard deviation (s) of the sample is the square root of this value. A working formula which makes computation on an electric calculator easier is

s² = [Σx² − (Σx)²/n]/(n − 1)

The standard error of the mean is a commonly used figure. It is defined as the standard deviation of a distribution of means.
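The sample formulas above translate directly into a few lines of code. The sketch below is a modern illustration, not part of the original text; the six measurements are invented for the example. It computes the sample mean, the sample variance by the definition, and the same variance by the working formula, and the two agree exactly:

```python
import math

x = [104.0, 99.0, 111.0, 107.0, 106.0, 100.0]  # hypothetical sample, n = 6
n = len(x)

mean = sum(x) / n                                    # x-bar = (sum of x)/n

# Definition: s^2 = sum((x - x-bar)^2) / (n - 1)
s2_def = sum((xi - mean) ** 2 for xi in x) / (n - 1)

# Working formula: s^2 = [sum(x^2) - (sum(x))^2 / n] / (n - 1)
s2_work = (sum(xi * xi for xi in x) - sum(x) ** 2 / n) / (n - 1)

s = math.sqrt(s2_def)  # standard deviation of the sample
```

The working formula needs only the running totals Σx and Σx², which is exactly why it suited an electric calculator.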
If a large number of samples is taken from a population, the means of the samples are distributed in a normal curve. The variance of the distribution of means, σx̄², can be shown to equal σ²/n. The standard error of the mean, then, is σ/√n. There are other ways of arriving at a value for the standard error; some of these will be discussed later with some other special statistics.

Tests of significance

The aim of statistical analysis is an estimate of the significance or meaning of the data. The analysis reveals the probability that the observed effects result from the experimental treatment, as opposed to pure chance.

Let us imagine an experiment as an example. We have observed that in vitro a chemical compound C combines with one of the enzymes involved in cellular respiration, but C is not a normal metabolic material. We can form the hypothesis that if C is introduced to living cells it should inhibit the normal respiration. The experiment involves a comparison of the activity of cells in the presence and in the absence of C. Now, the normal cells will vary, and one sample of cells might be quite different from another sample even if all the untreated cells are part of the same population. Suppose that the cells treated with C respire somewhat more slowly than the average of normal cells. The difference might result from the treatment, but it is also possible that the difference is the result of chance, that is, that even a sample of untreated cells could give a rate this much lower than average. A statistical analysis reveals the likelihood or probability that a difference this large or larger could result from chance.

If the mean and variance of the population are known, the test of significance becomes relatively easy. The area under the normal curve, or any portion of the total area, can be calculated.
If we calculate the area included between a point 1σ below the mean and the point 1σ above the mean, about 68 per cent of the area under the curve is included. This means that 68 per cent of the variates lie between these limits (see Fig. 15-2). Between −2σ and +2σ, about 95 per cent of the cases are included, and 99+ per cent lie between ±3σ. Other calculations can be made as well. For example, we could find a pair of lines, one on each side of the mean, that would include 50 per cent of the cases.

Our experiment provided us with a figure for the rate of respiration in cells treated with compound C. If we know the mean and variance of the respiratory rates in normal cells, we can calculate the probability that the observed results are different. In theory, it is impossible to know the mean and variance of the population, but, if we have made measurements on a large number of samples, we can have a very good estimate of the parameters of the population. If the rate observed in treated cells is 4.5σ below the mean for untreated cells, then the probability that this sample belongs to the population of untreated cells is very small indeed. If the rate is only 1σ below the population mean, it would be dangerous to conclude that the treatment has had an effect because 32 per cent (100 per cent − 68 per cent) of the samples of the untreated population will deviate from the mean this much.

Normal Deviate: In the previous few paragraphs we have been speaking, without defining it, about a statistic known as the normal deviate.

Fig. 15-2. The normal curve, showing the relationship between the standard deviation and the included area under the curve.

A value can be tested for significant deviation from the population mean by expressing the deviation from the mean in terms of standard deviations.
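The areas quoted above (about 68, 95, and 99+ per cent within ±1σ, ±2σ, and ±3σ) can be verified with the error function from Python's standard library. This sketch is a modern addition, not part of the original text:

```python
import math

def area_within(k):
    """Fraction of a normal population lying within k standard
    deviations of the mean: P(|x - m| < k*sigma) = erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

print(round(area_within(1), 3))  # 0.683 -- about 68 per cent
print(round(area_within(2), 3))  # 0.954 -- about 95 per cent
print(round(area_within(3), 3))  # 0.997 -- the "99+" per cent
```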
The normal deviate (c) is

c = (x − m)/σ

which indicates that the value differs from the mean by a certain number of standard deviations. Published tables of probability tell us how frequently such deviates occur in the normal curve. In 5 per cent of the samples, c is 1.96 or greater; in 1 per cent of the samples, c is 2.58 or greater; and in 0.1 per cent of the samples, c is 3.29 or greater.

The normal deviate is also used to test the significance of the difference between the mean of a sample and the mean of a population. In this case, the standard error of the mean is used instead of the population standard deviation:

c = (x̄ − m)/σx̄ = (x̄ − m)/(σ/√n)

Confidence Limits: In a statistical test of significance we select some arbitrary probability limits. If a deviation as large as that observed occurs by chance in only 5 per cent of the cases, it may be safe to conclude that the deviation is the result of the experimental treatment. In another experiment we might choose the 1 per cent level (probability = 0.01) of significance. The level of statistical significance chosen tells the confidence with which we can draw conclusions. We can never say absolutely "yes" or "no," but only "probably" or "very probably."

The actual choice of confidence limits depends upon the seriousness of drawing the wrong conclusion. It is possible to accept a false hypothesis or to reject a true hypothesis on the basis of chance variations alone. The rejection of a true hypothesis is called an error of the first kind, and the acceptance of a false hypothesis is called an error of the second kind. Depending upon the experiment, one error is more serious than the other. As an example suppose we test the hypothesis, "Drug A is harmless to human beings." The drug will be released for public use only after statistical tests. If we select the P = 0.05 level of significance, we are 95 per cent certain that the drug is harmless.
This is not sufficient confidence because the 5 per cent chance that the drug is actually harmful is too great a risk.

Student's t Test: When the variance of the population is unknown and cannot be reasonably estimated, as is true in experiments involving small samples, the normal deviate test of significance cannot be used. An English statistician who signed his name "Student" derived a statistical test useful in such cases. A value of t is computed in a manner similar to the computation of c, with the sample standard deviation taking the place of σ:

t = (x̄ − m)/(s/√n)

Here s is the standard deviation of the sample of which x̄ is the mean. If the sample is large, s is very nearly equal to σ. If the sample is small, s is likely to be somewhat smaller than σ. The probabilities of t values arrived at in this way are thus slightly different from the distribution of c values. Tables of probability with which various values of t occur are used in this test of significance. The actual probability of a certain value of t depends upon the size of the sample; the test is said to have a certain number of degrees of freedom, equal to n − 1, where n is the sample size. To test whether a certain sample belongs to a population, we calculate the value of t, look up the probability in the table, and then draw conclusions in the same manner as in the normal deviate test.

The Chi-square (χ²) Test: In some kinds of experiments the data occur as discrete numbers rather than as continuously variable quantities. A genetics experiment might yield flowers that are either red or white, and the observation of the results consists of counting the two types. According to hypothesis, the two types should occur with a certain probability. For example, we might expect three reds for each white; the probability for red is 3/4, and that for white is 1/4. The χ² test is an approximation of a much more involved test for deciding how well the observed counts fit the hypothesis.
A χ² value is calculated for each class of events. In the experiment we observe a certain number of white flowers (o), but from the hypothesis we can calculate the most probable number (c). The χ² value is the square of the deviation from the expected value divided by the expected value:

χ² = (o − c)²/c

A similar calculation is made for each of the classes. The final χ² value is the sum of these individual numbers:

χ² = (o − c)²/c (white) + (o − c)²/c (red)

In this test, only one degree of freedom exists because if a flower is not white, it has to be red. In another experiment, flowers might be red, white, or pink; here there are two degrees of freedom. A table of χ² values is used in tests for significance. If the value of χ² is larger than could be expected (P = 0.05 or 0.01) for this number of degrees of freedom, then the observed data do not fit this hypothesis. In the example the expectation was 1/4 white and 3/4 red. If the χ² value suggests that this hypothesis is incorrect, we could test some other hypothesis, such as 1/2 white and 1/2 red.

Analysis of variance

In the ideal controlled experiment all factors are constant except the one being tested. The significance of this varying factor is easily tested by one of the foregoing statistics. In practice, however, the effect observed in an experiment is likely to be influenced by several factors. A truly efficient experiment tests the effect of several factors at the same time. Simple statistical tests become extremely difficult in this case, however. Each factor might act independently or might interact with one or more of the other factors. Analysis of variance is a system of treating such experimental data. Since the actual analysis is intimately tied to the design of the experiment, a discussion of analysis of variance is given in the following chapter on experimental design.
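The χ² computation described above is short enough to sketch in code. This is a modern illustration, not part of the original text, and the flower counts are invented for the example:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (o - c)^2 / c over all classes."""
    return sum((o - c) ** 2 / c for o, c in zip(observed, expected))

# Hypothetical counts: 72 red and 28 white flowers out of 100, tested
# against the 3:1 expectation (75 red, 25 white).
x2 = chi_square([72, 28], [75, 25])
# x2 = 9/75 + 9/25 = 0.48, well below the P = 0.05 critical value of
# 3.84 for one degree of freedom, so the counts fit the 3:1 hypothesis.
```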
Regression and correlation

One of the most important things we might wish to find from an experiment is the relationship between a pair of variable quantities. In any experiment, changes in x might produce proportional changes in y. The relationship is a straight line, but which straight line best fits the results? If the data are very good, the points on the graph deviate very little from the line that best describes the relationship, and the line, called the regression line, can be drawn by eye. If the results include a larger error, visual inspection might suggest any of several lines to fit the points, as is true with the data shown in Fig. 15-3. Obviously x has an effect on y, and y = a + bx,* but what is the true slope b and the intercept a?

Fig. 15-3. A scatter diagram showing three alternative straight lines that might be drawn through the points.

* Note that the symbols here are different from those given in Chapter 14. The expression y = mx + b is the same straight line. Here a is substituted for b and b for m.

Lines are commonly drawn through such variable data by the method of least squares. The sum of the squares of all the deviations from the line is made to be a minimum. In other words, that line is found such that the sum of the squared deviations is smallest. If y = a + bx, and if we know a and b, the line is determined. In the least squares formulas

a = (Σy Σx² − Σx Σxy)/(N Σx² − (Σx)²)

b = (N Σxy − Σx Σy)/(N Σx² − (Σx)²)

where N is the number of points. These are the basic equations; variations may be used in special cases, as when the number of points is small.

The correlation coefficient (r) was devised to express quantitatively the relationship between two sets of variables. It has values ranging from −1 through 0 to +1. If r is positive, values that are large with respect to x are also large with respect to y. Taller men are heavier, in general. Zero correlation means there is no consistent pattern between the x values and the y values.
Negative values of r mean that values large in respect to x are small in respect to y. The correlation coefficient actually measures the relationship of x and y in terms of their normal deviates (c):

r = Σ(cx cy)/n

This expression is calculated by means of the following approximation:

r = Σ[(x − x̄)/sx][(y − ȳ)/sy]/n = Σ(x − x̄)(y − ȳ)/(n sx sy)

The computation is easier on the electric calculator if the equation is rearranged to

r = (Σxy/n − x̄ȳ)/(sx sy)

The correlation coefficient can provide valuable information but must be used with discretion. It is too easy to attribute a cause-effect relationship to a situation where r is large. For example, we might find a correlation between the amount of water flowing in a river and the number of automobiles sold. There is no conceivable way in which one of these could cause the other, even though they might be related through a third factor, namely the season of the year.

Other statistics

The mean is the most commonly used measure of central trend in experimental work. Two other measures are important in certain situations. The median divides the distribution of variates into two equal halves; half the values lie below the median, half above. The mode is that value which occurs most frequently. In the normal curve the three measures coincide.

Several methods other than the variance exist for estimating the variability of a distribution, or the "error." Experimental results are frequently presented in papers as (for example) 167 ± 13. The error figure (±13) might have any of several meanings, and it can be interpreted by a reader only if the writer specifies how it was calculated. The range, a useful estimate in some cases, is the difference between the largest value and the smallest. It gives quick information, and, if the ranges of two samples do not overlap, the difference between the samples is almost certainly significant.
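Returning to the regression and correlation formulas above, both are easy to sketch in code. This is a modern illustration, not part of the original text; the four points are invented to lie exactly on y = 2 + 3x, so the fit recovers a = 2 and b = 3, and r = +1 for a perfect increasing relationship:

```python
import math

def least_squares(xs, ys):
    """Fit y = a + bx by the least squares formulas:
    a = (sum(y)*sum(x^2) - sum(x)*sum(xy)) / (N*sum(x^2) - (sum(x))^2)
    b = (N*sum(xy) - sum(x)*sum(y)) / (N*sum(x^2) - (sum(x))^2)"""
    N = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = N * sxx - sx ** 2
    a = (sy * sxx - sx * sxy) / denom
    b = (N * sxy - sx * sy) / denom
    return a, b

def correlation(xs, ys):
    """r = sum((x - x-bar)(y - y-bar)) / (n * sx * sy), where sx and sy
    use an n divisor; with that choice the expression is exact."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - xbar) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - ybar) ** 2 for y in ys) / n)
    return sum((x - xbar) * (y - ybar)
               for x, y in zip(xs, ys)) / (n * sx * sy)

xs, ys = [0, 1, 2, 3], [2, 5, 8, 11]  # hypothetical points on y = 2 + 3x
a, b = least_squares(xs, ys)
r = correlation(xs, ys)
```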
Probable error is a deviation from the mean such that ±1 PE includes one-half the cases. If the distribution of values is normal, ±1 PE also includes half the area under the curve; from this, PE = 0.6745σ.

The standard error of the mean (SE) is the preferred statistic to be included in tables of data. If everyone agreed with this statement, then 167 ± 13 above would be the mean and the standard error of the mean. The standard error was calculated as σ/√n.

Radioactive decay is almost perfectly random, and the number of counts determined is usually large. In this case a short-cut method of computing the standard error can be used. A certain number of counts (C) in a total time of t minutes yields an average rate of C/t counts per minute. The standard error is computed as √C/t. This value is so near to σ/√n that the agreement between the two methods is used as a check on the randomness of the counting. This relationship holds because in the normal curve σ is almost equal to √m.

The application of statistics

Because the use of statistical analysis depends upon a number of assumptions, the conclusions drawn can be no more valid than the assumptions. The assumption that the samples are random is a very important one but one too often forgotten.

Statistical significance means only that the conclusion drawn is probably correct. Biometry is nothing more than a formal procedure for calculating probabilities. If the experimental data are good, that is, if the error or variability is small relative to the effect being measured, there is no need to analyze statistically. Statistical procedures are particularly valuable in those experiments where the errors are large or the effects are small.

Analysis of results and drawing conclusions require common sense. Blind acceptance of statistical significance can lead to ridiculous conclusions.
I remember one experiment in which three rows of bean plants were given identical experimental treatments, but one row produced more beans than the other two. The experimenter said, "I don't know what it means, but it was significant at the 5 per cent level."

Most biologists are not sufficiently expert to know which test to use in every circumstance, and most cannot remember all the assumptions that went into the development of the various analytical procedures. It is a common practice to enlist the help of an expert statistician or biometrician. The time to do so, however, is before the experiments are performed. Most biometricians prefer to assist in the planning stages rather than being called in at the last minute in a desperate attempt to salvage something from a ruined experiment.

SELECTED REFERENCES

Fisher, Ronald A., Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd, 1950. Sir Ronald Fisher's name is at the head of every list of persons who have contributed most to biological and agricultural statistics and to experimental design. This book is a classic and still one of the best available for reference.

Freund, John E., Paul E. Livermore, and Irwin Miller, Manual of Experimental Statistics. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1960. This little book will be convenient for those people who understand statistics but cannot remember the details about equations and such.

Snedecor, George W., Statistical Methods Applied to Experiments in Agriculture and Biology, 5th ed. Ames: The Iowa State College Press, 1956. One of the standard biometry textbooks, with good reason. Useful as a reference because it is complete, even if you must sometimes do some reading to find the exact detail you need.

CHAPTER 16

Experimental Design

We have seen some of the statistical treatments whose primary purpose is to show the probability that interpretations are correct.
This formal mathematical treatment gives a level of "significance" to the results. Experimental design is a term applied almost exclusively to the advance planning of experiments in order to take advantage of the statistical procedures available. The statistical design takes into consideration all the assumptions that went into deriving the various statistics, thus making the significance tests valid. A more practical effect, perhaps, is that experimental design enables the research worker to gain more information with less effort. The designed experiment is as much more effective than the haphazard experiment as that same haphazard experiment is more effective than observation without experiment. The ideal experimental design, of course, is the one that yields the most information and the most reliable information with the least expense and effort.

The most extensive user of experimental design, and in fact the area responsible for the development of this branch of statistics, is agricultural research. A little reflection shows how necessary it is to the agricultural scientist that experiments be effective. A person interested in the effects of fertilizers on the growth of field crops can raise only one crop a year, and extensive test plots require large amounts of land. Imagine the serious result if badly planned experiments yield no information. Similarly, breeding and nutrition experiments with large animals are expensive and time-consuming even if carefully designed. Psychology and certain other areas related to human affairs also depend heavily upon experimental design, for somewhat the same reasons.

Laboratory biologists on the whole have been slow to adopt formal experimental design. One reason is that laboratory biologists customarily "design" their experiments without realizing it.
It happens that experiments with microorganisms, small animals, and single plants can easily provide much information, so that very simple experimental designs are adequate. One is not likely to despair over the failure of an experiment to provide significant results if the experiment only cost an hour, one cake of yeast, and a dime's worth of chemicals. In most experimental laboratories, the experiments produce results faster than the experimenters can calculate what the data mean. The experimental biologist, however, is rapidly reaching the point where the remaining problems are the very difficult ones, and it may be necessary to use more complex experimental designs.

The following sections briefly describe some of the more commonly used experimental designs. The biologist is wise to seek the counsel of a statistician instead of attempting to design his own experiments. The statistician is likely to be able to foresee the failure of an experiment if inadequate replications or repetitions are performed, but he might also estimate that a proposed number of replications is more than enough and therefore wasteful.

Sampling and randomization

It should be obvious by now that the organisms used and the measurements made in an experiment represent samples of much larger populations. The selection of individual animals or plants to be used in an experiment is thus an exercise in sampling. Regardless of the experimental design, very careful consideration must be given to the choice of samples because they must be random selections from the population. It might seem a simple matter to pick a few rats from a cage full of rats and then to assign half of these to serve as controls and half to receive the experimental treatment. Unfortunately, it is virtually impossible to avoid personal bias. Maybe the rats selected are the ones that were easiest to catch.
If half the rats are to be controls and half are to receive the experimental treatment, how does one decide which rats receive which treatment? In which group do you place the one rat that managed to bite your finger?

The purpose of randomization is to minimize the natural and unavoidable bias. In the selection of the rats it would be safest to let the flip of a coin decide whether any individual rat becomes a member of the experimental group or a member of the control group. Tables of random numbers prepared by computers, available in several of the statistics books, can also be used in randomization. Each individual in the large group must have an equal chance of being used in the experiment, and, of those used, each must have an equal chance of being a "control" or an "experimental."

In many experiments the selection of the organisms is completely random. Other cases allow the use of special statistical techniques in which the deliberate selection of individuals and matching in pairs produces more useful information than the complete randomization. For example, suppose we needed five rats to serve as controls and five to receive an experimental treatment. We wish to test the effect of a new drug upon gain in weight by the rats. The amount of gain, however, might depend on the original weight of the rat. If we weigh the animals initially and then arrange the weights in a table in descending order, it is no difficult task to assign the rats in pairs that are more like each other than like all the rest. One member of each pair receives the drug, and we decide which one by flipping a coin. Each member of the pair must have an equal chance to receive the experimental treatment. If you find yourself wishing the coin would fall a certain way, you had better believe the coin rather than your feelings and wishes.
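The matched-pair assignment just described can be sketched in code. This is a modern illustration, not part of the original text; the ten weights are invented, and the coin flip is a pseudo-random number:

```python
import random

# Ten hypothetical rat weights in grams (illustrative values only).
weights = [212, 198, 240, 225, 205, 231, 219, 201, 244, 209]

# Arrange in descending order and take adjacent animals as matched
# pairs, so each pair is more alike than the group as a whole.
ranked = sorted(weights, reverse=True)
pairs = [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked), 2)]

# Within each pair, a "coin flip" decides which member gets the drug.
assignments = []
for heavier, lighter in pairs:
    if random.random() < 0.5:
        assignments.append({"drug": heavier, "control": lighter})
    else:
        assignments.append({"drug": lighter, "control": heavier})

print(assignments)
```

In practice a table of random numbers, or a seeded generator, would be recorded so the assignment could be audited later.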
One of the beauties of the use of microorganisms or subcellular particulates is that the experiment is conducted with a very large sample. There is considerable variability among the individual cells in a yeast suspension, but the average behavior is surprisingly uniform. Only a few minor precautions are necessary in sampling from such favorable populations. For reasons that are difficult to understand, the first pipetteful is likely to be slightly different in composition from all succeeding pipettefuls. Perhaps the dry inner walls of the clean pipette have some influence. Because cells in suspension settle quite rapidly, swirling or stirring between samples is necessary. Obviously the number of cells per 5 ml is not uniform if the cell suspension in a 500-ml flask is more concentrated at the bottom than at the top. Finally, pipetting errors are likely either to increase or to decrease in taking a long series of samples. If all the experimental samples are taken first and all the controls later, bias may be introduced. The safe procedure, again, would be to withdraw a pipetteful of cell suspension, and then let a coin decide whether it is to be a control or an experimental sample.

Many people have been successful research biologists without ever paying much attention to the randomization of samples. The same people have had little need to analyze their data statistically. The reason, of course, is that the differences between populations (a population of controls and a population of experimentally-treated organisms) are very large compared to sampling errors and the variance of the populations. This difference is probably no accident because many of these experiments were very carefully thought out and steps were taken to make sure that if a difference existed it would be a large one.
Some simple designs

The simplest of all experimental designs is the one most commonly used in the experimental laboratory, a simple comparison of two groups of measurements. The following experiment can be used as an example, and it can be seen that the same reasoning could be applied to many different experiments.

We wish to know whether maleic hydrazide has any effect on the respiration of yeast cells. We set up six manometers with a yeast suspension and then add maleic hydrazide to one set of three and an equal concentration of a neutral salt to the other three. We measure the oxygen consumption at five-minute intervals for thirty minutes. We now have samples from two populations. Each population has a mean (m1 and m2) and a variance (σ1² and σ2²). The means obtained in the three manometers provide a good estimate of the population mean. The only estimate we have of the population variance is the variance of the sample s². The application of the t test tells us whether the difference between the means is significant and at what level of significance.

Paired Design: A similar experiment could be performed as a class exercise. Imagine seven groups of students, each performing the experiments described above. For simplicity, let each group of students use only one pair of manometers instead of three. The data could be analyzed by lumping together all the figures for untreated cells and all those for cells treated with maleic hydrazide. Whether each group uses one pair of manometers or three, the variance is likely to be somewhat larger than before because of individual differences in technique, in reading manometers, and in timing the readings. It is possible that each group would find a significant difference, but the combined data would indicate no statistical significance. In this case it would be wise to match the results in pairs, with the experimental and control manometers of any one experiment compared to each other.
The statistical treatment becomes slightly more complex because there is reason to expect some interaction between pairs of manometers. One group of students might pipette the yeast cells generously, for example, so all their results would be higher than average. It is necessary to account for this covariance, as the interaction is called, but this can be done by a shortened method. For the sake of analysis we hypothesize that the two population means are equal, that maleic hydrazide has no effect. In the manometers of each of the seven groups there is a difference between manometer 1 (without maleic hydrazide) and manometer 2 (with maleic hydrazide). The seven differences make up a sample from a possible population of differences. According to the hypothesis, the mean of these differences is zero. Table 16-1 shows some numerical results which will make the analysis easier to follow.

Table 16-1. Results of a Paired Experiment (μl O₂/30 min)

Group   Manometer 1   Manometer 2     d     d − x̄d   (d − x̄d)²
  1         117           111        −6      −0.9       0.81
  2         112           107        −5       0.1       0.01
  3         111           107        −4       1.1       1.21
  4         106           100        −6      −0.9       0.81
  5         104            99        −5       0.1       0.01
  6         114           110        −4       1.1       1.21
  7         109           103        −6      −0.9       0.81

n = 7    x̄₁ = 110.4    x̄₂ = 105.3    x̄d = −5.1    Σ(d − x̄d)² = 4.87

The seven differences have a mean of −5.1 as opposed to a hypothetical mean difference of zero. We can apply the t test to find out if the observed difference could occur by chance. In order to compute t, we must find the standard error of the mean (s/√n). The standard deviation is calculated from the table as

s = √[Σ(d − x̄d)²/(n − 1)] = √(4.87/6) ≈ 0.90

The difference between the two means x̄₁ and x̄₂ is the same as the mean difference x̄d, and

t = x̄d/(s/√n) = −5.1/0.34 ≈ −15

The number of degrees of freedom is n − 1 = 6, and in the t table we find that the chances of a t value this large in absolute value are less than one in a thousand. Therefore, this particular difference between means does not belong with the distribution where the mean difference is zero.
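The paired analysis of Table 16-1 can be checked with a short sketch. Here the differences are taken as manometer 1 minus manometer 2, so they come out positive; only the sign of t is affected:

```python
from statistics import mean, stdev
from math import sqrt

m1 = [117, 112, 111, 106, 104, 114, 109]   # without maleic hydrazide
m2 = [111, 107, 107, 100,  99, 110, 103]   # with maleic hydrazide

d = [a - b for a, b in zip(m1, m2)]        # paired differences
n = len(d)
s = stdev(d)                               # sqrt(sum((d - mean)^2) / (n - 1))
t = mean(d) / (s / sqrt(n))                # mean difference over its standard error
```

With seven pairs the computed t is about 15 at 6 degrees of freedom, matching the conclusion in the text that the chance probability is far below one in a thousand.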
We must reject the hypothesis that the two means are equal, or in other words, we conclude that the differences observed are statistically significant and that maleic hydrazide does have an effect on the respiration of yeast cells. Incidentally, if the pairing design is not used, the difference between the two means is not significant.

More complex designs

The simple experimental designs are adequate when only one pair of conditions is to be tested. If a larger number of comparisons is to be made, however, performing one set of measurements at a time may become unduly laborious. With the application of a little ingenuity it becomes possible to make several comparisons in a single experiment and still have legitimate statistical tests available. In most of these instances, analysis of variance is used. In the following discussion, three general types of experimental designs with examples are described, and then a typical analysis of variance is performed on one of these.

Completely Randomized Design: In this design, as the name suggests, the choice of samples and treatments is left entirely to chance. The following example will not only illustrate this design but can be modified later for some other designs. We are interested in studies using chloroplasts isolated from leaves. We have decided to use sugar beet plants and to raise the plants in a special constant-temperature chamber under artificial light. The hope is to provide uniformly active plants. We have our choice of soil, vermiculite (heat-expanded mica), or an aerated solution as a medium in which to raise the plants. We perform the following preliminary experiment to determine which medium provides the greatest weight of leaves. Four plants (or groups of plants) are to receive each treatment. Since the space in the plant growth room is essentially uniform, the groups of plants are placed at random within the space available.
They might be arranged on the bench in a pattern consisting of 4 rows of 3. Randomization could be achieved with one of a pair of dice in the following manner. Assign the numbers 1 to 4 to the plants in each of the three groups. Now, number the possible locations on the bench from 1 to 12. Starting with position 1, throw the die; use this throw to decide which of the three groups is to be placed here (1 or 4 for soil, 2 or 5 for vermiculite, 3 or 6 for liquid, for example). With a second throw, decide which plant of the four is to be placed here. Proceed to the second available spot and repeat the process. Continue until all twelve plants are assigned. If by chance all the plants in soil should be together, trust the die because dice are better judges of randomness than you are.

    V3  L4  S3
    S1  V1  L2
    V4  L1  V2
    S2  L3  S4

Fig. 16-1. A completely randomized design. V, L, S stand for separate treatments; the locations were chosen by chance.

The plants in Fig. 16-1 have been labeled S, V, and L for soil, vermiculite, and liquid, respectively. They were placed by exactly the method described. There are simpler ways of randomizing, but this serves as an example. Now we allow the plants to grow until they have produced large leaves and then cut off the leaves and weigh them. The statistical analysis tells us whether there are any significant differences among the three treatments.

Randomized Block: Use the same example as above. We may suspect that one end of the bench is very slightly warmer than the other end or that light intensity and humidity are not quite uniform. In order to distribute the environmental variation more evenly among the plants, we divide the area into four "blocks" of three spaces each. One plant from each treatment goes into each block, but within the block the plants are randomized. One such finished design is shown in Fig. 16-2.
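A simpler way of randomizing, mentioned in passing above, is to shuffle the full list of plants and deal them onto the bench in order. A minimal sketch (variable names are ours; the seed only makes the example repeatable):

```python
import random

treatments = ["S", "V", "L"]   # soil, vermiculite, liquid
# twelve plants: S1..S4, V1..V4, L1..L4
plants = [f"{t}{i}" for t in treatments for i in range(1, 5)]

rng = random.Random(0)
layout = plants[:]
rng.shuffle(layout)            # random assignment to bench positions 1-12
bench = [layout[i:i + 3] for i in range(0, 12, 3)]   # 4 rows of 3
```

Every plant appears exactly once, and adjacent positions carry no systematic pattern, which is exactly what the die-throwing procedure accomplishes by hand.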
Latin Square: This is a modified randomized block which helps to overcome environmental variation in two directions. Suppose that temperature and light intensity vary in the plant growth room, not only from end to end, but from side to side as well. In this instance we arrange the space in rows and columns, making sure that each treatment occurs in each row and in each column.

[Fig. 16-2. A randomized block design. Each vertical block includes each treatment once. Within the block the treatments are random. The blocks could also be arranged horizontally.]

The Latin square need not be square, but, for simplicity, let us reduce the number of plants in each treatment to three. We then have three replications of each of three treatments, which can be arranged in a Latin square as shown in Fig. 16-3.

    S V L
    V L S
    L S V

Fig. 16-3. A Latin square. Any treatment is present in each row and in each column.

This is only one of twelve possible 3 × 3 Latin squares, and the actual placing of the plants was done at random.

Other Variations: Agricultural statisticians have developed a vast list of modified experimental designs, such as Partially Balanced Incomplete Block Designs and Lattice Designs. Most of these are highly specialized, however, and it is unlikely that they would be of utility to the ordinary laboratory biologist. The experiment on the growth of sugar beets in a plant growth room is very similar to an agricultural experiment, of course. Similar designs can be adapted to strictly laboratory experiments, such as the effects of several treatments on muscle contraction or the metabolism of cells, but the analogies to blocks, replicates, rows, and columns are often abstruse. Frequently single simply designed experiments go so rapidly that it is not worth the effort to use one of the abstract designs.
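The defining property of a Latin square, that every treatment appears exactly once in each row and each column, is easy to check mechanically. A minimal sketch (the function name is ours):

```python
def is_latin_square(square):
    """Each treatment must appear exactly once in every row and column."""
    symbols = set(square[0])
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all(set(col) == symbols for col in zip(*square))
    return rows_ok and cols_ok

# the arrangement of Fig. 16-3
square = [["S", "V", "L"],
          ["V", "L", "S"],
          ["L", "S", "V"]]
```

A check of this kind is useful because a hand-randomized layout can easily violate the row or column constraint without the error being noticed.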
Analysis of Variance: In the experiment described above, more than two populations are being described simultaneously. Differences between sample means can arise because the populations are different but also from chance alone. Variances are more difficult to estimate. Our experiment can be tested most effectively by means of analysis of variance. The following assumptions will be made: the samples are random, the variability is distributed according to the normal curve, and the variances of the different populations are equal. Again we shall use the hypothesis that the three treatments produce equal results, or that all the plants belong to the same population, and then we shall find the probability that the observed differences could arise from chance alone. Since the randomized block is used more frequently than the other designs, that one has been chosen for analysis.

After a period of growth, the plants of Fig. 16-2 were harvested, and the leaves were weighed. The weights are recorded in Table 16-2. There are three sample means, x̄, and these are calculated at the bottom of the table. The working formula,

s² = [Σx² − (Σx)²/n]/(n − 1)

is used for computing variance, and the various parts of this calculation are also included in the table. The value Σx² − (Σx)²/n will be called the "sum of squares."

In this experiment there are three ways of estimating the variance of the population. The first is the variance s² computed over the entire experiment of twelve plants. The second consists of the variance within each of the treatments; for example, the four S plants have a variance. The third estimate is derived from the differences among the various treatments, S, V, and L. Ultimately, if the variability among treatments is greater than the variability which can be attributed to chance, we must conclude that the treatments had some effect. Table 16-2.
Results of a Randomized Block Experiment

Weight of Leaves in Grams

Block                  S        V        L     Entire Experiment
  1                   136      140      135
  2                   143      151      128
  3                   127      133      120
  4                   137      142      131
Σx                    543      566      514         1623
x̄                  135.75   141.50   128.50       135.25
Σx²                 73843    80254    66170       220267
(Σx)²/n             73712    80089    66049       219511
Σx² − (Σx)²/n         131      165      121          756

(n = 4 for each treatment; n = 12 for the entire experiment)

Referring to Table 16-2, an estimate of variance is obtained from the individuals in each treatment. The sums of squares are pooled, or added together, to yield a "total sum of squares." The variance of each treatment would be

s² = [Σx² − (Σx)²/n]/(n − 1)

and the pooled value becomes the sum of the three "sums of squares" divided by the pooled degrees of freedom:

s² = (131 + 165 + 121)/[(nS − 1) + (nV − 1) + (nL − 1)] = 417/9 = 46.3

The sum (nS − 1) + (nV − 1) + (nL − 1) is the number of degrees of freedom assigned to the individuals within treatments.

The means of the three treatments form another estimate of the population variance. The variance of this distribution of means is an estimate of the variance σ²/n corresponding to the standard error σ/√n. The variance of the distribution of means is

[(0.50)² + (6.25)² + (6.75)²]/2 = 42.4

This is an estimate of σ²/4, so the estimate of σ² is equal to 4 × 42.4 = 169.7. Since we are dealing with three values, there are two degrees of freedom. Multiplying by the number of degrees of freedom gives us a "sum of squares" which will be useful as a numerical check later.

Table 16-3. Analysis of Variance of Data in Table 16-2

Source of Variation   Degrees of Freedom   Sum of Squares   Mean Square
Individuals                    9                417             46.3
Treatments                     2                339            169.7
Total                         11                756             68.7

Now these figures are placed in an "analysis of variance" table (Table 16-3). Most of the numbers in this example are transferred directly from Table 16-2 or the foregoing discussion. The next step is to compare the variance estimates (called mean square in the table) by calculating a ratio called F.
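The entire computation of Tables 16-2 and 16-3 can be verified with a short sketch using the book's working formula. The exact values differ from the tables only by the book's rounding (for instance, the total sum of squares is 756.25 rather than 756):

```python
# leaf weights in grams, from Table 16-2
data = {"S": [136, 143, 127, 137],
        "V": [140, 151, 133, 142],
        "L": [135, 128, 120, 131]}

def sum_sq(xs):
    """'Sum of squares' in the book's sense: sum(x^2) - (sum(x))^2 / n."""
    return sum(x * x for x in xs) - sum(xs) ** 2 / len(xs)

all_values = [x for xs in data.values() for x in xs]
ss_total = sum_sq(all_values)                          # about 756
ss_within = sum(sum_sq(xs) for xs in data.values())    # about 131 + 165 + 121 = 417
ss_treat = ss_total - ss_within                        # about 339

df_within = sum(len(xs) - 1 for xs in data.values())   # 9
df_treat = len(data) - 1                               # 2
F = (ss_treat / df_treat) / (ss_within / df_within)    # about 169.7 / 46.3
```

Partitioning the total sum of squares into a within-treatment part and a between-treatment part, and dividing each by its degrees of freedom, reproduces the two mean squares of Table 16-3.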
F = (mean square of sample means)/(mean square of individuals) = 169.7/46.3 = 3.67

The table of distribution of F values shows us that a value of F this large occurs by chance more than 5 per cent of the time. In other words, the variations in the treatments are not large enough to be statistically significant, and it made no difference whether the plants were raised in soil, vermiculite, or liquid culture. If the difference had been significant, further calculations, pinpointing the treatments which differed significantly, would be possible.

Factorial experiments

It is often desirable to know the effects of several different factors on the responses of biological materials. Each of these factors could be tested separately by holding all the other factors constant. Temperature and the concentration of glucose both might influence the respiration of yeast cells, for example. Separate measurements of rate could be made at five different temperatures at any one level of glucose, and if these are replicated five times, there will be a total of twenty-five measurements. If we repeat this operation with each of five concentrations of glucose, we perform a total of 125 measurements. A factorial experiment can allow us to obtain the same information more easily and, at the same time, to evaluate any interaction between temperature and glucose concentration. At 5° C, we make five measurements, each using a different concentration of glucose. At 10°, 15°, 20°, and 25° C we do the same. We have performed a total of twenty-five measurements, and each temperature and each glucose concentration is replicated five times. Analysis of variance tells us the significance of the effect of temperature, the effect of glucose, and any change in the effect of temperature resulting from changes in the glucose concentration. The factorial design is very logical.
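The 5 × 5 factorial layout can be sketched as follows. The temperatures are those named in the text; the glucose concentrations are hypothetical values chosen only for illustration:

```python
from itertools import product

temperatures = [5, 10, 15, 20, 25]        # degrees C, as in the text
glucose = [0.1, 0.2, 0.5, 1.0, 2.0]       # hypothetical concentrations

# one run per combination: 25 measurements in all
runs = list(product(temperatures, glucose))

# each temperature appears in exactly 5 runs, so every level is
# replicated 5 times without any extra measurements
per_temp = {t: sum(1 for (rt, _) in runs if rt == t) for t in temperatures}
```

Twenty-five runs thus replicate every temperature and every glucose level five times, whereas testing the factors one at a time would require 125.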
If a repetition of a measurement is identical in all details to the original measurement, the same errors are likely to be repeated. If several factors influence the results, one might as well learn something about these other effects upon repeating the measurement.

If the number of factors is large, the factorial design can become exceedingly complex. Because some of the factors may prove to be unimportant, the complex design may provide a great deal of useless information. A few preliminary measurements may simplify the design. Despite its complexity, the factorial design is very useful, and it is likely that more and more laboratory experiments will be performed in this manner.

SELECTED REFERENCES

Cochran, William G., and Gertrude M. Cox, Experimental Designs, 2nd ed. New York: John Wiley & Sons, Inc., 1957. One of the most complete catalogues of experimental designs.

Finney, D. J., Experimental Design and its Statistical Basis. Chicago: The University of Chicago Press, 1955. A useful introduction to the subject.

CHAPTER 17

The Manuscript

You have just completed a research project. It is not really finished, of course, because the questions you have answered have only pointed out new questions, but you have solved the problem that you set out to investigate. It is only natural to feel a little proud because you have learned something about a little fragment of the universe that no one knew before. Science will be better off if you share your findings.

Experiments have been performed, measurements have yielded numbers, computations have been completed, and considerable thought has been given to interpretations. The next step is to prepare the material for publication. The preparation of the manuscript must follow the rules of the editor of the journal, so probably the first step is to decide in which journal your results should be reported.
There are several thousand journals to choose from, but you know by now that only a selected few pertain to the specialized field in which you have been working. Among these, select on the basis of circulation, time required for publication, and the likelihood that the desired audience will be reached. Some scientific periodicals are sponsored by societies and accept papers only from members. The editor (or the editorial board) has prepared a set of rules, and every issue carries these rules or a note telling in which issue the rules were published. Only after taking these preliminary steps is it wise to proceed to prepare the report.

Organization of the paper

The scientific paper more-or-less naturally falls into several main sections, although there are frequent modifications of these. Usually there is an introductory section, a description of experimental methods, a presentation of the results, discussion of interpretations, and a list of cited literature. There may be an abstract at the beginning or end.

Introduction: The introductory statement tells the reader what to expect. It provides a specific statement of the problem and outlines other recent work on the same or similar problems. Several references to the literature probably will be required in order to place this paper in a proper relationship to what is already known.

Experimental Methods: Methods should be described in sufficient detail to allow others to repeat the work if they choose. What kind of organisms were used and how were they prepared for the experiments? If commercial instruments were used throughout, naming them is enough. If you developed any new techniques, however, these should be described in detail. The description of the materials and methods must be a compromise.
The journal cannot afford to publish long accounts of the method of holding a pipette, but any detail of technique which would not be obvious to a person experienced in this field must be described. If a previously used method was adopted in the current experiments, a reference to the previous paper can save words or paragraphs. A naked reference, however, may be unsatisfactory. For example, "Oxygen exchange was measured as described previously (with reference) ..." tells the reader nothing, but "Oxygen was determined manometrically (with reference) ..." may save him a trip to the library.

Results: Since most research is now quantitative, the results are expressed in numbers. The results can be presented in tables or graphs, but usually the journal cannot allow the same results in both forms. The text of this section carries a running description of the results to help the reader understand the tables or graphs.

Discussion: The results mean nothing unless they are related to the problem. The conclusions should answer the questions stated in the introduction. Perhaps the conclusions drawn in this paper can be applied to more general situations, and such generalizations can be pointed out.

Literature Cited: In several of the earlier sections you will have referred to the work of others. It is actually only polite to give credit to them and to tell your reader how he can find their papers. The list of references need not include every paper ever written on the subject, but it should include those papers needed by the reader to place your paper in the proper perspective.

The job of writing

Some persons can sit down with only a headful of ideas and write a paper from beginning to end, but most of us need to follow a fairly detailed outline. I suspect everyone develops his own system for writing a scientific paper. The purist insists that the writing start at the beginning, but this may not always be the easiest way.
It often is easiest to write the section on materials and methods first because this section will be changed relatively little later. Probably the section on results can logically follow next. The introductory statement and the discussion of interpretations must cover the same subject matter, so it is best to write them together. Unless there is a careful correlation of these two sections, it is too easy to write conclusions to some problem other than the one investigated. The order in which the sections are written is a personal matter, and each writer learns by experience which order works best for him.

Literary Style: The main purpose of all writing is to communicate ideas. A paper is prepared, not to prove that the writer is a great scholar, but to convey facts and ideas to a reader. Clarity of presentation is just as important as accurate information and logical arguments. If I could give only one rule to a prospective writer, that rule would be, "Remember your reader!" Your subject has been foremost in your mind, and you may forget that what has become obvious to you is not obvious to everyone. The English language is expressive because of its built-in ambiguities, but in scientific exposition the intention of the author should be absolutely clear. Scientific writing must be grammatically correct. Within the scope of correct English grammar there is plenty of room for an individual writing style.

Simple, straightforward writing is the only kind suitable for science. The ideas of science are sufficiently complex without the distractions of overly dramatic or obscure expressions. Much bad writing has been included in the scientific literature, but progress is being made toward improvement. Unfortunately many beginning writers imitate what they read and so, commonly, learn bad habits.
One gains the impression from reading the literature that all verbs must be expressed in the passive voice to avoid saying "I," but this convention is not necessary. The avoidance of the first person pronouns was once thought to add objectivity, but just how "It was discovered that ..." is less subject to personal bias than "I found ..." is a mystery. By whom "it was discovered" may even be a mystery to the reader. If the sentence can be written with active verbs, in the ordinary "subject, verb, object" order, by all means do so. No one can justify an awkward sentence by pointing out that the literature is full of such awkward sentences. Cast an occasional sentence backward when necessary or just for variety if you like.

Many scientists use what is sometimes called the "German construction," a compounding of nouns, adjectives, and verbs. As a simple example, examine "water regulating valve." Translated into German, there is no doubt about the meaning. In English, however, one is not sure if the water regulates the valve or vice versa. The confusion becomes worse if the object is a "solid brass solenoid controlled water influx and efflux regulating valve." Usually such constructions can be avoided by rearranging the sentence, without necessarily making the sentence longer.

Wordiness is probably one of the worst offenses in these days of crowded journals. The use of unnecessary repetition or of phrases that contribute nothing to the communication of ideas may even make the paper more difficult to read.
One may say, "Concerning the cell membrane it must be kept in mind that it is of the utmost importance due to the fact that it serves the function of being responsible for the control of the movement of materials into the cell," but it would be better to say, "The cell membrane is important because it controls the movement of materials into the cell," or to state the probable intended idea more simply, "The cell membrane controls the movement of materials into the cell."

The jargon used in daily conversations in the laboratory has no place in the scientific paper. Biologists and biochemists deal with long and complex terms, and it is only natural to contract them into a slang. Your laboratory colleague knows perfectly well that the "Beckman" is the Beckman Model DU Spectrophotometer, but your reader might think of a Beckman pH meter. It is easy to invent words by adding -ate to indicate the result of a process, as in filtrate, or even "washate." Verbs can be concocted from adjectives by adding -ize (solubilize or countercurrentize). One of the most ridiculous jargon expressions I have read recently was a reference to the capacity of a technique as the "throughput."

Biochemists habitually use initials, but some of these have become so firmly established in the literature that their use is acceptable. Almost everyone knows ATP even if they do not remember that it stands for (A)denosine (T)ri(P)hosphate. A few standard abbreviations of this sort are likely to be perpetuated, but certain ones should be avoided. TCA might be "trichloroacetic acid" or "tricarboxylic acid"; whereas DPN is an essential coenzyme in metabolism, DNP (dinitrophenol) produces profoundly abnormal metabolism. Most editors now recommend a minimum of such abbreviations and insist on their proper identification the first time they appear in the paper. Some jargon originates in the foreign languages with which the scientist must be familiar.
If biological materials are finely ground with water, the resulting thin paste might be called a "brei" (German), a "gemisch" (German), "soup" (English), or "melange" (French). Jargon probably cannot be avoided in laboratory speech, but it should not be allowed to creep into formal scientific writing.

A fault which is like jargon, in a way, is the use of technical terms where they are not needed. You might say, "It is immediately obvious to the observer that this specimen of Canis familiaris L. exhibits a predominantly melanistic pigmentation pattern," or you might say, "This dog is black." Think of yourself as the reader and take your pick.

Some Technical Details: You will need to refer to several earlier papers, and the actual form of the citations depends upon the journal. Some journals place all references in a list at the end of the paper, but others use footnotes. References are identified in the text material by Arabic numerals (superscripts or enclosed by parentheses) or by the name of the author and the year; for example, "Green (1958) showed ...." The references at the end of the paper might be listed in order of their appearance in the text, alphabetically by author, by year, or in some other order.

The citation itself also depends upon the journal. The following imaginary example illustrates one system:

Drake, P. D., and P. A. Mason. 1959. Effect of gamma rays on the yield of peaches. J. Agr. Biophys. 43: 17-28.

Only the first word and proper nouns in the title are capitalized. Common variations include inverted order for all author names, placing the year in some other position, omission of the title, and different systems of punctuation or different type styles. The abbreviations of the titles of periodicals should follow a system, preferably that used by Chemical Abstracts. Generally, examining a few recent issues of the selected journal helps one to learn the proper form for citations.
Presentation of results

Tables and figures (graphs or other drawings) are the common forms for presenting data.

Figures: The preparation of illustrations requires a little knowledge about the printing processes. The original photograph or drawing must be reproduced by means of an engraving. The drawings furnished to the engraver usually are done on white paper with black India ink. Because reduction occurs in processing, the thickness of lines, the sizes of points on graphs, and the proportions of the lettering must be chosen to show clearly on the smaller picture. Some editors prefer good glossy photographs of the original drawings. Graphs may be drawn on coordinate paper (graph paper) if the lines on the paper are blue, which does not photograph. Most of the figures in this book were drawn in India ink on 8½ by 11 inch sheets of tracing paper. Lettering was done with a Leroy lettering set, a device in which the pen follows a prepared template. Several sizes of templates and pens are available, and the right combination must be chosen for the anticipated reduction in size of the printed drawing.

Graphs are often a problem for the beginner. We might measure the effect of temperature on the rate of a reaction, and the results would be especially amenable to graphic presentation. We control the temperature and measure the rates, so temperature should be placed on the horizontal axis and rate on the vertical axis. Some other sets of variables are not quite so obviously "independent" or "dependent." Mathematical examination shows that one is a function of the other, however, or y = f(x). The length of a growing animal varies with time, not vice versa. The absorption of light by a solution depends on concentration, not the other way around. Either you control and vary the factor and measure its effect or you observe the effect of some uncontrollable factor like time. In either case the effect goes on the vertical axis.
The scale of distances on the graph corresponds to numerical intervals. If 1 cm stands for 1 min, then 10 cm stands for 10 min, even if measurements were made at 1 min, 2 min, 3 min, 5 min, 10 min, 20 min, or other odd intervals. Figure 17-1 shows a properly prepared graph, as well as some graphs containing common errors. Imagine how these errors would affect the conclusions that might be drawn.

[Fig. 17-1. Left, a properly prepared graph of the results of a typical experiment (μl of gas against time in minutes). The first few minutes were the most interesting, so readings were made at shorter intervals. Right, the same data: upper, axes interchanged; lower, time scale not linear.]

A table is the most efficient way of presenting numbers. The table classifies the data, and its organization should be self-evident. Because the numbers in a table usually could be arranged in several different ways, it is desirable to try several forms and select the one that will demonstrate the results to the reader most effectively. Table 17-1 presents the same figures as Table 16-3, but in a different arrangement. If the two are compared, Table 16-3 is preferred because additions of vertical columns are easier than addition across a page.

Table 17-1. Data on Analysis of Variance from Table 16-3

Source of Variation   Individuals   Treatments   Total
Degrees of Freedom         9             2         11
Sum of Squares           417           339        756
Mean Square             46.3         169.7       68.7

Typing the manuscript

The manuscript should be typed on good white paper, preferably 16-pound or 20-pound bond. If the surface is too rough, erasures are impossible; if too smooth or glazed, ink will smear. Use a typewriter with sharp, clean type faces and a ribbon in good condition.
Two copies are sent to the journal; therefore, you will need at least two carbon copies so that you may keep one yourself. Type on one side of the paper and double-space everything. Leave adequate margins (1 to 1½ in.) for editorial comments and instructions to the printer.

Check the entire manuscript for typographical errors and errors in spelling, grammar, or content. Set it aside for a few days and then check it again. Let someone else, say a person not too familiar with your work, read it and criticize it. Select the readers carefully. If a reader says everything is "just fine," he must not have read it critically or he does not want to hurt your feelings. If another reader cannot find anything good about the manuscript and hands you a list of nasty comments, suspect him also. Eventually, after a few such trials, you will find two or more friends, relatives, or colleagues who can give you the kind of criticism you need. The purpose of the criticism, after all, is to improve the paper. The good critic will question something that is not clear to him; you had better rewrite that sentence or paragraph because you cannot explain it verbally to every reader.

Minor corrections can be made in ink or by typing on the manuscript. Major corrections may require retyping a page. Probably if the criticism was adequate you will want to retype the whole manuscript. After all, the scientific paper is usually only a few pages long, and the final typing is a small part of the entire job.

Title and abstract

These parts have been saved until last because they are the most difficult to write. Since a maximum amount of useful information must be included in a minimum of words, the words must be used very efficiently. The title is the first item read by a reader searching the literature. Unless your title tells him what is in your paper, he may not read it, even though your findings might be important to him.
Abstracting and indexing services frequently pick key words from titles. The title should tell the nature of the study, usually the approach to the problem, and often the kinds of organisms used. "The effects of gamma rays on the yield of peaches," mentioned earlier, provides enough information for a reasonable guess about the contents of the paper.

The abstract is a one-paragraph summary, limited to about 200 words. Any reader is likely to read the abstract before he reads the whole paper, even if he has found the title appealing. The title is not repeated in the abstract, but the two complement each other. Describe the problem, the methods, the results, and the conclusions or interpretations. Point out any especially significant new findings. To do all this in 200 words requires a great deal of writing, editing, rewriting, pruning, and rearrangement. Words must be used more accurately and efficiently than in almost any other kind of writing.

SELECTED REFERENCES

Conference of Biological Editors, Committee on Form and Style. 1960. Style manual for biological journals. Am. Inst. Biol. Sci., Washington. 92 p. This manual has been prepared with the hope of standardizing and improving biological writing. It includes useful hints for improving style, together with information about the actual preparation of the paper. Every biologist should have a copy and use it.

University of Chicago Press. 1949. A manual of style: containing typographical and other rules for authors, printers, and publishers recommended by the University of Chicago Press. University of Chicago Press, Chicago. 522 p. The Chicago manual has become one of the most widely used reference manuals. In a number of instances it has become the authority governing writing and publishing in a variety of subjects.

Dictionary. An up-to-date, authoritative dictionary is the indispensable tool of the writer and the reader.
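The 200-word ceiling lends itself to a mechanical check as successive drafts are pruned. A minimal sketch in Python (again a modern aside, not from the book; the function name check_abstract is my own invention):

```python
# Count the whitespace-separated words in a draft abstract and report
# whether the draft fits within the journal's word limit.

def check_abstract(text, limit=200):
    """Return (word count, True if the draft is within the limit)."""
    words = text.split()
    return len(words), len(words) <= limit

draft = "The effects of gamma rays on the yield of peaches were studied."
count, fits = check_abstract(draft)
print(count, fits)
```

A simple whitespace split is only an approximation of how an editor counts words, but it is close enough to tell you when serious pruning is still needed.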
Webster's New International Dictionary is the usual authority, but some of the smaller books may be more convenient on the desk.

Bibliography

For a background in biology:*

Weisz, Paul B., The Science of Biology. New York: McGraw-Hill Book Company, Inc., 1959. Weisz has attempted to concentrate on the concepts of biology and has left out many of the descriptive details. It is interesting to read and presents a good over-all picture of modern biology.

McElroy, William D., and Carl P. Swanson, eds., Prentice-Hall Foundations of Modern Biology Series. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1960-61. Each of these small books was written by a carefully selected, competent author. Taken together, they give a modern approach to the field.

Moore, John A., Principles of Zoology. New York: Oxford University Press, 1957. One of several good zoology texts.

Hill, J. Ben, Lee O. Overholts, Henry W. Popp, and Alvin R. Grove, Jr., Botany, 3rd ed. New York: McGraw-Hill Book Company, Inc., 1960. As a general reference on plants, this book is very useful.

Stanier, Roger Y., Michael Doudoroff, and Edward A. Adelberg, The Microbial World. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1957. A complete textbook of microbiology, useful as a general reference because it includes algae, protozoa, and various fungi as well as bacteria. In fact, so many general principles of biochemistry, cellular physiology, and ecology are discussed that this is one of the best references in general biology.

Giese, Arthur C., Cell Physiology. Philadelphia: W. B. Saunders Company, 1957. The physiology of cells, with no major distinction between animal and plant cells.

Baldwin, Ernest W., Dynamic Aspects of Biochemistry, 3rd ed. New York: Cambridge University Press, 1957. A thoroughly readable account of this important field.

* Arranged more or less in order of increasing technicality.
Advanced work relating to experimental research:

Richards, James A., Francis Weston Sears, M. Russell Wehr, and Mark W. Zemansky, Modern University Physics. Reading, Mass.: Addison-Wesley Publishing Company, Inc., 1960. A high-level complete introduction to modern physics.

Strobel, Howard, Chemical Instrumentation. Reading, Mass.: Addison-Wesley Publishing Company, Inc., 1960. Complete discussions of instrumental analysis are included. This relatively new book will become a favorite of biologists as well as chemists.

Willard, H. H., L. L. Merritt, Jr., and John A. Dean, Instrumental Methods of Analysis, 3rd ed. Princeton, N. J.: D. Van Nostrand Company, Inc., 1958. The practical descriptions of most of the commercial instruments are quite detailed, and discussions of theoretical aspects are relatively easy to follow.

Wilson, E. Bright, Jr., An Introduction to Scientific Research. New York: McGraw-Hill Book Company, Inc., 1952. The single most instructive volume on experimental research, using examples from biology as well as from the physical sciences.

Sources of specific information:

Crouse, William H., editor-in-chief, McGraw-Hill Encyclopedia of Science and Technology. New York: McGraw-Hill Book Company, Inc., 1960. Fifteen volumes.

Hale, L. J., Biological Laboratory Data. New York: John Wiley & Sons, Inc., 1958. A pocket-size volume including items of specific information that the biologist is likely to need: mathematical tables, weights and measures, statistical tables, culture media, and so forth.

Hodgman, Charles D., ed., Handbook of Chemistry and Physics. Cleveland: Chemical Rubber Publishing Co. (New editions appear frequently; the most recent one I have seen is the 42nd, 1961.)

Lange, N. A., ed., Handbook of Chemistry, 9th ed. Sandusky, Ohio: Handbook Publishers, Inc., 1956.

Spector, William S., ed., Handbook of Biological Data. Philadelphia: W. B. Saunders Company, 1956.
Index

A
Abbe, Ernst, 96
Absolute temperature scale, 38
Abstract, of manuscript, 230-231
Abstract services, 23
Acrylic plastic, 100
Adsorption, differential, in chromatography, 150-151
Amberlite resins, 154, 157
American Association for the Advancement of Science, 23
American Chemical Society, 23
American Optical Company, 97
Amino acids, 143, 156, 158
Amoeba, 70, 72
Amplifiers, 184, 185-186
Analysis of variance, 207-208, 219-221
Angle heads, 79-80, 81
Angstrom unit, 35
Animal Welfare League, 63
Animals, experimental, 68, 69-70
Antibiotic, 32
Applied Physics Corp., 119
Assay method, in preservation of isolated particulates, 74-75
Atomic Energy Commission, U. S., 171-172
Atomic weight, 40
Avogadro's number, 40, 111

B
Bacon, Francis, 7
Balances, 36-37: analytical, 36-37; torsion, 36
Barcroft, Joseph, 131
Bartok, Bela, 2
Bausch and Lomb Optical Company, 85
Beams, Jesse W., 82
Beckman InfraRed Gas Analyzer, 140-141, 180
Beckman Instrument Company, 114, 141
Beckman Model DU Optical system, 114
Beckman Model DU Spectrophotometer, 226
Beckman pH meter, 226
Biological Abstracts, 23, 24
Biological sciences, subdivisions, 2-3
Biology: descriptive, 3, 12; experimental, 3, 12; fundamental, 3; literature, 19-29
Biometry, 199-200, 211
Books, 22-23
Brahms, Johannes, 2
Brodie's solution, 137
Brown Electronik Potentiometer Recorder, 183
Burettes, 42-43
Burris, R. H., 136-138, 192

C
Calibration, 35, 45-46
Calorimeter, 38
Capacitor, 175
Carbohydrates, in chromatography, 156-157
Cary Model 14 recording spectrophotometer, 116, 117, 118, 119, 123
Cathode ray oscilloscope, 183, 184
Census, United States, 49
Centigrade temperature scale, 38
Centrifugal force, 78-80
Centrifuges, 73, 74, 78-82: analytical, 80-81; angle heads, 79-80, 81; defined, 78; sedimentation, 80-81, 82; types of, 81-82
Centripetal force, 79
Charles' law, 132
Chemical Abstracts, 23, 24, 25, 227
Chi-square test, 207
Chlorella, 62, 63, 66, 70, 190, 192, 193
Chloroplasts, 15, 62, 72, 74-76, 143, 144, 151, 157: frozen and stored, 74; preparation of fragments for experiments, 75-76; separation of pigments, 143, 144, 151, 157
Chromatograms, 146-148, 150, 151, 169
Chromatography, 12, 143-159, 169: amino acids, 143, 156, 158; carbohydrates, 156-157; column, 145-148; defined, 143-144; differential adsorption, 150-151; elution, 147, 153, 156; fraction collectors, 155-156; gas, 159; ion exchange, 153-154, 155, 156, 157; lipids, 157; liquid-liquid partition, 151-153; paper, 148-150, 154-155, 156, 169; paper electrophoresis, 157-158; partition coefficient, 152; physical principles involved in, 150-154; plant pigments, separation of, 144-150; proteins, 156, 157, 158; two-dimensional, 156
Circuit diagrams, 188
Colorimeter, 113, 140, 155
Colorimetry, 110-129. See also Spectrophotometry: analytical instruments, 112-117; concentration, measurements of, 120-122; flame photometry, 128-129; fluorescence measurements, 127-128; monochromators, 113, 115-116; radiation detectors, 116-117
Compound microscope, 83, 84-86, 87, 101
Conant, James B., 8
Condenser, 96, 175
Confidence limits, 206
Control, in experiments, 6-7
Corning Glass Works, 44, 45, 58
Correlation, 208-209
Cultures, 70-72: enrichment, 71; pure, 70-71; tissue, 71-72
Curie, Marie and Pierre, 163

D
Data, calculation of, 190-211, 219-221: aids, 194; biological material, amounts of, 190-191; mathematical treatments, 194-198; raw data, manipulation of, 191-194; statistical treatments, 199-211, 219-221
Deductive reasoning, 7
Density, 51
Design, experimental, 212-222: analysis of variance, 219-221; comparison of measurements, 215; completely randomized, 217-218; factorial experiments, 221-222; Latin square, 218-219; paired, 215-217; randomization, 213-215; randomized block, 218; sampling, 213-215
Detection, of isotopic tracers, 167-170. See also Radiation detection
Dichroism, 106-107
Diffraction, 90-93
Diffraction grating, 92-93, 115
Direct measurements, 32
Dowex resins, 154, 157
Drosophila melanogaster, 62, 63

E
Electrical measurements, 173-188: advantages, 173; circuit diagrams, 188; disadvantages, 173; electrical theory, 173-178; electromotive force, 174; electronic systems, 178-179; input transducers, 179-182; oscilloscopes, 183-184; output transducers, 182-184; of pH, 187; potentiometric techniques, 186-187; power supplies, 184-185; semiconducting materials, 174, 181-182; transistors, 177-178; vacuum tubes, 176-178
Electromotive force, 174
Electron microscopy, 100, 107-109
Electrophoresis, paper, 157-158
Elution, in chromatography, 147, 153, 156
Energy: defined, 37; measurement of, 37-40; heat, 37-38; temperature, 38-39; radiant, 88, 111, 116, 117; relationships, 16, 17; thermal, 37-38; vibratory, 88
Enzymes, 12, 15, 17, 73, 131, 157
Experiments: controlled, 6-7, 15; data, calculation of, 190-211; defined, 6; design, 212-222; explanatory statements on, 7; factorial, 221-222; handling of animals before, 68; instrumentation, 53-60; building instruments, 55-60; design of instrument, 55-57; "elegant," 53; safety, 55; methods, 16; preparation of organisms, 67-76; animals, 68, 69-70; microorganisms, 70-71; parts of cells, 72-75; plants, 68-69; selection of organisms, 61-67; criteria for, 63-67; desirable features, 65-67; separation of parts of organism, 67, 68-69. See also Chromatography; techniques of, 15, 53-60

F
Factorial experiments, 221-222
Figures, in manuscripts, 228
Filter paper, 148-150, 156, 157, 158
Flame photometry, 128-129
Fluorescence measurements, 127-128
Fluorimeters, 127
Fraction collectors, in chromatography, 155-156
Freeze-drying technique, 74
Fresh weight, 191

G
Galvanometer, 114, 182
Gas analyzer, infrared, 140-141, 180
Gas chromatography, 159
Gas discharge lamp, 115
Gas exchange, measurement of, 130-142: Charles' law, 132; chemical techniques, 131; comparing results of experiments, 136; Gay-Lussac's law, 132; infrared gas analyzer, 140-141; magnetic oxygen analyzer, 142; manometric technique, 131-140; photosynthesis, 130, 132, 137-138; respiration, 130, 131, 132, 137-138
Gas law, 132
Gay-Lussac's law, 132
Geiger-Müller tube, 167-168, 169
Glassware, volumetric, 42-46, 50
Graphs, 228

H
Haldane, J. S., 131
Hales, Stephen, 13
Handbook of Biological Data, The, 25
Harvey, William, 13
Heat, 37-38
Heteroatomic bonds, 120
Hevesy, Georg von, 163
Hill, R., 62
Homogenizer, 73
Huygens' principle, 90
Hydrogen lamp, 115, 117
Hypothesis, defined, 7

I
Inductive reactance, 175
Inductive reasoning, 7
Infrared gas analyzer, 140-141, 180
Infrared spectrophotometers, 119, 120
Input transducers, 179-182
Instrumentation, for experiments, 53-60, 112-117: analytical, for colorimetry, 112-117; building instruments, 55-60; components, assembling, 57-60; construction of parts, 58; glass apparatus, 58-59; design, 55-57; "elegant," 53; safety hazards, 55
International Bureau of Weights and Measures, 34
International Business Machines Corp., 27, 28
Ion exchange chromatography, 153-154, 155, 156, 157
Isotopic tracers, 12, 160-172: advantages in use of, 160-161; detection methods, 167-170; Geiger-Müller tube, 167-168, 169; scaling circuits, 168; scintillation counter, 168; of stable isotopes, 169-170; laboratory safety, 160, 171-172; radiation and radioactivity, 162; tracer experiment, 161-162; development of, 163; selection of isotopes, 163-166

J
Johns Hopkins University, 23
Joules, 38
Journals, scientific, 19, 20-21, 23-24

K
Kelvin temperature scale, 38
Kontes Glass Company, 59

L
Laboratory records, 19-20
Lachman, Sheldon J., 2, 11
Latin square design, 218-219
Leitz, E., Inc., 86
Leitz-Labolux IIIa, microscope, 85, 86
Letters to editors, 24
Library, in biological research, 26
Light: absorption, 75, 120-129; in colorimetry, 110-112; diffraction, 90-93; flame photometry, 128-129; fluorescence measurements, 127-128; Huygens' principle, 90; optical theory, 86-93; in photosynthesis, 17, 75; polarization, 93; refraction, 89-90; semiconducting materials, 174, 181-182; in spectrophotometry, 110-112; vibration, 93
Lion, Kurt S., 179
Lipids, in chromatography, 157
Liquid-liquid partition, in chromatography, 151-153
Liston-Becker InfraRed Gas Analyzer, 141
Literature, biological, 19-29: abstract services, 23; books, 22-23; catalogues, 25-26; handbooks, 25; journals, 19, 20-21; laboratory records, 19-20; letters to editors, 24; library, use of, 26; monographs, 22-23; periodicals, 19, 20-21, 23-24; record systems, 26-28; reprints, 26; review articles, 21-22; searching, 24-25; symposia, 22-23; technical papers, 20-21; types, 19
Log-log paper, 197
Lucite, 100
Lyophilization, 74

M
Magnification, in microscopy, 93-95
Manometer, 132-136, 139-140, 190, 192, 193, 215: scales, 139-140; thermobarometer, 134, 140, 192, 193; thermoregulator, 133; Warburg flask, 190; water bath system, 133-134, 135, 140
Manometric techniques of measurement, 131-140, 192-193: gas exchange, 131-140; photosynthesis, 192-193
Manuscripts, 223-231: abstract, 230-231; figures, 228; graphs, 228; organization of, 223-224; results, presentation of, 228-229; tables, 228-229; title, 230; typing, 229-230; writing, 225-227
Marine Biological Laboratory, 64
Mass, measurement of, 35-37
Mass spectrometer, 169-170
Measurements: balances, 36-37; biological, 31-32; centrifuges, analytical, 80-81; chemical quantities, 40-42; comparison of, 215; dimensions, 50-51; direct, 32; effects on organism, 33-34; electrical, 173-188; of energy, 37-40; error, normal curve of, 48; fluorescence, 127-128; gas exchange, 130-142; indirect, 51-52; length, 34-35; light absorption, 120-127; of mass, 35-37; metric system, 31; micromethods, 52; null, 32, 33, 114; precision of, 48-50; pressure, 42; of radioactive material, 162; significant digits, 49-50; standards, 30-34; of temperature, 38-39; theory of, 46-52; of time, 37; units, 33-34; volumetric glassware for, 42-46, 50; weight, 35-36
Mechanists, 13-14
Metric system, of measurement, 31
Micrometer, 35
Microorganisms, for experiments, 70-71
Microscope. See also Microscopy: compound, 83, 84-86, 87, 101; electron, 107-109; invention of, 13; phase contrast, 103-105; polarizing, 105-107; use of, 101-103; "zoom," 93-94
Microscopy, 83-109: aberrations, 98; electron, 100, 107-109; importance in biology, 83-84; magnification, 93-95; microscopic methods, 103-109; oil immersion, 96, 97, 102-103; optical theory, 86-93; diffraction, 90-93; polarization, 93, 105-107; refraction, 89-90; phase, 103-105; preparation of materials, 99-101; resolution, 93, 95-98; use of microscope, 101-103
Microtome, 97
Monochromator, 113, 115-116, 117, 128, 129
Monographs, 22-23
Morgan, T. H., 61

N
National Bureau of Standards, 59
Newton's second law of motion, 35
Null measurements, 32, 33, 114

O
Observation, scientific, 5-7, 8
Ocular micrometer, 35
Ohm's law, 32, 174
Oil immersion microscopy, 96, 97, 102-103
Organisms, experimental, 33-34, 61-76: effects of measurement on, 33-34; microorganisms, 70-71; preparation of, 67-76; animals, 68, 69-70; plant parts, 68-69; selection of, 61-67; criteria for, 63-67; desirable features, 65-67; genetic background, 66; separation of parts, 67, 68-69. See also Chromatography
Oscilloscopes, 183-184
Output transducers, 182-184
Oxygen analyzer, magnetic, 142

P
Paired design, 215-217
Paper chromatography, 148-150, 151, 154-155, 156, 169
Paper electrophoresis, 157-158
Partition, liquid-liquid, in chromatography, 151-153
Periodicals, scientific, 19, 20-21, 23-24
Perkin-Elmer Corp., 117, 118
pH, 41-42, 187, 226
Phase contrast microscope, 103-105
Photometer, flame, 128-129
Photosynthesis, 17, 62, 66, 67, 70, 75, 130, 132, 137-138, 192-193: Chlorella experiments, 62, 66, 67, 70; gas exchange in, 130, 132, 137-138; light-absorbing phase, 75; manometric techniques of measuring, 192-193
Pipette, 42, 43-44, 45, 52, 214
Planck's constant, 111
Polarization microscopy, 93, 105-107
Potentiometer, 183
Potentiometric techniques, of electrical measurement, 186-187
Power supplies, for electrical measurements, 184-185
Preparation of materials, for microscopy, 99-101
Pressure, measurement of, 42
Proteins, in chromatography, 156, 157, 158

R
Radiant energy, 88, 111, 116, 117
Radiation, 116-117, 162, 167-172, 181: detection, 116-117, 167-170; laboratory safety, 171-172; measurement, 181
Radiation Safety Officer, 172
Radioactivity, of isotopic tracers, 162, 171-172
Radioautography, 155, 167
Randomization, 213-215
Randomness, in statistics, 200-201
Record systems, for printed data, 26-28: computer sorting, 28; edge-punched cards, 27-28; subject matter file, 28
Recording instruments, as output transducers, 183
Refraction, 89-90, 98
Refractive index, 89-90, 93, 95, 96, 97, 98
Regression, 208-209
Religion, and science, 9
Reprints, of scientific papers, 26
Research, 4, 12-29: in biology, 12-18; assumptions by biologist, 15; biological problems, 16-18; difficulties in, 14-15; literature, 19-29; data. See Data, calculation of; scientific, defined, 4
Resins, in ion exchange chromatography, 153-154, 155, 157
Resolution, in microscopy, 93, 95-98
Respiration, 130, 131, 132, 137-138

S
Safety, 55, 171-172: instruments for experiments, 55; with radioactive materials, 171-172
Samples, parameters of, 202-203
Sampling, 213-215
Santayana, George, 5
Scale, trip, 36
Scaling circuits, 168
Schrödinger, E., 14
Science, 1-11: applied, 3; basic, 3; biological, subdivisions, 2-3; defined, 1-3; goals, 3-4; limitations of, 9-11; methods of, 5-9; deductive reasoning, 7; experimental, 6-7; explanatory statements, 7; hypothesis, 7; inductive reasoning, 7; observation, 5-7, 8; and religion, 9; research, 1, 4
Scientific method, 7-9
Scintillation counter, 168, 182
Sedimentation, 80-81, 82
Semiconducting materials, 174, 181-182
Semilog paper, 197
Separation of parts of organism for experiment, 15, 67, 68-69. See also Chromatography
Servall KSA system, 82
Spectrometer, mass, 169-170
Spectrophotometers: in chromatography, 155; design and construction, 113-115; infrared, 119, 120; radiation, 117, 181; detecting systems, 117; infrared, measurement, 181; recording, 116, 117-119, 123; ultraviolet, 119-120; use of, 122-127
Spectrophotometry, 110, 111, 113-129, 140. See also Colorimetry: analytical instruments, 112-117; concentration, measurements of, 120-122; flame photometry, 128-129; fluorescence measurements, 127-128; infrared, 140; monochromators, 113, 115-116, 117, 128, 129; radiation detectors, 116-117; uses for, 110, 111
Spectroscope, 111, 113
Spikes, J. D., 75
Spinco Model L Preparative Ultracentrifuge, 75, 82
Standards, for measurements, 30-34: direct measurement, 32; metric system, 31; null measurements, 32, 33
Statistical treatments of data, 194-198, 199-211, 219-221: analysis of variance, 207-208, 219-221; application of statistics, 210-211; correlation, 208-209; mathematical, 194-198; mean, the, 210; normal curve, 201-202; probability, 200-201; probable error, 210; randomness, 200-201; range, 210; regression, 208-209; samples, parameters of, 202-203; standard error of the mean, 210; tests of significance, 203-207; chi-square, 207; confidence limits, 206; normal deviate, 205; Student's t test, 206-207
Statistics, 199-211
Stauffer, J. F., 136-138, 192
Strain gages, 180
Stroboscope, 79
Student's t test, 206-207
Style, in writing, 225-227
Subcellular particulates, separation of, 73
Symposia, 22-23

T
t test, Student's, 206-207
Tables, in manuscript, 228-229
Tachometer, 79
Temperature, measurement of, 38-39
Tests, 203-207: chi-square, 207; confidence limits, 206; normal deviate, 205; Student's t test, 206-207
Thermistor, 39
Thermobarometer, in manometer, 134, 140, 192, 193
Thermocouple, 39
Thermometer, 38
Thermoregulator, in manometer, 133
Time, measurement of, 37
Title, of manuscript, 230
Torsion balance, 36
Tracers. See Isotopic tracers
Transducers, 179-184: input, 179-182; mechanical, 179-181; output, 182-184; radiation, 181; temperature, 181
Transistors, 177-178, 186
Triode amplifier tube, 176, 177
Trip scale, 36
Tubes, 167-169, 176-178, 183-184: Geiger-Müller, 167-168, 169; triode amplifier, 176, 177; vacuum, 176-178, 183-184, 185
Tungsten filament lamp, 115, 117
Typing, manuscript, 229-230

U
Ultracentrifuges, 75, 81, 82
Ultraviolet spectrophotometers, 119-120
Umbreit, W. W., 136-138, 192
Units, of measurement, 33, 34-35
University of Cambridge, 113
University of Utah, 66, 75

V
Vacuum tubes, 176-178, 183-184, 185
van Helmont, Jan B., 13
Variance, analysis of, 207-208, 219-221
Vitalists, 13-14
Volumetric glassware, 42-46, 50

W
Warburg, Otto, 62, 132
Warburg manometer flask, 190
Water bath system, in manometer, 133-134, 135, 140
Wave number, 88
Whatman No. 1 filter paper, 156, 157
Wheatstone bridge, 32, 33, 39, 114, 180-181
White, P. R., 72
Willstätter, Richard, 62-63
Writing, manuscript, 225-227

Y
Yard, international, defined, 34
Yellow Springs Instrument Company, 39

Z
"Zoom" microscope, 93-94