STATISTICAL METHODS IN BIOLOGY, MEDICINE AND PSYCHOLOGY STATISTICAL METHODS IN BIOLOGY, MEDICINE AND PSYCHOLOGY BY C. B. DAVENPORT Carnegie Institution of Washington Cold Spring Harbor, N. Y. AND MERLE. P. EECAS Northeast High School Philadelphia, Pa. Fourth, completely revised, edition NEW YORK JOHN WILEY & SONS, IN c. LONDON: CHAPMAN & HALL, LIMITED 1936 Copyright, 1899, 1904 . BY CHARLES B. DAVENPORT 1899, Copyright renewed, 1927 Copyright, 1936 By CHARLES B. DAVENPORT All Rights"Reserved This book or any part thereof must not be reproduced in any form without the written permission of the publisher. * , • 2 't • 31Ol Cºll/8(a FRINTED IN L, s. A. FREss OF B RAUNvvo RTH & co., IN.c. BUILDERs of Ecoks B RI D G E Flo RT, coru N. PREFACE TO FOURTH EDITION In meeting the request of the publisher for a new edition of “Statistical Methods” it has been found necessary to review much of the extensive literature that has appeared since the first edition, published in 1899. Among the newer developments have been the analysis of variance and extension of the theory of small samples that we owe to Dr. R. A. Fisher; and the expansion of the theory of correlation to the inclusion of multiple and partial correla- tions. Since some of the applications of statistics to economics include methods of general interest they have been included. In this edition some symbols used in the earlier editions have been changed to conform with the standards of the pro- posed “Dictionary of Statistical Terms and Symbols” by A. K. Kurtz and H. A. Edgerton. The present edition has received much assistance from Mr. William Drager in the computation of tables and the descrip- tion of some shorter methods rendered possible by calculating machines. He has checked all computations in the most painstaking and effective manner. For permission to reproduce tables we are indebted as follows: For Table XII copied from R. A. Fisher’s “Statistical Methods for Research Workers,” Oliver and Boyd, Edinburgh and London, with kind permission of author and publishers. For Table XIII, copied from G. W. Snedecor’s “Calculation and Interpretation of Analysis of Variance and Covariance,” Collegiate Press, Ames, Iowa, by kind permission, and with additions by Professor Snedecor. Professor Laurence H. Snyder cordially permitted us to use a large part of the twenty-ninth chapter of his excellent text book, “Principles of Heredity,” D. C. Heath and Company. Also we thank Dr. A. S. Wiener for use of his table on value of Q for determining V vi PREFACE TO FOURTH EDITION crossing over in man. To Dr. Raymond Pearl and the W. B. Saunders Company we are indebted for permission to repro- duce, in part, Appendix III of Pearl’s “Medical Biometry and Statistics” as our Table XI. Dr. J. R. Miner kindly consented to our use of part of his tables of values of 1 — r? and v/Trz. Many others have generously contributed data to make the book more generally useful. Statisticians and other users of this book have assisted by pointing out errors in its earlier editions. A continuation of such favors with respect to the present edition is earnestly solicited. - HAs. B. DAVENPORT CoLD SPRING HARBOR, N. Y. ‘. P. E NPO June 1, 1936 JERLE tº. HJKAS CONTENTS CHAPTER I ON VARIATION AND ITs MEASUREMENT Preliminary definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1. On the collection of data (samples) for statistical analysis. . . . . . . . . 2 Processes preliminary to measuring characters of organisms. . . . . . . 3 The determination of integral variates; methods of counting. . . . . . 4 The determination of graduated variates; methods of measurement 5 Aids in calculation 9 4 s e e s e e s = e º e a e s e e s a c e a e a s = e s = e s is s a s = e º 'º e < * * * CHAPTER II ON THE SERIATION AND PLOTTING of DATA AND THE FREQUENCY Polygon Errors and differences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Simplification of data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Seriation and classification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Plotting by the method of rectangles. . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Plotting by the method of loaded ordinates. . . . . . . . . . . . . . . . . . . . . 18 The distribution curve or polygon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 The Poisson series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 The rejection of extreme variates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Certain constants of the normal frequency polygon. . . . . . . . . . . . . . . 22 Measures of the central tendency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 The median. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 The arithmetic mean. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 The geometric mean. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 The harmonic mean. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 The mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 The simple aggregative index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Measures of variability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 The range. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 The standard deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 The estimation of the squared standard deviation, or variance, from small samples (s). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 The quartile deviation, semi-interquartile range, or Q. . . . . . . . 31 The average deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 The coefficient of variation and relative variability Vii viii CONTENTS PAGE Sampling and its errors. . . . . . . . . . . . tº tº e g º º • a e s e e s e e s s a s s e º ºs e > * * 35 Probability. . . . . . . . . . . . . . . . . . . • e a e e s e s e s s e e s s = n e s s & e s = * * * 35 The probable error. . . . . . . . . . & s e e s e e s e s s = e s e s s = n = * * * * * * * * * 36 The standard error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 The probable errors and standard errors of leading constants. . 37 To measure the significance of a difference between means. . . . 37 The probable error of a difference. . . . . . . . . . . . . . . . . . . . . . . . . . 38 To measure the differences between variabilities. . . . . . . . . . . . . 38 To measure the differences between variabilities of two small samples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Quick methods of roughly determining average and variability. 39 CHAPTER III THE CLASSEs of FREQUENCY PolyGON Classification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 The normal frequency curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 To compare any observed curve with the theoretical normal Curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 To determine the closeness of fit (“goodness of fit”) of a theoretical distribution to the observed distribution . . . . . . . . 46 The x* (chi square) test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 The probable range of abscissae. . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 The normal curve of frequency as a binomial curve . . . . . . . . . . 49 Example of a normal curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 To find the average difference between the pth and the (p + 1) th individual in any seriation (Galton's difference problem) . . . . . . . . 50 To find the best fitting normal frequency distribution when only a portion of an empirical distribution is given. . . . . . . . . . . . . . . . . . . . 51 Other unimodal frequency polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 The range of the curve....... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Asymmetry or skewness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 To compare any observed frequency polygon of Type I with its cor- responding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 To compare any observed frequency polygon of Type II with its corresponding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 To compare any observed frequency polygon of Type III with its corresponding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 To compare any observed frequency polygon of Type IV with its cor- responding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 To compare any observed frequency polygon of Type V with its cor- responding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 To compare any observed frequency polygon of Type VI with its cor- responding theoretical curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Example of calculating the theoretical curve corresponding with observed data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 The use of logarithms in curve-fitting. . . . . . . . . . . . . . . . . . . . . . . . . . . 60 General. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 Type IV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 CONTENTS IX PAG: Multimodal curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 Index of divergence....... . . . . . . . . . . . . . . . . . . . . . tº m e a ſe in s e º a º a 9 s tº 63 Index of isolation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 CHAPTER IV º ANALYSIS OF WARIANCE General. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Numbers of the observations in each class are equal. . . . . . . . . . . . . . 65 Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Numbers of observations in each class are unequal. . . . . . . . . . . . . . . 68 Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 To test whether the mean square between classes is significantly different from the mean square within classes. . . . . . . . . . . . . . . . . . 70 CHAPTER V CoRRELATED WARIABILITY AND MEASUREs of RELATIONSHIP Definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Types of correlated variability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 A. Correlated variability between two sets of variables. . . . . . . . . . . . 72 I. Both variables quantitative. . . . . . . . . . . . . . . . . . . . . . . . . . . 72 1. Linear correspondence, coefficient of correlation . . . . . . . 73 a. Computation of r with ungrouped data. . . . . . . . . . . . . 74 By deviations from true mean. . . . . . . . . . . . . . . . . . . 74 By deviations from means taken at Mr = 0 and My = 0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 By rank difference method. . . . . . . . . . . . . . . . . . . . . . 78 b. Computation of r from grouped data. . . . . . . . . . . . . . . 79 Standard formula. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 Sum and difference formulae. . . . . . . . . . . . . . . . . . . . 81 c. Regression lines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 2. Non-linear correspondence: correlation ratio (m). . . . . . . 87 3. Spurious correlation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 4. Special coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 a. The coefficient of alienation..... . . . . . . . . . . . . . . . . . . 91 b. The coefficient of determination. . . . . . . . . . . . . . . . . . . 92 c. The coefficient of reliability. . . . . . . . . . . . . . . . . . . . . . . 93 d. Correction for attenuation. . . . . . . . . . . . . . . . . . . . . . . . 94 e. The coefficient of similarity (Sm). . . . . . . . . . . . . . . . . . 95 f. The correlation between a variable and the deviation of a dependent variable from its probable value. . . . . . 95 5. Analysis of variance in correlation. . . . . . . . . . . . . . . . . . . 96 II. One variable quantitative; the other in two categories. The biserial correlation coefficient. . . . . . . . . . . . . . . . . . . . 97 X CONTENTS FA GEl III. Both variables are non-quantitative. . . . . . . . . . . . . . . . . . . . . 100 1. Both sets of variables occur in several classes. The coefficient of mean square contingency. . . . . . . . . . . . 100 2. One of the two sets of variables occurs in several classes (biserial eta). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 3. Each set of variables occurs in two classes. Tetrachoric correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 B. Interdependence between three or more sets of variables. . . . . . . . 108 I. The coefficient of partial correlation. . . . . . . . . . . . . . . . . . . . . 108 1. Computation of r. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 2. Partial sigmas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 3. Regression equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 II. Multiple correlation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 III. The coefficient of part correlation. . . . . . . . . . . . . . . . . . . . . . . 116 IV. Tetrad difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 CHAPTER VI HEREDITY Mendelian analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 The family method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 The mass method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Type 1. Without dominance. . . . . . . . . . . . . . . . . . . . . . . . . . 122 Type 2. A pair of allelomorphic genes, one of which is dominant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Type 3. Case of character dependent upon multiple allelo- morphs, 4, a', a. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Type 4. Case of character dependent upon two pairs of factors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Type 5. Expectation of affected offspring when the trait de- pends upon the occurrence of two independent genes (double dominant) and the trait is phenotypically absent in both parents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Type 6. Case of sex-influenced traits. . . . . . . . . . . . . . . . . . . 125 Type 7. Case of sex-linked traits. . . . . . . . . . . . . . . . . . . . . . . 125 Determination of linkage in man. . . . . . . . . . . . . . . . . . . . . . . . . . 126 CHAPTER VII SPECIAL TOPICS Growth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 General growth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Law of relative growth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Index numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Simple index numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Aggregative index numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Weighted indices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 Weighted aggregative index number. . . . . . . . . . . . . . . . . . . . . . 131 CONTENTS XI F.A.G.E. Measuring secular, seasonal and cyclical change . . . . . . . . . . . . . . . . . 131 Rapid but crude methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 Smoothing the time series when trend can be expressed by a straight line. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Method of moving median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Measure of dissymmetry in organisms. . . . . . . . . . . . . . . . . . . . . . . . . . 139 REFERENCEs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 ExPLANATION OF TABLEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 InDEx To PRINCIPAL LETTERS USED IN THE FORMULAE of THIS Book 156 THE GREEK ALPHABET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 LIST OF TABLES I. Formulae. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 II. Certain constants and their logarithms. . . . . . . . . . . . . . . . . 163 III. Ordinates of normal curve, mode = 100... . . . . . . . . . . . . . 164 IIIa. Ordinates of normal curve, mode = 0.39894. . . . . . . . . . . . 165 IV. Values of probability integral. . . . . . . . . . . . . . . . . . . . . . . . . 166 W. Reduction from common to metric system: (a) inches to millimeters; twelfths to millimeters; 16ths to milli- meters. (b) avoirdupois ounces to grams. (c) avoir- dupois pounds to kilograms. . . . . . . . . . . . . . . . . . . . . . . . . 173 VI. Minutes and seconds of arc in decimals of a degree. . . . . . I74 VII. First to sixth powers of integers from 1 to 30. . . . . . . . . . . . 175 VIII. Factorials of integers, 1–20. . . . . . . . . . . . . . . . . . . . . . . . . . . 175 IX. Factors for use in computing the probable error of the means and of the standard deviations. . . . . . . . . . . . . . . . 176 X. Odds against the occurrence of a deviation in term of the standard deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 XI. For deviations in multiples of the probable error from 1.0 to 50 in column 1, the probable occurrence of a deviation as great in 100 trials (column 2); also odds against the occurrence of a deviation as great in 100 trials (column 3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 XII. Table of x*. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 XIII. Tables F (ratio of variances) and t (significance of a differ- ence between means) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 XIV. Probable errors of the coefficient of correlation for various numbers of observations or variates (N) and for various Values of r. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 XV. Value of 1 — r2 for values of r from 0 to 0.999 . . . . . . . . . . 185 XVI. Values of VI – r2 corresponding to values of r. Example r = 0.337; VI – r2 = 0.9415 . . . . . . . . . . . . . . . . . . . . . . 187 XVII. Values of r = 2 sin #2 for computed values of p from 0.01 to 1.00. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 XVIII. Functions of r for use in computing the approximate probable error of the tetrachoric coefficient of correlation 190 CONTENTS XIX. YX. XXI. XXII. XXIII. INDEX .. PAGR, Table of log T functions of p. . . . . . . . . . . . . . . . . . . . . . . . . . 191 Value of Q for various sized families and various crossover values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193 Standard error of Q (from Wiener, 1932). . . . . . . . . . . . . . . . 193 Giving for any unit character the proportion of recessive offspring to be expected: (R) in random matings of dominants with dominants; and (S) in random matings of dominants with recessives (from Snyder, 1934). . . . . . 194 Squares, cubes, etc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 e e g a s a e s a wº e s e s e º e s e s e s e s g º is e s = e s s m e º 'º s is a e s º a s a s m e º ºs 213 STATISTICAL METHODS IN BIOLOGY, MEDICINE AND PSYCHOLOGY CHAPTER I ON VARIATION AND ITS MEASUREMENT Preliminary Definitions Variation relates to individual differences in a given trait or phenomenon. It is usually due to the complex and manifold nature of the causes that reproduce the trait or phenomenon. A variate is a single quantitative expression or magnitude determination of a character. By extension it may refer to non-quantitative expressions, such as categories of eye color. The term has been employed also to cover what is defined below as variant or class. The magnitude of any variate is designated by v (by some writers by 3). Integral variates are magnitude determinations of charac- ters which from their nature are expressed in integers, or by counting; e.g., the number of teeth in a porpoise. These are also called discontinuous or discrete. Graduated variates are magnitude determinations of char- acters which do not exist as integers and consequently in which the different variates may differ by any degree of magnitude, however small; e.g., the stature of man. These are also called continuous variates. Variates are of two kinds: errors and variables. Errors are variates obtained by repeated measurements of the identical or constant physical phenomenon, such as the diameter of a cylinder or the velocity of light. Lack of uniformity in such measurements is largely due to errors of instrumentation, in- cluding that of the observer. Hence these variates may be called observational errors. Or, the single phenomenon may be changeable, like the stature of Richard Roe from morning to night. Such errors are not merely observational but also due to objective changes. Errors of this class may be called objective errors. 2 ON VARIATION AND ITS MEASUREMENT Variables are derived from the measurement of non- identical but related phenomena, like the stature of English- men, or the length of the spines on the body of a fly. (These may also be called collective variables.) Unlikeness of these variables is due largely to real differences between the objects measured. These variables are either of the type known as fluctuations and may be due primarily to the complexities of development on the one hand, or to environmentally induced changes on the other; or else they are of the type called mutations and are primarily genetic in origin. A variant, among integral variates, is a single number condition; e.g., 13 ray flowers. Variance is a measure of variation occurring in a specific group of measured traits or phenomena. It is quantitatively expressed as the mean square deviation from the mean. A class, among graduated variates, includes variates of the same or nearly the same magnitude. The class value is desig- nated by V. The class range or interval gives the limits be- tween which the variates of any class fall. It is designated by 7. Variants may also, for convenience, be grouped into classes. Statistics deal largely with probability. A particular vari- ate, or variant, may be regarded as the occurrence of an event, in a set of n possibilities; e.g., of a 6-fingered person in a population of 1000 persons; or of a 6-fingered person in a family of 10 characterized by much polydactylism. Individual variation deals with diversity of a trait in a collection of individuals more or less closely related genetically. Organ variation, or intraindividual or partial variation, deals with diversity in multiple or repeated organs in single individuals, hence generally (but not always) of genetically identical origin. - - On the Collection of Data (Samples) for Statistical Analysis Although the experimental method (in which the conse- quences of two set-ups practically identical except for the element whose effect is being sought are compared) is in general the most efficient method of arriving at truth, yet there are occasions when such set-ups are impracticable. In such cases statistical analysis may be of aid. Its fundamental principle is the doctrine of probability or chance. It assumes MEASURING CHARACTERS OF ORGANISMS 3 that a Sample, consisting of a large number of individuals, other objects, or events, taken without bias, or at random, from as nearly homogeneous material as possible, will prac- tically represent the whole population in respect to the trait studied. Homogeneity, except for the single differential, is important (see discussion of the “representative method " by J. Neyman, 1934, Journal of the Royal Statistical Society, 97 : 558). If the units are divisible into two groups on the basis of a difference in effects of environment or genetics upon a given character then the consequences due to that difference may be determined, as described in Chapter IV. To illustrate the importance of homogeneity, suppose that we wish to compare the fecundity of college women with that of non-college women. It will not do to compare white college graduates with Negro non-graduates. It will clearly be better to compare college white women with their sisters who have not gone to college. In doing so we assume that any difference in fecundity is the consequence of a single difference in set- ups; viz., “graduating from college.” The value of the conclusion depends upon the validity of the assumption that “graduating from college '' is the only differential between the two groups. Actually it may not be; e.g., those women who have graduated before marriage may be a lot selected on the whole for less physical attractiveness or more pressing ambition for a career, etc. Nevertheless, there are, as stated, occasions when the statistical method, with all its limitations, is the only practicable one. The number of variates to be obtained should be large; if possible from 200 to 2000, depending on abundance and vari- ability of the material. If, however, the numbers are small, say under 100, they can still be analyzed by special methods devised by “Student ’’ and R. A. Fisher. Processes Preliminary to Measuring Characters of Organisms Some characters can best be measured directly; e.g., the stature of a race of men. Often the character can be better studied by reproducing it on paper. The two principal meth- ods of reproducing are by photography and by camera drawings. For photographic reproducticns the organs to be measured 4 ON VARIATION AND ITS MEASUREMENT will be differently treated according as they are opaque or transparent. Opaque organs should be arranged if possible in large series on a suitable opaque or transparent background. The prints should be made on a rough paper so that they can be written on; blue-print paper is excellent. This method is applicable to hard parts which may be studied dry; e.g., mollusc shells, echinoderms, various large arthropods, epider- mal markings of vertebrates and parts of the vertebrate skeleton. Shadow photographs may be made of the outline of opaque objects, such as birds' bills, birds' eggs and butter- fly wings, by using parallel rays of light and interposing the object between the source of light and the photographic paper. More or less transparent organs, such as leaves, petals, insect wings and appendages of the Smaller Crustacea, may be repro- duced either directly on blue-print paper or by “solar prints,” either of natural size or greatly enlarged. For solar printing the objects should be mounted in series on glass plates. They may be fixed on the plate by means of balsam or albumen and mounted between plates either dry or in Canada balsam or other permanent mounting media. Wings of flies, Orthoptera, Neuroptera, etc., may be prepared for study in this way, twenty-five to one hundred sets of wings being photographed on one sheet of paper, say 16 by 20 inches in size. Micro- photographs will sometimes be found Serviceable in studying small organisms or organs, such as shells of Protozoa or cyto- logical details. Camera drawings are a convenient although slow method of reproducing on paper greatly enlarged outlines of microscopic characters, such as the form and markings of worms and lower Crustacea, sponge spicules, bristles, Scales and Scutes, plant hairs, cells and other microscopic objects. In making such camera drawings a low-power objective, such as Zeiss A, will often be found very useful. The Determination of Integral Variates; Methods of Counting Counting offers no special difficulty with small numbers, but it becomes more difficult with an increase of numbers. To count large numbers accurately the general rule is to divide the field occupied by the numerous units into many small fields each containing only a few. It is important that THE DETERMINATION OF GRADUATED WARIATEs 5 the units do not pass from one field to another during the process of counting. Among the larger examples of this type of counting are the census of population of a country, or of the stars. Among examples of small things are the blood corpuscles in a drop of blood or the organisms in a cubic centi- meter of water as observed under the microscope. Eyepieces ruled into rectangles and glass slides ruled into small squares have long been used for counting microscopic objects. The Determination of Graduated Variates; Methods of Measurement The instruments for measuring graduated variates are of a large number of kinds. The measurements are always recorded on the metric scale since this is the universal scien- tific system. The fundamental units are grams, centimeters, seconds, constituting the c.g.s. system, of which the standards are preserved in Paris. There is a large number of dependent or derived units. There are units of angular measurement. There are also the absolute units of force, work, energy and power, which are derived from or dependent on gravitation, but are measured in the c.g.S. system. All measuring appara- tus should be calibrated directly or indirectly with these stand- ards. Such calibration is effected in the United States at the U. S. Bureau of Standards in Washington. Straight lines are easily measured by means of a measuring Scale of some sort. Various kinds of scales such as steel measuring tapes, graduated to millimeters (about $1.50), and steel rules (6 cm. to 15 &m.) graduated to # of a millimeter can be obtained from optical companies and hardware dealers. Cloth spring tapes, 1 meter long and divided in millimeters, made in Germany, are available through the American Mer- chandizing Company, each good for 1000 to 2000 measure- ments. Steel “spring-bow” dividers with milled-head screw are useful for taking off distances which may be read off from a scale. Instruments used in physical anthropometry are listed in Martin’s “Anthropology,” also in Davenport, 1927. They include (1) the anthropometer for measuring body lengths, (2) the compass caliper, (3) the sliding caliper, (4) the head height measurer, (5) the depth measurer, (6) the tape, (7) the 6 ON VARIATION AND ITS MEASUREMENT goniometer, (8) the balances; also many instruments for measurement of skull and small bones. Of these instruments 1, 2, 3 are made by P. Hermann, Rickenbach & Sohn, Zürich, Switzerland; Alig & Baumgårtel, Aschaffenburg, Germany; also made in America, upon inquiry of Professor T. W. Todd, Medical School of Western Reserve Uni- versity, Cleveland, Ohio. • Tortuous lines, e.g., the contour of the Serrated margin of a leaf or the outer margin of the wing of a sphinx moth, may be measured by a map-measurer (“Entfernungsmesser,” Fig. 1), supplied at artists’ and engineers' supply stores at about $3.75. Distances through solid bodies or cav- ities are measured by calipers made in various styles. Micrometer screw calipers (“speeded ”) reading to 0.01 millimeter and sold by dealers in physical apparatus for about $5.00 are excellent for deter- mining diameters of bones, birds' eggs, gastropod shells, etc. Leg calipers for rougher work can be obtained for 30 cents to $4.00. The micrometer “caliper-square,” available for inside or outside measure- ments and measuring to hundredths of a millimeter, is a useful instrument.* The area of plane surfaces, as, e.g., of a wing or leaf, is easily determined by means of a sheet of colloidin scratched in millimeter squares. By rubbing in a little carmine the scratches may be made clearer. The number of squares cov- ered by the surface is counted (fractional Squares being men- tally summated), and the required area is at once obtained. If the area has been traced on paper it may be measured by the planimeter (Fig. 2). This instrument may be obtained at engineers' supply shops. It consists of two steel arms hinged together at one end; the other end of one arm is fixed § ſº billſ|IIILIII Mill UILººſe ITITUTII FIG. 1.-Map II)08.8ll IſèI". * Many of the instruments described in this section are made by the Starrett Co., Athol, Mass., and by Brown and Sharpe, Providence, R.I., tool cutters; also by Lufkin Rule Co., 106 Lafayette St., New York. THE DETERMINATION OF GRADUATED VARIATES 7 by a pin into the paper, the end of the second arm is pro- vided with a tracer. By merely tracing the periphery of the figure whose area is to be determined the area may be read off from a drum which moves with the second arm. This method is less wearisome than the method of counting squares. Three-dimensional Traits. These may be measured by volume or by diameters. The volume of water displaced may be used to measure volume in the case of solids. The volume of water or sand contained will measure a cavity. Irregular form is best measured from photographs or drawings. Or two or more axes may be measured and their ratio found. i FIG. 2.-Planimeter, for determining the area by tracing the boundary of irregular plane surfaces. The weights of bodies are determined by balances, of which there is a great variety depending on the size of the body to be measured. Color Characters. Color may be qualitatively expressed by reference to named standard color samples. Such standard color samples are given in Ridgeway's book “ Nomenclature of Color,” and also in a set of samples manufactured by the Milton Bradley Company, Springfield, Mass. The best way of designating a color character is by means of the color wheel, a cheap form of which is made by the Milton Bradley Com- pany (Fig. 3). The colors of this “top” are standard and are of known wave-length as follows (in thousandths of a micron, i.e., millionths of a millimeter): Red . . . . . . 656 to 661 Green. . . . . 514 to 519 Orange. . . .606 to 611 Blue. . . . . . 467 to 472 Yellow . . . . 577 to 582 Violet. . . . . 419 to 424 8 ON VARIATION AND ITS MEASUREMENT The red used in the color sheet is not a pure spectrum red but an “ox-blood" red, and contains a considerable propor- tion of black. It is desirable to use Milton Bradley's color top as a standard. Any color character can be matched by using the elementary colors and white and black in certain proportions. The pro- portions are given in percentages. In practice the fewest W N Y WHITE BLACK YELLOW FIG. 3 B-Discs used in the color top. possible colors necessary to give the color character should be employed, and two or three independent determinations of each should be made at different times and the results aver- aged. Experience indicates that any color character is given (within 5 per cent) by only one combination of elementary colors. (See Science, July 16, 1897; Bowman, 1930; Am. J. Physic. Anthropol., 14.) Of physiological traits a large number have been precisely measured and very delicate apparatus have been devised for the purpose. The most complete account of such apparatus AIDS IN CALCULATION 9 is given in E. Abderhalden’s “Handbuch der biologischen Arbeitsmethoden.” Of mental traits, during the past 25 years a large number of measuring devices have been developed. Some of these are given in G. M. Whipple’s “Manual of Mental and Physical Tests '' (1914–1915). More recent mental tests are described in the lists of C. H. Stoelting Company of Chicago and the World Book Company of Yonkers, N. Y.—the latter largely tests of proficiency in School subjects. Aids in Calculation Tables. In the absence of calculating machines for multi- plying and dividing, Crelle's Rechnungstafeln (Berlin: Geo. Reimer) will be found useful. The tables of Barlow (“Tables of Squares, Cubes, Square Roots, Cube Roots, and Reciprocals of All Integer Numbers up to 10,000”) are like our Table XXIII, but more extended. A useful book of tables and for- mulas is that of Dunlap and Kurtz, “Handbook of Statistical Nomographs, Tables, and Formulas” (1932). Of a much more technical nature are Pearson’s “Tables for Statisticians and Biometricians”; “Tables of the Incomplete T Function”; “Tables of the Incomplete B-Function Ratio”; and “Tables of the Complete and Incomplete Elliptic Integrals.” Also the Cambridge (England) University Press publishes “Tracts for Computers.” These include K. Holzinger's “Tables of the Probable Error of the Coefficient of Correlation”; and “Tables of Logarithms to 20 Decimal Places.” The University of Chi- cago Press publishes a collection of tables by Holzinger. Very valuable are Glover's “Tables of Values of Probability Func- tions”; various other statistical functions, including values of the exponential growth curve, logarithms of factorial n (1 to 1000) and a table of 7-place logarithms of 5-place num- bers are given (George Wahr, Michigan). Addition of columns of numbers is best done with the aid of the “comptometer” (Fig. 4) (Felt and Tarrent Manufacturing Company, Chicago). The Burroughs machine lists and adds items. For multiplication and division of large numbers the “Monroe calculator” (Fig. 5) is excellent; machines pro- vided with an electric motor are highly automatic. There are Several other types of such computing machines. 10 ON VARIATION AND ITS MEASUREMENT The sorting and counting of large numbers, as in census work, is best effected by means of the Hollerith system. This FIG. 5.-Calculator for multiplication, division, root extraction, etc. requires the use of a special card and key punch. The sorting and counting machines may be rented from the International Business Machines Corporation (Figs. 6–8). This corporation, # -: 2. 30 31 32 33 34 35 36||37 3839 4041 4243 44}{545 47484 y y ##### Y|Y Y|YY|Y Y|Y Y Y Y Y Y Y Y Y|Y YY Y Y Y Y Y|Y rºy 7 || || 11 11111 || 1 || 1 || 1 || 1 , 1111 || 1 , , , , , ill 111111 (1111 lºſiſ, iſ liftill ill, Ilry = 1ST DIAGNOSIS | 2ND DIAGNOSIS 3RD DRAGNOSIS 4TH D1AGNOSIS LABORATORY # | | |: æ |0.000 000 || 0 || 0 || 0 || || 0 0 1 00 0000 00 000 || || 0 || || 0 00 00 00000 0 || || || 0|0|0|0|0|0|| 0 § | 1111111111111111111111 || 111111 || 11111111111||111111||1|| || iſ 1111111 § NAME, SITE AND MARIFESTATION OF THE DISEASE BLOOD COUNT - ... ºn " | 3|2|22 || 2:22||22222222222222222222222|H|222222222 22222222222|| 2.2222222 == TREATMENT AND X. RAY DAGNOSIS BL000 CHEMISTRY * = # 3333333333 II 333333333333333333333333333333333 33333333|3.3 33|3|33 ||3.3 * - =-|--# -É=- - - - - - - -|- E - RESULT ON Discharge ---|- - - - - - - - - - - ORGANISMs ---É =-|-3-ÉÉÉ-ā-ā-ā-|--|Hä ºiliiliº Illinº ſº. 44.4444444.1% º' jº, 44|4|4 44|4 4 : lf) 5 | #s bºº Ef C. SPINAL FLUID OPERATIONS ##| = as # # | = |s|≤ | º 555 5555 555 555 555 5555 5555 5555 555 555 555 555 555 555 555 555 5555 5555 5 5 5||5||5|| =; E ſº URINE AND MISC. : : źs. 2. 6|5|5 § 5 || 5 § 666 5 § 6 6 6 6 55 6 6 5 § 5 § 6|66 5 § 5 § 5 6|66 5 6 6 6 6 5 § 5 § 65|66 6 6 6 6|5 6|5 G|5 6555 5.6|6 SIB|=R; 5 |-5-|- - - - - - - -3|-|- - - - - - - -|- — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -i- ---|--|--|--|--|-|---|--|- =0. É 7|1|1 17 7 | 1 || || 7 || || 7 || 1 || 1 || 1 || 7 || || 7 || 7 || || 1 || 1 || 1 || 1 || 1 || || 1 ||7|1 T 1177|17|17|1|7|1||77|1|11 11 17:# : • * > OPERATIONS : 3. 8|888 888 888 & 3 & 8 8 8888 & 8 8 8888 888 & 8 8 888 & 8 8 888 & 8 8888 888 888 888||5||9|3|8 ||3|88 88 8||5. 39 g g g g g g g g g g g g g g g g g g g g 99.9999999999999999999999999999|| 3 || assissis ºld; 13 (4 15 18 17 tº 19 20121 22 23 24 25 25 27 zºlzº 30 31 Jz 33 34 35 36||37 33 39 40414243 44|4546 474849 50 5152 53 54 55,5615715859 50 S1 62 B3154 55166 ºf 72 7314|J5 78 ITITH 1980. Sº FIG. 6.--Sample of a punch card, reduced in size, used in the Hollerith machine. To use the card one has first to prepare a code in which each trait is represented by a number. If there are only 10 (or 12) traits under a head the code number is punched in 1 column. If 10 to 99 traits, each code requires 2 columns; if 100 to 999 traits each code requires 3 columns. Thus many traits (maximum 80) about an individual may be recorded on one punch card. The number of holes punched of each value for each trait is mechanically determined by the sorting and adding machines. 12 ON VARIATION AND ITS MEASUREMENT the Columbia University Statistical Bureau and the Record- ing and Statistical Corporation will furnish, at a reasonable cost, solutions to any of a wide variety of problems that are FIG. 7.-The Hollerith “key punch.” most economically handled with tabulating machine equip- ment. Nomograms are graphical devices for solving equations containing two or more independent variables. Nomograms FIG. 8.-The Hollerith card sorter. for 27 equations are given in Dunlap and Kurtz (1932). These are largely to aid in the computation of standard errors, probable errors and intelligence quotients. The slide rule is a mechanical instrument for use in solving problems similar AIDS IN CALCULATION 13 to some of these to which nomograms are applicable (Fig. 9). (The Macmillan Company of New York) table slide rule is a convenient, inexpensive, readily transportable device for computing power, root and circular functions. For correlation work a mimeographed checkerboard pattern on paper is a time- saving device. Numerous such correlation charts are in the market. Some of them, and other forms, are cited below. SOME CORREIATION CHARTS CoRRELATION FoRM, with absolute checks of all operations including plotting, by R. C. Tryon, Uni- versity of California. Directions and formulae. Also - form for frequency table, mean and standard devia- H tions with checks. - T# , i. E–C CoRRELATION CHART, by M. P. Ekas and *-ºf- W. W. Cheyney. Engineers Pub. Co., Philadelphia. + 1934. 11” x 16#”, 21 × 18 cells 0.30” circles. G. M. '#. at center, summation method, both diags. used, sum # formula and difference formula. Checks. Hand, #. machine or slide rule. Gets r, means, SD's, regres- # sions equations, standard errors of estimate. ForM Fort CoRRELATION CoEFFICIENT AND RATIOS, -Évrº by Karl J. Holzinger. 1928. 11” X 17”, 24 × 24 = ºf cells 0.25" × 0.50”. G. M. left for computer's És ºf preference. Standard formulae. Gets r, means, 5 SD's, etc. Computation by logarithms. ÉÉ . MACHINE or HAND CoFRELATION CHART, by C. H. H* = Smeltzer. Engineers Pub. Co., Philadelphia. 1933. 8#” × 11”, 14 × 14 cells 0.23". G. M. at zero. # Uses cross products. Hand work requires separate < É Āc, pieces of paper fitted to chart for writing d, d”, products. Checks. Gets r, means, SD's, PE (SD)'s. Two-page directions, 13 steps, sample. - •e RECTANGULAR CorrelATION SHEET, by Herbert - A. Toops. Bureau of Publications, Teachers College, Columbia Uni- versity, New York. 1922. 8%" × 11”, 18 × 18 cells 0.25". G. M. at zero. Diags. used. d’s printed. Gets r, means, and SD's. Table of products on reverse. Separate card with window also available for products. Chart nomograph for PEr and products of d steps, and their Squares, by frequencies printed separately on card. SYMONDS PARTIAL AND MULTIPLE CORRELATION CHART, by Percival M. Symonds. Bureau of Publications, Teachers College, Columbia University, New York. 1925. Job analysis sheet for 3 variables. Entered with 3 total r's, means and SD's. Gets partial r's, regression | 14 ON VARIATION AND ITS MEASUREMENT coefficients, standard errors of estimate, regression equation, multiple R. Size 8;” × 11”. Precautions in Arithmetical Work Even the most careful computers make mistakes in arith- metical work. It is absolutely necessary to take such precau- tions that errors may be detected. The best method is for statistical workers to compute in pairs, but independently, comparing results as the work progresses, so that time shall not be wasted by elaborate work done with erroneous values. In case of disagreement both workers should recompute, starting from that point of the work where their results check. If it is not feasible for the work to be done by two people, it should be calculated on different pages of the statistical book —proceeding through several steps on one page and then independently through the same steps on another page; checking the work as it progresses. It will be found useful as the work progresses to make rough checks by comparing the results with the original data to see that the results are probable. - Neatness in arrangement of work and in the making of figures is essential. It is best to make all calculations in a book with pages about 20 centimeters by 30 centimeters, quadruple ruled, with about three squares to the centimeter, So that each figure may occupy a distinct square. The advan- tage of working with a pencil is that slight errors may be erased and rectified. In case of larger errors running through several steps of the work, the erroneous calculations should not be erased but cancelled. Upon the completion of any calculation the number of decimal places to be recorded will depend upon the probable error of each constant. It will ordinarily suffice if the prob- able error contain two significant figures, e.g., +0.17 or ==0.0089; then the constant will be carried out to the same number of places and not farther. CHAPTER II ON THE SERIATION AND PLOTTING OF DATA - AND THE FREQUENCY PolyGON Errors and Differences Every measurement of an invariable object (like velocity of light), every count of a large but fixed number of similar objects (like the corpuscles in a drop of blood) involves a greater or less error. When it comes to the measurement of a lot of duplicated organs on one individual (leaves on a tree, bristles on a fly) there are differences due not only to error of measurement but also to fluctuations in the developmental process. Measurements of the same dimension on related, more or less representative, individuals of a population are subject to differences that depend also, in part, on diverse genetical factors. Simplification of Data The result of the collection of quantitative data on one phenomenon, whether of biological or non-biological nature, is a mass of numbers which are the raw material that must be properly arranged and analyzed before one can draw impor- tant conclusions. And first of all, data must be simplified and complexes must be reduced to single numbers. Thus quo- tients or ratios must be computed. In the special case of a Series like the percentages obtained by the color wheel, of the order: W 40 per cent, N (black) 38 per cent, Y 12 per cent, G 10 per cent, there may be selected the most variable element or that most significant for the problem in hand, e.g., N. In Some cases a series of non-mensurable traits is obtained, as of eye color, and such a series, arranged in progressive order, can often be treated statistically. 15 16 ON THE SERIATION AND PLOTTING OF DATA Seriation and Classification In the case of integral variates the data have to be grouped into classes that advance by integers. In the case of graduated variates, a similar grouping has to be made into classes that advance by equal intervals. The classes having been arranged in order of magnitude the number of variates occurring in each class is determined; this gives the frequency of the class. Each class has a central value, and inner and an outer limiting value and a certain range of values. The arrangement of fre- quencies by classes gives a “frequency distribution table.” The method of seriation may be illustrated by two exam- ples: one of integral variates, and the other of graduated variates. Example 1. The magnitudes of 22 integral variates are found to be as follows: 12, 14, 11, 13, 12, 12, 14, 13, 12, 11, 12, 12, 11, 12, 10, 11, 12, 13, 12, 13, 12, 12. In seriation they are arranged as follows: Classes: 10, 11, 12, 13, 14. Frequency: 1, 4, 11, 4, 2. Example 2. In the more frequent case of graduated variates our magnitudes might be more as follows: 3.2 4.5 5.2 5.6 6.0 3. 8 4.7 5.2 5.7 6.2 4. 1 4.9 5.3 5.8 6.4 4.3 5.0 5.3 5.8 6.7 4.3 5. 1 5. 5. 7. In this case it is clear that our magnitudes are not exact, but are merely approximations of the real (forever unknowable) value. The question arises concerning the inclusiveness of a class—the class range. One approximate rule is: Make the classes only just large enough to have no or very few vacant classes in the series. Following this rule we get: . 3.0–3. 4; 3.5–3.9; 4.0–4. 4; 4.5–4.9; 5.0–5.4; Classes. . . | 3.2 3.7 4.2 4.7 5.2 1 2 3 4. 5 Frequency I 1. 3 3 7 º 6.0–6.4; 6.5–6.9; 7.0–7.4; Classes... 5.7 6.2 6.7 7.2 6 7 8 9 Frequency 5 3 1. 1 When the number of observations is 100 or more, the number of classes will usually run between 10 and 25. The classes are named from their middle value, or better, for ease of subsequent calculations, by a series of small integers (1 to 25). PLOTTING BY THE METHOD of RECTANGLES 17 In case the data show a tendency of the observer towards estimating to the nearest round number, like 5 or 10, each class should include one and only one of these round numbers. As Fechner (1897) has pointed out, the frequency of the classes and all the data to be calculated from the series will vary according to the point at which we begin our seriation. Thus if, instead of beginning the series with 3.0 as in our example, we begin with 3.1 we get the SerleS E 3. 1–3. 5; 3.6–4.0; 4.1-4. 5; 4.6–5.0; 5. 1–5.5; Classes...{ 3.3 3. 8 4.3 4.8 3.5 Frequency 1. 1. 4. 3 6 Classes {*}” 6.1–6. 5; 6.6—7.0; 7.1-7.5; e s ſº 5. 8 6.3 6.8 7.3 Frequency 6 2 1 1. which is quite a different series. Fechner suggests the rule: Choose such a position of the classes as will give a most normal distribution of frequencies. According to this rule the first distribution proposed above is to be preferred to the second. Plotting by the Method of Rectangles In order to give a more vivid picture of the frequency of the classes it is important to plot the frequency distribution. This is done on coördinate paper*. The best method, especially when the number of classes is small, is to represent the frequencies by rectangles of equal base, and of altitude proportional to the frequencies. Lay off along a horizontal line (axis of X) equal contiguous spaces ==|| || || 3.0 3.5 4,0 4.5 5.0 5.5 6.0 6.5 7.0 7.5 FIG. 10,-A histogram. each of which shall represent one class; number the spaces in order from left to right with the class magnitudes in succession, and erect upon these bases rectangles whose height, propor- tionate to the frequency of the respective classes (Fig. 10), is given at the Y axis at the left-hand side of the graph paper. This method of drawing the frequency distribution is known as the method of rectangles. The resulting figure is called a histogram. * This paper may be obtained at a stationery or artists' supply store. 18 ON THE SERIATION AND PLOTTING OF DATA Plotting by the Method of Loaded Ordinates With graduated variates, especially when the number of observations is large, the frequencies may also be represented by ordinates as follows: At equal intervals along a horizontal line draw a series of (vertical) ordinates whose successive heights shall be proportional to the frequency of the classes. 640 600 560 520 480 > 440 £ 400 $ 360 § 320 14- 280 240 200 160 120 90 40 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 Number of Veins FIG. 11.-Frequency polygon, showing distribution of frequency of veins in beech leaves. After Pearson, 1902. Join the tops of the ordinates as shown in Fig. 11. This method of drawing the frequency distribution is known as the method of loaded ordinates. The resulting figure is called the fre- quency polygon (sensu strictu). A smoothed curve of the same area may be passed through the frequency polygon; this gives the frequency curve (Fig. 11, broken line). The Distribution Curve or Polygon The distribution curve or polygon is of several forms. The most typical one is the so-called normal frequency curve, also known as the Gaussian curve. This is well illustrated by Figs. 11 and 12. It is a symmetrical curve, reaching its highest point at the mean or mode; it is concave toward the central axis as it descends on each side, but at about one-half its modal height it undergoes an inflection so that it thenceforth is convex to the central axis as the frequencies gradually decrease. Though theoretically, with an infinite number of THE DISTRIBUTION CURVE OR POLYGON 19 : 500 Polygon of observed frequency 350 300 frequency polygon 250 2 O O 150 is Polygon of theoretical frequency (Type I) 100 50 W O l 2 3 4 5 • 6 Classes 7 8 9 l O FIG. 12.--Distribution of frequency of glands in swine. , polygon of observed frequency. — — — — , polygon of theoretical frequency (Type I). tº * * * * , normal frequency polygon. 20 ON THE SERIATION AND PLOTTING OF DATA cases, the curve never quite ends on the base line but is asymptotic to the base, yet with finite numbers the Gaussian curve reaches a limit on each side of the center. Gaussian curves vary in area, height and slope of the sides. Some of the constants of this and other distribution curves are described in this chapter. Further details about the normal curve and variant curves of the same general type are given in Chapter III. The binomial curve is that obtained by plotting as ordinates the coefficients of the successive terms of the series given by the expanded binomial (p + q)”. This takes on a great variety of forms depending upon the relative size of p and q. When p = q = # and n becomes indefinitely large the binomial curve is the normal (Gaussian) curve. The Poisson Series This series is useful in determining the distribution of occurrences (and non-occurrences) when the probability of occurrence (p) in one set is very small, but where the number of sets (n) is so great that the product np is a number of workable size. The expected distribution of occurrences in such sets is given by the terms of the formula: — M. M. I. M. M. 6 ( : M + 2 + 3 + . . . #) where a stands for the number of trials that are without suc- CeSSéS. This is very much like the binomial distribution. The percentage of sets yielding s occurrences is C 9 - e-m * y 100 s! where M is the arithmetic mean (see p. 24) and e the base of Napierian logarithms. The desired proportions for each set can be most easily found in Soper's table, published in Pear- son’s “Tables for Statisticians and Biometricians” (1914). Example. Ten coins are tossed 1024 (= n) times, and this operation is repeated 100 times so as to give 100 samples. Problem: In what THE REJECTION OF EXTREME VARIATES 21 proportion of the 100 samples (v) will 10 heads appear 2 times (s) in the 1024 tosses? Number of times (s) Number of in 1024 tosses that all samples heads appeared (y) 0 35 I 36 2 19 3 7 4 2 5 1 100 The mean number of times, M = 1.08. When s = 2, 1/ 1.08 1.082 100 " e --- 2 ; log y – log 100 = — 1.08 log e - 2 log 1.08 – log 2 log y – 2 = — 0.4690 -- 0.0668 – 0.3010; log y = 2 – 0.7032 log y = 1.2968 y = 19.8 (theoretical). The Rejection of Extreme variates In calculating the constants of a distribution polygon, extreme variates are to be rejected only rarely and with caution. In many physical measurements Chauvenet's cri- terion is used to test the suspicion that a single extreme variate should be rejected. A limiting deviation (ko) is calculated. k is the argument in Table IV corresponding to a tabular entry 2n – 1 equal to —II- Observations lying beyond ka may be 70, neglected in the computations. Example. In 1000 minnows from one lake there are found the fol- lowing frequencies of anal fin rays: Classes: 7 8 9 10 11 12 13 Frequency: 1 2 15 279 554 144 5 M = 10.835 fin rays; or = 0.728 fin rays. 1999 k = — = 0.49975. 4000 Looking in Table IV we find 3.48 corresponding to the entry 49975. Then the limiting deviation = 3.48 × 0.728 = 2.5334, and the limiting class is 10.835 – 2.533 = 8.302; hence the observation at class 7 might be excluded in calculating the 22 ON THE SERIATION AND PLOTTING OF DATA constants of the seriation; but it should not be suppressed in publishing the data. - This method depends on the improbability that any variate that exceeds a certain multiple of “a” (see p. 29) shall properly belong to a random sample of the population under considera- tion. Certain Constants of the Normal Frequency Polygon After the data have been gathered and arranged it is neces- sary to determine the law of distribution of the variates. To get at this law we must first determine certain constants. Measures of the Central Tendency 1. The median (Mdn) is the value above which and below which 50 per cent of the variates occur. To obtain it, proceed as follows: With ungrouped data arrange first in order of magnitude; if the number is odd the middle variate may be used as the median magnitude, or if the number of observa- tions is even, use the point midway between the two central variates. For example, in the series 16, 18, 25, 27, 30, the median is 25; but in the series 16, 18, 20, 22, 24, 26, the median may be taken at 21. For grouped data, the median may be found as follows:* Find the class interval in the distribution which contains the middle measure or mid-score and interpolate to find the probable score of this median measure, assuming that the measures in the class interval are evenly distributed. The formula is: . tº-vº f(eum,” Man H ll + i. fman * Midpoint is dependent on choice of class interval limits. For example, 42.5 is midpoint of an interval assumed to begin at 40.00 and end at 44.99 +. The limits of an interval should be chosen by the experimenter to fit his data in accordance with the technique of taking the original measurements. For example, if a weight measurement is precisely 40, such weight would be placed in a class having the limits 40.00 and 44.99 +, inclusive. If, however, the experimenter takes each weight to the nearest kilogram, so that 39.6, for example, would be recorded as 40, the limits of the interval would become 39.50 to 44.49+, with the midpoint at 42.00. MEASURES OF THE CENTRAL TENDENCY 23 N 2 T focum) DOWN OT Man = ul — i. fman where Man = median. ll = lower limit of middle class interval. wl = upper limit of middle class interval. i = width of class intervals. foun).” = cumulative frequency counted UP to lower limit of middle class interval. feum,” = cumulative frequency counted DOWN to upper limit of middle class interval. findn = frequency of interval containing median. Example. Weight in kilos f f(cum) IJP f(sun) DOWN 65–69 1 1 60–64 2 3 55–59 4 7 50–54 7 14 45–49 15 Median interval 40–44 7 27 35–39 11 20 30–34 5 9 25–29 4 4 N = 56 * * * ge . N 56 In the distribution above, the middle measure is 2. Or 2. = 28. Counting up from the bottom of the distribution, the median falls in the interval 45–49. Interpolating, the median is I’s of the way between 45–50. Then 45 + (; ; x 5) = 45.33. This can be checked by counting and interpolating in the other direction, i.e., down, whence 50 – (# x 5) = 45.33. The median has the advantage that it can be computed with accuracy when the more extreme variates are known only roughly. It can often be determined by inspection of the Series. It is especially useful with erratically spaced and non- quantitative variates. Unlike the mean it is little affected by extreme variates due to special cases like the stature of a dwarf or giant included in a series of stature measures. Irving Fisher (1922, p. 260) finds that it is rather better than any other simple form of average in computing index numbers. 24 ON THE SERIATION AND PLOTTING OF DATA 2. The Arithmetic Mean (M) is the sum of the separate variates divided by the number of variates. For ungrouped data the mean is computed as follows: Xt) M = −. N M, mean of the distribution. 2, the sum of. v, individual variates. N, number of variates. Example. Individual v (Weight in pounds) 1. 143 2. 132 3. 117 4. I28 5. 140 N = 5 20 = 660 M = ** = 132 lb. This method is impracticable with long series unless a cal- culating machine is available. For grouped data: xfV M = — N where M = mean of the distribution. X = the sum of. f = the frequency of each class. V = midpoint of each class. N = number of variates. Example. Weight in kilos y f fy 65–69 67.5 1 67.5 60–64 62.5 1 62.5 55–59 57.5 § 287. 5 50–54 52.5 4 210.0 45–49 47.5 8 380.0 40–44 42. 5 13 §52.5 2 e 35–39 37.5 11 Tº M → *.* 30–34 32.5 7 227.5 55 25–29 27.5 5 137.5 M = 42.5 N = 55 ×f V = 2337.5 This formula involves tedious labor when N is large unless a calculating machine is available. MEASURES OF THE CENTRAL TENDENCY 25 To simplify the calculation with grouped data proceed as follows: . Xf(V – Vo) M = Vo + 2. N g Vo = assumed mean. f = frequency of each class. V – Vo = deviation of class step from assumed mean. t = class range. N = number of cases or variates. -: Example. Weight in kilos f V – Vo f(V – V0) 65–69 1. 4 4 60–64 I 3 3 55–59 5 2 10 50–54 4 1 4 45–49 8 0 0 40–44 13 — 1 — 13 35–39 11 — 2 –22 30–34 7 — 3 — 21 25–29 5 — 4 – 20 N = 55 Xf(V – Vo) = – 55 – 55 M = 47.5 + 5 × − = 42.5. A formula based on cumulative frequency sums; Xf(eum,” N where Wo = assumed mean taken at midpoint of highest inter- val plus one class. * - V.- ( f (eum,” = the cumulative frequency from the bottom up. Example. Weight in kilos f fºcum)” 65–69 I 55 60–64 1 54 55–59 5 53 50–54 4 48 45–49 8 44 40–44 13 36 35–39 11 23 30–34 7 12 25–29 5 5 N = 55 xfocum).9° = 330 M = 72.5 – 5 x *#; M = 42.5. 26 ON THE SERIATION AND PLOTTING OF DATA This method is especially adaptable for adding or calculating machines. An efficient check is provided by summing in the opposite direction and adding the correction to the midpoint of the lowest interval minus one step. The formula then becomes M = Vo’ + iſeum,” N where Voſ = one class interval below midpoint of lowest class. fºoDown = cumulative frequency counted down from top. 2. – width of class interval. Example. Weight in kilos f fºcum)POWN 65–69 } 1 60–64 1 2 55–59 5 7 50–54 4 11 45–49 8 19 40–44 13 32 35–39 11 43 30–34 7 50 25–29 5 55 N = 55 feum)POWN = 220 M = 22.5 + 5 × º M = 42. 5. 3. The geometric mean of a series of variates is the Nth root of the product of all terms in the series. N /— G = Vºvºv, , º X (log p) N By logarithms, log G = If the variates are grouped into classes whose midpoints are VI, V2 V3, etc., and whose corresponding frequencies are fi, f2, fa, etc. g = V/Vſ. Vſ. Vºſ, I where N equals the total frequency. This computation is Carried out with the aid of logarithms as follows: _fi log |V1 + f2 log V2 + falog V8 + . . . N log G MEASURES OF THE CENTRAL TENDENCY 27 where G = geometric mean. N = number of variates. vi, v2, etc. = the individual variates. V1, W2, etc. = the mid-class values. fi, fº, etc. = the frequencies of the corresponding classes. -: The geometric mean is always less than the arithmetic mean and greater than the harmonic mean (see below). It is espe- cially useful in averaging rates of change, ratios and series of values increasing in geometric fashion. Example. To determine the geometric average weight of the intra- uterine mouse from the observed average weights of embryos at various ageS. Embryo age Average in days weight Variates 3 0.0086 v1 4 0.0329 172 5 0.0762 7)3 6 0.1298 v4 7 0.2288 1) 5 8 0.3651 1) 6 9 0. 5926 v7 10 0.8467 178 11 1. 190 tº 9 Computation with logs: log 0.0086 = 7.9344985 – 10 log 0.0329 = 8.5.171959 – 10 log 0.0762 = 8.8819550 – 10 log 0.1298 = 9. 1132747 – 10 log 0.2288 = 9.3594560 – 10 log 0.3651 = 9.56241.18 – 10 log 0.5926 = 9.77276.16 — 10 log 0.8467 = 9.9277296 — 10 log 1 . 190 = 0.0755470 73. 1448301 – 80 10, 0000000 — 10 83. 1448301 – 90 83.1448301 – 90 9 G = 0. 1731. 4. The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the variates. 1 1 1 #2; log G = 9.2383.144 – 10. MH = 28 ON THE SERIATION AND PLOTTING OF DATA where M H = harmonic mean 1 * e H = reciprocal of number of variates. N 1 º e X () = sum of reciprocals of variates. ty In the series 18, 20, 21, 21, 25, 32, 35, MH is given by 1 Mry = * ~ ; c. I.T.T.T.T.Tº - 1 MH = g {º + 0.0500 '...} -F0,0476-i-0.040+0.0313 + 0.0286) 1 MH = − = 23.28. 0.042958 The harmonic mean is always less than either the geometric or the arithmetic mean. It is especially adapted to the aver- aging of such items as time rates. 5. The mode (M) is the class with the greatest frequency. It is necessary to distinguish sharply between the empirical and the theoretical mode. The empirical mode is that mode which is found on inspection of the seriated data. In the example on page 24 the empirical mode is 42.5 kilos. The theoretical mode is the mode of the theoretical curve most closely agreeing with the observed distribution. Pearson (1902, p. 261) gives this rule for roughly determining the theoretical mode. The mode lies on the opposite side of the median from the mean; and the abscissal distance from the median to the mode is double the distance from the median to the mean; or, mode = mean — 3 X (mean—median). More precise directions for finding the mode in the different types of frequency polygons are given in the discussion of the types. 6. The simple aggregative index is used in place of a mean in securing an index number in economics. It is obtained by taking the sum of the actual prices of certain commodities for a given year and dividing this by the sum of the prices of the Same commodities for a base year. The formula for the is tº ... 2 g aggregative index number is #. Xp1 being the sum of 100 prices for a given year and Xpo for the base year. The best index number as proposed by Irving Fisher (1922, p. 482) is MEASUIRES OF WARIABILITY 29 one in which quantity of production (q) is brought into the equation. Thus the weighted arithmetic mean is *(*) p0 (1) 2po go *P* (2) *(;) 191 The “ideal” index is V (1) × (2). A weighted harmonic mean is - Measures of Variability 1. The range is the total spread of the distribution from lowest to highest measure; e.g., the distribution 5, 7, 8, 10, 13 has a range 13 — 5, or 8. The range is simple to compute, but tells nothing about the shape of the distribution or the concen- tration about the central tendency. 2. The standard deviation, a, is the square root of the mean of the squares of the deviations from the mean, or the square root of the variance. For ungrouped data: tºº Jº G = N where a = standard deviation of the distribution. 2(v – M)” = sum of the squared deviations from the mean. N = number of cases. Example. Individual Measure tº — M. (v — M)2 1. 132 +11 121 2. 116 — 5 25 3. 124 + 3 9 4. 102 — 19 361 5. 140 + 19 361 6. 115 — 6 36 7. 118 — 3 9 N = 7 847 X(v — M)2 = 922 Mean = *# = 121. a = V/*# = 11.48. For grouped data: _ |xf(V – M)* Or = N 30 ON THE SERIATION AND PLOTTING OF DATA where f(V – M)? = for each class, the squared deviation multiplied by the frequency. Example. f V — M. (V – M)2 f(V – M)2 100–109 3 26. 44 699.0736 2097. 2208 90–99 6 16.44 270 , 2736 1621, 6416 80–89 11 6.44 41. 4736 456. 2096 70–79 14 — 3. 56 12. 6736 177. 4304 60–69 7 — 13.56 183. 8736 1287. 1152 50–59 3 — 23.56 555 , 0.736 1665. 2208 40–49 1 — 33.56 1126, 2736 1126. 2736 N = 45 xf(V – M)2 = 8431. 1120 8431. 1120 Mean = 78, 56. Or E V*.* = 13. 69. For computation with assumed mean: *=e N* - Vo)* |Xf(V – *] OT = ? N N where * = class interval. f(V – Vo) = for each class, the deviation multiplied by the frequency. f(V – Vo)* = for each class, the squared deviation multiplied by the frequency. Example. y f (V – Vo) f(V – Vo) f(V – Vo)? 100–109 3 3 9 27 90–99 6 2 12 24 80–89 11 1 11 11 70–79 14 O 60–69 7 – 1 — 7 7 50–59 3 — 2 — 6 12 40-49 1 — 3 — 3 9 N = 45 Xf(V – Vo) = + 16; Xf(V – Vo)2 = 90 90 2 or = 10 - —, 16 g 45 45 or = 13. 69. | The cumulative sums method for a: , - Wºº- 2.f.)" N N N MEASURES OF WARIABILITY 31 where f' = the cumulative frequency from the bottom up to and including that of the given class. f" = the cumulative frequencies of the f"s up to and including that of each class. Example. y f f' f" 100–109 3 45 164 90–99 6 42 119 80–89 11 36 77 70–79 14 25 41 60–69 7 1} 16 50–59 3 4 5 40–49 1 1. 1 N = 45 164 423 Vº 164 (#) or = 10 — — — I — I . 45 45 45 or = 13. 69. 3. The Estimation of the Squared Standard Deviation, or ‘‘Variance,” from Small Samples (s). As shown in para- graph 2, a is computed from the sum of the squares of devia- tions of each variate from the mean divided by the number of variates. This assumes that the number of the deviations is the same as the degrees of independence, or “freedom,” of the cases, taken collectively. If the number of cases is large the result obtained on this assumption is approximately correct. But if the number of cases is small the fact must be taken into account that the number of free deviations is reduced by 1, owing to the fact that the algebraic sum of all deviations from the mean is 0. Consequently, in getting the average of the Squared deviation it is necessary to divide by N — 1. Thus in the illustration on page 29, of 7 measurements (un- grouped), variance is, strictly, s” = H = H = 153.67, hence s = 12.40, instead of 11.48. 4. The quartile deviation, semi-interquartile range, or Q, is one-half the difference between the first and third quartiles. Q3 – Q1 Q === where Q = quartile deviation. Qa = third quartile, or 75 centile. Q1 = first quartile, or 25 centile. 32 ON THE SERIATION AND PLOTTING OF DATA Example. Score f f(cum) 25–29 5 70 f = frequency at each step. 20–24 13 65 fºcum) = cumulative frequency. 15–19 25 52 10–14 15 27 5–9 10 12 0–4 2 2 70 Quartiles are computed in similar manner as the median which is the second quartile, i.e., Q2. Q3 : 0.75(70) = 52.50 Q1 : 0.25(70) = 17.50 52.50 — 52 = 0.50 17. 50 — 12 = 5.50 0. 50 5. 50 2 — X 5 || = 20. 19. 10 — X 5 = 11.83. or (*x s) + (#x s) 20. 19 — 11.83 Q = — = 4.18. 2 The value Q = 4.18 is interpreted to mean that, in the above distribution, the middle half of the scores lie within 4.18 score points of the median, i.e., + 4.18. 5. The average deviation is the average (regardless of sign) of the individual deviations from a central point of tendency of the distribution. For ungrouped data: X | a N AD = where AD = average deviation. 2 a = sum of individual deviations from mean without regard to sign. N = number of variates. Example. Individual Variate 32 v — M. 1. 132 -H 11 (132 — 121) 2. 116 — 5 (116 — 121) 3. 124 -H 3 etc. 4. 102 — 19 5. 140 -H 19 6. 115 — 6 7. 118 — 3 N = 7 847 2 a | = 66 Mean = *# = 121. AD = ** = 9.43. MEASURES OF WARIABILITY 33 For grouped data: AD = **!; where. = W — M. Example. f (V – M) f(V – M) 100–109 3 26.44 79.32 90–99 6 16.44 98.64 80–89 11 6.44 70. 84 70–79 14 — 3. 56 — 49.84 (Mean = 78. 56) 60–69 7 — 13.56 –94.92 50–59 3 — 23.56 — 70.68 40–49 1 — 33.56 — 33.56 N = 45 X |f(V – M) | = 497.80 AD = 497.80 5 = 11.06. The average deviation of a normal distribution marks the limits of the middle 57.5 per cent of the measures. Applied to the accompanying problem (although not a normal distribution), these limits are 78.56 + 11.06 = 89.62 to 67.50. Coefficient of Variation. The standard deviation, like the other indices of variation, is a concrete number, being ex- pressed in the same units as the magnitudes of the classes. The standard deviation of one lot of variates is consequently not comparable with the standard deviation of variates measured in other units. The index of variation may be reduced to an abstract number, independent of any particular unit, by dividing the index of variation, or, of any set of vari- ates by their mean value. The quotient multiplied by 100 is called the coefficient of variation. In a formula, C = i. × 100 (Pearson, 1896; Brewster, 1897). Pearl (1927) has suggested a neat graphic representation of variability relative to the mean, in which the abscissae are the mid-class values expressed in percentage of the mean value. The ordinates are the absolute frequencies of each class, per 1 per cent of mean. For example, see Table 1: Variability TABLE 1.-ABSOLUTE AND RELATIVE FREQUENCY DISTRIBUTIONS FOR VARIATION IN (a) STATURE, (b) BODY WEIGHT AND (c) PULSE RATE, OF A SAMPLE OF MAYA INDIANS, BASED ON DATA OF M. STEGGERDA (1932) STATURE BODY WEIGHT PULSE RATE Casin || 2 || 3 | | | | | | | Sºn | 2 || 3 | | | 5 || 8 || ºf | 2 | 3 | | | 5 | CDOl. kilograms - per minute 145–149.99 || 14 | 182 | 95.09 || 4.34 || 56.36 || 41–42.99 1 15 || 78.45 || 0 , 27 || 3.91 || 29.50–34.49 || 1 11 61.15 0.10 | 1.09 150–154.99 || 24 || 312 98.32 || 7.45 96.75 || 43–44.99 || 3 44 || 82.18 || 0.80 11.59 || 34.50–39.49 || 2 || 22 || 70.71 || 0.21 || 2.28 155–159.99 || 26 || 337 || 101.54 || 8.07 |104.81 || 45–46.99 || 2 29 || 85.92 || 0.54 || 7.83 || 39.50–44.49 || 4 || 44 || 80.26 || 0.42 4.57 160–164.99 || 10 || 130 | 104.76 || 3. 10 | 40.26 || 47–48.99 || 8 || 116 | 89.65 | 2.14 || 31.01 || 44.50–49.49 || 23 || 250 | 89.81 | 2.41 || 26.20 165–169.99 || 3 || 39 || 107.99 || 0.93 | 12.08 || 49–50.99 || 10 || 144 || 93.39 || 2.68 38.84 || 49.50–54.49 || 35 | 380 | 99.37 || 3.66 || 39.78 51–52.99 || 17 | 246 97.12 || 4.55 || 65.94 || 54.50–59.49 || 17 | 185 | 108.92 || 1 , 78 || 19.35 77 |1000 53–54.99 || 9 || 130 | 100.86 || 2.41 || 34.93 || 59.50–64.49 || 5 || 54 118.48 || 0 , 52 5.65 55–56.99 || 6 87 | 104.59 | 1.61 || 23.33 || 64.50–69.49 || 2 || 22 || 128.03 || 0.21 2.28 57–58.99 || 5 73 || 108.33 | 1.34 || 19.42 || 69.50–74.49 || 1 || 11 || 137.59 || 0.10 | 1.09 ~. * ºr - e - 59–60.99 || 2 29 112.07 || 0.54 || 7,83 || 74.50-79.49 || 2 || 22 || 147.14 || 0.21 || 2.28 M = 155.11; a = 5.25; C. V. =3.38. |}}; #| || |###| |};| iſ ; 2 Frequency. 63–64. 99 || 0 119.54 0 0 92 ||1001 3 Frequency, per mille. 65–66.99 || 0 | . . . . . 123.27 0 0 4 Per cent that class midpoint is of mean. 67–68.99 || 2 29 || 127.01 || 0 , 54 || 7.83 5 Absolute frequency per 1 per cent of mean. 69–70.99 || 0 | . . . . . 130.74 0 0 M = 52.33; or = 7.25; C. W. = 14.12. 6 Per mille frequency per 1 per cent of mean. 71–72.99 || 1 15 || 134.48 || 0.27 | 3.91 69 || 1001 M = 53.54; a = 5.54; C. W. = 10.35 § SAMPLING AND ITS ERRORS 35 of Maya Indians in stature, weight and pulse rate (data from Steggerda, 1932, pp. 11–19, 91). See Fig. 13. 110 100 90 80 S 70 60 - 50 Maya Indians, Male M Gr C. V. 40 Stature 155.11 5.25 3.38 | Weight 53.52 5.54 10.35 Pulse Rate 52.33 7.25 14.12 av 80 90 Percentage of Mean Value FIG. 13.-Graphic representation of variability of stature, weight and pulse rate in Maya Indians. See text Sampling and Its Errors The traits of any population of individuals, whether people or animals or plants or values, are variable. To find the varia- bility of any trait in the entire population is generally imprac- ticable because of its extent. One must take a fraction of the population to stand for the whole, and so is raised the necessity of studying a sample and letting its constants stand for those of the whole population. The justification for so doing rests on the laws of probability. Probability. If from a bag of perfectly shuffled balls of Which 1000 are white, 500 black, 300 red, one draws 90 (replacing each as drawn and shuffling), then it is easy to Compute what the probability will be of each color. Or, conversely, if we draw 50 white, 25 black and 15 red, by the laws of probability it may be inferred, as the best bet, that in a bag of 1800 balls 1000 are white, 500 black and 300 red. However, in general, the larger the sample and the less variable the population the more reliance can be placed on such infer- €IlCéS. The probability that an event will happen is usually desig- nated by p; that it will fail by (1 − p) = q, certainty being a probability of p + q = 1. Also, the probability of occurrence together of independent events is equal to the product of the probabilities of the independent events. In a number of inde- | 36 ON THE SERIATION AND PLOTTING OF DATA pendent events which, taken Separately, have a certain proba- bility of occurrence, the probability of one or other occurring is the sum of the separate probabilities. Again, if p is the probability that an event will occur in one trial, p" is the probability that it will happen n times in succession. Upon the principles of probability is rendered our judgment of the sufficiency or reliability of any set of data to give significant or representative constants. If instead of one large sample a number of Smaller Samples be obtained and their means found, these means will have a “sampling distribution” which follows the normal law. The mean of this sampling distribution will be close to the mean of the entire population. Its variability and the deviation of the mean from the true mean may be measured as indicated below. The probable error, PE, of the determination of any value gives the measure of unreliability of the determination. The probable error is a pair of values lying one above and the other below the constant determined. There is an even chance that the true value lies between these limits, provided the distribution for which the constant was obtained is normal or nearly normal. The chances that the true value lies within + 2PE are 4.6 : 1. == 5PE are 1,341 : 1. == 3PE are 22 : 1. + 6PE are 19,300 : 1. == 4PE are 142 : 1. + 7PE are 427,000 : 1. == 8PE are 14,700,000 : 1. + 9PE are 730,000,000 : 1. The probable error should be found to two significant figures for most data. The constant for which it has been determined should be carried to the same number of places (see p. 14). The probable error of a constant is usually written after the value of the constant to which it is linked by the symbol +. The standard erroris 1/0.6745 (nearly 1.5) times the proba- ble error. Though the probable error is so firmly established that it may be said to be the prevailing mode of representing the unreliability of a constant, the standard error will proba- bly in time prevail, since it avoids the necessity of applying the factor 0.6745. When the standard error follows a constant it would be well to join the two numbers by the symbol =F, to distinguish it from the probable error. SAMPLING AND ITS ERRORS 37 The Probable Errors and Standard Errors of Leading Con- stants. The probable error of the mean: 0.6745or v/N The standard error of the mean: PEM = + Oſ SEM = -F ſº v/N The probable error of the median: PE + 0.8454.or Mdm T. = 2 v/N The probable error of the standard deviation: 0.6745 PE, H =E toº & v2N The standard error of the standard deviation: —“– v2N The probable error of the coefficient of variation: 2 PEG _0.6745C | + 2 (#) | v2N 100 When C is small, say less than 10 p.c., the factor within brackets may be omitted, especially since only two significant figures of the probable error need be recorded. To Measure the Significance of a Difference between Means. Means obtained from two independent samples of the same universe will rarely be precisely alike. A small difference in the means may not indicate a real difference. To determine whether or not a difference between means is probably significant, find the square root of the sum of the Squares of the two standard errors of the means. This root is the standard error of the difference between the two means. If the difference between the means exceeds twice the root, the difference is probably significant. - SE, = FF 38 ON THE SERIATION AND PLOTTING OF DATA Significant differences are those in excess of 3VPEMA-F PEM, or of 2VSEM.A -- SEM,”. Of course, the more the difference between means exceeds the limit of 2 X the standard error of the means the more cer- tainly significant the difference becomes. In the case of small samples, obtain in each Sample the sum of the squares of deviations from its mean and add together such sums; divide by the sums of the numbers of the degrees of freedom (in this case: [N1 — 1} + [N2 – 1). This will give the square of the approximate or, or call it s”. Then if the quotient of M1 - M2 t 1 1 S - &=º - N1 N2 is greater than the value in fine print (Table XIII) the difference is significant (Tippett, 1931, p. 81). The probable error of a difference between two correlated II].628. Il S. PE(M. — M.) = V/PEM. H. PEM,3–2rººm,PEM, PEM, To Measure the Differences between Variabilities. The general rule is to take the square root of the sum of the squared standard errors in each series. If the difference between the means exceeds twice the root this difference is probably significant. To Measure the Differences between Variabilities of Two Small Samples. R. A. Fisher (1932, p. 206) suggests using O 1 • e º g : e the formula 2 = loge +, where a 1 and a 2 are variabilities cal- G 2 culated with regard to degrees of freedom (p. 31), and z is the required relation between the variabilities. This equation may be put in form z = }(loge gi? – loge gº”), which avoids the operation of division. Snedecor (1934, p. 15) proposes the formula larger mean square a 1% 2 Smaller mean square a 2 and affords a table (reproduced in our Table XIII) of values of F based upon degrees of freedom of the two samples by SAMPLING AND ITS ERRORS 39 which one can infer the value of F that would be probably significant of a real difference; and that would almost cer- tainly be significant of such a real difference, beyond the chance of errors of sampling (see also p. 70). Quick Methods of Roughly Determining Average and Wari- ability. 1. Arrange the specimens (e.g., of a series of beech leaves) in a series according to the magnitude of the character, simply judging the order by the eye. Then pick out those two that will divide the series into thirds and measure them. Their average will be the average of the whole series. Then, mean — the smaller of the two measures 0.43 (0.43 is the value of + æ, at which the area of the curve in- cluded between these limits of a equals one-third of the whole). Or, 2. Select roughly two specimens that seem to be about one-third of the distance from the two extremes and group all others as larger than the larger one, smaller than the smaller one or between the two. Measure the two specimens. Count the number in each group and determine a by aid of Table IV (p. 166) as follows: Taking as origin the middle of the whole series, call the number of leaves from the middle to the Smaller n3", and the number from the middle to the larger n2"; also, the a distance to the lower division point hi and to the upper division point h9. Then (hi + h9) = the range covered by the middle division or the difference between the upper and lower value. As we know the areas of the curve between the origin and hi on the one hand and h9 on the other (percentage of individuals between the middle and hi and ha), h h we can find * and “from Table IV, since they are the values Or Or 3. g corresponding to the percentage areas determined. But h; the (hi + h), Or ; thus a is determined. Knowing a we Oſ Or can get hi or h9, and hence the mean. Or the value of the character of the middle specimen may be taken as the mean value. Example. Seventy-six beech-leaves which had fallen from one tree were picked up. They were sorted out as in the second method. It was found that 22 were smaller than the smaller type leaf, which was 40 ON THE SERIATION AND PLOTTING OF DATA 1.78 inches in length; and 23 were larger than the larger type leaf (2.22 inches in length). The 38th leaf is the middle of the series, and so the smaller type leaf was distant 16 leaves from the middle, and the larger 15. n2’ 16 n2" 15 = − = 0.2105; — = — = 0. 1974. 7, 76 7, 76 From Table IV: h1 h1 → 9% area Therefore − = 0. 555, Of 0. 56 || 0.21226 0. 55 || 0.20884 e e h2 Similarly a = 0. 517; h1 + h 2.22 – 1.78 *** - 1.072 - —, Of Gr 0.44 a = − = 0.4105; 1.072 *—- 0.555. *—- 0.517. 0.4105 T * “” 0.4105 T ****** h1 = 0.2278, h2 = 0.2122. Mean is at 1.78 + 0.2278 = 2.01. CHAPTER III THE CLASSEs of FREQUENCY PolyGONs The plotted curve may fall into one of the following classes: A. Unimodal. I. Simple. 1. Range unlimited in both directions: a. Symmetrical. The normal curve. b. Unsymmetrical (Pearson's Type IV). 2. Range limited in one direction, together with skew- ness (Types III, W and VI). 3. Range limited in both directions: a. Symmetrical, Type II. b. Unsymmetrical, Type I. II. Complex. B. Multimodal. Classification. The classification of any given curve is not always an easy task. Whether the curve is unimodal or multimodal can be told by inspection. Whether any uni- modal curve is simple or complex can not be told by any existing methods without great labor and uncertainty in the result. Complex curves may be classified as follows: 1. Composed of two curves, whose modes are different but so near that the component curves blend into one; such curves are usually unsymmetrical. 2. The sum of two curves having the same mode but differing variability. 3. The difference of two curves having the same mode but differing variability. If the material is believed to be homogeneous and the curve is unimodal it is probably simple and its classification may be carried further. 41 42 THE CLASSES OF FREQUENCY POLYGONS For classification arrange the classes as for finding mean square deviation (p. 30); also the mean cubed deviation and the mean of the deviations to the fourth power. Or in detail proceed as follows: Having arranged the classes and corresponding frequencies in two adjacent columns, take a class near the mean (call it Vo) as a zero point; then the departure of all the other classes will be: –1, -2, -3, etc., and +1, +2, +3, etc. Add the products of all these departures multiplied by the frequency of the corresponding class and divide by N; call the quotient v1. Add the products of the squares of all the departures multi- plied by the frequency of the corresponding class and divide by N; call the quotient v2. Add the products of the cubes of all the departures multi- plied by the frequency of the corresponding class and divide by N; call the quotient va. Add the products of the fourth powers of all the departures multiplied by the frequency of the corresponding class and divide by N; call the quotient v4. Or, ×f(V – Vo) departure of Wo from mean. Vo being v1 = N - known, M may be found [M = Vo + vil;” Xf(V – Vo)2 vg = — ; N , ºſ(V – Vo)", 3 N y - Xf(V – Vo)4 v4 = —w- * The values vi, v2, va, v4, are called respectively the first, Second, third and fourth moments of the curve about V. To get the moments of the curve about the mean, either of two methods (A or B) will be employed. Method A is used when integral variates are under consideration; method B when we deal with graduated variates. * This is the short method of finding M referred to on page 25. CLASSIFICATION 43 (A) To find moments in case of integral variates: p1 = 0; pu 2 = v2 – v1*; pia = va – 3viv.2 + 2v1*; pia = v4 – 4viva -- 6v1°v2 – 31.1%; pis = vs – 5viv 4 + 10v1°va – 10v1°v2 + 4v1*; pis = va – 6viv; + 15v1°v4 – 2001°v3 + 15v1*v2 – 5v 1°. (B) To find moments in case of graduated variates: p1 = 0; ga' = u2 – v1' – Hºs; pus' = va – 3viv 2 + 2y 1°; A.' = v4 – 4viva –H 6v1°v2 – 31, 14 — #(v. *=s* v1%) -- #0. Also, 31 – “.. 82 = −. pig" Al2 The probable errors of the preceding constants are as follows: PEu3 = 0.67449 y pua – piº” —w-; Vº – us” – 6942 + 99.2°. N In the special case of the normal curve the above probable errors become e 2 24 PEu2 = 0.67449a2 N; ; PE8, - 0.67449 N. ; PEus = , 10. - 6 as -0.87449, Vy; PEM = 0.07449 N 96 PEus = 0.67449a 4., |* . Al4 Oſ N ; 44 THE CLASSES OF FREQUENCY POLYGONS In the case of skew curves: | 3 PED = 0.67449 2N. " (p. 53); 3 PE of skewness = 0.67449 A lsº . (See p. 46.) (From Pearson, 1903.) The standard errors are derived from the corresponding probable errors by dividing by 0.67449. The classification of any empirical frequency polygon depends upon the value of its “critical function,” Fºº (Pear- son, 1901). * 81 (82 + 3).” 4(482 – 331) (232 – 331 – 6) Walue of F Corresponding frequency curve F = Oo Type III. Transitional between Type I and Type VI F' > 1 and < oo Type VI F = 1 g Type W. Transitional between Type IV and Type II F P 0 and < 1 Type IV F = 0, 81 = 0, 62 = 3 Normal curve F = 0, 61 = 0, 82 not = 3 Type II F - 0 Type I An important relation to be referred to later is , 662 - 3 - 1) 381 – 232 + 6 * This value of F is general. For the special case of Types I–IV the following critical function was given by Pearson and has been much used. F1 = 282 – 381 – 6. The classification was given as follows: 81 > 0, curve is of Type I. When F1 i ti { en F1 is negative and 81 = 0, 82 < 3, curve is of Type II. When F1 = 81 > 0, 3 2 > 3, curve is of Type III. en F1 = 0 and {; = 0, 32 = 3, curve is normal. When F1 is positive and 81 > 0, 82 > 3, curve is of Type IV. THE NORMAL FREQUENCY CURWE 45 THE NORMAL FREQUENCY CURVE The normal frequency curve (Fig. 14), obtained by rec- tangles or loaded ordinates, is symmetrical about a vertical (ordinate) erected perpendicular to the base at its middle point. This ordinate at the middle is also the highest one. The upper portion of the curve is concave toward the middle of the figure; the lower portions are concave toward the middle; and there is a point of inflection where the curvature changes. At the extremes the area of the curve is very small and, finally, with a finite number of observations, fades away to nothing.” If the whole area of the curve be assumed to be M X X 5 4 3 2 1 0 1 2 3 4 5 FIG. 14.—The normal frequency curve. 100 or 10,000 the proportion of the area of the curve between the ordinate at the middle abscissa and an ordinate at the abscissa representing any class or deviation removed at any distance from the ordinate at the middle abscissa or class, is called the probability integral. Table IV is a table of prob- ability integrals computed for half a frequency curve. The mathematical formula of the normal curve, a formula of which one does not have to understand the development in order to make use of it, is __N 1 3/ O’ v2r e?/20° tº * The theoretical range of this curve is infinite. 46 THE CLASSES OF FREQUENCY POLYGONS This formula gives the value of any ordinate y (or any class) at any distance a (measure along the base, X, X', of Fig. 14) from the mode. e is a constant number, 2.71828, the base of the Napierian system of logarithms. N is the total area of the curve or number of variates, and a is the standard deviation, which is constant for any curve and measures the variability of the curve, or the steepness of its slope. To compare any observed curve with the theoretical normal curve we can make use of tables. For the case of a polygon of loaded ordinates the theoretical frequency of any class at a deviation 3C 3C – from the mean can be taken directly from Table III. Here - Oſ O' is the actual deviation from the mean expressed in units of the standard deviation, and 9. the corresponding ordinate, yo !/0 being taken as equal to 1, and a is the standard deviation. For the case of a polygon built up of rectangles represent- ing the relative frequency of the variates, Table IV gives immediately the theoretical number of individuals occurring between the values a = 0 and a: = + 2. By looking up the Or * QC given values of – the corresponding theoretical percentage Oſ of variates between the limits a = 0 and a: = + 2. will be found O' directly. The ratio 2. may be called the indea of abnodality. Or The normal curve may preferably be employed even when 81 is not exactly equal to 0, nor 32 exactly equal to 3, nor F exactly equal to 0. Use the normal curve when 3v 22 *g- 2v 4. F X pig" <== 1 and 1 = 1 + 2; V4 also the skewness (p. 53) should be less than twice the value 3 0.67449 a ſ— . 2N To Determine the Closeness of Fil (“Goodness of Fit ’’) of a Theoretical Distribution to the Observed Distribution. The Lexian ratio L. tests the extent to which a particular set THE NORMAL FREQUENCY CURVE 47 of observations conforms to the normal frequency distribution (Wilhelm Lexis, 1877). This is the ratio between the aſ of an observed distribution in a statistical series and the theoretical value of the standard deviation, a = v/Npg. The theoretical value is determined by the hypothesis that is being tested, as, for example, the Mendelian expectation or the Poisson series or the binomial distribution or other, similar series. The application of the ratio may be illustrated by dice throwing. For each die, the probability (p) of a 6 turning up is 1 to 6, i.e., p = #; the probability (q) that something other than 6 will turn up is 5 to 6, i.e., q = # (always p + q = 1). Twelve dice were thrown 4096 times, and the appearance of a 6 counted a success. Thus, for each throw, there might turn up 12 sixes, 11 sixes, 10, 9, 8, etc., to 0 sixes. Since the mean of a binomial distribution = N p and the a = vºwpq, in the theoretical distribution of the number of 6's in the 12 dice thrown any number of times, the mean would be 12 X # = 2; the a would be V12 x + x; = 1.291. For the actual trial, the results were as follows: Successes Frequency Successes Frequency 0 447 7 7 1 1145 8 1 2 1181 9 3 796 10 4 380 11 5 115 12 6 24 Total 4096 The mean of the distribution is 2.000, exactly as predicted. The q is 1.296, or 0.005 larger than predicted. Since the Lexian ratio is the ratio between the observed or and the theoretical re 6 = 1.0039. Si * > º & .0 i 1.291 039. Since this ratio will be 1.0 if the actual distribution has the same a as the normal curve, the approach of the ratio to 1.0 shows how closely a given dis- tribution approaches the theoretical normal curve. The x* (“chi square”) test. Find for each class the difference (61) between the theoretical value (y) and the observed fre- o, it equals 48 THE CLASSES OF FEREQUENCY POLYGONS quency (f). Divide the square of this difference in each case by y. The sum of the quotients is x”, or 61? $/ The probability (P: 1) that the observed distribution is truly represented by the theoretical polygon may be looked up in Table XII. The value of P has been computed to 6 decimal places by Elderton. His table is published in Biometrika, I: 155–163, and is also reprinted in Pearson’s “Tables for Statisticians and Biometricians.” A new and more convenient table by R. A. Fisher (1925) is, by the courtesy of Fisher and his pub- lishers, reproduced in our Table XII. “Instead of giving the values of P corresponding to an arbitrary series of values of x”, we have given,” says Fisher, “the values of x” correspond- ing to specially selected values of P.” In using Table XII it is necessary to know the value of N. The rule for finding N is that N “is equal to the number of degrees of freedom in which the observed series may differ from the hypothetical; in other words it is equal to the number of classes the frequencies in which may be filled up arbitrarily.” This is at least one less than the actual number of classes that are being compared. The probable range of abscissae (2xi) of a normal distribu- tion, or that beyond which the theoretical frequency (y) is less than 1, varies with the number of variates (N) as well as with a , in accordance with the following formula derived N — a 2/2a2 == € or V 27F 2 N 22:1 = *N log —7–. log e or V 27" Example. For the ventricosity of 1000 shells of Littornea littorea from Tenby, Wales, M = 90.964 per cent and a = 2.3775 per cent. What is the probable range of ventricosity expressed in per cent? by the transposition of y = by putting y = 1: 1000 g- 2. 506628 X 2.3775 2a:e = 2 X 2.3775 Woºn X log 15.2. THE NORMAL FREQUENCY CURVE 49 The observed range was 15 (Duncker, 1898). See also the criterion of Chauvenet (1888) for the rejection of extreme variates (p. 21). The Normal Curve of Frequency as a Binomial Curve. The normal curve may also be expressed by the binomial formula (p X g)”, where p = }, q = } and X is the number of terms, less 1, in the expansion of the binomial—hence approx- imately the number of classes into which the magnitudes of the variates should fall. If the standard deviation be known, A may be found by the equation X = 4 X (standard deviation)* = 4a2. Eacample of Normal Curve. Number of rays in lower valve of Pecten opercularis from Firth of Forth: y f W – Wo f(V – Vo) f(V – Vo)2 f(V – Vo)3 f(V – Vo)4 14 I — 3 — 3 9 –27 81 15 8 — 2 — 16 32 — 64 128 16 63 — 1 — 63 63 — 63 — 63 17 154 0 0 O O 0 18 164 I 164 164 164 164 19 96 2 192 384 768 1536 20 20 3 60 180 540 1620 21 2 4 8 32 128 512 N = 508 342 864 1446 4104 $ () v3 = ** = 2.8465; v = ** = 8.0787. M = V0 + v1 = 17 –H 0.6732 = 17.6732. * = 1.700s – 0.6732 = 1.2476; a = Y, - 1.1170; 2.8465 — 3 × 1.7008 × 0.6732 + 2 × 0.67323 = 0.0218; 8.0784 – 4 × 2.8465 x 0.6732 + 6 × 1.7008 × 0.6732? — 3 × 0.6732* = 4.4220. 0.02182 4.4220 == = 0.0002; 32 = − = 2.8410. 1. 24763 1. 24762 F = 0.0002(2.8410 + 3)2 T 4(4 × 2.8410 – 3 × 0.0002) (2 × 2.8410 – 3 × 0.0002 – 6) v1 = ### = 0.6732; v2 = # = 1.7008; 4 4.l. -: At 3 Pl, 4 1. = - 0.00047. Fuzº – 0.0009. 2 – 2 v14 3 (1.70082) — 2 X 0.67324 3 v2 Piº 3( ) X = 1.0234. - V4 8.0787 N 508 Theoretical maximum frequency, y0 = - = = 181. 4. •v2r 1.1170V2. 50 THE CLASSES OF FREQUENCY POLYGONS To find the average difference between the pth and the (p + 1)th individual in any seriation (Galton's difference problem). Let in be the average interval between the pth and (p + 1)th individual; n the total number of variates (by exception, instead of N); and a their standard deviation. Then, (1) when n is large and p small: v 2 pe-p 1 i, - a *-*-i- {1 + c + c, + c, +...}, |p nym where yn = e–%m”. 271- m can be found from Table IV by the use of the formula == - € dx, 77, T ~70 where the value of m sought is the argument corresponding to — 2 the tabular entry ( º) º 70, (n − 2p)” – 2.5° E *P*. c1 = 0.833 º n(n − p)p n° ym - 2 + 1.875% = P2. (...) ; 77.3 $/m , - m) 3 3 — 2 c. *-0.75% - Piº - 1.5” Lºpº n°(n − p)p n° ym *g 2 –0.125% = Pº (, - #) (...) ; 77.3 7m 2 $/m -* 5 5 - 4 — 4 c. ––2.5% ºf £17.5% - 2 = P'.” n°(n − p)?p? n°(n − p)p ym - 3 3 2 –625° ºf P. (s tº- #) (...) ºn 5 'm? !/m +.625 (n - 2p) (n − p)p ( - #) (...) 3 ºn 5 !/m 7772 - 2 4 — 2 — .02083 ſº pº | 31 m. 101m2 + 28 7, 9°m GALTON'S DIFFERENCE PROBLEM 51 The solution of the equations for c1, C2, and c; will be facili- tated by finding, once for all, the logarithms of n, (n − p), 7??, (m. – 2p), (n − p)p, and −. !/m (2) When n and p are both large and not nearly equal: i, --- (1 + c + c, + c, +...). 7!!/m (3) When n is small the unsimplified form of the equation must be used. — |n (n − p)"-Pp.” ºvº – p)p l x; a +o, +o, +o, +...) |n means the products of all integers from 1 to n. The series ci, cz, ca is not complete, but the values of c with higher subscripts are so small that they may be neglected. Let Ipºpſ, be the difference measured in units of a between the p’th and the p”th individual, then Ip'p” -: (?p' + io'+1 -- fy +2 + g e e + ip” —1)0. The foregoing method is that of Pearson (1902°) based upon some considerations of Galton (1902). To find the best fitting normal frequency distribution when only a portion of an empirical distribution is given. First apply the following parabola of the second order: 2 (1) w-vſ. tº to () } where l is the half range and eo = #(3\o tº- 5A2) = 3(A2 gmºm 0.262); e1 = 3X1; €2 = 3.75(3)\2 gº A0); also, 7??0 4 7721 7712 go = - A0 = 3X2 – Tº e2; X1 X — . 2. ' 15 T m.j} ^* T nº.' 52 THE CLASSES OF FEEQUENCY POLYGONS To find mo arrange the frequencies in the usual manner (p. 30) and find the logarithm of each; their sum is equal to mo. Making the class situated at the middle of the range 0, find the deviation of each of the other classes from this class. The algebraic sum of the product of the logarithms by the deviations gives m1. The second moment about the same zero point gives m2. Or, mo = X log fa XY; m1 = X(Y(V – Vo)]; m2 = X|Y(V – Vo)*}. Substituting in (1) we get a numerical quadratic equation which can be put in the form 2 \ 2 €1 (C €1 2 €1 2 o Y-vº () +: (#)|4-(#)} eil 2 2e2 e12 a; + l * T 1. = ?/0 | 62 (a;+ h)? If the normal curve be y = zoe T 2a" , h)2 (3) Y = log y = log 2. – “tºoge. 20-2 whence, by comparison of right-hand expressions in equa- tions (2) and (3), € 1 2 log 20 = yo X (o-º) ; 4é2 _ !” log e yo X es' 2012 Then the required normal curve is y = zoe-z"/20°. (Pearson, 1902°.) Other Unimodal Frequency Polygons. The formulas of Pearson’s Types I to VI are as follows: 7n 1 7742 Type I. v-v (; ++) (-; e li l2 ASYMMETRY OR SEQEWNESS 53 22\ m Type II. y = yo 1-#) ſº 2 \ P - Type III. y = yo ( +7 e-z/d. Type IV. y = yo cos 0”e T", where tan 0 = º Type V. y = yoz-Pe-3/*. Type VI. y = yo(a – l)42/a:41. In these formulas: ac, abscissae; go, the ordinate at the origin, to be especially reckoned for each type; y, the height of the ordinate (or rectangle) located at the distance a from yo; l, a part of the abscissa axis XX’ expressed in units of the classes; r e, the base of the Napierian system of logarithms, 2.71828. The other letters stand for relations that are explained in the sections below treating of each type separately. The range of the curve is limited in both directions in Types I and II, is limited in one direction only in Types III, V and VI, and is unlimited in both directions in Type IV and the normal curve. The normal curve may give the best fit, however, notwithstanding the fact that in biological Statistics the range is ordinarily limited at both extremes. Thus the range of carapace length to total length of the lobster is limited between 0 and 1. The ratio of carapace length to abdominal length in various crustaceans may, how- ever, conceivably take any value from + o to 0. In the ratio of dorsoventral to antero-posterior diameter the forms of the molluscan genera Pinna or Malleus on the one hand and Solen on the other approach such extremes. Asymmetry or Skewness (o) is found in Types I, III, IV, V and VI. In skew curves the mode and the mean are Separated from each other by a certain distance D; or D = mean – mode. Asymmetry is measured by the ratio of = −. O’ If the mean is greater than the mode, skewness is positive; if the mean is less than the mode, skewness is negative. D, 54 THE CLASSES OF FREQUENCY POLYGONS and hence skewness, may be calculated when the theoretical mode is known (see p. 28, and below). - In Types I and IV skewness is measured also by the s=2. where s 6(82 – 31 — 1). If S FF 2 381 – 232 + 6 562 – 661 – 9 is positive, o, has the sign of ps; if negative, o, has the opposite sign to us (Duncker, 1900). #(- , Va. B2 + 3 ) S — ratio or = #vg. In Type I, a = 3 V8, 2\T * * *55, 65, Ig In Type III, or = #vg, == —#– where the sign is the -H 2Vº Same as that of pus: In Type IV, a = } Vºž. In Type V, a = *H y since p — 4 is the positive root of the quadratic: Q-9-º-o-º-0. p is readily found. (q + q.) Val-q, -3) D (q – q.) V ((q – 1)(q. H 1)} where (1 — qi) and (q2 + 1) are the two roots of the equation In Type VI, o, a S? = 0 4 + #31(s + 2)*/(s + 1) To compare any observed frequency polygon of Type I with its corresponding theoretical curve. 1711 m2 g = 1/0 1+.) 1 – # tº li l2 To find li, lº, Trli, 1712, Jo. The total range, l, of the curve (along the abscissa axis) is found by the equation l = : V8,(s-H2)2+16(s−1); 22 — S2 + COMPARISON OF POLYGON WITH TYPE 55 li and lz are the ranges to the one side and the other of yo; mºmº l, a #(l − Ds); D = oro. = \/p.2. o.; la = l — li; li m =; G –2); m1 + m2 = S – 2; N mi”.I. mº"? I'(m1 + m2 + 2) !/0 TT (mTºmºyºtºmº F(n + 1)T(m, ET) To solve this equation it will be necessary to deter mine the value of each parenthetical quantity following the I’ sign and find the corresponding value of I' from Table XIX. It is, however, sometimes easier to calculate the value of yo from the following approximate formula: — 1 1 1 I _ N. (mi + m2 + 1) vſm, H. "...G. H. - - - #). l V2'rmºm, $/0 With these data the theoretical curve of Type I may be drawn. Frequency polygons of Type I are often found in biological measurements. To compare any observed frequency polygon of Type II with its corresponding theoretical curve. 2 \ 17, g/ = ſ/o ( – ) g #12 This equation is only a special form of the equation of Type I in which li = lc and mi = m2. As from page 44, 31 = 0 in Type II, l = 2a: v's + 1; since the curve is symmetrical, D = 0, and N I'(m + 1.5) m = }(s — 2); yo = H →H-. 2 #l VTT (m+1) The I" values will be found from Table XIX. An approximate formula for yo is given by Duncker as follows: N s — 1 eT 4(s 1 2) go = ~ a y e a V 27" V (s -H 1) (s — 2) 56 THE CLASSES OF FEEQUENCY POLYGONS To compare any observed frequency polygon of Type III with its corresponding theoretical curve. ac\P gy = yo 1+.) e-z/d. 1 The range at one side of the mode is infinite; at the other is found by the formula 4 — 1 — oº 61 * (for Type III). lı l N pp-Fi Also, = + = — ; yo = ------→ . °, P - B - ...; W = 7.7F.II, The value of I' corresponding to p + 1 can be got from Table XIX. To compare any observed frequency polygon of Type IV with its corresponding theoretical curve. This is the com- monest type of biological skew curves. g = yo(cos 0)”.e. ". 6 is a variable, dependent upon a as shown in the equation a: = w tan 6. The factor (cos 0)” following yo indicates that the curve is not calculated from the mean ordinate (A), or the mode (A — D), but that the zero ordinate is at A — mL); or at a distance m X D from the mean. Vu, (M, as 4 V16(s−1) – 6 (s− 2)2, m = }(s + 2); - * Va.; F *. _ a – "A" | * D 2 ‘s II 2 mo – Vs. G 2); -: Vas(s – 2) V8, 4w , with the opposite sign to us; COMPARISON OF POLYGON WITH TYPE 57 O 6 (arc of circle) = 180° ' _ (COS q)? 1 N. ſ. eT3s 12s T rq, * 90 = - A | I- w V 27" (cos (p)*** . T q = angle whose tangent is - . S To compare any observed frequency polygon of Type V with its corresponding theoretical curve. g = yoz-Pe-3/* To find p solve the quadratic equation 16 16 — 4)? — — (p — 4) — — = 0, (p – 4) £31 (p – 4) 81 and take the positive root. y = e(p − 2) Vp – 31. ... Nºw" n – */ º º ; yo = − ; D = −R. with the sign of pla I'(p — 1) p(p − 2) To compare any observed frequency polygon of Type VI with its corresponding theoretical curve. gy = yo(a: - li)72/171. 1 – q1 and q2 + 1 are the two roots of the equation S2 = 0: 4 + #31(s -- 2)?/(s + 1) y (s + 1) li = S N Aſ 2 , where (1 — qi) and S are nega- (1 – qi) (1 + q2) (1 – q1) g tive: lve; g=º ml,71-02-1 I'(q) I'(q – q2 – 1) I'(q2 + 1) 2= l(q1 + q2) (q – q.) (q – q2 – 2) * The foregoing value is approximate and is applicable when, as is usually the case, s is greater than 2. The exact value is given by Pearson as 2% — S2 + 3/0 N ex2 tr g/0 = - tl, ºr p f (sin 6)*e"dº the formula for reducing which is to be gained from the integral calculus. 58 THE CLASSES OF FEREQUENCY POLYGONS Example of Calculating the Theoretical Curve Corresponding with observed Data (Fig. 12). Distribution of frequency of glands in the right foreleg of 2000 female swine (integral variates): Number of glands 0 1 2 3 4 5 6 7 8 9 10 Frequency. . . . . . 15 209 365 482 414 277 134 72 22 8 2 Assume the axis yy’ (Vo) to pass through ordinate 4; then: y V – Vo f f(V – Vo) f(V – Vo)2 f(V – Vo)3 f(V – Vo)* 0 — 4 15 — 60 240 — 960 3840 1 –3 209 — 627 1881 — 5643 16929 2 — 2 365 — 730 1460 — 2920 5840 3 — 1 482 — 482 482 — 482 482 4 O 414 O 0 O O 5 1 277 277 277 277 277 6 2 134 268 536 1072 2144 7 3 72 216 648 1944 5832 8 4 22 88 352 1408 5632 9 5 8 40 200 1000 5000 10 6 2 12 72 432 2592 2. 2000 —998 6148 – 3872 48568 v1 = — 998 + 2000 = - 0.499; v2 = — 6148 + 2000 = 3.074; v3 = – 3872 + 2000 = – 1.936; v4 = — 48,568 -i- 2000 = 24.284. M = 4 – 0.499 = 3. 501; 3.074 – (-0.499)* = 2.824999; — 1.936 – 3 (–0.499 × 3.074) + 2(–0.499)3 = 2.417275; 24.284 – 4 (–0.499 × — 1.936) + 6 (0.4992 × 3.074) –3 (0.499)* = 24.826314. (2.417275)2 81 = (2.824999); = 0.259177; 24.826314 = — = 3. 110825. 82 (2.824999)2 0.259.177 (3.110825 –– 3)2 F = = — 0.373. { 4(4 × 3. 110825 – 3 × 0.259.177) } (2 × 3. 110825 – 3 × 0.259.177 – 6) ..". Type I. 6(3.110825 – 0.259.177 – 1) 3 × 0.259.177 – 2 × 3.110825 + 6 #Vo.259177 3. 110825 —H· 3 = 0.311157. 5 X 3. 110825 – 6 × 0.259.177 – 9 D = V2.824999 × 0.311157 = 0.522984. D - s = 0.522984 X 19.986091 = 10. 452406. Fº £1.1 Piº £1.3 £14 == = 19.986091. C2 = 2= COMPARISON OF POLYGON WITH TYPE 59 1. 680.773 l = —- Vo. 259177(19.986091 + 2)” -- 16(19.986091 -ī- 1) = 18.045048. 18.045048 — 10. 452406 ll = 2 = 3. 796321. 18.045048 – 3. 796321 = 14.248727. 3. 796321 × 17.986091 m1 = = 3.783918. 18.045048 17.986091 – 3.783918 = 14.202173. l 2 - m2 == 2000 18.9861 × V 17.9861 18.0450 V 2, x 3.7839 × 14.2021 474. 50 the frequency of the modal class. Position of the mode, y0 = A — D = 3.501 – 0.523 = 2.978. The closeness of fit to the theoretical curve is calculated below by Pearson's method (pp. 47, 48). 29(200556–2643–.0704) 2 V f Theoretical (y) 6 62 * — 1 O O O O O 0 15 19.4 — 4.4 19.36 1.00 1 209 180.9 28, 1 789 .. 61 4.36 2 365 391 .. 1 — 26. I 681. 21 1.74 3 482 474. 5 7.5 56.25 0.12 4 414 408.9 5. 1 26.01 0.06 § 277 274.7 2.3 5. 29 0.02 6 134 149.6 — 15. 6 243.36 1.63 7 72 67.0 5. 0 25.00 0.37 8 22 24.7 — 2.7 7.29 0.30 9 sk 7.3 :k x: 10 º 1.7 | | 0.7 0.49 0.05 11 0.3 62 X — = 9.65 = x2 3/ degrees of freedom = 10 — 3 = 7. Three degrees of freedom have been absorbed in finding the mean, standard deviation and adjusting the total frequency. Looking up the entry of 9.65 under N' = 7 in Table XII we find the P = 0.21. That is, the probability is that in 21 out of every 100 random series belonging to Type I we should expect a fit not essentially closer than that given by our series, which, of course, assures us that this distribution is properly classified under Type I. * Classes with a frequency of less than 10 should be grouped together. 60 THE CLASSES OF FREQUENCY POLYGONS The Use of Logarithms in Curve-fitting. Most of the statistical operations can be greatly facilitated by the use of logarithms. In curve-fitting their use becomes necessary. The following paradigm will be found of assistance: GENERAL log v1 = log (V – Vo) — log N. M = Wo + v1. log v2 = log (V – Vo)” – log N. log aſ log va = log (V – Vo)? — log N. log C log v4 = log (V – Vo)* — log N. log PEM = 9.828982 + log o – ; log N. log PEa = log PEM – 0.150515. log PEG = log PEa — log M + 2. # log us. # log us-log M-H2. log 2 = 0.301030 +3 = 0.0833333. Find: 2 log vi. log 3 = 0.477121 3+p = 0.0291667. 3 log v1. log 4 = 0.602060 3 #6 = 0.0125000. 4 log v1. log 6 = 0.778151 log # = 9.698970. gº = }t (log v2) — J. (2 log vi) — 0.083333. Find: log us; 2 log u2; 3 log u2. as = }t (log va) — 9 (log 3 + log vi + log va) + 91 (log 2 + 3 log vi). Find: log us; 2 log ps. ps = }t (log va) – 9 (log 4 + log vi + log va) + 9t (log 6 + 2 log vi + log v2) — Wł (log 3 + 4 log vi) – 9t (9.698970 + log us) + Hº. Find log us. log 31 = 2 log us – 3 log u2. log 32 = log us – 2 log u2. w = 582 – 631 – 9 (Types I, IV). Skewness: Type I. logo. = # log 31 — log w 4-log (3, +3) — 0.301030. Type III: log a = # log 31 – 0.301030. Type IV: logo. - # log 31 – log (3, 4-3) + log w – 0.301030. MULTIMOD AL CURVES 61 Type V: logo. = log2 + # log (p −3) — log p. Type VI: logo. = log (q1 + q2) + # log (q – q2 – 3) — log (q – q.) – 4 log (q - 1) – # log (qa + 1). TYPE IV This is the most difficult of all the types to be fitted. The work of fitting is carried out by the use of logarithms, as follows: log j = # log 3 +log (s – 2). log k = logj + # log us. log a = logj — log (s-H 2) — 0.301030. log u = # log us + # log {}t ſlog (s – 1) + 1.204120. – 9t[log 31 + 2 log (s — 2)]} – 0.602060. 2 log D = logo. + # log us; m _* + g 2 log mL) = log k – 0.602060. log r = log k + logs – 0.602060 — log u. log tan q = log r — logs. log 0 = 8.241877 -- log 9°. * logyo = log N + # logs + 9t (log(): (2 log cos d – log 3s) —% (8.920819 — log s) — J. (log r + log b)] +9.637784} – 0.399090 — log u — (s-H 1) log cos ſp. log y = log yo + 9t log (s -- 2) + log log cos 0 – 9t[9.63778 + log 0° “ -- log r). MULTIMoDAL CURVES Multimodal curves are given when the frequency in the different classes exhibits more than one mode. False mul- timodal curves result from too few observations, or when the classes are too numerous for the variates. By increasing the number of variates or by making the classes more inclusive Some of the modes disappear. * In degrees and fractions of a degree. 62 THE CLASSES OF FREQUENCY POLYGONS Multimodal curves differ in degree. The modes may be so close that only a single mode (usually in an asymmetrical 5.5 4,5 3.5 2.5 1.5 .5 0.5 1,5 2,5 3.5 4,5 5.5 6.5 7.5 Frg. 15 curve) appears in the result; or one of the modes may appear as a hump on the other; or the two modes may even be far apart and separated by a deep sinus (Figs. 15 to 18). 2 1 0 1 2 3 4 FIG. 16. Pearson has offered a means of breaking up a compound curve with apparently only one mode into two curves having distinct modes; but this method is very tedious and rarely applicable. MULTIMODAL CURVES 63 The index of divergence of two modes of a multimodal curve is the distance between the modes expressed in terms 3 2 1 0 1 2 3 4 5 6 7 8 FIG. 17 of the standard deviation of the more variable of the com- ponents.” The index of isolation of two masses of variates grouped 3 2 1 0 1 2 3 4. 0 1 2 3 4 FIG. 18 about adjacent modes is the ratio of the depression between the modes to the height of the shorter mode. * I have proposed (Science, VII, 685) to measure the divergence in a unit = 3 X standard deviation, which has certain advantages in species study. C. B. D. 64 THE CLASSES OF FREQUENCY POLYGONS The meaning of multimodal curves is diverse. Sometimes they indicate a polymorphic condition of the species, the modes representing the different type forms. This is the case with the number of ray flowers of the white daisy which has modes at 8, 13, 21, 34, etc. Sometimes they indicate a splitting of a species into two or more varieties. See also Chapter IV. CHAPTER IV ANALYSIS OF WARIANCE General Variance, as the measure of the degree of variability, is the mean of the squared deviations from the mean, or the second moment about the mean. It is equal to the square of the standard deviation. It frequently happens that the variants do not all belong to the same homogeneous group. The total variability can often, in such cases, be analyzed to advantage. And it is a remarka- ble property of the variance of a variable that, when it has been analyzed under several independent causes, the total variance is the sum of the variance due to each cause. To analyze variance, arrange the variants in a table, placing, in one row, the frequencies of one class, distributed with refer- ence to the second varying or differentiating factor; and so proceed with the various rows. This class of cases is illus- trated in the example (Table 2), in which is given the number of well-developed fruits in various (10) flower clusters taken at random from each of ten dogwood trees (a-j). Starting with such a distribution we may seek an answer to the question in how far is the collection of data homogeneous? There are differences, clearly enough, between the averages of the different trees. What is the probability that such dif- ferences are due merely to random sampling? In the analysis of variance from this point there are two methods according as the total frequencies of the columns are all equal or, on the other hand, unequal. 1. Numbers of the Observations in Each Class Are Equal The deviation of any class entry (such as the number of fruits in a head from any one tree) from the grand mean of all 65 66 ANALYSIS OF WARIANCE fruits from all trees is the deviation of such entry from the mean of such class plus the deviation of the class mean from the grand mean: or, a - Mr = (a, - Ms) + (Ms – Mr); where ac is the individual variant, Mr the mean of all the variants, or the grand mean, and Ms is the mean of the class (in the example, the sample derived from one tree). The total sum of squares is obtained by squaring the above equation as many times as there are observations, and then summing up all these squared deviations. This result may be expressed as follows: total sum of squares = X(a: — M.:)* = X, X(a: — Ma)” + n 2s (Ms – M.)? where n is the frequency in each group, and 2s indicates that the summation is made for all (s) groups. If the data are grouped and one assumes a centrally placed guessed mean the arithmetical labor of computing the devia- tion is much reduced. The various components of the above equation may then be found as follows: total sum of squares = X(a – M.)* = X(x')? — N(M'.)”; sum of squares within classes = 2, 2(a: — M.)* = X(w')” – m Xs (M'.)”; sum of squares between classes = n X, (Ms – M.)” = n.2,0M’s)” – N(M'.)”; where the prime (') indicates that the symbol it accompanies is measured from an arbitrary origin. N is the total number of observations. n is the number of observations within each class. Example (Table 2). X (a – M2)2 = 738 – 77.44 = 660.56 2s X (a: — Ms)2 = 738 – 456 = 282.00 n2a(Ms – Mr.)2 = 456 – 77.44 = 378.56 OBSERVATIONS IN EACH CLASS ARE EQUAL 67 ?), ††ZA, * 0I† * ſ;00 º 69/, * 900 ^ Ig|Z “ Z9I * 063 º 999 ' Z9/, '9I† º 8(#) & \ / ± 88 " 0 –I " Z0 ° € –† • ZO " Ig * I –† - O –8 " Z –9 * T –9 ° Z –6 “ Z –ſa -; 4 88 –IZ ,08 –ÞZ0IĢI —† –8Z –9I –9Z --6.Z —,'pº, 00I0I0I0I0I0I0I0I0I0I0I‘ “ ’ [8ņoJ, ZIIZT 0II 00I Ź.Z6 889II8 8IZI8IZ ZIZ#9III9 gZITI9 #IIZ†ZII8# gI98IZII#8 8IIIZI†889Z 9IQȚZZ88I !!q3J9p0q18J94 snpo [840),J9dſ 8ņyn IŲ yo ºut3(){Jøqu'InN ĶIOĶĪĶIVH ONIŁA S CITOO JLV CIGIJLVOOT (nun pºloſſ snu 100) SGIGI HJ, CIOOAAĐOCI JLN GIHAHJI,IIGI JIO 'HIGIJLSQ TIO HIGHÆ SJLIQ}{\BI CIGH&IOTIGHAGHCI-TITIGIAA HO S'HIGHEIIN (1N „RO SNOIJL[]{{I\IJLSICI—’3 GHT8IVJL 68 ANALYSIS OF WARIANCE 2. Numbers of Observations in Each Class Are Unequal The general method follows that of equal classes, but with some restriction. The general equation for finding the sum of squares when the size of classes is equal (p. 66) is to be modi- fied as follows: X(a: sºme M.)? -: 2s2(a: sºme M.)? + 2sms (Ms Eºs Mz)?. If the data are grouped and if one assumes a centrally placed guessed mean the procedure for finding the various sum of squares is as follows: total sum of squares X(a — M.)? = X(w')? — N(M'.)”; sum of squares within classes = 2s2(a — Ms)* = X(w')” – 2ns(M’s)”; Sum of squares between classes = 2,n,(Ms — M.:)? = 2,ns(M'.)” – N(M'.)”. Example (Table 3). X(a: — Mr.)2 = 830 – 12.35 = 817.65. 2s2(z — Ms)2 = 830 – 168.92 = 661.08. Xsn's (Ms — Mr.)2 = 168.92 – 12.35 = 156.87. To get the mean square between different classes (e.g., trees) divide by the number of “degrees of freedom,” which is one less than the number of samples. This procedure is justi- fied on the ground that in a series of n deviations from the mean (of which the sum is zero) the nth deviation is fixed and should not be regarded as an independent contributor to the mean variance. The same procedure is followed to get the mean square variation within classes. The total number of observations minus 1 gives the total number of “degrees of freedom.” This can be analyzed into two parts: (1) the degrees of freedom between classes, and (2) the degrees of freedom within classes. - The results of Analyses 1 and 2 can be best brought out by OBSERVATIONS IN EACH CLASS ARE UNEQUAL 69 ?), 98’ ZI | 66ý“† || 19Z" $ | 08ſ ( 14 | 063 - 0 | 0țý“ Z009’ 0 || || II ’3000’0 || || 6 [[ 'f9 || 836 ° {g [8’ 0 [ [ 000 º 9000' ŻI | 336 * #3 | 0(±)', & \, ?! 86 I 70– | [ſ] : 0 | 19ț¢’0– | 908' I || 860' 0 | Þýz' 0– | 19 Iº 0 | 8g8°0– | 000’I – | #68‘ I — | 988' 0– | ' 0– | 00g“ 0– | 000 * I -- | g3g’I0'a-'; #9–ŻI| –#3†OI ----8—9–() I –9ý –g–6 I –0 ! –ZI~~8I0,93, Z88Z8g [IZIýI†8I2. I0Į88ȘIg804ŻIȘIII?), 8Ģ3I82–13 ZIII88†9Z-ga, g80I899ŹI8I8#Z-8Z [99ZZ9†8IIIIZZZZ-IŻ 988ZZ9 []03{IZI99 I0 IZ830Z-6I g0I8Z8† IØI3898I99 IZ9ZI8I-LI I#8IZ-8Z†ZýIŹZ†Z9I-gI 0IIZI8ZI† I-ȘI †ZIȚZ I–II gŤ | # I8I | Z [II || 0 ||6 82.9g*8Zſ | sſageai [840 LJo 19qūInN queĮđ ſempļAȚpuſ go JaquInN 'HO8IHVH ĐNIHāIS CITOO LV "(puqoqº snių?I) OVW n'S GIHJ, „IO SGITHIN VXGI GI JHO SGIAVGIT SílOI?IVA GIHJ, NO CINQOH SJLGITI, IVGHT JHO HGHAI WIQIN GHEIJ, JHO SNOIJLQ {II}IJLSICI–'8 GITIĶI VJL 70 ANALYSIS OF WARIANCE arranging the data as follows, and then by computing the mean square: 1. ANALYSIS OF WARIANCE, DOGWOOD FRUITS Sum of Degrees of Mean Source of variation Squares freedom Square Variation within classes. . . . . . . 282. 00 90 3. 13 Variation between classes. . . . . . 378. 56 9 42.06 Total . . . . . . . . . . . . . . . . . . . 660. 56 99 6.67 2. ANALYSIS OF WARIANCE, R. glabra Sum of Degrees of Mean Source of variation of squares freedom square Variation within classes. . . . . . . 661.08 317 2.09 Variation between classes. . . . . . 156.67 14 11. 18 Total . . . . . . . . . . . . . . . . . . . 817.75 331 2.47 To test whether the mean square between classes is sig- nificantly different from the mean square within classes, it is necessary to compute larger mean square smaller mean square If F is as great as the light-faced figures given in Table XIII*, the difference between the mean squares is considered to be statistically significant, since only 5 times in 100 trials would random sampling give a difference of the magnitude. If F is as great as, or greater than, the dark-faced numbers the difference is considered very significant since only once, or less, in 100 trials would random sampling give a difference of this magnitude. 42.06 3. 13 that an F of about 2.72 is very significant, and therefore the variation between trees is greater than that within trees. 11. 18 2.09 about 2.24 is very significant, and, therefore, the variation between plants is greater than that within plants. In Analysis 1, F = = 13.4. Entering the table, it is seen In Analysis 2, F = = 5.3. The table shows that an F of * For descriptions of Table XIII see p. 152. CEIAPTER W CofEELATED WARIABILITY AND MEASURES OF RELATIONSHIP Definition. When two or more traits, characters or sets of values show mutual or reciprocal correspondence of relation- ship in their variability, they are said to be correlated. Correla- tion has been variously defined as: (1) the concomitant varia- tion of two characters or traits; (2) the interdependence of two variables; (3) the tendency for two variables to be related in a single-valued mathematical function. Correlation may be between two or more quantitative or non-quantitative varia- bles. It may be linear (rectilinear, p. 73) or non-linear (cur- vilinear, p. 87). The following types of correlated variability or relation- ship are considered in this chapter: A. Correlated variability between two sets of variables. I. Both variables quantitative. 1. Linear correspondence. a. Computation of r with ungrouped data. a. By deviations from true mean. 8. By deviations from Mz = 0 and My = 0. ºy. By rank difference method. b. Computation of r from grouped data. c. Regression lines. 2. Non-linear correspondence: correlation ratio (m). 3. Spurious correlation. 4. Special coefficients. . Alienation. Determination. Reliability. . Attenuation. . The coefficient of similarity. . The correlation between a variable and the deviation of a dependent variable from its probable value. 5. Analysis of variance in correlation. . 71 72 MEASURES OF RELATIONSHIP IN VARIABILITY II. One variable quantitative; the other in two categories (biserial correlation coefficient). III. Both variables are non-quantitative. 1. Both sets of variables occur in several classes (coefficient of mean square contingency). 2. One set of variables with several classes (biserial eta). 3. Each set of variables with two classes (tetrachoric correlation). B. Interdependence between three or more sets of variables. I. Coefficient of partial correlation. 1. Computation of r. 2. Partial sigma. 3. Regression equations. II. Multiple correlation. III. Coefficient of part correlation. IV. Tetrad difference. A. CORRELATED WARIABILITY BETWEEN TWO SETS OF WARIABLES I. Both VARIABLES QUANTITATIVE As just stated, the correspondence between two quantita- tive series of correlated data may be linear or non-linear or curvilinear. The following scatter diagrams illustrate the dif- ference between these types (Figs. 19, 20). For a refined test of linearity see page 88. CORRELATION BETWEEN TWO SETS OF WARIABLES 73 ; l : : § i : i § § § § à § š § § § § § Stature in Millimeters |PIG. 19. Corr ELATION OF STATURE AND AGE For Boys 5 To 16 YEARs oF AGE. B.Rooklyn ORPHAN ASYLUM DATA, UNITED STATES AND North EUROPEAN RACIAL GROUP. 1933. (RECTILINEAR TYPE.) Not only is there shown a close relationship between age and stature for these data, but the functions are markedly linear in the tendency shown by the data to form a straight band across the table. NoTE: For both tables, scale values indicated at each class are the mid- points of the class intervals. 1. Linear Correspondence The coefficient of correlation, r, also called the Pearson product moment coefficient,” is an index of the degree of linear relationship between two sets of varying values. The coefficient r may have any value from –1 to +1. In positive correlation, relationship is such that large deviations from the mean of one variable tend to be associated with large devia- tions of the other, small deviations with small deviations, etc. In negative correlation, relationship is reciprocal, and large deviations in one sense of one variable tend to be associated with corresponding deviations in the opposite sense of the * After Karl Pearson, who developed the product-moment formula. 74 MEASURES OF RELATIONSHIP IN VARIABILITY other. Zero correlation, r = 0, denotes the absence of any relationship between the variability of the series of variables. 19 18 17 16 15 14 13 12 ; 1 1 In to to un un - un § 3. 5 33 3 § Thigh Girth/Leg Length- (Per Cent) FIG. 20. CoRRELATION OF AGE AND THIGH GIRTH/LEG LENGTH FOR Boys 3 To 19 YEARs of AGE. B.Rookly N ORPHAN Asy LUM DATA, No RTH EUROPEAN RAcIAL GROUP. 1933. (CURVILINEAR TYPE.) Although relationship may be fairly high, the data show a tendency to form a curved band across the table. The table is definitely but not highly curvilinear. It is sometimes difficult to determine by inspection whether data are linear or curvilinear. See page 88 for more precise method. 1 un to UO OO \O VO ; § ; a. Computation of r with ungrouped data. o. By devi- ations from true mean. The fundamental formula is 2a:y Try = -— Noºr gy COEFFICIENT OF CORRELATION 75 Example. The relation of body weight in kilograms (X) and trans- verse chest diameter in millimeters (Y) for 10 individuals will be used as an illustration. Desired: the correlation between these two dimensions. TABLE 4.—RELATION BETWEEN WEIGHT AND TRANS- VERSE CHEST DIAMETER IN 10 INDIVIDUALS, TOGETHER WITH THE COMPUTATION OF THE ELEMENTS OF THE COEFFICIENT OF CORRELATION Let X = weight in kilograms. = chest transverse diameter in millimeters. Mz = arithmetic mean of the weight distribution. My = arithmetic mean of chest diameter distribution. or = standard deviation of weight distribution. a y = standard deviation of chest diameter distribution. a = deviation of an individual's weight from the mean weight. y = deviation of an individual's chest diameter from the mean chest diameter. 2 = sum of, summation. N = number of individuals in distribution. 3. !/ 24/ Individual | X | Y £ $/ 22 y? mºm -- 3C/ o a. o y ozo y 1. 45| 226|—10.40|–18. 10|108.16|| 327. 61 — 1.29 –0.82] 1.05| 188.24 2 44| 221|–11.40|—23. 10|129.96 533.611–1.41|–1.04| 1.47; 263.34 3 66| 281| 10.60|| 36.90|112,36||1361, 61 | 1.31|| 1.66|| 2.17| 391. 14 4 64|| 256 8.60|| 11.90| 73.96|| 141.61 | 1.06| 0.54|| 0.57| 102.34 5 64 268 8.60|| 23.90| 73.96: 571. 21| 1.06| 1.08] 1. 14 205.54 6 60|| 258| 4.60| 13.90; 21.16|| 193,21| 0.57| 0.63| 0.36|| 63.94 7 55] 253 – 0.40 8.90|| 0.16|| 79.21–0.05] 0.40|—0.02 –3.56 8 47| 221|– 8.404–23. 10| 70.56] 533. 61|–1,04 — 1.04| 1.08| 194.04 9 49| 210|— 6.40|–34. 10| 40.96||1162.81|–0. 791–1.54|| 1.22 218.24 10 60| 247 4, 60 2.90| 21.16 8.41 0.57 0.13| 0.07| 13.34 Total (X) |554|2441 652.40|4912.90 9. 11 1636.60 - 1636. 60 10 × 8.08 X 22, 17 The problem above may be worked without the use of the three columns, --, - and g oz ory orzo y column, combined with the g’s of the two arrays, will suffice, and Tacy = 0.914. To compute r the data in the right-hand Naray Substituting in this formula, , which is the original form of the product-moment coefficient. 1636.60 "ºv - Goºsos).I., " + 9°14' 76 MEASURES OF RELATIONSHIP IN VARIABILITY To test the significance of an observed r, calculate ===, VN-2 t – ºvn-2 where N is the number of paired observations on which r is based. Then enter Table XIII with N – 2 as the argument “degrees of freedom for smaller mean square.” r is significant if the calculated tº is equal to or greater than the value of t appearing in fine print, since an r of this magnitude would not occur more than five times in a hundred trials where the suc- cessive samples were taken from a population having a correla- tion of zero. Example. In the problem on pages 80 and 81 where raw is 0.776 - 0.776 v/1 - 0.7762 From Table XIII at 50 degrees of freedom the t value in bold faced type is 2.678. Since the value of t found is greater than this we conclude that r is significant, since an r of that size would not be found in an uncorre- lated population as often as once in 100 trials. V50 – 2 = 3.06 To test whether the difference between two correlation co- efficients r1 and r2, is significant calculate: 2'1 = # (log2(1 + ri) — loge(1 — ri)} 2' = }{log.(1 + r.) – log.(1 – ra)} The standard error of (2/1 – 2'2) is: 1. v'N, IEN, 6 where N1 and N2 are the number of paired observations on which r1 and r2 are based. If (2/1 – 2'2) is at least two times greater than its standard error the difference is probably significant. A widely used formula for determining the standard error of an observed r is: S.E.(z'i - 2'2) = -F 1 — r? vºn This formula is quite accurate when N is large and r is Small. For standard errors of various values of r consult Table XIV. 8. By deviations from means taken at Mz = 0 and My = 0, one proceeds as follows (same example): S.E.r = CORRELATION COEFFICIENTS 77 Individual Y Y y2 Y2 XY 1. 45 226 2025 51076 10170 2. 44 221 1936 48841 9724 3. 66 281 4356 78961 18546 4. 64 256 4096 65536 16384 5. 64 268 4096 71824 17152 6. 60 258 3600 66564 15480 7. 55 253 3025 64009 13915 8. 47 221 2209 48841 10387 9. 49 210 2401 44.100 10290 10. 60 247 3600 61009 14820 Totals, X = 554 2441 31344 600761 I36868 M X Y. - 554 = 55.4: M, = XY - 2441 244.1 z - N - io - “” “v - N - io ~ ***. XX2 31344 or = gº- Mr.” = - 55.42 == 8.08. N 10 XY2 2 600761 ow = \|− – My” = — 244.12 = 22.17. N 10 21:y XXY 136868 TNT - TNT — Mr My = 10 — (55.4) (244.1) = 163.66 (numerator). XXY M N acara V 163.66 Try = --> = 0.914. ora o y (8.08) (22.17) Still another formula which does not involve computing the means, thus eliminating fractions, is given by NXXY – XXXY vſ Nºxº~ (xx) ||Nºyº - (x,y); Substituting values of sums in the above problem gives == 10(136868) — (554) (2441) vſ10(31344) — (554)2][10(600761) - (2441)2 = 0.914. Tacy A great number of variations of the original product- moment formula for r have been worked out for use with data of various forms (Symonds, 1926). One of the more com- monly used formulae is illustrated below. 78 MEASURES OF RELATIONSHIP IN VARIABILITY 'y. The rank-difference method for computing r is useful for short series or for series which may be ranked in order of merit or magnitude. It is less accurate than the product-moment method because it takes account only of relative position of the items in a series rather than the magnitude of the devia- tions from the mean. Use of this method is recommended when: (1) One or both variables are expressed in ranks. (2) A short method is desired for determining the presence of relationship rather than its extent. (3) The nature and amount of the data do not warrant the use of more laborious methods giving accurate results. Spearman's formula is as follows: 6XD2 p = 1 — --—- N(N2 – 1) where D refers to the differences in ranks. The method of assigning ranks to scores or measures must take account of possible ties in rank. Two methods are in use: (1) The mid-rank method. Individual values are given equal rank in case of ties, and the rank assigned is the mid- rank of the set of equal values. (2) The bracket-rank method. Individual values are given equal rank in case of ties, and the rank assigned is the value which would have been assigned to the first of the tied values, had there been no tie. In both methods, the individual after the ties takes the rank which would have been assigned if there had been no ties. Rank Rank Example. Values Mid-rank method Bracket-rank method 29 1 I 30 2 2 31 3.5 3 31 3.5 3 38 6 5 38 6 5 38 6 5 42 8 8 50 9 9 The following set of measures used on page 75 may serve to illustrate the method of computing p (rho): COMPUTATION OF r WITH GROUPED DATA 79 Ranks X y QC 3/ ac - ly (a — y)2 44 221 1 2.5 — 1 .. 5 2.25 45 226 2 4 —2.0 4.00 47 221 3 2.5 0. 5 0.25 49 210 4 1 3.0 9.00 55 253 5 6 — 1.0 1.00 60 247 6.5 5 1.5 2.25 60 258 6. 5 8 — 1.5 2.25 64 256 8.5 7 1. 5 2.25 64 268 8.5 9 — 0.5 0.25 66 281 10 10 0 O XD2 = 23.50 6(23.50) p Indon-I, ++086. Assuming normally distributed variates, r is slightly larger than p. The correction formula given by Pearson is r = 2 sin jº. Table XVII, page 189, presents values of r for computed values of p. For the above problem, with p = +0.86, r = +0.87. The probable error of p is given by Pearson's formula 0.6745(1 — p?) v/N For the problem above PEp = | | {1 + .086p2 + 0.013p 4 + 0.002p*]. 0.6745 (1 – 0.862) PEp = —º- [1 -- .086 (0.862) + 0.013 (0.864) + 0.002 (0.866)] = + 0.060. If fine accuracy is not desired, the last term of the for- mula may be neglected and the formula becomes PEp 0.6745(1 — o? -: º which for our problem gives PEp = +0.056. b. Computation of r with Grouped Data. o. Standard Formula. The formula for the calculation of the correlation coefficient r from grouped data and using assumed means is as follows: P., .W. (Xfc') (Xfy') zºº' --→ Vº &m *Nºwy sºme sº - Try = Example. TABLE 5. — DISTRIBUTIONS OF THE MEASURES OF WEIGHT AND TROCHANTER BREADTH FOR 50 INDI- VIDUALS, ILLUSTRATING METHOD OF WORKING OUT THE FORMULA FOR ray Trochanter Breadth in Millimeters z-variable 70–71 68–69 66–67 64–65 62–63 60–61 58-59 56–57 54-55 52–53 50–51 # P-4 # & º i ; ar' fz' —8 fr’2 64 ; : f : 1 3 5 : 3 S. º cºrº * 4, #3 & 6 4. 0 1 0 4 0 4 ; ; : 1. : . . . 3 : : 2 0 8 f y’ fi/2 Xfr’y' l 8 8 64 72 0 7 0 0 0 1 6 6 36 12 4 5 20 100 75 4 4 16 64 36 3 3 9 27 9 2 2 4 8 6 4 1 4 4 —5 6 0 0 0 0 1 - 1 1 0 7 – 2 28 28 7 —3 63 57 4 - 4 64 60 4 —5 100 110 1 —6 36 48 1 —7 49 35 50 644 543 N X fly’2 X frºy' 50 —46 764 COMPUTATION OF r WITH GROUPED DATA 81 Let ac' and y' = deviations in classes from the assumed mean of the a-variable and the y-variable, respectively. fac’, fy’ = number of cases at a given class times the deviation of that class from the assumed mean. fr’2, fy'2 = (a:') (fr’) and (y') (fy'), respectively. At the weight class (44–45) there are 4 cases, f = 4. This class is the fifth below the assumed mean, y' = — 5. Then fy' = 4(–5) = —20; and fy’2 = (–5) (–20) = + 100. fac'y' = product-moments of frequencies about the assumed mean. The value of fac"y’ for each row is the sum of the products of the cell frequencies by their corresponding deviations. For example, in class 70–71, one case is plotted in the cell at ac' deviation, +9, and at y' devi- ation, +8, hence fac'y' = (1) (+9) (+8) = 72. In class 62–63, one case is plotted at ac' = -1 and y' = +4, one at ac' = +2 and y' = +4, two at ac’ = +4 and y' = +4, hence fac'y' = (1) ( – 1) (+4) + (1) (+2) (+4) + (2) (+4) (+4) = + 36. Substituting in the formula: 543 – S-40E tº = 0.776. 50 rzy = - 4 ( – 18) — 46)2 — 1 QY2 Via * ºved = - 50 50 8. Sum and Difference Formulae. The summation method may be applied to correlation problems and is quite rapid for use with an adding and calculating machine. The formulas 3.Te or°r + o”, – a "z- ſ g Tarty = (difference formula). !/ 2010, 2 2 2 Gr 2 — or 2 - Oſ º + ly * (sum formula). - Tzy 2a row The moments a”-y and a *, +y are obtained by summing the frequencies along diagonals from lower left to upper right and from lower right to upper left, respectively, when the lower limits of the variables are located at the lower left corner of the scatter diagram. The difference formula involves least work when correlation is positive and high; the sum formula is shorter when correlation is negative and high. The mo- ments may be checked by working out both numerators. The Sample problem illustrates the summation method using the difference formula: Let 2', y', (a – y)' = cumulative sum of frequencies at each class. cumulative sum of ac', y', (a – y)', respectively, at each class: ac”, v", (a. wº- g)” 82 MEASURES OF RELATIONSHIP IN VARIABILITY TABLE 6.-CORRELATION BETWEEN WEIGHT AND FOOT AREA TO II, LUSTRATE THE APPLICATION OF THE DIFFER– ENCE FOR.MULA *s * co nº CŞ co arº GN c 's | 3 s 3 s to ~ |3 | $º S. ** 9N a s a s a a “g & ~ 8. ^*.* º 's * ! - - - - º w-i º ;3 >: Weight Sº, in kg TTTTTTT 65–69 1 55 330 60–64 1 54 275 55–59 5 53 221 50–54 4 48 168 45–49 8 44 120 40–44 13 36 76 35–39 1 11 23 40 30–34 3 7 12 17 25–29 || 1 4 5 5 5 cºs serſ © Cº. I Cab tº C cº s > § | 3 ; 3 tº 3 ºf 3: $; $ ; Kºš Xfy(Mza – Mr.)” 1359.5351 O' mac F N -- so = 3.9084 a ma 3.9084 a my 2.6247 mzy = − = IHL. = 0.8012. my: = + = H = 0.7227: OTz 4.8782 O y 3.6316 O my = W- = N-gº--- For age: mean = 10.33; a = 3.6316. For thigh girth X 100 + leg length: mean = 54.86; a = 4.8782. 90 MEASURES OF RELATIONSHIP IN VARIABILITY 3. Spurious Correlation Spurious correlation between variables is correlation due partly or wholly to factors other than those on which the true value of the coefficient of correlation is based. Spurious cor- relation may arise from one or more of the following: 1. Presence of one or more common factors in both varia- bles. E.g., correlation of physical growth with mental growth would give high positive value for r because of the common factor, age. To eliminate spurious value, use the partial r technique (p. 108). 2. Correlation of a single measure with a composite which includes it. E.g., the correlation of leg length with stature. The amount of spurious correlation present is given by *—, where Ma = mean of the single measure, and Maº Ma--b = mean of the composite. 3. Increased range of material. E.g., suppose that two variables have scales of 0 to 100. Given a series of measure- ments none of which exceeds magnitude 50 on either scale, and with the coefficient of correlation between the two variables = 0. If a second series of measurements is made, also showing zero correlation, but lying in the 50–100 range of the scales, combining the two distributions will give a surface for which the correlation coefficient may be well above zero. Mills (1924, p. 407) gives an extreme example where the addition of one case high up on the scales raised an obtained r from —0.034 to +0.999] 4. Correlation of indices. Pearson (1897) has shown that Spurious correlation may arise when indices are correlated. The amount of spurious correlation present may be as high as 0.500. The denominator of each index becomes a common factor contributing the spurious correlation. E.g., a correla- tion of relative span (i.e., span/stature) with relative sitting height (i.e., sitting height/stature) will show considerable Spurious correlation because of the common denominator, stature, in each index. A measure of the spurious correlation arising when indices are correlated is given by the formula: V.2 tº- v/V.” + V.2 VV;2 + V.2 7'0 (spurious correlation in indices) THE COEFFICIENT OF ALIENATION 91 where ro = the amount of spurious correlation between the Ol. b indices – and -. C C Va = the coefficient of variation, i. X 100, of the a 0. factor, i.e., the numerator of the first index. Vb == the coefficient of variation, i. × 100, of the b factor, i.e., the numerator of the second index. Ve = the coefficient of variation, #. × 100, of the c factor, i.e., the denominator of both indices. When using the formula it is assumed that the absolute values, a, b and c are uncorrelated. 4. Special Coefficients a. The coefficient of alienation measures lack of correlation between two variables. Kelley (1919, p. 173) has assigned the symbol k to denote this coefficient, and its formula is k = V/T2 (coefficient of alienation) The coefficient k is also called residual correlation by Tryon (1929), k” being “the degree of determination of Y from resid- ual factors other than X.” The coefficient k may be used as a measure of the predictive value of an r. The estimate im- proves as r increases. Table 9 gives values of k for values of r from 0.10 to 1.00. E.g., to reduce the error of estimate by one- half, k = 0.50, r must be 0.866. Again, r must be 0.98 before the error of estimate is reduced to one-fifth of a chance predic- tion (k = 0.199). The general relation between r and k is r” + k2 = 1. When k = 1 and r = 0, prediction of a score by means of the regression equation becomes merely a “chance” prediction. Holzinger (1928, p. 166) gives a further application in pre- diction with his formula Ip = 100(1 — v/TL r”) (improvement over change in pre- diction by a single score) 92 MEASURES OF RELATIONSHIP IN VARIABILITY E.g., for r = 0.50, Ip = 13.4, which means that the regression forecast with a single score when r = 0.50 is only 13.4 per cent better than a random guess. TABLE 9.—GIVING VALUES OF k FOR, VALUES OF r FROM 0.0 TO 1.00 Coefficient of Coefficient of Coefficient of Coefficient of correlation alienation correlation alienation ºr k r k 0.00 1.000 . . . . . . . . . . . 10 0.995 0.80 0.600 . 20 .980 .85 . 527 . 30 .954 . 866 . 500 . 40 .917 .90 . 436 . 50 .866 .95 . 312 . 60 . 800 .98 . 199 . 70 . 714 .99 . 141 . 7071 . 7071 1.00 .000 b. The coefficient of determination is the square of the coefficient of correlation, i.e., r*. It gives a measure of the per cent of variation in the dependent variable associated with the independent variable. It may be shown (Ezekiel, 1930, p. 120) that when half the variation in Y is directly associated with X, r = V, - 0.707. By the table above, k=0.707 also. But r* = 0.707* = 0.5, which indicates that 50 per cent of the variation in Y is determined by X. Then, also, k” (called the coefficient of non-determination) is 0.707* = 0.50, the per cent of variation in Y not determined by X. For further uses of coefficients of determination, see Ezekiel (1930). c. The Coefficient of Reliability. A reliable test or mea- Suring instrument should give an identical set of scores when applied to the same individuals a second time (assuming that the individuals have not changed). Correlating such a first and second set of scores would give a coefficient of +1.00, showing perfect reliability. Since few instruments are per- fectly accurate, the correlation coefficient computed for two sets of Scores or two forms of a test of the same individuals is a measure of the degree of accuracy and is called a “coefficient of reliability.” The formula that gives this coefficient of reliability is: 2r 11 1 + riſ r(21 + z'1) (2. + z'I) E THE COEFFICIENT OF RELIABILITY 93 y I 3Cl QC {-} where z = − and 2' = − are the standard scores (or vari- Orl O 1 ates) obtained by two similar tests (observations), a and b, and 31 y ac'ſ º z - *, 2/1 = − are the standard scores (or variates) on O'I O I another form of the same test (observational technique) or and 8. Also, riſ is the correlation between the first scores by each method of testing. Repeating the test several times or lengthening a test tends to increase reliability. The Spearman (1913)-Brown (1911) prophecy—formula predicts the size of the new reliability coeffi- cient when a test is lengthened: r = Nrziº, 1 + (N — 1)rziz, (Spearman-Brown prophecy-formula) where = the predicted reliability coefficient. rziz, - the obtained reliability coefficient. N = the number of times length of test is increased (i.e., doubling the length of test: N = 2, etc.) Suppose that an intelligence test containing 85 items has a reliability coefficient, rziz, = 0.882. Desired: the reliability coefficient if the test is increased to a length of 150 items: 150 (1.765) (0.882) — = 1.765 r = = 0.930. 85 * T TIE (1765 IT) (oss2) N = The formula may be used also to determine how much a test should be lengthened (or how many times repeated) to produce a desired reliability coefficient. r – rrziz, (formula to predict length of test for de- sired reliability coefficient). N = Tr1zz - Trac1r2 Desired for the above test a reliability coefficient of 0.950; then 0.950 – (0.950) (0.882) 0.882 – (0.950) (0.882) and the test should contain at least 217 items (since 85 × 2.542 = 216-i-). - = 2.542 94 MEASURES OF RELATIONSHIP IN VARIABILITY d. Correction for Attenuation. If, to check the error of individual measurements, the observations or measurements of the at and the y variables be repeated, giving act and 22 values, and also yi and y2 values, the correlation coefficient, r, may be corrected for errors of measurement in the variables by using the formula of Spearman (1907) for correction for attenuation. ,, . . . . ſºul ºrd acy - (rziz) (rvivº) where r"ry = the corrected coefficient of correlation. rziv, = the correlation between a first set of measure- ments in the a variable and a second set of measures in the y variable. rzºyi = the correlation between a second set of mea- sures in a and a first set of measures in y. rziº, - the correlation between a first and second set of measures in ac, i.e., the reliability coefficient - of the ac measure. rviva - the correlation between a first and second set . of measures in y, i.e., the reliability coefficient of the y measure. Example. Let ra:1z2 = 0.842; rviva = 0.785; rativz = 0.684; ra:2y1= 0.726. 0.684) (0.726 Then º, - \# - 0.887. (0.842) (0.785) Given only one correlation between the two variables, an approximate correction may be made, if the reliability coefficients are known, by the formula T2:11/1 rzy = V= (rz122) (rviv.2) where raiyi = the obtained correlation between the two variables, and the other terms in the formula have the same definitions as above. If the obtained correlation rzivi = 0.700 then (Spearman, 1904). 0.700 rzy = −7– = 0.861 vo.342) (0.785) Errors of attenuation lower r except when the errors are correlated, i.e., chance errors of measurement reduce r, but constant errors leave r unchanged. COEFFICIENT OF SIMILARITY 95 e. The coefficient of similarity (Sm), also called the first moment correlation coefficient, may be used to measure relationship when data are not normally distributed and cer- tain extreme items give exaggerated values for standard deviations and product moments. The coefficient is based on average deviations. Davies (1930) claims it particularly useful in time series problems. Sm coincides with r at the –1 and +1 limits, but is generally less than r for other values of r. It does not give rise to a regression line for prediction purposes, although an approximation may be obtained by graphing. f. The Correlation between a Variable and the Deviation of a Dependent Variable from Its Probable Value (J. A. Har- ris, 1909). Given two variables, X and Y, where Y (the de- pendent variable) is a part of X, it is useful to know not only whether Y increases as X increases, but also whether Y bears the same proportion to X as X increases from low to high values. To determine this it is only necessary to have the constants that are usually computed when finding the correla- tion between X and Y and to substitute them in the formula: rzy – Vr/Vy Td = s v (1 * r”zu) + (rry tº- Waſ W.)” rzy = the correlation between X and Y. standard deviation of X Vz = X 100. mean of X Vy = standard deviation of Y X 100. mean of Y If ra is greater than zero it means that Y becomes relatively greater as compared with X, as X increases from low to high values; and rzy - Vºſ Vy. If ra is less than zero it means that Y becomes relatively Smaller as compared with X as X increases from low to high Values; and rºw < Vz/Vy. If ra is zero, it means that Y maintains the same proportion to X as X increases from low to high values. 96 MEASURES OF RELATIONSHIP IN VARIABILITY For examples, Harris counted the number of ovules per pod in a large series of black locust trees; and he distinguished between these ovules which had developed into seed and those which had not. He then sought the relationship between the relative ability of pods with a large number of ovules to mature their seeds and those with a smaller number. The data are from a sample of 1427 pods of the black locust collected from 12 trees. X = Ovules. Y = Seeds. Mz - 12.1794. My == 7.6874. or = 2.2763. a y = 3.4938. rzy = 0.693 + 0.009. From the above data: Orz 2.2763 Vz = − = − = 18.690%. Mz 12.1794 % Oſ 3.4938 V, a tº = Tº = 45.448 * T M., T 7.6874 % Tzy - Vz/V, 7'd -VH+. 0.693 – 0.411 V1 – 6932 + (0.693 – 0.411)? where V indicates coefficient of variation and rd is as on page 95. This is the same formula as given on the preceding page. Harris concludes that an ra of 0.364 indicates that for this Sample the pods with the larger number of ovules are relatively more capable of maturing their seeds than those with fewer. 5. Analysis of Variance in Correlation In any correlation table like that on page 85 we can dis- tinguish two variables: (1) the variance of the straight regres- THE BISERIAL CORRELATION COEFFICIENT 97 sion line (passing in this case from upper left to lower right); and (2) the variance within the array. (1) is measured (a) by the summation of the squared deviations of each of the y (regression) values from the y mean corresponding to each as class; or (b) r? being known, by the product of r" and the sum of the squared deviations of each of the y class entries from the mean of all the y values, all divided by the number of degrees of freedom (see p. 31). (2) is measured (a) by the summation of squared deviations of each of the y class entries from the corresponding Y (regression) values; or (b) r being known, it is the product of (1 — r?) by the summation of the Squared deviations of each of the y class entries from the mean of all the y values, all divided by the number of degrees of freedom. Or, 1b having been calculated, 2b is equal to 1b 1 — r? 1 multiplied by − = a – 1. r r II. ONE WARIABLE QUANTITATIVE; THE OTHER IN Two CATEGORIES The biserial correlation coefficient is an index of relation- ship between two variables, one of which is expressed in a quantitative distribution while the other is expressed in two categories (Pearson, 1909). The assumptions are that the categorical data are really continuous and normal in distribu- tion, and that the relationship existing between the variables is linear. The formula for the coefficient is: (M2 – M1)pg rbis = — O y? where M1, M = respective means of first and second cate- gorical distributions. p, q = respective proportions of cases in larger and smaller distributions. 2 = ordinate of normal probability curve at limit of area p – 0.5 (obtained from Table IIIa). a y = standard deviation of Yi and Y 2 combined. 98 MEASURES OF RELATIONSHIP IN VARIABILITY The probable error is 0.6745 (ºf - rº) v/N PErbis = Example. TABLE 10.--CORRELATION BETWEEN DISTRIBUTIONS OF BODY BUILD (WEIGHT/STATURE2) FOR THE OFFSPRING OF PARENTS WHOSE BUILD IS DESCRIBED CATEGORICALLY AS VERY SLENDER (WS), SLENDER (S), MEDIUM (M), FLESHY (F), AND VERY FLESHY (VF). The distribution of progeny of different matings according to index of build is as follows:* Build VS : V S S S M M M F F | VF index X | X | X X X X X X X X | Totals M | S M M F | VF F VF | VF 22 1 1. 1 I. . . . . . . . . . . . . . . . . . . . . . . . . . e i s s a s - e s s a 3 24 1 |. 4 | . . . . . 2 |. . . . . 2 . . . . . 9 26 4 2 | 1.4 6 4 1 I 1 1 34 28 1 3 || 12 24 12 11 3 10 | . . . . . . . . . 76 30 2 6 9 57 48 31 9 8 6 2 178 32 1 9 7 74 67 86 17 10 || 15 5 291 34 2 60 62 67 24 29 || 13 5 262 36 3 36 66 62 12 30 9 3 221 38 1 19 35 33 16 18 || 16 2 140 40 12 18 23 7 13 9 2 84 42 2 7 6 10 4 17 | 1.4 1 61 44 5 5 7 8 6 3 35 46 2 2 5 3 5 5 4 26 48 2 2 1 2 2 3 |. 12 50 2 I 1 2 2 1 3 12 52 | . . . . . . . . . . . . . . . . . . . . . . . . . 2 l 3 2 |. 8 54 . . . . . . . . . . . . . . . . . . . . . . . 1 | . . . . . . . . . . . 1 56 | . . . . . . . . . . . . . . . . . . 1 I. . . . . . . . . . . . . . . . I 58 . . . . . . . . . . . . . . . . . . . . . . . . . . 1 |. . . . . . . . . . . 1 Totals | 11 || 28 || 47 || 306 || 327 | 340 || 110 || 156 |100 || 30 || 1455 * Adapted from Davenport (1923, p. 34). A biserial distribution may be formed by dividing the table arbitrarily between S X M and M X M, so that no fleshy parents contribute prog- eny to the first section of the distribution, and no slender parents contribute to the second section. THE BISERIAL CORRELATION COEFFICIENT 99 y Let Yi = index of build distribution of progeny from “non-fleshy' parents Y2 = index of build distribution of progeny from fleshy parents. Y = total distribution. Matings Build index p = 0.73058. Y1 || Y2 | Y q = 0.26942. 22 3|. 3 24 5| 4 || 9 2 = 0.33029. 28 40|| 36|| 76 = 4.98996. Oſ 1/ -- 34 || 62| 200) 262 M1 = 32.53061. = 35.55597. 42 9| 52| 61 (35.55597 – 32.53061) (0.73058) (0.26942) 44 || 5 || 30| 35 rºis = 46 || 2 | 24; 26 (4.98996) (0.33029) 48 || 2 | 10 12| rº, a 0.3613. 50 || 2: 10| 12 52 |...| 8| 8| PErba- 54 |. . . 1 cºs(voz8058,02894) — oasis) 56 1 I 0.33029 e 58 1. 1 v1455 Totals wº 1455 PErbis = +0.0214. If the grouping is broad in the quantitative distribution, e.g., less than 12 intervals, Sheppard's (1898) correction tº- º, cov «Eº ov” = *-* 12 should be applied to the gy used in the denominator. In the correction formula, a y = the obtained standard deviation, i = the size of the class interval, and cow = the corrected standard deviation. A correction to the coefficient rhi, for small populations, e.g., less than 100, may be made with Soper's (1914) formula 1 || 1 , pg pa: Q2. *.* ~~~} +}|}+;-(1-4)(4) +}r.] where p, q and 2 are the values used in the original formula and z is the * abscissal distance from area a and is obtained from Table IV. . 100 MEASURES OF RELATIONSHIP IN VARIABILITY III. BOTH WARIABLES ARE NON-QUANTITATIVE 1. Both Sets of Variables Occur in Several Classes The coefficient of mean square contingency, C, is a measure of relationship between variables expressed in qualitative categories, e.g., colors, temperaments, health, religious prefer- ences, etc. When both characters are normally distributed and the number of categories is large, the value of C approaches the product moment r as a limit. The amount of relationship is assumed to be a function of the correspondence between actual cell frequencies and those to be expected in a pure chance distribution. The formula for determining the co- efficient is: U – 1 Uſ where U is obtained as follows: Using, for example, a 3 X 3 fold distribution table and the notation a, a', a”, etc., for cell frequencies, A, B, C, etc., for totals of rows and columns, the table has the form O. a' 0'’ A. b b' b” B C c' c" C D E. F N The value U may be found from the formula: | | a 2 b2 C2 1 || 0/2 b/2 c'2 v-, +, +, +}|#####| 1 g”2 b”2 c”2 +}|#####| For 4 × 4 fold, 5 × 5 fold and larger tables, the formula is extended to include a corresponding number of bracketed terms. In computing U, it is convenient to arrange the oracketed terms in a vertical column. The method is illus- COEFFICIENT OF MEAN SQUARE CONTINGENCY 101 trated with eye-color data for offspring and great-grandpar- ents: Eye color of offspring ; gº à || 3 || 3 || 3 § § tº a º Totals >, | .*4 Tº *: ; . Sº # ## 35 | #3 || 33 3 ſº C CO Cºld H ſº Qſº º ‘5 : Blue 192 103 81 45 48 469 35 | Gray 70 85 52 22 27 || 256 3 : | Blue-green 9 & * | | Dark gray Fº § | Hazel 36 21 - 27 9 16 109 ſ—s bſ) g Light brown TOW n 43 27 17 33 24 144 Dark brown Black 25 33 36 11 30 135 Totals 366 269 213 120 145 1113 C = Vºi = 0.2608. 1. 07299 CoMPUTATION OF C 1 ſ 1922 702 362 432 252 — — — — — — — — — — — — — | = 0.3472 366 L469 +- 256 109 + 144 + #. 7 1 ſ 1032 852 212 272 332 — — — — — — — — — — | = 0.25286 269|469 256 109 144 + #| 1 ſ 81* , 52% 27° 17* , 36° 213|469 256 109 144 135 1 T452 222 92 332 112 ~ | H -- - - - - - - - -H + | = 0.12842 120 |469 " 256 "109" 144 "135. & 1 || 48%. , 27° 16° x 24" L 30° 145|469 256 109 144 135 U = 1.07299 = 0.20116 * = 0.14328 The coefficient of contingency, C, does not have negative values, but varies between 0 and 1.0. Its value is limited by 102 MEASURES OF RELATIONSHIP IN VARIABILITY the number of categories used in the distribution, and Yule (1932, p. 66) gives for various small numbers of categories the limiting values of C. Number of cat- egories each Way . . . . . . . 2 3 4 5 6 7 8 9 10 C limit. . . . . . . 0.707|0. 816|0.866|0.894}0.913|0. 9260. 935|O. 943|O. 949 A correction for broad grouping (few classes) has been given by Pearson (1913): C Tzcº ve cC = where rac and rye = correlation of variable with class value. Using the convenient notation of our 3 X 3 fold table, these r’s may be obtained from the formulas: N N N 7 cc = Nà (20 – 21)” + E. (21 – 22)” –H F (22 – 23)” . . .etc. N N N Tvc = N; (2'0 — 2/1)” + B (2/1 — 2/2)” –H c (2/2 – 2's)” . . . etc. where 20 = ordinate at zero area. 2'0 = ordinate at zero area each = 0. g D f e ordinate at area ºz. 2'ſ ordinate at area –. N' N F 21 g E . g B 22 = ordinate at area N' 2'2 = ordinate at area N” dinate at area. 2 ordinate at area C OTOil D.8, UE rea, –. I'ea, ~. W. “ N -: 23 The ordinates may be read from Table IIIa. The correction becomes less important with finer grouping and is generally omitted with 6 × 6 fold or finer classifications. The probable error of C may be found from the formula: p3 % - 1 — d.” _0.6745 || @* + q, PEc = —— . v/N L (1 + 4*)* EISERIAL ETA 103 where q ^ = U – 1; and (a 4) ( py *-*. —º- + *—- TN || (ADY" BD)* N N using the notation of the 3 X 3 fold table given above. + ... etc. for each cell 2. One of the Two Sets of Variables Occurs in Several Classes Biserial eta (bis m) is a method of determining correlation where one variable is given by alternative and the other by multiple categories (Pearson, 1910). Regression is not as- sumed to be linear, and the categories are not quantitative. But the alternative category is assumed to have the Gaussian distribution. gº K2 * 3:2 ** = NTTK. where a is the alternative variable and y the multiple variable. 1 K? = - X(malac”, ). N (n,x*) aſy is the proportion of the total area (as determined from Table IV) lying above the point of dichotomy. ny, the number of cases in a y category. N, the number of cases in the total distribution. Example. Offspring WS S M F WF Total 3 Slender 5 140 192 41 11 389 Ǻ à | Notslender 4 148 582 244 74 1052 Totals 9 288 774 285 85 . 1441 V, very; S, slender; M, medium; F, fleshy. q’ = 0.55556 0.48611 0.24806 0.14386 0.12941 zy = 0.1397 0.0348 0.6806 1.0632 1.1292 nvirº2 = 0.1756 -- 0.3488+358.5295 +322.1623 + 108.3829 = 789.5991 104 MEASURES OF RELATIONSHIP IN VARIABILITY 789. 5991 K2 = = 0.54795. a = 0. 61282. 1441 1052 s 0. 54795 – 0.612822 p = H = 0.7300. bismay = V → = 0.3337. 1441 1 + 0.54795 389 q = − = 0.2700. 1441 To obtain each a y enter body of Table IV with value for 0.5 – q'. If q’ > 0.5, read ac/a value for q' – 0.5 and prefix a minus sign. The standard error for biserial eta less than 0.500 is obtained by the formula: 1 — nº 2 SEbism -> º pg + 2p* à. N g” (1 + a:2)? The y or ordinate value is obtained from Table IIIa. Enter the table with a /or = 0.6128 and read y = 0.3306. Thus the standard error for the problem above is: SE,-HºNº. 2 × 0.7300 × 0.6128% " V1441 0.3306? (1 + 0.61282)? = 0.0341. For eta values greater than 0.50, a longer formula for the standard error is needed (Pearson, 1917). 3. Each Set of Variables Occurs in Two Classes Tetrachoric Correlation. This is a method of expressing correlation quantitatively when the variables can not be so expressed, as, for example, in the case of effectiveness of vac- cination. Strictly, this method assumes normal variation in variables, but it can be employed generally, in default of a better method, with fairly accurate results. The prime requisite is that the qualities to be compared shall be separable into two grades, an upper and a lower. For example, in the case of the result of vaccination: on the one hand, either presence or absence of a scar; on the other, either recovery or death. As either of the second pair may occur with either of the first pair, four classes, a, b, c, d, will be formed altogether, and a correlation surface like the following may be made: TETRACEIORIC CORRELATION 105 --y Ol. b a + b —º Q: C d c + d a + c b -- d N | !/ The surface should be so arranged that a + b > c + d and a + c > b + d. The axes y, -y and ac, -a, probably do not coincide with the axes y and a passing through the “origin '' of the correlation surface, but may be regarded as situated from those axes at the respective distances h and k. These values may be found from the formulae (a + c) — (b -- d.) -Năſ.-º N T J'0 a royerº -Năſ’.” a, b, c, and d being known, h and k are found from Table IV. Then e-33° and K = V 27ſ 27r of which the values may be looked up in Table III, or, better, their product may be calculated by logarithms as follows: h2 + k2 2 H = – Vák2 log HK = 9.201820 – N |º + owns º Find also log hk, h” and k”. To find r solve the following equation to as many terms as may be necessary: ad — be hk; 1. = — r2 - 2 — 2 — 3 wiłł = r + 3 r + ( – ) (k - Dr 1 +; hkº – 3) (8 – 3)r. 106 MEASURES OF RELATIONSHIP IN VARIABILITY 1 +Pºſh – Gº 4 3)(6 – 6e 43r. 1. + zººk(hº – 10h.” + 15) (k4 – 10k” + 15)r" -- etc. This gives us a numerical equation of the nth degree which can be solved by ordinary algebraic methods, using Sturm's functions and Horner's method. Or it can be solved by successive approximations as follows: The first approxima- tion is made by neglecting all powers of r above the second and solving the quadratic (remembering that, if azº-Hbac-Hc =0, —b == V b? — 4 3C = 2 e) and taking the positive root. Sub- O. stitute this value in the whole equation to the fourth power for f(r), and in the first derivative of the same equation for f'(r) (remembering that the first derivative of f(z) is obtained by multiplying each term in f(z) by the exponent of a in that term and diminishing the exponent of a by 1). The correc- f(r) f'(r) Repeat this process as often as the correction affects the fourth place of decimals, and go to r" if necessary. tion should be added to the value of r used in substituting. Example. The eye colors of a certain set of people (see Bio- metrika, II, 2, pp. 237–240) and of their great-grandparents were found to be distributed as follows: Offspring 1 2 3 4 6 7 8 D E. ſº a j : § !. B: B: §º B | 3 || 3 || 3 2 2 --> -C F| $7;| 6 -C, § ---> ſ # !? j| 3 | # | | | 3 | # # # | ##| ##| ##| 5 || 3 || 3 || 3 || 3 E ; : I go | CO | Q tº ſº | Q gº H cº § | 1. Light blue. . . . . . . 4 3 8 5 | . . . . . 1 | . . . . . . . . . 21 ..., | 2. Blue—dark blue. . | 8 |177 || 95 || 76 5 || 39 31 || 17 448 à | 3. Gray—blue-green.| 1 || 69 || 85 | 52 2 || 20 26 1 || 256 (5 4. Dark gray—hazel. | 6 || 30 || 21 27 2 7 | 15 1 || 109 5. Light brown. . . . . . . . . . 4 |. . . . . . . . . . . . . . tº º v - 4 6. Brown . . . . . . . . . . 2 || 37 27 | 17 3 || 30 20 4 || 140 7. Dark brown . . . . . . . . . . . 15 | 20 | 24 3 4 9 9 84 8. Black . . . . . . . . . . . . . . . . 10 || 13 | 12 2 2 7 5 51 Totals. . . . . . . . . 21 |345 |269 |213 || 17 |103 |108 || 37 |1113 TETRACHORIC CORRELATION 107 It was desired to determine the correlation between the eye color of the offspring and that of their great-grandparents. Clearly the ranges of the classes given above are not quantitatively equal nor determinable. Consequently a fourfold table was formed by dividing the population into those having eyes whose color was gray blue-green, or lighter, and those having dark gray, hazel, or darker eyes. This gives a good basis for calculation. If the dark gray and hazel eyes had been grouped with the lighter eyes it would have made quadrant a entirely too large; and there is nothing in the nature of the data that strongly favors one division more than another. Offspring g 725 – 388 gº 1–3 4–8 Totals. a1 = − = 0.302785 °. 1113 § ro 1–3 450 275 725 *—tº - 0.141080 : a2 = --— = U. -- 2 1113 * 4–8 185 203 388 cº From Table IV: § Totals | 635 478 1113 %a1 h (approac.) %a2 k (approac.) 0.151392 0.07053 0.15173 0.390 0.07064 0. 178 0. 15136 0.389 0.07025 0.177 0.00037 0.00039 h = 0.389 + (0.000032 + 0.37) = 0.38909 k = 0.177 -- (0.00028 + 0.39) = 0. 17772 hk = 0.069150; ; hk = 0.034575. h2 + k2 h? = 0.151392; k2 = 0.031585; = 0.091489. 0.224962 = r + 0.034575r12 + #(h? – 1)(k2 – 1)r,3 + ºthköh? – 3) (k2 – 3)rt" + etc. Solving 0.034575rt2 + r – 0.224962 = 0: –1 + V1 +4(0.034575 × 0.224962) rt = 2(0.034575) = 0.223225 to first approximation. - h” – 1 = 0.848608; k2 – 1 = –0.968415; coeff. riº = 0.136967. 0.069150 X 2.84.8608 X 2.968415 coeff. r14 = 24 = 0.024.363. 0.024363rt* + 0.136967rgº + 0.034575rt2 +- rt – 0.224962 = 0. Applying Newton's approximation, we reach the result rt = 0.2217. 108 MEASURES OF RELATIONSHIP IN VARIABILITY The probable error of tetrachoric rº is given approximately by the formula: - Tsin-1 rºVAT PErt = ºw | * º |(H+) | ME (a + c) (c + d) *H) N*'yy' VN The first factor may be found in Table XVIII. and u’ are found as foll a + b 725 0.6514, the r Il OL10 WS = — = U. y vana v are round as follows: -w- = His proportion of the cases in a + b. Since exactly half of the cases lie on each side of the mean in a normal distribution, the proportion of area beyond the mean will be 0.6514 – Q. 0.5000 = 0.1514. Since 0.1514 represents area, the - or O’ abscissal distance is found in Table IV. The same procedure a + g Then enter Table IIIa and find is gone through for the ordinates corresponding to the two abscissae found in Table IV. These two ordinates are y and y', respectively. 0.6510 V72 5 × 388 × 478 PErt = 5 725 × 635 × 388 × = 0.032. (1113) × 0.3698 × 0.3927 × V1113 B. INTERDEPENDENCE BETWEEN THREE OR MORE SETS OF WARIABLES I. CoEFFICIENT of PARTIAL CoRRELATION The coefficient of partial correlation is a measure of the net relationship between two variables when one or more other, more or less dependent, variables are held constant. For ex- ample, the correlation between stature and total heat produc- tion was found to be 0.622. Also, the correlations between weight and heat production, and weight and stature, were found to be 0.818 and 0.571 respectively. Now it is possible that a part or all of the correlation between stature and heat production is due to the correlation of stature with weight, PARTIAL CORRELATION 109 which in turn is correlated with heat production. That is to say, stature may be correlated with heat production through a third variable, in this case, weight. 1. Computation of r In order to eliminate the influence of a varying weight for different individuals on the correlation between stature and heat production, one could compute the desired correlation by using only those individuals who had the same weight (within a small range). This correlation was found to be 0.197 for the 41 individuals whose weights ranged from 50 kilograms to 60 kilograms. A better way of holding one variable constant and then finding the correlation between two other variables is offered by the following formula: T12 – 7'13 - ?93 T 12.3 = v/1 ſº- r°13 v/1 smº r°28 Letting stature = 1, heat production = 2; weight = 3, r12. 3 is a symbol which is read as follows: “The partial correla- tion between 1 and 2 with the influence of 3 held constant or eliminated.” After assigning a number to each variable, as above, r12. 3 is said to be “the partial correlation between stature and heat production with the influence of weight eliminated or held constant.” Substituting in the formula T12 – 7°13. Tº 3 vſ1 *m T*13 v1 *- r°23 y T12.3 = we have 0.622 - (0.571) (0.818) V1 – (0.571)2 V1 – (0.818); gº-ºº: T12.3 The correlation between stature and heat production with Weight held constant is thus about one-half of the correlation where the individuals involved had different weights. This reduction means that part of the greater heat production of tall individuals is due to their greater weight. The value of 0.328 for r12.8 indicates that there is a tendency for tall indi- viduals to have a greater heat production than shorter ones even though they all have the same weight. 110 MEASURES OF RELATIONSHIP IN VARIABILITY Eliminating the influences of weight on the correlation be- tween stature and heat production by using only those indi- viduals who have the same weight, or computing r12.3 by the above formula, gives practically the same results. The differ- ence obtained by the two methods in the above example is 0.131, which is not significant when its probable error is taken into consideration. The selection method involves a reduction in the number of individuals on which to base the correlation. On the other hand, the computation of r12. 3 affords a measure of relationship based on the weighted variability of all the ar- rays. It serves to hold variables constant without reducing the number of cases. r12, r12.3, r12.34, r12.34. . . n are called correlations of the zero, first, second and nth orders, the order being determined by the number of subscripts to the right of the decimal point. The subscripts before and after the decimal point are called the primary and secondary subscripts respectively. In correlations the order of both the primary and secondary Subscripts is indifferent. Thus, r12.84 = rs1.34 = rol. 48 = r 12.43. To calculate partial correlations for any order it is only necessary to work out the equation: r _ T12 24 . . . (n − 1) T 7 in . 84 . . . (n − 1) "2n , 34 . . . (n − 1) 12. 34 . . . n. T e v1 - rºln. 34... (n-1) vſ1 – rºzn. 34... (n-1) 7°12 – T13 rº: Thus: T12. 3 = ; v/1 — r°13 v1 — r°23 T 12.3 - T 14.3 ° 7'24.3 also 7 12. 34 = v/1 – rºl.4.3 v/1 * rº..." It has been stated that the order of both the primary and secondary subscripts is indifferent. Accordingly, T12.3 - T 14.3 ° 7'24.3 T12. 34 = ==== = ? 12, 43 v/1 – rºl.4.3 V 1 — r?24.3 tº- T12.4 - T13.4 °7'23.4 v1 — r*18.4 v1 – rºz8.4 º Rewriting the equation in this way is an excellent means of checking the work. PARTIAL CORRELATION 111 The following data were taken from a study by Harris and Benedict (1919) on basal metabolism in man. These data will be used to illustrate the computation of partial correlations, partial sigmas, coefficients of net regression, regression equa- tions and multiple correlations. Variable Mean dr 1. Total heat production in calories per day. 1631. 74 204.66 2. Stature in centimeters . . . . . . . . . . . . . . . . . 172.96 7.59 3. Weight in kilos . . . . . . . . . . . . . . . . . . . . . . . 64. 10 10.30 4. Age in years. . . . . . . . . . . . . . . . . . . . . . . . . 26.88 8.77 Correlations T12 = 0.6149. 7'23 = 0.5725. T13 = 0.7960. T24 = - 0.1154. r14 = — 0.3062. rs4 = 0.0067. To find the correlation between total heat production and stature with the effects of age and weight held constant, it is necessary to solve the equation: T 12.3 - T 14.3 °7'24.3 T 12.34 = v1 — r2 14- 3 v1 gmm T*24.3 T 12 — r13 - r 23 v/i — r? 13 \/ 1 — r*23 0.6149 – (0.7960)(0.5725) where T 12.3 =: := = 0.32077. V1 – 0.7960° V1 – 0.5725? r 714 – T13 T34 14.3 = v/1 * r°13 v/1 e- r°34 — 0.3062 – (0.7960) (0.0067 = (0.7960)( 67) = — 0.51469. v1 – 0.7960; VI – 0.0067. r - T24 - 7'23 7'34 24.3 - V1 - r”28 v/1 - T*34 _ –0.1154 – (0.5725)(0.0067) tº- = — 0.14543. v1 – 0.57252 V1 – 00067. . 112 MEASURES OF RELATIONSHIP IN VARIABILITY 0.32077 – (–0.51469)(–0.14543) V1-(–0.51469)? V1-(–0.14543)? Thus, 7 12.34 = = 0.28991 Or r12.34 = 0.290. :k Conclusion: In this sample, about half of the correlation between total heat production and stature is due to the varying ages and weights of the individuals involved (r12.84 is about half of r12). Nevertheless, there is a tendency for the taller individuals to have a greater total heat production than shorter ones, even though all the individuals involved have the same age and weight. 2. Partial Sigmas In the discussion of correlation involving two variables it was pointed out that the standard error of estimate is a measure of the variability about the regression line, and that in a particular problem, the smaller the standard error of estimate, the more reliable a prediction based upon the regres- sion equation would become. Partial sigmas may also be interpreted in exactly the same way, only in this case the variability is measured about a regression line describing the relationship between one variable, a 1, and a series of other variables 22, 2a, 24, . . . , 2n. In other words, a partial sigma is a measure of the variability remaining in aci, after the varia- bility due to 22, a3, . . . , an has been eliminated. A partial sigma of order n may be written as a 1.234 ...n. The subscripts to the right of the decimal point indicate the variables whose effect on the variability of aci has been elimi- nated. A partial sigma of order n may be computed as follows: O 1.234 . . . n = OT1 V } — r*12 v1 smº Tºiá.2 vi smº T*14.23 • * * * v/1 - rºln. 23 ... (n-1) thus, G 1. 23 F or 1 v1 — r*12 v/1 – rºua. 2; O 1.234 F O 1 v1 — r°12 v1 — r*13.2 v/i – rºi 4.28. * In the calculations it is necessary to carry out the computations to more places than is desired in the final answer in order to avoid cumula- tive errors due to the rounding off of numbers. JPARTIAL CORRELATION 113 Since the subscripts to the right of the decimal point merely indicate what variables are held constant their arrangement is immaterial as far as the numerical value of the partial sigma is concerned. Thus a 1.23 = a 1.32 = a1 V1 – rºs V1 – r°12. 3. Recomputing a 1.23 in this manner gives a convenient check on the calculation. For the metabolism problem the partial sigmas are as follows: O 1.23 = OT1 v1 — r°13 v1 — r*12.3 204.66 V1 – (0.7960); V1 – (0.3208)? = 117.332; g 2. 13 F O 2 v/1 — r°23 v/1 gºmº rºls.; 7.59 V1 – (0.5725)? V1 – (0.3208)? = 5.894; O'3. 12 = OT3 v1 – r”13 v1 – T*23.1 10.30 V1 – (0.7960); V1 – (0.1740)? - 6.139. a 1.2s is the variability of the total heat production after the variability due to the different statures and weights of the individuals involved has been removed. a 2.13 and a 3.12 may be interpreted in a similar way. The regression coefficients are calculated by the formula: sk O 1.234 . . . n. b*12. 34... n = rig. 34... n tº O 2. 134 . . . n Th O 1.23 e uS, b12. a = r 12.3 2 O 2. 13 O 1.23 g b18.2 = T13.2 y Oſ3. 12 O' 2, 13 & b21.3 = T12.3 O 1.23 * In calculating partial correlation coefficients it was stated that r12.34 = t21.34. In regression coefficients, however, b12.34 does not equal b21.34, since b12.34 is the slope of X1 on X2 and b21.34 is the slope of X2 on X1, it being understood that the effects of X3 and X4 have been eliminated before these slopes were measured. 114 MEASURES OF RELATIONSHIP IN VARIABILITY 3. Regression Equations The equation of net regression of X1 on X2, Xa . . . Xn gives a way of estimating X1 from X2, Xs . . ., Xn, provided certain relationships are known. Thus the equation to pre- dict X1 from X2 and X3 is X1 = a1.23 + big. 3 X2 + bi 3.2 X3. Similarly to predict X1 from X2, Xa and X4 X1 = a1.234 + b12.34 X2 + b13.24 Xa + b 14.23 X 4, and, in general, X1 = a1.284... n + b12.34 ... n X2 + b13.24. . . n X3 + . . . + bin .28... (n-1) Xn where X1 = the variable to be predicted. X2, X3, ..., Xn = the variables used in predicting X1. The constant term a1.284... n = M1 - biz. 34... n X2 - bis. 24... n X3 – . . . – bin .2s... (n-1) Xs. Thus, a 1.28 = M1 – biz.: X2 – big. 2.2×8. a 1.284 = M1 – biz.84 ×2 – big. 24 ×a – biº.28 X4. b12.84... n = the net regression coefficient of X1 on X2, when the effect of the variables Xs, X4, ..., Xn is held con- stant. It is the weight to be given to X2 in predicting X1 from X2, Xs, X4, ... Xn. Similarly b13.24. . . n is the weight to be given to X's in predicting X1 from X2, X3, X4, . . ., Xn. In the present problem to predict an individual’s “total heat production ” from a knowledge of his stature and weight, it is necessary to use a regression equation involving three variables, namely, X1 = a1.2s + biz.: X2 + bia.2 Xa MULTIPLE CORRELATION 115 where X1 is the predicted “total heat production ” for an individual of X2 height and X3 weight. And where O 1.23 117.332 b12.3 = r 12. = 0.3208 = 6.39: 12. 3 T 12.3 O'2. 13 5.894 y O 1.23 117.332 b = + = 0.6866 — = 13.12: 13. 2 T13.2 O'3. 12 6.139 y CL I. 23 = Mi — biz. 3 X 2 – b13.223 = 1631.74 – 1105.21 – 840.99 = —314.46; Yi = — 314.46 – 6.39× 2 + 13.12X3. II. MULTIPLE CoRRELATION The aim of multiple correlation is to test how closely it is possible to predict a variable, say X1, from several other varia- bles such as X2, X2, ..., Xn. It is desirable to know whether the equation expressing the relationship between “total heat production ” and stature and weight gives results which check up closely with the actual measurements. Thus, if the predicted “total heat productions” are cor- related with the actual values, a certain correlation will be found, and it is obvious that the more closely the predicted and the actual values agree, the higher will this correlation be. A correlation of this type is called a multiple correlation, since a variable, say Xi, is being correlated with several other variables. The symbol for multiple correlation is R1.234. . . n. The subscript to the left of the decimal point indicates the variable that is being predicted from the variables X2, X3, X4, ..., Xn. The order in which these latter variables are written is immaterial. Thus R1.284... n = R1.824. ... n. Multiple correlation coefficients may be calculated by the formula: 2 0 "1.234 . . . n. R1.284... n := N. sm ºmºmºmºm-m 2 or from O'1 R1.2ss. . . ºn T v1– (1-rºi2)(1 – r*13.2)(1 – r*14.2a). . . (1 - rºln. 23... (n-1)). 116 MEASURES OF RELATIONSHIP IN VARIABILITY | Since the order of the subscripts to the right of the decimal point is immaterial R1.234. . . n = V1-(1-rº)(1-rº)(1–rº.º.) ... (1–rºin.as... (n-p). Thus in the present problem the multiple correlation between “total heat production ” and stature and weight is R... - . 1 – º – N (; * 0.819 * - V o,” " 204.66 / T * OI’ R1.28 -: V1 g- (1 * r°13) (1 - r”12.8) = V1 – (1 – 0.7960°)(1 – 0.3208%) = 0.819. It will be noted that predicting “total heat production ” from weight gives results practically as good as estimating it from both stature and weight, since the respective correlations are 0.796 and 0.819. III. THE COEFFICIENT OF PART CoRRELATION The coefficient of part correlation (Smith and Ezekiel, 1926) measures the correlation between the dependent factor (after removing the net variations found to be associated with the remaining independent factors) and the particular inde- pendent factor to be considered. For example, if farm income 1 be dependent on (and the sum of) independent variables 2 + 3 + 4, then if the terms 3 and 4 be subtracted from 1 in each case, it is possible to correlate this reduced 1 with the 2 part factor. Such an operation is designated as 12:34, the subscript to the left of r indicating the dependent variable and the independent variable whose effect is being measured, while the subscripts to the right indicate the independent variables whose effects are removed. This coefficient differs from the partial correlation coeffi- cient in that all the original variation is left in the independent variable (2) and only the dependent variable (1) is adjusted. Formula: r N b°12. 340.2% (coefficient of part cor 12T34 F * : 4 b°12.3402” + a 1*(1 — Rº1.284) relation) p TETRAD DIFFERENCE 117 where 12"s4 = the coefficient of part correlation. b12.34 = the partial regression coefficient, given by OT 1. 234 T12. 34 º O 2. 134 R1.234 = the multiple correlation coefficient. or 1, 0.2 = standard deviations of distributions. Applying the formula to the problem of page 111 gives (6.39)2(7.59)? (6.39)*(7.59)* +204.662(1–0.819°) Note that at page 111 ruz = 0.615 and r12.3 = 0.321 Further discussion is given by Ezekiel (1930, p. 182). = 0.382 127°34 = IV. TETRAD DIFFERENCE The tetrad difference idea was first applied in the attempt to prove the Spearman g theory (Spearman, 1927). It can be proved (statistically) that if four variables have one, and only one, common factor running through them, then every tetrad difference involving the r's of these four variables will be equal to 0. Tetrad differences are defined as follows: tiz84 = r 12"34 – r13rz4. tiz48 = r 12784 – r14'23. tis 42 = r 18724 – r14'23. tiaz A = r 18724 – r12"34. tº 428 = r1,r2s - T 127'34. tº 432 = ri Arzs — risr24. Illustration: Coefficients of correlation between parts of an intelligence test. Oppo- Com- Memory Discrimi- sites pletion nation Completion 0.80 Memory 0.60 0.48 Discrimination 0.30 0.24 0.18 Cancellation 0.30 0.24. 0.18 0.09 118 MEASURES OF RELATIONSHIP IN VARIABILITY Let o = opposites. Then roc = 0.80. c = completion. rmd = 0.18. m = memory. rom = 0.60. d = discrimination. red = 0.24. Since tizs4 = r 127'34 – r 13?'24; then tocmd = rocrind - rom red. Substituting, tocºnd = 0.80 × 0.18 – 0.60 × 0.24 = 0.144 – 0.144 = 0.0. Any other tetrad equation using the above table of coeffi- cients will be 0. If every tetrad difference equals 0, then the variables may be thought of as having one general factor and no group factors. In practice, one finds the distribution of obtained tetrad differences and computes the or of the distribu- tion. If this a is no greater than the variability to be ex- pected theoretically (Lexian ratio may serve as a measure), then one and only one general factor is common to all varia- bles. (Of course each variable has a specific factor also.) For more complete account see T. L. Kelley (1928, p. 47). See also R. Pintner (1931.) CHAPTER VI HEREDITY Mendelian Analysis The study of heredity in man is especially difficult owing to the small size of fraternities. It consequently requires special methods. The Family Method. Factorial analysis of traits arising from recessive genes meets with the difficulty of small families. The simplest recessives appear in only 1 child out of 4, and in families of even as many as 4 children, derived from 2 heterozy- gous parents, it will often happen that the expected recessive will not occur, even though (since the parents are both heterozygotes) the sibship potentially contains one. Assume a group of heterozygous (DR) parents each produc- ing 4 children. The probability of occurrence or association of the dominant and recessive phenotypes is obtained by the expansion of the binomial (; ++)", as follows: (#)4 + 4(#): (#)3 + 6(#)*. (#)3 + 4(#)3. (#) + (#)* or, in 256 families: 81, 108, 54, 12, 1 children are in the respective terms of the binomial formula, with proportion of recessives (R) and dominants (D) as follows: 0R4D, 1R3D, 2R2D, 3R1D, 4R0D. That is to say, 81 families out of 256 will not show their potential recessives, and we can not say of such families whether or not both the parents are actually heterozygous. Since in the above families recessives occur once in 108 times, twice in 54 times, thrice in 12 times and in four times once, or a total of 256, the observation of these families would 119 120 HEREDITY lead to the false conclusion that the incidence of the recessive trait was 256 out of 4 X 175, or 700, individuals instead of 4 × 256 or 1024 individuals. Thus a false ratio of 36.6 per cent instead of the true ratio of 25 per cent would be obtained. It follows that with human families expectation in such matings is no longer 25 per cent, or in the case of back crossing to a phenotypically recessive parent RR 50 per cent, but a larger proportion, depending on the average number of chil- dren in the fraternities studied. In fraternities of 2 the ratios that will probably be found are 57 per cent and 67 per cent respectively; in fraternities of 5 children 33 per cent and 52 per cent; in fraternities of 10 children 26 per cent and 50 per cent respectively. The general formula for finding the unit recessive ratio that we may expect to find in DR X DR matings is 1 p j, where p is the theoretical percentage, s the — q number of children in the fraternity and q = 100 — p. The charts (Figs. 23, 24) from Koller (1931) will be found useful in determining the proportion of unit recessives occurring in small fraternities, and the curve DR X DR can also be used to determine prediction in respect to very rare recessive sex-linked traits where the affected progeny are all males. But if the trait is not very rare, matings of affected males with female carriers may occur, in which case half of the offspring are recessives and curve DR X RR may be used to determine prediction. In the particular instance of hemophilia no well-authenticated cases of affected females are known. In dealing with a pool of families from heterozygous parents and containing at least one recessive offspring let t be the observed number of recessives, c the maximum number of children in any family. Then if one is actually dealing with a recessive character, which has been observed in a popula- S = C tion XXt times, where t = º S = 1 S = C XD —H- = 0.25 MENDELIAN ANALYSIS 121 1.00 0.90 0.80 ſº g0.70 to ; 0.60 30.50 .9 30.40 3 § 0.30 0.20 0.10 0.00 DR XRR DR XDR Recessive Probability p 1 —q" l 2 3 4. 5 6 7 8 9 10 Number of Children, S Fig. 23.−To show expected proportion of recessive children, in fraterni- ties of a children, that will be found in DR X DR and DR X R.R. matings respectively. Where the families are very large, 50% and 25% recessives will be expected as indicated at the extreme right of the figure, where the DR X RR line quite touches 0.50 and the DR X DR line nearly touches 0.25. But as the number of children becomes smaller (toward the left) the expectation-proportion of recessive children rises (in the families selected because they show recessive children) until at s = 2 the ratios are .67 and .57 respectively. The curves are derived from the equation (in which p is the theoretical Mendel number and q = 1 — p) and enable one to read off the percentage of expected recessives in families of 1 to 10 children 0.30 5 DR X RR tº 0.20 gº) k; DR X DR -> 30 10 § , or Variance -> w(q—wsq.") 0. 00 2 3 4. 5 6 7 8 9 10 Number of Children. S 2 FIG. 24.—Variability in terms of variance (#) of the distribution of & recessive children of DR × DR and DR × RR matings, in families where at least 1 child (of a fraternity of s children) is a recessive. The curves are drawn from the equation in which w is the theoretical prob- ability of recessive children, as altered by the selection of affecte 122 HEREDITY If this equation is satisfied we may conclude as probable that a single pair of allelomorphs with dominance is involved. Standard Error of Above Determination. Compare 2t with 27, in which r represents the expected number of recessives, calculated as follows: ms(1 — q) XT = XC II ºf The difference between Xr and Xt will then have a standard error of S.E. dif. = V2nk, where k = +% (sq"(q – 1) + q(1 – q’)]. (1 — q”)? Lenz (1929, p. 707) offers the generalization that if a rare 1 mono-recessive trait shows itself (phenotypically) in n of the population the frequency of the corresponding gene in l the population is Vn Also in the parents and children of the 70, carriers of a trait the trait will appear phenotypically with a 1 frequency of about −; for all the parents and all the vºn children of a carrier of the trait have received the appropriate gene at least once and another gene of the same kind will tend to coincide in fertilization in accordance with the general 1 frequency, Vº of the gene in the population. 7? The Mass Method. This has been developed by Snyder (1934) for cases of a single factor for a unit character. Type 1. Without Dominance. There is involved a pair of allelomorphs C and c. The heterozygote CC is phenotypically recognizable. Three classes are recognizable in the population: the dominant homozygotes DD and the heterozygotes DR, and the recessive homozygotes RR. Let q = frequency of D allelomorphs, p frequency of R; then p + q = 1. The three phenotypes dependent on the genes CC, Cc and co, respectively, will occur in the population in the proportion q*, 2pg and p”. THE MASS METHOD 123 Hence g = v/proportion of DD individuals. p = V/proportion of RR individuals. If the proportion of a genotype in the general population be indicated by a superimposed line, then DD stands for the proportion of homozygous dominant individuals; RR for the proportion of recessive individuals. Then v'IND + vīf = 1. The standard error of the deviation from unity is the recipro- cal of 2VN = 1, where N is the number of people in the sample examined. If the equation p + q = 1 is satisfied, probably inheritance is of Type 1. Type 2. A Pair of Allelomorphic Genes, One of Which Is Dominant. Call them D and R, D being dominant over R. Then if q = frequency of D and p = frequency of R, again p + q = 1. The dominant individuals will occur in the general population with the frequency g” + 2pq, and the recessive with frequency p”. Then p = v'FF. If L = pro- portion of recessive type expected in the offspring of random matings of dominants with dominants and M = proportion of recessives to be expected in the offspring of random matings = ~#– 1 + p. The observed proportions of recessives in the families studied may thus be compared with the expected proportions and if the difference between observed and calculated results are non-significant the hypothesis of a single pair of allelomorphs with dominance is satisfied (Table XXII). The standard error of L is: FF 1 – Vo N b ; (1 + Vb). N (1 — b) * M - + H+. N (1 — b)' where b is taken as the observed value of the proportion of recessives (dd) in the general population, and N is the total number of individuals studied in the population. Type 3. Case of Character Dependent upon Multiple Alle- lomorphs, A, a', a. Let allelomorphs A and a' be dominant 2 of dominants and recessives, then L = H.) ; M 1 + p 124 HEREDITY to a, but the heterozygote Ala' is phenotypically recognizable. Then four kinds of phenotypes occur in the population: those that show both dominant factors; those that show only one; those that show only the other; those that show neither dominant. Let q = frequency of A allelomorph; p = fre- quency of a', and r frequency of a. Then p + q + r = 1. Then individuals showing both dominants (Aa') will occur in the general population with frequency 2pq; those showing one dominant (AA, Aa) with the frequency g” + 24r; those showing the other dominant (a'a', a'a) with the frequency p” + 2pr; and those showing neither dominant with the frequency r*. The proportion of AA individuals is (AA), of a'a' individuals aſa', and of aa individuals is aa. Then, via Tůž E Zia) + Vaz Taº-Hºa) – Vaa = 1. If this equation is satisfied, the hypothesis of multiple allelomorphs is justified. Type 4. Case of Character Dependent upon Two Pairs of Factors. Assume a pair of coöperating allelomorphs A and a, and a second pair B and b; with complete dominance. Let q = frequency of A gene; p = frequency of a, r = frequency of B, and s = frequency of b. Then p + q = 1; r + s = 1, and s” = aabb -- (AAbb + Aabb); p* = aabb + (aa BB + adBb); (1 — p?) (1 — s”) AABB + AABb + AaBB + AaBb. If this equation is valid the hypothesis of two pairs of inde- pendent factors is probably true. Type 5. Eacpectation of Affected Offspring When the Trail Depends wipon the Occurrence of Two Independent Genes (Double Dominant) and the Trait Is Phenotypically Absent in Both Parents. What is the expectation of affected offspring (double dominant) in matings when neither parent shows the dominant trait; and the dominant trait depends upon the concurrence of two dominant genes (double dominants) A and B which are located on different chromosomes and which have germ cells of frequency in the population indicated by l and m respectively? THE MASS METHOD 125 For a family of s members this is given by the formula: l2 4l(1 — l) 0.5s 4(1 — l)? 0.25s S g & º (l − 2)2 (l − 2)2 1 – 0.5° (l − 2)2 1 – 0.75° This expectation is close to the expectation for recessive off- spring of two normal parents when the dominant genes are fairly rare (1 to 40 individuals, or less). The proportion of affected individuals in a family where only one parent is phenotypically affected owing to the pos- session of the two dominant genes required can be computed from the tables given by Hogben (1933, pp. 84–87). Type 6. Case of Sea-influenced Traits. Assume a pair of allelomorphs H and h, such that H is dominant in males but recessive in females. Let q = frequency of H and p = fre- quency of h. Then p + q = 1. Let Q H = proportion among women of women who show the trait represented by the H gene; cºh = the proportion among men of men who show the character represented by the h gene. Then Vg H + V cºh = 1. If this equation is satisfied the trait is inherited under this category. Standard error of the deviation from unity: 1 — B (1 – B)? 1 – D (1 — D)? S. E. dev. = == } * tº Eº L. dev. = + w N. Tibb(N): " Nº Tigb(N): where B = @H, D = Gºh, NI = number of individuals from which the value of B was derived, and Na = number of indi- viduals from which the value of D was derived. Type 7. Case of Ser-linked Traits. Assume a pair of alle- lomorphs M and m, with M dominant to m, carried on the X-chromosomes. Let q = frequency of M gene, and p = fre- Quency of m gene. Then p + q = 1. Designating the pro- portion among men of men who show the recessive trait as dºm, and the proportion among women of women who show the recessive trait as Qºm, then v/ ºn = cºn. The standard error of this equation is: S.E. FF N — B (1 – B)? + D(1 —D) 4N T 64b(N)2 N2 126 HEREDITY where B = V Q m, D = Gºm, N i = the number of individuals used in computing B, and N2 = number of individuals used in computing D. Determination of Linkage in Man. Assume two pairs of genes C and c, D and d, located on the same pair of chromo- somes. A double heterozygote is then CCDa. Since these two genes are by hypothesis linked they may be written in paren- theses; thus a double heterozygote involving linked factors is (CD) (cd) in the coupling phase, or (Cd) (cD) in the repulsion phase (Snyder, 1934). Assume a series of family histories involving the cross CCDa X Codd. The CCDa parents would be recognized as being heterozygous for both genes by having produced some offspring of each recessive type. If the parents are both CCdd then cc offspring will have been produced. It is indeterminate whether the CCDd parent was in the coupling or repulsion phase, i.e., whether the constitution is (CD) (cd) or (Cd) (cD), since the phenotype is the same in both. If we assume that the two phases are equally common it will not be necessary to differentiate them. With the given constitution of the parents there are certain offspring called “determinate ’’ that indicate whether the dominant genes are coupled or not. These show the recessive trait represented by c. Thus, if coupled, the offspring of genetic constitution codd (without dominant gene) will be found and will be non-crossovers, while offspring of the formula ccDil (showing one dominant) will be crossovers. If the dominant genes are not coupled in either parent the findings in the children are to be interpreted in opposite fashion. How- ever, in any particular family one type of progeny will repre- sent crossovers and the other type will represent non-cross- OVerS. In the long run, non-crossovers will be more abundant than crossovers. Consider a large number of families of which the genetic constitution of parents and children is known (with respect to two pairs of allelomorphs). Bring together similar crosses and list in a column headed U the “determinate ’’ children of the more frequent type (who are probably non- crossovers), and in a column V the less frequent type (who are probably crossovers). The ratio of the frequency of V to the sum of V +- U gives Q, which is approximately the true pro- DETERMINATION OF LINKAGE IN MAN 127 portion of crossing over sought. By a refined method Wiener has prepared a closer approximation to true crossover values based on Q and number of children in a family. These are given in Table XX, derived from Wiener (1932, p. 343). Example. Following Wiener. In the blood cells there are two substances called agglutinogens since they are capable of causing agglutination of the blood corpuscles under certain conditions of the blood. They are labelled A and B. In some blood the cells produce both agglutinogens AB; in other blood neither; such blood is called O, and is not further considered here. In addition some blood contains agglutinogens M and N, either one, both or neither. A mating with the genetic formulas AB + + X O + - means: mother has (symbols preceding the X) both A and B and also M (first +) and N (second +); while the father (symbols following the X) has neither A nor B; has M and lacks N. Assume the genotype of the mother in this case to be (AM) (BN); her linked gametes are (AM) (BN); her crossover gametes (AN) (BM). The O + – father produces OM and On gametes in equal numbers. The linked zygotes are The crossed-over zygotes are produced as below: produced as below: AB+ + XO-i- — mating AB+ + XO-i- — \–y-º A M-3-OM *AN–3.On union of gametes X. B N-— On * B M →–)-O M produced are produced are genotypes phenotypes - genotypes phenotypes A MOM | A + — *A NOn | *A — —- A MOn | A + — zvgote *A NOM | A + + B NOM | B + + ygotes *B MOn B+ — B NOn B — — — *BMO M B + — Thus the linked types are A + — and B — --, while the crossover types are A — -- and B+ –. Progeny of type A + + and B+ + might be either linked or crossovers and must therefore be classified as “indeter- minate.” One makes out a table of analyses like that of the last para- graph for all the matings with which one is dealing (e.g., AB ++ X 0 ++; A (heterozygous) ++ X 0 —-i-; A (heterozygous) ++ X B (heterozygous) + —; B (heterozy- gous) ++ X A (pure, p) ++], and lists the genetic and pheno- typic constitution of all kinds of determinate types of offspring and classifies them as linked or crossovers. The indeter- minate may be neglected. * Crossed-over gametes or zygotes. 128 HEREDITY Thus from linked CrOSSOVer offspring offspring A B + + X O -|--|- A -- – B – -H A — -- B -- — AB + + X Ap ++ A B — —H· A + — AB + – A – –H Then rule a number of columns, each column for a family of a certain number of offspring, e.g., 4. Rule 3 subcolumns, the first with the identification number of the family, the second for the number of children of the commoner type (call it U) and the third for the number of children of the less common type (call it V). If the number of children of the two types is the same enter this number in both columns. U represents the sum of the linked types for each column; V, the crossover types. U + V = the total determined types for each column. V/(U + V) gives the value of Q, the percentage value of V or of crossing over. The theoretically expected value of Q for different numbers of children and for various crossover values (top line) has been computed by Wiener,” and the results of such computation are given in Table XX. If the computed Q for the given number of determinate children is less than the crossover value found in column 0.50 by a difference equal to more than 2 times the standard error as given in Table XXI, the traits in question are probably linked and the indicated crossing over occurs. * With the use of the formula Q = pa -- (pg)2 + 2(pg)3 + 5(pg)4 + 14(pg)" + 42(pg)" + 132(pg)" + 429 (pg)* + . . . where p represents the true linkage intensity and q the crossover frequency so that p + q = 1. Let S equal the number of determinate children. To determine Q from this formula, s/2 terms are taken if s is even; (s – 1)/2 if s is odd. Thus if s = 7 and p = 0.50, Q = (0.25) + (0.25)2 + 2 (0.25)3 = 0.34375. CHAPTER VII SPECIAL TOPICS Growth General growth in organisms is the phenomenon of increase in size either of the body as a whole or of multiplying cells. One must carefully avoid confusing growth of an aggregate (or average) of a sample on the one hand and growth of an individual on the other. The various formulae that have been proposed to describe growth have mostly to do with averages and not individuals and hence have a statistical value rather than a biological interest. a. “Autocatalytic ’’: y = (a -i- be-") -1. This is an S- shaped curve asymptotic to the X-axis at the lower end and to a parallel to the X-axis at the upper end, and lying at a 1 distance from the X-axis of k = −. k is the final size. The Ol. curve has an inflection of the value at # a b and n are con- stants depending on the starting point and slope of the curve. It is also called the logistic curve. This formula is suitable for populations, where t takes value from — oc to + oc. b. Gompertz: y = ke^***" or loglog (k/y) = a – bt. This curve is not symmetric about the inflection. This has been usefully employed in computing mortality of life- insurance-policy holders. c. Algebraic: y = k(t/T)*(2 — tſ'T)3. This curve rises as a cubic from t = 0 to t = T' (where it terminates). It has an inflection at 0.55T, y = 64k/125. For some mammals (though not for man) the curve of the most rapidly growing period from shortly after birth is given by y/k = 1 — e-”-”, where k and n are constants of final size and time involved in growth and t” is the time of inflec- tion of growth. 129 130 SPECIAL TOPICS Law of Relative Growth. As proposed by Huxley (1932, pp. 6, 7) this is: (dy/y)/(dv/a) = k (a constant), or y = ba" in which b is a constant, with the value y/a" when y and c are an initial or final or other arbitrarily fixed size. y is the size of the part (e.g., an appendage), and a the size of the rest of the body (e.g., the trunk of the body). Where there is an inert mass (of size a) which does not grow, the formula: y = (ba" + a) may be found more suitable. In these formulae the value k may be found by plotting the curve of growth on double logarithmic paper, which gives a straight line whose slope is k. This formula is useful for determining the relative rate of two parts or organs of the body. Index Numbers Collectively these constitute a device for indicating quan- titative changes in economic (or other) conditions. The changes are expressed in percentages of a condition present in a fixed base year. Index numbers measure relative change. The numbers may be either simple or composite. Simple index numbers show the changes in a single variable. Having selected a base year, divide the number or value for that year into the number or value for each of other years whose relation to the first is desired, and multiply the quo- tient by 100. Or each year may be taken in succession as a base year. In this case the index number indicates changes from one year to the next following. This latter method of computing index numbers is called the chain system. Aggregative index numbers are either unweighted or weighted. The unweighted indices are obtained by summing the values for the group of 2 or more elements that are to be compared, first for the base year, taken as 100, then for some other year, and by dividing the sum for that year by the sum for the base year. This index is called aggregative. For example: Base year 5 years later Aggregate index Price of wool. . . . . . . 0. 5883 1. 6600 Price of mutton . . . . . 0. 1025 0. 1982 Price of skins. . . . . . . 2. 5833 5. 5625 Total . . . . . . . . . . 3.2741 7.4207 2.2665 SECULAR, SEASONAL AND CYCLICAL CHANGE 131 Weighted indices are those in which prices of and number of related commodities are multiplied by some value-weight; it may be quantity of consumption (or production); it may be per cent of total consumption of each item in an aggregate. For example: Base year Given year Consump- Consump- g in Aggre- tion in tion in gative Price - * : * ſº Product. Price I millions | Product * millions º weighted * of units º of units index base year Wool |0.5883| 448 263, 6 || 1 , 6600 448 743.7 Mutton|0. 1025| 732 75 - 0 |0. 1982| 732 145.1 Skins |2. 5833 6, 7 17.3 |5. 5625 6, 7 37.3 355.9 260.2 The weighted index is found by dividing the summed prod- ucts of price for the given year and consumption for the base year by the summed price X consumption products for the base year. Thus the aggregative weighted index shows that in the given year the products were 260 per cent of the base year. Irving Fisher's (1922, p. 482) ideal formula for weighted aggregative indea; number is: NGº)xGº) 2pogo 2poqi where pi stands for price at given year, po price at base year; Qi weight at given year, go weight at base year. Measuring Secular, Seasonal and Cyclical Change 1. Rapid but Crude Methods For a fairly large number of recurring periods (months) find the total for each period (e.g., the total for all the Januaries; the total for all the Februaries, etc.). Find the average of the totals for all the months, and divide each monthly total by 132 SPECIAL TOPICS this average. This gives the relative numbers for each month. To eliminate the seasonal factor divide each original item by this relative number. This crude method is inadequate if there is a marked trend, say downward, from January to December. This difficulty can be relieved by finding the total change from the January of the year under consideration to the succeeding January and subtracting from the item of the February total of the year one-twelfth of the total annual change; from the March total two-twelfths; from April three-twelfths, and so on. This adjustment for trends has to be made before the relative numbers are computed. 2. Smoothing the Time Series When Trend Can be Ea:pressed by a Straight Line In economics, and to a less extent in biology (animal or plant censuses and the like), the frequencies of production rates, etc., in successive periods of time often show great fluctuations (Table 11.) The problem is to replace the very irregular curve of time changes with a smooth trend line and to express the trend quantitatively. The best measure of slope is the line of regression of which XCX Xz2 One arranges (as in Table 13) the X series (e.g., years) in the first column; the quantities (Y) found, in the second; and the deviations (a) of each of these elements of the time series (e.g., years) from the mean time in a third column; one then computes aº, the squares of these deviations, for a fourth column. Next a Y is calculated, and Xa: Y is obtained, having due regard to sign. This sum is now divided by the sum of the ac” column. Using this increment and starting from the as- sumed zero the smoothed production (trend) values are com- puted (Table 14). The quotient gives the increment in the unit of time (e.g., years). The line based on computed values (trend) may be passed through the mean point on the same graph paper with the irregular curve of time changes (Fig. 25). The long-time trend may be eliminated by taking the regression line as a base and plotting the actual figures as plus or minus deviations therefrom. When this elimination is effected seasonal changes become still more striking (Fig. 26). the formula is SECULAR, SEASONAL AND CYCLICAL CHANGE. 133 2000 1800 1600 –A 1400 | | ſ | \ | t | | | it. | H. T ... W | | ||| |||| l \|| || W. W N W | TH | 600 1924 1925 1926 1927 Fig. 25. Production of creamery butter in factories of the United States, 1924–32. (From Yearbook of Department of Agriculture, 1934) 1928 1929 1930 1931 1932 | | | || || || - I ſ \ } |M|| || "|| : 100 | 95 W | || w 1924, 1925 1926 1927 1928 1929 1930 1931 1932 FIG. 26.-Creamery butter production, United States, 1924–32. trend and seasonal variation eliminated Secular 134 SPECIAL TOPICS To take into account seasonal variation, find for each unit period (e.g., month) the frequency distribution of the ratio of actual to trend values (e.g., of production, Table 14), and then their arithmetic mean * for each period (Table 12). These means of quantities should then be adjusted so as to give an annual average of 100. These seasonal indices are then each subtracted from 100 to give the monthly deviations from the annual mean. An illustration based on production of creamery butter is given in Table 12. The bottom line shows the seasonal varia- tion. Seasonal variation may be eliminated from any series by subtracting from or adding to such ratios the Seasonal indices for each successive month. This method is illustrated for two years of production of creamery butter in Table 14. Example. Using the data of Table 11 (p. 135), a table is made (Table 13). Column 1, the years (a series); column 2 the annual pro- duction of creamery butter in units of 100,000 pounds. Column 3, the deviations of the years from the middle year; column 4, the squares of the deviations; column 5, the product of the items in columns 2 and 3 (2 . Y.) X2: Y 26573 e = − = 442.88, the computed increment per year. Xa;2 60 The computed middle year (1928) value is 15229.67 and Y’ = 15229.67 + 442.882, a being years. For July, 1928, the computed monthly production is 15230 + 12 = 1269.17, and the successive average changes in monthly production from a given month to the same month a year later is 1269.17 –H 36.9lac, where a is the number of years. The monthly increase of monthly production is one-twelfth of the annual increase of monthly production, or 3.08. Thus on July 15, 1928, the com- puted monthly production is 1270.71, and on January 15, 1928, 1252.25. * If these ratios are erratically spaced the median should be used instead of the mean as a measure of central tendency. SECULAR, SEASONAL AND CYCLICAL CHANGE. 135 I†69]{ 80ZI 860 I 8 IZI † ŁŻI 96 #1 #89 I 906 I998 ſ2. Iſ I[88I6ţZI&#ZIZ86 I 92,991 || I 83, I0), II992 I60ZI†0ýI9I 9I I888 I†gțI89ZI960ſ#8III86 I Ø969 I | Z III020 IZOZI9ZZI† 28I92,9 I868 I##8I888I2,9 II£ZOI†80 I086 I 0269 I || 6 IO IZA,6I8II98ZIZZQ I898 I6Z6I87/IZ88 I - | †† II000 I980I6Z6I 028$ I || 9,6Z Z8690 I96 II†g#I919 I0[8I899 ſ88II8III#660IO I8261 996ýŤ | 288[98†ZOIG8II89ýI90/, I888 I889 I†9ZI | | g III9960861,36 I 8IgțI || 606988I80IZ9ȚI888I969 I88/, I699 I0 IZI†ZIIZț66/69Z6T g|I981 || II6998gț0I880 I1981689 I8 #91ggiº I010 I£Z6Ø08I/8936 I I998 I | 098844.900 IIg II82,9 IÞý9I029I00țI0901896298928†Z6I ZZț% I || 9226; V,8688ZOI80ZI88€I789 I††8I900I888[#/.2,88836I odöörodööroożoiodöörodööromööbioożoioołįįhíooſhioożoiooffoiodööoiodöör [840J,'09GI* AON'qoO'qda S·ānyKŲn pa un pKeIA).* Idy* IE IN° C, 0){• UſepIB3 K †86 I ‘(‘V’S'n) ain?Inoſlāv go ſooq.reax go 862 oſqe I, ao pageg ’6861-836I ‘SGILVLS GIGILIND ‘SCII HOLOVI NI NOILO nGIO'Ha : HEILLnq xſºTWygryfo—ºrt frigyti TABLE 12.--PRODUCTION OF CREAMERY BIJTTER: DISTRIBUTION OF ACTUAL TO TREND VALUES DURING 9 YEARS Actual -i- computed production of creamery butter in percents Jan. Feb. Mar. Apr. May June July Aug. Sept. Oct. Nov. Dec 150–159.9 1 140–149.9 7 2 130–139.9 6 1 4 120–129.9 3 1 I 110–119.9 2 5 100-109.9 3 1 9 J– 99.9 4 4 6 1 & 0– 89.9 7 2 2 8 1 3 70- 79.9 2 7 5 6 60– 69.9 3 Arithmetic mean 82.8 77.2 89.4 || 100.6 131. 7 || 145.0 | 131. 7 || 112.8 93.9 86. 1 72.8 78.3 Seasonal indices (with - - - - . - base of 100) 82.6 77. 1 89.2 100.4 || 131.5 || 144.7 131. 5 || 1 12.6 93.7 85.9 72.7 78.2 100 — seasonal indices 17.4 22.9 10.8 || – 0.4 – 31.5 | – 44. 7 – 31.5 — 12.6 6.3 14. I 27.3 21.8 g SECULAR, SEASONAL AND CYCLICAL CHANGE 137 TABLE 13.—SHOWING HOW TREND LINE OF ANNUAL PRO- DUCTION OF CREAMERY BUTTER IN THE UNITED STATES IS COMPUTED Annual Year production 3C 22 a: Y Computed (Y) values 1932 16941 4 16 67764 17001 1931 16675 3 9 50025 16558 1930 15952 2 4 31904 16115 1929 15970 1. I 15970 15673 1928 14870 O 0 0 15230 1927 14965 — 1 1 – 14965 14787 1926 14518 — 2 4 — 29036 14344 1925 13615 — 3 9 — 40845 13901 1924 13561 — 4 16 — 54244 13458 137067 60 26573 Xaº Y Annual slope = 22 = 442.88; average annual increase in total pro- JC duction. y X Y. + = 15229.67; middle year (1928) value. Y’ = 15229.67 + 442.88a, (a, being the number of years). &={ O = 1269.17. 1 For July, 1928, the computed (monthly) production is On a monthly basis the average changes of production from a given month to the same month a year later is Y’ = 1269.17 -- 36.91a: where a is the number of years, and Y’ is the computed value of Y. Monthly increase in production is 3.08. 3. Method of Moving Median (King) (1) Tabulate and plot the original series of data that is being studied at the monthly or other available interval. (2) Then draw a free-hand curve representing as approxi- mately as possible the course of the cycle. Next, read from the free-hand curve the figures representing the tentative esti- mate of the cycle-amounts. Divide the actual data by the Corresponding tentative estimate for each month (or other time unit). (3) These quotients are then tabulated as a table of ratios of actual (1) to tentative (2) cyclic data. (4) These are Smoothed for each month by a moving median, which is then 138 SPECIAL TOPICS TABLE 14, TABLE OF PRODUCTION OF CREAMERY BIJTTER IN THE UNITED STATES SMOOTHED AND ADJUSTED FOR LONG TREN ID AND SEASONAL VARIATION Based on Table 398, Yearbook of Agriculture (U.S.A.), 1934 Year |Actual pro- Adjustment ... *. Computed | Actual + | to be made Cyclical and duction in g tº 4 tº month | 100000 lb. production] computed | for seasonal variation variation 1924 1 875 1104 79.3 17.4 96.7 2 867 1107 78. 3 22.9 101.2 3 958 1111 86. 2 10, 8 97.0 4 1060 1114 95.2 — .4 94.8 5 I400 1117 125.3 — 31.5 93.8 6 1620 | 120 144.6 — 44.7 99.9 7 1644 1123 146.4 — 31.5 114.9 8 1378 1126 122.4 — 12.6 109.8 9 1151 1129 101.9 6.3 108.2 10 1005 1132 88.8 14. 1 102.9 11 773 1135 68. 1 27.3 95.4 12 830 1138 72.9 21, 8 94.7 1925 1 871 1141 76.3 17.4 93.7 2 802 1144 70. 1 22.9 93. 0 3 923 1147 80. 5 10, 8 91.3 4 1070 1151 93.0 — . 4 92.6 5 1455 1154 126, 1 — 31 .. 5 94.6 6 1643 1157 142.0 – 44.7 97.3 7 1589 1160 137. () — 31 .. 5 105.5 8 1367 1163 117, 5 — 12.6 104.9 9 1083 1166 92.9 6, 3 99.2 10 1045 1169 89.4 14. 1 103.5 11 855 1172 73. 0 27, 3 100.3 12 911 1175 77.5 21.8 99.3 assigned to the middle year spanned by this moving median as the smoothed value for the year and month. The median is usually to be preferred to the mean as being less influenced by large chance fluctuations. (5) Next adjust the percentages for each month of the year so that the sum of the adjusted monthly averages equals exactly 12, or in other cases equals exactly the number of repetitions of measurements for the year. Finally divide each item of the original raw tabulations (1) by the corresponding items in (5). The quotients give the cyclical movements after elimination of the normal seasonal movementS. MEASURE OF DISSYMMETRY IN ORGANISMs 139 The Median-Link relative method is a more refined method worked out by W. M. Persons (Rev. Economic Statistics, January, 1919). A good description of this method is given in Secrist, 1925, pp. 450–465. Measure of Dissymmetry in Organisms A dissymmetry index, E, measuring the average degree of asymmetry in the right and left organs of bilateral organisms, was proposed by Duncker (1903). First a series of integral differences –3, -2, -1, 0, 1, 2, 3, 4, etc., between the right- and left-side measurements of the organ in question is made, and the frequencies of each integral difference (reckoning to the nearest integer) is found. The average of the difference series is the difference of the averages of the right- and left-side measurements, and the standard deviation of the difference is given by ord = va,” + ori” – 2ro rail, in which the subscripts refer to the bilateral series of which the asymmetry is to be found, and r is the coefficient of correlation between the two sides. Let d’ represent any positive differences in the series, and d” any negative differences; and let fi', fo', etc., represent the frequencies of the negative-difference classes, and fi’’, fa", etc., the frequencies of the positive-difference classes. Then the asymmetry index 2(f) × 2(d) – 2(f") × 2(d") m[X(d") + 2(d”)] Example. Absolute difference between dextral (d) and sinistral (s) lateral edges (L) of carapace of right-handed fiddler-crabs—Gelasimus pugilator (Yerkes, 1901; Duncker, 1903): d = La — Ls: –1 0. 1 2 3 f: 1 63 310 23 3 2(d’) = 310 × 1 + 23 × 2 + 3 × 3 = 365, x(f’) = 336. 2(d”) = 1, X(f’’) = 1, n = 400. a – 336 × 865 – 1 × 1 – 122639 400 X 366 146400 = 0.83770. 140 REFERENCES REFERENCES LIST OF PERIODICALS Annals of Eugenics, ed. by K. Pearson, later by R. A. Fisher, Univ. College, London. Annals of Mathematical Statistics, Ann Arbor, Mich. Biometrica. A journal for the statistical study of biological problems. Cambridge (England) Univ. Press. Contributi del laboratoria di statistica. Universita cattolică del sacro cuare. Milano. Journal of the American Statistical Association (Quarterly). Washington, D. C. Journal of the Royal Statistical Society. London. Metron, ed. by C. Gini, Roma. R. università, Inst. di Statistica. PUBLISHED TABLES BARLow, P. 1930. Tables of squares, cubes, square roots, cube roots, reciprocals. London: E. and F. N. Spon. 3rd edition (1st edition, 1814). CASTLE, W. E. 1924. Outline for a laboratory course in genetics. Cambridge: Harvard Univ. Press (R. A. Emerson's Table of probable errors, deviations due to chance alone from the 1 : 1 (3 : 1, 15 : 1, 9 : 7, 13 : 3) ratio in numbers for different values of n, 12 pages). DAVIs, H. T., and W. F. C. NELSON. 1935. (See Litera- ture.) Common logs; y = e”, y = e^*; squares, Square 1 roots, reciprocals; y = v/27 e-”; goodness of fit, T fitting straight lines and parabolas to data. DUNLAP, J. W., and A. K. KURTZ. 1932. Handbook of statistical nomographs, tables and formulas. Yonkers: World Book Co. 163 pp. FRY, T. C. 1928. (See Literature.) Tables of factorials of integers; 1 – 200! Logarithms of factorials, 1 — 1200; binominal coefficients C,” (m. 1 — 100, n 1 – 50); normal error function; normal law its integral and derivatives to the sixth; x*; the Poisson formula, *P'(j), * II’v. LITERATURE 141 GLovER, J. W. 1930. Tables of applied mathematics in finance, insurance, statistics. (Compound interest func- tions and logarithms of compound interest functions, life insurance and disability insurance functions, probability and statistical functions and seven-place logarithms of numbers from 1 to 100,000.) Ann Arbor: George Wahr. HodgMAN, C. D. (chief ed.). 1933. Handbook of chem- istry and physics: A ready reference book of chemical and physical data. 1st edition. Cleveland (O.): Chemical Rubber Publ. Co. 1818 pp. HoDZINGER, K. J. 1925. Statistical tables for students in education and psychology. Chicago Univ. Press. Squares, square roots, logarithms, 1 – r", v/1 – r?, 1 – r, ac, probable error, product-moment correlation coefficient, areas and ordinates of Gaussian curve; Vpu when p + q = 1; Spearman's rank formula; 0.67450 (z-v)/ory; cos−"a for values of ac; values of [(1 — r?13) (1 — r2,3)-93). PEARSON, K. (ed.). 1914. Tables for statisticians and bio- metricians. Cambridge (England) Univ. Press. Ixxxiii + 143 pp. — 1931. Tables for statisticians and biometricians Part II. ccl + 262 pp. WARWICK, B. L. 1932. Probability tables for mendelian ratios with small numbers. Texas Agricultural Experi- ment Station Bull. 463. December, 1932. 28 pp. LITERATURE BERNSTEIN, F. 1934. Growth and decay. In Cold Spring Harbor Symposia on Quantitative Biology, II, 209–217. BLAKEMAN, J. 1905. On tests for linearity of regression in frequency distributions. Biometrika, IV: 332–350. Bowl.EY, A. L. 1920. Elements of statistics. London: King. 459 pp. - Bowman, H. A. 1930. The color-top method of estimating skin pigmentation. Am. J. Physic. Anthropol., XIV: 59–72. BREWSTER, E. T. 1897. A measure of variability, etc. Proc. Am. Acad. Arts & Sci., XXXII: 268–280. 142 REFERENCES BROWN, W. 1911. The essentials of mental measurement. Cambridge, England. and G. H. THOMSON. 1921. The essentials of mental measurement. Cambridge, England: Univ. Press. 216 pp. (2nd edition of Brown, 1911). BROWN, WILLIAM. 1913. The effects of “observational errors ” and other factors upon correlation coefficients in psychology. Brit. J. Psychol., WI: 223–235. CASTLE, W. E. 1922. Genetic studies of rabbits and rats. Carnegie Inst. of Wash. Publ. No. 320. 55 pp. CHADDOCK, R. E. 1925. Principles and methods of statis- tics. Boston: Houghton Mifflin. 471 pp. CHARLIER, C. W. L. 1920. Worlesungen über die Grundzüge der mathematischen Statistik. Lund. 125 pp. CHAUVENET, W. 1888. A treatise on the method of least Squares, etc. Philadelphia. pp. 469–599. DAVENPORT, C. B., and J. W. BLANKINSHIP, 1898. A precise criterion of species. Science, VII: 685–694. — 1915. The feebly inhibited; nomadism or the wan- dering impulse with special reference to inheritance of temperament. Carnegie Inst. of Wash. Publ. No. 236. 1923. Body build and its inheritance. Carnegie Inst. of Wash. Publ. No. 329. 176 pp. — 1927. Guide to physical anthropometry and anthro- poscopy. Cold Spring Harbor: Eugenics Research Association. 55 pp. DAVIES, G. 1930. First moment correlation. J. Amer. Statist. Assoc., XXV: 413–427. DAVIS, H. T., and W. F. C. NELSON, 1935. Elements of sta- tistics with applications to economic data. Bloomington (Ind.): Principia Press. 424 pp. DUNCKER, G. 1898. Die Methode der Variationsstatistik. Roux's Archiv. f. Entw.-Mech. d. Organis., VIII: 112–183. — 1898. Bemerkung zu den Aufsatz von H. C. Bumpus. Biol. Centralbl., XVIII: 569–573. — 1900. On the variation of the rostrum in Palaemometes vulgaris Herbst. Am. Nat., XXXIV: 62. — 1903. Ueber Asymmetrie bei “Gelasimus pugilator ’’ Latr. Biometrika, II, 307–320. DUNN, H. L. 1929. Applications of statistical methods in physiology. Physiolog. Reviews, IX (2) 275-398. LITERATURE 143 ELDERTON, W. P. 1906. Frequency curves and correlation. London: C. & E. Layton. 172 pp. - EZEKIEL, M. 1926. The determination of curvilinear regres- sion “surfaces '' in the presence of other variables. J. Am. Statist. Assoc., XXI: 310–320. — 1930. Methods of correlation analysis. New York: John Wiley & Sons. 427 pp. FECHNER, G. T. 1897. Kollektivnasslehre. Leipzig Engel- mann. 483 pp. FISHER, ARNE. 1926. The mathematical theory of proba- bilities and its application to frequency curves and statistical methods. New York: Macmillan. Vol. I. 289 pp. FISHER, IRVING. 1922. The making of index numbers: A study of their variety, tests and reliability. Boston: Houghton Mifflin. 526 pp. FISHER, R. A. 1925. Statistical methods for research work- ers. 1st edition. London: Oliver and Boyd. — 1932. Statistical methods for research workers. 4th edition. Edinburgh: Oliver and Boyd. 307 pp. FRY, T. C. 1928. Probability and its engineering uses. New York: Van Nostrand. 476 pp. GALTON, F. 1902. The most suitable proportion between the values of first and second prizes. Biometrika, I: 385–390. GARRETT, H. E. 1926. Statistics in psychology and educa- tion. New York: Longmans, Green. GLovER, J. W. 1930. Tables of applied mathematics in finance, insurance, statistics. Ann Arbor: G. Wahr. 678 pp. HARRIs, J. A. 1909. The correlation between a variable and the deviation of a dependent variable from its prob- able value. Biometrika, WI: 438–443. — 1909. A short method of calculating the coefficient of correlation in the case of integral varieties. Biometrika, VII: 214–218. — and F. G. BENEDICT. 1919. A biometric study of basal metabolism in man. Carnegie Inst. of Wash. Publ. No. 279. 266 pp. HogBEN, L. 1933. Nature and nurture. New York: Nor- ton & Co. 144 pp. 144 REFERENCES HolzingER, K. J. 1928. Statistical methods for students in education. Boston: Ginn and Co. 372 pp. HUxLEY, J. S. 1932. Problems of relative growth. London: Methuen & Co. 276 pp. RELLEY, T. L. 1919. Principles underlying the classifica- tion of men. J. Appl. Psychol., III. — 1923. Statistical method. New York: Macmillan. 390 pp. — 1928. Crossroads in the mind of man. Stanford Univ. Press. 238 pp. KollBR, S. 1931. Gegenwärtiger Stand der erbstatistischer Methodik beinn Menschen. Arch. f. Sociale Hygiene u. Demographie, WI: 194–199. LENz, F. 1929. Methoden der menschlichen Erblichkeits- forschung; in Handb. d. hygienische Untersuchungs- methoden. Jena: Fischer, 689–739. LEXIS, W. 1877. Zur Theorie der Massenerscheinungen in den menschlichen Gesellschaft. Freiburg. MARTIN, R. 1928. Lehrbuch der Anthropologie. Jena: Fischer. 1816 pp. MILLs, F. C. 1924. Statistical methods applied to economics and business. New York: Henry Holt. 604 pp. NEYMAN, J. 1934. On the two different aspects of the repre- sentative method. The method of stratified Sampling and the method of purposive selection. J. Roy. Statist. Soc., XCVII: 558–625. OTIS, A. S. 1926. Statistical method in educational measure- ment. Yonkers: World Book Co. PEARL, RAYMOND. 1930. Introduction to medical biometry and statistics. Philadelphia: Saunders. 459 pp. — 1927. The graphic representation of relative variabil- ity. Science, LXV: 237–241. PEARSON, K. 1894. Mathematical contributions to the theory of evolution. Nos. I–IV; Nos. V-XII, 1898; No. XVI, 1907; Nos. XIII, XIV, 1904–1907; No. XVIII, 1912. Phil. Trans. Royal Soc. London. — 1906–22. Draper's Company Research Memoirs, Bio- metric Series. — 1896. Mathematical contributions to the theory of evolution. III. Regression, heredity and panmixia. Phil. Trans. Roy. Soc. London, CLXXXVII: 253–318. LITERATURE 145 PEARSON, K. 1897. Mathematical contributions to the theory of evolution. On a form of spurious correlation which may arise when indices are used in the measurement of organs. Proc. Roy. Soc. London, LX: 489–498. — 1901. Mathematical contributions to the theory of evolution. X. Supplement to a memoir on skew variation. Phil. Trans. Roy. Soc. (A), CXCVII, 443– 459. — 1902. Variation of the eggs of the sparrow (Passer domesticus). Biometrika, I: 256–257. — 1902°. Note on Francis Galton's problem. Biometrika, I: 390–399. — 1902°. On the systematic fitting of curves to observa- tions and measurements. Biometrika, II, 1–23. — 1902°. Mathematical contributions to the theory of evolution. XI. On the influence of natural selection on the variability and correlation of organs. Phil. Trans. Roy Soc., London, CC: 1–66. — 1903. On the probable errors of frequency constants. Biometrika, II: 273–281. — 1908. On the influence of double selection on the variation and correlation of two characters. Biometrika, VI: 111–112. — 1909. On a new method of determining correlation between a measured character A and a character B of which only the percentage of cases wherein B exceeds (or falls short of) a given intensity is recorded for each grade of A. Biometrika, VII: 96–105. — 1910. On a new method of determining correlation, when one variable is given by alternative and the other by multiple categories. Biometrika, VII: 248—257. — 1913. On the measurement of the influence of “broad categories '' on correlation. Biometrika, IX: 116–139. — 1914. Tables for statisticians and biometricians. London: Cambridge Univ. Press. — 1917. On the probable error of biserial m. Biometrika, XI: 292–302. - — 1931. Tables for statisticians and biometricians. Part II. London: Cambridge Univ. Press. - PINTNER, R. 1931. Intelligence testing: Methods and results. New York: Henry Holt. 146 REFERENCES RICHARDSON, C. H. 1935. An introduction to statistical analysis. Enlarged edition. New York: Harcourt, Brace. 285 pp. RIETz, H. L. (ed.). Handbook of mathematical statistics. Boston: Houghton Mifflin. 221 pp. REITz, H. L., and H. R. CRATHORNE. 1924. “Simple cor- relation ” in Handbook of mathematical statistics. Boston: Houghton Mifflin. pp. 120–138. RUGG, H. O. 1917. Statistical methods applied to educa- tion. Boston: Houghton Mifflin. ScEIIEFFELIN, BARBARA, and GLADYs C. SchwesiNGER. 1930. Mental tests and heredity, including a survey of non- verbal tests. New York, Cold Spring Harbor: Eugenics Research Association. 298 pp. SECRIST, HoRACE. 1917. An introduction to statistical methods. New York: Macmillan. 584 pp. Revised and enlarged edition, 1925. SHEPPARD, W. E. 1898. On the calculation of the most probable values of frequency constants for data arranged according to the equidistant divisions of a scale. Proc. London Math. Soc., XXIX: 353–380. SMITH, B. B. and M. EZEKIEL. 1926. Correlation theory and method applied to agricultural research. Dept. Agricul- ture, Bureau Agr. Econ. (mimeographed report). SNEDEcoR, G. W. 1934. Calculation and interpretation of analysis of variance and covariance. Ames, Iowa: Col- legiate Press. 96 pp. SNYDER, L. H. 1934. Modern analysis of human behavior. Eugenical News, XIX: 61–69. — 1935. The principles of heredity. New York: Heath. 385 pp. SoPER, H. E. 1914. On the probable error of the bi-serial expression for the correlation coefficient. Biometrika, X: 384–390. SPEARMAN, C. 1904. The proof and measurement of asso- ciation between two things. Am. J. Psychol., XV. — 1907. Demonstration of formulae for true measure- ment of correlation. Am. J. Psychol., XVIII. — 1910. Correlation calculated from faulty data. Brit. J. Psychol., III: 271–295. LITERATURE 147 SPEARMAN, C. 1913. Correlation of sums and differences. Brit. J. Psychol., W: 417–426. — 1927. Abilities of man. New York: Macmillan. 448 pp. STEGGERDA, MoRRIs, 1932. Anthropometry of adult Maya Indians. Carnegie Inst. of Wash. Publ. No. 435. “Student,” 1907. On the error of counting with a haemacy- tometer. Biometrika, W: 351–360. “Student,” 1908. The probable error of a mean. Biometrika, WI: 1–25. “Student,” 1925. New tables for testing the significance of observations. Metron, W: 105–120. SYMONDs, P. M. 1926. Variations of the product-moment (Pearson) coefficient of correlation. J. Ed. Psychol., XV. THORNDIKE, E. L. 1922. An introduction to the theory of mental and social measurements. 2nd edition. New York: Columbia Univ. 277 pp. TIPPET, L. H. C. 1931. The methods of statistics: An introduction mainly for workers in the biological sciences. London: Williams and Norgate. 222 pp. TRYon, R. C. 1929. The interpretation of the correlation coefficient. Psychol. Rev., XXXVI. U. S. DEPT. AGRICULTURE. 1934. Year Book, p. 637. WALKER, HELEN. M. 1929. Studies in the history of sta- tistical methods. Baltimore: Williams and Wilkins. 229 pp. WALLACE, H. A., and G. W. SNEDECOR. 1931. Correlation and machine calculation. Ames, Iowa: State College. 71 pp. WATKINs, R. J. 1930. The use of coefficients of net deter- mination in testing the economic validity of correlation results. J. Am. Statist. Assoc., XXV: 191. WEST, C. J. 1918. Introduction to mathematical statistics. Columbus: Adams and Co. WHIPPLE, G. M. 1914–15. Manual of mental and physical tests. Parts I and II. Baltimore: Warwick and York. 702 pp. WHITTAKER, E. J., and G. Robinson. 1926. The calculus of observations: A treatise on numerical mathematics. London: Blackie & Son. 395 pp. 148 EXPLANATION OF TABLES WIENER, A. S. 1932. Method of measuring linkage in human genetics; with special reference to blood groups. Genetics, XVII: 335–350. WILSON, E. B. 1934. Mathematics of growth, in Cold Spring Harbor Symposia on Quantitative Biology, II, 199—201. WRIGHT, S. 1921. Correlation and causation. Jour. Agric. Research, XX: 557. YERKES, R. M. 1901. A study of variation in the fiddler- crab, Gelasimus pugilator Latr. Proc. Am. Acad. Arts & Sci., XXXVI, 417–442. YULE, G. U. 1900. On the association of attributes in sta- tistics. Phil. Trans. Roy. Soc. (A), CXCIV. — 1932. An introduction to the theory of statistics. 10th edition. London: Griffin and Co. 434 pp. EXPLANATION OF TABLES I. Formulas. In this table the principal formulas used in the calculation of curves are brought together for convenient reference. The meanings of the letters are explained in the text. This table is preceded by an index to the principal letters used in the formulas of this book. II. Certain constants and their logarithms. This table includes the constants most frequently employed in the calculations of this book. III. Table of ordinates of normal curve. This table is for comparison of a normal frequency polygon consisting of weighted ordinates with the theoretical curve. The mode is taken at 10,000. Eacample: M = 17.673; or = 1.117; y0 = 181.4. (See p. 49.) Entries in Table V — M. corresponding to V W – M Or V — M. 1/0 3/ Of 14 — 3.673 3.29 .00449 X 181.4 = 0.8 1 15 — 2.673 2.39 .05750 X 181.4 = 10.4 8 16 — 1. 673 1.50 , 32465 X 181.4 = 58.9 63 IIIa. Table of ordinates of normal curve, based on 1 or V 27F _1 (£Y” - 62 #(#) obtained from Table III by multiplying g = TABLES I–IV 149 1. each entry in that table by ~7- = 0.398942 to get the y * V. g respective entries in the Table. The mode is at * = 0.39894. Oſ IV. Table of values of probability integral. This table is for comparison of a normal frequency polygon consisting of rectangles with the theoretical curve. Eacample: A M = 17.673; or = 1.1170. (See p. 49.) 2. Per Class Deviation 21 (; – #a) × 100 Class Of cent Limits from Or l X CO A = 3C1 €SS 2:1 + or 14 — 3.29 0.2 0.225 14.5 — 3. 173 — 2.841 15 — 2.39 1.6 2. 364 15.5 — 2.173 — 1. 945 16 — 1.50 12.4 12.097 - 16.5 — 1. 173 — 1.050 17 — 0. 60 30.3 29. 155 17. 5 — 0.173 – 0. 155 18 0.29 32.3 33. 194 18.5 0.827 0.740 19 I. 19 18.9 17.873 19.5 1.827 1. 636 20 2.08 3.9 4. 524 20.5 2.827 2. 531 21 2.98 0.4 0.568 100.0 100.000 In the example, the data of which are given on page 49, the frequency between the limits is given in per cent column. The QC & e & e e g – of each limit (as inner class limit) is found and the entries Oſ in Table IV corresponding to the limits are taken. Each such entry is subtracted from 0.50000, is multiplied by 100, and from the product is subtracted the total theoretical percentage of variates lying between the outer limit of the class and the corresponding extremity of the half curve. This gives the theoretical frequency of the class in question. The closeness of agreement of the last column with the “Per cent ’’ column indicates the closeness of the observed frequency to the theoretical. 150 EXPLAINATION OF TABLES W. Table of reduction from the common to the metric system. a. Inches to millimeters. This is first given for whole inches from 1 to 99, excepting even tens which may be got from the first line of figures by shifting the decimal point one place to the right. The table may be used for hundredths of an inch by shifting the decimal point two places to left. The reduction of twelfths and sixteenths of inches to millimeters is given immediately below. b. The table for converting avoirdupois ounces into grams is based on the conversion factor given. c. Table for converting pounds into grams, using the con- version factor given. VI. Table of minutes and seconds of arc in decimals of a degree. This table will be found of use in the fitting of curves of Type IV (p. 61). VII. First to sixth powers of integers from 1 to 30. This table is useful in calculating moments. VIII. Factorials of integers. The product of all integers from 1 to and including the given integer, i.e., n! or ſº. IX. Factors for use in obtaining the probable errors of the means and standard deviations, from N = 5 to N = 250. The entries of this table are computed from the fraction given at the top of the column, for the different values of N, given at the left of the entry. Since the probable error of the mean is 0.6745 v/N 0.674. v/2N tiplying the entry by a. Standard errors are 1.483 times prob- able errors; roughly greater. o, and the probable error of the standard deviation is 5 o, the probable error in each case is obtained by mul- 0.67448975 0.674.48975 Formula: PEmean == -VW- or, PEor = TV5MT Oſ Examples. Let N = 110, mean = 1542.50, a = 60.45. PE mean = (0.064310) (60.45) = 3.89. Then mean = 1542.50 -- 3.89. PEa' = (0.045474) (60.45) = 2.75. Then a = 60.45 == 2.75. TABLES V-XI 151 The corresponding standard errors are approximately one- half greater than the probable errors or can be obtained to a greater degree of precision by dividing the probable errors by 0.6745. Especially with smaller numbers of observations (N) use the entry for one less than N. X. Odds against the occurrence of a deviation in terms of the standard deviation (of any individual in a distribution or a sample mean) as great or greater in a large collection of samples taken from a homogeneous population. This was computed from the formula: 1 – 2 (1 – #(1 + ol 2[1 – #(1 + 2)] in which of is double the area from the mode to a; as given in Table IV. Thus in the example cited in explanation of Table 3C XI, when – = 1.349, the corresponding odds against are nearly Of midway between 4.17 and 5.19, or 4.68 — as compared with 4.64 in the explanation of Table XI. XI. Giving for deviations in multiples of the probable error from 1 to 5.0, in column 1; the probability of the occurrence of a deviation as great or greater in 100 trials, in column 2; also the odds against the occurrence of a deviation as great or greater in 100 trials X : 1 in column 3. Column 2 of this table is derivable from Table IV, in which the argument values QX PE this table. The corresponding entry is the integral for the half area of the probability curve. Double the entry, subtract from 1.00 and multiply by 100. The result is the entry of 3. of - are, of course, 0.67445 times those of , as given in Or Table X. Thus, when # is 2.0, * = 1.349. The nearest Or entry in Table IV is 0.41133. 0.41133 × 2 = 0.82266 and 100(1 – 0.82266) = 17.73, the corresponding entry in Table X. Column 3 gives the fraction that the probable non-occur- rence (q, odds against) is of the probable occurrence (p), or 100 – *99 - p g Thus, if 17.73 is the probable occurrence, p p 100 - 17.73 82.27 17.73 T 17.73 tº 4.64. Thus the odds against are 152 EXPLANATION OF TABLES 4.64 to 1. From R. Pearl’s “Medical Biometry and Statistics,” W. B. Saunders Co. XII. Values of X*. This table is copied from R. A. Fisher's “Statistical Methods for Research Workers ” (Edinburgh: Oliver and Boyd) by generous permission of author and pub- - 2 lishers. It gives for various values of x* = XC (f - 9) !/ (that is, the sum of all the differences between observed and expected frequency in each class expressed in units of the expected frequency) the probability that the found x* may result only from errors of sampling. The x* values are ar- ranged in lines by number of degrees of freedom (n). For example, if in 11 — 1 classes one gets a x* value of 10 then the probability of the deviation of the observed values, from expected being due to random sampling is about 0.40, a deviation which is to be expected in about 40 out of 100 samples, from the same source. But if x” has the value 20, then the probability is only about 3 in 100, which is regarded as small. In fact, a probability (P) of less than 0.05 is regarded as so small as to justify the conclusion that the (large) observed differences between observation and expectation can not be due merely to random sampling. XIII. Ratio (F) of the larger mean variance (as between means of samples and within samples) to the smaller mean variance. The ratios at the intersection of the column of the degrees of freedom of the greater mean variance and that of the smaller mean variance are the limiting ratios for an expectation that the ratio shall be exceeded in random Sam- pling 5 times in 100 trials (plain type) or only once in 100 trials (bold-face type), hence a ratio that is very significant of a real difference between the sample such as can hardly be due to random sampling. Note that Fisher's 2 can be S 2 obtained from the F ratio by taking the square root of º: and S2 finding loge of this root. (From G. W. Snedecor, 1934, by kind permission of the author and publisher and with exten- sions by the author.) Also in the right-hand column the values of t, t = difference between two means divided by the standard error of the difference (p. 38). In general, t measures the probability that TABLES XII-XVIII 153 two Samples belong to the same universe; t has to be con- sidered in relation to the degrees of freedom in both of the samples. The entries in the right-hand column give the 5 per cent probability and the 1 per cent probability (in bold-face type) that in view of the t found the difference is significant of a real difference and not merely one due to sampling errors. If the t found is smaller than either of these numbers the dif- ference is probably due merely to sampling. (This table also is from Snedecor, 1934.) XIV. Table of the probable errors of the coefficient of cor- relation for various numbers of observations or variates (N) and for various values of r. The probable error of the — r? coefficient of correlation being 0.67459–rº, a table for the v/N varying values of N and r is easily constructed, and for large values of N is accurate with interpolation by inspection to two significant figures, which are all that are required. XV. Values of 1 — r" for the various values of r from 0.00 to 0.99. This table will facilitate the computation of the probable error of r, in accordance with the formula: I - r2 1. * r2 PE, = 0.67449 × −7=-; or the standard error of r, H-. v/N v/N This table has been specially computed. XVI. Values of V 1 — r", corresponding to values of r from 0.00 to 0.99. This table will be found of use in computing partial correlations and that of the average variability in an array in the correlation tables, whose value is a v/Trº and in many other cases. Specially computed. Tables of values of 1 – r" and V1 – F have been published by Miner (1922), and copied by others. XVII. Values of r = 2 sin : p for different values of p. This will aid in computing r by the rank difference method, given at page 79. XVIII. Functions of r for use in computing the approximate probable error of the tetrachoric coefficient of correlation function; - sin Tirt \? .674. 1 — rºl || 1 — 0.67 •N * | (*. ) 154 EXPLANATION OF TABLES where r is the tetrachoric coefficient. The entire formula (p. 108) is: PE, - ſº N [1 — ri” | ºm (**) | 90° ſº (a + c) (c + d) (b rº N°yy'v'N in which y, y' are the ordinate values of the normal probability curve, obtained from Table IIIa; see page 165. XIX. Table of log F functions of p. This table will enable one to solve the equations for yo given on pages 55–57 The table gives the logarithms of the values of F functions only within the range p = 1 to 2. As all values of the func- tion within these limits are less than 1, the mantissa of the logarithms is – 1; but it is given in the table as 10 – 1 = 9, as is usually done in logarithmic tables. Supposing the quantity of which we wish to find the value reduced to the form F(4.273). The value can not be found directly because the value of p is larger than the numbers in the table (1 to 2). The solution is made by aid of the equation I'(p + 1) = p/"(p), thus: log T(1.273) = 9.955.185 log 1.273 = 0.104828 log I'(2.273) = 0.060013 log 2.273 = 0.356599 log I'(3.273) = 0.416612 log 3.273 = 0.514946 log T(4.273) = 0.931558 or, more briefly, log I'(1.273) = 9.955185 log 1.273 = . 104828 log 2.273 = .356599 log 3.273 = .514946 log F(4.273) = 0.931558 = log 8.542 TABLES XIX-XXII 155 XX. Theoretical value of Q for various sizes of families (s) of 2 to 17 children. Q is the proportion that V is of U + V V or Q = UTV. To get U and V one has first to know the genic constitution of each of the two traits whose linkage is being studied as indicated in the text (p. 126). XXI. Standard errors cf the mean theoretical values of Q given in Table XX. To use in judging to which crossing- over category of Table XX any determination of Q belongs. XXII. Giving for any unit character the proportion of recessive offspring to be expected: (R) in random matings of dominants with dominants, and (S) in random matings of dominants with recessives when (B) gives the proportion of recessive individuals in the general population. The for- mulae are: R – (**) - (; ;) 4 \g” + 2pq q + 2p - (**) = —” 2 \g” + 2pg q + 2p where p + q = 1. g” + 2pq = the dominant individuals in the population (A). p” – the recessive individuals in the population (B). p = Vb. q = 1 – Vb. The probable error of p is given by the formula: q + 2p 0.6745 8N (1 + Vb) Nºxº (1 – Vb). 4(1–Vb), 4(1–Vb). 1 + Vb b (1+Vb). Vö (1+ v5. When N is 100 or more this formula may be simplified as fol- lows: PE of P - 997320 = V5 | –. q + 2p 2(1 + Vb) (1 — b)N 156 INDEX TO PRINCIPAL LETTERS 2 The probable error of ( ) is given as q + 2p 0.6745(1–Vb) |16.Nb — 1 – 9-? v/b). 12(1-2 v'b) 4N (1 + Vö)2 N1 – b (1 + Vb). 1 + Vb When N is 100 or more this reduces to approximately: p \* 0.6745(1 – Vb) b PE of | –tº– ) = gº *(; ;) (1 + Vb). N (1 — b) XXIII. Squares, cubes, square roots, and reciprocals of numbers from 1 to 1054. The use of this table can be extended by using the principle that, if any number be multiplied by n, its square is multiplied by n”, its cube by g te 1 n° and its reciprocal by −. 70, INDEX TO PRINCIPAL LETTERS USED IN THE FORMULAE OF THIS BOOK AD Average deviation O. Frequency of upper left quadrant CY Skewness index b The frequency of the upper right quadrant (p. 105) 6 Ratio of moments (p. 43) CC Coefficient of contingency CC’ Corrected coefficient of contingency C The frequency of the lower left quadrant D Distance from mean to mode (pp. 53–58); also stands for dominant d A difference; differential; the frequency of the lower left quadrant Difference between y and f; other deviations Base of Napierian logarithms = 2.718282 Correlation ratio Critical function (pp. 44–49); ratio of variances (pp 38, 70) Class frequency A certain value of a (p. 39) Class interval or range ; USED IN THE FORMULAE 157 Ipºpſ, Interval between the p’th and p”th individual (p. 51) 7p Interval between the pth and (p + 1)th individual (p. 50) A function of k. A proportional value of a;; also any constant; also co- efficient of alienation (p. 91) K Factor in finding limiting deviation from M (p. 21) l Range of normal curve along the abscissal axis (p. 53) lilº Portions of the curve range (p. 54) A Number of classes M Arithmetic mean n Modulus of common system of logs = log e M’ Arbitrary origin, assumed mean Maib Mean of composite MG Geometric mean MH Harmonic mean Mo Abscissal value of the mode (theoretical) M’o Abscissal value of the mode (empirical) M. Arithmetic mean of sample M. Moment about M N Number of variates (p. 24). 70, Number of classes n! n Factorial n, product of all integers from 1 to n (p. 150) 9t Number corresponding to logarithm: antilog v1 Average moment about M’ P Probability p Probability that an event will occur, especially the occur- rence of a recessive trait or gene; ordinal rank of a particular individual or case (p. 50) PE Probable error of the determination of any value PEAD Probable error of the average deviation PEc Probable error of the coefficient of variation PEM Probable error of the mean PE, Probable error of the standard deviation PE, Probable error of the correlation ratio PE, Probable error of the coefficient of correlation T Circumference in units of diameter, 3.14159 p0 (In economics) data (prices) for a base year p1 (In economics) data for current year Q Probability of non-occurrence; probability of a dom- inant gene; a root or power f 158 INDEX TO PRINCIPAL LETTERS Q (In economics), quantity of production; Q0 in base year; Q1 in current year. Also quartile deviation R Recessive; coefficient of multiple correlation r Coefficient of correlation p Coefficient of correlation by rank-difference method, p. 78 S A relation of 8's (p. 44); also “sample”; also standard deviation of sample SE Standard error of the distribution of the magnitudes SEM Standard error of the mean, - g/ VN = standard devi- ation of the sampling distribution of the mean SE, Standard error of a standard deviation, or/ V2N X Summation sign Or Standard deviation of a frequency distribution; index of variability t Measure of the significance of a difference (p. 38, 76); also number of observed recessives T In type IV (p. 56); number of expected recessives 6, p. Angles U Sum in mean square contingency formula; also number of non-crossover progeny V Abscissal value of any class; coefficient of variation Vo Value of central class 1) Magnitude of any variate or value X. The horizontal axis or basis of polygon Q. A varying abscissal value 21, 22, etc. Definite values of a; X Test of closeness of fit (p. 47) Y The vertical axis of polygons; also the log of y (p. 52) $/ A varying ordinate value yo Value of the ordinate at the origin z' Transformed r (p. 76) THE GREEK ALPHABET A. o. Alpha I t Iota P p Rho B 6 Beta K K Kappa X a s Sigma I” ) Gamma A X Lambda T + Tau A 6 Delta M p. Mu T U Upsilon E e Epsilon N v Nu dº q, Phi Z ºf Zeta F. : Xi X x Chi H m Eta O o Omicron \g p Psi 6 6 19 Theta II ºr Pi Q (o Omega FORMULAE 159 TABLE I. FORMULAE Binomial expansion n(n − 1) (a + y)” = z* + na"-ly + —g- n(n − 1) (n − 2) -- 3| a"Tºyº -- ... + y” To solve any equation of the second degree: —b = Vö, - 4 az” + ba + c = 0; a = G.C 2a >v ×f V A M = — = — verage N N Geometric IOlea, Il M – X (1 G = Voivºva. . . or log G = 2(log v) N X *w- 2 Standard deviation = q = N sººn = Viſº. - M)” º - Vo)* |>f(V – Vo)|* N N N Variance = or? Coefficient of variation C = a X 100 Probable and Standard Errors 0.6745or 0.6745ar M v/N Or v/2N Of Or SEM = −F — SE *-g M v/N • ** Vºx 0.8454.or 0.67456 C \ 2 |}% PE n = PEO = — | 1 2 : — * = + TVN C *}| + (#) Significant difference > 3 v'PEy. + PEM,”, or 2 v SEM. H. SEM,” 160 FORMULAE TABLE I. FORMULAE–Continued Significance of difference between correlation coefficients of 2 samples, 2' = # log.(1 + r.) – log (1 – r), p. 76 1 Standard error of 2", "F —7— V n' — 3 - Curve Analysis > f(V – V , 2/(V – Vº) v2 = 2f(V – Vo)? N xf(V – V.): xf(V – Vo)" v3 = N p4 = N pu 1 = 0 pu2 = v2 - w1% pu 3 = v3 – 3viv.2 + 2v 13 pſ 4 = v4 - 4viva +- 6v1°v2 *- 3v.14 wi' = 0 u2' = y2 – v1% – Hºg; us' = va – 3viva -- 2v,” u', a v. – 4viva -- 6v1°va – 3914 – #(va – viº) + gºd I'or normal distribution: 2. 6. SEu2 = −F or? N; SE81 = N; SEu3 = F or 3 6. SE63 = 24 * = + ° VW 62 = N 96 SEus = or 4 N 61? Chi square, x* = y For skew distributions: Distance mean to mode: D = a a 2 2A/o-3 (Types I, IV); o' = 2Vo-3 S == k = } Skewness of = } Văii; p (Type W) FORMUL.AE 161 TABLE I. FORMULAE–Continued T3 SED = *Nº. Curve Typing F1 = 282 – 381 – 6 = 81(82 + 3).” 4(482 – 381) (282 – 381 – 6) Variance, Equal Classes Total sum of squares, X(a: — Mac)* Sum of squares within classes, Xs X(a – Ms)* Sum of squares between classes, n2s(Ms — M.)” Test Significance of a Difference in Variance larger mean square tº- (with Table XIII) Smaller mean square Correlations, etc. X(dev a X dev y X f) r F N or 1 - or 2 1 — r? *-sº-sº-sº-sº 1 — r? SEr = FF –F– or, small samples: * T VW Ples: VV-H Grouped data sºy-Gººd Nºey sºme ºw. mºsº ep)2 Regression equation: To predict Y from X: Tacy F Y = a + b X; a = My — bl/Ma:; b = ** r O 3: To predict X from Y: Oſa: Mac – bMy; b = − r Oy Y = a + b Y; a 162 D FORMULAE TABLE I. FORMULAE-Continued g g o M, of M, Correlation ratio (m): myz = ; may = O y Oſir Coefficient of alienation, k = V/1 — r2 Coefficient of determination, r* (p. 92) 2r Coefficient of reliability, r (21+2") (21+2') = –tº– (p. 92) 1 + ? II V.” v/Vº + V.2 v/V52 + V.2 ºr 12 – 7'13 - ?93 v/1 – rºº V1 – rºs Spurious correlation, ro = Partial correlation, r12.3 = (p. 109) Partial sigmas, a 1.234 . . . n = 0 1 v1 — r°12 v1 * rºs., V1 – r"14.28 . . v/1 - r’in. 23. (n-1) (p. 112) Multiple correlation R1. 234 . . n -: V1 - [(1 - r°12) (1 - r°13.2) (1 - rºla. 23) s & ſº (1 - r" in 23 (n-1)) (p. 116) Coefficient of part correlation, r -N b°12. 340.2% (p. 116) 12 W 34 b°12. 340.2% + or 1% (1 — R*1.284) D. h O 1.234 (p. 117) where b12.84 = T 12. 34 O 2. 134 Law of relative growth, y = br” + a (p. 130) Xa:Y Measure of slope in time series, #. (p. 132) Q. CERTAIN CONSTANTS AND THEIR LOGARITHMS 163 II.-CERTAIN CONSTANTS AND THEIR LOGARITHMS Title Symbol Number Log Ratio of circumference to diameter . . . . . Tr 3. 1415927 0.4971499 Reciprocal of same . . . . . . . . . . . . . . . . . . . º 0.3183099 || 9 - 5028501 Square root of same . . . . . . . . . . . . . . . . . . v/r | 1.7724538 || 0.2485749 Reciprocal of square root of same J. 0. 5641896 || 9.7514251 Tr Square root of 2r. . . . . . . . . . . . . . . . . . . . . 27r 2. 506628 0.399.090 1 Reciprocal of same . . . . . . . . . . . . . . . . . . . . V→ 0.398.9423 || 9 . 6009 101 2T Reciprocal of 2ir. . . . . . . . . . . . . . . . . . . . . . 3. 0.159155 9. 201820 Square root of 2. . . . . . . . . . . . . . . . . . . . . . V2 | 1.4142136 || 0.150515 Reciprocal of same . . . . . . . . . . . . . . . . . . . –– O. 707107 9.8494849 v/2 2 2. Square root of . . . . . . . . . . . . . . . . . . . . . . W: 0.797885 9.90.19401 Base of hyperbolic logarithms. . . . . . . . . . 8 2.71828.18 0.434.2945 I Reciprocal of square root of same. . . . . . Ve 0. 606530 9, 7828528 Modulus of common system of logs = log e 777, 0.434.2945 9.6377843 Reciprocal of same-hyp. log 10........] 2.3025851 | 0.3622157 777. Factor to reduce a to probable error . . . . T 0.67449 9, 828.976 Com. log a = m X hyp. log z, or Com. log (com. log ay q = 9.6377843 -i-com. log (hyp. log ar) Hyp. log z = com. log a × # Or Com. log (hyp. log z) = Com. log (com. log) ac-H 0.3622157 Circumference of circle. . . . . . . . . . . . . . . . 27rºr Area of circle. . . . . . . . . . . . . . . . . . . . . . . . Trr2 Area of sector (length of arc = l). . . . . . . . %lr Area of sector (angle of arc = a”). . . . . . . -*—rr? 360 e is & 2 — h;2 & as g Eccentricity of an ellipse, e = vaſ-E, where , a = semi-major axis; b = a? semi-minor axis of ellipse. 164 OF NORMAL CURVE ORDINATES (Y) TABLE QC == III.--TABLE OF ORDINATES (Y) OF NORMAL CURVE, OR VALUES OF + CORRESPONDING TO VALUES OF : 3/0 Gr Oſ -: deviation from mean. standard deviation. 1/ 1/0 = frequency. N orv/27 = maximum frequency. : : : :::: i : : 100000 99501 98.020 95600 92312 88250 83527 78270 72615 66698 60653 54.607 48675 42956 37531 32465 27804 23575 19790 16448 13534 11025 08892 07.100 05614 04394 03405 02612 0.1984 0.1492 0.1111 00034 00000 99.995 99396 97819 95.309 91939 87805 83023 77721 72033 66097 6004.7 54007 48092 42399 37007 3.1980 27361 23176 19436 16137 13265 10795 08698 06939 05481 04:285 03317 02542 0.1929 0.1449 00819 00022 99.980 99.283 97.609 95010 9.1558 87353 82514 77167 71448 65494 59.440 53409 475.11 4.1845 36487 31500 26923. 22782 19086 15831 13000 10570 0.8507 06780 05350 04179 03232 02474 01876 0.1408 00598 00015 99955 9915S 97.390 94.702 91.169 86896 82010 76610 70861 64891 58834 52812 46.933 41294 35971 31023 26489 22392 1874.1 15530 12740 10347 08320 06624 05:222 04.074 03148 024.08 0.1823 O1367 004:32 00010 99920 99.025 97.161 94387 90.774 86432 81481 76048 70272 64287 58228 52214 46357 40747 35459 30550 26059 22008 18400 15232 12483 10129 08.136 0647.1 05096 0.3972 03066 02343 01772 013:28 00309 00006 99.875 98881 96.923 94055 90371 85962 80957 75484 69681 63683 576.23 51620 45.783 40202 34950 30082 25634 21627 18064 14939 12230 09914 07956 06321 04973 03873 02986 02:280 0.1723 0.1289 00219 00004 99.820 98728 96676 93723 8996.1 . 80429 74916 69087 63.077 570.17 51027 45212 39661 34445 296.18 25213 21251 17732 14650 11981 09702 O7778 06174 ois; O3775 0.2908 0.2218 0.1674 0.1252 0.0153 00003 99755 98.565 96420 93382 89.543 S5006 798.96 74342 68493 62472 56414 50437 44644 39.123 3394.4 29158 24797 20879 17404 14364 11737 09495 07604 06029 04734 03680 02831 02157 01627 01:215 00106 00002 99.685 98393 96.156 93034 891.19 84519 79359 73769 67896 61865 558.10 49848 44078 38589 33447 28702 24385 20511 17081 14083 11496 09.200 07433 05888 04618 0.3586 O2757 02098 01581 0.1179 00073 00001 99596 98.211 95882 92677 88688 84060 78817 73193 67298 61259 55.209 49260 43516 38058 32954 28251 23978 20148 16762 13806 11259 09090 07:265 05750 04.505 03494 0.2684 02040 O1536 01145 00050 00001 ORDINATES OBTAINED FROM TABLE III 165 1 / 2\ 2 1 ..-:() TABLE IIIa.—ORDINATES BASED ON y = —-e 2 \o OBTAINEID 2T or FROM TABLE III BY MULTIPLYING EACH ENTRY IN THAT TABLE BY = 0.3989.42 TO GET RESPECTIVE ENTRIES IN THIS TABLE. 1 V 27" (Specially calculated.) QC / Or 0 1 2 3 4 5 6 7 8 9 .39894| .39892|. 39886|.39876}. 39862]. 39.844). 39822|. 397.97|.39767|. 39783 . 39695|. 396.54|. 39608). 39.558|. 39505|. 394.48|.39387|. 39322]. 39253].39.181 . 39.104). 39024|. 38940 .38853].38762.38667|.38568|. 38466|. 38361|. 38.251 .38139|. 38023). 37903] .37780 .37654|.37524 .37391 .37255|. 37.115||. 36973 . 36827 .36678|.36526%. 3637]|.36213.36053|. 35889|. 35722. 35553|. 35381 : : .35206|. 35029| .34849|.34667 .34482|.3429.4|. 341041.33912|.33718. , 33521 .33322|. 33121.32918|. 32.713|.32506|. 32.297/.32086; .31874|. 31659|.31443 .31225 .31006|. 30785.30563. 30339|. 30114.29887. 29659|.29430|. 29.200 .28969|. 28.737|.28504|, 28269|. 28034|. 27798|.27562) .27324. 27086). 26.848 .26608|.26369|.26129|.25888|.25647] .25406!. 25164.24923|. 24681; .24439 : : . 24.197|. 23955]. 23713]. 23471|. 23230). 22988]. 22747|. 22506|.22265|.22025 . 21785.21546|. 21307|. 21068). 20831|. 20594|. 20357/.2012.1]. 19886|. 19652 . 19419|. 19186|. 18954]. 18724|. 1849.4|. 18265). 18037|. 17810). 17585|. 17360 . 17137|. 16915). 16694. 16474). 16256]. 16038|. 15822]. 15608. 15395|. 15183 . 14973}. 14764]. 14556]. 14350l. 14146]. 13943|. 13742. 13542. 13344|. 13147 : : . 12952|. 12758|. 12566|. 12376]. 12188|. 12001|. 11816|. 11632]. 11450|. 11270 . 11092]. 10915). 10741|. 10567|. 10396]. 10226|. 10059]. 0.9892|.09.728.09566 .09.405.09246|. 09089).08933}.08780).08628.08478|.08329|.08183.08038 .07895|.077.54|.07614|.07477|.07341}. 07.206. 07074|. 06943|.06814|.06687 .06562|.06438|.063.16|.061951. 06076|.05959|.05844|.05730.05618|.05508 . : .05399.05292|.05186.05082|.04980.04879|. 04780 .04682.04586}.04491 .043981.04307|.04217.04128.04041.03955.03871.037881.03706).03626 .03547|.03470|.0339.4|.03319|.03246|. 031741.03103|.03034|.02965.02898 .02833|.02768|.02705|.0264.3|.02582: .02522].02463|. 02406|.02349|.02294 . 0.2239|.02186.02.1341.02083 ºpoios, .01936.01888|.01842|, 0.1797 : : .01753|.01709|.01667|.01625.01585|. 01545.01506|.01468.01430.01394 .01358|.01323|.01289.01256).01223.01.191]. 01160.01130.01.100|.01070 . 01042.01014|.00987|. 00960|.00935|.00909 .00885).00860ſ. 00837|.00814 . 00792). 00770|.00748.00727|.00707.00687|.00668,00649.00631|.00613 .00595|.00578|.00562|.00545|.00530.00514|.004991.00485|.00470|.00457 . i 3 .00443|.00327|.00238|.00172|.00123}.00087.00061|.00042.00029 .00020 4 .00013|.00009|.000061.00004|.000021.000021.00001|.00001).00000|.00000 166 TABLE OF THE HALF CLASS INDEX (94a) VALUES TABLE IV-TABLE OF THE HALF CLASS INDEX (#a) VALUES OR THE NORMAL PROBABILITY INTEGRAL CORRESPONDING TO VALUES OF #; OR THE FRACTION OF THE AREA OF THE CURVE O" BETWEEN THE LIMITs o AND ++, OR O AND – : Or Or Total area of curve asswºmed to be 100,000 deviation from mean -: 3C a = standard deviation . 00 00000 40 80| 120 159| 199| 239| 279| 319|| 359| 40 m 7 6 3 . 25 | 98.71| 99.09| 9948| 9986|10025|1006410102|101.41|10180|10218 . 26 ||10257|10295|10334|10372|10411|10449|10488|10526|10565|10603 . 27 |10642|10680|10719|10757|10796|10834|10872|10911||10949110988 . 28 |11026||11064|11103||11141||11179|11217|11256||112941.1333||11371 . 29 |11409|11447||11485||11524|11562|11600||11638||11676||11715||11753 . 30 11791|11829|11867|11905||11943||11981|12019|120.58||12096] 12134 . 31 |12172|12210|12248|12286|12324|12362|12400||12438||12476|12514| 38 . 32 12552|12589 #12627|12665|12703||12741|12778||12816|12854|12892 .33 |12930|12968||13005||13043||13081||13118||13156||13194|13232||13269 . 34 |13307||13344|13382||13420]13457|13495||13533||13570113608||13645 . 35 |13683||13720. 13758||13795||13833/13870|13908||13945||13983|14020 1 8 7 1 4 2 7 1 8 2 7 2 2 1 7 2 6 0 7 2 9 9 7 3 3 8| 7 3 7 8 7 4 I 7 7 4 5 6 7 4 9 5 PROPORTIONAL PARTs, A 1 2 40 || 4.0 8.0 12.0 16 20. 24.0 .0 36.0 39 || 3.9 7.8 11.7 15. 6 19.5 23.4 27.3 31.2 35.1 38 || 3.8 7.6 11.4 15.2 19.0 22.8 26.6 30.4 34.2 37 || 3.7 7.4 11.1 14 18.5 22.2 5.9 6 33.3 TABLE OF THE HALF CLASS INDEX (94a) VALUES 167 TABLE IV.--Continued 0 1 2 3 4 5 6 7 8 9 14058 14431 14803 15173 15542 15910 16276 16640 17003 17364 17724 18082 18439 18793 19146 19497 19847 20194 20540 20884 21226 21566 21904 22240 22575 22907 23237 23565 14095 14468 14840 15210 15579 15946 16312 16676 17039 17400 17760 18118 18474 18829 19181 19532 19881 20229 20574 20918 21260 21600 21938 22274 22608 22940 23270 23FQQ 14132 14505 14877 15247 15616 15983 16348 16713 17075 17436 17796 18153 18509 18864 19216 19567 19916 20263 20609 20952 21294 21634 21971 22307 22641 22973 23303 23630 23891 24215 24537 24857 25175 25.490 25804 26115 26424 26730 27035 27337 27637 27935 28230 28524 28814 23924 24.247 24569 24889 25206 255.21 25835 26.146 26454 26761 2.7065 27367 27667 27964 28260 28553 28843 23956 2428O 24601 24920 25238 25553 25866 261.76 26485 26791 27095 27397 27697 27994 28289 28582 28872 14169 14542 14914 15284 15652 16019 16385 16749 17111 17472 17831 18189 18545 18899 19251 19602 1995.1 20298 20643 20986 21328 21667 22005 22341 22674 23006 23335 23663 23988 24312 24633 24952 25269 25584 25897 26207 26516 26822 27.125 27.427 277.26 28023 28318 28611 28901 14207 14579 14951 15321 15689 16056 1642}. 16785 17147 17508 17867 18225 18580 18934 19287 19637 19986 20332 20678 21021 21362 21.701 22039 22374 22707 23039 |23368 23695 24021 24344 24665 24984 25301 25615 25928 26238 26546 26852 27.156 27457 277.56 28053 28347 28640 28930 14244 14617 14988 15357 15726 16093 16458 16821 17184 17544 17903 18260 18616 18969 19322 19672 20020 20367 20712 21055 21396 21735 22072 22407 2274.1 23072 23401 23728 24053 24376 24697 27786 28082 28377 28669 28958 14281 14654 15025 15394 15763 16129 16494 16858 17220 17580 17939 18296 18651 19005 19357 19707 2005.5 20402 20746 21089 21430 21769 22106 22441 22774 23105 23434 23761 24085 24408 24729 25048 2|25364 25678 25990 26300 26608 26913 27216 27517 27816 28112 28.406 28698 28987 14319 14691 15062 15431 15799 16166 16531 16894 17256 17616 17975 18332 18687 19040 19392 19742 20090 20436 20781 21123 21464 21803 22139 22474 22807 23138 23467 23793 24.118 24441 24761 25079 25.395 25709 26021 26331 26638 26943 14356 14728 15099 15468 15836 16202 16567 16930 17292 17652 180]. I 18367 18722 1907.5 19427 19777 2012.5 20471 20815 21158 21498 21836 22173 22508 22840 23171 23499 23826 24.150 24473 24.793 25111 25427 25741 26052 26362 26669 26974 14393 14765 15136 15505 15873 16239 16604 16967 17328 17688 18046 18403 18758 19111 19462 19812 20160 20505 20850 21.192 21532 21870 22207 22541 2287.4 23.204 23532 23859 241.83 24505 248.25 25143 25459 25772 26084 26393 26700 27004 27246 27547 27845 28142 284.35 28727 29016 Z (Z / / 27577 27875 28171 28465 28756 29045 2.7307 27607 27.905 2820.1 284.94 28785 29074 37 36 35 34 33 32 31 30 29 PRoPortionAL PARTs, A 1 2 i i i | 168 TABLE OF THE HALF CLASS INDEX (94a) VALUES TABLE IV.-Continued ac / Of O 1 2 3 4 5 6 7 8 9 29.103 29389 29673 29954 30234 30510 30785 31.057 31327 31594 3.1859 291.32 2.94.17 29.701 29982 30261 30538 30812 31084 31353 3.1620 31885 32147 32407 32665 32919 33172 33422 33670 33915 34.158 34.399 34637 34873 35106 35337 35565 3.5791 29160|291.89 29446|29474 297.29|297.57 30010|30038 30289|30317 30565|30593 308393O866 3.1111:31138 31380|31407 3.1647|3|1673 31911|31937 32.173|32199 32433|32459 32690|32715 32945|32970 33197|33222 334471334.72 33695:33719 33940|33964 34.182|34206 34423|34446 34661|34684 34896||34919 35129|35152 35360|35.382 35588:35610 35814|35836 33744|33768 33988|34013 34230|34255 34470|34494 34708|34731 34943|34966 35175|35198 35405|35428 35633|35656 35858|35881 292.17|29246 2950229531 29785|29814 30066||300.94 30344130372 3062030648 30894 |30921 31.165|31.192 31433|31460 31700|31726 31964|31990 32225|32251 32484|32510 32741|32766 32995|33021 33247|33272 33497|33521 292.74 29559 29842 30.122 30400 30675 30948 312.19 31487 31753 32016 29303|29332 295.88|29616 298.70|29898 30.150|301.78 30427:30455 30702|30730 30975|31002 31246||31273 31514|31540 3.178031806 32042|32069 52.2 ſ ( 32536 32792 33046 33297 33546 32308|32329 32562|32587 328.18|32843 33071|33096 33322|33347 335.7133596 33793 34037 34279 34518 34.755 34989 3522} 35451 35678 35903 33817|33842 34061|34086 34303|34327 34542|34566 34778|34802 35013|35036 35245/.35268 35474|35497 35701|35724 35926|35948 29360 2964.5 29926 30206 30483 30757 31030 31300 3.1567 3.1832 32095 32355 32613 32869 33.122 33373 33621 33867 34.110 34351 34590 34826 35059 35291 35.520 35746 35970 28 27 26 25 24 23 22 21 20 1 2 3 1 1 etº TABLE OF THE HALF CLASS INDEX (94a) VALUES 169 TABLE IV.-Continued ac/ar 0 1 2 3 4 5 6 7 8 9 A 1. 21 |38686) 705| 724| 743| 762| 781] 800 819. 838|| 857 19 1. 22 876|| 895| 914| 933| 952 971 990 008| 027| 046 1.23 |39065| 084| 102 121 139|| 158|| 177| 195| 214|| 232 1. 24 251 270 288| 306| 324| 343| 361| 380| 398: 417 1.25 435| 453| 471| 489| 507 525 544; 562i 580| 598 1.26 617 634|| 652| 670) 688 706| 724 742| 760| 778 || 18 1.27 796|| 813| 831|| 849| 866|| 884| 902| 920. 937| 955 1.28 973| 990 008|| 025' 042|| 060|| 077 095) 112| 130 1.29 40147| 165| 182| 199| 216. 233| 251| 268| 285| 303 1.30 320|| 337| 354; 371. 388| 405 4:22| 439| 456|| 473 || 17 1.31 490) 507 524| 540| 557. 574| 591 608| 625| 641 1.32 658| 675|| 692|| 709| 725] 742|| 758| 775|| 792 808 1.33 825| 841 857| 873, 889| 906| 922| 938 955| 971 l. 34 987 004| 020|| 036|| 052; 068| 084|| 101| 117| 133 1.35 |41.149| 165| 181| 197: 213| 229| 245| 261| 277. 292 | 16 1.36 308|| 324; 340|| 355| 371| 387| 403|| 418| 434|| 450 1.37 466|| 481. 497| 512| 527| 543| 558 574| 590, 605 1. 38 621| 637| 652| 667| 683. 698| 713| 728| 744. 759 1. 39 774| 789: 804 819| 834| 849| 864| 879| 894. 909 || 15 1.40 924| 939| 954, 969| 984, 998 0.13| 028 043| 058 1.41 |42073| 088; 102| 117| 131|| 146|| 161| 175|| 190| 205 1.42 220, 234 248| 263| 277| 292|| 306| 321|| 335|| 350 1.43 364| 378] 393| 407| 421ſ 435| 449| 464| 478| 492 1.44 507| 521 535i 549| 563; 577| 591 605 619| 633 || 14 1.45 647| 661| 675 688| 702 7.16|| 730| 744. 758| 772 1, 46 785; 799| 813| 826; 840|| 854 867| 881| 895, 908 1.47 922| 935, 949, 962. 975|| 989 002| 016|| 029| 043 1.48 |43056|| 069| 083| 096] 109| 122| 136|| 149| 162| 175 1.49 189| 202| 215| 228| 241 || 254| 267| 280; 293| 306 13 1. 50 319|| 332| 3.45| 358! 371| 383| 396. 409| 422| 435 1.51 448| 460| 473| 486| 498| 511. 524. 536|| 549; 562 1, 52 574| 587| 599| 612 624| 637; 649 662| 674 687 1. 53 699| 711| 724 736|| 748| 760 773| 785| 797.| 810 1.54 822| 834; 846, 858| 870, 882| 894| 906| 919| 931 12 1. 55 943| 955| 967| 978| 990 002| 014|| 026|| 038) 050 1.56 44062| 074| 085| 097| 109| 120 132| 144|| 156] 167 1. 57 179| 191; 202| 214, 225 237] 248| 260| 271 283 1.58 295] 306| 317| 329| 340| 351 363. 374; 385| 397 1.59 408| 419) 430| 442| 453| 464| 475|| 486; 498| 509 PRoPort IonAL PARTs, A 1 2 3 4 5 6 7 8 9 19 1.9 3.8 5.7 7.6 9.5 11.4 13.3 15.2 17. 1 18 1.8 3.6 5.4 7.2 9.0 10.8 12.6 14.4 16.2 17 1.7 3.4 5. 1 6.8 8.5 10.2 11.9 13.6 15.3 16 1.6 3.2 4.8 6.4 8.0 9.6 11.2 12.8 14.4 15 1.5 3.0 4.5 6.0 7.5 9. O 10.5 12.0 13. 5 14 1.4 2.8 4.2 5. 6 7. 0 8.4 9.8 11.2 12.6 13 1. 3 2.6 3.9 5.2 6, 5 7.8 9. 1 10.4 11.7 12 1.2 2.4 3.6 4.8 6.0 7.2 8.4 9.6 i0.8 11 1.1 2.2 3.3 4.4 5.5 6.6 7.7 8.8 9.9 170 TABLE OF THE HALF CLASS INDEX (94a) VALUES TABLE IV.-Continued a:/or O 1. 2 3 4 5 6 7 8 9 A 1.60 44520 || 531 || 542 || 553 || 564 || 575 || 586 597 || 608 || 619 || 11 1.61 630 || 641 || 652 | 662 || 673 | 684 || 695 || 706 || 717 | 727 i. 62 738 749 || 760 || 770 781 791 || 802 || 813 || 823 || 834 1.63 845 || 855 866 || 876 || 887 | 897 || 908 || 918 929 || 939 1.64 950 960 970 980 || 991 001 || 011 || 022 || 032 || 042 1.65 45053 || 063 || 073 || 083 || 093 || 103 || 114 || 124 || 134 || 144 1.66 154 || 164 174 | 184 194 204 || 214 || 224 234 || 244 || 10 1.67 254 264 || 274 || 283 || 293 303 || 313 || 323 || 332 || 342 1.68 352 362 || 371 || 381 || 391 || 400 || 410 || 419 || 429 || 439 1.69 449 || 458 || 467 || 477 || 486 || 496 || 505 || 515 || 524 || 534 1.70 543 553 || 562 571 || 581 590 599 || 609 || 618 627 1.71 637 646 || 655 | 664 || 673 | 682 | 692 || 701 || 710 || 719 1. 72 728 || 737 || 746 || 755 || 764 773 782 | 791 800 809 9 1.73 818 || 827 | 836 | 845 854 863 | 871 || 880 | 889 | 898 1.74 907 916 || 924 || 933 942 950 | 959 || 968 || 977 | 985 1.75 994 003 || 011 || 020 || 028 || 037 || 045 || 054 || 062 || 071 1.76 || 46080 088 096 || 105 || 113 | 121 || 130 || 138 || 147 || 155 1.77 164 || 172 | 180 | 188 || 196 || 205 || 213 221 || 230 || 238 1.78 246 || 254 || 262 | 270 279 287 || 295 || 303 || 311 319 1 .. 79 327 | 335 | 343 || 351 359 || 367 || 375 || 383 || 391 || 399 8 1.80 407 || 415 || 423 || 430 || 438 446 || 454 || 462 469 || 477 1.81 485 || 493 || 500 || 508 || 516 || 523 531 || 539 || 547 || 554 1.82 562 || 570 577 || 585 592 || 600 607 || 615 622 || 630 1.83 638 || 645 || 652 | 660 | 667 || 674 682 | 689 697 || 704 1.84 712 719 || 726 | 733 || 741 || 748 || 755 || 762 770 777 1.85 784 791 || 798 || 806 || 813 || 820 | 827 | 834 || 841 | 849 1.86 856 | 863 870 | 877 | 884 || 891 | 898 || 905 || 912 919 7 1. 87 926 933 || 939 946 || 953 960 967 || 974 || 981 988 1.88 995 001 || 008 || 015 || 021 || 028 || 035 | 042 || 049 || 055 1.89 || 47062 | 069 || 075 || 082 || 088 || 095 || 102 || 108 || 115 | 122 1.90 128 135 | 1.41 | 1.48 || 154 || 161 | 167 || 174 180 | 187 1.91 193 || 200 | 206 212 || 219 || 225 || 231 238 244 || 251 1.92 257 || 263 || 270 276 || 282 || 288 294 || 301 || 307 || 313 1.93 320 || 326 || 332 338 || 344 || 350 || 356 || 362 || 369 || 375 1.94 381 || 387 || 393 || 399 || 405 || 411 || 417 || 423 || 429 || 435 6 I .95 441 || 447 || 453 || 459 || 465 || 471 || 476 || 482 || 488 || 494 1.96 500 || 506 || 512 || 517 | 523 || 529 535 | 541 || 546 || 552 1.97 558 || 564 || 569 || 575 || 581 || 586 || 592 || 598 || 603 || 609 1.98 615 620 || 626 || 631 637 643 || 648 || 654 659 | 665 1.99 670 | 676 | 681 | 687 || 692 698 || 703 || 709 || 714 || 719 2.00 725 | 730 | 735 | 741 || 746 752 || 757 762 || 768 772 2.01 778 784 || 789 || 794 || 799 || 804 || 810 || 815 || 820 826 2.02 831 836 | 841 | 846 851 || 856 862 || 867 872 877 2.03 882 887 | 892 | 897 || 902 || 907 912 || 917 922 || 927 5 2.04 932 || 937 || 942 | 947 || 952 957 962 | 967 972 || 977 PROPORTIONAL PARTs, A 1. 2 3 4 5 6 7 8 9 11 1.1 2.2 3.3 4.4 §. 5 6.6 7.7 8.8 9.9 10 1.0 2.0 3.0 4.0 5.0 6.0 7. O 8.0 9.0 9 0.9 1 .. 8 2.7 3.6 4.5 5.4 6.3 7.2 8.1 8 0.8 1.6 2.4 3.2 4.0 4.8 5.6 6.4 7.2 7 0.7 1.4 2.1 2.8 3.5 4.2 4.9 5.6 6.3 6 0.6 1.2 1.8 2.4 3.0 3.6 4.2 4.8 5.4 TABLE OF THE HALF CLASS INDEX (94a) VALUES 171 TABLE IV.--Continued ac/a. O 1 2 3 4 5 6 7 8 9 A 2.05 || 47982 | 987 991 || 996 001 || 006 || 011 || 015 || 020 || 025 2.06 || 48030 || 035 | 039 || 044 || 049 || 054 || 058 || 063 || 068 || 073 2. 07 077 || 082 || 087 || 091 || 096 || 100 || 105 || 110 || 114 || 1 19 2.08 124 || 128 || 133 || 137 || 142 | 1.46 || 151 | 155 | 160 | 165 2 : 09 169 173 || 178 182 | 187 | 191 196 || 200 | 205 || 209 2. 10 214 218 222 || 227 || 231 || 235 | 240 || 244 || 248 || 253 2. 11 257 261 266 270 274 || 278 || 283 287 291 295 2. 12 300 || 304 || 308 || 312 || 316 || 320 || 325 || 329 || 333 || 337 2. 13 341 || 345 || 350 || 354 || 358 || 362 || 366 370 || 374 || 378 2. 14 382 || 386 || 390 || 394 || 398 || 402 | 406 || 410 || 414 || 418 4 2. 15 422 || 426 430 || 434 || 438 || 442 || 446 || 450 || 453 457 2. 16 461 || 465 || 469 || 473 || 477 || 480 || 484 || 488 || 492 || 496 2, 17 500 503 || 507 || 511 || 515 || 518 || 522 || 526 || 530 || 533 2. 18 537 541 || 544 || 548 || 552 || 555 || 559 || 563 || 566 || 570 2. 19 574 577 || 581 || 584 || 588 || 592 || 595 || 599 || 602 || 606 2, 20 610 || 613 || 617 | 620 || 624 || 627 | 631 || 634 || 638 || 641 2. 21 645 || 648 || 652 || 655 || 658 | 662 | 665 669 || 672 || 676 2. 22 679 | 682 | 686 | 689 || 692 || 696 699 || 702 || 706 || 709 2, 23 713 || 716 || 719 || 722 || 726 || 729 || 732 | 736 | 739 742 2.24 745 749 || 752 || 755 758 || 761 || 765 768 || 771 || 774 2.25 778 781 || 784 || 787 || 790 793 || 796 || 799 || 803 || 806 2. 26 809 || 812 815 818 || 821 || 824 827 | 830 || 833 837 2.27 840 | 843 | 846 | 849 || 852 855 || 858 || 861 || 864 || 867 3 2. 28 870 | 872 | 875 || 878 || 881 884 || 887 | 890 | 893 | 896 2.29 899 || 902 || 905 || 907 || 910 || 913 || 916 || 919 || 922 || 925 2.30 928 || 930 || 933 || 936 || 939 || 942 944 || 947 | 950 953 2.31 956 | 958 961 || 964 || 966 || 969 || 972 || 975 | 977 || 980 2.32 983 || 986 || 988 || 991 994 || 996 || 999 002 || 004 || 007 2.33 || 49010 || 012 || 015 || 017 | 020 || 023 025 | 028 || 031 || 033 2.34 036 || 038 041 || 043 || 046 || 048 || 051 054 056 || 059 2.35 061 || 064 || 066 || 069 || 071 || 074 || 076 || 079 || 081 || 084 2.36 086 || 089 || 092 || 094 || 096 || 098 || 101 || 103 || 106 || 108 2.37 111 || 113 || 115 || 118 || 120 | 122 || 125 | 127 | 130 || 132 2.38 134 || 137 || 139 || 141 || 144 || 146 || 148 || 151 | 153 || 155 2. 39 158 | 160 | 162 | 164 167 169 || 171 || 173 || 176 || 178 2.40 180 182 | 185 187 | 189 || 191 || 193 || 196 || 198 || 200 2.41 202 || 205 || 207 || 209 || 211 || 213 || 215 217 | 220 222 2.42 224 || 226 228 || 230 || 232 234 237 239 || 241 243 2.43 245 || 247 || 249 || 251 || 253 || 255 || 257 || 259 || 261 || 264 2.44 266 || 268 270 || 272 || 274 || 276 || 278 || 280 282 284 2 2.45 286 || 288 290 292 || 294 || 295 || 297 || 299 || 301 || 303 2.46 305 || 307 || 309 || 311 || 313 || 315 || 317 || 319 || 321 || 323 2.47 324 || 326 || 328 || 330 || 332 || 334 || 336 || 337 || 339 || 341 2.48 343 345 || 347 || 349 350 || 352 || 354 356 || 358 || 359 2.49 361 || 363 || 365 || 367 368 || 370 || 372 || 374 || 375 || 377 2.5 379 || 396 || 413 || 430 || 446 || 461 || 477 || 492 506 || 520 | 16 2.6 534 || 547 560 | 573 || 585 || 598 || 609 || 621 || 632 || 643 || 12 2.7 653 | 664 || 674 || 683 || 693 || 702 || 711 || 720 | 728 736 9 2.8 744 || 752 760 || 767 || 774 || 781 || 788 795 || 801 || 807 7 PROPortion AL PARTs, A 1 2 3 4 5 6 7 8 9 16 1.6 3.2 4.8 6.4 8.0 9.6 11.2 || 12.8 14.4 12 1.2 2.4 3.6 4.8 6.0 7.2 8.4 9.6 10.8 9 0.9 1.8 2.7 3. 6 4.5 5.4 6.3 7.2 8. I 7 0.7 1.4 2.1 2.8 3.5 4.2 4.9 5. 6 6.3 6 ‘O 8' 0 A ‘0 9 * 0 g "O # "O 9 * 0 Z "O I 0 I 8' I 9 * I # I Z ‘I O' I 8 * 0 9 * 0 | f * 0 6, '0 & A." (, # * Z, I Z 8 I g “I Z' I 6 * 0 9 * 0 9 * 0 9. 9 9 Z'g 8 * Z, # * Z, O' Z, 9 I & ‘I 8' 0 # * 0 # g" fº O' 5 gº 8 0 ° 9 | g & 0 3 | g I O' T Q 'O Q 6 8 1. 9 Q 5 9. & I V 's Lāvd IV NOILHodoºd O 000 || 000 || 000 || 000 || 000 | 666 | 666 | 666 || 866 | 1.66 f O A66 1,66 966 || 966 || 966 || 966 || 966 || 966 || 966 || 966 6 ° 9 0 Q66 966 966 | #66 | #66 #66 | #66 || 966 £66 £66 8 ° 9 O Z66 || 366 || 366 || Z66 || I66 || I66 || 066 || 066 || 066 | 686 A 9 I 686 || 886 || 886 | 1.86 || 186 986 || 986 || 986 || 986 || #86 9 9 I 986 || 886 || 386 || IS6 || I86 || 086 || 6L6 || 8/6 || 8/.6 1/6 G 9 I 9/6 || 9 ||6 || #16 || 8/6 || 3/6 || IA6 || 0/6 || 696 || 896 || 996 # '9 I Q96 || 596 || 396 || I96 || 096 || 896 || 1.96 || 996 || 896 || 396 9 9 3, 096 || 876 976 | pipó | Zip6 || 076 | 896 || 996 || #86 || I96 Ž 9 9 636 || 936 || 736 || IZ6 || 8T6 916 || 8 I6 OI6 || 906 || 906 I '9 # 006 || 1.68 || 868 || 688 || 988 || 388 8/8 || 8/8 || 698 || 998 0 ° 9 g I98 || 998 || Ig8 978 || IW8 || 988 || I98 || 938 || 6’I8 || EIS67 || 6’ 3, V 6 8 1. 9 9 f 8 3. I 0 2/2: ‘panuºuo,0—"AI (IIQIVJ, SGInſiva (p3%) xSIGINI SSV'Io (ITVH GIHL IO IIavi, ZZI REDUCTION FROM CoMMON TO METRIC SYSTEM 173 TABLE V.—REDUCTION FROM COMMON TO METRIC SYSTEM (a) Inches to Millimeters 1 2 4 5 6 7 8 25.40 279.40 533.39 787.39 1041.4 1295.4 1549.4 1803.4 2057.4 2311.4 50.80 304.80 558, 79 812. 79 1066.8 1320.8 1574.8 1828.8 2082.8 2336, 8 76. 20 330. 19 584. 19 838.19 1092.2 1346.2 1600.2 1854.2 2108.2 2362.2 101, 60 355.59 609.59 863.59 1117.6 1371.6 1625. 6 1879.6 2133.6 2387.6 127.00 380.99 634.99 888.99 1143.0 1397.0 1651.0 152.40 406. 39 660. 39 914 - 39 1168.4 1422.4 1676.4 1930.4 2184.4 2438.4 177.80 431. 79 685.79 939.78 1193.8 1447.8 1701.8 1955.8 2209.8 2463.8 203.20 457. 19 711.19 965.18 1219.2 1473.2 1727.2 1981.2 2235.2 2489.2 228.60 482.59 736.59 990. 58 1244.6 1498.6 1752.6 2006.6 2260.6 2514.6 Twelfths Sixteenths 4.23 6.35 8.47 10.58 12.70 14.82 16.93 19.05 21. 17 23.28 25.40 1.59 3.17 4.76 6.35 7.94 9. 52 11.11 12.70 9/16] 14.29 5/8 11/16] 17.46 3/4 15.87|| 7/8 19.05 15/16 l 13/16| 20.64 22.22 23.81 25.40 (b) Avoirdupois Ounces into Grams 1 ounce = 28.349527 grams 1 3 4. 6 9 28.4 3.11.8 595.3 878.8 1162.3 1445.8 1729.3 2012.8 2296.3 2579.8 56.7 340.2 623.7 907.2 1190.7 #3 1757.7 2041.2 2324.7 2608.2 85.0 368.5 652.0 935.5 1219.0 1502.5 1786.0 2069.5 2353.0 2636.5 113.4 396.9 680.4 963.9 1247.4 1530.9 1814.4 2097.9 2381.4 2664.9 2405.7 2693.2 170.1 453.6 737.1 1020.6 1304.1 1587.6 1871. 1 2154.6 2438.1 2721.6 1332.4 1615.9 1899.4 2182.9 2466.4 2749.9 1077.3 1360.8 1644.3 1927.8 2211.3 2494.8 2778.3 226.8. 255.1 510.3| 538.6 793.8| 822.1 1105.6 1389.1 1672.6 1956.1 2239.6 2523.1 2806.6 Lb. (c) Avoirdupois Pounds into Grams 1 pound = 0.4535924 kilogram 4 6 0 4536 9072 13608 18144 22680 27216 31752 36.288 40824 1814 6350 10886 15422 19958 24494 29030 33566 38102 42638 2268 6804 11340 15876 20412 24948 29484 34020 38556 43092 2722 7258 11794 16329 20865 25401 2993.7 34473 39009 435.45 44906 174 MINUTES AND SECONDS IN DECIMALS OF A DEGREE TABLE VI.-MINUTES AND SECONDS IN DECIMALS OF A DEGREE . 000278*||21|, 0.0583.3||41}. 01.1389 , 000556 ||22.006111||42|. 011667 . 000833 ||23.006389||43. 011944 . 001111 ||24|, 0.06667||44|. 012222 . 001389 ||25 || 006944||45|. 012500 . 016666|| 21 |. 350000|| 41 | . 683333 . 033333|| 22 |. 366666|| 42 . 700000 . 050000|| 23 |. 38.3333|| 43 . 716666 . 066666|| 24 |. 400000|| 44 | . 733333 .083333||25 |. 416666|| 45 | . 750000 ... 100000|| 26 .433333|| 46 | . 766666 , 0.01667 ||26. 007222||46|. 012778 . 116666|| 27 |. 450000|| 47 | . 783333 , 0.01944 ||27|, 007 500||47|. 0.13056 . 133333|| 28 # . 800000 . 002222 ||28|. 007.778||48}. 013333 . 150000|| 29 4.83333|| 49 | . 816666|| 9 |. 0.02500 ||29|. 008056||49. Of 3611 10 |. 166666|| 30 |. 500000|| 50 | . 833333|| 10 | . 002778 ||30|. 008333||50|. 013889 11 |. 183333|| 31||. 516666|| 51 | . 850000|| 11 |. 003056 |31 |. 008611||51|. 014167 12 |. 200000|| 32 |. 533333|| 52 | . 866666||12 |. 0.03333 ||32|, 0.08889||52. O14444 13 |. 216666||33 |. 550000|| 53 | . 883.333|| 13 |. 003611 ||33.00.9167||53}. 014722 14 |. 233333||34 |. 566666|| 54 | . 900000|| 14 |. 003889 ||34|. 009,444||54|, 0.15000 I5 |. 250000|| 35 |. 583.333|| 55 .916666|| 15 . 004167 ||35|. 009.722#55. 015278 16 .266666|| 36||. 600000|| 56 | . 933333|| 16 |. 0.04444 ||36|. 010000|56|. O15556 17|. 28.3333||37 |. 616666|| 57 | .950000|| 17 | . 004.722 ||37]. 010278||57|, 0.15833 18|. 300000|| 38 |. 633333|| 58 | . 966666||18 |, 0.05000 ||38|. 01:0556||58.016111 19 |. 316666|| 39 |. 650000|| 59 | . 983333||19||. 0.05278 ||39|. 010833||59|. 016389 20 |. 333333}| 40 |. 666666|| 60 || 1 , 000000|| 20 .005556 ||40|. 0.11111 ||60|. 016667 * . 0002777778 POWERS AND FACTORLALS OF INTEGERS 175 TABLE VII.-FIRST TO SIXTH POWERS OF INTEGERS FROM 1 TO 30 Powers First Second Third Fourth Fifth Sixth 1 1 l 1 1 1 2 4 8 16 32 64 3 9 27 81 243 729 4 16 64 256 1024 4096 5 25 125 625 3125 15625 6 36 216 1296 7776 46656 7 49 343 2401 16807 117649 8 64 512 4096 32768 262,144 9 81 729 6561 59049 531441 10 100 1000 10000 100000 1000000 11 121 1331 1464. 1 161051 1771561 12 144 1728 20736 2488.32 298.5984 13 169 2197 28561 371293 4826.809 14 196 2744 38416 537824 7529586 15 225 3375 50625 7.59375 11390625 16 256 4096 65536 104.8576 16777216 17 289 4913 83521 14 19857 24.137569 18 324 5832 104976 1889568 34012224 19 361 6859 130321 24.76099 470.45881 20 400 8000 160000 3200000 64000000 21 441 92.61 194481 4084 101 85766121 22 484 10648 234.256 5153632 113379904 23 529 12167 279841 6436343 148035889 24 576 13824 33 1776 7962624 191102976 25 625 15625 39.0625 97656.25 244140625 26 676 17576 456976 1 1881376 30891.5776 27 729 19683 531441 1434 S007 3874.20489 28 784 21952 614 (556 172 1 036S 48.1890304 29 841 24389 7()7281 2051 1 149 594823321 30 900 27000 81.0000 24.300000 729000000 TABLE VIII.—FACTORIALS OF INTEGERS 1 TO 20 I 1 11 39,916,800 2 2 12 479,001,600 3 6 13 6,227,020,800 4 24 14 87,178,291,200 5 120 15 1,307,674,368,000 6 720 16 20,922,789,888,000 7 5,040 17 355,687,428,096,000 8 40,320 18 6,402,373,705,728,000 9 362,880 19 121,645,100,408,832,000 10 3,628,800 20 2,432,902,008,176,640,000 176 FACTORS FOR COMPUTING PROBABLE ERRORS TABLE IX.—FACTORS FOR USE IN COMPUTING THE PROBABILE ERRORS OF THE MEANS AND OF THE STANDARD DEVIATIONS PE 0.6745 PE 0.6745 M = Oſ. $or = − or vºw V2N To obtain the probable errors the factors in this table have to be multiplied by a . N 0.6745 0. 6745 N 0.6745 || 0.6745 N 0.6745 || 0.6745 v/N V2N v N | V2N v/N | V2W 5 .301641 | .213292 56 | . 0901.32 | . 063733 || 106 | . 065512 | . 046324 6 .275359 | . 194708 || 57 | .089338 |.063172 || 107 .065205 | . 046107 7 | .254933 | . 180265 || 58 |.088565 | .062625 || 108 | .064903 | .045893 8 ; ; 59 .087811 | . 062092 || 109 | .064604 | . 045682 13|3;|...};|| 30 ºś|ºgizZ || iio || 06:310|| 035474 11 20:366 e 143802 61 | .086360 | .061065 || 111 | . 064020 | . 045269 13 | Tö47ö; | 1375;6|| 62 |.085.660 |.060571 || 112 |.063733|.045066 13 187070 . 132278 63 | .084978 | . 060088 || 113 | .063451 | . 044866 14 | . 180265 | . 1274.67 64 .084311 | .059617 || 114 .063172 . .044669 15 174153 1231.44 || 65 .083660 | .059157 || 115 .062896 | .044475 16 168622 || 119234 || 66 | .083024 .058707 || 116 .062625 | . 044282 17 | 163588 || 115674 || 67 | .082402 | .058267 || 117 | . 062357 | . 044093 18 .158979 | . 112415 || 68 .081794 .057837 || 118 .002092 .043906 19 . 154739 . 109417 || 69 | .081199 |.057416 || 119 .06.1830 | .043721 20 | . 150820 | . 106646 || 70 .080617 | .057005 || 120 | . 061572 | .043538 21 | .147.186 | . 104076 || 71 | .080047 | .056602 || 121 |.061317 | .043358 22 | .143802 . . 101683 || 72 | .079489 | .056207 || 122 |.061065 |.043180 23 .140641 | .099448 || 73 |.078943 | .055821 || 123 .060817 | . 043004 24 . 137680 .097354 || 74 |.078408 || .055443 || 124 .060571 | . 042830 25 | . 134898 || .095387 || 75 | .077883 | .055072 || 125 | . 060328 .042659 26 | . 132278 | .093535 76 | .077369 .054708 || 126 | . 060088 | . 042489 27 | . 129806 .091786 || 77 . .076865 .054352 || 127 | .059851 | .042321 28 . 1274.67 | .0901.32 78 | .076371 | . 054002 || 128 .059617 | . 042156 29 125250 | .0885.65 79 | . 075886 .053660 || 120 | .0593.86 .04.1992 30 | . 123144 | .087076 80 .075410 .053323 || 130 | .059157 | . 041830 31 121142 | .085660 81 | .074943 | .052993 || 131 .058930 | . 041670 32 | . 119234 | .0843.11 82 | .074485 | . 052669 || 132 | .058.707 | . 041512 33 117414 | .083024 83 | .074035 | .052351 || 133 .058486 | . 041356 34 | . 115674 .081794 84 .073593 | . 052038 || 134 .058.267 | . 04120.1 35 | . 114010 | . 080617 85 | . 073159 | . 051731 || 135 | . 058051 | .04.1048 36 112415 | .0794.89 86 | . 072732 .051429 || 136 | .057837 | . 040897 37 | . 110885 | .0784.08 87 .072313 | . 051133 || 137 .057626 . O40747 38 | . 109417 | .077369 88 .071901 | .050842 || 138 | .057416 | . 040599 39 . 108005 | .076371 || 89 | .071496 .050555 || 139 .057209 | . O40453 40 | . 106646 .075410 || 90 .071097 | .050273 || 140 | .057005 | .040308 41 105.338 | . 074485 91 | . 070706 | .049996 || 141 .056802 | . O4016.5 42 | . 104076 | . 073593 92 | . 070320 | .049724 || 142 |.056602 || 0400.24 43 | . 102859 .072732 93 | . 06994.1 | .049456 || 143 | .056404 | . O39883 44 | . 101683 | . 071901 94 | . 069568 .049192 || 144 | .056207 | . O397.45 45 . 100547 | . 071097 95 | . 06920.1 ! .048933 || 145 .056013 | . O39607 46 .099448 | . 070320 96 | .068840 | .048677 || 146 || ,055821 | .039472 47 | .098384 | . 069568 || 97 | . 068484 || .048426 || 147 | .055631 . O39337 48 | .097354 |.068840 || 98 | . 068134 .048178 || 148 .055443 | .039.204 49 | .096356 | . 068134 99 || . 067789 . . 047934 || 149 i. 055256 | .039072 50 | .095387 | .067449 || 100 | . 067449 . 047694 || 150 | . O55072 | . 038942 51 | .094447 .066784 || 101 | . 067114 | . 047457 || 151 | .054889 |.038813 52 | .093535 | .066139 || 102 | .066784 | . 047224 || 152 |.054708 |.038685 53 | .09.2648 .065512 || 103 | . 066459 |.046994 || 153 | .054529 | . 038558 54 .091786 .064903 || 104 |.066139 .046767 || 154 |.054352 |.038433 55 |.090948 |.064310 || 105 | .065823 |.046544 || 155 |.054176 |.038308 FACTORS FOR COMPUTING PROBABLE ERRORS 177 & TABLE IX.-Continued N |0.6745 |0.6745 || N |0.6745 |0.6745 || N 0.6745 |0.6745 v/N | V2W v/N v/2N v/N v/2N 156 . 054002 | . 038.185 || 188 | . 049.192 | .034784 || 220 | . 045474 | . 032155 157 | .053830 || , 038.064 || 189 | . 049062 | . 034692 || 221 | . 045371 | . 032082 158 | . 053660 | . 037943 || 190 | , 0.48933 . . 034601 || 222 | . 045269 | . 032010 159 . 053491 | . 037.823 || 191 .048804 | . 034510 || 223 . 045167 | . 031938 160 .053323 | . 037.705 || 192 .048677 | . 034420 || 224 . 045066 | . 031867 161 | . 053157 | . 037588 || 193 ) .048551 | . 034331 || 225 | . 044966 | . 031796 162 | . 052993 || , 0.37472 || 194 | . 048426 | . 034242 || 226 | . 044866 | . 031725 163 | . 052830 | . 037.357 || 195 .048301 | . 034154 || 227 . . 044767 | . 031655 164 | . 052669 | . 037242 || 196 || , 0.48178 . 034067 || 228 . 044669 . 031586 165 | . 052509 | . 037129 || 197 | . 048055 | . 033980 || 229 . 044572 | . 031517 166 | . 052351 | . 037017 || 198 | . 047934 | . 033894 || 230 , 0.44475 | . 031448 167 | .052.194 | . 036906 || 199 || . 047813 | . 033809 || 231 | . 04:4378 | . 031380 168 .052038 . 036796 || 200 | . 047694 | . 033724 || 232 | . 044282 | . 031312 169 | . 051884 | . 036687 || 201 | . 047575 | . 033641 || 233 | . 044.187 | . 031245 170 | . 051731 | . 0.36579 || 202 | . 047457 . 033557 || 234 , 0.44093 . 0311.78 171 | . 051580 | . 036472 || 203 | . 047340 | . 033474 || 235 | . 043999 . . 031112 172 . .051429 | . 036366 || 204 | . 047224 | . 033392 || 236 | . 043906 | . 031046 173 .05128} | . 036261 || 205 | . 047108 | . 033311 || 237 | . 043813 .030980 174 .05.1133 | . 036156 || 206 | . 0.46994 | . 033230 || 238 . 043721 | . 030915 175 | .050987 . 036053 || 207 | . 0.46880 | . 033149 || 239 | . 043629 | . 030850 176 | .05084.2 . . 035950 || 208 || . 046767 | . 0330.70 || 240 | . 043538 | . 030786 177 | .050698 || .035849 || 209 | . 046655 | . 032990 || 241 | . 043448 | . 030722 178 .050555 . . 035748 || 210 | . 04654.4 | . 032912 || 242 . 043358 | . 030659 179 .050414 | . 035648 || 211 | . 046434 . 032834 || 243 . 043269 | . 030595 180 | .050.273 | . 035549 || 212 . 046324 | . 032756 || 244 | . 043180 . . 030533 181 .050.134 | . 035450 || 213 | . 046215 | . 032679 || 245 | . 043092 . 030470 182 | .049996 | .035353 || 214 | . 046107 | . 032603 || 246 | . 043004 | . 030408 183 | . 0.49860 | . 035256 || 215 . 046000 | . 0.32527 || 247 . 042917 | . 030347 184 || .049724 . . 035160 || 216 , 045893 | . 032451 || 248 | . 042830 | . 030285 185 | . 049589 | . 0.35065 || 217 | . 045787 . 032377 || 249 . 042744 | . 030225 186 | .049456 | . 034971 || 218 . 045682 | . 032302 || 250 . 042658 . . 030164 187 | . 049324 | . 034877 || 219 . 045578 | . 032228 178 PROBABLE OCCURRENCE OF A DEVIATION TABLE X.—ODDS AGAINST THE OCCURRENCE OF A DEVIATION IN TERMS OF THE STANDARD DEVIATION (OF AN INDIVIDUAL IN A DISTRIBUTION OR OF A SAMPLE MEAN) AS GREAT OR GREATER IN A LARGE COLLECTION OF SAMPLES TAKEN FROM A HOMOGENEOUS POPULATION. NEW LY COMPUTED a ſo Chances a ſa Chances a ſo Chances 1.0 2. 15 to 1 2. 1 26, 99 to, I 3. 1 515, 74 to 1 1.1 2. 69 2.2. 34.96 3, 2 726.70 1.2 3.35 2.3 45.62 3.3 1033, 34 1.3 4.17 2.4 59.99 3.4 1483, 12 1.4 , 5.19 2.5 79. 52 3.5 2148. 61 1. 5 6. 48 2, 6 106, 27 3.6 3.141.68 1.6 8. 12 2.7 143. 22 3.7 4637. 22 1.7 10. 22 2.8 194.69 3. 8 6914, 63 1.8 12.92 2.9 266.98 3.9 10394. O1 1.9 16.41 3.0 369.40 4.0 15771. 87 2.0 20.98 w TABLE XI. — GIVING FOR IDEVIATIONS IN MULTIPLES OF THE PROBABLE ERROR FROM 1.0 TO 5.0 IN COLUMN 1, THE PROBABI.E OCCURRENCE OF A DEVIATION AS GREAT OR GREATER IN 100 TRIALS (column 2); ALSO ODDS AGAINST THE OCCURRENCE OF A DEVIATION AS GREAT OR GREATER IN 100 TRIALS, a TO 1 (column 3) l 2 3 1 2 3 1.0 50.0 1 to 1 3.0 4. 30 22.2 to 1 1.2 4.1.8 1. 39 3. 1 3.65 26.4 1.4 34.5 1.90 3.2 3.09 31.4 1.6 28. 1 2. 57 3. 3 2, 60 37.4 1.8 22. 5 3.45 3.4 2, 18 44.8 2.0 17. 73 4. 64 3.5 1.82 53.8 2. 1 15. 67 5. 38 3. 6 1. 52 64. 9 2.2 13.78 6.25 3. 7 1. 26 78.5 2. 3 12.08 7.28 3. 8 1. 04 95.4 2.4 10. 55 8.48 3.9 . 853 116.3 2.5 9. 18 9.90 4.0 698 I42. 3 2.6 7.95 11. 58 4.2 461 215.8 2.7 6.86 13. 58 4.4 300 332.4 2.8 5. 89 15.96 4. 6 192 520.4 2.9 5. 05 18.82 4.8 121 828. 3 5.0 0745 1341.0 PROBABILITY OF SIGNIFICANCE OF X* 179 'Iorua prºpueņs qţun qņța așețAap Ubuſou º se pasn aq Kaur I – ,Ayz, – zXZ/^ aoſssardzo ºqº ', N go sonība roșiei Io) Z68’09 | 396 · 257 | 82,2 ' £† || 99Z ’0$ | 093 · 98 | 08g º £8 | 988 · 64 | 30g º gºz | #98 · 82 | 669 · Oz£6'ſı º ŞI908 º 9I996 * # I | 08 889 '6ý | 869 '9ý | 299 · Zº || 280 · 68 | 681 º gº | [9ý · Zº || 988 · 84 || 229 - † Z | g/# ' ZZ || 391 · 6L80/, '/, I† 29 ºg I9g Z * # I || 63 823 · 837 | 6 Iý ’9Ť | 2,88 ' Iiſ | 916 “ Z.8 || ZZO - †8 | [68 · 18 | 988 · ZZ | 2,9 ° €Z | 339 · Ig || 696 · 3I8Z6 - 9 IZī£8 * # IG9g º £I| 8Z 896’9ý | 0ýI’ fº | 8II ’0ý | I† 2. ‘98 || ZI6 º Z9 | 618 ° 09 | 988 · 9Z | 6L/ * za | 801 · OZ | † II º 3 [IĢI º 9 ſGZI * Þſ6/8° ZI| 13 Z†9 · 957 | 998 · Zº || 988 ’88 || 899 ‘98 || 96/, ' [8 | 9ýz · 62 | 989 º gz | 2,6/, ' I Z | Oz3 · 6 I | Z6z , ZI628 ºg I60ț¢ £ €I86.1 ° ZI| 93 † 18 º ffy | 999 ` Iſſº | Zg9 , 18 | Z88 * #8 | g29 ' 08 || ZZI ‘ 3Z | 188 - † Z | 298 · OZ | 0+6' 3 I || g/# · 9III9 * # I269 ° ZI#Zgº II || GZ 086 ZW | 013° 0ý | 9 Iº ( 98 || 96I ‘88 || 899 · 64 || 960 º 14 | 2,88 ° €z | '6I | Zg0° ŞI || 6gg ºg I8#8’ 8IŹ66 º II99.8 ° 0'[| iſ, 889 º Iº || 896 · 88 || ZZI (98 || 200 º Z9 | 6ZŤ ’83 || SIO · 9Z | 288 · ZZ | Iz0 , 6 I || 23 I - ZI | 333 - † I[60 º £I963 º II96 I " OI| 83 683 '0ř || 699 ' 28 | 736' 88 | 8 IS ’08 | I08 · ZZ || 686 - †Z | 298 · Iz | IOL ( 31 | ff Ig - gI | Iſ-0 * #1888 ' ZI009 º OIZț79 º 6 || ZZ 386 · 88 | 8f8 , 98 || I 19° Z.8 || 9 I9 , 62 | IZI ‘92 || 898 · 84 | 288 · OZ || Z8I. 'ZI | Gºff º gI | Oţa, º g I[6g º IIĢI6 º 62,68 ° 8 || IZ 999 ' 28 | 030 '98 || 0 Lý º 18 ° || ZI† `83 | 880 ºg Z | gſ.), º ZZ || 288 · 61 | 99Z • 9 I | 3,1 g * # I || 9 #ţ - ZII98* OI18% º 609Z " 8 || 0Z I6 I ’98 || 189 · 88 || Ý ýI ’08 | #0Z , ZZ || 006 º 8 z | 639 ' IZ | 888 · 8I | Zgg ºg I | g[ /. : ĢĪ | Igg · I IA II º OI199 · 8889 " ' | 6I. 908 - †8 | 9ý8' Z.8 || 698 · 83 || 686 ’94 | 092 · ZZ || I09* 04 || 899 ‘ LI | Offſ; * † I | Zgº · ZI | g99 · OI068 · 6906 “ Z.gI0 ° 2 || 8I 60† - 88 | 966 '08 | 189 · ZZ || 69/. : † Z | g[9 ' IZ || IIg ( 6 I | 888 · 9I || Igg ºg I | ZOO · ZI | g30 · OIZAĻ9 ° 8992 ' 280ſ; º 9 || Z.I 000' Z8 | 889 · 63 | 96Z '93 || Zțg º £Z | g9ý · OZ || 8I º ‘ŞI || 898 ºg I | ſz9 , ZI | ZgȚ · [ [ | g19 · 6Z96 “ Z† IQ º 9ZI8 º g | 9 I 829 ’08 | 69Z ’83 | 966 * #3 || 208 º ZZ || II º ‘6 I | ZZ8° LI | 688 - † I | Izp. " [ [ | 209 · OL | №g ’ 319Ż, ' },986 ºg6ZZ, º g | GI If I ( 63 || 828 º 93 || 989 º £Z | 590 ' IZ | Ig I • 8I | ZZZ º 9 I || 699 º £I | Iz3 * OI || 19ț, * 606/, ' ',I 19 º 9898 ºg099 º fy | # I 889 º 13 || .ZZț, '92 || Z.98 ° ZZ || ZȚ8*6 I || 986 · 9I | 6II ºg I || Oſº ' ZI | 926 · 6#89 º 8Zț0 ° /.Z63 ºg99/, ' $ZOI º ff | 8I ZIZ ' 9Z | #90 i †, | 930 " IZ | 679 · 8I | ZI8 ºg I || IIO * #1 || 0ț¢ £ º II | #90 · 6108’ ſ.†08 · 99ZZ ’98/ I ' †I 2,9 ° 8 || ZI 92/, ‘ſz | 819 · ZZ || 929 º 61 | 92Z , ZI | 189 * # I || 668 ° ZI | Î Ï £ · OL | 3#I ( 3686 ’98/9 ºg92,9 ° †609 º £890 º 8 || II 60% º £Z | I9 I ' IZ | 109 ° 8I || 286 ºg I || Zſºț, º £T | Ț Ș3/, ' II | ZŁg º 6Z93 ' 261. I º 9993 º $0ý6 ° 8690 º 88gg º Z | OI 999 * EZ | 629 º 61 | 616 º 9 [ | #89 º ţI | Zf Z - ZI | 999 º OT | &#g º $3868 º 9088 ºg89 I º fy928 º £Z89 ' Z880 º Z | 6 060 ° 03 || 89 I º 8 ſ. | 209 ’9 I || Z.98 ° €I | 080 º II | †;&g º 6††;$ “ Z.A ZG ( 9† 6g º fy | 06f; º £88/, ' ZZ80 º ∞,9ț79 " I || 8 9/, ſº ' 8I | Z39° 9′{ | 190 - † I || LIO “ ZI | 803 º 6888 º 89ț8 º 9I 29 * ſ;ZZ8’ 888.8 ° Z.19 I ' ?#99 º I68Z ' [ | 2, ZI8 º 9 I | 880 ºg I | Z6g º ZI | gț9 ° OT | 3Çg º $3I8z, ' ';8 #8 ºg8ZS ’80/0 ° 8†0Z ' Z989 " I#ÇI * IZ/8' 0 || 9 980 º GT | 888 · 81 | 020 º II || 98Z, * 668Z, " Z†790 º 9Igº * ſ;000 º £$fºº • Z0{9 " Igț¢I * IZg/, * 0†gg º 0 | g 11Z, º £I || 899 º II || 88†, º 66/2 ' 2686 ºg8/ S * ſ;298 · 896 I " Z6ſ-9 * I#90 º I{ { A, º 06.Zſ; * 0Z63 ° 0 | †; IŤgº II || 288 · 6ĢI8 ° /.IQZ * 9zț9, †Ç99 º 8998 ' Z† Zſ; * IÇ00 º T#89 º 0Zg8 * 0g8[ ° OĢII º 0 || 8 0IZ” 6†Z8’ ſ.I66 ºgg0g) * ſ;6IZ ' £80ſ º Z | `989 - I£IA ’09fff; " O[[Zº 080 I ’0†0ý.0 ° 0IOZO " 0 || Z. 989 º 9ZI† * gIý8 ° 890/, ' ZZ#9’ I† 10. ^ Iggf; * 08ŤI” 0 |ZŤ90 ° 0 |8gȚ0 ° 0 |86800 * 0 |849000 º 0 || ZgI000 * 0 | I I0 ° 0ZO ° 090° 00[ ′ 00Z ' 008° 009 ° 00/, ” 008° 006° 096 * 086 '066°0 = cſſ | „N SCHICH GHS HO[1S HO JLQO NGISIH V SVH HO OJL SÐ NOTGIRI GITAIWVS GIPIJL JJV HJ, SGIL VOICINI SOȚIȚIGIS TVOIJLGIHO GIHJ, V JIO JLVHJ, HJŪIAA GITđIWVS V JIO NOIJLQ QIQIJLSICI GIHL HO NOSIŁvēIWOO GIHAL NI GIGINI VLOEIO QIQ TIWA z* GIHJ) (, N) …“WOGIGIGIH H HO GIGIHÐGIGI ,, NGRAIÐ V HJŪIAA ‘J VHJ, (4) ALITIĶIvgIO HÆ GIHI, ĐNIAAOHS–, IIx GITGIVEL 180 VALUES OF F 69Ț º 8 [I.6 ° €ZI” și88 ***Zgº †ȚA, * ſ;g8 * ſ;90 ºgIZ ' g68 "g#9 ºg66 ºgºg ’999 ' },†0 " OI 833'Ø |79 · 3 ff.9°Z ļºſ, º Z |Z8“ Z [I.6°Z |26° Z | 20 ºg þýI ºg Izzº g ||gººg |3; * g |I]] · g [oi -5 |ģē, †OI Ogz ' 8 |Isº ſº | Ig - †82. * ſ;Z6 * ſ;II ºg9% ºg!;iſ; º GZ9 ºg08” º90 º 9ſº º 966 º 9Ł0 ° 899. * OTIș Z9?, ? ? ||I/, “ Z08° Z.06° Z.86 “ Z20 ° 88I º £8Z , 86Z * 8.A,8 ° 88ț¢ º £89 º £98 º 89Ż * ſ;ZI * 96} gg8 * 8 |98* ſ;90 ºg8% ºg8ff; * g1,9 ºgZ8” º80 ° 96T ( 91,8 ° 989 '910 ° ſ.69 ° F),99° 99% "IT} 908 º Z | 86 “ Z80 ° 8ZI * 80Z, * 88Z " 8†g º £††; * g09 ° 889 " 869 ° 8†8 º £Z0 * #9ff; * †Z8 ºg8} 66f; * 8. |g9 ºgg8 * g1.0' 91,3° 9Diff º 9Z9 ’9#8 º 900 ‘ſ,6ſ ^ ſ.9† ` ſ.g8’ ſ.gi; * 9gº º 6gº ’ ’ZTģ 998 º Z | 82 ° 8Z8 º 8If, ºg6† * 8Zg º £89° 88.1 ° 862 " 918° 926 º 8ZI * ſ;gº * ſ;# 1, * †69 ºgZg- 1,0), º 8 188 º 960° ſ.18 " },ºg º 1, |Z|, "},1,8' ſ.Oſ º 89× ° 8Liſ; º 8gſ., '8gI ( 681, º 6Z6 ° OT |† №, º £TE Z†† º Z | 2,9 ° 8g/, * 8#8 º £Z6 ° 8Ū0 ° †90 * #gI , †IZ ' +SZ * ſ;68 ° †ºg º fy92, º fy#I * g66 ºg9§ %80 º ff; İZO " 6#84 * 6Liſ; º 689 " 669 ° 690° 0'ſ ||1.8 ° 0Ț |gț º OT || 1,9 ° OT || 1.6 ° OT |68 º II |90° ſ√T |p,, º £I |9,4 ° 9'ſģ I 19° Z |98* ſ;††; * †ºg * ſ;09 - †;89 " ?ț¢/ * ſ;Z3 * ſ;88, †96 * #Ç0 ºg6I ºgI†; * g62, ºg[9 * 99 5. †09’ſ ļ9Ť (81 |69’ ET |8.6 ° ET |GT ‘WI |1.8 ° WT ||№g ‘ſ’T [08'ſ I |g6 * #T ||Tzº gI ſzg - gI |g6 ºg I ||69 · 9I loo" giſ logº Izg; 9/, /, * ?, † 89 ºg0/, ”9Z/, '9'#8 ºgI6 ºg96 ºg†0 º 960 º 9QI ‘992, '968 º 969 ° 9†6 * 9I/, ' /,†§ Iſº Ig |ZĽ93 ||98 93 |09'93 |88`93 |90° ſ., |84’ 14 |6ý º 1.3 | 1,9“ LZ ||16° ſ.z. liſz, gz III., gz |9ț¢’ 64 || Ig - 08 IzI , #gË Ź8I º 8 įºg º $389 º 8#9 º 869 ° 8† 2. ‘ 88/, ' 8#8 º 888° 8#6 º 8IO * 6ZI * 68Z, * 699 º 6ȘI " OI ! 9gº 936 | 6 |0.9 ° 66 |8† - 66 |9† `66 |†† `66 |ZŤ ‘66 |0ý º 66 |98 ° 66 |†g º 66 |ge · 66 | 08 · 66 |gz • 66 || LI · 56 |iſo ( 66 |6; º g6Ē 808'ſ 109°6'I |ZŤ’6I |gŤ ’6I |8Ť ‘6I ||IŤ ’6T [68 · 61 ||18.^ 61 |98 · 6I |gg · 6I 109 · 6I |gz • 6ī ļg I • 6I |0ô · 6I ſig · și | z 5 !99! 89|8ý (9989|00' 3089|91 º $849|00' 6919||88 º g019ļ00° 9909||Þ8 · 186g|00, 8Z6gļ68 · 698.g|80 ° ± 91.g|ĶI º gz9g|G# : goțg|go · 665 #|0T º zgoș 902° ZI|38° ſ.gº. [08°193 |#0’6řZ |09 · 973 ||I6'8īz 100° zřZ |68,8€z ſg). ‘984 || 16 º gºz || I - Oºz |/g, †zz |z| ºg Iz lóg · 66I |g# · IgŤ | I § 48 OO09†Z9IZI0I8Z9g†£ZI sanpe A 9.IgnbS u 80 W 19ņē919 Iog taopºeuſ go s93ıāøOI "? HO SGH (l'IVA OSTV "SCHTAIWVS GHEIJ, NAHGELAAJLGIRI XJLIGINGIÐOWOH JIO XHOVII TVGTRI V JIO JLNVOI HINĐIS GIEI OL (H8 JLS.[1]W OIJLW'R GÐHJ, GIZIS LVHAA BIO “SGTIRGIS OAAL GIEIL ŒIO „INOGIGIGIÀIJI BIO SG{{H}{0{HCI », SITOIHVA GIHJL HOH ‘ÐNIJ VOICINI ‘SGHTā WVS NIHILIAA ĢIONVIȚIVA NWOHIN HOITIIVINS ȚEII, OJ, SRITICIJNYS NGIQHAAJ_{H\{ {HONVIHVA NV GIW HIGIÐ HVT GIHJ, BIO OILW'NI GIHL ĐNIGIRI ‘H JIO SGIQTVA ŠIO SRITIĶIWL–‘IIIx GIYIgy L VALUES OF F 181 1,08 ' && 690 ° 2, 6Ț8 ° ', † 2.0 ° Z. T88 ° ', 080 - Z giğ8' Z 980 ' Z I98* Z 860 ° Z. 81.8 ° Z. IOI • Z 868’ º OI I “ Z IZ 6 “ Z 0ZI " Z ſiſ; 6 “ Z I8I “ Z !,!,6 ° % gŤ I " Z ZIO '8 09 I " Z gg0 º $ 6.ZI " Z 90'ſ ' £ I03, ' Z 9% º £ 9/, ' [ 08' Z 8/, ‘I 98 " ? |[8’ I Ziff º º #8 * I 6ſ; *?, 88 * I 1,9 ° ', Z6 " I 99 ' Z 96 * I g/, ' && IO “ Z },8° Z. 10 ' Z 00 ^ $ £I ' Z 9] ^ $ IZ ' Z 98 " 8 08° Z. 09 º £ Off • Z 8ff; * & 88 " I 89 " && I6 " I 89 " ? 86 " I 89 ' Z 96 " I 01, " && 00 ' Z 6), “ Z †0 - Z 98 ' & 80 " & 96 ° ', £I ' Z 1,0 ° 8 ŞI “ Z IZ * 8 ſº “ Z 1,8 ° 8 Z8 ° 3 99 º £ Off • Z 08° 8 09 ° Z. 68 " & I [ ' Z †6 “ Z £I ' Z 66 " ? ĢI " Z |g0′ E 8I “ Z ŹI '8 IZ ' Z 0% º 8 ÇZ ( Z 1,3 ° 8 63 ' Z 1,8 ° 8 88 " Z 9j; * 8 68 " Z Z9 º £ ## * z 82, ’8 Ig - Z 86 º 8 09 ° Z. IZ * ſ; 0), “ Z 1,0 ° 8 02 ' 3 ŹI º £ £& ’ Z ET ’8 GZ ' Z 8% º 8 SZ ' Z 08 º 8 I8 ° Z. 1.8 ° € †28 ° Z. giff º 8 88 " Z ºg º £ Z† • Z 1,9 ° 8 8; * Z 08° 8 89 ' Z 96 ° 8 09 ' Z 9I * # 69 ° Z. Oſ * ſ; 6/, “ Z Įº º 8 83 ' Z 9:4 ' £ 09 ' Z 18 º 8 Z8 ° Z. 1,8 ° 8 G8 ° Z. $ſ; º 8 88 " Z Țg º £ I† • Z 69 ° € G# * z. |69 º £ 6ý · Z 08' 8 99 ' Z ț6 º 8 09 ' Z OI ºff 19 " Z 08 - † 9/, “ Z ſg º ff 98 " Z set: N- CN Co CŞ. N- tº CN ºf lſº gº & CNR CN C C C, OO Co to OO sº Cº us co º cº, sº Q ºf CD be C & 9% º ff 08° Z. IS '$ Z8' Z, 1,8 * ſ; †8 ° Z. £(; * ſ; 18 ° Z. Og * ſ; 06° Z. 99 º ff 96 “ Z 1,9 ° § 96 ° 3 !,/, 'f' IO * 8 68 " º 90 º 8 80 ºg II º 8 08” g 8I º 8 Iſ; * g 9Z " & 1,9 ºg 98 " 8 83 6I 8I ZI 9I gI #I 8I ZI II orbnbS ubeſ), Iall guıS Ioj UIOpool,H. Jo SaeijaCI 182 VALUES OF F 81.9 % |89. Iſý6 * T8T º z88 ° ',99 º º0'ſ, º £68 " &&80 ° 86Ț º £Iſ; º £%), º 80,4 * ſ;90 ºg1.1 ° ſ. 800 º Z |## * I09 " I†/, º Ig8 * Ig6 * IZO " Z£I “ Z02 ' Z6Z “ ZOff; *?,99 ' Z6/, “ Z8I * 880 º ff;09 069|| ? |g|, !66 " I£® ° ',£(; * ?,I9 ° Z.fºſſ, º º†6 ° ',1,0 ° 8|84|8giff º £ſ.ſ, "8gº º ffII ºg8×] ©B †IO “ Z |8f, ‘I89 " I9/, ‘I28 " I26 " IGO " ZQI " ZZZ " ZI8 ° Z.Ziff " Z89 ' Z[8° Z.Iz º 890 º ff;9Ť și #01.',Z8 * T90° ºg68° Z.§ ' Z99 ' ,,08' Ø.66 ° ',ZT * 86% º 8Țg º £88° 8Iſº º ff9T ºgIĘŁ} IZ0 ° Z HZgº I99 " I61 " I06 " I00 ‘Z/0 " &8I “ ZGZ = Z†ę º zgț¢ º zI9 ° Z.#8° Z.8Z "980 ° ſyOff;} † 31. 306 * T£I " Z1,8 ° %99 ' Z† 1, * ?,1,8' Z},0 ° € 46.1 ° 81.8 ° 86g º £I6 º £Off º ff1,3 ° gŹŹſ.} 080 ° Z | 19 * I01. ^ {£8 * T† 6 " I†0 ° Z.II " ZZZ " Ø.6Z ' Z2.8 ° Z.8; * Z†9 ° Z.28 ' Z9% º 8ZI * ſ;98 ; CD 091||3TO “ Z†”, “ Zſiſ; *?,99 " Zîý8' Z, 186 º €ET ’808 - 8Aiff º £0ſ, "8Z0 º ffIg * ſ;68 ºg|99|| !Ģ ZŤ0 º Z |Z9 * I9/, ' [68 " I66 " I60 ° Z.9 I“ ZAZ " &†8 ° Z.Zſ; º Z|×69 ° Z IZ 6 “ ZZº º 8ZI * ſ;09 B 99), z80 %1,3° 36† • Z89 " ?1,8' Z00 ‘80% º £|(88 ° 809 ° 8£ſ, "8†0 * ſ;ſg º †Ziff * g09] !ß gï0 º Z | #9 * I// ' I06° I00 ' Z0[ ' ZŞI " Z8Z ' Z98 " Zgiſ; ' Z†g “ Z0/, ' Z86 “ Z88 ° €ŞI * ſ;6%Éſ 891.) z 1902 z08 ' &Zgº zȚſ, º 306 “ Z80 ° 88% º 898 · 8€g º £|}}}!,0 * ſ;1,9 ° †giff * g#9] !Ë 8ț70 º Z |g9 * I8/, ' [I6 " IZO " ?ZI " Z6I , Z6Z , Z,98 " Z†† • Z99 ' Z[/, ' Z96 “ Z#8 º £OZ * ſ;8, 9; Į1}{30Ț ' Z88 " ?gg ' Zſýſ, " Z86 “ Z90 · 89% º 868 ° 899 º £8ſ, º £II º ţ09 - †6ý ºg89 "№,ș Z90 º Z | 19 * I08 ` I86 " I80 ° z.8.I ' Z03 ' Z08' Z28 ' ZQ# * ?,29 ' Z8/, ' Z96 “ Zgº º $IZ * ſ;2,3Ë §!!! 3£TI ‘ ºg98 " Z99 ' Z8.1, ' Z96 “ Z60 º £6% º 8Ziff º69 ° 8Ź8 º £șI ‘ ſi#9 * ſ;8g ºgØſ,’ 1.g? 990 º Z | 69 º IZ8 * I96 * IÇ0 ° Z.gI , ZZZ ' ZZ8 ° Z.69 ° Z.Ziff " Z69 ° Z.† 2. ' Z86 " Z18 º 8ZZ " †9%Ē Cò !8!!3E.T | &0ý º ZZ9 ° Z.I8' Z66 " &&£T º 8Z8 ° 89; º £89 º 898 " 89I * ſ;99 º ſyEg ºgL/, ' \, 090 ° Z ||I/, ‘I#3 º I96 * {10 ' Z9I ' Z#Z ' Z#9 ° Z.I† • Z6ý º 309 º 39/, ' Z66° Z.88 ° 8† Z * ſ;GZ !6!! % |14,†† • Z99 ' Zg8 * &80 º $Elſ º 898 º 80g º £1,9 ° 806 º 8&& * ſ;%ſ, º ſºI9 ºg%8’ ſ. †790 º Z |$/, ' [98 " I86 " I60 ° Z.8I “ Z93 “ Z98 " ZZŤ; “ Z[9 ° Z.Z9 ° Z.8/, ' Z[0° 8Off º £9Z "††Z CO09†Z9IZſ0I8/.9Q†£ZI q qe sºn IBA 9.JenbS u 89 W 1948919 Iog ULIOpaar H. Jo sø9139Cſ pºnuºqu00–IIIX GITIĶIWL VALUES OF F 183 91,9 ° 096 ' Z96 ' 989 ' Ģ96 * 889 ' ? 996 ' [ Z69 ' Z 896 " I I09 ° Z Z 16 " I 609 º Z 92.6 ' [ 9Ț9 ° Z. 62.6 ' [ 9:49 ' Z #86 * I % { I89 ' Z I % I Z89 ' Z Z86 ° { 889 ' Z 066 " I 9ī£9 ° Z. †66 " I 099 º Z 000 º Z lſº CŞ. º g e e e aſº cre ºf to CN ºn Co dº CO tº OO to “C sº º 1O 00 aſ Y GO lſº be ºf the “H be ºf to ~# to ~H to do nº dº up to lo co irº do nº *-ſ ºl r— wº H y- H v- y-H wº r-R wº y- ºr-. 7-4 wº P- - Y-8 v H wº H wº *— wººd co be tº CN r- od Co tº QO cº öö , #9 º I0 ° Ț [ % Ç9 ' [ % I % di 9/, CŞ. GN CN H CŞ. H CN r- ºr r- CŞ 9:4 ' Z 08” I I8 ° Z. Z3 º I 88 " Z £8 * T 1,8 ° Z. g8 * T 68 " Z 98 * T Ziff " Z 88 " I giff " Z 68 " { 09 " & Z6 " I •& * & 88 " I #8° Z. #8’ ſ. 98 " Z G8 ' [ 4,8 ° % g8 * I 88 " ? 98 " I ſº ' Z 18 " I gț; * Z 68 " I Liſ; ' ? 06 * I Țg “ Z Z6 " I 8 g ' Z 86 " I gº ' Z Ç6 " I 69 ° ', 26 " I 89 ' Z 66 " I [ g' & #6 * I 8 g ' Z 96 ° { 99 ' Z 96 " I 99 ' Z 96 " I 1,9 ° Z. 26 " I 09 " Z 86 " I 89 " Z 00 ' Z 99 ' Z IO “ Z 69 ° Z. 80 ° Z. ZA, “ Z †0' Z † 1, ' & 90° Z. 84, " ? 10 " Z Ź8 ° ', OI " Z 99 ' & ['0 ' Z 99 ' Z Z0 - Z 99 ' Z 80 ° Z. 69 " Z 80 " Z 0], ' & †0 ° Z. £', ' Z g0 - Z 9), " ? 20 - Z 6/, “ Z 80 ° Z. Z8' Z OI “ Z ț8' Z II " Z 1,8' Z ZI " Z I6 ° ', #I ' Z g6 ° ', / I ' Z tº sº lo on CO to CN tº C v- Co tº to tº ºb & Co ºn to CN to ºn CN v- CN Ço cº CAO sº tº C. co cro co ºr to ºr, CN to CN to CN tº 99.'ſ 99 º ff 80 ° 8 Țſ, º į †0 ° 8 gſ, º iſ; 90 º 8 8ſ, º iſ; 20 ° 8 Z3 º ff 60 ° 8 g8 * ſ; 0[ ° 8 88° ; II º £ Ź6 * ſ; ȘI '8 96 * ſ; GI ºg aſº ºc) “H s. + tº cos có ‘c cº to cº to cº to cº to cº to cº to co is cd “c cô is cº º C - C C CO on CD cº Cº crº Cº. oO Ob OC CO Ne CO tº CO tºe CO C3 CO to CO CO C CO GO v- so to Aro CN et c CN sºft O vº-1 cro to DS- GN KO º co do 000 I 009 00ſ; 008 003 09 I 00I 06 eIgnbS UIbe W JetIQUIS Ioy UIopaalºſ Jo SaaijaCI 08 0/, 09 184 PROBABLE ERRORS OF COEFFICIENT OF CORRELATION TABLE XIV.—PROBABILE ERRORS OF THE COEFFICENT OF COR- RELATION FOR VARIOUS NUMBERS OF OBSERVATIONS OR VARI – ATES (N) AND FOR VARIOUS VALUES OF r (Specially Calculated) Correlation Coefficient r Number of Ob- servations 0.0 0.1 0.2 0.3 0.4 0. 5 25 0. 1349 0.1336 O. 1295 0. 1228 0.1133 0.1012 50 0.0954 0.0944 0.0916 0.0868 0.0801 0.0715 75 0.0779 0.0771 0.0748 0.0709 0.0654 0.0584 100 0.0674 0.0668 0.0648 0.0614 0.0567 0.0506 200 0.0477 0.0472 0.0458 0.0434 0.0401 0.0358 300 0.0389 0.0386 0.0374 0.0354 0.0327 0.0292 400 0.0337 0.0334 0.0324 0.0308 0.0283 0.0253 500 0.0302 0.0299 0. 0290 0.0274 0.0253 0.0226 600 0.0275 0.0273 0.0264 0.0251 0.0231 0.0207 700 0.0255 0.0252 0.0245 0.0232 0.0214 0.019.1 800 0.0239 0.0236 0.0229 0.0217 0.0200 0.0179 900 0.0225 0. 0221 0.0216 0.0205 0.0189 0.0169 1000 0.02.13 0.02 11 0.0205 0.0194 0.0179 0.0160 0.6 0.7 0.8 0.9 1.0 25 0.0863 0.0688 0.0486 0.0256 O 50 0.0611 0.0487 0.0343 0.0181 O 75 0.0498 0.0397 0.0280 0.0148 O 100 0.0432 0.0344 0.0243 0.0128 0 200 0.0305 0.0243 0.0172 0.0091 O 300 0.0249 0.0199 0.0140 0.0074 O 400 0.0216 0.0172 0.01.21 0.0064 O 500 0.0193 0, 0154 0.0109 0.0057 0 600 0.0176 0.0140 0.0099 0.0052 O 700 0.0163 0.0130 0.0092 0.0048 O 800 0.0153 0.0122 0.0086 0.004.5 O 900 0.0144 0.01.15 0.0081 0.0043 0 1000 0.0137 0.0109 0.0077 0.0041 0 VALUES OF 1 – r" correspond ING TO VALUES of r 185 TABLE XV.--THE VALUES OF 1 – r2 CORRESPONDING TO VALUES OF r FROM .000 TO .999 r 0. 1 2 3 4. 5 6 7 8 9 .00 |1.0000||1.0000||1.0000||1.0000||1.0000||1.0000||1.0000||1.0000||1.0000||1.0000 .01 | .9999; .9999 .9999| .9998] .9998] .9998] .9997.| .9997 .9997 .9996 .02 | .9996] .9996 .9995] .9995] .9994| .9994| .9993| .9993] .9992] .9992 .03 | .999] .9990] .9990] .9989. , 9988] .9988 .9987 .9986] .9986] .9985 .04 || .9984| .9983| .9982| .9982] , 9981] .9980} .9979 .9978] .9977] .9976 . O5 | .9975] .9974] .9973| .9972| .9971 .9970] . 9969| .9968] .9966] .9965 .06 .9964 .9963 .9962] .9960] .9959] .9958] .9956| .9955| .9954] .9952 .07 | .995]] .9950) .9948] .9947 . 9945] .9944| .9942| .994.1| .9939 .9938 .08 | .9936 .9934] .9933] .9931 .9929 .992S] .9926 .9924 .9923] .992.1 .09 | . 9919| .9917] .9915 .9914] .9912| .9910; .9908. .9906| .9904 .9902 ... 10 | .9900] .9898] .9896 .9894 .9892| .9890 .9888 .9886| .9883| .9881 . 11 | .9879| .9877| .9875|| .9872| .9870] .9868] .9865. .9863| .9861 .9858 . 12 | .9856] .9854| .9851] .9849| .9846| .9844| .984.1 .9839| .9836] .9834 . 13 .9831|| .9828 .9826) .9823] .9820. .9818] .9815| .9812| .9810] .9807 . 14 .9804| . 9801) .9798} .9796] .9793 .9790 . 9787] .9784} . 9781| .9778 . 15 | . 9775 . 9772 .9769, .9766] .9763] .9760 - 9757| .9754. .9750| .9747 . 16 .9744 . 9741 - 9738] .9734 . 97.31] . 9728. , 9724 .972.1} . 97.18 . 9714 . 17 | . 9711] .9708 .97.04] .97.01 .9697. , 9694 .9690) .9687] .9683| .9680 is . .3676, §673| 36égl §665 géðil jčáš Šēśā .9650. .9647| . 9643 . 19 | .9639| .9635| .9631. .9628| .9624 . 9620| .9616) .961.2 . 9608 .9604 . 20 | .9600| .9596 .9592| . 9588 .9584| . 9580| .9576] .9572} .9567| .9563 . 21 | .9559| . 9555 .9551| .9546. . 9542| .9538| .9533| .9529) . 9525| . 9520 . 22 .9516] .9512} .9507| .9503| .949S. . 949.4| . 9489| .9485| .94.80| .9476 . 23 .947II .9466] .9462] .9457| .9452| . 9448] .9443| . 9438 .9434| .9429 , 24 | .9424| .94.19) . 9414) . 9410| .9405| . 9400| .9395| .9390| .9385| .9380 .25 | .9375] .9370] .9365! .9360 .9355| .9350 . 9345 . 9340] . 9334|| .9329 .26 | .9324| .9319| .9314| .9308|.9303] .9298] .9292 .9287| .9282] .9276 . 27 | . 9271] .926.6} . 9260] .9255] .9249 .9244] .9238| .9233| .9227 . 9222 . 28 | .92.16|| .9210| .9205| .9199| .9193 .9188 .9182| .9176] .9171: .9165 . 29 | . 9159| .9153| .9147| .9142| .9136|| . 9130| . 9124| . 9118| .9112| .9106 . 30 | . 9100 .9094 .9088 . 9082| .9076. .9070. .9064| .9058 .9051 - 9045 . 31 | . 9039| .9033| .9027| .9020) .9014 .900.8 .9001| .8995) . 8989| . 8982 .32 | .8976] .8970] .8963] .8957| .8950] .8944 .8937] .8931. .8924 .8918 . 33 | . 8911| . 8904 .88981 .. 8891} . 8884 .8878 . 8871 . 8864| . 8858 . 8851 . 34 | . 8844| . 8837| . 8830| . 8824 .8817| . 8810; .8803| . 8796] .8789| . 8782 . 35 | . 8775 .8768 .8761] .8754 .8747] .8740. . 8733| .8726. , 8718 - 8711 . 36 .8704| . 8697| . 8690) .. 8682| . 8675|| . 8668} . 8660} . 8653| . 8646. . 8638 . 37 | . 86.31] . 8624 . 8616| .8609| . 8601| . 8594 .8586| .8579| . 8571 . 8564 .38 | .8556] .8548 .8541] .8533. .8525] .8518] .8510| 8502 .8495) .8487 . 39 | . 8479| . 8471] .8463| .8456} .8448) .. 8440 .8432} . 8424 .8416) . 8408 . 40 | . 8400 . 8392 - 8384 .8376 . 8368 . 8360 . 8352 . 8344 .8335| . 8327 . 41 | . 8319| . 8311| .8303} . 8294 - 8286. . 8278 .8269| .8261| . 8253 . 8244 . 42 . 8236 . 8228) .. 8219; . 8211| . 8202| .8.194 .8185} . 81.77| .8168 . 8160 .43 8151] .. 8142| . 8134] .8125. , 8116} . 8108} .8099 .8090 .8082| . 8073 .44 | . 8064 .8055 . 8046|| . 8038 . 8029| . 8020: . 8011| .8002| .7993 . 7984 .45 | .7975|| . 7966 .7957| . 7948) . 7939] .. 7930} . 7921 .7912 . 7902 . 7893 .46 | . 7884) . 7875) . 7866) . 7856. , 7847] .. 7838) . 7828) . 7819) . 7810| . 7800 .47 || 7791. 7733| 7772 7763| 7753| 7744 7734 .7725] .7715||7706 .48 || .7696) . 7686. .7677] .7667| .7657| .7648] .7638] .7628) . 7619) . 7609 .49 | .7599, .7589| .7579 - 7570 , 7560 ### 7530 - 7520) . 7510 186 VALUES OF 1–r” For VALUES OF r TABLE XV.-Continued r 0 1 2 3 4 5 6 7 8 9 , 50 7500 - 7490) . 7480| . 7470| . W460} . 7450) . 7440) . 7430] .. 7419) . 7409 . 51 . 7399. , 7389| . 7379| 7368| . 735S} . 7348) . 7337| . 7327| . 7317| , 7.306 . 52 . 7296) . 7286| . 7275 . 7265. . 7254|| . 7244|| . W 233| . 7223| . 7212| . 7202 . 53 7191| . 7180) . 7170) . 7159| . T 148 - 7138|| . 7127| . 71.16) . 7106] .. 7095 . 54 7084) . 7073| 7062| . 7052] , 7041 .. 7030. , 7019| , 7008 . 6997.| . 6986 . 55 . 6975|| 6964 . 6953| 6942| 6931 . 6920 - 6909 . 6898| . 6886. . 6875 . 56 6864 . 6853| . 6842| . 6830) .. 6819| . 6808| 6796|| . 6785| . (5774) . 6762 . 57 . 6751 .. 6.740 . 6728) .. 6717| . 6705] . 6694 | . 6682} . 6671| . 6659] . 6648 . 58 6636| , 6624| . 6613| . 6601| 65S0| . 6578 . 6566| . 6554| . 6543. . 6531 . 59 . 6519) . 6507 . 64951 .. 6484 | . 6472| . 6460| . 6448) .. 6436| . 6424| . 6412 . 60 6400 . 63SS| . 6376| . 6364| . 6352} . 6340] . 6328 . 6316i . 6303 . 6291 . 61 6279| . 6267| . 6255] .. 6.242 . 6230|| . 6218. , 6205| . 6193| . 6181 6168 .62 6156| . 6144|| . 6131| . 6119| . 6106| . 6094) . 6081| . 6069| . 6056| . 6044 . 63 . 6031| . 6018| . 6006| 59.93| . 5080 . 596S| . 5955 . 5942] .. 5930ſ .. 5917 . 64 5904| . 5891| , 5878 . 5866| 5853| 5S40| 5827. , 5814| . 5801: . 5788 . 65 . 5775|| . 5762| . 5749| . 5736) . 5723| . 5710| .56971 .. 5684) . 5670| .5657 . 66 5644| . 5631| . 5618. , 56.04] .5591 . 5578. , 5564] .5551| . 5538| . 5524 . 67 . 55.11 5498. . 5484 - 5471) . 5457| 5444 . 5430 - 5417. . 5403] .. 5390 . 68 5376. .. 5362| 5349| , 5335| 5321| 5308| 5294| , 5280| . 5267| . 5253 . 69 5239| .5225 . 5211 .5198| .5184| 5170) , 5156] .5142] , 5128. , 5114 . 70 5100 , 5086) .. 5072] , 50.58| .5044| . 5030} . 5016| , 5002] .4987| . 4973 .71 . 4959| . 4945| . 4931. . 4916| . 4902 .4888; .4873| . 4859 .4845] . 4830 . 72 . 4816. , 4802] , 4787] .4773| .4758 .4744] .4729| .4715. , 4700 . 4686 . 73 .4671] .4656] .4642] .4627| .4612 .4598| .4583 .4568| .4554 .4539 . 74 4524] .4509| .4494; .4480 .4465 .4450| . 4435| .4420, .4405] .4390 .75 . 4375] .4360; .4345| .4330, .4315| . 4300|| .4285 .4270| .4254 .4239 . 76 . 4224| .4209 .4194| .4178| .4163] .4148] .4132 .4117| .4102| .4086 .77 . 4071 .4056| . 4040] .4025| .4009 .3994| . 3978 3963] .3947 . 3932 . 78 . 3916| .3900 .3885| .3869| .3853] .3838| 3822 - 3806) . 3791) . 3775 ... 79 .3759| .3743| . 3727 .3712| .3696 - 3680 .3664| , 3648] .3632 .3616 .80 3600 .3584| .3568] .3552. .3536 .3520 - 3504| .3488 .3471 .3455 .81 | .3439| .3423| 3407 .3390| .3374| .3358. , 3341| .3325| .3309| .3292 . 82 . 3276] .. 3260 .3243] .3227 .3210 .3194 .3177| .3161 .3144|| .3128 . 83 . 31.11: .3094| .3078| .3061 .3044 .3028, .3011| .2994 .2978 .2961 , 84 2944] .2927| 2010| .2894. . 2877| .2860] .2843] .2826| . 2809| , 2792 .85 | .2775|| .2758 .2741| .2724 .2707| .2690 .2673] .2656] .2638 . 2621 .86 . 2604. .2587| .2570| .2552 .2535 .2518. , 2500| . 2483| .2466] .2448 . 87 , 2431| .2414| .2396| .2379| .2361| . 2344| .2326|| .2309| .2291] .2274 .88 .2256 .2238 .2221 | .2203 - 2185] .2168) . 2150 .2132| .2115| . 2097 . 89 . 2079| . 2061 .2043| 2026 , 2008] . 1990] . 1972| . 1954 - 1936 . 1918 . 90 | . 1900 - 1882 - 1864 - 1846 . 1828 - 1810. , 1792| . 1774 - 1755 . 1737 .91 . 1719, . 1701] . 1683 - 1664] . 1646] . 1628 . 1609| . 1591 . 1573| . 1554 . 92 . 1536. , 1518) .. 1499| . 1481| . 1462 - 1444| . 1425 | . 1407 . 1388 . 1370 .93 . 1351} . 1332| . 1314|| . 1295| . 1276 . 1258 - 1239| . 1220, . 1202 . 1183 .94 1164| . 1145| . 1126| . 1108) .. 1089| . 1070| . 1051 .. 1032| . 1013| .0994 .95 | . 0975|| 0956 . 09:37 . 0918 .0899| .0880 . 0861| .0842|| .0822 - 0803 .96 O784 . O765 . O746 . O726|| . 0707| . 0688 . 0668] .0649| . 0630 . 0610 .97 .0591 - 0572 - 0552. .0533. . 0513| .0494| . 0474 - 0455| . 0435} . 04:16 .98 O396 . 0376 . 0357| . 0337 . 0317| . 0298 . 0278 . 0258 . 0239 . 0219 .99 0.199| 0179| . 0159| . 0140|| . 0120 . 0100 . 0080| . 0060. .0040: . 0.020 0.086 |Z086 |#086 |9086 (S086° 0186 K.I.S6" |#I86 |9|IS6 ISIS6' | 6T Oż86 |&S6 |#ZS6 |Q&86 |/.386" |6ZS6' |I886 |9886 |9886 ||986 ' | SH 8886. Oñ86. |V. [rºsº. 97.86. Arg5 6.86. [I386. 3986. [rºsº | II $666' 3666 |6666' 6666' 6666' 6666' 6666' 6666' 6666' 6666' | id- 0000 ‘ I [0000 IOOOO I|0000 IOOOO I 0000 IOOOO I|0000 IOOO0 I|0000 I 00 6 8 P. 9 g º 8 % T 0 .i. 666" OJ, 000" WO?HJH 4 JLO SGIſlTVA. O.J., DNIC[NOCISGI?I?IOO zl - I/v JO SGIſlTVA- IAX (HT3 VII, ZSI J Jo SGInſivA O.I. bNIGINOdSGIHOO zl—IA GIO SGInſivA 188 VALUES OF V1–r” coBRESPONDING TO VALUES OF r TABLE XVI.-Continued VALUES OF r =2 sin (T/6)p FOR COMPUTED VALUES OF p 189 TABLE XVII.-GIVING THE VALUES OF r = 2 sin . p FOR COMPUTED VALUES OF p FROM 0.01 TO 1.00 p T p r p r p r .01 .010 . 26 .271 .51 . 528 .76 .775 .02 .021 . 27 . 282 . 52 . 538 .77 .785 . 03 . 031 . 28 . 292 . 53 . 548 . YS . 794 .04 . 042 .29 .303 .54 . 558 . 79 . 804 .05 .052 .30 . 313 . 55 . 568 .80 ... 813 .06 .063 . 31 .323 . 56 . 578 . 81 . 823 . 07 .073 . 32 . 334 . 57 . 588 .82 . 833 .08 .084 .33 . 344 . 58 . 598 .83 .842 09 094 . 34 354 . 59 608 84 S52 . 10 ... 105 . 35 . 364 . 60 . 618 .85 . 861 . 11 . 115 . 36 , 375 || . 61 . 628 .86 . 870 . 12 . 126 . 37 . 385 .62 . 638 .87 . 880 . 13 . 136 .38 . 395 .63 . 648 .88 .889 . 14 . 146 . 39 .406 . 64 .658 . 89 . 899 . 15 . 157 .40 .416 . 65 . 668 .90 . 908 . 16 . 167 . 41 .426 , 66 . 677 .91 .917 . 17 . 178 .42 . 436 .67 . 687 . 92 . 927 . 18 . 188 .43 .447 . 68 . 697 .93 .936 . 19 . 199 .44 . 457 . 69 . 707 .94 .945 . 20 . 209 .45 . 467 . 70 . 717 .95 .954 . 21 . 219 .46 . 477 .71 . 727 .96 .964 . 22 . 230 .47 . 487 . 72 . 736 .97 . 973 . 23 . 240 . 48 . 497 . 73 . 746 .98 . 982 . 24 . 251 . 49 . 508 . 74 . 756 .99 ..991 . 25 .261 . 50 . 518 . 75 . 765 1.00 1.000 190 FUNCTIONS OF re TABLE XVIII.--FUNCTIONS OF re FOR USE IN COMPUTING THE APPROXIMATE PROBABLE ERROR OF THE TETRACHORIC COEFFICIENT OF CORRELATION FUNC'I'ION 0.6745 Vº - *|| - (**) 90 - 7't Function Tº Function rt Function TÉ Function of rº of rº of r: of r! .00 . 6745 .25 . 6445 . 50 . 5507 .75 . 3756 .01 . 6744 . 26 . 6421 . 51 . 5455 .76 . 3662 .02 . 6743 . 27 . 6396 . 52 , 5401 .77 . 3567 . 03 . 674.1 . 28 . 6360 , 53 . 5346 . 78 , 3470 .04 . 6737 . 29 . 6341 . 54 . 5289 ... 79 . 3369 .05 . 6733 , 30 , 6312 . 55 . 5231 . 80 . 3267 .06 . 6728 . 31 . 6282 . 56 . 5173 . 81 . 3161 . O7 . 6722 . 32 . 6251 . 57 . 5112 .82 . 3053 . 08 . 6715 . 33 . 62.19 . 58 . 5051 .83 .2942 , 00 . 6706 . 34 . 6186 . 59 . 4987 .84 . 2827 ... 10 . 6698 . 35 ... 6153 . 60 , 4922 . 85 . 27.10 . 11 . 6688 . 36 . 6118 , 61 . 4856 .86 . 2589 . 12 .6677 . 37 . 6081 . 62 . 4788 . 87 , 2463 . 13 . 6665 . 38 . 6044 .63 . 4719 . 88 . 2334 . 14 . 6652 . 39 . 6006 . 64 .4649 , 89 . 2200 . 15 .6638 . 40 . 5966 . 65 . 4576 .90 . 2062 . 16 . 66.23 .41 .5925 . 66 . 4502 .91 . 1918 . 17 . 6607 .42 . 5884 , 67 . 4427 . 92 . 1767 . 18 . 6590 .43 , 5840 . 68 . 4349 .93 . 1610 . 19 . 6573 . 44 . 5797 , 69 . 4270 .94 . 1445 . 20 . 6554 .45 . 5751 . 70 . 41.89 .95 . 1269 . 21 . 6534 . 46 . 5705 .71 . 4106 .96 . 1083 . 22 . 6514 .47 . 5658 . 72 .4021 .97 .0880 .23 . 6492 . 48 . 5608 .73 . 3935 , 98 .0656 . 24 . 6469 .49 . 5558 . 74 . 3846 . 99 . 0.395 . 25 . 6445 . 50 . 5507 .75 .3756 1.00 .0000 LoG T FUNCTIONS OF p TABLE XIX.-TABLE OF LOG I. FUNCTIONS OF p (see pp. 55–57) p 0 1 2 3 4 5 6 7 8 9 1.00 | . . . . . . . . 9750 9500 || 9251 || 9003 || 8755 8509 || 8263 || 8017 | 7773 1.01 9.997:529 || 7285 || 7043 | 6801 || 6560 || 6320 || 6080 || 5841 5602 || 5365 1.02 5128 || 4892 || 4656 || 4421 4187 || 3953 || 3721 || 3489 || 3257 || 3026 1.03 2796 || 2567 || 2338 2110 | 1883 | 1656 || 1430 | 1205 || 0981 0757 1.04 0533 0311 || 0089 || 9868 9647 | 9427 9208 || 8989 8772 || 8554 1.05 9,988338 || 8122 || 7907 || 7692 || 7478 || 7265 7052 | 6841 | 6629 || 6419 1.06 6209 || 6000 || 5791 || 5583 || 5376 5169 || 4963 4758 || 4553 || 4349 1.07 4145 || 3943 || 3741 || 3539 || 3338 3138 2939 || 2740 || 2541 || 2344 1.08 2147 | 1951 1755 1560 | 1365 | 1172 || 0978 || 0786 || 0594 || 0403 1.09 0212 || 0022 || 9833 || 9644 || 9456 9269 || 9082 || 8896 || 8710 || 85.25 1.10 9.078341 || 8157 || 7974 : 7791 || 7610 7428 || 7248 || 7068 || 6888 || 6709 1. 11 6531 6354 || 6177 || 6000 || 5825 || 5650 || 5475 || 5301 || 5128 4955 1. 12 4783 4612 || 4441 || 4271 || 4101 || 3932 || 3764 || 3596 || 3429 || 3262 1.13 3096 || 2931 || 2766 || 2602 || 2438 2275 2113 || 1951 || 1790 1629 1. 14 1469 || 1309 1150 || 0992 || 0835 0677 || 0521 || 0365 || 0210 0055 1.15 9,969901 || 9747 9594 || 94.42 | 9290 || 9139 || 8988 8838 || 8688 8539 1.16 8300 | 8243 8096 || 7949 7803 || 7658 || 7513 || 7369 || 7225 || 7082 1.17 6939 || 6797 | 6655 || 6514 || 6374 6234 || 6095 || 5957 || 5818 || 5681 1, 18 5544 || 5408 || 5272 || 5137 || 5002 || 4868 || 4734 || 4601 || 4469 || 4337 1, 19 4205 || 4075 || 3944 || 3815 || 3686 3557 || 3429 || 3302 || 3175 || 3048 1. 20 2922 2797 || 2672 || 2548 || 2425 || 2302 || 2179 || 2057 1936 | 1815 1.21 1695 || 1575 || 1456 || 1337 | 1219 || 1101 || 0984 || 0867 || 0751 || 0636 1 .22 0521 || 0407 || 0293 || 0180 || 0067 || 9955 || 8843 || 97.32 || 9621 9511 1.23 9,959401 || 92.92 || 9,184 || 9076 | 8968 || 8861 || 8755 || 8649 || 8544 | 8439 1.24 8335 | 8231 || 81.28 || 8025 || 7923 || 7821 || 7720 || 7620 || 7520 || 7420 1.25 7321 | 7223 || 7125 || 7027 | 6930 | 6834 || 6738 || 6642 || 6547 || 6453 1.26 6359 || 6267 || 61.73 || 6081 || 5989 || 5898 || 5807 || 5716 || 5627 | 5537 1.27 5449 || 5360 || 5273 || 5185 || 5099 || 5013 || 4927 || 4842 || 4757 || 4673 1.28 4589 || 4506 || 4423 || 4341 || 4259 || 4178 || 4097 || 4017 | 3938 || 3858 1.29 3780 || 3702 || 3624 || 3547 || 3470 || 3394 || 3318 || 3243 || 3168 || 3094 1.30 3020 || 2947 || 2874 2802 2730 || 2659 2588 || 2518 2448 || 2379 1.31 2310 || 2242 || 2174 || 2106 || 2040 || 1973 || 1907 | 1842 || 1777 1712 1. 32 1648 || 1585 | 1522 || 1459 || 1397 || 1336 || 1275 | 1214 || 1154 || 1094 1.33 1035 || 0977 || 0918 || 0861 || 0803 || 0747 || 0690 0634 || 0579 || 0524 1.34 0470 || 0416 || 0362 0309 || 0257 || 0205 || 0153 0102 || 0051 || 0001 1.35 9, 94995.1 99.02 || 9853 || 98.05 97.57 || 9710 9663 || 96.17 | 95.71 || 9525 } .36 94.80 || 9435 | 9391 || 9348 || 9304 || 9262 || 9219 || 91.78 9136 || 9095 1.37 9054 || 9015 | 8975 | 8936 || 8898 || 8859 || 8822 || 8785 || 8748 || 8711 1. 38 8676 || 8640 || 8605 || 85.71 8537 || 85.03 || 84.70 | 8437 | 8405 || 8373 1. 39 8342 . 8311 || 8280 || 8250 | 8221 || 8192 || 8163 || 8135 | 8107 || 8080 1.40 8053 | 8026 || 8000 || 7975 || 7950 || 7925 | 7901 || 7877 7854 || 7831 1.41 7808 || 7786 || 7765 7744 7723 7703 || 7683 || 7664 || 7645 7626 1.42 7608 || 7590 || 7573 || 7556 || 7540 || 7524 || 7509 || 7494 || 7479 || 7.465 1.43 7451 || 7438 || 7425 || 7413 7401 || 7389 7378 || 7368 || 7358 || 7348 1.44 7338 || 7329 | 7321 | 7312 | 7305 7298 || 7291 || 7284 || 7278 || 7273 1.45 7268 || 7263 || 7259 || 7255 | 7251 7248 || 7246 || 7244 || 7242 | 7241 1.46 7240 || 7239 || 7239 || 7240 | 7241 || 7242 | 7243 | 7245 || 7248 || 7251 1.47 7254 || 7258 || 7262 | 7266 | 7271 || 7277 | 7282 | 7289 || 7295 || 7302 1. 48 7310 | 7317 | 7326 | 7334 7343 || 7353 | 7363 | 7373 | 7384 || 7395 1.49 7407 || 7419 || 7431 || 7444 || 7457 || 7471 || 7485 || 7499 || 75.15 || 7529 192 LOG T FUNCTIONS OF p TABLE XIX. —Continued p 0 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9 1.50 | 9.947545 || 7561 || 7577 || 7594 || 7612 || 7629 || 7647 || 7666 7685 || 7704 1.51 7724 || 7744 || 7764 || 7785 || 7806 || 7828 || 7850 || 7873 || 7896 || 7919 1.52 7943 7967 || 7991 i 8016 || 8041 || 8067 || 8093 8120 || 8146 || 8174 1.53 8201 i 8229 || 8258 8287 8316 8346 8376 8406 | 8437 | 8468 1.54 8500 || 8532 || 8564 || 8597 8630 || 8664 || 8698 || 8732 8767 || 8802 1.55 8837 || 8873 || 8910 | 8946 | 8983 9021 | 9059 || 9097 || 9135 | 91.74 1.56 9214 || 9254 || 9294 || 9334 || 93.75 | 94.17 | 9.458 || 9500 || 9543 || 9586 1. 57 9629 || 9672 | 97.16 || 9761 || 9806 || 9851 || 9896 || 9942 | 9989 || 0035 1.58 || 9.950082 || 01:30 || 0177 || 0225 || 0274 || 0323 || 0.372 || 0422 || 0472 0522 1.59 0573 || 0624 || 0676 || 0728 || 0780 || 0833 || 0886 || 0939 || 0993 || 1047 1. 60 1102 || 1157 | 1212 || 1268 || 1324 || 1380 || 1437 || 1494 || 1552 | 1610 1.61 1668 || 1727 1786 | 1845 || 1905 || 1965 2025 2086 || 2147 2209 1.62 2271 || 2333 2396 || 2459 || 2522 2586 || 2650 2715 || 2780 || 2845 1.63 2911 || 2977 || 3043 || 3110 || 3177 || 3244 || 3312 || 3380 || 3449 || 3517 1.64 3587 3656 || 3726 3797 || 3867 || 3938 || 4010 | 4081 || 4154 || 4226 1.65 4299 || 4372 4446 4519 || 4594 || 4668 || 4743 || 4810 || 4894 || 4970 1.66 5047 5124 5201 || 5278 5356 || 5434 5513 || 5592 || 5671 || 5750 1. 67 5830 || 5911 || 5901 || 6072 6154 || 6235 | 6317 | 6400 || 6482 6566 1.68 6649 || 6733 | 6817 | 6901 || 6986 || 7072 || 7157 | 7243 | 7329 7416 1.69 7503 || 7590 7678 || 7766 || 7854 || 7943 8032 || 8122 || 8211 8301 1. 70 8391 | 8482 || 8573 || 8664 || 8756 || 8848 || 8941 || 9034 || 9127 9220 1.71 9314 9409 9502 || 9598 || 9693 9788 || 9884 || 99.80 || 0077 || 0174 1.72 || 9.960271 || 0369 || 0467 || 0565 0664 || 0763 || 0862 || 0961 || 1061 || 1162 1.73 1262 || 1363 || 1464 || 1566 1668 || 1770 | 1873 1976 || 2079 || 2183 1.74 2287 2391 || 2496 || 2601 || 2706 2812 || 2918 3024 || 3131 || 3238 1. 75 3345 || 3453 || 3561 || 3669 || 3778 || 3887 || 3996 || 4105 || 4215 || 4326 1.76 4436 || 4547 || 4659 || 47.70 || 4882 || 4994 || 5107 || 5220 || 5333 || 5447 1.77 5561 5675 5789 || 5904 || 6019 || 6135 | 6251 | 6367 || 6484 || 6600 1.78 6718 || 6835 | 6953 || 7071 7189 || 7308 || 7427 || 7547 || 7666 7787 1.79 7907 || 8028 8149 8270 | 8392 || 8514 || 8636 8759 || 8882 9005 1.80 9129 || 9253 || 9377 | 9501 || 0626 || 9751 || 9877 || 0003 || 0120 || 0255 1.81 || 0.970.383 || 0509 || 0637 || 0765 || 0893 || 1021 | 1150 | 1279 || 1408 || 1538 1.82 1668 || 1798 || 1929 2060 2191 || 2322 || 2454 || 2586 2719 || 2852 1.83 2085 || 3118 3252 3386 3520 || 3655 || 3790 || 3925 | 4061 || 4197 1.84 4333 || 44.70 || 4606 || 4744 || 4881 || 5019 || 5157 || 5295 || 5434 || 5573 1.85 5712 || 5852 5992 || 6132 6273 || 6414 || 6555 | 6697 || 6838 || 6.980 } .86 71.23 || 7266 7408 7552 || 7696 || 7840 || 7984 || 8128 || 8273 || 84.19 1,87 8564 || 8710 || 8856 9002 || 9149 || 9296 || 9443 || 9591 || 9739 || 9887 1.88 || 9.980036 || 0184 || 0333 || 0483 || 0633 || 0783 || 0933 || 1084 || 1234 || 1386 1.89 1537 1689 1841 1994 || 2147 2299 || 2453 || 2607 || 2761 || 2915 1.90 3069 || 3224 3379 || 3535 | 3690 || 3846 || 4003 || 4159 || 4316 || 4474 1.91 4631 || 4789 || 4947 || 5105 || 5264 || 5423 || 5582 5742 5902 || 6062 1.92 6223 6383 6544 || 6706 | 6867 7029 || 7192 || 7354 || 7517 | 7680 1.93 7844 8007 || 8171 || 8336 | 8500 | 8665 | 8830 | 8996 || 9161 || 9327 1.94 9494 || 9660 9827 | 9995 || 0162 || 0330 0.498 || 0666 || 0835 | 1004 1.95 || 9.99.1173 || 1343 | 1512 || 1683 1853 2024 2195 || 2366 || 2537 || 2709 1.96 2881 || 3054 || 3227 | 3399 || 3573 || 3746 3920 4094 || 4269 || 4443 1.97 4618 || 4794 || 4969 || 5145 || 5321 5498 || 5674 5851 || 6029 6206 1.98 6384 || 6562 || 6740 || 6919 || 7098 || 7277 || 7457 || 7637 || 7817 | 7997 1.99 8178 8359 8540 | 8722 | 8903 || 9085 9268 || 9450 || 9633 9816 VALUE OF Q AND ITS STANDARD ERROR 193 TABLE XX.-VALUE OF Q FOR. VARIOUS-SIZED FAMILIES AND VARIOUS CROSSOVER VALUES Crossover value Number of children 0.05 || 0.10 0.15 0.20 0.25 | 0.30 || 0.35 | 0.40 0.45 || 0.50 8 = 2–3 0.0475|0.0900|0. 1275|0. 1600|0. 1875|0.2100|0.2275|0.2400|0.2475|0.2500 8 = 4–5 0.0498||0.0981|0. 1237|0. 1856|0.2226|0.2541|0.27920. 2976|0.3088||0.3125 8 = 6–7 0.0499|0.09960. 1479|0.1038||0.2358||0.2726|0.3028||0.3252|0.3391|0.3437 8 = 8–9 0,0500|0.0999|0. 14920. 1971|0.242010, 2823|0.31620.3418||0.35780.3633 8 = 10–11 || 0.0500|0.10000. 1496|0. 1985|0.2453|0. 2881|0.3247|0.3530|0.3708||0.3769 8 = 12–13 0.0500|0. 1000|0. 14980. 1992|0.2471|0.2917|0.3305!0.3610|0.3805|0.3872 8 = 14–15 |0.05000. 10000. 1499|0. 1996|0.2482|0, 2940|0.3347|0.3671|0.3880|0.3951 8 = 16-17 |0.0500|0. 1000|0. 1499|0. 1998 **** 0.3940|0.4016 TABLE XXI.-STANDARD ERROR OF Q Crossover value Number of children 0.05 || 0, 10 || 0.15 || 0.20 || 0.25 || 0.30 || 0.35 | 0.40 0.45 || 0.50 8 = 2 0.1466|0. 1921|0.2179|0.2333|0.24200. 2468||0.2490|0.2498||0.2500|0.2500 8 = 3 0.116510. 1480|0.1639|0. 1665|0.1654|0. 1609|0, 1552|0.1497|0. 1456|0, 1443 8 = 4 0, 1211|0.1448||0. 1658||0, 1765|0. 1802|0. 1788||0.1750|0. 1731|0. 1668||0. 1653 8 = 5 0.0964|0. 1284|0. 14520, 1545|0. 15480. 1467|0. 1388||0. 1305 (0.1239;0, 1218 8 = 6 0.0871|0.1210|0.140810. 1514|0.1548||0. 1527|0. 14670. 1411|0.13370. 1313 8 = 7 0.0803|0. 1118||0.12940. 1417|0.13970. 1353|0. 1301.0.1172|0. 10940. 1069 8 = 8 0.0769|0. 1068||0.1241|0. 1383|0. 1392|0. 13920. 1313|0.122510. 11510. 1118 8 = 9 0.0705|0.099510. 1156|0.1262|0. 1295|0.1271 º 0.0989,0.0955 8 = 10 0.0689|0.0937|0.11190.1225 * 0.12060.1112|0.1092|0.0993 194 PROPORTION OF RECESSIVE OFFSPRING TABLE XXII.—GIVING FOR ANY UNIT CHARACTER THE PROPOR- TION OF RECESSIVE OFFSPRING TO BE EXPECTED: (E) IN RANDOM MATINGS OF DOMINANTS WITH DOMINANTS, AND (S) IN RANDOM MATINGS OF DOMINANTS WITH RECESSIVES WHEN B GIVES THE |PROPORTION OF RECESSIVE IN DIVIDUALS IN THE GENERAL POP- ULATION. ABBREVIATED FROM SNY DER, 1934, pp. 9–17 B R S B R S B R S .001 | .0009 | . 0307 . 062 . 0398 | . 1994 . 132 . 0710 | .2665 .002 | . 001.8 | . 0428 .064 . 0408 .2019 . 134 | . 0718 .2680 .003 | .0027 . . 0519 .066 . 0418 .2044 . 136 | . 0726 .2694 .004 | . 0035 | . 0595 .068 . 0428 | . 2068 . 138 . 0734 . . 2709 .005 | .0044 | . 0660 . 0.70 | . 0438 | . 2092 . 140 | . 0742 | . 2723 .006 | .0052 . 0719 .072 | .0448 .2116 . 142 | . 0749 | . 2737 .007 .0060 | . 0772 .074 | . 0457 .2139 . 144 | . 0756 | .2751 .008 .006.7 | . 0821 .076 | . 0467 | . 2161 . 146 | . 0.765 . . 2765 .009 | . 0075 .0867 .078 | . 0477 | . 2183 . 148 | . 0.772 | .2778 .010 . 0083 | . 0909 - 080 | . 0486 | .2205 . 150 | . 0780 . .2792 .012 | .0098 || .0987 . 082 | . 0496 | .2226 . 155 .0798 } .2825 .014 | . 0112 | . 1058 . 0.84 .0505 | .2247 . 160 | . 0816 | . 2857 .016 | . 0126 | . 1123 . 086 | . 0514 | . 2268 . 165 | . 0835 | . 2889 .018 | . 0140 | . 1183 .088 | . 0524 | .2288 . 170 | . 0852 | .2919 .020 | . 0154 | . 1239 . 090 | . 0533 | . 2308 . 175 | . 0.870 | . 2950 .022 | .0167 | . 1292 ,092 | .0542 .2327 . 180 .0887 .2979 .024 | . 0180 | . 1341 . 094 | . 0551 | . 2347 . 185 | . 0905 | .3008 .026 | . 0193 | . 1389 .096 | . 0560 | .2366 . 190 . 09.22 .3036 .028 .0205 | . 1433 .098 || , 0.568 .2384 . 195 | . 0938 . .3063 . 030 | .0218 | . 1476 ... 100 | . 0.577 | . 2403 . 200 | . 0955 | .3090 . 032 | . 0230 . 1517 ... 102 | .0586 | . 2421 . 205 | . 0972 | .3117 .034 | . 0242 | . 1557 ... 104 | . 0595 | . 2439 . 210 | . 0988 | .3143 . 036 . 0.254 . . 1595 ... 106 | . 0603 | . 2456 . 215 | . 1004 .3168 .038 | . 0.266 | . 1631 , 108 | . 0612 | . 2474 . 220 | . 1020 | . 31.93 . 040 | . 0.278 | . 1667 . 110 | . 0621 | . 2491 . 225 | . 1035 | .3217 . 042 | . 0289 . 1701 , 112 | . 0629 | . 2508 . 230 | . 1050 | .324] . 044 | . 0301 | . 1734 || . 114 | . 0637 .2524 . 235 | . 1066 | . 3265 .046 . 0312 | . 1766 . 116 | . 0646 | .2541 . 240 | . 1081 .3288 .048 | . 0323 | . 1797 . 118 .0654 | . 2557 . 245 | . 1096 | . 33.11 . 050 | . 0334 | . 1827 . 120 | . 0662 | .2573 . 250 | . 1111 | .3333 .052 . 0345 . 1857 . 122 | . 0670 | .2589 . 255 | . 1126 | .3355 .054 | . 0356 . 1886 . 124 . 0678 | . 2604 . 260 | . 1140 | .3377 .056 | . 0366 | . 1914 . 126 . 0686 | . 2620 , 265 | . 1155 | .3398 .058 . .0377 | . 1941 . 128 | . 0694 | . 2635 . 270 | . 1169 | .3419 . 060 . 0387 | . 1968 || . 130 . 0702 .2650 . 275 | . 1183 | .3440 PROPORTION OF RECESSIVE OFFSPRING 195 TABLE XXII.-Continued B R S B R S B R S . 280 | . 1197 | .3460 .455 | . 1623 .4028 . 770 | . 2185 | .4674 . 285 | . 1212 | .3481 . 460 | . 1633 . .4041 . 780 . .2200 | .4690 , 290 | . 1225 | . 3500 .465 | . 1644 .4054 ... 790 .2215 | . 4706 . 295 | . 1239 || .3520 . 470 | . 1654 .4067 . 800 | .2229 | .4721 .300 | . 1253 .3539 . 475 | . 1665 .4080 . 810 | .2244 | . 4.737 , 305 | . 1266 | .3558 . 480 | . 1675 | . 4093 . 820 | .2258 .4752 . 310 | . 1280 .3577 .485 . 1685 | .4105 . 830 ; .2273 | . 4767 .315 | . 1292 | .3595 . 490 | . 1695 | .4118 . 840 | .2287 .4782 , 320 | . 1305 | . 3613 . 500 | . 1716 | . 4142 . 850 | . 2301 | . 4797 . 325 | . 1318 . .3631 . 510 | . 1736 | . 4166 . 860 | . 2316 | . 4812 .330 | . 1332 .3649 . 520 | . 1756 | . 4190 . 870 | .2329 . 4826 .335 | . 1344 | .3666 . 530 | . 1775 | . 4213 . 880 . . 2343 | .4840 . 340 | . 1356 || , 3683 . 540 | . 1794 | .4236 . 890 | . 2356 .4854 , 345 | . 1369 | . 3700 . 550 | . 1813 | . 4258 . 900 | . 23.70 | . 4868 . 350 | . 1382 . 3717 . 560 | . 1832 | . 4280 .910 .2383 | . 4882 . 355 | . 1394 | . 3734 . 570 | . 1851 | . 4302 .920 | . 2397 | . 4896 . 360 | . 1406 | . 3750 . 580 | . 1869 .4323 .930 | . 2410 | . 4909 . 365 | . 1418 . 3766 . 590 | . 1887 | . 4344 . 940 | . 2424 | . 4923 . 370 | . 1430 | . 37.82 . 600 | . 1905 | . 4365 . 950 | . 2436 | . 4936 . 375 | . 1443 | . 3798 . 610 | . 1923 | . 4385 .960 : . 2449 | . 4949 . 380 | . 1455 | .3814 . 620 | . 1940 . . 4405 . 970 .2462 | . 4962 . 385 | . 1466 | . 3829 . 630 | . 1958 | . 4.425 .980 | . 24.75 | . 4975 . 390 | . 1478 | .3844 . 640 | . 1975 .4444 .990 | .2487 | . 4987 . 395 | . 1489 | .3859 . 650 | . 1993 | .4464 .999 || .2499 . 49.99 .400 . 1501 | .3874 . 660 | . 2010 | . 4.483 .405 | . 1512 | .3889 . 670 | .2026 | .4501 .410 | . 1524 | .3904 . 680 | .2042 | .4519 . 415 | . 1535 | .3918 , 690 .2059 | . 4538 .420 | . 1546 .3932 .700 | .2075 | .4555 .425 | . 1557 | .3946 . 710 | .2091 | . 4573 .430 ( . 1568 | .3960 .720 | . 2107 | . 4590 .435 | . 1579 | .3974 .730 | .2123 . 4607 .440 | . 1590 | .3988 .740 | .2138 | .4624 .445 | . 1601 | .4002 . 750 | . 2154 .4641 .450 | . 1612 | .4015 . 760 .2170 .4658 196 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-SQUARES, CUBES, SQUARE ROOTS No. Squares. | Cubes. sº Cube Roots. | Reciprocals. 1 1 1 1.0000000 1.0000000 1,000000000 2 4 8 1.4142136 1.2599.210 .500000000 3 9 27 1 7320508 1.4422496 .333333333 4 16 64 2.0000000 1.587'4011 .250000000 5 25 125 2.2360680 1.70997.59 .200000000 6 36 216 2 4494897 1.8171206 , 166666667 7 49 343 2.6457513 1.9129312 . 142857.143 8 64 512 2,8284271 2.0000000 . 125000000 9 81 729 3.0000000 2.0800837 ..111111111 10 100 1000 3, 1622777 2.1544347 ... 100 11 121 1331 3.3166248 2.223980ſ .090909091 12 144 1728 3.464.1016 2.2894286 .08333333; 13 169 2197 3,6055513 2,3513347 .076923077 14 196 2744 3.7416574 2.4101422 .07142857.1 15 225 3375 3.8729883 2.46621.21 .06666666? j6 256 4096 4.0000000 2.5198421 .062500000 17 289 4913 4. 1231056 2.5712816 .058823.529 18 324 5832 4,2426407 2. 6207414 .055555556 19 361 6859 4.3588989 2.6684016 05263.1579 20 400 8000 4,4721360 2.7144177 .050000000 21 441 9261 4.5825757 2.7589.243 ,047619048 22 484 10648 4.6904158 2.8020393 .045454545 23 529 1216? 4.7958315 2.8438670 .043478.261 24 576 13824 4.8989795 2.8844991 .041666667 25 625 15625 5.0000000 2.9240.177 .040000000 26 676 17576 5,0990.195 2.96.24960 .03846.1538 27 729 19683 5, 1961524 8.0000000 .037037037 28 784 21952 5.2915026 8,0365889 ,035714286 29 841 24389 5.385.1648 3.07.23168 .034482759 30 900 27000 5,4772256 8.1072325 .033833333 31 961 29791 5.5677644 3.1413806 .032258065 32 1024 32768 5.6568542 3.1748021 .031250000 33 1089 3.5937 5,7445626 8.2075848 .030303030 34 1156 39304 5.8309519 3.2306.118 , 0.294.11765 35 1225 42875 5.9160798 8.2710663 .028571429 36 1296 46656 6.0000000 3.3019272 ,027777778 37 1369 500.53 6,0827625 3.33222.18 .027027027 38 1444 5487: 6. 1644140 3.36197'54 .0263.15789 39 1521 593.19 6.2449980 3,3912114 .02564,1026 40 1600 64000 6.3245553 3.4190519 .025000000 41 1681 68.921 6.4031242 8.4482172 .024390244 42 1764 74088 6,4807407 3.47.60266 ,023809524 43 1849 79507 6.5574385 8.5033981 .023255814 44 1936 85184 6.6332496 8, 5303483 .02.27 27273 45 2025 91125 6.7082039 3,5568983 .022222222 46 2116 97336 6.7823300 3.58.30479 .021739.130 47 2209 103823 6.8556546 8.6088261 .021276596 48 2304 110592 6 0282032 3.634241], ,020833333 49 2401 117649 7,0000000 3,6593057 .0204081.63 50 2500 125000 7,0710678 3.6840814 .020000000 51 2601 132651 7.1414284 3,7084.298 ,01960.7843 52 2704 140608 7.2111026 3.7'3251.11 .019230769 53 2809 148877 7,280.1099 3.7502858 .01886.925 54 2916 157464 7,3484692 3,7797631 .018518519 55 3025 166375 7,4161985 3. 8020525 .018181818 56 3136 175616 7.4833148 3.8258624 .017857'143 57 3249 185193 7,5498.344 3.8485011 ,017543860 58 3364 195112 7.6157731 3,8708766 .017241879 59 3481 205379 7.6811457 3,8929965 .016949.153 60 3600 216000 7.745966? 3.91.48676 .016666667 61 37:21 226.981 7, 8102497 3 9364972 .016393443 62 3844 238328 7.8740079 3.9578915 .016129032 SQUARES, CUBES, ROOTS, AND RECIPROCALS 197 TABLE XXIII.-Continued No. Squares. Cubes. §. Cube Roots. Reciprocals. 63 3060 250047 7. 9372539 3.9790571 .01587'3016 64 4096 262,144 8.0000000 4.0000000 .015625000 65 4225 27.4625 8. 062257'7 4.0207.256 .01538.4615 66 4356 287'496 8, 1240384 4.0.412401 .015.151515 67 4489 300763 8, 1853528 4.0615.480 ,014925373 68 4624 3.14432 8, 2462113 4.0816551 .014705882 69 4761 328509 8, 3066239 4.1015661 .014492754 70 4900 343000 8. 3666003 4.1212853 , 0.142857.14 71 5041 357911 8.4261.498 4. 1408178 014084507 72 5184 37'3248 8, 48.52814 4. 160167 .013888889 73 5329 38901? 8, 544003? 4, 1793390 .013698630 74 5||76 40.5224 8.6023253 4, 1983364 , 0.13513514 75 5625 21875 8. (5602540 4.217.1633 . 013333333 76 57.7 438.976 8, 7.177979 4.2358236 .013157895 77 5929 456533 8, 77.49644 4.2543210 .012987.013 78 6084 474.552 8,8317609 4.2726586 .012820513 79 6241 493039 8.8881944 4.2908.404 .01.2658228 80 6400 512000 8.944.2719 4.3088695 .012500000 81 6561 531441 9,0000000 4.3267.487 . 012345679 82 6724 55:1368 9.0553851 4.3444815 .012.195122 83 6889 57.1787 9. 1104336 4,3620707 .01204.8193 84 7056 59.2704 9. 1651514 4.37'95.191 .01.1904762 85 7225 614125 9. 2195445 4.3968296 .011764706 86 7396 636056 9.2736,185 4.4140049 .01.1627.907 7 7569 658503 9.327°37'91 4.4310476 .011494.253 88 7744 681472 9.3808315 4.447.9602 .011363636 89 7921 704969 9. 4339811 4.4647451 .01.1235.955 90 8100 729000 9.4868330 4.4814047 .011111111 91 8281 753571 9.5393920 4. 4979414 .010989.011 92 8:464 778688 9, 5916630 4.5143574 .010869565 93 8649 804357 9.6436508 4.5306549 .010752688 94 8836 830584 9,695.3597 4, 5468359 .010638298 95 9025 857.375 9.7.467943 4, 5629026 .010526316 96 9216 88.4736 9.797.9590 4.5788570 . 0104.16667 97 9.409 912673 9,848857 4,5947'009 ,010309278 98 9604 941.192 9.8994949 4 6104363 .010204082 99 9801 97.0299 9.94987:44 4.6260650 .010101010 100 10000 1000000 10.0000000 4.64.15888 . 010000000 101 102.01 1030301 10.0498756 4,657'0095 .009900990 102 10404 1061208 10.09950.49 4.6723287 .009803922 103 10609 1092727 10, 1488916 4.6875482 0097087 104 10816 1124864 10, 1980390 4. 7026694 .009615385 105 11025 1157625 10.2469508 4.7.176940 .009523810 106 11236 1191016 10.2956301 4.7326235 .009.433962 107 11449 122:50.43 10.3440804 4.7474594 ,009345794 108 11664 1259712 10.3923048 4.7622032 ,009.259259 109 11881 1295029 10.4403065 4.7768562 .009.174312 110 12100 1331000 10.4880885 4.7914,199 .009090909 111 12321 1367631 10, 5356538 4.8058955 .009009009 112 12544 1404928 10.5830052 4.8202845 .00892857.1 113 12769 1442897 10.6301,458 4.8345881 .0088.49558 114 12996 1481544 10.6770.783 4.8488076 .00877.1930 115 13225 1520875 10. 7238053 4.8629442 .008695652 116 j3456 1560896 10, 7703296 4.8769990 .008620690 117 13689 1601613 10.8166538 4,890.97.32 .008547009 118 13924 1643032 10,8627805 4.9048681 .008474576 119 1416.1 1685159 10.9087.121 4.9186847 .008403361 120 14400 728000 10.9544512 4,9324242 008333333 121 14641 1771561 11.00)0000 4.9460874 .008264463 122 14884 1815848 11.0453610 4.9596757 .0081967:21 123 15129 1860867 11.0905365 4.97.31898 .008130081 124 1537 1906624 11. 1355.287 4.9866310 .008064516 198 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued No. Squares. | Cubes. | §. Cube Roots. | Reciprocals. 125 15625 1953125 11. 1803399 5.0000000 .008000000 126 15876 2000376 11.22497'22 5.0132979 .007'936508 127 | 16129 2048383 || 11,2694277 5.0265257 | .007874016 128 16384 || 2007 152 11.3137085 || 5.0396842 | .0078.12500 129 | 16641 2146689 || 11.3578167 || 5.0527743 | .007751938 130 16900 2197000 11.4017543 5.0657970 .007692308 131 17161 2248091 11.4455231 5.0787531 .007633588 132 17424 22999.68 11. 489.1253 5.0916434 .007575.58 133 17689 2352637 11. 5325626 5. 1044687 .007518797 134 17956 6104 11.5758369 5, 1172299 .00; 462 135 18225 2460375 11.6.189500 5.1299278 .007407407 136 18496 2515456 11.6619038 5.1425632 007'352941 137 18769 257 (353 11.7046999 5, 1551867 007299270 138 19044 2628072 11.7°47'3401 5. 1676493 0072,46377 139 19321 2685619 11.7898.261 5.1801015 007194245 140 19600 27.44000 11.832.1596 5, 1924.941 007 142857 141 19881 2803221 11.8743421 5.2048279 00709:2199 142 20164 2863.288 11.9163753 5.2171034 007042254 143 20449 2924207 11,9582607 5.22932.15 .006993007 144 20736 298.5984 12.0000000 5.241.4828 ,0069 145 21025 3048625 12. 0415946 5.2535879 006896552 146 2.1316 3112136 12.0830460 5.2656374 006849315 147 21609 3176523 12. 1243557 5.277.6321 006802721 148 21904 3241792 12, 1655.251 5.2895725 .006'56757 149 22201 3307949 12.2065556 5.301.4592 .0067.11409 150 22500 33.75000 12.2474487 5.3132928 .006666667 151 22801 3442951 12.2882057 5,3250740 ,006622517 152 23104 3511808 12.3288280 5,3368033 006578947 153 09 3581577 12.3693.169 5.3484812 006535.948 154 23716 3652264 12.4096736 5.360.1084 00649.3506 155 24025 3723875 12.4498996 5.37.16854 006451613 156 24336 3796416 12.4899.960 5.3832126 006410256 157 24649 3869893 12. 5299641 5 3946907 .006869427 158 24964 3944312 12.5698051 5,4061202 .006829114 159 25281 4019679 12.6095202 5.4.1750.15 .006289308 160 25600 4096000 12.6491106 5.42.88352 .006250000 161 25921 4173281 i2.68857" 5.4401218 .006211180 162 26244 425.1528 12.727'9.221 5.45.13618 .006172840 163 26.569 4330747 12.767.1453 5.4625.556 ,006134969 164 26896 4410944 12.8062485 5.4737037 , 000097561 165 27.225 4492.125 12,8452326 5. 4848066 .006060606 166 27.556 45.74296 12.8840087 5. 4958647 .006024096 167 7889 4657463 12, 9228480 5.5008784 .005.988024 168 28224 4741632 12, 9614814 5.51784.84 .00595288.1 169 28561 4826809 13.0000000 5.5287.748 .005917160 170 28900 4913000 13.03840.48 5.5396.583 .005882353 171 29241 5000211 13.0766968 5,5504991 .00584'953 172 29584 50884.48 13. 1148770 5,5612978 .0058.13953 173 29929 5177.717 13. 1529464 5.572.0546 0.05780347 174 30276 5268024 13.1909060 5,5827702 005747 126 175 30625 53593.75 13.2287566 5.5934447 .005714286 176 30976 545.1776 13.2664992 5,604078? .005681818 177 31329 5545233 13.3041347 5.6146724 .0056497.18 178 3.1684 5639752 13.341664} 5,6252263 .005617978 179 32041 5735339 13.3790882 5. 6357408 .0055865.92 180 32400 5832000 13.4164079 5.6462.162 00:5555556 181 32761 5929741 13.4536240 5.6566528 005524862 182 33124 6028568 13.490.737 5.6670511 005494505 183 33489 6128487 13.5277493 5.6774114 005464481 184 33856 6229.504 13.5646600 5.687,7340 00543478. 185 84225 633.1625 13.60.14705 5. 6980.192 005405405 186 34596 6434856 13,638.1817 5.7082675 005376344 SQUARES, CUBES, ROOTS, AND RECIPROCALS 199 TABLE XXIII.-Continued No. | Squares. Cubes, §. Cube Roots. | Reciprocals. 187 34969 6539203 13.6747943 5.7.1847.91 .0053475.94 188 35344 6644672 13.7113092 5.7286543 .005319149 189 35721 675.1269 13.747.7271 5.7387936 .005291005 190 $6100 6859000 13.7840.488 5.7488971 .005263158 191 86481 6967871 M3.8202750 5.7589652 0.05235603 192 36864 7077888 13.8564065 5.7689982 0052 193 37249 71890.57 13.8924440 5,778.9966 005181347 194 3.7636 7301384 13.9283883 5.7889604 0051546.3% 195 38025 74.14875 13.964.2400 5.7988900 005128205 196 38416 7529536 14.0000000 5.8087857 005102041 197 38809 7645373 14.0356688 5.8.1864.79 .005076.142 198 39204 7762392 14.07 12473 5.8284767 .005050505 199 39601 7880599 14.1067360 5.8382725 .005025,126 200 40000 0000 14.1421356 5.8480355 .005000000 201 40401 8120601 14.1774469 5.8577660 .004975124 202 40 08 14.2126.704 5,8674643 .004950495 203 41209 8365427 14, 24.8068 5.8771307 .004926.108 204 41616 8489664 14,2828569 5.8867653 .004901961 205 42025 8615125 14.3178211 5.8968685 .004878049 206 42436 8741816 14.3527001 5,905.9406 .004854369 207 42849 8869743 14.3874946 5.9154817 .004830918 208 43264 89.98912 14.4222051 5.924992.1 .004807692 209 43681 91293.29 14.45683.23 5.9344721 004:784689 210 44100 926.1000 14.4913767 5,9439220 .004761905 211 44521 9393931 14.5258.390 5.95334.18 .004.739336 212 44944 9528.128 14.5602198 5.9627320 .004716981 213 45369 9663597 14.5945,195 5,9720926 .004694836 214 45.796 98.00344 14,6287 5.9814240 .004672897 215 46225 9938875 14.6628783 5.990.7264 .004651.163 216 46656 100776.96 14.6969385 6.0000000 004629630 217 47089 10218313 14.7309|199 6,0092450 004608295 218 47524 10360232 14.7648231 6.01846.17 .004587156 219 47961 10503459 14.7986486 6.0276502 .004566210 220 48400 10648000 14.8823970 6.036810? .004545455 221 48841 107.93861 14.8660687 6.0459435 .004524887 222 49.284 1094.1048 14.8996644 6.0550489 .004504505 223 497.29 11089567 14,9331845 6.0641270 .004484305 2.24 50176 11239424 14.9666.295 6.0731779 .004.464286 225 50625 11390625 15.0000000 6.08.22020 .004444444 226 51076 11543176 15.0332964 6.091.1994 .004424779 227 51529 11697083 15.0665.192 6.1001702 .004405286 228 51984 11852352 15.0996689 6.109.1147 .004385965 229 52441 12008989 15. 1327460 6.1180332 .004366812 230 52900 12167000 15.1657509 6.126925? .004347826 231 53361 12326391 15. 1986842 6.1357924 .004329004 232 53824 12487168 15.2315462 6.1446337 .0043.10345 233 54.289 12649337 15.2643375 6, 1534495 .004291845 234 54756 12812904 15.297.0585 6.1622401 .004273504 235 55225 1297,7875 15.3297097 6.17100.58 .004255319 236 55696 18144256 15.3622915 6.1797466 .004237288 237 56169 1331.2053 15.3948043 6.1884628 .004219409 238 56644 1348.1272 15.4272486 6, 1971544 .004201681 239 57121 1365.1919 15.4596248 6.2058218 .004184100 240 57.600 3824000 15.4919834 6.21446.50 ,00416666? 241 58081 1399.7521 15. 5241747 6.2230843 ,004,14937 242 58564 14172488 15.5563492 6.23.16797 .004132231 243 59049 14348907 15.5884573 6.24025.15 .004.115226 244 59536 14526784 15. 6204994 6.2487'998 .004098361 245 60025 14706125 15,6524758 6.2573248 ,004081633 246 60516 14886936 15.6843871 6.2658266 .004065041 247 61009 15069223 15.7162336 6.2743054 .004048583 248 61504 15252992 15.7480157 6.2827613 .00403.2258 200 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued No Squares. Cubes. §º Cube Roots. Reciprocals. 249 62001 15438249 15.7797.338 6,291.1946 .004016064 250 62500 15625000 15.81.13883 6,2996053 .00.4000000 251 63001 15813251 15.84:297.95 6.307'9935 .003984064 252 63504 16003008 15.87°45079 6.3.163596 .003968254 253 64009 16194277 15.9059737 6.3247035 .00395.2569 254 64516 16387.064 15.937.3775 6,3330.256 .003937'008 255 65025 16581375 15.9687.194 6.3.4.13257 .00392.1569 256 65536 16777216 16.0000000 6.3496042 .003906250 257 66049 1697.4593 16.0312195 6.357'8611 .003891051 258 66564 17173512 16.0623784 6.3660968 .003875969 259 67081 1737.3979 16.0934760 6.3743111 .003861004 260 67600 jºš76000 16.1245155 6.3825043 .003846,154 261 68121 17779581 16. 1554944 6,3906?'65 .00383.1418 262 68544 17.9847'23 16. 1864141 6.3988279 003816794 263 69.169 181914:47 16.217.2747 6.4069585 .003802281 264 69696 18399744 16.2480768 6, 4150687 .003787879 265 702:25 186096.25 16.2788:200 6.4231583 .003773585 266 707'56 18821096 16.3095064 6, 4312.276 .003759398 267 71289 19034163 16.3401.346 6, 43927.67 .003745318 268 718-24 19248,832 16.370.7055 6, 4473057 .003731343 269 72361 1946.5109 16,4012195 6.4553148 .003717472 270 72000 19688000 16.4316767 6.4633041 .003703704 271 73441 1990:2511 16.4620??' 6, 4712736 .003690037 272 73984 20123648 16.49242.25 6. 47'9.2236 .0036764.71 27; 7'452.) 20346417 16.5227116 6.4871541 .00366300.4 27. 7507’t; 20570824 16.5529.454 6. 4950653 .003649635 275 75625 207'96875 16.5831:240 6,5029.572 .003636364 276 76176 21024576 16. 6132477 6.5108300 .003623.188 277 767'29 21253.933 16.6433170 6.5186839 .003610108 278 7.7284 21484)52 16.6733320 6.5265,189 .003597,122 279 77.841 21717639 16,7032931 6.5343351 .003584229 280 78.400 21952000 16.7332005 6,5421326 .003571429 281 78961 22:188041 16.7630546 6,54991.16 .003558719 282 '9524 22:42.5768 16.7'9.28556 6, 55767:22 .003546099 283 80089 22665187 16. 8226038 6, 5654144 .003533569 284 80656 22906304 16.852.2995 6, 57.31885 .003521127 285 81225 231.491.25 16,8819430 6.5808443 .003508772 286 81796 233.93656 16.9115345 6, 5885323 .0034.96503 287 2369 23(339003 16. 94107.43 6. 5962023 .003484321 288 82.944 2.3887872 16.97'05627 6.6038545 .003472222 289 83521 24137'569 17.0000000 6.6114890 .0034.60208 290 84.100 24380000 17.0293864 6.6191060 .003448276 291 84681 2464:2171 17.05872:21 6, (3267'054 .00343642; 292 85264 24897088 17,08800'5 6.634287 .003424658 293 85849 25153757 17.1172428 6,6418522 .003412969 294 864.36 2541218.4 17, 1464282 6.6493998 .003401.361 295 87'025 2567.2375 17. 1755640 6,6569302 .003389831 296 87.616 2593.4336 17.2046505 6. 6644437 .0033783.78 297 88209 261980'3 17.233687.9 6.6719403 .003367003 298 88.804 26463592 17.2626765 6. 67'94200 .003355705 299 89.401 26730899 17.2916.165 6.6868831 .003344482 300 90000 27000000 17.3205081 6.6043295 .003333333 301 90601 27:270901 17.3493516 6.7017593 .003322259 302 91204 27543608 17.3781472 6.7'091729 .003311258 , 303 91809 278181.27 17.4068952 6.7165700 .003300330 304 92416 28094464 17.4355958 6.7239508 .003289474 305 930.25 2837.2625 17.4642492 6.7313155 .003278689 306 93636 28652616 17,4928557 6.7386641 ,003267'974 307 94249 28934.443 17. 5214,155 6.7459967 .003257'329 308 94864 292.18112 17.5499288 6.7538134 .003246753 309 95.481 29503629 17. 5783958 6.7606143 .003236246 310 96.100 29791000 17.6068169 6.7678995 || .003225806 SQUARES, CUBES, ROOTS, AND RECIPROCALS 201 TABLE XXIII.-Continued No. Squares, CubeS. §º Cube Roots. Reciprocals. 311 96721 30080231 17. 6351921 6, 7751690 .0032.15434 812 97344 30371328 17,6635217 6.7824229 .003205128 813 97969 30664297 17.6918060 6.7896613 .003194888 314 98596 30959144 17.7 200451 6.79688.44 .003184713 315 99225 31255875 17.7482393 6.8040921 .00317.4603 316 99856 31554496 17.7763888 6.81.12847 .003164557 817 100.489 31855013 17.8044938 6.8184620 .003154574 3.18 101.124 32157.432 17.83255.45 6.8256242 .003144654 319 101761 32461759 17.8605711 6.8327714 003.134796 320 102400 32768000 17.8885438 6.839.9037 ,003125000 321 10304.1 33076161 17.91647.29 6.8470213 .003.115265 322 103684 33.386248 7.94.43584 6.854.1240 .003105590 323 10432 33698.267 17.9722008 6,8612120 .003095975 824 104976 34012224 18.0000000 6.8682855 003086420 325 105625 34328.125 18.0277564 6.8753443 003076923 326 106.276 34645976 18.0554701 6.8823888 003067485 327 106929 34965.783 18.0831413 6.8894.188 003058.104 328 107584 35287'552 18. 11077 6,8964.345 .003048780 329 108241 35611289 18, 1383571 6.9034359 .003039514 330 108900 35937000 18.1659021 6.9104232 .003030303 331 109561 36264691 18. 1934054 6.9173964 .003021148 332 110224 3659.4368 18.2208672 6.9.243556 .0030.12048 333 110889 36926037 18.2482876 6.9313008 .003003003 334 111556 37259.704 18.2756669 6.938.2321 .002994012 335 112225 37595375 18.3030052 6, 9451496 .002985075 336 112896 37933056 18.3303028 6.9520533 .002976190 337 113569 382?2753 18.3575598 6.9589.434 .002967859 338 142. 38614472 18.3847763 6.9658,198 .002958580 339 114921 38958.219 18.4119526 6.9726826 .002949853 340 115600 39304000 18.4390889 6,9795321 .00294.1176 341 116281 3965.1821 18,466.1853 6,9863681 .002932551 842 116964 40001688 18.4932420 6.9931906 .002923977 843 117649 40353607 18, 5202592 7.0000000 .0029.15452 344 118336 40707584 18, 547.2370 7.006?'962 .002906977 345 119025 41063625 18.574f7'56 7.0135791 .002898551 346 119716 41421786 18.6010752 7.0203490 002890173 347 120409 41781923 18.6279360 7.027.1058 .00288.1844 348 121104 42144192 18, 6547581 7.0338497 002873563 349 121801 42508549 18.6815417 7,0405806 .002865.330 350 122500 42875000 18.7082869 7.047.2987 .002857'143 351 123201 43243551 18.7349940 7.054004:1 .0028.49003 352 123904 43614208 18, 7.616630 7.0606967 .00284.0909 353 124609 43986977 18,7882942 7.06737.67 .002832861 354 125316 4436.1864 18.8148877 7.0740440 , 00282.4859 855 126025 44738875 18.8414437 Y.0806988 .0028.16901 356 126736 451.18016 18.8679623 7.08784.11 .002808989 357 127449 45499293 18.894.4436 7.09397.09 , 002801120 358 128164 45882712 18,9208879 7.100.5885 .002.793296 359 12888.1 46268279 18.9472953 7.1071937 .002785515 360 129600 46656000 18, 97.36660 7.1137866 .002777778 361 130321 47045881 19.0000000 7,1203674 002770083 862 131044 47437928 19.0262976 7.1269360 00:2762431 863 131769 47832.147 19.0525589 7. 1334925 002754821 364 132496 48228544 19.0787'840 7, 1400370 .002747253 365 #33225 48627125 19, 10497'32 7.1465695 .0027.397.26 366 133956 49027896 19.1311265 7. 1530901 .002732240 367 134689 49430863 19, 1572441 7.1595988 .002724796 368 1354.24 49836032 19. 1833261 7.1660957 .002717391 369 13616.1 50243409 19.209372? 7.1725809 .002710027 370 136900 50653000 19.2353841 7.1790544 .002702703 371 137641 51064811 19.2613603 7, 1855162 .00269.5418 37 138384 51478848 19,28730.15 7.1919663. .002688.172 202 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.—Continued No. Squares. | Cubes. §. Cube Roots. Reciprocals. 373 139129 51895,117 19.3132079 W. 1984050 .002680965 374 139876 52313624 19.3390796 7.2048322 .002673797 375 140625 52734375 19.364916? 7.2112479 .00266666? 376 141376 5315.7376 19.3907194 7.2176522 .0026595.74 377 142129 53582633 19,4164878 7.2240450 .002052520 378 142884 540.10152 19.4422221 7.230.4268 .002645503 379 143641 54439939 19.4679223 7.2367.972 .002638522 380 144400 54872000 19.4935887 7.2431565 .00263.1579 381 145161 55306341 19.51922.13 7.2495045 .002624672 382 145924 55742968 19.5448203 7.2558415 .002617801 383 146689 56181887 19.5703858 7.2621675 .002610966 384 147456 56623104 19.5959.179 7.2684.824 ,002604167 385 148225 57,0666.25 19.6214,169 7,274.7864 .003597.403 386 148996 57512456 19. 6468827 7.28.10794 .002590674 387 149769 57960603 19.6723156 7.2873617 .002583979 388 150544 584.11072 19.6977.156 7.2936330 .002577320 389 151321 58863869 19.7330829 7.2998936 .002570694 390 152100 593.19000 19.7484.177 7.3061436 .002564,103 391 15288.1 5977647.1 19.7737199 7.3123828 .002557545 392 153664 60236288 19.7989899 7.31861.14 .002551020 303 154449 60698457 19.8242276 7.3248295 .002544529 394 155236 61162984 19.8494332 7.3310369 .002538071 395 156025 61629875 19.87.46069 7.3372339 .002531646 306 156816 62099.136 19.8997.487 7.3434205 .002525253 397 157609 6257.0773 19.9248588 7.3495966 .002518892 398 158404 63044792 19.9499.373 7.3557624 .0025.12563 399 15920.1 63521.199 19.9749844 7.3619178 .002506266 400 64000000 20.0000000 7.3680630 .002500000 401 160801 64481201 20.0249844 7.374.1979 .002493766 402 161604 64964808 20.0499877 7.3803227 .002487562 403 162409 6545082 20.0748599 7.3864373 .00248.1390 404 163216 65939264 20.0997512 7.3925418 .002475248 405 164025 66430.125 20. 1246.118 7.3986363 .002469136 406 164836 66923416 20. 1494417 7.4047206 .002463054 407 165649 67419143 20. 17424.10 7.4107.950 .002457002 408 166464 679.17312 20. 1990099 7.4168595 .002450980 409 167281 68417929 20.2237484 7.4229142 .002444988 410 168100 68.921000 20.2484567 7.4289589 .002439024 411 168921 69426531 20.2731349 7.4349938 002433090 412 169744 69934528 20.297,7831 '7.4410189 0024.27184 413 170569 70444997 20.3224014 7.4470342 002421308 414 171396 70957944 20.3469899 7.4530399 .002415459 415 172225 71473375 20.37.15488 7.4590359 .002409639 416 173056 71991296 20.3960781 7.4650223 .002403846 4.17 173889 7251.1713 20.4205779 7.47099.91 .002398082 418 174724 73034632 20,4450483 7.4769664 .002392344 419 175561 73560059 20,4694895 7.4829242 .002386635 420 176400 74088000 20.4939.015 7.4888724 .002380952 421 17724.1 746.18461 20.5182845 7.49481.13 .002375297 422 178084 7515.1448 20.5426386 7. 5007.406 .002369668 423 178929 75686967 20,5669638 7.5066607 .002364066 424 179776 62250 20, 5912603 7.5.125715 002358491 425 180625 767656.25 20.6155281 7.5184730 002352941 426 181476 77.308776 20.6397674 7.5243652 002347418 427 182329 77854.483 20.6639783 7.5.302482 00234.1920 428 183184 7840.2752 20.6881609 7.5361221 002336449 429 184041 78953589 20.7123152 7.54.19867 00233.1002 430 184900 79507000 20.7364414 7.5478423 .002325581 431 18576i 80062991 20.7605395 7.5536888 .002320186 432 186624 80621568 20.7846097 7.5595263 00231.4815 433 187489 81182737 20.8086520 7.5653548 002309469 434 188356 81746504 20.8326667 7. 5711743 002804,47 SQUARES, CUBES, ROOTS, AND RECIPROCALs 203 TABLE XXIII.-Continued Squares. CubeS. i.e Cube Roots. Reciprocals. 1892.25 82312875 20.8566.536 7.5769849 .002298851 190096 8288.1856 20.8806130 7.5827865 .002293578 190969 83453453 20.9045450 7, 5885793 .002288330 191844 84027.672 20.9.284495 7.59.43633 .002283105 192721 84604519 20,9523268 7.600.1385 .002277'904 193600 85184000 20.9761770 7.605.9049 .0022.72727 194481 85766121 21.0000000 7,6116626 .002267'574 195364 86350888 21.0237960 7.61741.16 .002262443 1962.49 86938307 21.047'5652 Y.6231519 .002257,336 197136 87528384 21.0713075 7.6288837 .0022.52252 1980.25 88121125 21.0950231 7.6346067 .002247.191 198916 88716536 21. 1187121 ºf .6403213 .0022.42152 1998.09 893.14623 21. 1423745 7.6.460272 .002237136 200704 899.15392 21, 1660105 7.65.17.247 .002232143 201601 90518849 21.1896201 7.6574,138 .002227171 202500 91.125000 21,2132034 7.6630943 .002222222 203401 917.33851 21.2367606 7.6687665 .002217295 204304 92345408 21.2602916 7. 6744303 .002212389 205209 92.95967? 21.2837'967 7.6800857 .002207506 206116 93576664 21.3072758 7.6857328 .00220.2643 207025 94196375 21.3307290 7.6913717 .002197802 207936 948.18816 21,354.1565 7.6970023 .002192982 208849 95443993 21.3775583 7.7026246 .002188.184 209764 9607 1912 21.40093:46 7.7082388 .002183406 210681 96702579 21.424.2853 7.7138.448 .002178649 211600 97.336000 21.4476106 7.7194426 .002173.913 212521 97.97.218.1 21.4709106 7.7250325 .002169197 213444 9861.1128 21.494.1853 7.7306141 .002164502 214369 99.252847 21.5174348 7.7361877 .002159827 215296 99.897344 21,5406592 7.74.17532 .002155172 216225 100544625 21.5638587 7.7473109 .002150538 217156 101.194696 21.5870331 7.7528606 .00:2145923 218089 101847'563 21.6101828 7.758.4023 . 002141328 219024 1025.03232 21.6333077 7.7639361 .002136752 21996.1 103.1617'09 21.6564078 ºf .769.4620 .0021321.96 220900 103823000 21.6794834 7.7749801 .002127660 221841 104487111 21.7025344 7.7804904 .002123142 222784 1051540.48 21.7255610 7.7859928 .002118644 223729 105823817 21.7485632 7.7914875 .002114,165 224676 1064964.24 21.7715411 7.7969.745 .002109.705 2256.25 107171875 21.7944947 7.8024538 .002105.263 226576 1078501 21.8174242 7.807'9.254 .002100840 227529 108531333 21.8403297 7.8133892 .002096436 1092.15352 21 8632111 7.8188456 002092050 229441 10990.2239 21.8860686 7.8242942 .002087683 230400 110592000 21.9089023 7.8297.353 .00208.3333 231361 111284641 21.9317122 7.835.1688 .002079002 232324 111980.168 21.9544984 7.8405949 .002074689 233289 112678587 21.9772610 7.8460184 002070393 234.256 113379904 22.0000000 7.8514244 002066,116 235225 114084125 22.0227.155 7.856828 00206.1856 2361.96 11479.1256 22.0454077 7.8622242 002057613 237169 115504303 22.0680765 7.8676130 O02053388 2381 11621.4272 22.0907220 7.8729944 002049180 239121 116930169 22.1133444 7.8783684 002044990 240100 117649000 22, 1359.436 7.8837352 .002040816 24.1081 118370771 22. 1585198 7.8890.946 .002036660 242064 119095488 22, 1810730 7.894.4468 .002032520 2430.49 11982315? 22.2036033 7,8997.917 .002028398 244036 120553784 22.2261108 7.9051294 .002024291 245025 121287.375 22.2485955 7.9104599 .002020202 246016 122023936 22.27105.75 7,9157832 .002016129 204 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued No. Squares. CubeS. §. Cube Roots. | Reciprocals. 497 247009 122763473 22.2934968 7,9210994 .002012072 498 248004 123505992 22.3159136 7,9264085 .002008032 499 249001 12425.1499 22.3383079 7.9317104 .002004008 500 250000 125000000 22.3606798 7.9370053 .002000000 501 251001 125751501 22.3830.293 7.9422931 .001996008 502 252004 126506008 22.4053505 7.9475739 .001992032 503 258009 127263527 22.4276615 7. 9528477 .001988072 504 254016 128024.064 22.4499443 7. 9581144 .001984.127 505 255025 128787625 22.4722051 7,9633743 .001980.198 506 256036 129554216 22.494.4438 7. 9686271 .001976285 507 257049 130323843 22.5166605 7. 9738781 .001972387 508 258064 131096.512 22.5388553 7.97.91122 .001968504 509 259081 131872229 22.56.10283 7.9843444 .001964637 510 260100 132651000 22.5831796 7.9895697 .001960784 511 261121 1334.82831 22.6053091 7. 99.47883 .001956947 512 262,144 1342177 22.6274.170 8.0000000 .001953125 513 263169 1350.05697 22,6495033 8.0052049 .001949318 514 2641.96 1357967'44 22.6715681 8.0104032 .0019455.25 515 265225 136590875 22.6936,114 8.0155946 .001941748 516 266256 13738.8096 22.7.156334 8,0207794 .001937'984 517 267289 138188413 22.7376340 8.0259574 .001934236 518 268324 138991832 22.75.96134 8.031128? .001930.502 519 269361 139798359 22,7815715 8,0362935 .001926782 520 270.400 140608000 22.8035085 8,0414515 .001923077 521 271441 141420761 22.8254244 8.0466030 .001919386 522 272484 142236648 22.8473193 8.0517479 .001915709 523 273529 143055667 22.869.1933 8,0568862 .001912046 524 274576 143877824 22.89.10463 8.0620180. .001908397 525 275625 144703125 22.9128785 8.067.1432 .001904762 526 276676 145531576 22.9346899 8.07.22620 .001901141 527 277729 146363183 22,9564806 8.0773748 .001897533 528 278784 147.197952 22,9782506 8.08.24800 .001893939 529 279841 148035889 23.0000000 8,0875794 .001890359 530 0 148877.000 23.02.17289 8.09.26723 .001886792 531 281961 1497.21291 23.0434372 8,0977589 .001883239 532 283024 150568768 23,065.1252 8. 1028390 .001879699 533 28.4089 15141943? 23,0867928 8, 1079.128 .001876173 534 285.156 152273304 23, 1084400 8.1129803 .00187.2659 535 286225 153130375 23.1300670 8. 1180414 .001869159 536 287296 15399.0656 23.1516738 8.1230962 .001865672 537 288369 154854.153 23. 1732605 8.1281.447 .001862197 538 2894.44 1557'20872 23. 194827 8.1331870 .001858736 539 290521 156590819 23.2163735 8.1382.230 -001855288 540 291600 157464000 23.2370001 8, 1432529 .001851852 541 292681 158340421 23.25.94067 8, 1482765 001848.429 542 293764 159220088 23.2808935 8.1532939 00:1845.018 543 294849 16010300? 23.3023604 8.1583051 00184162 544 295936 160989184 23.3238076 8.1633102 001838235 545 297025 161878625 23.3452351 8. 1683092 001834862 546 298116 1627.71336 23.3666429 8. 1733020 001831502 547 299209 j63667323 23,3880311 8, 1782888 001828154 548 300304 164566592 23.4093998 8, 1832695 .0018248.18 549 301401 165469149 23.4307.490 8.1882441 .001821494 550 302500 1663,75000 23.45.20788 8. 1932ſ2? .001818182 551 303601 167284151 23.4783802 8.1081752 .001814882 552 304? 168196608 23. 4946802 8.2031319 .001811594 558 305809 169112377 23.5159520 8.2080825 .001808318 554 306016 170031464 23.5372046 8.2130271 .001805054 555 30 170953875 23.5584380 8,217965? .001801802 556 309136 171879616 23.5796522 8.2228985 .001798561 557 310249 172808693 23.6008474 8.2278254 .001795332. 558 311364 173741112 23.62.20236 8.2327463 .001792.115 SQUARES, CUBES, ROOTS, AND RECIPROCALS 205 TABLE XXIII.-Continued No. Squares. Cubes. §. Cube Roots. Reciprocaſs. 559 3.12481 174676879 23,6431808. 8.2376614 .001788909 560 | 813600 || 175616000 23 6643.191 8.24257'06 .001785.714 561 314721 176558.481 23.685.4386 8,2474740 .001782531 562 315844 177504328 23.7065392 8.2523715 .001779359 563 316969 178453547 23, 7276210 8.257.2633 .0017761.99 564 3.18096 179406144 23, 7486842 8.2621492 .001778050 565 319225 180362,125 23,7697.286 8.2670294 .001769.912 566 || 320356 181321496 23.7907 8.2719039 .001766784 567 321489 | 182284263 23.81.17618 8.2767,726 ,00|763668 568 322624 183250.432 23.8327506 8, 28.16355 .001760.563 569 323761 184220009 23.8537209 8.2864928 .001757469 570 324900 185193000 23.8746728 8,2913444 .001754386 571 326041 1861604.11 23.8956063 8.296.1903 .0017. 51813 57. 3.27184 187149248 23.9165.215 8.3010304 .0017 48252 573 328329 188132517 23.9374.184 8,3058651 .0017.45201 574 320476 1891192.24 23.9582971 8.3106941 .0017.42160 575 330625 190109375 23,9791576 8, 3155.175 .001739.130 576 331776 191102976 24.0000000 8.3203353 .001736111 577 332929 1921.00033 24.0208243 8, 3251475 .001733102 578 334084 193100552 24,0416306 8 3299.542 .0017.30104 579 335241 194104539 94.0624188 8.3347553 .601727116 580 336400 195112000 24.0831891 8.339550.9 .0017241.38 581 337'561 196122941 24, 1039416 8,3443410 .001721170 582 3387: 197137368 24, 1246762 8 .349.1256 .001718213 583 339889 198155287 24, 1453929 8,3539047 ,001715266 584 34.1056 1991.76704 24, 1660919 8.35867 .001712329 585 342,225 200201625 24, 18677.32 8. 3634466 .001709402 586 34.3396 201230056 24.2074369 8,3682095 .001706485 587 344569 202262003 24.2280829 8. 37.29668 .00170357 588 3457 203207472 24.2487113 8.37.77.188 .0017.00680 589 346921 204336469 24, 2693222 8.38:24653 .001697.793 590 848100 205379000 24, 2890156 8.3872065 .001694915 591 349.281 2064.25071 24.3.104916 8 39.19423 .001592047 592 350464 20747.4688 24, 3310501 8 3966729 .00168918.9 593 351649 20852.7857 24, 35.15913 8.40.13981 , 0.01686341 594 352836 209584584 24, 37.21152 8.4061180 .001683502 595 3540.25 210644875 24.3926.218 8.4108326 00:168067.2 596 355216 211708736 24, 4131112 8.4155419 .001677852 597 356409 212776173 24.4335834 8.4.202460 .001675042 598 357604 213847.192 24.4540385 8, 4.249448 . OO1672241 599 358801 214921799 24.4744765 8.4296.383 .001569449 600 360000 216000000 24, 4948974 8 4348267 .00166666? 601 361.201 217081801 24, 5153013 8, 4390098 .001(j63894 602 362404 218167208 24. 5356883 8. 4436877 , 0.01661130 603 36.3609 219256227 24, 5560.583 8.4483605 .0016:58375 604 364816 220348864 24, 5764115 8,4530281 ,001655629 605 366025 221445.125 24.5967.478 8. 4576906 .001652893 606 367236 222545016 24 6170673 8.46234.79 ,001650165 607 368449 223648543 24, 6373,00 8.4670001 .001647446 608 369664 224755712 24, 6576560 8,471647 .001644737 609 370881 225866529 24.67792.54 8.4762892 .001642036 610 372100 226981000 24.6981781 8.4809.261 ,001639344 611 373321 2280991.31 24, 7184142 8. 4855579 .001636661 612 374544 229220928 24.7386.338 8, 4901848 ,001633987 613 375769 230346397 24.7588368 8.4948065 .001631321 614 376996 231475544 24.7790234 8, 49.94233 .001628664 615 878225 232608375 24.7'991935 8. 5040350 .001626016 616 379456 233744896 24.8193473 8,50864.17 001623377 617 380689 2348851.13 24,8394847 8.5132435 .001620746 618 38.1924 236029032 24,8596058 8, 5178403 .001618.123 ,619 883.16.1 237176659 24.879.7106 8.5224321 .001615509 620 384400 238328000 24, 8997992 8.5270.189 .001612903 206 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued U18.T 8 No. Squares. Cubes, §OOtS. Cube Roots. | Reciprocals. 621 385641 239.483061 24. 91987.16 8.5316009 001010306 622 || 386884 || 240641848 24.9399278 8,536.1780 .00160771? 623 388129 241804867 24.9599679 8.540750ſ .001605136 624 389376 242970624 24.97.99920 8.545317: .001602564 625 390625 244140625 25,0000000 8.54987.97 .001600000 626 39.1876 2.45314376 25,0199920 8.554437.2 ,001597444 627 393129 246491883 25,0399.681 8.5589899 .001594896 628 394384 2476.73152 25,0599282 8. 5635377 .001592.57 629 3956.41 248858.189 25,07987:24 8,5680807 .001589825 630 || 396900 || 250047000 || 25.0098008 || 8.5726189 .001587302 631 398161 25.1239591 25, 1197134 8.5771523 .001584.786 632 3994.24 252435968 25, 1396.102 8,5816.809 , 001582.278 633 400689 25.3636137 25, 1594913 8.5862047 .001579779 634 401956 254840104 25, 1793566 8,5907'238 .001577.287 635 403225 256047,875 25, 1992063 8,5952380 .001574803 636 404496 257259456 25 2190404 8,5997.476 .001572327 637 405769 258474853 25.2388589 8,6042525 .001569859 638 407044 25969,407.2 25.2586619 8.6087526 .001567898 639 408321 260917119 25.2784493 8.6132.480 .001564945 640 409600 262144000 25, 2982213 8. 617'388 .001562500 641 410881 263374721 25.317977.8 8.6222248 .001560062 642 412164 2646.09.288 25, 3377189 8.6207063 .001557.632 643 413449 2658.47'707 25.3574447 8,6311830 .001555210 644 4.14736 267089084 25, 3771551 8.6356551 , 001552795 645 416025 268336125 25.3968502 8.6401226 .001550388 646 417.316 269586.136 25.4165301 8.6445855 00.1547988 647 4.18609 270840023 25,4361947 8.6490437 00.1545595 648 4.19904 27209.7792 25.4558,441 8,653497 001543210 649 421.201 273359.449 25.47547.84 8,6579,465 00.1540832 650 422500 27.4625000 25.4950976 8.6623.01.1 .001538.462 651 428801 275894.451 25.5147016 8. (5668310 .0015.36098 652 425.104 27716.7808 25.5342.907 8,6712665 .001533742 653 426,409 278445077 25, 5538647 8, 6756974 .0015.31394 654 4277'16 2797.26264 25,5734237 8,6801.237 .001529052 655 42.9025 28101.1375 25.5929678 8,6845.456 .001526718 656 430336 283300416 25, 6124969 8.68896.30 .001524390 657 431649 283593303 25.6320112 8.69337'59 ,001522070 658 432964 284890.312 25.6515107 8, 6977.843 .001519757 659 434281 286.191179 25,6709953 8,7021882 .001517451 660 435600 287496000 25,6904652 8.7065877 001515152 661 436921 288804781 25,7099.203 8. 7109827 .001512859 662 438244 2901 17528 25.7293607 8.7153734 , 001510574 663 439;569 29143424? 25.7487864 8.7 1975.96 .00150.8296 664 440896 292754944 25.7681975 8.724.1414 .001506024 665 442225 29.4079625 25.7875939 8,7285.187 .0015037'59 666 443556 295.408296 25.80697.58 8.73289.18 , 001501502 667 444889 296740963 25.8263431 8.737'2604 .001499250 668 446224 298077632 25.8456960 8.7416246 .001497006 669 447561 299418309 25.8650343 8.745.9846 .001494.768 670 448900 300763000 25.8843.582 8,7503401 .001492537 671 450241 302111711 25.9036677 8.7546913 .0014903.13 672 451584 303464448 25.9229628 8.7590383 ,001488095 673 452929 3048.21217 25.94.22435 8.7633.809 001485884 674 454276 306182024 25.9615100 8.7677192 .001483680 675 455625 307546875 25, 980.7621 8.7720532 .001481481 676 456976 308915776 26.0000000 8,7763830 .001479290 677 458329 310288733 26.0192237 8.7807084 .001477.105 678 459684 311665752 26.0384331 8.7850296 .001.474926 679 461041 313046839 26.0576284 8.7893466 .001472754 680 || 462400 || 314432000 || 26,0768006 || 8.7936593 ,001470588 681 463761 315821241 26.0959767 8.7979679 .001468429 682 465,124 317214568 26.1151297 8.8022721 .001466276 SQUARES, CUBES, ROOTS, AND RECIPROCALS 207 TABLE XXIII.-Continued No. Squares. | Cubes, §. Cube Roots. | Reciprocals. 683 466489 318611987 26, 1342687 8.8065722 ,001464129 684 467856 320013504 26, 1533937 8,8108681 .003461988 685 469.225 321419125 26. 1725,047 8.8151598 .001.459854 686 470596 3228288:56 26, 1916017 8.8194474 .001.457726 687 471969 3242,42703 26.2106848 8,823.7307 .001455604 688 473344 32566067.2 26,2297541 8.8280099 .001453488 689 474721 327082769 26.2.488.095 8.8322850 .001451379 690 476100 328509000 26.267.8511 8.8365559 .001449275 691 477481 32993037'1 26.28(38789 8,8408227 .001.447178 692 478864 33137.3888 26. 305892.9 8.8450854 .0014.45087 693 480249 3328.1255? 26, 3248932 8.8493440 .001443001 694 481636 334255384 26, 34387'97 8.8535985 .0014.40922 695 483025 335702375 26, 3628527 8,8578489 .001438849 696 484416 337.153536 26. 3818119 8.8620952 .001436782 697 485809 338608873 26. 4007576 8.8663375 .001434.720 698 487.204 340068392 26.4196896 8,8705757 .001432665 699 488601 341532099 26.4386081 8,8748099 .001430615 700 490000 343000000 26.4575131 8.8790.400 001428571 701 491401 34447.2101 26.4764046 8,8832661 .001426534 702 49.2804 345948.408 26.4952826 8.8874882 001424501 7. 494209 3474.28927 26.5141472 8.8917063 001422475 704 49.5616 348913664 26,5329983 8.8959204 .001420455 705 497025 350402625 26.5518361 8.900.1304 .001418440 706 498.436 351895816 26. 5706605 8.90.43366 .001416431 707 499849 353393243 26,589.47°16 8.9085387 .001414427 708 501204 354894012 26. 6082694 8 91.27'369 _001412429 709 502(581 356400829 26. 627'0539 8.9169311 .001410437 710 504100 35791.1000 26,6458252 8.921.1214 .001408451 711 505521 359425.431 26,6645833 8,925.3078 .001406470 712 506944 360944,128 26,6833281 8.929.4902 .001404494 713 508369 362467097 26,7020598 8.9336687 .001402525 71.4 509796 36399.4344 26.720778. 8,9378433 .001400560 715 511225 365525875 26.7394839 8,9420140 . 001398601 716 5.12656 367061696 26.7581763 8.946.1809 .001396648 717 514089 86860.1813 26.776855? 8.950.3438 .001394700 718 515524 3701.46232 26.'955220 8.95450.29 .001392758 719 51696.1 371694959 26,814.1754 8,9586581 .001390821 720 518400 373248000 26,8328157 8,9628095 .001388889 "21 51984.1 374805361 26.8514.432 8.966.957 .001386963 722 521284 376367048 26.87°0057 8.97.1.1007 . 001385042 723 5227:29 37.7933067 26.8886593 8.975.2406 .001383126 724 524.176 37950.3424 26.907'2481 8.9798766 .001381215 7; 525625 38.1078.125 26, 9258240 8.9835089 .0013793.10 726 52.7076 382657176 26,9443872 8.9876373 .001377.410 727 528529 384240583 26 962.9375 8.99.17620 .001375516 728 529984 385828352 26.9814751 8.0958829 .001373626 729 531441 38742.0489 27.0000000 9.0000000 .001371742 730 532900 38901.7000 27.0185122 9 0041134 .001369863 731 534.361 3906.17891 27.03701.17 9,0082229 ,001367989 732 535824 392.2231.68 27,055.4985 9 0123288 .001366,120 733 537289 393832837 27.07397'27' 9.0164309 001364.256 734 538756 395446904 27.09.24344 9.0205293 001362398 735 540225 39.7065.375 27, 1108834 9.0246239 .001360544 736 541696 398688256 27.12931.99 9,0287.149 .001358696 737 543169 400315553 27.1477439 9,0328021 .001356852 738 544644 401947272 27 1661554 9 0368857 .001355014 739 546121 40.3583419 27, 1845544 9.0.409655 .001353180 740 54'600 405224000 27.2029.410 9,0450417 .001351351 741 549081 406869021 27, 22.13152 9.0.491142 .001349528 742 550564 408518488 27, 2396769 9,0531831 .001347709 743 552049 410172407 27.2580.263 9,0572482 .001345895 744 553536 4.1.1830784 27.2763634 9,0613098 .001344086 208 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued Square Reciprocals. No. Squares. Cubes. Roots. Cube Roots. 745 555025 413493625 27.2946881 9.0653677 00.1342282 746 556516 415160936 27.3130006 9,0694220 001340483 747 558009 4168327'23 27.3313007 9.07'34726 00.1338688 748 559.504 4.18508992 27, 3495887 9.077'5197 .0018368.98 749 561001 420.189749 27.3678644 9,0815631 ,001335113 750 562500 4218.75000 27,3861279 9,0856.030 .001333333 751 56.4001 4235647'51 27.4043792 9.0896392 ,001331558 752 565504 425259008 27.4226184 9.0936719 .001329.787 753 567009 4269577.77 27.4408:455 9.0977010 .00132802] 754 568516 428661064 27, 4590604 9. 1017265 .001326260 755 57.0025 430368875 27.4772633 9. 1057485 .001324503 756 57'1536 432081216 27.4954542 9. 1097669 .001322751 757 573049 4337'98093 27.5.136330 9. 1137818 .001321004 7.58 574564 435519512 27.5317998 9. 1177.981 .00131926} 7.59 57.6081 437245479 27.5499546 9, 1218010 .001317523 760 57.7600 438.976000 27.5680975 9, 1258053 .001315789 761 791.21 440711081 27. 5862284 9, 1298061 .001314060 ''62 580644 442450728 27,6048475 9. 1338034 ,001312336 763 5821.69 444194947 27.62.245.46 9. 1377971 .001310616 764 583696 445948744 27,6405.490 9. 1417874 .001308901 765 585225 447697'125 27.6586334 9. 1457.742 .001307190 766 586756 4494.55096 27.6767.050 9, 1497'576 ,001305483 767 588289 451217663 27. 6947648 9. 1537.375 .001303781 768 589824 452984832 27, 7128,129 9. 1577139 .001302083 769 591361 4547'56609 27.730849? 9, 1616869 .00.1300390 770 592900 456583000 27, 7488739 9. 1656565 .001298.701 771 594441 458314011 27.7668868 9. 1696225 ,0012970.17 77; 59.5984 460099648 27,7848880 9, 1735852 001295337 773 507529 4618899.17 27.802877 9, 1775445 ,001293661 774 5990.76 463684824 27,8208555 9. 1815003 ,00129.1990 77 600625 465484375 27.8388218 9.1854527 , 00129.0323 77 60217.6 467288576 27.8567766 9. 18940.18 .001288660 77 6037'29 469097433 27.87°4'197 9, 1933474 , 00128.7001 778 605284 470910952 27.8926514 9, 1972897 ,001285347 779 606841 472729139 27.9105715 9.2012286 .001283697 '80 608400 47455.2000 27,9284801 9.2051641 , 001282051 781 609961 47637'9541 27.946.377: 9, 2000.962 ,001280410 '82 61}524 478211768 27.9642629 9.2130250 00127.8772 '83 6.13089 4800.48687 27,982.1372 9.2169505 .001:27.7139 784 614656 48.1890304 28.0000000 9.2208726 , 001275510 785 616225 4837.36625 28,01785.15 9,2247914 .001.273835 786 617796 485587656 28.0356915 9.228.7068 .001272265 787 61.9369 487443403 28.0535203 9,2326189 .0012.70648 788 620944 480303872 28.0713377 0.2365277 .001269036 789 622521 491.169069 28.0891438 9,240.4333 .001267.427 700 624100 493039000 28. 1069386 9.2443355 .001:265823 791 625681 494913671 28.1247222 9.2482344 .001264223 792 627264 49679.3088 28. 1424.946 9.2521300 ,001:262626 793 628849 498677257 28, 1602557 9.2560224 ,001261034 794 630436 500566184 28, 1780056 9.2599.114 .001259446 795 63.2025 502459875 28. 1957.444 9.2637973 .001257862 796 633616 504358336 28.2134720 9.2676798 .001:256281 797 635209 50626157: 28,231.1884 9,2715592 .001.254705 798 636804 508169592 28,2488938 9,275.4352 .001253.133 799 638401 510082399 28.2665881 9.2793081 .001:25.1564 800 640000 5.12000000 28, 2842712 9.2831777 .001250000 801 641601 513922401 28.3019434 9.2870440 , 001248439 802 643204 515849608 28.3196045 9.2909072 . 0012.46883 803 644809 517781627 28.3372546 9.294.7671 .001:245330 804 646416 519718464 28 3548938 9,2986.289 .001243781 805 648025 521660.125 28,3725219 9,3024775 .001:242236 806 649636 523606616 28.3901391 9.3063278 ,0012.40695 SQUARES, CUBES, ROOTS, AND RECIPROCALS 209 TABLE XXIII.-Continued Square No. Squares, Cubes, Roots. Cube Roots. Reciprocals. 807 651249 525557943 28.4077.454 9. 3101750 .001.239157 808 652864 527514112 28.4253408 9.3140190 , 001.237624 809 654481 529,475,129 28.4429253 9.3178599 .001.236094 810 656100 531441000 23.4604989 9.32}6975 .001234568 811 657721 5334,11731 28, 4.780(517 9.3255320 , 001.2330.46 812 659344 535387328 28.49.56137 9.3293634 ,001.231527 818 660969 537367.79 28,5131549 9.3331916 .001.230012 814 662596 539353144 28.5306852 9.337.0167 . 001228501 815 664225 541343375 28. 54820.48 9,3408386 .001226994 816 665856 543338496 28,5657137 9.3446575 .001.225.490 817 667.489 5.453385.13 28,5832119 9.3484731 .001.223990 818 669124 547343432 28.6006993 9.3522857 .001222494 819 67076i 549353259 28.6.181760 9.3560952 .001.221001 820 672400 551868000 28.6356.421 9.3599016 .0012.19512 821 674041 553.387661 28.6530976 9.3637049 .0012.1802? 822 675684 555-412248 28.6705424 9.367,5051 .001216545 823 677329 557441767 28.687.9766 9.37.13022 .001215067 824 67897 559476224 28.7054002 9.37'50963 .001213592 825 680625 561515625 28.7'228132 9.3788873 .0012.12121 826 68227 563559976 28.7402157 9.3826752 .0012.10654 827 683929 565600283 28.7576077 9,3864600 .001209190 828 685584 567.663552 28.774989 i 9.3902419 .001207729 829 687.241 5697.22789 28, 7.923601 9.3940206 .001206273 830 688000 571787 000 28.8007206 9,3977964 .001204819 831 690561 57.3856.191 28.827 0706 9.4015691 .001203369 832 6922.24 575930368 28.84.44102 9.405338? .001201923 833 693889 57800.9537 28.861.7394 9.4091054 001200480 834 695556 580093704 28.87'90582 9. 4128690. 001 199041 835 697.225 582182875 28.8063666 9. 4166297 001 197605 836 698896 58427.7056 28.9136646 9.4203873 00.1196f 72 837 700569 5863.76253 28.9309523 9.4241420 .0011947.43 838 702244 588480472 28.9482297 9.427.8936 .001 193317 839 703921 590589719 28.9654967 9.4316423 .001191895 840 705600 50270.4000 28.9827.535 9.435.3880 .001100476 841 707:281 59.48233:21 29.0000000 9,439.1307 .001 189061 842 708964 5969.47°688 29,0172363 9.44287 .001 187648 843 710649 5990.77107 29.0344623 9, 4.466072 .001186240 844 71.2386 601211584 29.0516781 9,4503410 .001184834 845 71.4025 603351125 29,068883? 9.4540719 001183432 846 715716 605495736 29,08607.91 9. 4577'999 .001182033 847 717409 607645423 29.1032644 9.4615249 .001180638 848 719104 609800192 29. 1204396 9.4652470 .0011792.45 849 720801 611960049 29.1376046 9,4689661 .001177856 850 722500 614125000 29.1547595 9.4726824 .00117647.1 851 72420.1 616295051 29, 1719043 9.4763957 .001175088 852 725904 61847.0208 29, 1890390 9.480f 061 .001173709 853 727609 62065047? 29, 2061637 9.4838136 .001172333 854 729.316 622835864 29, 2232784 9.4875182 .001170960 855 731025 625026375 29.2403830 9.4912200 .001169591 856 732736 627222016 29.2574777 9.4949.188 .001168224 857 734449 6294.22793 29, 2745623 9.49861.47 .001166861 858 736164 631628712 29.2916370 9,5023078 .001165501 859 737881 638839779 29.3087018 9.5059980 .001164144 860 739600 636056000 29.3257566 9,5096854 .0011627.91 861 741321 638277381 29.34280.15 9.5133699 , 001161440 862 743 640503928 29.359.8365 9.5170515 .001160093 863 7'44769 642735647 29. 3768616 9.5207303 , 001158749 864 746496 644972544 29,3938769 9.5244063 .001157407 865 748225 647.214625 29.4108823 9.5280794 .001156069 866 '49956 64946.1896 29.4278779 9.5317497 .001154734 867 751689 651714363 29.4448637 9.5354,172 001153403 868 65397.2032 29.4618397 9.5390818 .001152074 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued Squares. Cubes. §. Cube Roots. | Reciprocals. 755161 656.234000 29.47°88059 9.5427437 .001150748 756900 658503000 29.4957624 9.5464027 .0011494.25 758641 6607763.11 29.5127091 9.5500589 .001148106 760384 66.3054848 29.5296461 9, 5537.123 001146789 762129 6653.386.17 29.5465734 9.5573630 00114547 763876 667627624 29.5634910 9.5610.108 001144.165 7656.25 669921875 29,5803989 9.5646559 001142857 767376 67'2221376 29,597297.2 9.5682982 .001141553 769129 674526133 29. 6141858 9.57.19377 .001140251 770884 676836152 29.63.10648 9. 5755745 .001138952 77.2641 67915.1439 29.6479342 9.5792085 .001137656 "74400 68.1472000 29.6647939 9,5828397 .001136364 776161 683797841 29. 6816442 9, 5864682 .001135074 777924 686128968 29,6984848 9.5900939 .001133787 779689 688465387 29,7153159 9.5937169 .001132503 781.456 690807'104 29.7821375 9.597.3373 .001131222 783225 693154125 29.74894.96 9.6009548 .001129944 78.4996 695506.456 29.7657'521 9.6045696 .001128668 786769 69786.4103 29.7825452 9.6081817 .001127396 788544 700227072 29, 7'993289 9.6117911 .001126126 790321 702595369 29.8161030 9.6153977 .001124859 '92100 704969000 29.8328678 9.6190017 .001123596 793881 707'347971 29.8496231 9.6226030 00.1122334 '95664 7097'32288 29.8663690 9,626.2016 .00112107 797449 712121957 29,883.1056 9, 6297975 .0011198.21 7992.36 714516984 29.8998328 9.6333907 .001118568 801025 7169,17375 29.9165506 9.636.9812 .001117318 802816 719323136 29.9332591 9.6405690 .001116071 804009 721734273 29.9499.583 9.644.1542 .001114827 806404 724.150792 29.9666481 9,6477367 .001113586 808201 || 726572699 || 29.9833287 || 9.6513166 | .001112347 810000 729000000 30.0000000 9.6548038 .001111111 811801 731432701 30, 01666:20 9.6584684 .001109878 813604 7.33870808 30.0333148 6620403 .001108647 815409 736314327 30.0499584 9.6656096 .001107420 817,216 788763264 30.0665928 9.6691762 .001106195 819025 741217625 30.0832179 9.6727403 .001104972 820836 7436.77416 30.0998339 9.6763017 001108753 822649 746142643 30. 1164407 9.6798604 .001102536 824464 7486.13312 30. 1330383 9.6834166 .001101322 826281 751089429 30.1496.269 9.68697.01 .001100110 828,100 75857 1000 30.1662063 9.6905211 .001098.901 82992.1 756058031 30. 1827765 9.6940694 .001097695 831744 758550,528 30.1993377 9,697.6151 .001096491 833569 761048497 30.2158899 9.7.01.1583 .001095290 835396 763551944 30.2324329 9.7046989 .001094092 837.225 '66060875 30.2489669 9.7082369 . 001092896 839056 768575296 30.2654919 9.7.117723 .001091703 840889 7710952.13 30.2820079 9.7.153051 .001090513 842724 '773620632 30.2985148 9, 71.88354 .001089325 844561 776151559 30.3150128 9.7223631 .001088139 846400 778688000 30.3315018 9,7258883 ,001086957 848241 78.1229.961 30.84.79818 9.7294.109 .0010857.76 850084 7837774.48 30, 3644529 9.7329309 00.1084599 85.1929 786330467 30.3809151 9.7364484 .001083424 853776 788889024 30.397'3683 9.7399634 .001082251 855625 791.453125 30.41381.27 9,74347 .001081081 857476 7940.227.76 30.4302481 9.7469857 .001079914 859329 79.6597983 30,4466747 9.7504930 .001078749 861184 799178752 30.4630924 9.75399.79 .00107.7586 86304? 801765089 30.47950.13 9.7575002 .001076426 864900 804357000 30.4959014. 9.7610001 .001075.269 squaRES, CUBEs, Roots, AND RECIPROCALS 211 TABLE XXIII.-Continued Square No. Squares. Cubes. Roots. Cube Roots. Reciprocals. 931 866761 806954491 30 5122926 9,7644974 ,001074114 932 868624 809557568 30.5286750 9.7679922 .001072961 933 870489 81216623 30,5450487 9.7714845 .00107.1811 934 87.2356 814780504 30.5614136 9.7749743 .001070664 935 87°4225 817400375 30.5777697 9.7784616 .001069519 936 87.6096 820025856 30,594.1171 9,7819466 001068376 7 87.7969 822656953 30. 6104557 9,7854.288 .001067236 938 8798.44 825293672 30.6267.857 9,7839087 .001066098 939 88.1721 827'936019 30,6431069 9,7923861 .001064963 940 883600 830584000 30,6594.194 9.7958611 .001063830 941 885,481 83323.7621 30,6757'233 9.7998336 .001062699 942 887364 835896888 30.6920185 9,8028036 .001061571 943 8892.49 838561807 30.7083051 9,8062711 .001060445 944 891136 841232384 30,7245830 9.8097362 .001059322 945 89.3025 843908625 30.7408523 9.8131989 .00105820.1 946 89.4916 846590536 30.7571130 9,8166591 .001057082 947 896809 849278.123 30.7733651 9 8201169 .001055966 948 898.704 85.1971392 30,7896086 9.8235723 .00105.4852 949 900601 854670349 30, 8058436 9.8270.252 .001053741 950 902500 85737'5000 30, 8220700 9.8304757 . 001052632 951 90.4401 860085351 30,8382879 9.8339.238 .001051525 952 906304 86.2801408 30.8544972 9,8373695 .001050420 953 908209 865523177 30 8706981 9,8408.127 .0010493.18 954 910116 868250664 30,8868904 9.8442536 .001048218 955 912025 870.988875 30.9030743 9 8476920 .001047120 956 913936 878722816 30.9192497 9.851.1280 ,001046025 957 915849 876467493 30 9354,166 9.8545617 001044932 958 9.177 879217912 30.9515751 9 8579929 .00104384.1 959 919681 88.1974079 30.9677.251 9.8614218 , 001042753 960 92.1600 736000 30.9888668 9,8648483 ,00104.1667 961 923521 887503681 31.0000000 9.8682724 ,001040583 962 92.5% 890277128 31.0161248 9,8716941 001039501 963 927.369 89.3056347 31.0322413 9,8751.185 .001088422 964 929296 89584.1344 31.0483494 9 8785305 ,001037344 965 93.1225 898632125 31.0644491 9.881945.1 , 001036269 966 933.156 901428696 31.0805405 9.885357 .0010351.97 96? 935089 904231063 31.0966:236 9.8887673 .001034126 968 937 907039232 31, 1126084 9 892.1749 .001033058 969 938961 909853209 31. 1287648 9,895.5801 .001081992 970 940900 912673000 31 1448230 9,8989830 00.1030928 97 942841 915.498611 31, 1608729 9 9023835 ,001029866 972 944? 918330048 31 1769145 9,9057817 001028807 973 946729 921167317 31 1929.479 9,909.1776 .001027749 974 948676 9240.10424 31 20897.31 9 9125712 001026694 975 950625 926859875 31.2249900 9 9159624 001025641 976 95.2576 929714176 31.240998? 9.9193513 001024590 977 95.4529 93257.4833 31 2569992 9 9227.379 001023541 978 956484 935441352 31 2729915 9 926.1222 . 001022495 979 958441 9383.13739 31.2889757 9,9295042 .001021450 980 960400 941192000 || 31, 30.49517 || 9,9328839 .001020408 981 962361 944076141 31. 320919.5 9.9362613 .001019368 982 964324 946966168 31 3368792 9, 9396363 .001018330 983 966.289 949862087 31. 3528308 9.94.30092 , 001017294 984 968256 952763904 31.3687.743 9.946.3797 .001016260 985 970.225 955671625 31. 3847.097 9.94974.79 .001015228 986 97.2196. 958585256 31 4006369 9 9531138 .00101.4199 987 97.4169 96.1504803 31.4165561 9,95647.75 .001013171 988 97.6144 964430272 31 4324673 9,9598389 .001012146 989 978121 96736,1669 31.44837 9,963.1981 .001011122 990 980100 97.0299000 31.4642654 9,9665549 .001010101 991 982081 978242271 31.4801525 9,96990.95 .001009082 992 984064 976191488 31,4960315 9,9732619 .001008065 212 SQUARES, CUBES, ROOTS, AND RECIPROCALS TABLE XXIII.-Continued No. Squares. Cubes. §. Cube Roots. | Reciprocals. 993 986049 97'9146657 31.5] 19025 9.9766120 .001007049 994 9.88036 982.1077.84 31 - 5277655 9.9799599 .001006036 995 990025 985.074875 31.5436206 9.9833055 .001005025 996 99.2016 988047936 31.559.4677 9.9866488 .001004016 997 99.4009 99.1026973 31.5753068 9.9899900 .001003009 998 996004 99.401 1992 31.5911380 9.993.3289 .001002004 999 99.8001 99.7002999 31,606.9613 9.9966656 .001001001 1000 1000000 1000000000 31.62.27766 10.0000000 .001000000 1001 1002001 I003003001 31.6385840 10.0033322 0009990010 1002 1004004 100601.2008 31. 6543836 10.00666.22 0009980040 1003 1006009 1000027027 31.670.1752 10.0099899 .0009970090 1004 1008016 1012048064 31.6859590 10.0183155 .0009960159 1005 10100.25 1015075,125 31.7017349 10,0166389 .0009.950249 1006 101.2036 1018108216 31.7175030 10.0199601 .00099.40358 1007 1014049 102.1147343 31.7332633 10.02327 91 .0009930487 1008 1016064 1024.192512 31.7A90157 10,0265958 .0009920635 1009 1018081 10272437.29 31.7647.603 10,0299.104 .00099.10803 1010 1020.100 103030.1000 31.780.4972 10.0332228 .0009900990 1011 102.2121 1033364331 31.7962.262 10.0365830 .00098.91197 1012 1024144 10364337'28 31.8119474 10.0398.410 .0009881423 1013 1026169 103950919? 31.8276609 10,0431469 .0009871668 1014 1028196 10425.907.44 31.8433666 10. 0464506 ,0009861933 1015 1030,225 1045678.375 31.8590646 10.0497521 .0009852217 1016 1032256 104.8772096 31.87.47549 10.0530514 .0009842520 1017 1034289 1051871913 31,890.4374 10.0563485 0009832.842 1018 1036324 1054977882 31.9061123 10.0596435 0009823183 1019 1038.361 1058089859 31, 92.17794 10.0629364 .0009813543 1020 1040400 1061208000 31.9374388 10.0662271 .0009803922 1021 1042441 1064332261 31.953090 10. 0695156 .00097.94319 1022 1044484 1067462648 31.9687'347 10. 0728020 .0009784786 1023 1046529 1070599167 31.98.43712 10.0760863 .0009775171 1024 1048576 107374.1824 32.0000000 10.0793684 .0009765625 1025 1050625 10:6890625 32.0156212 10.0826484 .0009756098 1026 105.2676 1080045576 32.0312348 10.0859262 .0009746589 1027 10547'29 108.3206683 32.0468.407 10,0892019 .00097.37098 1028 1056784 1086.373952 32.0624391 10.0924.755 .00097.27626 1029 1058841 1089547389 32.0780298 10.0957.469 .0009?18173 1030 1060900 1092727000 32,0936131 10.0990.163 .0009708738 1031 1062961 109591.2791 32,1091887 10.1022835 .0009699321 1032 1065024 1099.104768 32.1247568 10, 1055487 .000968.9922 1033 1067089 1102302937 32.1403173 10, 10881.17 .0009680542 1034 1069156 1105507304 32.1558704 10, 1120726 .0009671180 1035 1071225 11087.17875 32.1714159 10. 1153314 .0009661886 1036 1073296 1111934656 32, 1869539 10, 1185882 .0000652510 1037 107:5369 11151576.53 32.2024844 10, 1218428 .0009643202 1038 1077.444 111838.6872 32.2180074 10, 1250953 .0009633911 1039 1079521 i121622319 32.2335229 10 1283457 .00096.24639 1040 1081600 1124864000 32.2490310 10.1315941 .0009615385 1041 1083681 1128111921 32.2645316 10. 1348403 .00096061.48 1042 1085764 1131366088 32.2800248 10. 1380845 .0009596929 1043 1087849 1134626507 32.2955105 10, 1413266 .000958728 1044 1089936 1137893184 32,3109888 10, 1445667 .0009578544 1045 109.2025 1141166125 32.3264598 10.1478047 .0009569378 1046 1094116 11444.45336 32, 34.19233 10, 1510406 .0009560229 1047 1096209 1147730823 32,357.3794 10.1542744 .0009551098 1048 1098304 1151022592 32,3728281 10.1575062 .000954.1985 1049 1100401 1154.320649 32,3882695 10, 1607359 .0009532888 1050 1102500 1157625000 32.4037035 10.1639636 .0009523810 1051 1104601 1160935651 32,4191301 10. 1671893 .0009514748 1052 1106.704 1164252608 32.4345495 10. 1704129 .0009505703 1053 1108809 1167575877 32.44996.15 10, 1736344 .00094.96676 1054 1110916 1170905464 32,4653662 10, 1768539 .0009487666 - PAGE) Abderhalden, E. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Abmodality, index of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Aggregative index, number . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 29, 130 Alienation, coefficient of. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91, Arc in decimals of a degree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150, 174 Arithmetical work, precautions in. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Assumed mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Asymmetry (see Skew curves) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44, 53 Attentuation, correction for. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 Average deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 33 Barlow, P.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Basal metabolism in man. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 Benedict, F. G. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ill Binomial curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20, 49 Biserial correlation coefficient. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Biserial eta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 ody build, correlation between parents and offspring . . . . . . . . 98, 103 Bowman, H. A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Brewster, E. T. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Brown, W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 3 Calculating machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9–1 on Calculation, aids in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9–14 Calipers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Chauvenet, W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 49 Cheyney, W. W... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Chi square test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47, 152, 179 lass. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 TBD956 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2, 16 Coefficient of mean square contingency. . . . . . . . . . . . . . . . . . . . . . . . . 100 Color, measurement of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7, 8 Constants and their logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 148, 163 Contingency, coefficient of mean square. . . . . . . . . . . . . . . . . . . . . . . . 100 Correction for attenuation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 Correlated variability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71–118 types of. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71, 72 Correlation between a variable and deviation of a dependent vari- able from its probable value. . . . . . . . . . . . . . . . . . . . . . . . . . . . 95, 96 Correlation charts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Correlation, multiple. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 ratio (n). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87–89 , significance of size of... . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 Counting, methods of... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . , 5 Creamery butter in United States. . . . . . . . . . . . . . . . . . . . . . . . . . 133–138 Crelle, A. L. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Critical function, F. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Crossing over in man..... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126–128 Curve fitting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44, 54–57 Curvilinear correlations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74, 87–89 Cyclical change. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 213 214 INDEX PAGE Data, collection of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . , simplification of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Davies, G. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Degrees of freedom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 38, 59 Determination, coefficient of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 Deviation, average. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 33 Difference between means, significance of . . . . . . . . . . . . . . . . . . . . . 37–39 Dissymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Distribution curve or polygon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18–20 Divergence, index of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Dogwood fruits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65–70 Drager, W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Duncker, G. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49, 54, 55, 139 Dunlap, J. W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9–12 Edgerton, H. A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . y Elderton, W. P. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I Eye color inheritance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101, 106 Ezekiel, M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92, 116, 117 Factorials. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150, 175 amily method of mendelian analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Fechner, G. T. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Fisher, I... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 131 Fisher, R. A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v, 3, 38, 48, 76 Fluctuations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Formulae. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148, 162 Frequency polygons, classes of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41–64 , constants of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 , multimodal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61–64 , types of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 59 , unimodal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 61 Galton's difference problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50, 51 Gaussian curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18–20 Glover, J. W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 18–20 Goodness of fit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46–48 Growth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 Harris, J. A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95, 96, 111 Ogoed, L. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Hollerith system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10–12 Holzinger, K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 13, 91 Homogeneity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Huxley, J. S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Index numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Isolation, index of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kelley, T. L. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91, 118 King, W. I. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Roller, S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Kurtz, A. K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v, 9, 12 Lenz, F. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lexian ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46, 47, 118 Lexis, W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Linearity, test for. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 Linkage in man . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126–128 Literature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141–148 Loaded ordinates, methods of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Logarithms in curve fittings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60, 61 INDEX 215 PAGE Martin, R... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mass method of mendelian analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Mean, arithmetic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 24–26 geometrio. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 27 harmonio. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27–29 Measurement, apparatus for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–7 & , technique. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9 Median. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Mendelian analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Mills, F.C.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 Miner, J. R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Moments in distribution polygons. . . . . . . . . . . . . . . . . . . . . . . . . . . 42, 43 Moving median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Multimodal curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61–64 Multiple correlations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Mutations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Neyman, J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Nomogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Normal frequency curve, comparison with observed curve . . . . . . 46–48 , determination of, from a portion. . . . . . 51, 52 , example of computation of . . . . . . . . . . . . . 49 , formula of... . . . . . . . . . . . . . . . . . . . . . . . . . 45 Odds against a deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151, 178 Ordinates of normal curve. . . . . . . . . . . . . . . . . . . . . . . . . . . 148, 164, 165 Part correlations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 Partial correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 Partial sigmas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 Pearl, R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi, 33 Pearson, K., 9, 20, 28, 33, 41, 44, 48, 51, 52, 59, 62, 73, 79, 90,97, 102–104 Pearson product moment coefficient. . . . . . . . . . . . . . . . . . . . . . . . . . 73–83 Periodicals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 Persons, W. M.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Photographic processes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Pintner, R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Planimeter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6, 7 Plotting of data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17–20 Poisson series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Powers of integers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150, 175 Probability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2, 35, 36 Probability integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149, 166–172 Probable error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 of coefficient of correlation . . . . . . . . . . . . . . . . . . 153, 184 of contingency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . of means. . . . . . . . . . . . . . . . . . . . . . . . . . 37, 150, 176, 177 of standard deviation. . . . . . . . . . . . . . . . 37, 150, 176, 177 of tetrachoric r. . . . . . . . . . . . . . . . . . . . . . . . . . . . • * * * * 108 Quartile deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 32 Range. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Rank difference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 Ratio of variances (F). . . . . . . . . . . . . . . . . . . . . . . . . 38, 70, 152, 180–183 Recessive offspring expected . . . . . . . . . . . . . . . . . . . . . . . . . 155, 194, 195 Rectangles, method of, in plotting frequency distribution. . . . . . . . . 17 Reduction tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150, 173 Regression equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Regression lines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71, 83–87 Rejection of extreme variates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Relative growth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Reliability, coefficient of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92, 93 216 INDEX PAGE) Sampling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35–40 Seasonal change. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 Secrist, H. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Secular change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131-137 Selection of data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seriation of data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15–17 Sex-linked traits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Significance of differences between: correlated means. . . . . . . . . . . . . 38 correlation coefficients. . . . . . . . 76 distributions. . . . . . . . . . . . . . . 46–48 etas . . . . . . . . . . . . . . . . . . . . . . . . 88 In 68,118 . . . . . . . . . . . . . . . 37, 38, 180 variabilities of two small sam- ples. . . . . . . . . . . . . . . . . . . . . . 3 Significant figures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Similarity, coefficient of... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Skew curves. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44, 53, 60, 160 Smeltzer, C. H. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Smith, B. B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116, 117 Snedecor, G. W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v, 38 Snyder, L. H. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v., 122, 126 Soper, H. E. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20, 99 Spearman, C. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78, 93, 94, 117 Spurious correlation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90, 91 Standard deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29–31 Standard error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 37 of biserial eta. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 value of Q. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155, 193 Steggerda, M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 “Student". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sumac leaflets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68–70 Symonds, P. M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 77 Tables, calculating. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tetrachoric correlation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 Tetrad differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 Tippett, L. H. C. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Todd, T. W. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Toops, H. A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Tryon, R. C. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 91 Variability, measures of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29–35 range of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Variables, collective. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Variance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2, 31 , analysis of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65–69 t analysis of, in correlation . . . . . . . . . . . . . . . . . . . . . . . . 96, 97 in correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Variant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Variate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Variates, continuous. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 , discontinuous. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 discrete. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 graduated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1, 5 integral. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1, 4 number of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Variation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 , coefficient of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33–35 , individual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 , intraindividual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 • Organ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 • partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Volume, measurement of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 Whipple, G. M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wiener, A. S... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v, 127, 128 Yerkes. R. M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 Yule, G. U. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 MEMORANDA UNIVERSITY OF MICH IGAN 44 300ft 3- *::::H7;