o C/S/O'^^ o a 4) D METHODS AND STANDARDS FOR ENVIRONMENTAL MEASUREMENT Proceedings of the 8th Materials Research Symposium Held at the National Bureau of Standards Gaithersburg, Maryland 20234 September 20-24, 1976 William H. Kirchhoff, Editor Institute for Materials Research National Bureau of Standards Washington, D.C. 20234 *°»EAV Of " U.S. DEPARTMENT OF COMMERCE, Juanita M. Kreps, Secretary Dr. Sidney Harman, Under Secretary Jordan J. Baruch, Assistant Secretary for Science and Technology NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Acting Director Issued November 1977 Library of Congress Cataloging in Publication Data Materials Research Symposium, 8th>Gaithersburg, Md., 1976. Methods and standards for environmental measurement. (NBS special publication ; 464) Supt. of Docs, no.: C 13.10:464 1. Pollution — Measurement — Congresses. 2. Environmental monitoring — Congresses. I. Kirchhoff, William H. II. United States. National Bureau of Standards. Office of Air and Water Measurement. III. Title. IV. Series: United States. National Bureau of Stan- dards. Special publication ; 464. TD177.M37 1976 628.5 76-608384 National Bureau of Standards Special Publication 464 Nat. Bur. Stand. (U.S.), Spec. Publ. 464, 659 pages (Nov. 1977) CODEN: XNBSAV U.S. GOVERNMENT PRINTING OFFICE WASHINGTON. 1977 For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402 - Price $11 Stock No. 009-003-01704-3 FOREWORD The Office of Air and Water Measurement of the National Bureau of Standards' Institute for Materials Research has responsibility for coordinating the NBS program in air and water pollution measurement. This program consists of a nearly 70 man-year effort utilizing the resources of 12 technical divisions in both the Institute for Materials Research and the Institute for Basic Standards. In addition, the Institute for Basic Standards administers a 30 man-year effort in developing standards for noise pollution measurement. Although the technical results of our efforts are directed to a broad cross section of scientists and engineers charged with environmental measurements, NBS devotes nearly 30 man-years of effort to assisting other Federal agencies with special environmental measurement problems related to each agency's mission. The objectives of the Air and Water Measurement Program at NBS are to provide measure- ment standards such as Standard Reference Materials, to develop new and improved measurement methods to support these standards, and to provide and evaluate chemical and physical data needed to relate accurately environmental emissions to environmental quality. In a sense, we at NBS do not view ourselves as environmental specialists. We are not a regulatory agency. Rather, we are measurement scientists concerned with the reliability of all measurements and particularly with those measurements which affect national policies. The effectiveness of the NBS program in air and water measurement depends critically upon our perception of measurement needs and the usefulness of standards in meeting those needs. Thus, a major goal of this symposium was to illuminate problems associated with air and water measurements. The topics chosen for inclusion in the Symposium ranged from the role of Standard Reference Materials in Environmental Measurement to the Application of Laser Technology to Air Pollution Measurement. 
These topics were chosen either because there is not yet a consensus regarding the applicability of a technology or because the technology is still in an early stage of development. Although we are not ourselves environmental specialists, we nevertheless recognize the importance of living in harmony with our natural resources. We hope that our present and future activities will contribute to a better understanding of the relationship between man's activities and the environment and more reliable methods for achieving this harmony. The active participation of over 350 scientists in this Symposium reflects the commitment of the technical community to our common goal. JOHN D. HOFFMAN, Director Institute for Materials Research III PREFACE This publication comprises the formal proceedings of the 8th Materials Research Sympo- sium, "Methods and Standards for Environmental Measurement." The topics chosen for inclusion in this symposium were those which were in any early stage of development or those which were of particular concern and interest at the time of the symposium. Because of the rapid changes expected to take place in our knowledge and understanding of these topics, timely publication of the proceedings was particularly important. For this reason, this volume consists of extended abstracts rather than full papers on the state of the art in environ- mental measurements and standards. Readers interested in more details concerning a particular topic are urged to contact the appropriate author for more information. The format for the symposium consisted of review papers for each topic presented in a morning plenary session followed by simultaneous afternoon sessions of contributed and invited papers on particular aspects of each topic. Each session was organized by an individual knowledgeable in the topic of the session. The plenary lecturers and session organizers were selected with the guidance of an ad hoe program committee consisting of individuals actively involved in air and water pollution measurement. In addition to the presented papers, a panel discussion on certification of water analysis laboratories was held. Each panel member represented a particular point of view: a Federal regulatory agency, the U. S. Department of Commerce, a state regulatory agency, the industrial sector and the professional chemist. The opening remarks of each panel member are included in these published proceedings. It should be noted that throughout these proceedings certain commercial equipment, instruments or materials are identified in order "to specify adequately experimental pro- cedure. In no case does such identification imply recommendation or endorsement by the National Bureau of Standards, nor does it imply that the material or equipment identified is necessarily the best available for the purpose. The success of a symposium such as this depends on the enthusiastic participation of many dedicated individuals. Many members of the technical division and the staff of the Institute for Materials Research helped in many ways. Several individuals were involved with the symposium from its inception. Ronald B. Johnson and Robert F. Martin of the Institute for Materials Research and Sara J. Torrence of the Office of Information Activities provided the fiscal and administrative management of the symposium including the accommoda- tions, social programs, and logistics. 
In the Office of Air and Water Measurement, Barbara Watkins and Carol Grabnegger gave tirelessly of their time and good humor before and during the symposium and in the preparation of the final proceedings. A special commendation is deserved by Dr. James R. McNesby, who led the Air and Water Pollution Program at NBS since its inception until August 1976 when he left NBS to take an appointment at the University of Maryland. It is through his leadership that NBS has found itself firmly rooted in environmental measurement and the success of this symposium is testimony to the success of that leadership. Finally, and most importantly, the contribution of Eileen Myers as program coordinator is gratefully acknowledged. Those who participated in the symposium are well aware of the role she played in ensuring that all deadlines were met and that all problems were smoothly solved. WILLIAM H. KIRCHHOFF, Acting Chief Office of Air and Water Measurement IV ABSTRACT This book presents the Proceedings of the 8th Materials Research Symposium on "Methods and Standards for Environmental Measurement" held at the National Bureau of Standards, Gaithersburg, Maryland, on September 20 through September 24, 1976. The Symposium was sponsored by the NBS Institute for Materials Research in conjunction with the Office of Air and Water Measurement. The volume contains extended abstracts of the invited and contributed papers in topics of concern at the time of the symposium: Accuracy, the analysis of trace organic compounds in water, multielement analysis, the physical and chemical characterization of aerosols, in situ methods for water analysis, the application of laser technology to atmospheric monitoring, ambient air quality monitoring, the chemical characterization of inorganic and organometallic constituents, reference materials for environmental measurement and finally, environmental laboratory certification and collaborative testing. Key Words: Accuracy; Aerosol; Air; Collaborative Testing; Laboratory Accreditation; Laser Technology; Multielement Analysis; Pollutants; Speciation; Standard Reference Materials; Trace Organics; Water CONTENTS Page Foreword j j j Preface j-y Abstract y Keynote Address XV Part I. ACCURACY Accuracy - An Industrial Viewpoint William 0. Fitzgibbons 3 Ultraviolet Photometer for Ozone Calibration Arnold M. Bass, Albert E. Ledford, Jr. , and Julian K. Whittaker 9 Accuracy of Ozone Calibration Methods Richard J. Paur 15 Interrelationships Between Primary Calibration Standards for Nitric Oxide, Nitrogen Dioxide and Ozone as Applied to Test Gas Atmospheres Generated by Gas Phase Titration Donald G. Muldoon and Anthony M. Majahad 21 An Analysis of the Measurement Accuracy and Validity of Results from the Charcoal Tube Sampling Technique Gerald Moore 37 Achieving Accuracy in Environmental Measurements Using Activation Analysis Donald A. Beaker 43 A Comparison of Factors Affecting Accuracy in Atomic Emission and Atomic Absorption Spectrometry Using a Graphite Furnace for Trace Metal Analysis in Water M. S. Epstein, T. C. Rains and T. C. 'Haver 47 Improved Accuracy in Background Corrected Atomic Absorption Spectrometry Andrew T. Zander and Thomas C. 'Haver 53 Standard Addition Uses and Limitations in Spectrophotometric Analysis Robert Klein, Jr. and Clifford Haoh b1 Part II. DETERMINATION OF TRACE ORGANIC POLLUTANTS IN WATER Unmet Needs in the Analysis of Trace Organics in Water William T. Donaldson 69 GCIR--A Versatile and Powerful Tool for Analysis of Pollutants Leo V. 
Azarraga _ ' ^ The MSDC/EPA/NIH Mass Spectral Search System S. R. Heller 75 Methods for Analysis of Trace Levels (yg/kg) of Hydrocarbons in the Marine Environment S. N. Chesler, B. H. Gump, H. S. Hertz, W. E. May and S. A. Wise 81 Potential Carcinogens in Water: GC/MS Analysis Ronald A. Hites 87 VI Page A New Simple Method for the Recovery of Trace Organics from Water T. D. Kaazmarek 91 Determination of Trace Organic Pollutants in Water by Spectrophotofluorescence After Treatment with Activated Carbon Joseph G. Montalvo, Jr. and Ching-Gen Lee 97 Polyurethane Foam Plugs for Concentration of Trace Quantities of Benzo(a)pyrene from Water J. Saxena, J. Kozuchowski and D. K. Basu 101 Part III. MULTIELEMENT ANALYSIS Thousands of Metal Analyses per Man Day--A Reality in U.S. EPA's Central Regional Laboratory: Multielement (23) Analysis by an Inductively Coupled Argon Plasma Atomic Emission System (ICAP-AES) Richard J. Ronan and Garry Kunse Iman 107 Multielement Analysis of River Water R. Sahe lenz 113 Multielement Analysis in Rainwater J. G. van Raaphorst, J. Slanina, D. Borger and W. A. Lingerak 121 Quantitative Multi -element Analysis of Environmental Samples by X-Ray Fluorescence Spectrometry P. A. Pella, K. Lorber, and K. F. J. Heinrich 125 Multielement Analysis of Air and Water Pollutants in Gold Mines by Thermal and Epithermal Neutron Activation C. S. Erasmus, J. Sargeant, J. P. F. Sellsahop and J. I. W. Watterson '" A Multielement-Direct Reading Method for the Spectral Analysis of Sediment Leachates M. M. Moselhy, D. W. Boomer, J. N. Bishop and P. L. Diosady 1 37 High Efficiency Solvent Extraction of Trace Elements in Aqueous Media with Hexaf 1 uoroacetyl acetone Morteza Janghorbani, Max Ellinger and Kurt Starke 151 Determination of Trace and Minor Elements in the Combustible Fraction of Urban Refuse William J. Campbell, Harold E. Marr, III and Stephen L. Law Part IV. PHYSICAL CHARACTERIZATION OF AEROSOLS Physical Characterization of Aerosols K. T. Whitby 165 Measurement of Aerosol Size Distribution with a Particle Doppler Shift Spectrometer Plan Chabay 175 Instrumental Analysis of Light Element Composition of Atmospheric Aerosols Edward S. Maoias X-R-D Analysis of Airborne Asbestos Preparation of Calibration Standards 1 on M. Fatem%, E. Johnson, L. Birks, J. Gilfrioh and R. Whitlook VII Page Respirable Ambient Aerosol Mass Concentration Measurement with a Battery-Powered Piezobalance Gi Imore J. Sem 191 A Cascade Impaction Instrument Using Quartz Crystal Microbalance Sensing Elements for "Real-Time" Particle Size Distribution Studies D. Wallace and R. Chuan 1 99 A Limitation on Electrical Measures of Aerosols W. H. Marlow 213 The Use of a Modified Beta Density Function to Characterize Particle Size Distributions Alan S. Goldfarb and James W. Gentry 219 Part V. WATER ANALYSIS in- situ Monitoring with Ion-Selective Electrodes—Advantages and Pitfalls, or What the Instructions Didn't Say Richard A. Durst 229 Problems of Mercury Determination in Water Samples Sunao Yamazaki, Yukiko Dokiya and Keiichiro Fuwa 233 Sampling for Water Quality Willie R. Curtis 237 Monitoring Bacterial Survival in Seawater Using A Diffusion Chamber Apparatus In- situ George J. Vasconce los 245 Clean Laboratory Methods to Achieve Contaminant-Free Processing and Determination of Ultra-Trace Samples in Marine Environmental Studies C. S. Wong, W. J. Cretney, J . Piuze and P. Christensen 249 A Modified Procedure for Determination of Oil and Grease in Effluent Waters G. M. Rain and P. M. 
Kerschner 259 Variability of Trace Metals in Bed Sediments of the Po River: Implications for Sampling M. T. Ganzerli-Valentini, V. Maxia, S. Meloni, G. Queirazza and E. Smedile 263 Part VI. APPLICATION OF LASER TECHNOLOGY TO ATMOSPHERIC MONITORING Application of Laser Technology to Atmospheric Monitoring A. Mooradian 277 Laser Monitoring Techniques for Trace Gases William A. McClenny and George M. Russwurm 287 Long-Path Monitoring with Tunable Lasers E. D. Hinkley and R. T. Ku 291 Development of a Two Frequency Downward Looking Airborne Lidar System J. A. Eckert, D. H. Bundy, and J. L. Peacock 295 VIII Page Remote Analysis of Aerosols by Differential Scatter (Disc) Lidar Systems M. L. Wright, J. B. Pollack and D. S. Colburn 301 Dial Systems for Monostatic Sensing of Atmospheric Gases E. R. Murray, J. E. van der Laan, J. G. Hawley, R. D. Hake, Jr., and M. F. Williams. 305 Coherent Anti -Stokes Raman Scattering in Gases J. J. Barrett 315 Highly Selective, Quantitative Measurement of Atmospheric Pollutants Using Carbon Monoxide and Carbon Dioxide Lasers D. M. Sweger, S. M. Freund and J. C. Travis 317 Part VII. CHEMICAL CHARACTERIZATION OF AEROSOLS Chemical Characterization of Aerosols: Progress and Problems William E. Wilson 323 Detection of Individual Submicron Sulfate Particles Yaaoov Mamane and Rosa G. de Vena 327 An Analysis of Urban Plume Particulates Collected on Anderson 8-Stage Impactor Stages Philip A. Russell 335 Size Discrimination and Chemical Composition of Ambient Airborne Sulfate Particles by Diffusion Sampling Roger L. Tanner and William H. Marlow 337 The Identification of Individual Microparticles with a New Micro-Raman Spectrometer E. S. Etz and G. J. Rosasco 343 A Compact X-Ray Fluorescence Sulfur Analyzer L. S. Birks, J. V. Gilfrioh and M. C. Peckerar 347 The X-Ray Identification and Semi-Quantification of Toxic Lead Compounds Emitted Into Air by Smelting Operations Peter F. Lott and Ronald L. Foster 351 Flameless Atomic Absorption Determinations of Cadmium, Lead and Manganese in Particle Size Fractionated Aerosols Mark E. Peden 367 Part VIII. AIR POLLUTION MEASUREMENT Ambient Air Quality Monitoring George B. 'Morgan 381 Applications of Remote Monitoring Techniques in Air Enforcement Programs Francis J. Biros 387 IX Individual Air Pollution Monitors ■ 9 e S. C. Morris and M. Granger Morgan 391 Intercalibration of Nitric Oxide/Nitrogen Dioxide/Ozone Monitors D. H. Stedman and R. B. Harvey ^93 A Reactive Gas Generator Wing Tsang and James A . Walker Semiconductor Gas Sensor Equations for Predicting Performance Characteristics S. M. Toy 405 Monitoring Non-methane Hydrocarbons in the Atmosphere by Photoionization J. N. Driscoll 415 A Study of Vertical Diffusion in the Atmosphere Using Airborne Gas Chromatography and Numerical Modelling R. S. Crabbe 4iy In-situ Quantitation of Background Halofluorocarbon Levels L. Elias 435 Origin and Residence Times of Atmospheric Pollutants: Application of 14 C L. A. Currie and R. B. Murphy 439 Part IX. CHEMICAL CHARACTERIZATION OF INORGANIC AND ORGANOMETALLIC CONSTITUENTS Chemical Characterization of Inorganic and Organometallic Constituents Robert S. Braman 451 Analytical Techniques for the Study of the Distribution and Speciation of Heavy Metals in Aquatic Systems H. J. Tobschall, N. Lawkowski and K. Kritsotakis 459 Inorganic Speciation of Copper in Estuarine Environments David Burrell and Meng-Lein-Lee 461 Use of Ion-Specific Electrodes in Studying Heavy Metal Transformation in Aquatic Ecosystem S. Ramamoorthy and D. J. 
Kushner 467 Electrochemical Studies of the Methylmercury Cation Richard A. Durst, F. E. Brinckman, Kenneth L. Jewett, and John E. Doody 473 An Element -Specific Technique for the Analysis of Organometallic Compounds I. K. Chau and P. T. S. Wong 485 Chromatography-Atomic Spectroscopy Combinations. Applications to Metal Species Identification and Determination Douglas A. Segar and Adrianna I. Cantillo 491 Changes in the Chemical Speciation of Arsenic Following Ingestion by Man Eric A. Crecelius 495 The Determination of Lead in Aqueous Solutions by the Delves Cup Technique and Flameless Atomic Absorption Spectrometry Ealeem J. Issaq 497 X Part X. THE STATUS OF REFERENCE MATERIALS FOR ENVIRONMENTAL MEASUREMENT Page The Status of Reference Materials for Environmental Analysis John K. Taylor 503 Reference Type Samples for Water/Waste Analyses in EPA John A . Winter 509 The Preparation and Analysis of a Trace Elements in Water Standard Reference Material J. R. Moody, H. L. Rook, P. J. Paulsen, T. C. Rains, I. L. Barnes, and M. S. Epstein. 51 5 The Standard Fineparticle Brian H. Kaye 519 Thin Film Standards for X-Ray and Proton Induced X-Ray Fluorescence D. N. Breiter, P. A. Pella and K. F. J. Heinrioh 527 Reference Materials for Automotive Emissions Testing Theodore G. Eokman 53] Long Term Investigation of the Stability of Gaseous Standard Reference Materials E. E. Hughes and W. D. Dorko 535 Standard Reference Materials for the Analysis of Organic Vapors in Air Barry C. Cadoff 541 Working Reference Materials for Lead Contamination Analyses of Air and Water Alfred C. Eokert, Jr. and Dennis M. Mongan 545 Part XI. COLLABORATIVE TESTING Collaborative Testing of Air Pollution Methods John B. Clements 555 Interlaboratory Comparison of Neutron Activation and Atomic Absorption Analyses of Size-Classified Stack Fly Ash J.M. Ondov, R.C. Ragaini, R.E. Heft, G.L. Fisher, D. Silberman, and B.A. Prentice.. 565 Collaborative Testing of a Continuous Chemi luminescent Method for Measurement of Nitrogen Dioxide in Ambient Air John H. Margeson, Paul C. Constant, Jr., Michael C. Sharp, and George W. Scheil. . . . 573 Collaborative Testing of EPA Method II Joseph E. Knoll, M. Rodney Midgett, and George W. Scheil 575 Evaluation of Interlaboratory Comparison Data by Linear Regression Analysis Donald E. King 581 Potential Enforcement Uses of Emission Test Collaborative Studies Louis Paley and Walter S. Smith 597 XI Part XII. AIR POLLUTION, WATER ANALYSIS Page Ion Chromatography--A New Analytical Technique for the Assay of Sulfate and Nitrate in Ambient Aerosols James D. Mulik, Ralph Puokett, Eugene Sawioki, and Dennis Williams 603 The Use of a Gas Chromatograph-Microwave Plasma Detector for the Detection of Alkyl Lead and Selenium Compounds in the Atmosphere Donald C. Reamer, Thomao C. 'Haver, and William H. Zoller 609 The Dutch National Air Pollution Monitoring System - A Focal and Reference Point T. Schneider 613 Analysis and Calibration Techniques for Measuring of Airborne Particulates and Gaseous Pollutants I. Delespaul, H. Peperstrate and T. Rymen 617 Factors Governing the Contents of Metals in Water D. J. Swaine 625 Effects of Water Soluble Components of Refined Oil on the Fecundity of the Copepod, Tigriopus Japoniaus Colin Finney and Anthony D 'Agostino 627 Part XIII. CHEMICAL CHARACTERIZATION OF AEROSOLS A Comparison of Electron Microscope Techniques for the Identification of Asbestos Fibers C. O. Ruud, P. A. Russell, C. S. Barrett, and R. L. 
Clark 635 Determination of Reducing Agents and Sulfate in Airborne Particulates by Thermometric Titration Calorimetry L. D. Hansen, D. J. Eatough, N. F. Mangelson and R. M. Izatt 637 Determination of Acidic and Basic Species in Particulates by Thermometric Titration Calorimetry D. J. Eatough, L. D. Hansen, R. M. Izatt, and N. F. Mangelson ^43 Single-Particle Analysis of the Ash From the Dickerson Coal-Fired Power Plant John A. Small and William H. Zoller 651 Laser-Raman Monitoring of Ambient Sulfate Aerosols R. G. Stafford, R. K. Chang, and P. J. Kindlmann 659 Chemical Characterization of Particulates in Real Time by a Light Scattering Method C. C. Gravatt 669 XII Part XIV. LABORATORY ACCREDITATION - PANEL DISCUSSION Page The National Voluntary Laboratory Accreditation Program T. R. Young 675 The National Laboratory Certification Program for Water Supply Charles Hendricks 677 Laboratory Accreditation C. Eugene Hamilton t °81 Certification of Water and Wastewater Laboratories: A Professional Chemists View H. Gladys Swope 685 Lessons to be Learned from Clinical Laboratory Accreditation William F. Vincent ; 689 XIII NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). METHODS AND STANDARDS FOR ENVIRONMENTAL MEASUREMENT Honorable George E. Brown, Jr. Chairman, Subcommittee on the Environment and the Atmosphere Committee on Science and Technology U.S. House of Representatives Washington, DC 20515, USA Measurement: The First Step I am genuinely pleased to be here this morning with such a distinguished gathering of specialists in the field of environmental measurement science. It heartens me to know that, in commemoration of the Nation's Bicentennial and the 75th Anniversary of the National Bureau of Standards, you are being called upon to look back, review and critique past measurement methods, then emerge with a strong, enlightened vision of future directions. It is a doubly fitting occasion for you to assess and substantiate old approaches to environmental measurement, while also proposing totally new ones. I would like to commend this as a wery timely effort. All too often this past year, in celebration of the Nation's Bicentennial, we have failed to recall that our success and affluence have not been achieved without sacrifices, one of which has been the well-being and purity of our "space ship" Earth. Now is the time that we as a Nation, and as a world community of nations, are considering, reaffirming and expanding our goals for reclaiming and guarding a high- quality environment. This reaffirmation has been tempered by the realization that environmental goals must be balanced with, but not compromised by, other energy and economy- related objectives. The goals have been legitimi tized by persuasive new evidence of health effects, such as the correlation of cancer to environmental factors in a majority (60-90 percent) of all cases. They have been influenced by the prediction of drastic global repercussions of seemingly minor actions, such as the release of halocarbons from common spray cans. The focus of the past has been on regulations to cure existing environmental degradation. Now is the appropriate time to think of the future: to strive for regulations and incentives to prevent the spread of this increasingly pervasive disease. In either case, the need for environmental measurement cannot be overemphasized. 
Whether controlling pollution, or preventing it, we must be able to identify all environmental contaminants and document the nature and extent of their damage to public health and welfare. As entrepeneurs in the field of environmental measurement, you are constantly subjected to conflicting pressures. On the one hand, quality control measures stress the need for accuracy, reliability, and intercomparability of data and techniques. On the other, you are exhorted to produce inexpensive, practical instruments on short order, for immediate use in the field. I must admit that these diverging interests are incorporated into existing pollution control legislation, which often sets deadlines for compliance with regulations, calls for use of "economically achievable" technology, and stipulates that the best, most reliable scientific data be used to justify policy decisions. Can these diverging interests be balanced? Can you, as specialists, overcome your preference for sophisticated, specialized approaches? This dilemma was experienced by a researcher conducting the epidemiological study of sulfur oxides (known as CHESS), for th Environmental Protection Agency. Apparently, measurement methods which were used under controlled laboratory conditions were not suited to widespread application in the field. The accuracy of the data suffered as a consequence. Therefore, I urge you to keep in mind the need to extrapolate from lab to field methods. XV e It is my belief that we are dealing with a heirarchy of needs. It might be useful to imagine a pyramid of instrument types: the narrow apex represents the most accurate, sophisticated apparatus which can be designed, the broad base the array of more rugged, field equipment used to conduct daily monitoring and measuring tasks. The validity of the field data hinges upon the inherent accuracy of those "top" instruments, to which the simpler instruments have been standardized. Thus, it is of primary importance to attain the goal of accurate measurement and analysis before attempting to compare and qualify the data collected under varying conditions. However, that data must be collected; we cannot afford to put off establishing a firm foundation of "baseline" knowledge any longer. Thus, an equal and simultaneous effort must be made to diversify and expand the base of our pyramid. This range of instruments, tightly-knit by measures to verify the accuracy of each component, is what you must seek to develop. The concern at the top of this pyramid is to arrive at standardized calibration points, accurate and reliable characterizations of environmental contaminants in each of the different media. These will then serve as reference points upon which to base all analytical methods. As scientists, you constantly deal with physical limitations of environmental measure- ment methods. In fact, you are dedicated to exploring, defining, minimizing, and, when possible, eliminating them. Thus, you are well aware that all data is qualified by levels of confidence and probability. You recognize that no number is significant, and subse- quently worthy of being recorded, without an estimate of its accuracy. While there is no need for me to reiterate these points to this technical audience, I must remind you that far too often, in practice, they are not considered. Too many people, especially lawyers and legislators, believe the numbers generated by scientists without reservation. 
So, rather than exhort you to present information accurately to society (which I assume you already do), I would like to emphasize the need for the inclusion of caveats whenever data is released, circulated, and interpreted. Let me explain why. In too many instances, the naive faith in unqualified numbers has led decisionmakers to promote regulations and policies whose justification was later seriously questioned, or whose implementation was eventually frustrated. This was the case back in 1970, when the authors of the Clean Air Act Amendments mandated a 90 percent removal of oxides of nitrogen (N0 X ) from automobile emissions. This drastic regulation was based upon measurements which, unbeknownst to these politicians, were seriously in error and consequently distorted the health risks. Subsequent data has tended to re-confirm the wisdom of the original requirement for a 90 percent reduction in N0 X , but for different reasons. As this single example, one of a regrettably large number of similar ones, illustrates, more effort should be channelled into documenting the need for accurate measurements. Users of environmental data must be advised of its potential weaknesses. Unless they are, public officials may continue to formulate strategies whose undeniable repercussions upon the economy and public's health may not be warranted. But this one example points out another aspect of how scientific data is used. Frequently, the right decision is made for the wrong reasons. The guess for N0 X was either good, or lucky. In any case, those who use scientifically generated numbers to support their own policy perspectives frequently distort data to bolster their case. This is normal practice in politics. Even the most seemingly trivial measurements strengthen the first link in the long chain of environmental action: usually it begins with the discovery and quantification of a "pollutant." The next step, based upon the measurement of exposure-effect relationships, enables society to determine the need to reduce or remove the substance in question. Finally, measurements are needed to monitor compliance and the effectiveness of pollution control requirements. In dealing with the accuracy of environmental data, the stakes are high: We cannot risk endangering public health, nor can we justify the economic costs of overprotecting it. Industry should not delay investing in technology to ameliorate pollution, but neither can it afford to waste its scarce resources on innovations of dubious effectiveness. XVI It strikes me that there are strong parallels between the objectives of this scientific "meeting of minds" and those of the House Subcommittee on the Environment and the Atmosphere, which I chair. Let me clarify this comparison somewhat. The Subcommittee was formed less than two years ago, in order to coalesce oversight over federal environmental research and development efforts under a single House Committee. Its major responsibilities include evaluating the state of knowledge on environmental matters and considering ways to optimize the use of research resources. Similarly, you have been called together to oversee the status of environmental measurement techniques and standards. While we legislators are generally concerned that adequate research is being done to insure the timely abatement of pollution, our intuition is not enough: we would be paralyzed without the technical advice of specialists such as yourselves. 
So, we hold hearings to provide the crucial interface between the political and technical spheres. The Subcommittee's findings have been most disturbing: There is a dearth of comprehen- sive baseline data and long-term, basic research in almost e\/ery case of environmental concern, on a wide spectrum of pollutants, particularly under chronic, low-level conditions. Yet, the federal resources are not presently focused on these areas, but are skewed toward short-range, "crisis-oriented", applied environmental research. In many cases, (such as that of sulfates), existing levels of contamination are definitely found to be harmful to human and ecological health, yet we lack the capability to measure and monitor many deleterious substances in air, water and food ^o our satisfaction {i.e., accurately, inexpensively, and speedily). Since it is apparent that the technical foundation of environmental knowledge is frequently unreliable, and unquestionably piece-meal, it is essential to delineate on a high priority basis what additional data are needed. We need to expand and upgrade the current data base, using refined techniques, but we can no longer continue to do this haphazardly. If the situation I have depicted is not bleak enough, couple it with the scenario described by the Office of Technology Assessment (OTA) in a recent study of the Office of Research and Development (ORD) in the EPA: Authority for developing and standardizing valid analytical methods is scattered throughout the federal agencies. While the EPA's "equivalency" program is a step in the right direction, there still is no designated central standardizing procedure. This provides fertile ground for bureaucratic rivalries and for cumbersome, time-consuming procedures which seriously delay the introduction, acceptance and utilization of improved instrumentation. Thus, it is also essential to standardize the definitions of problems, and the methods and results of environmental research and monitoring, in order to coalesce local, state and regional efforts into a coherent, usable whole. The question is: who will accept these responsibilities? Scientists, legislators, health experts and government officials, industrial representatives and the public all must voice their concerns and preferences. Can this multidiscipl' 1 ' ', varied input be coordinated within the present federal structure? At this point, some legislators and government officials might be tempted to proffer the stock solution: an increase in funding. However, I will have to risk my popularity by saying that this approach is neither politically sound, nor scientifically ethical. It does no good to infuse a body with new blood unless the circulatory system is functioning. Considering the tightness of the federal environmental research (R&D) budget, we must first make a credible attempt to make better use of what we already have. We must address the issues: the direction, quality, priority, and logic of the federal programs to attain the legislated environmental goals. XVII As the OTA study additionally notes, the numerous Government agencies which are involved in this effort, are not integrated. Clearly, the challenge of the future is to create explicit responsibility within the federal structure for the coordination and evaluation of environmental research and development activities. I might add that this challenge is not new, but has been conveniently side-stepped for some time. 
Since a major campaign issue this year is the reorganization of the Executive branch, it is time to exploit it. If this momentum does indeed exist, you here today, among others, will have a major responsibility to channel your solutions into this endeavor. In a closing note, let me return to a comment I made earlier concerning the need to focus on the prevention of future crises. A major focal point of any reorganization effort must be the development of an "early warning system". This has become increasingly apparent, expecially as issues of overwhelming global significance are being raised. The depletion of ozone is a classic, and perhaps the most immediate international environmental problem. The release of halocarbons in even a single location on the globe could ultimately endanger the well-being of the world's population. The realization that increased skin cancer or massive crop damage may lag decades behind the cause of ozone depletion makes it critical to detect subtle changes before adverse effects occur. Only then can credible policies for prevention be proposed. International cooperative research and monitoring efforts will require close partnership, full disclosure of techniques and their limitations, and intercomparability of results. These are problems you will have to deal with, before the control policies of any one country will be adopted by the world community of nations. I have outlined some of the responsibilities which you, in providing the means towards achieving a high quality environment, have towards society. Let us make those first steps count! XVIII Part I. ACCURACY NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gai thersburg, Md. (Issued November 1977) ACCURACY - AN INDUSTRIAL VIEWPOINT William 0. Fitzgibbons The Standard Oil Company Research and Engineering Department Warrensville Laboratory Cleveland, Ohio 44128, USA 1 . Scope The scope of this paper deals primarily with the industrial viewpoint of accuracy. We are dealing with non-professional union personnel. They are analyzing water samples and doing stack sampling. Ambient air will not be dealt with here because ambient air sampling and analysis is analogous to weather forecasting. Ambient air analysis is the concern of the whole community. It deals with all air pollution sources, not just industry alone and therefore it should be the responsibility of the government. In environmental analysis, we are dealing primarily with empirical and not absolute measurements. Analyses such as dichromate oxidizables; oxygen depletion after five days in a 300 ml glass bottle; stuff stuck on a fiberglass filter; freon extractable non-volatile compounds at test conditions; steam distillable material which reacts with AAP; and finally total Kjehldahl nitrogen. All of these techniques are empirical tests, and only one, total Kjehldahl nitrogen, admits to the fact that it is the total nitrogen obtained under the specific test conditions. Why do we have these source and empirical analyses? We have them because the regula- tions are based on source and effluent standards. In other words, the allowable emissions are based on the results obtained using an empirical test on a survey of many installations. Recently, there has been a trend from source and effluent standards to stream and toxic effluent standards. Because of this switch we are going to have to go to absolute test methods. 
These are methods which measure absolute values. Examples of absolute test methods include metal analyses, temperature, organic carbon, chloride, sulfate and fish toxicity.

Before we proceed further, it is important that we define what we mean by accuracy, precision, bias, reproducibility, and repeatability. Figure 1 is a drawing of a target. In the center of the target is the right answer. What we are trying to do is get all of our shots in the bull's-eye. Unfortunately, that is not the real world. In figure 2 you see that the shots are all very close together, but they are not anywhere near the bull's-eye. This is an inaccurate result, but quite precise. Figure 3 is a drawing of shots which, on the average, are centered around the bull's-eye. The shot identified as A is the first test done by the chemist. As you can see, it was very accurate. Unfortunately, most times they stop there, and we never find out how imprecise a result is. In figure 4 you will see that all of the shots are very close together, and they are located in the bull's-eye. They are very precise and very accurate. The distance of the shots, taken as a group, from the bull's-eye is a measure of the bias. Reproducibility is the difference between results obtained by two different laboratories. Repeatability is a measure of how well any particular analyst can repeat himself.

Figure 1. Target drawing with center being right answer.
Figure 2. Target drawing, inaccurate results but precise.
Figure 3. Target drawing, accurate, not precise results.
Figure 4. Target drawing, accurate and precise results.

2. Steps Involved in Obtaining Accuracy

A. Sampling

About one-half the problem of obtaining accuracy is getting the sample. In fact, in many cases it is one-half the cost. This is particularly true when you are dealing with stack sampling. The object is to have the sample represent the effluent.

B. Method selection

The first step is to find out what is available. Generally this is done by a literature search starting with the American Public Health Association (APHA) Standard Methods, ASTM, or the EPA Manual.

The second step is to determine what you are going to do with the results. In some cases regulations specify the test method which you should use. You must check to make sure the scope of the method covers the applications which you intend to do. Find out who is going to run the test. You may need a completely different test if it is going to be run by a Ph.D. than if it is to be run by an operator. Lastly, you need to be concerned with what is going to be done with the results. In many cases the analytical test is different, depending on what is to be done with the results. Is it to be used for designing pollution control equipment, is it to be used for NPDES requirements, or is it to be used for performance evaluation?

The third step is to compare your results with another test method; that is, independent confirmation. An example is work we did with cyanide and fish toxicity. The ASTM and APHA cyanide methods indicated very high cyanide levels, on the order of ten times the median tolerance limit (TLm) for fish. We put fish in the tank and the fish lived in straight effluent. Thus, we deduced that what we were measuring was not cyanide but rather an interference. After much investigation we found that the test method was measuring thiocyanate as cyanide, but thiocyanate isn't toxic. This work points out what we now call the "Spike Recovery Fallacy." Spike recoveries will not identify matrix interferences.
They will simply tell you whether or not you can recover the amount you spiked. The last step in obtaining an accurate method is to have it written so it can be evaluated. It must be clearly written in an acceptable format. C. Training Our experience has been that serial training is bad. Train from the method and use a teacher whose main job is to train. Do not use the person who is running it to train the next person. This institutes everyones short cuts, and by the time the third person is training there is very little semblance between what is done and what the written method says. An analyst is qualified at the end of training, not by the fact that he has attended 4, 8, or 12 hours or days of training, but rather the fact that he has demonstrated an ability by running unknown samples or by showing he is able to do the procedure. This ability must be demonstrated by running test knowns and effluents, and getting results within the es- tablished precision and accuracy of the method. Documentation of this ability supports the fact that this analyst is trained and capable of getting the right answer. D. Internal qua! ity assurance Internal quality assurance has been broken into two phases. Phase I is the portion which is normally done by a reputable analyst. This includes the preparation of blanks, standardization, calibration, and replication. The cost of doing Phase I Internal Quality Assurance is, in most cases, included in running the procedure. A blank almost always needs to be run if for no other reason then to set up the colorimeter. Phase II in our internal quality assurance program is a programmed system of duplicates and spike recoveries. It may or may not be known to an analyst which samples are duplicates or spikes. The point is that it is not a police system, but is a self-controlled system wherein the analyst is running his own internal quality assurance program. There are two examples of this, the EPA Quality Assurance Manual points out the Shewhart and cusum tech- niques. Our experience has been that the Shewhart technique has been much better than the cusum technique for our environmental analysis quality assurance program. Quality assurance is applicable to stack sampling, primarily in the sense of knowing what to look for, of running blanks, and calibrating your equipment. E. External quality assurance External quality assurance involves at least three different programs. One of them is round-robin audits. A central lab prepares samples, and ships them out. Each laboratory analyzes them, the results are sent back, and tabulated. A report is sent to each lab showing how they compared to the known and with other analysts. A second very important portion of external quality assurance is to have your analysts and analyses checked by independent laboratories. This is about the best and maybe the only way to do stack sampling analytical quality control. Have a contract organization come in and run a side-by-side sample. This does two things. First, it points out to management the cost involved with having an outsider come in, and second, it shows them that stack sampling is a pesty job no matter who does it. The third external quality assurance program we use is watch audits. Someone from the central lab, probably one of the teachers, comes to each lab and watches the analyst go through the technique. He checks that the analyst is not doing any shortcuts and checks the standard curves. Are they up to date? 
This is done in our company annually for every analyst doing environmental sampling. F. Trouble-shooting Basically this step involves solving problems which are identified in other areas, such as method selection, or the internal or external quality assurance programs. The cure is what is defined as trouble shooting. 3. Costs Involved With Accuracy in Environmental and Analytical Quality Control There is a mass of literature available from the regulatory agencies and their contrac- tors on analytical quality control. I am sure that the costs are significant. In many cases the analytical quality control data is useful to us. Projects such as ASTM-Project Threshold are also involved. Costs associated with analytical quality control are large. The costs are those associated with our corporation which involves three refineries, two large petrochemical complexes and four quite small petrochemical complexes. A. Sampling The cost associated with sampling ranges from $100 to $35,000. One hundred dollar sampling locations are generally used for grab type water effluent samples or process gas samples. It costs $35,000 to install a stack sampling platform and sampling connections on a fluid cat cracker at one of our refineries. On an average, it costs us about $10,000 per effluent to install sampling and flow measuring equipment to take a composite sample, preserving it with five kinds of preservatives so that the results are acceptable for the National Pollutant Discharge Elimination System (NPDES) monitoring program. B. No analytical quality control Seventy thousand dollars per year per effluent is the cost for NPDES testing for an effluent guideline source with 14 parameters. This $70,000 per year per effluent does not make one gallon of gasoline. What is even worse, it doesn't improve the environment, it merely tells us in an accurate way where we are. C. Methods selection The costs associated with this step vary between $500 and $50,000. The simple non- controversial methods generally cost $500 or less. Basically what we are doing is evalua- ting an existing method with which we are familiar, to find out whether it has applicability to a new source. On the other hand there are complex problems which may cost between $10,000 and $50,000. Cyanide: in 1974-1975 we spent $42,000 for method evaluation on the cyanide analysis. Oil and grease analysis: method development cost us $29,000 in the last two years. On the stack sampling side, we spent $16,000 working up the H 2 S method. Other examples with significant costs have been the FSU0D method (first stage ultimate oxygen demand). Hexavalent Chrome is another quite simple technique where we have spent significant amounts of money simply to select a method which will work in our effluent and involved selecting a method from only three possible sources. D. Training A total cost on a corporate basis is only $5,000. This includes only the teachers cost, it does not include the cost of the analyst being trained. The cost on a per analyst basis is $850. More assistance is needed here. The training manuals don't work very well in training the kind of people we are trying to train. You need films or videotapes, some training technique like television which can get the point across. E. Internal quality assurance Phase I internal quality assurance costs are included in the basic "no analytical quality control" costs. In other words, the $70,000 per year per effluent includes our Phase I or "expected" analytical quality control. 
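The Phase II internal program described earlier rests on routine duplicates and spike recoveries tracked on Shewhart-type control charts. Before turning to what that program costs, the following sketch (illustrative only; the recovery history and concentrations are hypothetical, and this is ordinary control-chart arithmetic rather than the EPA manual's exact procedure) shows the two calculations an analyst would be making.

```python
from statistics import mean, stdev

def percent_recovery(spiked_result, unspiked_result, spike_added):
    """Percent recovery of a known spike. As noted earlier, a good recovery only
    shows that the spike came back; it cannot expose a matrix interference such
    as an interferent being reported as the analyte."""
    return 100.0 * (spiked_result - unspiked_result) / spike_added

def shewhart_limits(recoveries):
    """Center line and 3-sigma control limits for a Shewhart chart of recoveries."""
    center, sigma = mean(recoveries), stdev(recoveries)
    return center - 3.0 * sigma, center, center + 3.0 * sigma

# Hypothetical recovery history (percent) for one analyst and one method
history = [98.0, 101.5, 97.2, 103.0, 99.4, 100.8, 96.9, 102.1]
lcl, center, ucl = shewhart_limits(history)
latest = percent_recovery(spiked_result=0.270, unspiked_result=0.070, spike_added=0.200)  # mg/L
print(f"latest recovery {latest:.1f}%; in control if {lcl:.1f} <= {latest:.1f} <= {ucl:.1f}")
```

A point falling outside the control limits would be the trigger for the trouble-shooting step described above.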
The costs of Phase II analytical quality control are dependent on the number of samples. The EPA says that you can run one replicate and one spike with each ten samples. If you're running three samples a week as required by your NPDES permit, that means a replicate every three weeks. Obviously that is not a very workable program. The estimate, based on one sample and one spike e\/ery ten samples, in the Analytical Quality Control Manual is 30 percent. Our actual costs have been shown to be more like 100 percent or approximately $50,000 per installation. Initially, the costs were more than 100 percent. After the program was debugged and they got started, it dropped to a 65 percent increase. It's maintained at about 65 percent. The reason is that less time is required for set-up and data digestion as the program becomes more familiar. However, more time is spent in finding causes of problems. In other words, more time is spent trouble-shooting. F. External quality control On a corporate basis our cost for this, program is $16,000 a year. This is only the corporate cost, it does not include the cost for running the control samples in the various installations. This is independent of the size of the company, or the number of the labs, except when you're talking about trouble-shooting. The more labs you have the more troubles and the more challenges. We're just getting started in the trouble-shooting. In the past we have had many analyst problems to be solved. We are now getting into methods problems. We are not in a position to estimate the cost of trouble-shooting. 4. Justification or Environmental Analytical Quality Control Program There are four basic reasons to be used as justification. It keeps us out of trouble with regulatory agencies. Secondly, it avoids bad publicity in an industry that must sell to the public. Third, it keeps regulatory agencies "honest". This is not to imply that they are dishonest, but rather, it makes them think about the methods that they are propos- ing. The most important reason, which is number 4 is that it reduces capital and operating costs. The first three justifications are hard to put dollar and cents values on. Bad press does effect the image of an industry that is selling to the public. The one that sells the program is that it does reduce capital and operating costs. Following are four case histories involving instances wherein an analytical quality control program reduced or eliminated capital or operating costs. The first one involves oil and grease removal. Monitoring using the freon gravimetric technique showed that there was excessive oil and grease in the effluent. The engineers brought out a variety of equipment designed to remove oil by gravity techniques from water. None of these techniques reduced the oil content of the effluent significantly. We went back and evaluated the method doing all of the steps involved with methods selection. We found that the oil and grease that we were measuring by that technique was not oil and grease in the classic sense but rather an organic material which is soluble in water, but was also quite soluble in freon. We were able to show that there were other methods of removing this soluble organic and no oil and grease equipment was purchased. The reason being there was no oil or grease problem. The second instance involves suspended solids. 
Analytical results indicated excessive suspended solids in the effluent, yet when compared to a neighboring domestic sewage treat- ment plant which had about one-half the suspended solids levels, the turbidities were equivalent. A little detective work showed that each installation was using a different fiberglass filter. Both types of filters were evaluated on both effluents. One of the filter types resulted in suspended solids values approximately 50 percent of those obtained by the other type of fiberglass filter. Incidentally, both types of fiberglass filters are acceptable according to the EPA Manual. By this change from one type to another type of suspended solids filter the NPDES suspended solids limit were met and final filtration was not required. The estimated capital cost for final filtration for this installation would have been $900,000. That expenditure was not made. Variability reduction is another example. Much waste treatment design is based on a maximum load. One of the results of an analytical quality control program is a reduction in variability. Much of the effluent variability turns out to be analytical rather than process. It is a known fact that equipment costs for 50 to 70 percent reductions are much less than that required for 90 to 95 percent. By reducing the variabil ity,a cost savings is realized by reducing the level of treatment required while still meeting effluent limits. The last case in point involves cyanide. The proposed effluents regulations were issued a while ago and they included cyanide. The analytical method was found to be inaccurate and the cyanide in refinery effluents is present as complex, and relatively non-toxic, cyanide, and not free cyanide. Since that time, the proposed toxic effluent regulations have been withdrawn. In conclusion, accuracy is expensive, but worth it. A million dollar lab, with all the fancy equipment, and highly educated people, with all the credentials, does not mean that their analytical result is right. Only the right answer is accurate. The right answer is assured with an Analytical Quality Assurance program. NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). ULTRAVIOLET PHOTOMETER FOR OZONE CALIBRATION Arnold M. Bass, Albert E. Ledford, Jr., and Julian K. Whittaker National Bureau of Standards Washington, DC 20234, USA 1. Introduction In order to provide a facility for photometric ozone measurements, we have designed and constructed a double-beam photometer for ozone concentrations in the range 0.025 to 1.0 ppm. The sample path length of this instrument is approximately 300 cm. The instru- ment measures changes in ozonized-air sample transmissions of mercury radiation at 253.7 nm where the photoabsorption cross-section of ozone has been well determined. Radiation at wavelengths other than 253.7 nm from the mercury lamp is removed by passing the light through narrow-band interference filters. The light is collimated and passed through a beam splitter which directs approximately equal intensity beams through the two cells. Clean air flows through one cell into the ozone generator and then the ozonized air flows through the second cell. The light beams are recombined on the face of a photo- multiplier tube used in the photon counting mode. 
A rotating chopper allows the two beams to be detected sequentially so that the transmissions of the two cells may be directly observed. Tests indicate that measurements may be made at the 0.05 ppm level with a precision of 10 percent or better. Ozone calibration data with the 3-meter photometer agreed within 1 and 2 percent with gas phase titration and UV photometric O3 measurements, respectively, made at the EPA laboratories in Research Triangle Park, North Carolina.

2. Experimental

The oxidation of iodide to iodine by ozone, in a properly prepared solution of potassium iodide, is the basis for the reference method specified by the Environmental Protection Agency [1]¹ for the calibration of atmospheric monitors. Recent comparative measurements [2] of the specific iodometric methods have raised serious doubts as to the accuracy and reproducibility of the iodometric calibration procedures. The report of the California Air Resources Board [2] recommended that oxidant analyzers in California should be calibrated by a UV photometric method rather than by the iodometric method, and in May 1975 this recommendation was accepted for the state monitoring network. At the present time the U.S. Environmental Protection Agency is considering two candidate methods, gas phase titration and ultraviolet photometry, as replacements for the 1 percent neutral-buffered potassium iodide procedure which is the current Federal Reference Method for calibration of pollutant monitors [1].

¹Figures in brackets indicate the literature references at the end of this paper.

In order to provide a facility at NBS for the measurement of ozone concentrations, independently of gas phase titration based on a nitric oxide standard, it was decided to set up an ultraviolet photometer [3] that would have the desired sensitivity for ozone measurements at ambient concentrations. The desired performance for the photometer was to be able to measure ozone concentrations over the range 0.05 to about 1.0 ppm with an accuracy of at least 0.005 ppm over the entire range. The photometric measurement method is based on the application and the validity of the Beer-Lambert Law:

$$\mathrm{Tr} = \frac{I}{I_0} = \exp\left(-\frac{273\,k\,c\,L\,P}{10^{6}\,T}\right) \qquad (1)$$

where:
c is given in ppm (parts per million by volume)
k = 308.5 cm⁻¹ atm⁻¹ (base e) is the ozone absorption coefficient [4] at 253.7 nm, 273 K, and 1 atmosphere
L is the path length, cm
P is the total pressure, atm
T is the temperature of the cell, K
I/I₀ is the transmittance (Tr) of the sample.

The design of the photometer is based principally on the accuracy requirement, 10 percent at 0.05 ppm. The quantities k, L, P, T appearing in the equation are all known or can be measured to within 1 or 2 percent. Thus the accuracy of the concentration measurement is mainly determined by the accuracy of the transmittance measurement. The error in the transmittance measurement may be expressed as

$$\frac{\Delta c}{c} = \frac{\Delta \mathrm{Tr}}{\mathrm{Tr}\,\ln \mathrm{Tr}} \qquad (2)$$

It was estimated that the transmittance measurement could be made with a precision ΔTr/Tr of about 0.0005 by using photon counting. For a concentration of 0.05 ppm these conditions imply a transmittance of 0.995, which can be achieved in an absorbing path of approximately 3 meters.

The design that was selected for the photometer is shown in figure 1. It was decided that a double-beam arrangement would provide greater precision in the measurement through elimination of the effect of variability of the UV source.
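Equations (1) and (2) are simple enough to invert directly. The short sketch below, written in Python purely as an illustration (the function and variable names are mine, not the paper's), recovers the ozone concentration in ppm from a measured transmittance and propagates a transmittance precision of 0.0005 through equation (2); it reproduces the roughly 10 percent concentration uncertainty quoted above for 0.05 ppm in a 3-meter cell.

```python
import math

K_OZONE = 308.5  # ozone absorption coefficient at 253.7 nm, cm^-1 atm^-1 (base e), 273 K, 1 atm [4]

def ozone_ppm(transmittance, path_cm=300.0, pressure_atm=1.0, temp_k=273.0):
    """Invert eq. (1), Tr = exp(-273 k c L P / (1e6 T)), for the concentration c in ppm."""
    return -math.log(transmittance) * temp_k * 1.0e6 / (K_OZONE * path_cm * pressure_atm * 273.0)

def relative_concentration_error(transmittance, delta_tr=5.0e-4):
    """Relative error in c from eq. (2): dc/c = dTr / (Tr ln Tr)."""
    return abs(delta_tr / (transmittance * math.log(transmittance)))

# 0.05 ppm of ozone in a 3 m cell gives Tr ~ 0.995; a transmittance precision of
# 5e-4 then corresponds to about 10 percent in concentration, as stated above.
tr = math.exp(-K_OZONE * 0.05e-6 * 300.0)          # transmittance at 0.05 ppm, 273 K, 1 atm
print(round(ozone_ppm(tr), 4), round(100.0 * relative_concentration_error(tr), 1))
# -> 0.05  10.9
```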
The cells of the photometer are made of 1-1/2 in. diameter Pyrex pipe; Teflon gaskets are used to make vacuum-tight seals for the fused silica windows. The light from a low-pressure mercury discharge lamp is passed through a narrow-band interference filter in order to isolate the 253.7 nm emission line. The light is collimated by a fused silica lens and passed through a partially transmitting neutral density filter which serves as a beam splitter. The two beams then pass through the two absorption cells. Adjustable aperture stops limit the diameter of the beams to ensure that there are no reflections from the inner walls of the cells. The light beams emerge from the cells and are recombined on the face of a photomultiplier tube by another partially reflecting filter.

Figure 1. Ultraviolet photometer for ozone calibration.

The differential UV absorption method of photometry adopted for O3 concentration measurements requires the precise and accurate measurement of two light intensities, one for each cell. It was decided to use photon counting techniques for these intensity measurements, using a UV-sensitive tube with excellent single photo-electron resolution. If such a tube is cooled to about -20 °C, the dark count rate is a few counts per second. Utilizing high-speed electronics and very precise timing methods it is possible to obtain accurate, and statistically well-characterized, pulse counts corresponding to the incident light intensity on the photomultiplier. This method may be preferable to analog techniques, which are more subject to instability, drift, and uncertain amounts of non-linearity.

The Hg vapor lamp is energized either by a 10 kHz square-wave power oscillator or by a 60 Hz high voltage transformer. Light passing through each sample cell is alternately allowed to fall on the photomultiplier by means of a light chopper. A chopper blade with a single hole is driven by a hysteresis synchronous motor at a preselected rate (approximately 23 Hz), chosen to be unrelated to any harmonic or subharmonic of the line frequency. Light-emitting-diode/phototransistor pairs are used to sense the position of the chopper. The signal from the phototransistor triggers a discriminator to start the timing and counting cycle for each sample tube. A logic system, triggered by the discriminators, controls the pulse counters associated with each sample tube. In order to ensure precise counting times as the photomultiplier is exposed to each tube, an electronic gate is used to ensure that the photomultiplier is fully (and not merely partly) exposed to the light beam passing through the sample. The photomultiplier tube, which must be selected for gain, low dark count rate and, most importantly, negligible afterpulsing, detects the photons as they arrive. This type of tube, with outstandingly good single photon resolution, is essential for this measurement. The pulse output from the photomultiplier is amplified 100 times by direct-coupled amplifiers, and the pulses are detected by a high-speed pulse amplitude discriminator. Pulse counting is performed by conventional 100 MHz pulse counters, and a rough indication of the overall pulse rate is provided by a rate meter. Counting time is determined by a preset counter which counts the revolutions of the chopper blade past the LED/phototransistor pairs. At the end of a counting interval, the results are printed out and the sequence repeats.
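As a rough aside (not from the paper), simple Poisson counting statistics indicate how many pulses must be accumulated to reach the 5 x 10^-4 transmittance precision quoted earlier; the count rate and chopper duty cycle below are hypothetical round numbers.

```python
# Poisson-statistics sketch: counts per channel needed for a given relative precision,
# and roughly how long that takes at an assumed photomultiplier count rate.
target_rel_precision = 5e-4                       # desired Delta_Tr/Tr from the text
counts_needed = 1.0 / target_rel_precision**2     # Poisson: sigma/N = 1/sqrt(N)
count_rate = 1.0e6                                # assumed counts per second (hypothetical)
duty_cycle = 0.5                                  # assumed: two beams share the chopper

seconds = counts_needed / (count_rate * duty_cycle)
print(int(counts_needed))   # 4,000,000 counts per channel
print(round(seconds, 1))    # ~8.0 s of counting per channel under these assumptions
```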
House air, dried and filtered, flows through one cell ("reference cell") and then into an ozone generator [5], from which ozonized air flows through the second cell ("sample cell"). The measurement is made by comparing the ratio of the signals transmitted by the two cells in the presence and in the absence of ozone. This provides the transmittance (Tr = I/I0), and the ozone concentration is determined by application of the Beer-Lambert law, as discussed above. Since the mercury lamp is viewed simultaneously through both cells, fluctuations in lamp intensity do not affect the measurement. Any impurities present in the air stream will be observed in both cells and will not interfere with the O3 determination.

The performance of the photometer has been determined over the ozone concentration range 0.020 to 1.500 ppm. At each of the measured concentrations the standard deviation was less than 0.005 ppm. In order to compare the photometer with the other methods of calibration, a Dasibi commercial photometer was calibrated against the NBS laboratory photometer. The Dasibi instrument then served as a transfer standard for interconnecting the measurements made by the different methodologies in different laboratories. Multiple analyses were made at several concentrations, and a linear regression analysis of the averaged data can be written as:

    [O3]UV = (1.020 ± 0.004)[O3]DAS - (0.001 ± 0.002)

For additional comparative data the NBS Dasibi instrument was transported to the EPA Research Triangle Park facility. Simultaneous O3 comparisons were made by Dr. J. A. Hodgeson of the Office of Air and Water Measurement, NBS, at the EPA Environmental Monitoring and Support Laboratory, where concurrent evaluations of the gas phase titration method (GPT) and a modified Dasibi photometer [6] were being performed. From these measurements the following relationships were obtained:

    [O3]GPT,EPA = (1.01 ± 0.02)[O3]UV,NBS + (0.011 ± 0.003)
    [O3]UV,EPA = (0.98 ± 0.01)[O3]UV,NBS

Thus the agreement between UV photometric and gas phase titration measurements is excellent. Comparisons between UV photometry and iodometry, using either neutral phosphate buffered KI or boric acid buffered KI, show a much larger discrepancy. This discrepancy is currently being examined in detail.

References

[1] Reference Method for the Measurement of Photochemical Oxidants Corrected for Interferences due to Nitrogen Oxides and Sulfur Dioxide, Federal Register 36, 8195-8197 (30 April 1971).
[2] (a) Comparison of Oxidant Calibration Procedures, Report of the Ad Hoc Oxidant Measurement Committee of the California Air Resources Board, Sacramento, CA (20 February 1974). (b) DeMore, W. B., Romanovsky, J. C., Feldstein, M., Hamming, W. J., and Mueller, P. K., Interagency comparison of iodometric methods for ozone determination, in Calibration in Air Monitoring, ASTM Special Tech. Publ. 598, 131-143 (Philadelphia, 1976).
[3] Clements, J. B., Summary Report: Workshop on Ozone Measurement by Potassium Iodide Method, EPA-650/4-75-007, 36, U.S. Environmental Protection Agency, Washington, DC 20460, February 1975.
[4] The value of k used in this work (308.5 cm^-1 atm^-1) is based on an evaluation by R. Hampson and D. Garvin of measurements reported in the published literature: (a) Inn, E. C. Y. and Tanaka, Y., J. Opt. Soc. Am. 43, 870 (1953). (b) Hearn, A. G., Proc. Phys. Soc. 78, 932 (1961). (c) DeMore, W. B. and Raper, O., J. Phys. Chem. 68, 412 (1964). (d) Griggs, M., J. Chem. Phys. 49, 857 (1968). (e) Simons, J. W., Paur,
R. J., Webster, H. A., and Bair, E. J., J. Chem. Phys. 59, 1203 (1973). (f) Becker, K. H., Schurath, U., and Seitz, H., Int. J. Chem. Kinet. 6, 725 (1974).
[5] Hodgeson, J. A., Stevens, R. K., and Martin, B. E., ISA Trans. 11, 161 (1972).
[6] Paur, R. J., Baumgardner, R. E., McClenny, W. A., and Stevens, R. K., Status of Methods for the Calibration of O3 Monitors, extended abstract presented before the Division of Environmental Chemistry, ACS Meeting, April 1976, New York.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ACCURACY OF OZONE CALIBRATION METHODS

Richard J. Paur
U.S. Environmental Protection Agency, Research Triangle Park, North Carolina 27711, USA

1. Introduction

The calibration of monitors for measuring ambient ozone concentrations presents special problems because the instability of ozone makes it impossible (at least for the present time) to prepare a certified ozone standard. Therefore, ozone must be generated in a dynamic calibration system and assayed each time a monitor is to be calibrated. The U.S. Environmental Protection Agency is considering candidate methods as replacements for the 1% neutral-buffered potassium iodide procedure (40 CFR Part 50, Appendix D) that is the current Federal Reference Method for assaying ozone in a dynamic calibration system. Two candidate methods, gas phase titration (GPT) and ultraviolet photometry (UV photometry), have been used for determining ozone concentrations for some time and are generally accepted as valid methods. A third method, 1% boric acid buffered potassium iodide (BKI) [1]¹, is new and is in the preliminary stages of evaluation. (¹Figures in brackets indicate literature references at the end of this paper.)

2. Discussion

The precision of these candidate methods is estimated by repeated determinations of the O3 output from a stable ozone generator while simultaneously verifying the stability of the generator by other methods. Owing to the lack of an ozone standard in the concentration range of interest, the accuracy of the procedures used for the assay cannot be individually verified. The approach taken to ensure that the assay methods provide reliable estimates of the ozone concentration is based on a comparison of results of two or more methods with the following characteristics:
1) The methods should be accurate at concentrations where the O3 can be determined manometrically.
2) The methods should be precise, i.e., the 95 percent confidence interval should ideally be no larger than ±5 percent of the ozone concentration.
3) The methods must be based on different measurement principles.
4) Extension of the methods to low concentrations should be well based in theory.
5) The methods should have a sensitivity of at least 2 ppb.
6) The sources of probable error in the methods should be easily identifiable.

The accuracy of the UV photometric procedure is dependent on the accuracy with which the absorptivity of ozone, the transmittance of the ozone sample, and the optical pathlength through the ozone sample can be measured. The optical pathlength in a typical absorption cell can be measured to within 0.5 percent of the length without difficulty. Depending on one's choice of equipment, the transmittance of the sample can be measured to within 1 part in 10^3 to 1 part in 10^5.
Critics of the UV photometric method have questioned the accuracy of the ozone absorptivity at 254 nm; however, a review of the literature (Table 1) indicates that the value of 134 atm^-1 cm^-1 used in this work is probably accurate to within 1.5 percent.

Table 1

Ref.   Workers                              Ozone absorptivity a, atm^-1 cm^-1 (base 10)   Method
[2]    Inn and Tanaka (1953)                133                                            Manometry
[3]    Griggs (1968)                        132                                            Manometry
[4]    Becker, Schurath and Seitz (1974)    135                                            Manometry
[5]    Hearn (1961)                         134                                            Decomposition stoichiometry
[6]    DeMore and Raper (1964)              135                                            Decomposition stoichiometry
[7]    Clyne and Coxon (1968)               136 (250 nm)                                   GPT

For most of the experimental work reported in this paper, the UV photometer used was a modified [8] Dasibi (1003-AH) instrument. The precision of the Dasibi photometer is generally significantly better than the stability of the ozone generator/flow system. In experiments employing flow control units (Tylan FC-260) to control the air flow through the ozone generator, the precision of the photometer is approximately 1 ppb (one standard deviation of 20 consecutive measurements) over the range 150-700 ppb. In experiments utilizing pressure regulators and needle valves for flow control, the precision decreases to 1 percent of reading over the range 100-1000 ppb. The Dasibi photometer appears to be a reliable instrument; however, one malfunction (leakage of the solenoid valve which directs either the ozone-containing sample or reference air to the absorption cell) would cause erroneous readings. Most other malfunctions of this instrument result in an obvious failure of the system.

Gas phase titration [9] is based on the reaction of nitric oxide with ozone described by the equation:

    nNO + mO3 → mNO2 + mO2 + (n - m)NO    (1)

The GPT system is set up to minimize competing reactions such as oxidation of NO by O2 and reaction of NO2 with O3. Under these conditions the decrease in NO is assumed to be equal to the amount of ozone originally present in the reaction mixture. The GPT apparatus used to obtain the data presented in this paper consisted of Teflon tubing, stainless steel needle valves for flow control, bubble meters for flow measurement, any of several commercial NO analyzers, an ozone generator, and a single piece of glassware containing the reaction chamber, capillaries for splitting the air stream, and the dilution chamber. The design reduces the probability of leaks while providing a relatively compact system.

The accuracy of the procedure is dependent on the accuracy with which the NO concentration is known; this in turn relies on the accuracy with which the NO cylinder concentration and the NO and air flow rates are known. The NO cylinder used in the experiments reported was either an NBS SRM (Ser. # RSG-30-8002) or a cylinder whose concentration was determined by comparison with the SRM cylinder. Certification of the SRM placed the 95 percent confidence limits on the SRM NO concentration at ±1.1 percent of the 46.3 ppm nominal concentration. The flow measurements of the NO and air streams were estimated to be accurate to ±1.5 percent. Response of the NO analyzers was found to be linear to within the accuracy of the test procedure.
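The sketch below is one way (not the author's) to see how these uncertainty components might combine for a GPT assay. The ±1.1 percent SRM figure and the ±1.5 percent flow figure come from the text; the flow values themselves are hypothetical round numbers, and the components are assumed independent so that they add in quadrature.

```python
# Hedged sketch: dilute the SRM NO cylinder and combine the quoted relative
# uncertainties in quadrature, assuming independent error components.
import math

cyl_conc_ppm = 46.3          # NBS SRM nominal NO concentration (from the text)
flow_no_sccm = 100.0         # assumed NO flow, standard cm^3/min (hypothetical)
flow_air_slpm = 5.0          # assumed dilution-air flow, standard L/min (hypothetical)

diluted_no_ppm = cyl_conc_ppm * flow_no_sccm / (flow_no_sccm + flow_air_slpm * 1000.0)

rel_uncertainties = [0.011, 0.015, 0.015]   # cylinder, NO flow, air flow (assumed separate)
combined = math.sqrt(sum(u * u for u in rel_uncertainties))

print(round(diluted_no_ppm, 3))   # ~0.908 ppm NO after dilution
print(round(100 * combined, 1))   # ~2.4 percent combined relative uncertainty for GPT alone
```

This kind of quadrature combination, applied to both the GPT and UV photometric sides, underlies the agreement criterion discussed in the next paragraph.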
The short term (few hours) reproducibility of the GPT apparatus was better than 10 ppb over the range 150 to 1000 ppb; however, due to the length of time required to make the GPT measurements, it is likely that at least part of the differences in the GPT data is due to drift in the O3 generator output. Combination of the errors in the GPT and UV photometry techniques as implemented for these studies indicates that the two systems would have to yield answers differing by more than 2.5 percent before the discrepancy would be significant. Since these two methods are based on entirely different measurement principles, they do not tend to be affected in the same way (i.e., give erroneously high or low readings) by such problems as low quality zero air or faulty flow measurements. Therefore, a calibration system that employs both GPT and UV photometric assay systems should provide mean ozone concentrations accurate to within ±2 percent if the UV photometric and GPT values agree to within 2.5 percent. For the equipment used in this work, the assay systems agreed (figure 1) within the 2.5 percent range over concentrations from 150 ppb to 700 ppb (higher concentrations were not considered to be of interest).

In a preliminary evaluation, the third method considered, the boric acid potassium iodide method, was compared to the Dasibi photometer used above. The BKI method is similar to the 1% neutral-buffered potassium iodide except that it uses 0.1 M boric acid instead of the phosphate buffer system. For fifty data points (20 at ~500 ppb, 6 at ~250 ppb, and 15 at ~120 ppb), the average ratio of BKI results to photometric results was 1.016, with a standard deviation for the ratio of 3 percent. If further tests of the BKI confirm these early results, the BKI will be able to provide a third check on the accuracy of the calibration system.

[Figure 1. Comparison of GPT and UV photometric ozone determinations: O3 (ppb) by GPT vs. O3 (ppb) by the UV photometer; slope = 1.00, intercept = 2.7.]

References

[1] Flamm, D. L., Personal communication on iodometric ozone calibration study carried out at Texas A&M University under EPA sponsorship.
[2] Inn, E. C. Y., and Tanaka, Y., J. Opt. Soc. Amer. 43, 870 (1953).
[3] Griggs, M., J. Chem. Phys. 49, 857 (1968).
[4] Becker, K. H., Schurath, U., and Seitz, H., Internat. J. Chem. Kinetics VI, 725 (1974).
[5] Hearn, A. G., Proc. Phys. Soc. 78, 932 (1961).
[6] DeMore, W. B. and Raper, O., J. Phys. Chem. 68, 412 (1964).
[7] Clyne, M. A. A. and Coxon, J. A., Proc. Roy. Soc. A303, 207 (1968).
[8] Paur, R. J., Baumgardner, R. E., McClenny, W. E., and Stevens, R. K., ASTM Symposium on Calibration Problems and Techniques, August 5-7, 1975, University of Colorado, Boulder, Colorado.
[9] Rehme, K. A., Martin, B. E., and Hodgeson, J. A., Tentative Method for the Calibration of Nitric Oxide, Nitrogen Dioxide, and Ozone Analyzers by Gas Phase Titration, EPA Report EPA-R2-72-246 (March 1974).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

INTERRELATIONSHIPS BETWEEN PRIMARY CALIBRATION STANDARDS FOR NITRIC OXIDE, NITROGEN DIOXIDE, AND OZONE AS APPLIED TO TEST GAS ATMOSPHERES GENERATED BY GAS PHASE TITRATION

Donald G. Muldoon and Anthony M. Majahad
Environmental Research and Technology, Inc.
Concord, Massachusetts 01742, USA

1. Introduction

Environmental Research and Technology, Inc. (ERT) currently operates and maintains over 200 ambient air quality monitoring stations. These monitoring sites are predominantly located in the northeastern quadrant of the United States. At many of these AIRMAP sites (AIRMAP is an acronym for the ERT Air Monitoring, Analysis, and Prediction system, designed to provide information concerning air quality in a region and the effect of various meteorological conditions on air quality), chemiluminescence NO/NOx continuous analyzers are in use. In the development of an overall quality assurance and maintenance program in support of this effort, it became evident that a Gas Phase Titration System (GPTS) was needed for both calibration and functional testing (converter efficiency checks, etc.) of this equipment. A number of commercially available systems were evaluated, but none could meet the special requirements of a system to be used in the routine operations of a high-volume (10-20 units under test at one time) calibration laboratory. The required system should be capable of generating stable concentrations of NO and NO2 with a maximum degree of reproducibility at a sufficient output volume. Such a gas phase titration system was constructed incorporating the following features:

1. A clean air system capable of delivering suitable quantities of clean dilution air with a background of NO, NO2, and O3 of less than 2 parts per billion.
2. Precision electronic mass flow controllers for both dilution air (0.2-20.0 SLM) and cylinder NO/N2 mixtures (0.2-100 SCCM).
3. A modified high output ozone generator with temperature (35 ± 0.5 °C) and voltage (110 Vac ± 1 percent) control, with a micrometer adjustable drive mechanism to adjust the sleeve position of a stable low-pressure mercury lamp assembly (at constant dilution air flow rate) in order to vary the O3 concentration.
4. A suitably sized reaction chamber (150 cc) and mixing bulb (150 cc) assembly housed in a temperature controlled cabinet (35 ± 0.5 °C) to carry out the gas phase reaction.
5. An all glass-Teflon manifold and delivery system with suitable sampling ports for a NO/NOx analyzer, an O3 analyzer, and an iodimetry (neutral buffered potassium iodide) bubbling apparatus.

After the preliminary functional testing of the apparatus was completed, a series of intercalibration tests between the applicable National Bureau of Standards reference materials (NO2 permeation devices (SRM-1629), NO cylinders (SRM-1683), As2O3 (SRM-83c), and Na2C2O4 (SRM-40h)) was conducted to determine the most effective calibration technique to be used with the GPT system.

In the course of the evaluation of the Gas Phase Titration System two other intercalibrations were performed:

A. NBS gravimetrically calibrated NO2 permeation devices vs. the Griess-Saltzman NO2 bubbling procedure (ASTM D1607-69).
B. NBS prototype reference ozone generator vs. a chemiluminescent O3 monitor with calibration based on three different transfer standards (NO dilution, NO2 permeation devices, and iodimetry).

2. Experimental

A complete modular description of the Gas Phase Titration System used for these experiments is contained in Appendix A. For each experimental set of GPT data, an initial NO concentration of approximately 0.90 ppm was generated by appropriate dilution of a high concentration NO/N2 cylinder with dilution air.
The flow rates of both the dilution air and NO/N2 mixture were controlled precisely by electronic flow controllers calibrated by reference techniques.¹ At all times a chemiluminescence NO/NOx analyzer (Thermo Electron 14B) and an O3 chemiluminescence analyzer (McMillan 1100) were used to monitor the output of the GPT system. The O3 monitor was independently calibrated by bubbling the output of the system's ozone generator using the reference (NBKI) technique¹ and transferring this calibration to the monitor. The NO2 channel of the NO/NOx monitor was independently calibrated using NO2 test gas atmospheres generated by a permeation tube assembly incorporating an NBS NO2 permeation device.¹ The NO channel of the NO/NOx monitor was also independently calibrated by dynamic dilution of NO/N2 test cylinders. The calibration of these test cylinders was periodically verified by a transfer standard technique² using an NBS certified¹ NO/N2 gas mixture.

For a given experimental set of GPT data, the NO/N2 cylinder flow rate and the dilution air flow rate were kept constant. The O3 concentration was increased incrementally from 0.000 ppm until an excess O3 concentration was detected by the system's O3 monitor. The O3 generation was accomplished by appropriate adjustment of the sleeve position of the O3 generator. The NO, NO2, NOx (NO + NO2) and O3 instrument output voltages for each test point were recorded. The O3 value at each test point with no NO/N2 flow was also determined. The concentration at each test point of NO, NO2, NOx (NO + NO2) and O3 could then be calculated from the output voltage and the appropriate calibration constant.

Two other intercalibration experiments were also performed during the evaluation of the Gas Phase Titration System.

A. Wet chemical determination of the NO2 output of the permeation tube assembly by the Griess-Saltzman procedure (ASTM D1607-69).

¹See Table 1 for reference calibration techniques.
²Conc (test cylinder) = Conc (NBS certified cylinder) x [instrument NO response (test cylinder) / instrument NO response (NBS cylinder)].

Table 1. ERT gas phase titration system calibration and verification techniques. [The body of this table is not legible in the source reproduction.]
The NO2 concentration determined by bubbling the output of a NO2 permeation tube assembly incorporating an NBS permeation device, using the Griess-Saltzman procedure, was compared to the NO2 concentration calculated from the NBS permeation rate and dilution flow rate for a number of test atmospheres. These experiments were conducted to verify the calibration of the NO2 permeation devices.

B. Evaluation of a prototype NBS reference O3 generator.³ A prototype NBS reference O3 generator, a low-pressure mercury vapor lamp type, was received in our laboratory for evaluation while our experiments were in progress. The O3 output of this device was controlled by manual adjustment of the UV lamp sleeve position (keeping the dilution rate constant). The Gas Phase Titration System's O3 monitor was used to monitor the output of the prototype O3 generator at 9 test points. The NBS designated O3 concentrations were then compared to the O3 concentrations as measured by the instrument based on three different transfer standards: a. NO dilution; b. NO2 permeation; c. iodimetry.

³NBS Ozone Generator #19 supplied by Region I, EPA.

3. Results and Conclusions

This calibration technique is based on the rapid gas phase reaction between nitric oxide (NO) and ozone (O3):

    NO + O3 → NO2 + O2    (1)

Under proper experimental conditions, the following relationships are applicable due to the stoichiometry of the reaction:

    NO2 = O3 = ΔNO    (2)

where NO2 is the NO2 generated by the gas phase reaction, O3 is the O3 consumed by the gas phase reaction, and ΔNO is the NO consumed by the gas phase reaction (i.e., initial NO concentration minus final NO concentration). Also, if a classical GPT plot of NO output (Y-axis) vs. O3 concentration (X-axis) is made, the concentration of the NO/N2 cylinder may be calculated from the following relationship:

    Conc of NO/N2 tank = D x b_n    (3)

where D = dilution factor = [Flow Rate (Air) + Flow Rate (NO/N2)] / Flow Rate (NO/N2), and b_n = x intercept = equivalent O3 concentration (i.e., equivalent NO concentration).

A typical gas phase titration plot is presented in figure 1. A comparison of the NO/N2 cylinder concentration determined from this set of experimental data points with the NO/N2 cylinder concentration determined from NO transfer standard techniques is presented in table 2. These results indicate that the two calibration techniques are equivalent within experimental error.

Table 2. Determination of NO/N2 cylinder concentration by gas phase titration and by comparison to NBS standard cylinder

A. Classical gas phase titration (A. M. Majahad/02.24.76)

O3 Sleeve Setting   Percent NO   PPM O3
000.0               86.4         0.000
100.0               75.9         0.125
200.0               58.7         0.299
250.0               49.8         0.388
300.0               39.6         0.490
325.0               34.1         0.535
350.0               29.4         0.580
375.0               24.5         0.615
425.0               15.8         0.710
475.0                7.4         0.800
500.0 (b)            3.8         0.835
525.0 (b)            1.3         0.895
550.0 (b)            0.5         0.925

Flow rates: NO/N2 = 36.1 SCCM; Air = 8.950 SLM
(b) Not included in linear regression due to bend in curve (figure 1).

B. Calculations

Linear regression analysis of GPT: y = mx + b, where y = PPM O3 and x = %NO:
    PPM O3 = -0.0099 (%NO) + 0.87
m = slope (PPM O3 / %NO); b = equivalence point (PPM O3).

NO/N2 cylinder concentration determination:
    Conc NO = D x b, where D = [Flow (air) + Flow (NO/N2)] / Flow (NO/N2) = (8950 + 36.1)/36.1
    Conc NO = 248.9 x 0.872 PPM O3 = 217 PPM NO

Technique              Comparison Standard    Conc (PPM NO)
Gas Phase Titration    As2O3 (SRM-83c)        217
Transfer Standard      NO (SRM-1683)          214

[Figure 1. Typical gas phase titration plot of NO response vs. O3 concentration for the data of table 2.]
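For readers who want to reproduce the Table 2 arithmetic, the sketch below (mine, not the authors') fits the retained data points and applies eq. (3); it recovers an equivalence point near 0.872 ppm O3 and a cylinder concentration near 217 ppm NO.

```python
# Sketch of the Table 2 calculation: least-squares fit of ppm O3 vs. percent NO
# remaining, equivalence point from the intercept, cylinder concentration from eq. (3).
percent_no = [86.4, 75.9, 58.7, 49.8, 39.6, 34.1, 29.4, 24.5, 15.8, 7.4]
ppm_o3     = [0.000, 0.125, 0.299, 0.388, 0.490, 0.535, 0.580, 0.615, 0.710, 0.800]

n = len(percent_no)
mean_x = sum(percent_no) / n
mean_y = sum(ppm_o3) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(percent_no, ppm_o3)) / \
        sum((x - mean_x) ** 2 for x in percent_no)
intercept = mean_y - slope * mean_x          # equivalence point: ppm O3 at 0 percent NO

flow_air_sccm = 8950.0                       # 8.950 SLM dilution air (from Table 2)
flow_no_sccm = 36.1                          # NO/N2 cylinder flow (from Table 2)
dilution_factor = (flow_air_sccm + flow_no_sccm) / flow_no_sccm

cylinder_ppm_no = dilution_factor * intercept
print(round(intercept, 3))                   # ~0.872 ppm O3
print(round(cylinder_ppm_no))                # ~217 ppm NO, consistent with Table 2
```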
If all three outputs (NO, NO2 and O3) associated with gas phase titration are calibrated using independent standards, then the interrelationships among these standards can be determined. Test data in table 3a indicate that, within experimental error, the three aforementioned calibration techniques are equivalent, and any one of them could be used as the primary calibration technique for a gas phase system. (The converter efficiency of the NO/NOx analyzer would have to be 100% in order to use a NO2 permeation device as the primary standard.)

Table 3a.* Interrelationship of NBS primary calibration standards for a typical gas phase titration (D. G. Muldoon/04-28-75); ΔNO = NO2 = O3

Setpoint   PPM NO (SRM-1683, NO dilution)   PPM ΔNO   PPM NO2 (SRM-1629, permeation)   PPM O3 (SRM-83c, NBKI)   Average, (ΔNO + NO2 + O3)/3
0 (a)      0.924                            0.000     0.000                            0.000                    0.000
1          0.895                            0.029     0.041                            0.036                    0.035
2          0.843                            0.081     0.097                            0.089                    0.089
3          0.746                            0.178     0.197                            0.196                    0.190
4          0.695                            0.229     0.247                            0.247                    0.241
5          0.641                            0.283     0.307                            0.304                    0.298
6          0.589                            0.335     0.349                            0.347                    0.344
7          0.534                            0.390     0.404                            0.396                    0.397
8          0.491                            0.433     0.440                            0.441                    0.438
9          0.441                            0.483     0.483                            0.492                    0.486
10         0.394                            0.530     0.533                            0.544                    0.536
Average                                     0.270     0.282                            0.281                    0.278

Flow rates: NO/N2 = 17.5 SCCM; Air = 5.121 SLM
(a) Corrected for background NO and NO2.

Table 3b.* Linear regression coefficients for GPT curves, PPM X = m (PPM NO) + b

X      Slope (m)   Equivalence point (b)   Std. error of estimate (S)   Correlation coefficient (r)
ΔNO    -1.000      0.924                   9.0 x 10^-7                  -0.9999
NO2    -0.989      0.929                   7.4 x 10^-3                  -0.9991
O3     -1.009      0.941                   5.8 x 10^-3                  -0.9995

*See figure 2 for a graphical representation of tables 3a and 3b.

[Figure 2. Graphical representation of the data in tables 3a and 3b: PPM standard (ΔNO, NO2, and O3) vs. PPM NO.]

4. Other Results and Conclusions

Test data presented in table 4 demonstrate that a chemiluminescent O3 monitor can be calibrated using gas phase titration techniques as well as by the reference NBKI methodology.
Table 4. National Bureau of Standards ozone generator evaluation
Date: November 20, 1975. Instrument: McMillan 1100 (#6396). Location: ERT, Concord, Mass. Calibration: O3 output from GPT (NBKI verification).

Sleeve Position   NBS Designated PPM O3   Observed PPM O3 (a)   Observed PPM O3 (b)   Observed PPM O3 (c)   Average ERT PPM O3
90 (d)            0.495                   0.462                 0.443                 0.473                 0.459
80                0.410                   0.408                 0.391                 0.418                 0.406
70                0.357                   0.343                 0.329                 0.351                 0.341
60                0.300                   0.288                 0.276                 0.295                 0.286
50                0.244                   0.238                 0.228                 0.244                 0.237
40                0.192                   0.187                 0.179                 0.191                 0.186
30                0.141                   0.138                 0.132                 0.141                 0.137
20                0.092                   0.091                 0.087                 0.093                 0.090
10                0.042                   0.041                 0.039                 0.042                 0.041
Off               0.000                   0.000                 0.000                 0.000                 0.000

(a) NBKI transfer standard (As2O3). (b) Permeation system transfer standard (NO2 permeation device). (c) Gas phase titration transfer standard (NO/N2 cylinder). (d) Assuming linearity, this designated concentration does not conform.

Test data presented in table 5 demonstrate that NO2 determinations based on the Griess-Saltzman procedure (ASTM D1607-69) overestimate the actual NO2 concentration. The experimental evidence indicates that although the Griess-Saltzman procedure is very reproducible, the Saltzman empirical factor (0.72 mole NaNO2 produces the same azo-dye color as 1 mole NO2) should be neglected when working at ambient concentrations, i.e., less than 1.0 ppm NO2.

Table 5. Evaluation of NBS NO2 permeation devices (SRM-1629)

Permeation Device Number   NBS Designated Concentration (PPM NO2)   ASTM D1607-69 Observed (PPM NO2)   Stoichiometric Observed (a) (PPM NO2)
39-12                      0.187                                    0.248                              0.179
39-12                      0.149                                    0.188                              0.135
48-11                      0.196                                    0.270                              0.194
48-11                      0.196                                    0.273                              0.195
48-11                      0.125                                    0.172                              0.124
45-7                       0.122                                    0.168                              0.121
45-7                       0.122                                    0.177                              0.127
45-7                       0.122                                    0.173                              0.124

(a) Griess-Saltzman empirical factor neglected.

It is the opinion of the authors that gas phase titration techniques should be used for the calibration of chemiluminescence NO/NOx analyzers, and that the primary calibration of the gas phase system be based on dilution of NO/N2 cylinders traceable to NBS standards. This technique is the simplest and most practical approach. Also, there is no reason why chemiluminescent O3 monitors could not be calibrated using the same techniques; this would eliminate the necessity of using the very difficult iodimetry techniques.

5. Future Work

Work is currently underway at ERT to incorporate a current-controlled ozone generation apparatus into the gas phase titration system. This modification will allow pre-programmed multipoint gas phase titration calibrations to be performed routinely. Also, as stable, low concentration (<10 ppm) SO2/N2 and NO2/N2 mixtures are now commercially available, feasibility studies are being conducted by the authors to determine whether the gas phase titration system may be used as a dynamic dilution system to generate pre-programmed, stable concentrations of SO2 and NO2. As with the gas phase titration system, it is anticipated that the calibration of these commercial cylinders will be traceable to NBS reference materials.

Appendix A. GPT Air Supply

The clean air supply used for the ERT Gas Phase Titration System consists of two main sections. Figures 3 and 4 show the components of the system.

[Figures 3 and 4. Schematic diagrams of the clean/dry air delivery system and the GPT flow control system.]

I. Clean/Dry Air Delivery System (figure 3)

A. Compressor: 1. Kellog-American, Inc. Model #452TV
B. Water/Oil Vapor Drop-out: 1. Wilkerson #51024; 2. Schrader #3532-1200
C. Filter/Drop-out Filters: 1. Rego #8822 K-Y
D. Oil Vapor Trap: 1. Wilkerson #5102-4
E. Regulator: 1. Rego #1682 (0-200 PSIG)
F. Heatless Dryer: 1. Pure-Gas #P-HF-200-120E217 (a. special combination desiccant: 70% molecular sieve, 30% charcoal; b. purge orifice #200-404-24)
G. Moisture Indicator: 1. Matheson #465
H. Regulator: 1. Wilkerson #R10-03-000
I. Moisture Indicator: 1. Matheson #465
J. Regulator: 1. Bastian-Blessing #88021-Y
K. High Pressure Particulate Filter: 1. Millipore #XX45 047 00; 2. filter membrane: Millipore hydrophobic, 0.45 µm, 47-mm dia. (Aquapel #HAHP 047 00)
L. Manifold: 1. 1/4 in. copper tubing

II. GPT Flow Control System (figure 4)

A. Clean/dry air is delivered to the GPT flow control devices at a working pressure of 40-50 PSIG. This allows a maximum flow of 20 LPM. Figure 4 shows the main components of the flow control assembly.
1. An ON/OFF ball valve is needed to isolate the GPT from the clean/dry air system.
2. A pressure regulator is required to reduce the air stream pressure to the flow controllers.
3. An ultraviolet light assembly is used to remove NO from the clean/dry air (Ultraviolet Products #11 SC-1 with SCT-1 transformer/power supply); the NO is converted to NO2.
4. The NO2 is removed from the clean air by a scrubber, which consists of a mixture of molecular sieve, activated charcoal, and silica gel.
5. A final particulate filter (see I.K) is used to remove any substances remaining in the clean/dry air.

III. Mass Flow Controllers

A. The dilution air is monitored and controlled using a Tylan FC-202; the range is 0-20 LPM.
B. NO/N2 flow is regulated using a Tylan FC-200; the range is 0.2-100 cc/min.
C. Both mass flow controllers are connected to a control box (Tylan R0-733). The flows are set via potentiometers and are observed on a digital readout display.

IV. Reaction Chamber

The GPT system includes an environmentally controlled reaction chamber. This apparatus provides the required temperature control, mixing, and ozonation for a gas phase titration. Figure 5 shows the configuration of the reaction cabinet. The ozonator utilizes a double-helix-thread adjustable sleeve which shields the U.V. lamp (Ultraviolet Products, Inc., special SOG-2 ozone generator). The lamp housing assembly contains an Alzac reflector and a Suprasil flow tube. The flow tube was modified to increase its diameter to 25 mm, which increases the output of O3. The lamp sleeve position is set with a micrometer control dial (Amphenol #1314).

[Figure 5. Configuration of the reaction chamber cabinet.]

The air flow that passes through the ozonator assembly is obtained by splitting the inlet dilution air in a ratio of approximately 1 to 10. A stainless steel "T" with 1/4 in. o.d. tubing inlet and outlet and a 1/8 in. o.d. outlet for the ozone air is used to split the air stream. The length of the 1/8 in. o.d. split line required a series of trial and error experiments to obtain the necessary amount of clean air through the ozonator's flow tube. The ozone is injected into an all-glass mixing "T" and reacts with a stream of NO/N2 gas. The resultant gas mixture is then passed into a 155 cc Kjeldahl mixing bulb. The bulb and glass "T" are connected by 6 inches of 1/2 in. i.d. corrugated Teflon (Penntube Plastics CT-Flex). The NO/NO2/N2 gas mixture then mixes with the remaining dilution air via another all-glass "T". The gases are then further mixed in a second Kjeldahl mixing bulb with corrugated Teflon tubing connections.
The gases are then allowed to exit via the CT-flex tubing into an all-glass/Teflon manifold assembly. The entire ozonator and mixing apparatus is housed in an insulated cabinet with a temperature-controlled environment. The use of a thermistor (Omega #0-90-UUA-32J4, 2252 ohms at 25 °C) with a proportional set-point temperature controller and Triac power control (Action Instruments Co., Inc., #2020 and #3020) provides a temperature range of 25-50 °C ± 0.5 °C. This temperature is monitored by a 3 in. dial display thermometer (Brooklyn Thermometer Co., Inc. #5204). The heat is supplied by three 100-watt heating mats (Cole-Palmer Instrument Co. #3125) cemented to the chassis of the GPT system cabinet. Two metal muffin fans (Panmotor #4506) ensure constant temperature circulation, and two indicator lamps show the firing of the heating pads. The dilution air and NO/N2 flow controllers were connected to the rear of the GPT via stainless steel Swagelok fittings. The outlet was connected to an all-glass/Teflon manifold assembly via 6 feet of 1/2 in. o.d. CT-flex corrugated Teflon tubing.

V. Manifold

The manifold system consists of 6 specially fabricated 1 in. o.d. glass manifolds. Each glass manifold contains four 1/2 in. o.d. ports and two 3 in. x 13 mm o.d. ends. The ports provide the means to sample the NO/NO2/O3 gas stream; this is accomplished by heat-shrinking 1/4 in. o.d. Teflon tubing into the ports. The ends of the individual manifolds are connected by force fitting of CT-flex Teflon. The entire manifold assembly stretches 30 feet and is able to accommodate 24 NOx and/or O3 instruments. The manifolds are vented into the facility's exhaust duct system to prevent noxious gas build-up in the Air Quality Instrumentation Laboratory.

The authors wish to express their gratitude to Mr. Robert Michaud and Mr. Dave Goldstein for their help in the construction of the experimental apparatus.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

AN ANALYSIS OF THE MEASUREMENT ACCURACY AND VALIDITY OF RESULTS FROM THE CHARCOAL TUBE SAMPLING TECHNIQUE

Gerald Moore
MDA Scientific, Inc., Park Ridge, Illinois 60068, USA

1. Introduction

Atmosphere monitoring for contaminant gases and vapors using the charcoal tube sampling technique is becoming more widespread, particularly because of the increased activity of OSHA in recent years in generating exposure standards for a growing number of substances. The basics of the method are well described in a paper published in the American Industrial Hygiene Association Journal [1]¹, and it is currently the recommended technique for measurement of a wide variety of organic solvents in industrial hygiene applications. (¹Figures in brackets indicate literature references at the end of this paper.) Typically, the NIOSH recommended design of charcoal tube is employed together with a small battery-operated personal sampling pump, a variety of which are available from different manufacturers (figure 1). The use of an adsorbent material such as charcoal provides a convenient and compact system for personnel exposure monitoring, and a variety of other adsorbents are also coming into use for applications where charcoal is not the most suitable adsorbent.
In the absence of more specific monitoring techniques, it can be expected that this sampling method will remain in widespread use for many years despite its known limitations and scope for inaccuracies, many of which have been documented by other authors. The technique is relatively complex, comprising two distinct stages: sample collection and subsequent analysis by gas chromatography in the laboratory. Careful consideration is needed to determine where errors can occur and how best to correct them.

2. Sources of Errors and Inaccuracies

A comprehensive study and analysis of a large number of samples was carried out by NIOSH and published as a report [2]. This report clearly shows the extent to which errors can occur and analyzes some of the reasons for these errors. Of particular significance are the so-called "outlier" results, which were eliminated from the final analysis in the report but which serve to indicate that gross inaccuracies can occur even with supposedly well-trained staff. The report also shows that the precision of results is markedly inferior at low contaminant concentrations. Analysis of the technique shows that there are five main areas where errors can occur, all contributing to an overall lack of precision. These are identified as:
(1) variations in tube manufacture, e.g., different grades and batches of charcoal used and variations in pressure drop, which can affect sampling rate;
(2) the stability of the sampling flow rate, which can affect the accuracy of determination of total sample volume as well as bias the total exposure determination;
(3) the means of determining the total sample volume drawn through the charcoal tube, which is shown to be strongly dependent on the type of pump used;
(4) the desorption efficiency and variations in this parameter with different loadings of contaminants on the charcoal;
(5) the standardization of the gas chromatograph which is used to perform the final measurements.

Figure 1. NIOSH recommended design of charcoal tube.

Each of these factors and its contribution to the total error sum is discussed. Certain types of errors can occur without necessarily affecting the reliability of results, since they are systematic and will apply equally to the standardization process and the sample measurement. Other sources of error are inherent and can only be minimized to ensure good overall measurement accuracy. The validity of results is directly related to the precision which can be achieved and is critical where the results are used for standards compliance purposes. The shortage of trained personnel in the industrial hygiene field requires that, where possible, all monitoring techniques should be capable of yielding good results even in unskilled hands.

3. Some Ways to Improve Validity and Precision

Four specific areas where improvements can be made are discussed: (1) The use of replicate sampling, which allows true "outlier" samples to be eliminated, assists in determining where gross errors have occurred in the procedure, and allows averaging of results to obtain greater precision (figure 2).
(2) The use of sampling pumps of improved design, with particular reference to accurate determination of total sample volume (figure 3). (3) The definition of controlled sampling parameters for each individual contaminant, taking into account optimum tube loading, the effect of sampling mixtures, and the influence of environmental factors such as humidity. A study has been made of the basic data currently available, and these data have been brought together on one chart to form a common reference for all users of the charcoal tube sampling technique. (4) Greater control of the gas chromatographic procedures in the analysis stage, with particular emphasis on repeatability and care in standardization. The need for more user-oriented GC equipment and the potential advantages of thermal desorption are discussed.

4. Conclusions

The author concludes that significant improvements in overall precision are possible. The technique as presently used is certainly capable of determining contaminant concentrations to within ±10 percent of actual values in the hands of skilled users, but gross inaccuracies can occur with less skilled users, or on an uncontrolled random basis, due to the large potential for unrecognized errors. Improvements in pump design (figure 4), the use of replicate sampling, and controlled sampling parameters will contribute to more reliable routine determinations by all users, especially in industry, where laboratory-trained staff are not always available. The author believes that an overall accuracy of ±10 percent should be the goal for all users, while recognizing that ±20 percent is probably adequate for general purpose industrial hygiene measurement. Greater control becomes far more significant when considering the use of other adsorbent materials, which can be expected to become more commonplace during the next few years. These materials typically have much weaker adsorption characteristics and are more critical as regards sampling rate; there is a need for further study in this respect. The author concludes that the charcoal tube sampling method is convenient and viable for use on a routine basis provided that some additional attention is paid to adequate control in the areas discussed. The technique may gradually be superseded by other methods such as diffusion-operated "badges" or specific personal monitors, but until that occurs all users should at least be attempting to get the best possible precision and validity of results from this proven technique.

Figure 2. Replicate sampling assembly (labeled components: charcoal tubes, filter, plastic tube holder, O-ring seal, limiting orifices).

Figure 3. Functional diagram of the Accuhaler personal sampling pump.

Figure 4. Accuhaler personal sampling system.

References

[1] White, L. D., et al., A Convenient Optimized Method for the Analysis of Selected Solvent Vapors in the Industrial Atmosphere, AIHA Journal 31, 225-232 (March-April 1970).
[2] Reckner, L. R. and Sachdev, J., Collaborative Testing of Activated Charcoal Sample Tubes for Seven Organic Solvents, NIOSH Technical Information, HEW Publication No. (NIOSH) 75-184, June 1975.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ACHIEVING ACCURACY IN ENVIRONMENTAL MEASUREMENTS USING ACTIVATION ANALYSIS

Donald A. Becker
Institute for Materials Research, National Bureau of Standards, Washington, DC 20234, USA
1. Introduction

The analytical technique of activation analysis has found widespread acceptance and usage for environmental trace element measurements. In particular, the capability for multielement determinations, high sensitivity, and non-destructive analyses have been important in this regard. However, to consistently achieve accuracy in environmental analyses requires careful attention to detail and the use of well characterized facilities. Activation analysis can best be described by separation into its essential parts: sampling and sample handling, irradiation, chemical separation (if used), and counting. Each of these component parts should be optimized to provide acceptable accuracy, always considering the end use for the data obtained. The effect of each of these components on the overall analytical accuracy is considered in more detail below.

A. Sampling and sample handling

The first consideration in any analysis is whether the sample to be analyzed is truly representative of the material of interest. Lack of a truly representative sample results in imprecision or inaccuracy (or both) in the final determination, which is due not to the analytical technique but to the original unrepresentative sample. Aqueous or moist matrices are particularly difficult to sample and store due to problems of adsorption, absorption, desorption, and leaching, both with container walls and with any solid matter present. Once a valid sample is obtained, it must usually be stored, processed, or sub-sampled. Each process undertaken before irradiation increases the possibility of contamination. The statement that activation analysis is generally free from "blanks" refers primarily to reagent blanks. The only sample which can closely approximate a blank-free system is one which can be irradiated as a single integral piece and then subsequently heavily etched to eliminate surface contamination and contamination beneath the surface due to recoil from nuclear reactions occurring from surface contaminants. (Treatment with several different etchants is recommended to reduce the possibility of redeposition of the contamination back onto the fresh surface.) All processing or subsampling should be done under clean conditions using noncontaminating implements and containers. Contamination is a particularly difficult problem in the physical processing of solid samples (e.g., grinding, cutting, pulverizing, etc.) and for liquid samples. There are many papers in the literature which indicate that some types of environmental samples undergo changes in structure and chemical state even when dried at room temperature. This should not have a great effect on the total trace element content in most cases, but if speciation, organic extractable trace elements, etc., are of interest, any form of drying may invalidate the sample. A recent extensive survey of the literature on sampling, sample handling and long term storage goes into much greater detail on the techniques and problems involved [1]¹. (¹Figures in brackets indicate the literature references at the end of this paper.)

B. Irradiation

The choice of the irradiation source obviously affects the sensitivity, detectability, and selection of trace elements of interest. In addition, the specific characteristics of the irradiation source and the degree to which it has been evaluated play a large part in determining the final result of the activation analysis. Several factors which affect the accuracy and precision of reactor activation analysis are discussed below.
Most irradiation facilities have a significant variation in the neutron flux within a single irradiation container. For example, the author has measured variations of more than 40 percent along the length of a cylindrical polyethylene "rabbit" (~8 cm), and over 20 percent across the diameter of such a container (~2.5 cm). The situation is considerably worse when these two effects are present in the same facility. For high reproducibility and accuracy, each sample should be monitored during irradiation. A simple method for accomplishing this is to attach small bits of a pure metal foil to the irradiation container; these are later removed and counted to provide a normalization factor. The elements copper (for short or medium length irradiations) and iron (for long irradiations) have been found to work well. Other problems during irradiation include variations in the neutron energy spectrum, production of interfering nuclear reactions, and neutron self-shielding. The methods and problems in reactor characterization have been addressed more completely elsewhere [2].

C. Radiochemical separations

The judicious use of an appropriate radiochemical separation can often change a marginal trace element determination into a simple and effective one, often with greatly improved accuracy and precision. While the use of radiochemical separations does require the services of a chemist, and usually one with experience in such separations, the advantages which accrue are well worth the increased time and effort needed. The use of modern semiconductor detectors has also made group radiochemical separations an important tool in activation analysis, especially for very low level trace analysis. Some of the problems affecting accuracy and precision in radiochemical separations include sample dissolution difficulties (incomplete dissolution and losses during dissolution); inadequate carrier exchange with the sample trace element(s); volatilization losses (during preirradiation processing, during the actual irradiation, when the container is opened after irradiation, and especially during sample dissolution before carrier exchange); adsorption effects on containers (irradiation container, dissolution container, or separation container); the quality of the separation procedure itself (single element, multiple element, complicated procedures, etc.); and finally, chemical yield measurements (reproducibility, with quantitative yields highly desirable). Each of the above aspects can introduce a random or systematic bias into the procedure, and thus should be carefully evaluated and tested before being accepted as a reliable and reproducible procedure.

However, even with all of the cautions mentioned above, the use of radiochemical separations in an activation analysis program affords the capability for achieving the theoretical sensitivity for almost all elements, rather than for only the few very favorable cases. This capability is not to be taken lightly.

D. Counting

In order to complete the analysis, the radiation emitted from the induced radioactivity in the sample must be accurately determined. Basically, most activation analysis detection systems consist of a single photon detector [Ge(Li), Ge, or NaI(Tl)] and associated electronics, with a sample positioning device of some sort. With such a system, a variety of sample radioactivity levels and sizes can be counted rather easily.
The requirements for analytical accuracy and precision for each individual determination will to a certain extent prescribe the care with which the sample is evaluated. In general, however, the attainment of reasonable accuracy and precision requires the analyst to at least be aware of potential errors in detection systems. What is often not realized is that even relatively small differences between sample and standard, or from sample to sample, in any one of many variables can result in a significant bias in the detected count rates. In a previous publication, the systematic biases associated with photon counting in activation analysis were described in detail [3]. For purposes of discussion, these biases were separated into eight sub-groups as follows: sample configuration; sample positioning; sample density; sample homogeneity and the effects due to activating particle flux inhomogeneity; photon intensity; radioisotopic purity; photon peak integration techniques; and errors in nuclear constants. Since the first subgroup mentioned, sample configuration, is both rather important and often ignored, it will be discussed briefly here as well.

Differences in source configuration between samples, or between sample and standard, can be a significant source of analytical error. Most analysts do not realize that, even at 5 cm away from a 13 percent Ge(Li) detector, a 1 mm difference in average height will induce a systematic bias of approximately 3 percent. Therefore, a sample and standard which vary by 5 mm in height (2.5 mm in average height) will have a bias of approximately 7 percent. This is in addition to all the other errors connected with the analysis, both random and systematic. These errors are much greater when the sample is positioned closer to the detector.

E. Reference materials

One valid way to assess matrix-derived errors in any analytical system is through the use of reference materials. These reference materials are usually carefully studied materials which have documented homogeneity and elemental composition. There are varying levels of integrity associated with these materials, depending on their origin, but they all have one thing in common: they attempt to match, as exactly as possible, the matrix composition of some class of material usually encountered in real life. A reference material which has been analyzed for one or more characteristics, and is issued by a recognized standards organization or a national laboratory, may be referred to as a certified reference material. The confidence attached to such a material is, of course, directly related to the degree of confidence placed in the certifying organization. Reference materials provide the opportunity for activation analysts to check or calibrate, in their own laboratories, the accuracy and precision of the entire analytical system. In the ideal case, this is accomplished by analyzing a reference material, having the same or a similar matrix as the samples, to determine whether the known or "true" value is obtained. Any deviation from this known value is a measure of the uncertainty of the analytical system. When independent replicates of the reference material have been analyzed, the range of values obtained indicates the precision of the analytical system, while the deviation of the average analytical value from the "true" value indicates the accuracy of the system. If the value obtained is significantly different from the "true" value, the analytical system should be examined carefully to determine the source of the error. In some cases, no "true" value is available for the reference material, and therefore only a precision value can be obtained. Reference materials of environmental matrices are available from several sources.
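A minimal sketch (with hypothetical replicate values) of the bookkeeping described above: the spread of replicate analyses of a certified reference material estimates precision, and the deviation of their mean from the certified value estimates accuracy.

```python
# Illustrative only: precision and accuracy estimates from replicate analyses of a
# certified reference material. The certified value and replicates are hypothetical.
import statistics

certified_value = 1.00                       # certified concentration, arbitrary units
replicates = [0.97, 1.02, 0.99, 1.04, 0.98]  # hypothetical replicate results

mean = statistics.mean(replicates)
precision_sd = statistics.stdev(replicates)            # spread of independent replicates
bias_percent = 100.0 * (mean - certified_value) / certified_value

print(round(mean, 3), round(precision_sd, 3), round(bias_percent, 1))
# prints: 1.0 0.029 0.0  (sd ~0.029 estimates precision; ~0 percent bias estimates accuracy)
```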
If the value obtained is significantly different from the "true" value, the analytical system should be examined carefully to determine the source of the 45 error. In some cases, no "true" value is available for the reference material, and therefore only a precision value can be obtained. Reference materials of environmental matrices are available from several sources. 2. Conclusion In conclusion, the effective use of standards and reference materials offer an import- ant opportunity to continually evaluate the last three components of activation analysis, in order to control or eliminate errors. Evaluation of the first component, sampling and sample handling, is much more difficult to detect and evaluate, since in most cases each environmental sample is unique and cannot be reproduced. However, rather than assume complete acceptance or total confidence in their results, the activation analyst should continually strive for the best methods and techniques possible, consistent with the required costs. References [1] Maienthal , E. J. and Becker, D. A., A Survey of Current Literature on Sampling, Sample Handling and Long Term Storage for Environmental Materials , NBS Technical Note 929, Government Printing Office, Washington, DC (1976). [2] Becker, D. A. and LaFleur, P. D., Characterization of a Nuclear Reactor for Neutron Activation Analysis, J. Radioanal. Chem. ]9_, 149 (1974). [3] Becker, D. A., Accuracy and Precision in Activation Analysis — Counting, Proceedings of Nuclear Methods in Environmental Research, University of Missouri, Columbia, p. 69 (1974). 46 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). A COMPARISON OF FACTORS AFFECTING ACCURACY IN ATOMIC EMISSION AND ATOMIC ABSORPTION SPECTROMETRY USING A GRAPHITE FURNACE FOR TRACE METAL ANALYSIS IN WATER Michael S. Epstein and Theodore C. Rains Institute for Materials Research Analytical Chemistry Division National Bureau of Standards Washington, DC 20234 Thomas C. 0' Haver Department of Chemistry University of Maryland College Park, MD 20742 1. Introduction The determination of the trace metal content of water samples, whether it be for establishing criteria for drinking water or for studying the chemistry of the oceans, requires an extremely sensitive analytical technique. The concentration of many trace metals in water is in the pg/1 range and below, introducing the problem of contamination and/or loss during storage and analysis. The ideal analytical technique should thus combine sensitivity, precision and accuracy with a minimum of sample manipulation. Such an analytical technique is atomic spectroscopy. However, the sensitivity required for the direct analysis of many metals in water by atomic spectroscopy can only be obtained at the present state-of-the-art by the use of a furnace atomizer. Typically, these analyses are performed using atomic absorption spectrometry (AAS). For several elements, an alterna- tive to AAS with a furnace atomizer (GFAAS) is graphite furnace atomic emission spectrom- etry (GFAES). In this technique, the furnace is employed as the spectral excitation source as well as the atomization cell, and atomic emission from excited species in the furnace is measured [l] 1 . The two techniques provide an interesting contrast in analytical methods. 
The meth- odologies utilized in the application of both GFAAS and GFAES to sample analysis are con- siderably different. Interferences in the two techniques may differ in their severity, mode of interference (additive or multiplicative), and method of correction. In order to obtain satisfactory levels of accuracy, proper correction methods must be used. This paper compares the effect of various interferences and correction methods on accuracy in the analysis of trace metals in water using GFAES and GFAAS. 2. Instrumentation Analysis by GFAES, when performed with conventional furnace systems designed for atomic absorption, is limited in many cases by intense blackbody radiation generated by the heated graphite tube. When this radiation reaches the photodetector, analytical errors are intro- duced because of increased photon shot-noise, non-reproducibility of the blackbody emission intensity from run to run, and the scatter of blackbody emission into the monochromator by sample matrix components. Since the radiation from the furnace is largely continuum, the technique of wavelength modulation can be used to reduce or eliminate errors due to scatter figures in brackets indicate the literature references at the end of this paper. 47 and changes in the intensity of the blackbody emission [2]. The instrumentation used for wavelength modulation in GFAES has been detailed previously [2,3], and consists of an HGA- 2100 graphite furnace, a 0.5-m monochromator, and associated electronics. The monochromator was modified for wavelength modulation by placing a vibrating quartz refractor plate mounted on a torque motor between the entrance slit and the collimating mirror. The modulation apparatus is driven by the sinusoidal signal from a function generator and audio-amplifier, and the signal from the photodetector is processed by a phase-sensitive synchronous ampli- fier referenced to the second harmonic of the modulation frequency. 3. Discussion Since the accuracy of an analytical technique will be a function of the number and kind of interferences involved in its implementation, an examination of interferences encountered in the application of GFAES and GFAAS to sample analysis is required. These interferences can be loosely classified as either physical [4] (relating to effects caused by a physical property of the analyte species or which change one of the physical properties involved in the analytical measurement), chemical (relating to effects involving the chemical form of the analyte), or spectral (relating to the isolation of the analyte radiation from radiation due to other sources). A. Physical interferences GFAES analytical curves, particularly using resonance transitions, are nonlinear, in contrast to GFAAS growth curves which are generally linear over several orders of magnitude. The nonlinearity appears to be largely due to the design of commercial furnace atomizers and the thermal gradient in the furnace tube, which result in self-reversal and self-absorption of analyte emission in GFAES. Analysis by GFAES therefore requires close bracketing of samples by standards in most cases. The major physical interference in GFAAS is the scatter and/or absorption of source radiation by sample matrix constituents. The error due to this interference is additive and increases as analyte concentrations decrease. Typically, correction for this interference is performed by a continuum radiation source time-shared with the elemental line source. 
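The time-shared continuum-source correction mentioned above amounts to a subtraction in absorbance space: the continuum channel sees essentially only the broadband background, while the line-source channel sees analyte plus background. A minimal sketch of that bookkeeping follows, with hypothetical transmittance readings; it is not the authors' instrument software.

```python
from math import log10

def corrected_absorbance(line_transmittance, continuum_transmittance):
    """Background-corrected absorbance from time-shared line and continuum readings.

    Assumes the continuum channel responds only to broadband absorption and scatter,
    so the analyte absorbance is A_line minus A_background.
    """
    a_line = -log10(line_transmittance)              # analyte + background
    a_background = -log10(continuum_transmittance)   # background only
    return a_line - a_background

# Hypothetical readings for one furnace firing:
print(corrected_absorbance(line_transmittance=0.60, continuum_transmittance=0.85))
# approximately 0.222 - 0.071 = 0.151 absorbance attributable to the analyte
```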
Scatter and absorption of analyte emission in GFAES are not as serious a problem as in GFAAS, but correction for the interference is complicated by the nonlinearity of GFAES growth curves. If a standard-addition correction procedure is used, chemical interferences (which reduce the analyte atomic population and thus reduce the emission intensity) must be distinguished from scatter or absorption (which do not affect the atomic population but reduce the emission intensity). The effect of the two interferences on a nonlinear intensity versus concentration relationship will be different.

B. Chemical interferences

Chemical interferences in both GFAES and GFAAS result in multiplicative errors and are correctable by standard-addition procedures. Interferences due to compound formation appear to be less severe in GFAES than in GFAAS, probably because emission intensity maxima occur later in the atomization cycle than absorption maxima. The gas temperature in the furnace will be higher when emission intensity maxima occur, thus decreasing compound formation effects. On the other hand, the increased temperature appears to increase ionization interferences in GFAES.

C. Spectral interferences

Spectral interferences in GFAES are more severe than in GFAAS. Source intensity modulation eliminates any additive errors in GFAAS due to the blackbody emission from the heated graphite tube, and absorption line overlaps are less likely to occur than emission line overlaps. A wavelength modulation system is required to eliminate the additive error due to the furnace blackbody emission in GFAES. However, structure in the wavelength distribution of the background radiation from the furnace may still interfere with GFAES analysis, although wavelength modulation intensity-nulling procedures may be employed to reduce the effect in some cases [3]. Scatter by matrix components of blackbody radiation into the monochromator may also be considered a spectral interference in GFAES, but it is correctable using the wavelength-modulation background-correction system. Contamination in the graphite furnace appears to be a greater problem in GFAES, due to the higher temperatures employed. Spectral interferences, due to the overlap of line or band emission from the sample and source with the analyte emission distribution, can occur as in flame atomic emission. Such spectral interferences should either be resolved by employing a smaller spectral bandpass or nulled by using the wavelength modulation system.

4. Results

A comparison of detection limits (LOD) for GFAES [2] and GFAAS [5], as well as approximate concentrations of several metals in seawater [6] and proposed Standard Reference Material 1643 (Trace Elements in Water) [7], is shown in table 1. The concentration of most elements in seawater is below the concentration range required for direct analysis procedures, so preconcentration measures must be employed, with the inherent contamination and recovery problems. An exception is barium, which can be determined directly by GFAES, while direct determination by GFAAS is extremely difficult or impossible using a conventional atomic absorption spectrometer. The concentrations of many elements in proposed SRM 1643 are within the analytical capacity of both GFAES and GFAAS for direct analysis without preconcentration.

Table 1. Water analysis by GFAAS and GFAES.

            Limits of detection (µg/l)a      Approximate elemental concentrations (µg/l)
Element     GFAES        GFAAS               SRM 1643        Seawater
Al          0.04         0.1                 80              0 - 1900
Ba          0.08         2                   40              6 - 90
Be          2            0.06                20              --
Cr          0.4          0.2                 16              0.04 - 2.5
Cu          0.4          0.1                 15              1 - 25
Ni          4            2                   50              0.1 - 2.6

aBased on a 50 µl sample size.

SRM 1643 is designed to simulate elemental concentrations in natural water samples.
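Table 1 invites a simple screening calculation: direct analysis is reasonable only when the expected concentration comfortably exceeds the detection limit of the chosen technique. The sketch below runs that comparison for a few entries of the table as reconstructed here; the factor-of-ten margin is an arbitrary working criterion, not a recommendation from this paper.

```python
# (GFAES LOD, GFAAS LOD, approximate concentration), all in ug/L; illustrative values
# drawn from table 1 (the seawater entry uses a value within the quoted range).
cases = {
    "Ba in SRM 1643": (0.08, 2.0, 40.0),
    "Be in SRM 1643": (2.0, 0.06, 20.0),
    "Ni in seawater": (4.0, 2.0, 1.0),
}

MARGIN = 10  # require concentration >= 10 x LOD before calling direct analysis comfortable

for name, (lod_gfaes, lod_gfaas, conc) in cases.items():
    by_gfaes = "yes" if conc >= MARGIN * lod_gfaes else "no"
    by_gfaas = "yes" if conc >= MARGIN * lod_gfaas else "no"
    print(f"{name}: direct GFAES {by_gfaes}, direct GFAAS {by_gfaas}")
```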
The concentrations of the major matrix constituents (Na, K, Ca, Mg) are in the µg/ml range (i.e., approximate concentrations are Ca ~27 µg/ml, Mg ~7 µg/ml, and Na ~10 µg/ml). Although some chemical interferences are observed, the major factor affecting accuracy is the sensitivity of each technique for the element being analyzed. Figure 1 compares recorder tracings of GFAES and GFAAS signals from barium and beryllium in SRM 1643. The comparison is made at equivalent noise bandwidths of both systems under optimum experimental conditions for both techniques. The LOD of barium by GFAES is approximately 25 times lower than the LOD by GFAAS, while the LOD for beryllium by GFAAS is approximately 30 times lower than the LOD by GFAES. This is clearly reflected in the comparison of signal-to-noise ratios observed in figure 1.

Figure 1. A comparison of GFAES and GFAAS signals for barium and beryllium in proposed SRM 1643. (A) 1 ng Ba, 553.6 nm. (B) 1 ng Be, 234.8 nm.

As the matrix becomes more complex, such as in seawater, the effect of physical, chemical and spectral interferences on analytical accuracy becomes more pronounced. However, no generalization can really be made about the analytical accuracy of GFAES versus GFAAS, although evaluations can be made on individual elements in a particular sample. The accuracy of each technique will depend on the element being analyzed, the sample matrix, and the skill of the analyst in recognizing and correcting for interferences when they occur.

References

[1] Ottaway, J. M. and Shaw, F., Analyst 100, 438 (1975).
[2] Epstein, M. S., Rains, T. C. and O'Haver, T. C., Appl. Spectrosc. 30, 324 (1976).
[3] Epstein, M. S. and O'Haver, T. C., Spectrochim. Acta 30B, 135 (1975).
[4] Koirtyohann, S. R., chapter in Flame Emission and Atomic Absorption Spectrometry, Vol. I, Dean, J. A. and Rains, T. C., Eds., pp. 295-315 (Marcel Dekker, New York, 1969).
[5] Perkin-Elmer Instrument Literature, HGA-2100, Feb. 1974.
[6] Parker, C. R., Water Analysis by Atomic Absorption Spectroscopy, Varian Techtron, Palo Alto, California, 1972.
[7] Moody, J. R., private communication, National Bureau of Standards.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

IMPROVED ACCURACY IN BACKGROUND CORRECTED ATOMIC ABSORPTION SPECTROMETRY

Andrew T. Zander and Thomas C. O'Haver
Chemistry Department
University of Maryland
College Park, Maryland 20742, USA

1. Introduction

The use of background correction techniques in atomic absorption spectrometry allows the analysis of a broad range of samples with complex matrices. These techniques eliminate the interference arising from particulate light scattering and broadband absorption by the matrix constituents. In conjunction with carbon furnace atomizers and standard addition methods, very low levels of analyte can be determined in complex samples with a minimum of sample preparation and a high degree of accuracy. Recently, attention has been focused on the difficulty of obtaining corrections for spectral interferences when analyzing samples with unusual or complex matrices [1-4]1.
As more and greater demands are placed on atomic spectrometric means of analysis, in terms of accuracy and precision at very low levels of analyte concentration, it is important to understand the limitations of the instruments utilized in dealing with interferences of spectral origin. The types of spectral interferences which are most commonly encountered in atomic absorption are: scattering of radiation and broadband absorption; nonanalyte absorbing lines within the spectral bandwidth; and direct absorption spectral overlap. The interference from nonanalyte absorbing lines within the spectral bandwidth is of importance principally when a continuum source of radiation is used as a secondary source to monitor the background, or for continuum source atomic absorption methods which use a continuum source as a primary radiation source. The high elemental selectivity of hollow cathode lamp line sources eliminates this interference for line source atomic absorption methods, up to the point at which this interference becomes that of direct absorption spectral overlap.

2. Spectral Interferences and Methods of Correction

A. Scattering of radiation and broadband absorption

Reflection and scattering of incident radiation by particles in the flame and absorption by molecular species cause absorption signals which generally exhibit broadband spectral character. Such matrix background absorption results in erroneously high analyte absorbance readings. Figure 1 shows point-by-point absorption spectra of solutions of

1Figures in brackets indicate literature references at the end of this paper.

Figure 2. Multiple Standard Additions Graph. (Axis labels: mg/l standard added; mg/l actually present.)

Hach Chemical Company provides a convenient system for performing standard additions by offering premixed, precalibrated standard solutions (Voluette™ Ampule Standards)2 for almost every important water and wastewater test parameter. Using a Hach Micromatic™ Pipet, small increments of concentrated standard may be accurately added to the sample over a range of 0.1 to 0.5 ml, rather than adding a larger amount of a dilute standard solution, which would make it necessary to correct for the change in volume.

2Voluette™ Ampule Standards are ampules of premixed, precalibrated standard reagent chemical solutions intended primarily for analytical laboratory use in water and wastewater testing.

3. Conclusion

A summary of general problem categories which may be encountered on unknown real-world samples using standard additions, and specific examples of each, are described in table 1. In almost all the uses listed, the method of standard additions may help the analyst determine if problems are present that would affect the reliability of his original analysis, thus allowing him to arrive at a more accurate answer. Standard additions will not help if part of the analyte is tied up by the interference or if the interference reacts in a manner similar to the analyte. Thus the method of standard additions is a useful tool in that the performance of the reagents, the instruments and apparatus, and the procedure itself can be checked for possible error. The method has its greatest utility in demonstrating that the result of an analysis is incorrect.
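The multiple-additions procedure illustrated in figure 2 reduces to fitting a straight line of instrument response against added concentration and reading the unknown from the magnitude of the x-intercept. The sketch below is a generic least-squares version with hypothetical readings; the dilution note at the end applies only when the added standard changes the sample volume appreciably, which the small concentrated additions described above are intended to avoid.

```python
# Hypothetical spiking experiment: concentration added (mg/L) versus instrument response.
added = [0.0, 0.1, 0.2, 0.3]
response = [0.210, 0.305, 0.398, 0.496]

n = len(added)
mean_x = sum(added) / n
mean_y = sum(response) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(added, response))
         / sum((x - mean_x) ** 2 for x in added))
intercept = mean_y - slope * mean_x

# The unknown concentration is the magnitude of the x-intercept of the fitted line.
print(f"estimated concentration = {intercept / slope:.2f} mg/L")

# If a spike of volume v_add is made into sample volume v_sample, each nominal
# "added" value should first be corrected for dilution:
#   added_corrected = c_standard * v_add / (v_sample + v_add)
```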
Table 1. Standard additions: summary of problems

Problem: general category                        Specific example                                                   Will standard additions be effective?

1. Interference
   a) Decreases constant                         Fe(CN)6^3- in 1,10-phenanthroline method for iron                  No
   b) Increases constant                         Cd in the Zincon method for Zn                                     No
2. Interference prevents test working            Co blocks end pt. in Calmagite method for hardness                 Yes
3. Interference changes molar absorptivity       Substituted phenols in 4-aminoantipyrine                           Yes
4. Interference depletes reagent                 Colorless Cu, Cd 1,10-phenanthroline complexes in method for Fe    Yes
5. Reagent and/or standard efficacy              Loss of reducing power in 1,10-phenanthroline method for Fe        Yes
6. Matrix effects
   a) Kinetics shift                             Analysis of PO4 and SiO2 in salt water                             Yes
   b) pH, buffer capacity inadequate             Cr(III) in EDTA back titration method                              Yes
   c) Temperature                                                                                                   Yes
7. Incorrectly prepared standards                                                                                   Yes
8. Mechanical aspects of procedure
   a) Incorrect wavelength                                                                                          Yes
   b) Unmatched cells (dirty, scratched, etc.)                                                                      Yes
   c) Forgotten reagent                                                                                             Yes
   d) Incorrect timing                                                                                              Yes

References

[1] Kloster, M. B., Amer. Lab., July, 63 (1976).
[2] Barendrecht, E., in Electroanalytical Chemistry, Bard, A. J., Ed., p. 53, Marcel Dekker, New York, 1967.
[3] Copeland, T. R. and Skogerboe, R. K., Anal. Chem. 46, 1257A (1974).
[4] Orion Newsletter, II (11,12), 50 (1970).
[5] Varian manual, Basic Atomic Absorption Spectrophotometry, 1974.
[6] Willard, H. H., Merritt, L. L. and Dean, J. A., Instrumental Methods of Analysis, p. 379, Van Nostrand, New York, 1974.
[7] Kloster, M. B. and Hach, C. C., Anal. Chem. 42, 779 (1970).
[8] Hach Chemical Co., Water Analysis Handbook, p. 1-25, 1976.

Part II. DETERMINATION OF TRACE ORGANIC POLLUTANTS IN WATER

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

UNMET NEEDS IN THE ANALYSIS OF TRACE ORGANICS IN WATER

William T. Donaldson
Environmental Protection Agency
Environmental Research Laboratory
Athens, Georgia 30601, USA

1. Introduction

Addressing the unmet needs in the analysis of trace organics in water requires considering the problems to be studied and the factors that define requirements for identification and measurement. Trace organics are of interest because some are known to be detrimental to aquatic organisms and some are suspected of being detrimental to human health when ingested. It is therefore necessary to know which of these organic compounds are in the aquatic environment, how they get there, what their effects are, and how they can be controlled.

2. Number of Compounds in Water

There are over 2 million known organic chemicals. In our laboratory we have observed that the number of compounds detected in a sample of water is related to the detection level: as the detection level decreases an order of magnitude, the number of compounds detected increases an order of magnitude. Based on the number of compounds detected by current methods, one would expect to find every known compound at a concentration of 10⁻¹² g/l or higher. In only 5500 literature entries reporting organic compounds identified in water, 1296 different compounds were reported. On the basis of these observations one would expect to encounter a large number of different compounds in samples from the aquatic environment.

At present toxicologists are unable to select the most important compounds for environmental dose-response assessments. Of 6000 that have been tested for carcinogenicity, 1000 showed some carcinogenic activity.
Adding to the complexity of selecting specific compounds of interest is the difficulty in extrapolating results from laboratory tests with lower organisms exposed at high concentrations to human health effects of concentrations encountered in the environment. Knowledge of distribution and concentration of organics in the environment would be valuable to the toxicologists in selecting compounds for health-effects screening.

To assure that wastewaters are treated efficiently and adequately it is necessary to know what compounds are in raw wastes and how they are removed or transformed during treatment. Since raw wastes contain many compounds that are not easily predicted by considering manufacturing products and processes, these complex mixtures must be analyzed both qualitatively and quantitatively. The same is true of the treated wastewaters.

Not all organic compounds enter the aquatic environment through discharge of industrial and municipal wastes. Some compounds, such as agricultural chemicals, are washed from non-point land sources; some fall from the atmosphere; and some are transformed as a result of chemical or bacterial action on other compounds present in the water. So complex are the factors that affect transformation processes in the natural environment that chemical analysis of environmental waters is mandated, both to learn what chemicals are there and to study the processes by which they are formed.

1In these Proceedings l denotes liter, usually abbreviated L.

All of the factors discussed above point to the need for a cost-effective capability to identify and measure all organic compounds, above some prescribed detection level, in environmental samples rather than measuring concentrations of only selected compounds.

Because detection level determines the number of compounds identified in a sample, one must select the detection level carefully. It must be low enough to reveal all important compounds but not so low that it makes the analysis unduly difficult. In drinking water, those compounds at the highest concentrations are usually present at 10² µg/l or less. At 10⁻² µg/l the analyst's ability to separate compounds is taxed. Therefore a detection limit of 10⁻² or 10⁻¹ µg/l is usually selected for drinking water and most lakes and streams. A higher level would probably be selected for wastewater effluents, whose constituents are generally more concentrated. While this need presents a formidable challenge, the challenge can be met. Much can be accomplished toward this end with current technology, but there are significant problems to be addressed.

3. A Proposed Approach

Samples can be collected either by "grabbing" a liter of the water of interest or by passing a larger quantity through carbon or macroreticular resin accumulators. In the laboratory, some volatile compounds are purged from the sample with inert gas and sorbed onto a resin. The compounds remaining in water or on accumulators are extracted sequentially with appropriate organic solvents to effect a preliminary separation. Concentration of the extracts by evaporation follows. Marker compounds for retention time and quantitation references can be added prior to concentration. The concentrates are injected into gas chromatographs, or high-pressure liquid chromatographs, coupled to low-resolution mass spectrometers, which record spectra of all separated compounds.
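Once marker compounds are carried through extraction and concentration as described, each separated component can be quantitated from its peak area relative to the marker and indexed by its relative retention time. The sketch below shows only the generic arithmetic; the areas, response factor, and retention times are hypothetical, and the routine is not the author's software.

```python
def concentration_from_marker(area_analyte, area_marker, marker_conc_ug_l,
                              response_factor=1.0):
    """Internal-reference quantitation relative to a co-analyzed marker compound."""
    return (area_analyte / area_marker) * marker_conc_ug_l / response_factor

def relative_retention_time(rt_analyte_min, rt_marker_min):
    """Retention time expressed relative to the marker compound."""
    return rt_analyte_min / rt_marker_min

# Hypothetical GC peak data:
print(concentration_from_marker(area_analyte=1520, area_marker=3100,
                                marker_conc_ug_l=5.0, response_factor=0.9))
print(relative_retention_time(rt_analyte_min=12.4, rt_marker_min=9.8))
```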
A central computer tentatively identifies sample components by matching their spectra with those of known compounds in a computer library. When the computer "identifies" a compound, it also selects an appropriate marker compound for automatic computation of concentration and relative retention time. Spectra of compounds that are not in the computer file are listed separately by the computer and numbered to determine their frequency of occurrence, even though they are not identified. These compounds must be identified by the analyst, who usually generates additional spectral information. Frequency of occurrence of unidentified compounds is a useful factor in prioritizing them for identification. 4. Research Needs An analysis of the steps in the proposed approach reveals many unmet needs for research, beginning with the sampling process. Grab sampling is simple enough for stable and non- volatile compounds, but improvements are needed to make certain that volatile compounds do not escape and others do not decompose or interact during storage. The use of accumulators raises needs for further studies to determine appropriate accumulator materials, optimum flow rates, and removal efficiencies for all compounds or classes of compounds of interest. Development of a sampling device to accommodate accumulators and to meter flows is an indicated need. Solvent extraction from the accumulator or from the grab sample is often performed without adequate optimization of extraction conditions or adequate knowledge of recoveries. Sequential extraction from accumulators should be investigated to effect preliminary separations. Marker compounds for quantitation and retention-time references should be added as early in the process, as is practical, to indicate recoveries through subsequent steps. Appropriate marker compounds must be developed for each class of compounds that behaves differently during extraction, concentration, separation or detection. In these Proceedings 1 denotes liter, usually abbreviated L. 70 Concentration techniques for organic extracts are generally well established, but much needs to be done to determine the best means of concentrating polar compounds, that remain in the aqueous phase during solvent extraction of grab samples. Although improvements in gas chromatography are reported continually, gas chromato- graphy is by far the best established separation technique. Unfortunately only 10 to 20 percent of the mass of organic material in most environmental waters is amenable to gas chromatography. Recent advancements in high-pressure liquid chromatography show con- siderable promise for separating the polar volatile and the non-volatile compounds of moderate molecular weight. So little is known about the nature of high-molecular-weight materials in environmental waters that it is difficult to suggest approaches for their analysis. Obviously the tedious task of characterizing these materials should be pursued to provide further insight into their nature and their significance as pollutants. Rapid-scanning, low-resolution mass spectrometers have proved to be so effective in providing identifying spectra, that they are almost universally selected as the first-line approach to identifying separated compounds. Computer-assisted interpretation of spectra has advanced far enough to make it practically an integral part of organic mass spectrometry. 
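A crude flavor of such computer matching of spectra against a library is given below: each spectrum is reduced to its strongest peaks, and an unknown is scored against every library entry by a simple overlap measure. This toy is illustrative only; it is not the Biemann, PBM, or STIRS algorithms discussed in the text, and the spectra are invented.

```python
def top_peaks(spectrum, n=8):
    """The m/e values of the n most intense peaks of a (m/e, intensity) list."""
    return {mz for mz, intensity in sorted(spectrum, key=lambda p: -p[1])[:n]}

def match_score(unknown, reference, n=8):
    """Fraction of the unknown's top peaks that also appear in the reference."""
    u = top_peaks(unknown, n)
    return len(u & top_peaks(reference, n)) / len(u)

# Hypothetical library of (m/e, relative intensity) spectra:
library = {
    "compound A": [(43, 100), (58, 85), (71, 40), (86, 20)],
    "compound B": [(91, 100), (92, 55), (65, 45), (39, 30)],
}
unknown = [(91, 100), (92, 60), (65, 50), (39, 25)]

best = max(library, key=lambda name: match_score(unknown, library[name]))
print(best, round(match_score(unknown, library[best]), 2))
```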
Currently most low-cost computerized spectra-matching programs are based on empirical matching and, therfore, require that the spectrum of the compound to be identified be present in the computer library of spectra. More relevant files should be developed to further reduce cost and time for spectra matching. For compounds that are not amenable to conventional mass spectrometry, techniques such as atmospheric pressure ionization mass spectrometry, Fourier transform infrared spectroscopy, nuclear magnetic resonance spectrometry and Raman spectroscopy should be investigated for their applicability to these compounds. Information generated by these techniques will also be helpful to the analyst in identifying compounds that can not be identified from their mass spectra alone. Ab initio identification of organic compounds (identifying them by piecing together information gained in the laboratory) is tedious and costly, but some compounds will be encountered that can not be identified by empirical mass spectra matching. Generation of additional spectral information for these compounds is necessary. If large numbers of such compounds are encountered, then it will be worthwhile to develop a systematic, repetitive (although complex) approach to ab initio identification to make the process more efficient. Full advantage should be taken of computer-assisted analytical techniques in such a process. 5. Surrogate Methods Obviously a comprehensive qualitative and quantitative analysis is not necessary or practical for some samples. For example, once the identities of components in an industrial effluent are established, a GC retention time is adequate for identification. Extraction and concentration conditions can be tailored to the particular effluent. Municipal wastes and water supplies will vary in composition as a function of time, and GC retention times alone do not provide reliable identities for components in these samples. Surrogate methods may provide a means of monitoring these systems. Surrogate methods do not measure specific chemicals; they respond to characteristics of the sample representing significant groups of chemicals that can be considered, as groups, from the stand point of toxicity or control. For example "purgeable organic chlorine" may be a significant parameter for water supplies, since purgeable organic compounds formed during chlorine disinfection represent a significant percentage of the mass of organics so formed. Conversely most compounds formed during chlorine disinfection of municipal sewage are non- purgeable, suggesting "total organic chlorine" as a parameter for consideration. 71 In neither case mentioned above is there sufficient information to establish the usefulness of such parameters. From the standpoint of toxicity the same can be said of Total Organic Carbon as a parameter, because TOC would be influenced largely by the concen- tration of humic acids and lignins in natural waters. A major problem with surrogate methods is that not enough is presently known about the significance of the information they produce. 6. Summary What is needed at this time is research to improve capabilities to analyze samples comprehensively so that the significance of trace organics in the environment can be determined. This paper raises more questions than it answers. Hopefully these questions will stimulate analytical chemists to seek their answers. 72 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. 
Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg , Md. (Issued November 1977) GCIR— A VERSATILE AND POWERFUL TOOL FOR ANALYSIS OF POLLUTANTS Leo V. Azarraga U.S. Environmental Protection Agency Environmental Research Laboratory College Station Road Athens, Georgia 30601, USA 1. Introduction In an earlier study [l] 1 , the feasibility of using the Fourier transform gas chromatograph-infrared (GCIR) system for analyzing environmental pollutants was investigated. Because of the complex chromatograms of environmental samples, the most useful method of recording the spectra of GC effluents was found to be the sequential storage of each inter- ferogram throughout the chromatographic run. 2. Discussion On-line-trapping of the eluted zones and stop-flow methods have limited applications. The time required for data acquisition and the quantity of sample needed to allow trapping or stop-flow analysis of each component are the major limiting factors. Poor GC resolution and high GC background also restrict the application of on-the-fly signal averaging. None of the three methods (on-line-trapping, stop-flow, and on-the-fly signal averaging), therefore, is completely reliable when measuring the spectrum intrinsic to the eluted substance. This is because extraneous absorptions may be introduced into the spectrum by GC background materials or by the coherent addition of interferograms containing different spectral information. The original FTS-14D spectrometer system and its GCIR accessory that were used during the initial evaluation lacked sensitivity and data collection and storage capabilities for the intended method of GCIR analysis. These problems were resolved with the adoption of three major system modifications: (1) a more sensitive GCIR cell was installed, (2) the triglyceride sulfate (TGS) detector was replaced, and (3) the necessary software and hardware were added to allow sequential storage and the processing of interferograms collected during the GC run. These modifications have now been implemented and significant observations have been reported at various stages of modification. A gain of more than an order of magnitude in the signal to noise (S/N) ratio of GCIR spectra was reported by Azarraga and McCall [2] when a liquid-nitrogen-cooled MCT detector was substituted for the TGS detector. The observed gain agreed with the S/N advantage of mercury cadmium telluride (MCT) over TGS calculated by Griffiths [3]. The critical stage in the modification involved the construction of a sensitive GCIR cell. Work undertaken at EPA's Athens Environmental Research Laboratory resulted in the development of a gold-coating process for small diameter glass tubes and produced the figures in brackets indicate the literature references at the end of this paper. 73 required infrared light-pipe (LP) for the construction of a sensitive GCIR cell. After the incorporation of the peripheral hardware and software for GCIR data collection and processing, results were obtained showing the sensitivity and capabilities of the GCIR system with the modified LP. These results were reported during the Fifth Annual Symposium on Recent Advances in the Analytical Chemistry of Pollutants in May 1975 [4]. Observations on the effect of GC columns on GCIR sensitivity were reported during the 27th Pittsburgh Conference in March 1976 [5]. 3. 
Conclusion The present GCIR system is sufficiently sensitive to yield identifiable spectra from quantities of substances as small as 0.2 microgram. Interferograms are collected at rates of 40, 20, and 15 per minute at 8, 4, and 2 cm -1 resolution, respectively. At these rates, all interferograms are stored singly and sequentially throughout a 55-minute GC run. The system, therefore, provides a complete and permanent storage of spectral information on the chromatographed sample. References [1] Azarraga, L. V., and McCall, A. C, Infrared Fourier Transform Spectrometry of Gas Chromatography Effluents, pub. No. EPA-660/2-73-0Z4 (Environmental Protection Agency, Athens, Georgia, January 1974). [2] Azarraga, L. V., and McCall, A. C. , Fourier Transform Infrared Spectroscopy of Gas Chromatography Effluents, 25th Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy (Cleveland, March 1974). [3] Griffiths, P. R. , Optimization of Parameters for On-line GC-IR Using an FTS-14 Spectro- meter, 25th Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy (Cleveland, March 1974). [4] Azarraga, L. V., GCIR System with Sub-microgram Sensitivity, 5th Annual Symposium on Recent Advances in Analytical Chemistry of Pollutants (Jekyll Island, May 1975). [5] Azarraga, L. V., Improved Sensitivity of On-the-fly CGIR Spectroscopy, 27th Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy (Cleveland, March 1976). 74 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gai thersburg, Md. (Issued November 1977). THE MSDC/EPA/NIH MASS SPECTRAL SEARCH SYSTEM S. R. Heller Environmental Protection Agency 401 M Street S.W. (PM-218) Washington, DC 20460, USA 1. Introduction A large file of unique and high quality mass spectral data has been assembled in a collaborative effort involving the U.S. Environmental Protection Agency (EPA), the U.S. National Institutes of Health (NIH) and the U.K. Mass Spectrometry Data Centre (MSDC). This file of 29,936 spectra, together with the programs for searching through it has been made available to the international scientific community via a time-sharing computer net- work. This "Mass Spectral Search System" (MSSS) has been in operation for almost five years during which time an estimated 50,000 searches have been completed. Currently there are about 200 working accounts which are using the system on a daily basis. 2. Background The last few years has seen the gradual development primarily at NIH and EPA of a Chemical Information System (CIS) on an interactive time-sharing DEC PDP-10 which contains several collections of data such as mass spectra, carbon nuclear magnetic resonance spectra and x-ray diffraction data. Of all these components, the MSSS is the most highly developed and has been operating on a commercial basis for three years. It is available on a fee- for-service basis via the ADP-Network Services Inc., Cyphernetics Division computer network which can be reached by a local telephone call throughout most of North America and Western Europe and via Telex throughout the world. This paper describes the MSSS and provides a' report on the status of the system. 3. Methods of Searching As a practical matter, it is preferable that the time necessary to search through a data base be largely independent of the size of the data base. 
In the case of MSSS, this is accomplished using an inverted file technique with a data base of "abbreviated" mass spectra. These are spectra in which only the two most intense peaks in consecutive gaps of 14 atomic mass units (amu) are retained. The resulting file is only about 30 percent the size of the full file but, as has been shown by Biemann and his group at MIT, contains essentially all the information that was in the unabbreviated spectra. Although the abbreviated file is used for efficient searching, the full file is also stored in the computer in order that users may retrieve complete mass spectra to confirm identifications.

Searching through this mass spectral data base can be accomplished in a variety of ways. These programs are summarized in table 1. Probably the most important of these methods is the "PEAK" search. This program permits one to identify all the mass spectra in the file that contain a specific peak (m/e value) with an intensity that falls into a given range. The data base can be searched with a second peak and the two lists of hits are then automatically intersected to produce a list of spectra that contain both peaks. Intensities are not precisely reproducible in mass spectral measurements and so a range of acceptable intensities must be created for the purposes of a search. This can be done by the user, who can specify upper and lower values for acceptable intensities. Alternatively, if he specifies only one value, the program takes this value and accepts any peak whose intensity is within ±30 percent of it.

Table 1. Mass spectral search system (MSSS): current and future options

1. Peak and intensity search
2. Loss and intensity search
3. Molecular weight search
4. Code search
5. Molecular formula search (a) complete (b) partial, stripped
6. Peak and loss search
7. Peak and molecular weight search
8. Peak and molecular formula search
9. Peak and code search
10. Loss and molecular weight search
11. Loss and molecular formula search
12. Loss and code search
13. Molecular weight and code search
14. Molecular weight and molecular formula search
15. Complete spectrum search (a) BIEMANN (b) STIRS (c) PBM
16. Dissimilarity comparison
17. Spectrum/source print-out
18. Spectrum/source display
19. Spectrum/source plotting
20. Spectrum/source microfiche
21. Crab-comments and complaints
22. Entering new data (a) mini-computer interface (b) data collection sheets
23. News-news of the MSSS
24. MSDC Bulletin-literature search
25. CAS Registry Data
26. SSS-substructure search of CAS data
27. WLN
28. Molecular formula from isotope pattern
29. Molecular weight from spectral data

A different program called LOSS can be used to identify all the spectra in the file that exhibit the loss of a given neutral mass from the molecular ion. Such a search is of limited utility alone, but it can be used in conjunction with the PEAK search. This is an "and" type of search known as PEAK AND LOSS. As might be expected, it is a very powerful means of narrowing a search down rapidly to a few candidate spectra. Other means by which the data base can be searched are given in table 1 and include molecular weight, partial or complete molecular formula, or code search. This last method enables one to find all entries in the file of compounds that possess a particular functional group. The codes are a series of arbitrary multi-digit codes that are used to define functional groups and, in some cases, compound type.
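A compact way to picture the data structures just described is sketched below: spectra are "abbreviated" to the two strongest peaks in each consecutive 14-amu interval, and an inverted file keyed on nominal m/e lets a PEAK search touch only the entries containing that peak; two hit lists are then intersected as in a search on two peaks. The spectra are hypothetical and the code is a schematic toy, not the MSSS implementation.

```python
from collections import defaultdict

def abbreviate(spectrum, window=14):
    """Keep the two most intense peaks in each consecutive `window`-amu interval."""
    bins = defaultdict(list)
    for mz, intensity in spectrum:
        bins[mz // window].append((mz, intensity))
    kept = []
    for peaks in bins.values():
        kept.extend(sorted(peaks, key=lambda p: -p[1])[:2])
    return sorted(kept)

# Hypothetical abbreviated file and its inverted index: m/e -> {(spectrum id, intensity)}
spectra = {
    1: abbreviate([(43, 100), (44, 10), (57, 60), (58, 90), (71, 30)]),
    2: abbreviate([(43, 20), (91, 100), (92, 55), (105, 40)]),
}
index = defaultdict(set)
for sid, peaks in spectra.items():
    for mz, intensity in peaks:
        index[mz].add((sid, intensity))

def peak_search(mz, low, high):
    """IDs of spectra having a peak at m/e = mz with intensity between low and high."""
    return {sid for sid, intensity in index.get(mz, ()) if low <= intensity <= high}

# Intersecting two hit lists, as described for searches on two peaks:
print(peak_search(43, 70, 100) & peak_search(58, 60, 100))   # -> {1}
```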
Many binary "AND" combinations of these simple searches can be invoked and such combi- nations are generally found to act as much more powerful filters than a simple search. As the data base increases in size, such methods of searching become much more advantageous and with the file at its present size (29,936 spectra), we find that simple searches take considerably more of an operator's time than a combined search such as PEAK AND MOLECULAR FORMULA. In contrast to the "interactive" method of search through the mass spectral data base, there are programs (Biemann, PBM and STIRS) that will compare an unknown spectrum sequen- tially to every spectrum in the file. These programs retain the best fits, ranked in order of goodness of fit. Such techniques have the advantage of being operator independent; no decision is necessary as to which peaks to enter, all the peaks are used. The disadvantage of these methods are that they are relatively extravagant of computer time and they require that the complete mass spectrum be entered into the system. If this has to be typed in, this constitutes a rather discouraging preliminary. 76 The first of these problems has been countered by the development of a program which collects and holds the spectra of unknowns. It then puts through the searching procedure during off-peak hours, when the machine charges are considerably lower. The results of this search are available by 8:00 a.m. on the following day, which is not inconvenient for many workers. The second of the problems: the entering of data into the search has been overcome by the development of an interface that permits the user to couple his own mass spectrometer-minicomputer combination directly to the network computer. The search is carried out and the answers are relayed back to the user by way of mass spectral data flow from the mass spectrometer through the minicomputer and interface to MSSS. At present, this type of interface can be purchased to operate with the Varian, SI- 1 50 , Hewlett-Packard cassette and disk, INCOS and Finnigan 6000 data systems. Other manufacturers are in the process of developing the necessary software for their computer systems. 4. Data Retrieval The remaining programs in MSSS deal with the partial or complete retrieval of specified mass spectra from the data base. A file of complete mass spectra is available in the computer and one may, upon completion of a search and identification of the appropriate ID #, use this number to obtain a printout of the full spectrum or part of it. If one is using a terminal that is capable of plotting (such as the Tektronix 4000 series, the DEC GT40 series, Zeta Plotter, H-P Plotter, ete. ) then a spectrum may be plotted as a bar graph. Whether the data are reprinted or plotted, the origin of the spectrum is also given as are experimental conditions under which it was measured. 5. Applications During the five years that MSSS has been used on a regular basis, it has come to be of particular value in some well identified contexts in laboratories in the academic, industrial and government sectors. In general, the reasons for which MSSS is used are not recorded, but examples in which the system proves to be especially helpful often become more widely known by a number of mechanisms. That MSSS has been featured in the plot of a science fiction novel ("The Swarm" by Arthur Herzog) can be regarded as a form of recognition, dubious though it may be! 
An early example of the value of the MSSS that was fairly well publicized involved the treatment of a six-year old child admitted to a Denver hospital. The child had ingested some of the contents of an unlabelled bottle of liquid and was developing symptoms of serious intoxication. Mass spectrometry of the material by the local Denver EPA lab and application of MSSS revealed the toxic principle to be parathion. Confirmation of this was obtained by comparison with an authentic sample and a vigorous course of treatment was commenced, all within an hour. In retrospect, there seems little doubt that the fortunate outcome of the episode was due, at least in part, to the MSSS. The public is unexpectedly and seriously exposed to chemicals in a host of ways, such as oil spillage, train and truck accidents and the release of industrial waste into the environment. Rapid and accurate identification of compounds, which are often only present at low levels, is essential in the decision as to whether the chemicals pose a danger to the health of the community or the environment. The MSSS is used in a very routine way by analysts of the EPA and was involved in the recent, widely publicized, identification of halogenated organic compounds in the water supplies of several cities, most notably New Orleans. The analysis of the compounds in question was carried out with a gas chromatograph coupled to a mass spectrometer, the resulting mass spectra were examined with the help of the MSSS and well over sixty distinct compounds were identified in this way. Final confirmation of these identifications was in every case arrived at by a direct comparison of the experimentally obtained spectrum with the file spectrum of the appropriate compound. 77 values and calculate isotope incorporations has recently been added to the pilot version of MSSS and should become generally available soon. Finally, searching of the mass spectral literature via the Mass Spectrometry Bulletin from 1966-1975 is now possible. 9. Summary The purpose of this paper has been to describe the capabilities of the MSSS. The system is now in a relatively stable form on the ADP-Cyphernetics computer network and inquiries regarding its use are invited. We would also be pleased to learn of new sources of mass spectral data and will be happy to acquire and process such data. Details on obtaining an account with the Cyphernetics network can be obtained from The Manager, Data Base Services, Cyphernetics, 175 Jackson Plaza, Ann Arbor, Michigan 48106, telephone: 313-769-6800, or from The Manager, Cyphernetics International, J. C. van Markenlaan 3, Postbus 286, Rijswijk (Z.H.), The Netherlands, telephone: 070-94-88-66. The author would like to thank Professor K. Biemann of MIT for providing the data base that was originally used in the development of the MSSS. They would also like to thank all of their colleagues who have assisted greatly in the development .of the MSSS. In particular, they would like to thank the following: G. W. A. Milne, A. Bridy, W. Budde, H. H. Fales, R. J. Feldmann, R. S. Heller, T. L. Isenhour, D. Maxwell, A. McCormick, J. McGuire, F. W. McLafferty, M. Springer, V. Vinton, S. Woodward, and M. Yagyda. 80 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). METHODS FOR ANALYSIS OF TRACE LEVELS (yg/kg) OF HYDROCARBONS IN THE MARINE ENVIRONMENT S. N. Chesler, B. H. Gump 1 , H. S. 
Hertz W. E. May and S. A. Wise Trace Organic Analysis Group Bioorganic Standards Section Analytical Chemistry Division National Bureau of Standards Washington, DC 20234, USA 1. Introduction The low concentration of hydrocarbons anticipated in pollution baseline studies neces- sitates the development of analytical techniques sensitive at the sub-microgram per kilo- gram level. At this low level, the problems of analytical blanks and component recoveries become paramount. Furthermore, analytical methods that ultimately permit the identification of individual hydrocarbon components are desired. Several authors have developed methods for the analysis of hydrocarbons in marine samples [1-6] 2 . These methods are of three basic types: screening for total extractable hydrocarbons [1], static headspace analysis, specific for low molecular weight hydrocarbons [3,5], and digestion or extraction of the sample followed by clean-up, concentration and gas chromatographic (GC) analysis of the hvdrocarbon components [2,4,6]. These methods are subject to certain limitations. The screening technique for total extractable hydrocarbons is not sufficiently specific, i.e., it does not afford single-compound identification. Static headspace analysis methods are suitable only for compounds of relatively high volatility. Extraction/digestion procedures are generally lengthy, involve considerable sample handling and usually result in considerable losses of hydrocarbons having molecular weights of 200 and below. An alternative scheme for the analysis of hydrocarbons in marine sediments and in water has been developed in this laboratory. It involves dynamic headspace sampling and the trapping of volatile components on a TENAX-GC packed pre-column. The trapped compo- nents are subsequently analyzed by GC or gas chromatography-mass spectrometry (GC-MS). The non-volatile components are then analyzed by coupled-column liquid chromatography (LC). Samples which have been collected and frozen in the field, are thawed and transferred to a headspace sampling flask in a class 100 laminar flow, clean air hood maintained in a cold (4°C) room. They are then headspace-sampled for a period of four hours using scrubbed nitrogen gas to sweep volatile components into the TENAX-GC packed pre-columns. After headspace sampling is completed, the liquid remaining in the flask is pumped through an LC pre-column packed with Bondapak C18. Subsequent GC or GC-MS (utilizing capillary columns) and analytical LC are used to separate, identify and quantitate the volatile and non- volatile components of the sample. x Present address: California State University, Fresno, CA 93740 2 Figures in brackets indicate the literature references at the end of this paper. 81 Various headsapce sampling and coupled-column LC techniques have been applied in other analysis systems, e.g. [5,7-15]. Grob [8] used charcoal as an adsorbent for headspace sampling of drinking water. Trapped material was desorbed with carbon disulfide and analyzed by GC and GC-MS. Zlatkis and co-workers used Porapak P, Carbosieve and TENAX-GC as adsorbents in air pollution studies [12] and in the headspace-analysis of gases and biological fluids [9,11]. Trapped material was desorbed by heating and transferred to a pre-column for subsequent GC and GC-MS analysis. Snyder [15] has discussed the use of coupled-column LC techniques with Corasil-II and Porasil A for the separation of non-polar, intermediately polar, and polar antioxidants. 
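Because analytical blanks and internal-standard recoveries dominate work at these concentrations, the bookkeeping behind the scheme described above is worth making explicit. The sketch below shows blank subtraction and recovery correction with hypothetical numbers; it is not taken from this paper's data.

```python
def percent_recovery(measured_ug, spiked_ug):
    """Recovery of an internal standard spiked into the sample at collection time."""
    return 100.0 * measured_ug / spiked_ug

def corrected_result(found_ug_kg, blank_ug_kg, recovery_percent):
    """Blank-subtracted, recovery-corrected concentration."""
    return max(found_ug_kg - blank_ug_kg, 0.0) / (recovery_percent / 100.0)

# Hypothetical run: 2.0 ug of an aromatic internal standard spiked, 1.8 ug recovered;
# a naphthalene result of 0.45 ug/kg against a 0.02 ug/kg system blank.
recovery = percent_recovery(measured_ug=1.8, spiked_ug=2.0)
print(f"recovery = {recovery:.0f} %")
print(f"corrected concentration = {corrected_result(0.45, 0.02, recovery):.2f} ug/kg")
```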
The analysis of marine tissue samples for yg/kg hydrocarbon pollutants poses difficul- ties that the above procedure does not overcome. Most of the organic compounds present in this matrix are of biological origin; thus a suitable chemical clean-up of the sample is necessary in order to remove these interferences prior to actual trace organic analysis. Such a method has been developed. It uses dynamic headspace sampling of a homogenized tissue sample, as described above. Polar biogenic interferences are then separated from non-polar hydrocarbons on an LC column and the non-polar fraction is analyzed by GC/GC-MS. 2. Experimental Due to the low levels of hydrocarbons present in the samples, special caution was exercised at all stages during the collection and handling of water and sediment samples [18]. An internal standard is added to the water samples at the time of collection. All samples were quick-frozen in the field for return to the laboratory. Until analyzed, samples were stored in a freezer at -10°C. For analysis, the samples were allowed to thaw overnight in a laminar flow hood in a cold (4°C) room. They were then transferred into 2- liter tared flasks for headspace sampling. Approximately 100 g of sediment or 750 ml of water were taken for analysis. Approximately 600 ml of hydrocarbon-free water and 5pl of the internal standard solution were added to the sediment samples. The internal standard consisted of a solution of aromatic hydrocarbons, each present at a known concentration (approximately 2yg per 5yl). The inlet of each headspace sampling flask was connected to a supply of scrubbed nitrogen gas. The exit of each flask was connected to a 6.5 x 0.6 cm (1/4 in.) o.d. stainless steel column packed with TENAX-GC (Applied Science Laboratories, State College, PA) which had been heated previously to 375°C for 60 minutes to remove any contaminants. Experiments were carried out on six samples simultaneously, two of the six being a system blank that consisted of hydrocarbon-free water plus the internal standard. The headspace sampling procedure was as follows: the cool (M5°C) air flow around the TENAX-GC column was begun, the magnetic stirrer was started, and a flow of approximately 150 ml/tnin of pre-purified nitrogen was established through each sampling vessel. The headspace was sampled at room temperature for two hours. Then the flasks were heated to 70°C and the headspace sampling was continued for an additional two hours. At the end of the four-hour sampling period, each cooled TENAX-GC column was connected for two hours directly to the nitrogen line to remove trapped water from the column (flow rate of nitrogen = 150 ml/min). Each TENAX-GC column was then capped tightly and stored at 4°C until analyzed by GC/GC-MS. Water remaining in the flask, following headspace sampling, was decanted into a clean one-liter beaker which was then covered with cleaned aluminum foil. The water was then pumped through a 6.5 x 0.6 cm (1/4 in.) o.d. stainless steel liquid chromatographic pre- column at the rate of 10 ml/min by use of a Milton-Roy Minipump. This column was packed with a 37-50 ym pellicular (superficially porous) support with a bonded C 18 --stationary phase (Bondapak C18--Waters Associates, Milford, MA), and was fitted on both ends with Swagelok stainless steel reducing unions with 2-ym snubbers. 82 After the headspace sampled water was pumped through the LC pre-column, the pre-column was attached to a liquid chromatograph capable of gradient elution. 
The two reservoirs of the gradient pumping system contained degassed, redistilled, reagent grade methanol and hydrocarbon-free water, respectively. Ten ml of water were pumped through the pre-column to remove entrapped air. Then a 30 cm x 0.6 cm (1/4 in.) o.d. stainless steel analytical column packed with a 10-ym micro-particulate (totally porous) support with a bonded C 18 - stationary phase (yBondapak C18--Waters Associates, Milford, MA) was connected to the outlet end of the pre-column. Elution of adsorbed compounds from the coupled columns was begun with a 30:70 (v/v) methanol -water mobile phase pumped at 3 ml/min. The gradient was programmed to increase the percentage of methanol in the mobile phase to 100 percent in 40 minutes. The effluent from the analytical column was passed through a UV photometer (245 nm) and the chromatogram was recorded. Individual fractions were collected for subsequent analysis of the individual compounds (by UV, fluorescence and mass spectrometry). The TENAX-GC packed pre-column was installed between the injection port and a coiled, glass SE-30 coated, SCOT analytical column (100 m x 0.65 mm i.d. drawn and coated [19] in this laboratory). A removable aluminum heating block, powered by a high intensity cartridge heater and Variac, was placed around the TENAX-GC column. The first eight coils of the capillary column were isolated from the remaining coils for cryogenic cooling to provide thermal focusing of components released by heating the TENAX-GC column. A make-up carrier gas was introduced directly into the detector to reduce the effective detector dead volume. The first eight coils of the analytical column were sprayed with liquid nitrogen as the TENAX-GC pre-column containing trapped hydrocarbons was heated to 375°C for 8 minutes. The flow rate of carrier gas through the system was 20 ml/min during this operation. Then the flow rate was reduced to 6 ml/min, the cryogenic cooling was terminated, and the gas chromatographic oven temperature was raised to 80°C and maintained at 80°C for 4 minutes. For the chromatographic analysis, the oven temperature was programmed to rise at a rate of 4°C/min and hold at 275°C. A flame ionization detector was used. A gas chromatograph-mass spectrometer computer system (Model 5930, Hewlett-Packard, Palo Alto, CA) was used for GC-MS analysis of the samples. These analyses were performed in an analogous manner to the GC analysis. Mass spectra were accumulated every 4 seconds during the course of the chromatogram. Tissue samples (^25 g) were homogenized with an ultrasonic probe in 500 ml of hydrocarbon-free water to which 50 g of NaOH has been added. The homogenate is headspace sampled for 16 hours at 70°C. The TENAX trap is then dried for 4 hours and then attached as an injection loop of a yBondapak NH 2 LC column. The organic compounds on the TENAX are eluted onto the head of the LC column by passage of a pentane mobile phase. The polar compounds are totally retained by the LC column while the non-polar hydrocarbons are eluted. The pentane effluent is then slowly evaporated to 300yl total volume, and the concentrated solution is quantitatively transferred to a clean TENAX column for normal GC/GC-MS analysis. 3. Discussion By use of the methods described, we have found system blanks for the separate water, sediment and tissue methods that are consistently below 10-30 yg/kg. 
While the recovery of internal standards from water is reproducible to ~5-10 percent, large variances have been observed when analyzing some tissue and sediment samples. This is postulated to be caused by slow desorption from active sites in these types of sample matrices. Longer headspace purging has been observed to improve the recovery and has been adopted for the tissue methodology. The sediment and tissue analyses exhibit large (20-100 percent) standard deviations at µg/kg hydrocarbon levels; this is due to the difficulty in obtaining a truly homogeneous environmental sample of the complex matrix type.

4. Conclusions

The use of dynamic headspace sampling and coupled-column LC, either alone or in combination, offers a number of advantages over most of the current sample preparation techniques for µg/kg hydrocarbon analysis of marine sediment and water samples. These techniques require minimal sample handling, thereby reducing the risks of loss of sample components and of possible contamination. Volatile components of the sample are efficiently separated from the matrix in a closed system and collected in a concentrated form that is free from large amounts of solvents and ready for GC/GC-MS analysis. The headspace-sampling technique is well suited for naphthalene and substituted naphthalenes, compounds of special interest due to their suggested immediate toxicity to marine organisms [16,17]. Analysis at the sub-microgram per kilogram level is readily attainable by GC. High molecular weight non-volatile compounds, such as the benzpyrenes, may be readily analyzed by the coupled-column LC technique. This latter method can be very sensitive, since trace amounts of hydrocarbons can be concentrated from very large volumes of water. The analysis is non-destructive and separated compounds may be detected by UV photometry (254 nm) or collected and identified by other techniques. A significant advantage derives from the fact that water is the only solvent needed for the preparation of water and sediment samples. The use of hydrocarbon-free water reduces opportunities for contamination of the sample by fossil hydrocarbons. Tissue samples must be stripped of biogenic nonhydrocarbon constituents prior to GC/GC-MS analysis. However, low blanks (<20 µg/kg) can be obtained if special care is used in distillation of solvents and preparation of columns used in the LC clean-up.

References

[1] Brown, R. A., Searl, T. D., Elliott, J. J., Phillips, B. G., Brandon, D. E., and Monaghan, P. H., Proceedings of the 1973 Joint Conference on Prevention and Control of Oil Spills, pp. 505-519, American Petroleum Institute, Washington, DC (1973).
[2] Clark, R. C., Jr., and Finley, J. S., Proceedings of the 1973 Joint Conference on Prevention and Control of Oil Spills, pp. 161-172, American Petroleum Institute, Washington, DC (1973).
[3] McAuliffe, C. D., Chem. Tech. 1, 46 (1971).
[4] Warner, J. S., NBS Special Publication No. 409, U.S. Government Printing Office, Washington, DC, 1974, p. 195.
[5] Wasik, S. P., and Brown, R. L., Proceedings of the 1973 Joint Conference on Prevention and Control of Oil Spills, pp. 223-230, American Petroleum Institute, Washington, DC (1973).
[6] Farrington, J. W., Teal, J. M., Quinn, J. G., Wade, T., and Burns, K., Bull. Environ. Contam. Toxicol. 10, 129 (1973).
[7] Melpolder, F., Warfield, C., and Headington, C., Anal. Chem. 25, 1453 (1953).
[8] Grob, K., J. Chromatog. 84, 255 (1973).
[9] Zlatkis, A., Bertsch, W., Lichtenstein, H., Tishbee, A., Shunbo, F., Liebich, H., Coscia, A., and Fleischer, N., Anal. Chem. 45, 763 (1973).
[10] Versino, B., deGroot, M., and Geiss, F., Chromatographia 7, 302 (1974).
[11] Zlatkis, A., Lichtenstein, H., and Tishbee, A., Chromatographia 6, 67 (1973).
[12] Bertsch, W., Chang, R. C., and Zlatkis, A., J. Chromatog. Sci. 12, 175 (1974).
[13] Novotny, M., Lee, M. L., and Bartle, K. D., Chromatographia 7, 333 (1974).
[14] Snyder, L. R., Modern Practice of Liquid Chromatography, J. J. Kirkland, ed., Wiley-Interscience, New York, 1971, pp. 232-235.
[15] Snyder, L. R., J. Chromatog. Sci. 8, 692 (1970).
[16] Wilber, C. G., The Biological Aspects of Water Pollution, Charles C. Thomas, Springfield, IL, 1969.
[17] Blumer, M., Environmental Affairs 1, 54 (1971).
[18] Chesler, S. N., Gump, B. H., Hertz, H. S., May, W. E., Dyszel, S. M., and Enagonio, D. P., NBS Tech. Note No. 889.
[19] German, A. L., and Horning, E. C., J. Chromatog. Sci. 11, 76 (1973).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

POTENTIAL CARCINOGENS IN WATER: GC/MS ANALYSIS

Ronald A. Hites
Department of Chemical Engineering
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139, USA

1. Introduction

The presence of various synthetic organic compounds in the environment is a well-known and often repeated story. Chloroform in tap water, Kepone in river water, and Red Dye No. 2 in food are compounds which have caused recent concern. In general, this increased attention to organic compounds in the environment has resulted from the realization that 50 to 80 percent of all occurrences of cancer in man are due to environmental causes such as chemical carcinogens in our air, water, and food. Thus, it is obviously important to identify anthropogenic chemical carcinogens, isolate their sources, and determine their environmental fates. This paper will be restricted to the occurrence of organic compounds in water, and it will detail two approaches: a) study the sources and fates of those compounds known to be carcinogenic, or b) survey suspicious sources (such as drinking water or industrial wastewater) to determine the identities of specific compounds being emitted and then study their carcinogenic potential and environmental fate. This paper will present examples of both approaches. The first example centers on a compound class which includes several known carcinogens, namely the polycyclic aromatic hydrocarbons (PAH). The second example will demonstrate the survey approach as applied to industrial wastewater sources.

2. Discussion

A. Polycyclic aromatic hydrocarbons

We have reviewed our work on this compound class at the recent Symposium on Sources, Effects and Sinks of Hydrocarbons in the Aquatic Environment (held at American University, Washington, DC, August 9-11, 1976) [1-8].

B. Industrial wastewaters

We are currently analyzing the wastewaters, receiving waters, and sediments of several cooperating companies to determine the fates of industrial organic compounds. One of the plants we are studying is a general-purpose chemical manufacturing plant located about 1.5 km from the mouth of a small river which empties into a bay. The flow rate of the wastewater out of the plant is about 10⁶ gal/day, and the flow rate of the river is about 10⁷ gal/day.
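Those two flow figures already set the scale of dilution available once the effluent reaches the river; the sketch below simply carries out that arithmetic, assuming complete mixing and steady flows (assumptions made here for illustration, not claims from the paper).

    # Dilution of the plant effluent by the receiving river, assuming
    # complete mixing and the steady flows quoted in the text.
    effluent_gal_per_day = 1e6
    river_gal_per_day = 1e7

    dilution_factor = (effluent_gal_per_day + river_gal_per_day) / effluent_gal_per_day
    effluent_fraction = effluent_gal_per_day / (effluent_gal_per_day + river_gal_per_day)
    print(f"dilution factor: about {dilution_factor:.0f}x")              # ~11x
    print(f"effluent share of combined flow: {effluent_fraction:.0%}")   # ~9%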
Prior to release into the river, the wastewater is neutralized in a one million gallon equalization tank, passed through a trickling filter for biological degradation, and freed of most solids in a clarifier. Composite and grab water samples were extracted with dichloromethane, and sediment samples were extracted with methanol and then 3:1 benzene-methanol. The sediment extracts were separated into hexane, benzene, and methanol fractions using column chromatography on alumina and silica gel. The samples were analyzed on a dual source (EI/CI) Hewlett-Packard 5982A GC/MS system interfaced with an HP 5933A data system. The mass spectrometer was coupled to the gas chromatograph via a glass-lined jet separator held at 300°C and was operated in the continuous scanning mode under control of the data system.

A few representative compounds found in the plant wastewater, receiving waters, and sediments are presented in table 1. Although the concentration of these compounds in the river water was generally only a few parts per billion, analysis of the river sediment revealed very high concentrations of some of the components; up to 0.5 percent for 2(2'-hydroxy-5'-methylphenyl)-2H-benzotriazole. Some of the compounds were even found in the bay sediment 2.2 km from the source; this may result from transport of the contaminated river sediment. It should be emphasized that none of the compounds given in table 1 are known to be carcinogenic. From the results in table 1, we have concluded that normally water-insoluble industrial organic compounds can be accommodated in wastewater by interaction with high concentrations of dissolved and suspended organic material (e.g. solvents, biological sludge). Dilution of this effluent with river water seems to cause many of the organic compounds to precipitate and become incorporated into the underlying sediment. Low molecular weight compounds are apparently rapidly lost through volatilization. These findings lead to the concern that high concentrations of anthropogenic organic compounds in sediments may continually repollute the water even after effluent controls are implemented.

Table 1. Representative compounds and their approximate concentrations in the wastewater and receiving waters and sediments

                                                          Approximate concentration (ppm)
Compound                                                  Wastewater   River Water   River Sediment   Estuary Sediment
                                                                       (75 m)        (75 m)           (2.2 km)
Toluene                                                   20           --            --               --
Chlorophenol                                              0.1          --            --               --
Di-t-butylphenol                                          4            0.02          2600             3
Methyl 3-(3',5'-di-t-butyl-4'-hydroxyphenyl)propionate    9            0.03          1700             --
2-Chloro-4,6-bis(isopropylamino)-s-triazine               7            0.02          10               --
2(2'-Hydroxy-5'-methylphenyl)-2H-benzotriazole            10           0.05          5000             8
2(2'-Hydroxy-3',5'-di-t-amylphenyl)-2H-benzotriazole      4.6          0.03          1900             20

References

[1] Hites, R. A., and Biemann, W. G., Identification of specific organic compounds in a highly anoxic sediment by GC/MS and HRMS, Advan. in Chem. Ser. 147, 188 (1975).
[2] Youngblood, W. W., and Blumer, M., Polycyclic aromatic hydrocarbons in the environment: Homologous series in soils and recent marine sediments, Geochim. Cosmochim. Acta 39, 1303 (1975).
[3] Blumer, M., and Youngblood, W. W., Polycyclic aromatic hydrocarbons in soils and recent sediments, Science 188, 53 (1975).
[4] Hase, A., and Hites, R. A., On the origin of polycyclic aromatic hydrocarbons in recent sediments: Biosynthesis by anaerobic bacteria, Geochim. Cosmochim. Acta 40, 1141 (1976).
[5] Hase, A., and Hites, R. A., On the origin of polycyclic aromatic hydrocarbons in the aqueous environment, in Identification and Analysis of Organic Pollutants in Water (Ann Arbor Science Publ., Ann Arbor, MI, 1976).
[6] Lee, M. L., Prado, G. P., Howard, J. B., and Hites, R. A., Source identification of urban airborne polycyclic aromatic hydrocarbons by GC/MS and HRMS, Biomed. Mass Spec., in press.
[7] Hase, A., Lin, P. H., and Hites, R. A., Analysis of complex polycyclic aromatic hydrocarbon mixtures by computerized GC/MS, in Carcinogenesis, Vol. 1 (Raven Press, New York, 1976).
[8] Lee, M. L., and Hites, R. A., Characterization of sulfur containing polycyclic aromatic compounds in carbon blacks, Anal. Chem. (in press, October 1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A NEW SIMPLE METHOD FOR THE RECOVERY OF TRACE ORGANICS FROM WATER

T. D. Kaczmarek
Westinghouse Research Laboratories
Pittsburgh, Pennsylvania 15235, USA

1. Introduction

Currently there are four methods of general application for the analysis of organic pollutants in water. These are direct sampling, solvent extraction, adsorption, and gas sparging. This paper describes a new method which involves dissolving diethyl ether in the water sample and then "salting out" the ether with a salt such as sodium sulfate. The recovered ether contains representative quantities of the organic pollutants originally present in the water. So far in our investigations, contaminants with boiling points above 115°C have been recovered; those with lower boiling points, although extracted, were not measurable because of the ether interferences.

2. Experimental

At room temperature, ether is soluble in water to the extent of about 7.5 percent. However, through salting out, it is possible to recover at least 0.5 cm³ of ether from 1.25 cm³ added to 100 cm³ of aqueous sample, and at least 0.5 cm³ from 5 cm³ added to 1000 cm³ of aqueous sample. It appears at present that for a given water sample size, the efficiency of the extraction depends upon the amount of ether added rather than upon the ether recovered, if there is no significant loss of ether by evaporation during the extraction. Ether (1.25 cm³ for a 100 cm³ sample or 5.0 cm³ for a 1000 cm³ sample) is added to the aqueous sample contained in a volumetric flask filled to the mark. Forty grams of anhydrous sodium sulfate per each 100 cm³ of sample is added to a clean, dry companion volumetric flask. The ether/aqueous sample is added to the flask containing the sodium sulfate to well up the flask neck. The flask is then stoppered with a Teflon or polyethylene stopper and immediately agitated to prevent caking of the salt and to hasten its dissolution. The operation is performed in a way that will minimize ether losses. A water bath at a temperature between 25°C and 30°C may be used with caution to facilitate salt dissolution. After the sodium sulfate has all dissolved and the solution has cooled to about 20°C, additional ether-containing sample solution can be added to make up the decrease in volume caused by dissolution of the solid sodium sulfate. Ether will immediately begin to collect at the surface in the flask. Within one hour, ether separation will amount to approximately 0.5 cm³. Swirling of the flask will dislodge ether globules adhering to the body of the flask.
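A quick way to see why such a small recovered ether volume is still useful is to look at the concentration factor it implies; the sketch below does that arithmetic under the idealized assumption that the pollutants partition completely into the ether (the paper reports linear recovery, not absolute partition efficiencies, so this is an upper bound).

    # Idealized concentration factor of the salting-out step:
    # organics from the whole aqueous sample end up in ~0.5 cm3 of ether.
    recovered_ether_cm3 = 0.5

    for sample_cm3 in (100, 1000):
        factor = sample_cm3 / recovered_ether_cm3
        print(f"{sample_cm3} cm3 sample -> up to {factor:.0f}-fold concentration in the ether")
    # 100 cm3  -> up to 200-fold
    # 1000 cm3 -> up to 2000-fold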
[Within several hours Glauber's salt (Na₂SO₄·10H₂O) will crystallize within the flask. There has been no study as to any effect, such as possible enhancement, this crystallization would have on pollutant recovery.]

The salted-out ether containing the organics of interest can then be analyzed directly without any solvent concentration, filtration or drying. There can be an annoying "scum" formation in the ether layer. The amount of scum depends upon sample size and, perhaps, the amount of sodium sulfate used. This does not seriously interfere with removing aliquots of ether by hypodermic syringe. However, with elimination of the scum, less ether would be necessary initially to recover 0.5 cm³ finally, since ether is entrapped by the scum. This would improve organic pollutant detectability and characterization. So far there has been no major effort to identify the scum or its source. The situation is not improved by solvent washing the sodium sulfate before use. For economic reasons, reuse of sodium sulfate (recovered as solid Glauber's salt) is desirable. This recycling may also solve the scum problem.

3. Results

In our laboratory the recovered ether has been analyzed by direct injection of up to 200 µl into the Perkin-Elmer Model 270 Gas Chromatograph-Mass Spectrometer. With a Chromosorb 101 column, the ether matrix is eluted to atmosphere at a column temperature of 100°C; with a Dexsil 300 column, at 75°C. Within the scope of our experience this technique will not cause the loss of organic materials boiling at about 115°C or above. After removal of the ether by venting as above, only a few minor matrix effects have been seen so far. These were not serious enough to warrant consideration of ether "purification" beyond the quality commercially available. The effectiveness of the method varies with different contaminants.¹ It ranges down to at least 50 parts per billion (ppb) for individual organics in 100 cm³ of aqueous sample and as low as 10 ppb with 1000 cm³ of sample. Standards were examined as mixtures of 3 to 6 components in water. As shown in figures 1 and 2, recovery was linear out to at least 1 part per million (ppm) with a 100 cm³ sample and out to at least 50 ppb with a 1000 cm³ sample. Typical quality of mass spectra for compound recovery at various levels is shown in figure 3,² which also contains a reference spectrum of pure material for comparison. As previously stated, this method in its present form was not found applicable to organic contaminants with boiling points below about 115°C. This limitation is due to the fact that the contaminants are recovered in a diethyl ether matrix.

¹Part, and perhaps all, of this would represent varying sensitivities of the analytical instrumentation to different compounds.
²In figure 3, peaks at m/e 45, 59 and 74 are due to diethyl ether; the peak at m/e 73 is due to Dexsil 300 column bleed.

[FIGURE 1. RECOVERY OF ORGANICS FROM WATER USING 100 ml SAMPLE; amount recovered versus occurrence level (ppm).]

90 percent efficiency. In order to optimize conditions for efficient removal of BaP from tap water, a number of parameters were varied. These studies are summarized below. The retention of benzo(a)pyrene from tap water varied with the diameter of the column used for holding the foam. The efficiency increased as the diameter of the column was decreased, the value changing from 53 percent for a 50 mm column to 73 percent for a 20 mm column.
Although squeezing the foam in a smaller column gave higher efficiency of BaP recovery, it produced difficulty in attaining reasonable flow rates. An approximate ratio of 2:1 between the plug and column diameter was considered the best compromise between retention efficiency and flow rates. The recovery was not significantly affected as the flow rates were successively increased from 130 cm³/min to 520 cm³/min. In subsequent studies, however, a flow rate of 250 ± 10 cm³/min was used because flow rates exceeding this value resulted in the sliding down of the foam plug in the column. Attempts to hold the plug on a support in the column caused lowering of the retention efficiency. The sorption of BaP on foam was also found to be pH dependent; the retention at pH 6.7 (tap water) was between 62 and 65 percent, but upon increasing the water pH to 10.0, the retention efficiency increased to 76 percent. Lowering of the water pH below 6.7 resulted in a decrease in benzo(a)pyrene retention. The removal of free chlorine, or ions, from tap water did not significantly improve the retention efficiency.

The most dramatic effect on the recovery of benzo(a)pyrene from spiked tap water was observed when the temperature of the water was varied (fig. 1). The relationship between BaP recovery from tap water and temperature was found to be biphasic. The percent retention steadily increased with increase in temperature up to 40°C, but decreased with further increase in temperature. When the temperature was increased beyond 50°C, the increase in BaP retention was resumed until a plateau was reached starting at 60°C. The recovery of BaP at a temperature > 60°C was approximately 87 percent.

[Figure 1. Effect of water temperature on recovery of benzo(a)pyrene on foam plugs (percent BaP recovery versus temperature of water passed through foam, °C). The water was brought to the desired temperature by passing it through a glass coil (10 ft x 6 mm) which was immersed in a Haake Model FE thermostated circulator. The coil was placed inside the reservoir housing through the opening of the cover plate. o = tap water (unfiltered), • = filtered tap water, Δ = distilled water.]

The effect of temperature on BaP recovery from tap water is complex and probably the consequence of many interacting factors. The initial increase appears to be linked to the presence of suspended particles in water. This is due to the fact that such an increase was not seen with distilled water, and that the increase was less pronounced when tap water was millipore-filtered prior to spiking. The increased retention of BaP beyond 50°C is observed with tap water and distilled water as well as filtered tap water, and, therefore, appears to be linked to the foam itself. Such an increase could be attributed to a possible change in the conformation of the polymer at higher temperature. Heat treatment of the foam plug prior to passing spiked water (at room temperature) gave low retention of BaP. This suggests that the temperature-induced changes in the sorption characteristics of the foam were reversible.

In view of the significant increase in the recovery of BaP with an increase in temperature, it was decided to heat the water routinely to enhance BaP retention on the foam. Further studies were carried out at a water temperature of 60 to 65°C. In an effort to further improve the efficiency of BaP retention by foam plugs, the effect of coating the foams with chromatography phases was investigated.
The chromatographic phases tested included a nematic liquid crystal, SE-30 and DC-200. With the foams coated to the extent of 5 to 10 percent by weight, an increase of 4 to 9 percent in the recovery of BaP was observed at a water temperature of 60°C. In view of this only small increase in the retention of BaP in the presence of chromatographic phases, and because the coating may be eluted from the foam along with BaP and interfere with the analysis, the coating of the foams was not considered further.

A study of the recovery of BaP by foam plugs at various concentrations revealed that the recovery was not significantly affected by changes in BaP concentration. In the concentration range tested (0.002 - 25 ppb), the retention of BaP was found to be between 83 and 88 percent. The data show the applicability of foam plugs for concentration of BaP over a wide concentration range. By passing various volumes of spiked water over foam plugs, it was determined that BaP could be recovered from up to 5 liters of tap water with one foam plug with an efficiency > 85 percent. With increase in the volume of spiked water passed over one foam plug, the efficiency of BaP retention steadily decreased. The data suggested that for larger volumes of water, the number of foam plugs must be increased. It was found that for recovery of BaP from 20 liters of water with efficiency > 85 percent, a minimum of 4 plugs, 2 each in 2 different columns, was necessary.

Prior to considering foam plugs for field monitoring, it is important to assess the stability of BaP on foam plugs. This was studied by passing known volumes of BaP-spiked tap water through the foam plugs and storing the plugs with sorbed BaP under various conditions. The plugs were analyzed for BaP after various periods of storage. The foam plugs were stored in a Chromaflex column covered with aluminum foil to prevent photodegradation. The effect of storage at room temperature and in the refrigerator was compared. No significant decomposition or loss of BaP from foam plugs stored in the refrigerator was noted up to 7 days. However, a small loss of BaP was observed when plugs were stored at room temperature. On the basis of these results it is suggested that, following water sampling, foam plugs should be cooled to 4°C during transportation to the laboratory for analysis.

The applicability of polyurethane foam plugs for concentration of BaP from untreated surface water was also investigated. Four liters of water were collected from Onondaga Lake, Syracuse, N.Y. (water pH 7.5, total residue on evaporation 940 mg/l), spiked with BaP and passed over a foam plug at 60°C. The recovery of BaP from lake water was found to be 69 percent. When the number of foams in the column was increased to two, the retention on the foam increased to 81 percent with 4 liters of water. The data suggest that for concentrating BaP from untreated surface waters, it may be necessary to use a larger number of foam plugs in the column to maintain higher retention efficiency.

4. Conclusion

It is concluded that polyurethane foam plugs can be successfully used for concentration of trace quantities of benzo(a)pyrene and other PAH from water. The recovery is more or less independent of water flow rate but depends to a great extent on the temperature of the water. For efficient removal, the water temperature should be > 60°C. Under these conditions a single foam plug placed in a 25 mm column can effectively retain PAH from up to 5 liters of tap water.
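A rough way to see why adding a second plug helps, using the lake-water figures quoted above, is to treat each plug as retaining the same fraction of whatever BaP reaches it. The sketch below is only that idealized series model; the measured 81 percent for two plugs falls short of its prediction, presumably because the second plug sees the harder-to-retain fraction.

    # Idealized plugs-in-series estimate: each plug retains the same
    # fraction of the BaP that reaches it, independently of the others.
    single_plug_retention = 0.69   # measured for one plug, Onondaga Lake water

    for n_plugs in (1, 2, 3):
        overall = 1 - (1 - single_plug_retention) ** n_plugs
        print(f"{n_plugs} plug(s): predicted retention {overall:.0%}")
    # 1 plug : 69%
    # 2 plugs: ~90% predicted vs. 81% measured
    # 3 plugs: ~97% predicted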
For larger volumes, the number of plugs in the column must be proportionately increased.

Supported by Environmental Protection Agency Grant R80397702, and a grant-in-aid of research from Niagara Mohawk Power Corporation, Syracuse, New York.

Part III. MULTIELEMENT ANALYSIS

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THOUSANDS OF METAL ANALYSES PER MAN DAY - A REALITY IN THE U.S. EPA's CENTRAL REGIONAL LABORATORY: MULTIELEMENT (23) ANALYSIS BY AN INDUCTIVELY COUPLED ARGON PLASMA ATOMIC EMISSION SYSTEM (ICAP-AES)

Richard J. Ronan¹ and Garry Kunselman
U.S. Environmental Protection Agency - Region V
1819 West Pershing Road
Chicago, Illinois 60609, USA

1. Introduction

The U.S. EPA's first Inductively Coupled Argon Plasma (ICAP) multielement direct reader system has been in full operation for more than one and one-half years at the Central Regional Laboratory of Region V in Chicago. This ICAP system, described here, can analyze one sample every thirty seconds using a cycle time which includes sample rinse, a 10 second integration and TTY printout of 23 elements. (This time period assumes concentrations typically found in surface waters and waste effluents.) At this rate one might expect to produce in an 8 hour day:

(8 h/day) (60 min/h) (2 samples/min) (23 elements/sample) = 22,080 elemental results.

Not included in this estimate are other considerations which limit actual output per man day of effort. The ICAP was investigated as a replacement for the U.S. EPA accepted method for the analysis of metals, flame atomic absorption, and as a viable means for the analysis of large numbers of environmental water samples. In accomplishing this, three principal areas were focused upon. These were:

1. Analysis of Sample
2. Preparation of Sample
3. Large Volume Data Handling

The product of the evaluation of these areas has been the generation of over 5000 high quality quantitative results reported for a typical man day of effort. More important than simply a large number of results reported is the high degree of quality control made possible by the introduction of automatic data handling.

2. Analysis of Samples

The ICAP method was compared in detail to the U.S. EPA accepted method for the analysis of metals, flame atomic absorption. Summaries of this work have been presented elsewhere [1]². A brief review of the fixed elements' basic performance is given in table 1. The analysis of samples is accomplished using the Jarrell-Ash Plasma Atom Comp 750.

¹Author to whom correspondence should be addressed; current address: Jarrell-Ash Division, Fisher Scientific Company, 590 Lincoln Street, Waltham, Massachusetts 02154.
²Figures in brackets indicate the literature references at the end of this paper.
Table 1. Basic performance characteristics

Element   λ (nm)          D.L.ᵃ (µg/l)   LQDᵇ (µg/l)   % RSDᶜ   100 µg/lᵈ
Ag        328.1           4              20            1.8      96 ± 1
Al        396.2           7              35            0.8      95 ± 1
B         249.7           3              15            0.8      100 ± 1
Ba        455.4           <1             <5            0.9      106 ± 3
Beᵉ       313.0           <1             <5            0.5      99 ± 1
Ca        393.4, 364.4    <0.5           1             0.5      97 ± 1, 989 ± 2
Cd        226.5           2              10            0.9      105 ± 4
Co        238.9           4              20            1        97 ± 3
Cr        267.7           1              5             0.8      96.2 ± 0.4
Cu        324.8           1              5             1.2      95 ± 7
Fe        259.5           2              10            1.0      94 ± 2
Mg        279.6           <0.5           1             1.1      106 ± 2
Mn        257.6           1              5             1.1      99 ± 3
Mo        203.8           5              25            1.0      102 ± 2
Naᵉ       589.0           5              25            0.5      100 ± 1
Ni        341.5           15             75            0.5      100 ± 1
Pb        220.3           12             60            0.4      104 ± 1
Sn        190.0           12             60            1.2      105 ± 4
Ti        334.7           1              5             1.2      100 ± 1
V         309.3           1              5             1.1      99 ± 5
Y         417.8           1              5             1.0      100 ± 2
Zn        213.9           1              5             0.8      96 ± 2

ᵃThis is the amount of material necessary to produce a signal that is two times the standard deviation of the background noise. These data are typical of 10 second integrations.
ᵇThe Lowest Quantitatively Determinable concentration (LQD) is five times the detection limit.
ᶜThis is the percent relative standard deviation for a 1 mg/l mixed standard of all elements run ten times with ten second integration periods. These data are typical.
ᵈThe system was standardized at 1 mg/l, and individual 100 mg/l standards were run and averaged over 8 hours to demonstrate linear range.
ᵉElement not included in the NPDES study and subsequent acceptance because it was added in the field one year after installation.

As part of this same study three approaches were used to evaluate the precision and accuracy of the ICAP system. In the first approach, four common environmental water matrices (distilled, lake, dirty river, and sewage effluent water) were spiked at two levels with all metals listed in table 1. The eight samples were repeatedly analyzed for all elements by both AA and ICAP methods for six weeks. The results showed no problems at the levels chosen. These levels were 0.250 mg/l to 50 mg/l depending on element, and were chosen to match the flame atomic absorption ideal concentration response. In the second approach, normal laboratory samples used for quality control were rerun and analyzed by the ICAP. Twenty-two different effluent samples and duplicates which were spiked at low levels of interest showed good agreement with flame atomic absorption. The third study involved a direct assessment of accuracy by the analysis of standard reference material supplied by U.S. EPA (5 samples), U.S.G.S. (2 samples), and NBS (1 sample). A typical comparison with the U.S.G.S. material, chosen here because it has the most elements certified, is given in table 2. All three studies showed the ICAP system equivalent to or better than the reference method, flame atomic absorption, in the analysis of surface and waste water. A detailed copy of this study is available upon written request to the Region V Laboratory.

Table 2. USGS reference materials 49 and 47

Element   USGSᵇ (µg/l)    ICAPᶜ (µg/l)
Ag        6.4 ± 0.9       5.7
Al        71 ± 35         66.4
B         92 ± 29ᵃ        113.0
Ba        --              --
Ca        69.5 ± 2.5ᵃ     65.7
Cd        4.7 ± 1.4       6.9
Co        5.1 ± 0.6       <4
Cr        16.5 ± 6        19.7
Cu        391 ± 24        379.0
Fe        87 ± 16         80.0
Mg        18.3 ± 0.9ᵃ     18.7
Mn        159 ± 14        160.0
Mo        56.6 ± 4.6      62.5
Ni        9.3 ± 6         <15.0
Pb        23 ± 11         15.5
Sn        --              --
Ti        --              --
V         --              --
Zn        347 ± 28        346

ᵃReference Material 47; concentration in mg/l.
ᵇThe USGS value represents the average and standard deviation of all laboratories who participated in the Analytical Evaluation Program for Standard Reference Water Samples 49 and 47 through May, 1975.
ᶜAll reference material run as a single blind experiment.
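Two of the numbers quoted above are simple to reproduce: the 22,080 results/day ceiling and the LQD column of table 1, which by the table footnotes is just five times the detection limit. The sketch below only re-derives figures already stated in the text and table; it introduces no new data.

    # Theoretical daily output at one sample every 30 seconds, 23 elements each.
    results_per_day = 8 * 60 * 2 * 23
    print(results_per_day)  # 22080

    # LQD = 5 x detection limit (table 1 footnotes), spot-checked for a few channels.
    detection_limits_ug_per_l = {"Ag": 4, "Ni": 15, "Pb": 12, "Zn": 1}
    for element, dl in detection_limits_ug_per_l.items():
        print(f"{element}: D.L. {dl} ug/l -> LQD {5 * dl} ug/l")
    # Ag: 20, Ni: 75, Pb: 60, Zn: 5 -- matching the tabulated LQD values.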
One problem which was noted shortly after the instrument was set up was the existence of stray light interferences. During the course of the development of the method described above, these interferences were corrected by a variation of the Dynamic Background Correction scheme supplied by the vendor. The problem has now been further reduced by the replacement of all standard Hamamatsu (R300) photomultiplier tubes used at wavelengths below 240 nm with solar blind tubes (R427). The magnitude of the stray light problem is easily expressed in terms of the equivalent concentration, in mg/l analyte, detected at each analytical channel when 1000 mg/l Ca is sprayed into the plasma. The magnitude of the effect that is now observed while running the Ca test solution is less than 0.05 µg/l for almost all elements listed in table 1. On the solar blind R427 channels, where the stray light problem had been shown to be severe, the maximum Ca effect is less than the detection limit in all cases. For those elements which exhibit a stray light effect and do not profit by the addition of a solar blind tube, significant improvements have been accomplished elsewhere [2]. In Wohlers' work the spectrum is wavelength shifted during a determination and any wavelength-independent shift in background (such as stray light) is compensated for by subtraction.

Six months after the original studies were completed the ICAP system was expanded by adding Be, Na, and a monochromator to allow individual selection of different elements. The Be and Na data are presented in table 1. Limited application to other elements using the monochromator has produced detection limits and precision similar to the literature [3]. In the near future the elements Li, K, Hg, As, Se and W are to be added as permanent channels.

3. Sample Preparation

With this analytical evaluation completed it was then possible to consider the method of sample preparation as it impacted on sample throughput. The U.S. EPA defines several types of metals in water according to differing sample preparation operations. This discussion will only consider total metals as defined in Section 4.1.3 of the Methods for Chemical Analysis of Water and Wastes (1974). Under this definition of total metals it is necessary to bring the acidified sample to dryness. This limitation defines the best method of high volume sample preparation as being the block digestion approach. Two hundred samples can be prepared by running batches of 40 samples with a metal block drilled to accept forty 200-ml test tubes and thermally controlled to 80°C. With this method one person can prepare 200 samples per work day.

4. Large Volume Data Handling

The Region V Laboratory has available the following computer facilities in addition to the dedicated PDP-8 of the ICAP: a Data General Nova 840 with 64 K core, moving head disk, fixed head disk, and magnetic tape storage. A file system was developed that allowed the following sorts of information to be coded into the computer system:

1. Date and time of each sample run.
2. Unique name for each sample.
3. Unique association of each element with each sample.
4. Unique definition of the sample source.
5. Class code for each sample (standard, spike, blank, reference sample, in-house check sample, field blank, field duplicate, house blank, house duplicate and the like).
6. Special reference points for profiling exit slits.

With the above system we were able to run complex sets of samples, standards, blanks, and quality control solutions and produce finished study reports of any type for any group of samples in the files.
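The file system just described is essentially a per-measurement record keyed by sample name, element and class code. The sketch below illustrates one way such a record could be laid out; the field names and values are hypothetical (the paper does not give the actual file format), and the example is included only to make the sorting and compilation discussed next concrete.

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical record mirroring the kinds of information the
    # laboratory's file system coded for each measurement.
    @dataclass
    class ElementResult:
        run_time: datetime        # 1. date and time of the sample run
        sample_name: str          # 2. unique name for the sample
        element: str              # 3. element associated with this result
        source: str               # 4. definition of the sample source
        class_code: str           # 5. e.g. "standard", "spike", "field blank"
        concentration_ug_per_l: float

    results = [
        ElementResult(datetime(1976, 3, 1, 9, 30), "SAMPLE-0001", "Zn", "effluent", "field duplicate", 12.3),
        ElementResult(datetime(1976, 3, 1, 9, 31), "SAMPLE-0002", "Pb", "effluent", "house blank", 0.4),
    ]
    # Compiling any desired subset is then a simple filter, for example:
    blanks = [r for r in results if "blank" in r.class_code]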
Because every element in each sample is dimensioned, it is possible to sort and compile a very large volume of data covering any desired time period. The effects of these improvements are:

1. Saving of many man-years of effort.
2. Compilation of a statistically significant data base from which real assessments of quality can be judged.
3. Improved reliability in daily use.

All of the above have been achieved with the Nova 840 system.

References

[1] Ronan, R. J., Paper 361, 27th Pittsburgh Conference, 1976.
[2] Wohlers, C. C., to be presented at FACSS, 3rd National Meeting, November, 1976.
[3] Fassel, V. A., and Kniseley, R. N., Anal. Chem. 46, 1110A (1974).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

MULTIELEMENT ANALYSIS OF RIVER WATER

R. Schelenz
Federal Research Center for Nutrition
Engesserstraße 20, D-7500 Karlsruhe
Federal Republic of Germany

1. Introduction

Standards for maximum permissible concentrations of some elements in drinking water were set by the Federal Republic of Germany at the end of 1974. For quality control of surface and drinking water, the necessity for specific analytical methods which permit multielement determination, even at very low concentrations, with good precision and accuracy is evident. Instrumental neutron activation analysis (INAA) combined with high resolution gamma-ray spectrometry with Ge(Li) semiconductors and computer evaluation of the recorded spectral data [1]¹ is a promising technique for the simultaneous multielement determination in water.

¹Figures in brackets indicate the literature references at the end of this paper.

2. Experimental

A. Sampling

Water collection is performed with cleaned 1-liter polyethylene bottles with screw caps. Prior to sampling these containers are rinsed several times with the water to be analyzed. This procedure should prevent possible contamination by trace elements from the inner surface of the sample container. After rinsing, the bottles are filled with the sampling water and allowed to stand for 10 minutes. This should establish a sorption equilibrium between the trace elements of the collected water and the inner walls of the polyethylene sample container. After pouring out the water the bottles are rinsed once more and then the water to be analyzed is collected, frozen and stored at -18°C until the analysis can be carried out.

B. Freeze-drying

Previous studies have indicated that a 1 day irradiation of 200 mg of liquid water in sealed quartz ampoules gave unsatisfactory results with respect to elemental detection limits. A direct neutron irradiation of greater amounts of liquid water in sealed quartz ampoules as proposed by Salbu et al. [1] appeared too dangerous because of the high rate of gaseous radiolysis products induced in the water due to the high thermal neutron flux of the nuclear reactor FR 2 of the Nuclear Research Center of Karlsruhe. For technical reasons the irradiation of frozen samples as proposed by Brune [2] and Baudin [3] was not possible.

In order to reduce statistical counting errors during gamma spectrometric measurement of the radionuclides produced in the sample and to improve the detection limits of the elements of interest, freeze-drying has been utilized as a preconcentration step. This technique has already been used by several workers for drying natural water [4-8].
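For orientation, the gain from this preconcentration step can be estimated from the figures reported in the next paragraph (250 ml of water freeze-dried to roughly 120 mg of residue); the sketch below simply forms that ratio, taking the density of water as 1 g/ml.

    # Enrichment factor of the freeze-drying (spinfreezing) step:
    # trace elements from 250 ml of water end up in ~120 mg of dry matter.
    water_mass_g = 250.0      # 250 ml, density taken as 1 g/ml
    dry_matter_g = 0.120

    enrichment = water_mass_g / dry_matter_g
    print(f"enrichment factor: about {enrichment:.0f}")   # ~2080, i.e. "nearly 2000"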
In this study the so-called "spinfreezing" method according to Schmitz et al. [9] and Schneider [10] was applied. After thawing and shaking the closed polyethylene bottles, 250 ml of the unfiltered water are transferred to a precooled and siliconized glass flask. The siliconization was found to prevent loss of trace elements due to sorption effects on the glass walls. To shorten the contact time of the liquid water sample with the glass walls of the flask, the water is "spinfrozen" at -78°C. The principle of this technique is similar to that of a rotating evaporator. Care has to be taken to avoid losses of the dried material by electrostatic charging. This problem can be overcome by the installation of suitable filters and by controlling the velocity of the sublimation process. A sieve of stainless steel with pores of 30 µm diameter gave satisfactory results [10]. The water samples are dried for about 24 hours to give approximately 120 mg dry matter, corresponding to an enrichment factor of nearly 2000.

C. Irradiation and sample containers

Both short- and long-term neutron irradiations are used for the determination of 21 elements in the preconcentrated water samples. For long-time irradiation (flux: 1.5 × 10¹³ n cm⁻² s⁻¹, irradiation time: 1 d, cooling time: 7 d) to determine the elements Au, Ba, Br, Ca, Co, Cd, Cr, Cs, Fe, Hg, La, Mo, Rb, Sb, Sc, Se, Th and Zn, 80-100 mg of the "dried water" are weighed into specially manufactured high purity ampoules of synthetic quartz. Possible elemental contamination of the surface of the quartz ampoules was removed by etching (5 s) with aqueous hydrofluoric acid solution (1:1) at room temperature. Prior to weighing-in of the samples, the ampoules are additionally cleaned by boiling them for 30 min with concentrated nitric acid. The samples are cooled to -35°C, heat sealed and irradiated together with biological multielement standards such as NBS bovine liver (SRM 1577), NBS orchard leaves (SRM 1571) and Bowen's kale powder.

Using purely instrumental techniques the sample is measured within the irradiation container. Therefore the trace element impurities of the container have to be known. To determine the amount of trace elements contributed by the irradiation container, it was irradiated under the conditions given above for long-time activation. The contents of 15 elements determined in the high-purity quartz are listed in table 1. Using these irradiation containers, no blank problem arises compared to the element concentrations found in the preconcentrated water samples. For example, the influence of the Hg blank of the quartz container (ca. 1.8 g) on a measured Hg value of 45 ng in the dry matter of 250 ml of liquid water is approximately 0.1 ng or 0.24 percent and thus within the error resulting from counting statistics.

Table 1. Element concentration of cleaned synthetic quartz (HF etching, 5 s; HNO₃ boiling, 30 min)

Element   Content (µg/kg)      Element   Content (µg/kg)
Ba        30.0                 Hg        0.06
Br        0.09                 La        0.13
Co        0.2                  Mo        5.9
Cr        0.7                  Na        75.0
Cs        <0.001               Sb        0.01
Eu        <0.01                Sc        0.01
Fe        48.0                 Se        0.2
                               Zn        1.9

For short-time irradiation (flux: 9 × 10¹³ n cm⁻² s⁻¹, irradiation time: 5-10 s, cooling time: 20 min) to determine Cl, K, Mn and Na, ca. 20 mg of dry matter are weighed into specially manufactured polyethylene vials (available from Prof. Spronk, Free University of Amsterdam, Netherlands). The elemental blank concentration of the plastic vials was determined under the conditions for the short-time irradiation.
For manganese 2 ppb were found, for sodium 30 ppb, and for chlorine 250 ppb. Potassium was not detected. The inherent influence of the blank of the polyethylene vial is, as determined for the quartz ampoules, negligible compared to the element concentrations found in the dry matter of the water samples. For example, the influence of the Mn blank of the polyethylene container (ca. 0.7 g) on a measured Mn value of 6.63 µg determined in the dry matter of 250 ml of liquid water is approximately 0.001 µg or 0.02 percent. After a cooling period, required for transportation of the irradiated samples to the laboratory in the case of short-time activation and for reduction of the prominent radioactivity of ²⁴Na in the case of long-time irradiation, the activated samples are measured directly within the irradiation containers after external cleaning with 6 N nitric acid to remove surface contamination.

D. Shielding

The shape and height of the background may play an important role in determining the detection limits of the elements to be determined, especially when measuring low activities. Therefore a circular shielding for the semiconductor was designed and built. The shielding materials (lead, iron, aluminum and plexiglass) are arranged with increasing atomic number around the detector [11]. The effect of this shielding on the composition of the gamma spectrometric background is shown in figure 1. The upper spectrum indicates a measurement without shielding and the other spectrum a measurement with the shielding. A considerable decrease in the pulse heights is obtained, mainly in the low energy region.

[Figure 1. Effect of the shielding on the measured gamma spectrometric background of a Ge(Li) semiconductor detector; time of measurement: constant. Upper spectrum: without shielding; lower spectrum: with shielding.]

E. Counting and data processing

The installed gamma spectrometry system consists of a 113.4 cm³ Ge(Li) semiconductor (energy resolution: 2.0 keV at 1332 keV for ⁶⁰Co; peak-to-Compton ratio: 43:1; relative efficiency compared to a NaI(Tl) crystal: 18 percent) coupled with a 4096 channel pulse height analyzer. With the aid of a computer (Hewlett-Packard, model 2100 A, 16 K memory) and a program in "instrument basic", photopeaks are located and peak areas determined by the correlation technique [12]. Element concentrations are calculated by comparing the areas of interference-free photopeaks of the sample and of an identically treated standard. The detection limits are found to be 10⁻¹¹ to 10⁻⁸ g, depending on the element. The total time required for analysis is 8 days; this includes irradiation, cooling, measurement and data evaluation.

F. Test of the method

The accuracy and precision of this instrumental method have been tested by repeated analyses of Bowen's kale powder, NBS orchard leaves and bovine liver [13]. Furthermore our laboratory participated in interlaboratory comparisons for multielement determination in various biological materials, organized by the Analytical Quality Control Service of the International Atomic Energy Agency (IAEA) [13-15].

3. Results and Discussion

As an example of our activities, the results of water analyses of the Neckar river, where much industry is located, are discussed below.
Between kilometers 136.5 and 175 (measured from the mouth, where the Neckar River flows into the Rhine River), 7 water samples were collected at different locations on the same day within 6 hours, beginning upstream (fig. 2). The element concentrations found for Cd, Cr, Hg, Se and Zn in the water samples of this part of the Neckar river are given in table 2. It should be noted that the element concentrations given refer to the weight of the liquid water samples. The actual concentrations found in the dry residue are nearly 500 times higher. This means that no detection problems arise in analyzing the trace element content of the "dry water". The average deviation from the mean values given for a single determination is 10-15 percent.

[Figure 2. Map of the Neckar-Enz-Rhine region; • = locations of water sample collection.]

Table 2. Content of some elements in the course of the Neckar river compared to the standard for drinking water of the Federal Republic of Germany

          Standard    km 136.5   Enz     km 137   km 142.5   km 150   km 172   km 175
Element   (µg/l)      (µg/l)     (µg/l)  (µg/l)   (µg/l)     (µg/l)   (µg/l)   (µg/l)
Cd        6           <0.1       <0.1    2.4      <0.1       <0.1     <0.1     <0.1
Cr        50          9          5       6        5          5        5        3
Hg        4           0.3        0.1     0.2      0.3        0.2      0.4      0.4
Se        8           0.2        0.2     0.2      0.1        0.2      0.1      0.1
Zn        2000        41.0       56.0    33.0     29.0       18.0     45.0     28.0

The element contents determined are far below the maximum permissible concentrations for these elements in drinking water as set by the Health Ministry (Bundesministerium für Jugend, Familie und Gesundheit) of the Federal Republic of Germany and below the corresponding recommendations of the World Health Organization (WHO). Furthermore these contents are also below the recommended values of the European Community for surface water suitable for preparing drinking water. The summary of the results for the 21 elements determined in the river water is given in table 3. Element concentrations are given in µg/l (equal to ppb by volume = parts per 10⁹). With the exception of the Cd content of the water at km 137, this element could not be detected in the other water samples with statistical significance. Compared to the values for the other elements at kilometer 137, shortly downstream of the mouth of the Enz River (fig. 2), the composition of the element spectrum determined in the river water has changed to a certain extent. At this location (km 136.5) significantly higher levels of Ba, Cr, Fe, Th and Zn have been found, whereas the concentrations of Ca, Cl, La, Mo and Na have decreased.

Table 3. Variation of element concentrations in the course of the Neckar river

          km to mouth: 136.5   136.8 (Enz)   137   142.5   150   172   175
Element   Average concentration (µg/l)
Au        0.3

2 to 10 µm range (coarse particles). Collection of these particles on thin membrane filters, which have a low trace element blank, provides samples which are well suited for rapid x-ray fluorescence analysis for elements with Z > 13 and are sufficiently sensitive for monitoring applications [2]. Quantitation is usually achieved by comparison of the x-ray emission intensities of the elements of interest with those from thin calibration standards. That is, the specimen is so thin that it becomes essentially transparent to the incident and fluorescent radiation. Thin specimens are also ideal for x-ray analysis because absorption effects are usually negligible for many elemental analyses. The mass thickness of samples considered to be sufficiently transparent is typically a few hundred micrograms per square centimeter.
For such samples the concentration of an element is considered to be directly proportional to the x-ray fluorescence intensity of the element. For elements of atomic number < 21, however, particle size effects (mainly absorption effects) must be considered and the appropriate corrections made. Corrections are also necessary in many cases for x-ray absorption by the substrate when particles penetrate the substrate surface appreciably.

2. Experimental

Samples of material suspended in water can be collected on thin membrane filters by filtration and subsequently analyzed. Analysis of dissolved trace metals, on the other hand, does require the use of preconcentration techniques. A number of enrichment procedures have been proposed; probably the most attractive of these is the use of chelating ion-exchange resin loaded filter papers. These include Chelex-100 (iminodiacetate) and SA-2 Amberlite IR-120, which contains sulfonic acid functional groups. Some very recent work [3] suggests that heavy metal ions can be retained (greater than 95%) from seawater samples if calcium and magnesium ions are first removed by ion-exchange chromatography. The capacity of these filter papers is also sufficiently high to permit collection of ions at concentrations up to 1000 ppm.

In the x-ray spectrometric analysis of particulate matter, variations in particle composition and size can lead to systematic errors in the results if these effects are not properly compensated. Particle size effects on the measurement result can be eliminated by employing destructive techniques such as fusion with borax or lithium tetraborate. Fusion techniques have been used for many years for sample preparation, especially in the analysis of geological samples. Under proper conditions the samples can be fused to form a homogeneous solid solution. There are basically three ways to prepare fused samples for x-ray analysis. They are (a) fusing and casting, where a separate casting operation is performed to produce a disk suitable for analysis; (b) fusion and grinding, where following solidification the fused sample is ground and pelleted; and (c) fusion and direct solidification, where the melt solidifies directly in the crucible to produce a disk for direct x-ray analysis. With the development of the Pt-Au alloy crucible within the past few years it has been possible to prepare fused glass disks by direct solidification with a high degree of success. This method is currently in use in our laboratories for the quantitative analysis of a number of different kinds of samples including the environmental samples urban dust, river sediment, and fly ash. Sample sizes typically range from 0.2 to 1.0 grams and are fused with 6.0 grams of lithium tetraborate. In the application of fusion techniques the following variables must be considered: (a) sample type (e.g. refractories), (b) particle size of the sample, (c) fusing agent (flux), (d) sample-to-flux ratio, (e) fusion temperature, and (f) fusion period. With the automated fusion device in use in our laboratories, fusion period and temperature can be controlled. Temperatures from 1000 to 1100°C can be attained using natural gas-air mixtures and from 1100 to 1200°C using propane-air mixtures. If necessary, the sample is first ground to at least 200 mesh before fusing to provide optimum mixing and to minimize the fusion time. In table 1 are listed fusion periods for fusing some typical sample types.

¹Figures in brackets indicate the literature references at the end of this paper.
Table 1. Automatic fusion: fusion periods (time in minutes) for typical sample types

                                   Cement   Sediments    Alumina, quartzite
                                                         (refractories)
Low heat cycle (550-600°C)         3-4      3-4          3-4
Melt and fuse cycle (1100°C)       4        6-8 (typ)    12 (max)
Mix and fuse cycle (1100°C)        3-4      8 (max)      8 (min)
Ambient cool                       3        3            3
Air cool                           3        3            3
TOTAL                              16       23           29

Cements and dolomitic limestones represent typical easily fused non-refractory samples, and calcined alumina and quartzite represent the more difficult to fuse refractory samples. Samples containing chromium concentrations above 10% by weight are often difficult if not impossible to fuse in lithium tetraborate, but can be fused with lithium metaborate as the flux agent. The low heat cycle at about 550°C expels the low temperature volatiles such as carbon dioxide; then 2-3 minutes are required to melt the flux and start the dissolution of the sample. Four minutes is usually satisfactory for the more easily fused samples, but the time can be extended to 12 minutes for the more refractory type samples. Mixing at the fusion temperature requires 3-4 minutes to a maximum of 8 minutes. Longer times can be employed during this cycle, however, with an external timer. At the end of the fusion cycle the crucible is allowed to cool at room temperature for three minutes. Compressed air then enters the burner to promote rapid cooling. The solidified disk is removed, ground on a 220 mesh abrasive and then polished with a fine grade of paper abrasive (e.g. 400 grit). Although the Pt-Au alloy crucible is essentially non-wetting, there is some wetting, which is obvious as a concave meniscus on the solid disk. The amount of wetting varies with sample composition, and experience in other laboratories has shown that sticking can become so severe that samples such as copper ores cannot be removed from the crucible without cracking the disk. Some investigators [4] have added halides, notably bromides, to reduce the sticking substantially. With the addition of 2 drops of hydrobromic acid the concave meniscus is eliminated and the disk is released easily from the crucible. Our results have shown that an appreciable amount of HBr is retained in the disk and can interfere in the analysis. We have found that after careful preparation of the inside surface of the crucible, by first polishing with metal polish followed by cleaning in 1:1 hydrochloric acid, the disk could be removed without cracking. This procedure was used successfully for a number of different types of samples, including cements, urban dust, river sediments and minerals, and minimized the need for wetting agents. Wetting may still present a problem, however, for some types of samples.

3. Results

When thick specimens are analyzed, the measured x-ray fluorescence intensities are influenced by the elemental composition of the sample. That is, the intensity of emission, Ia, of one element, a, depends upon the concentrations of all the elements present:

    Ia = f(Ca, Cb, Cc, ...)     (1)

A number of matrix correction procedures have been devised which require the use of samples of known composition. In the empirical approach, one obtains interaction or influence coefficients from the standard samples which are then used to calculate the corrected concentration of each element in the unknown sample. Since disks can be prepared from samples of known composition and/or synthesized with reagent compounds, the coefficients can be easily determined. Another method often used to compensate for interelement effects requires the addition of a "heavy absorber" (e.g. La₂O₃ or WO₃) to both the analyte and the standard.
This technique has been used especially in the analysis of mineral samples for elements from aluminum to iron. Because the heavy absorber attenuates the x-rays to a much greater degree than the elements excited in the sample, it is very effective in minimizing the interelement effect. We have used this technique in the analysis of urban dust samples and NBS SRM 1633 Fly Ash. The analyses of the fly ash sample are in good agreement with the NBS certificate values and those from other workers.

References

[1] Goulding, F. S., Jaklevic, J. M., and Dzubay, T. G., EPA-650/4-74-030, July, 1974.
[2] Hammerle, R. H., Marsh, R. H., Rengan, K., Giauque, R. D., and Jaklevic, J. M., Anal. Chem. 45, 1939 (1973).
[3] Kingston, H. M., private communication.
[4] Matocha, C. K., Halide Additions to Fusions in Platinum-Gold Crucibles to Control Wetting, Analytical Chemistry Division, Alcoa Laboratories, Alcoa Center, Pa. 15069.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

MULTIELEMENT ANALYSIS OF AIR AND WATER POLLUTANTS IN GOLD MINES BY THERMAL AND EPITHERMAL NEUTRON ACTIVATION

C. S. Erasmus, J. Sargeant, J. P. F. Sellschop and J. I. W. Watterson
Nuclear Physics Research Unit
University of the Witwatersrand
Johannesburg, 2001, South Africa

1. Introduction

A multielement procedure for the determination of potentially harmful constituents in air and water samples, taken from underground working areas and slimes dam sediments (from the surface surroundings associated with the gold mining industry), is presented. The method utilizes thermal and epicadmium instrumental neutron activation analysis based on Ge(Li) gamma-ray spectrometry and computer evaluation of recorded data, which allows up to 40 elements to be determined without any chemical treatment or extensive sample manipulation.

Industrialized countries are becoming more aware of the implications that result from pollution of both working areas and the surrounding environment. The control of air and water samples is of importance both to the mining industry and to those responsible for natural fresh water resources in South Africa. The excess water pumped from gold mines can become an important source of water in densely populated and heavily industrialized areas, where surface contamination and transmission of pollutants should be avoided and monitored. This implies that the long term effects of harmful elements and the mechanism of their transmission warrant investigation.

The study of trace elements in atmospheric aerosols and water samples has become a field of great emphasis in recent years. Instrumental neutron activation analysis (INAA) is attractive for this type of study because of its high sensitivity for the simultaneous determination of a great number of elements in environmental samples. Several reports concerning a small number of elements have been published both for aerosol [1]¹ and water studies [2,3]. Tanner et al. [4] stressed that the relatively high concentration of sodium present in underground waters interferes with the INAA of many elements in such water samples. The advantages of freeze-drying water samples as a preconcentration step have been suggested by Schmitz et al. [5], while the efficiency of lyophilization has been investigated by Harrison [6].
In a recent paper, Salbu, Steinnes and Pappas [7] have shown that up to 40 elements may be determined in natural fresh water by INAA. This work demonstrates that quantitative determination, of 19 elements present in aerosols and water from underground working areas and slimes dam sediments from surface mining surroundings, can be performed using thermal and epi thermal instrumental neutron activation (the latter to overcome problems with high sodium content) and subsequent gamma spectrometry with high resolution Ge(Li) detectors. 2. Experimental Samples . The air in underground working areas of a gold mine was sampled by electro- static precipitation onto Whatman 541 filter paper discs. Approximately 7 m 3 air was sampled at positions corresponding to the face-level of a miner during his working day. Depending on the miner's working environment, from 1.4 mg to 3.4 mg of solid materials were collected per aerosol. The quartz component of these samples (determined by x-ray fluorescence) ranged from 0.08 to 0.6 mg of the total sample. 1 Figures in brackets indicate the literature references at the end of this paper. 129 Underground water was sampled from a gold mine situated on the Witwatersrand and 250 ml aliquots were cautiously evaporated to dryness in new Teflon beakers. Approximately 0.7 g of dissolved solids crystallized on the bottom of each beaker and were pulverized and homog- enized using an agate mortar and pestle. Slimes dam sediments were sampled at different depths (30 cm, 60 cm, and 90 cm) of a dam still being used. Each sediment sample was pulverized and thoroughly mixed before aliquots were removed for analysis. A well analyzed granitic rock reference sample and dilute solutions of certain elements {e.g., Uranium) were included as calibration standards. Sample preparation and irradiation . All irradiations were carried out in the ORR-type reactor of the South African Atomic Energy Board (Pelindaba). For elements giving nuclides with half-lives <3 hours (see e.g., table 1) the aerosols and 100 mg of the solid samples (dissolved solids obtained from the underground water and slimes dam sediment) were sealed in high purity polyethylene containers and irradiated for 3 minutes in the pneumatic tube facility at a neutron flux of 3 x 10 13 n cm 2 s 1 . The remaining elements (in table 1) were determined by thermal and epithermal methods of activation. In order to compare the two methods half of each aerosol sample and duplicates of approximately 200 mg of the solid samples were sealed in quartz ampules. One set was sealed in aluminum cans and irradiated for one hour in the hydraulic facility at 9 x 10 13 n cm 2 s 1 . For epithermal activation, the other set was incapsulated in cadmium cans of 1.5 mm wall thickness and irradiated in an epithermal flux of 1 x 10 12 n cm -2 s" 1 symbol for 20 hours. Gamma-ray spectrometry . The activated samples were counted on 60 cm 3 Ge(Li) detectors having resolutions of 2.4 keV and 1.9 keV for the 1332 keV and 1170 keV peak of cobalt-60 and relative efficiencies of 9.5 and 10 percent. The detectors were coupled to 4000 channel pulse-height analyzers calibrated at 1 keV per channel. Samples and calibration standards were counted at varying distances from the detector so as to optimize the compromise of maximum precision due to counting statistics and minimum correction for dead time in the analyzer. Dead time corrections were performed by means of a pulse generator as described by Anders [8]. 
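The pulser-based dead-time correction cited from Anders [8] amounts to a simple live-time bookkeeping step. The sketch below is only an illustration of that bookkeeping, not the authors' code; the function name and the example numbers are assumptions.

```python
def dead_time_corrected_area(peak_area, pulser_peak_area, pulses_injected):
    """Correct a photopeak area for analyzer dead time using a pulser.

    A pulse generator feeds a known number of artificial pulses into the
    counting chain during the measurement; the fraction of them that
    survives into the recorded pulser peak is the live-time fraction,
    and every photopeak area is divided by that fraction.
    """
    live_fraction = pulser_peak_area / pulses_injected
    return peak_area / live_fraction

# Example: 5 percent of the injected pulser events were lost to dead time.
print(dead_time_corrected_area(peak_area=12_400,
                               pulser_peak_area=9_500,
                               pulses_injected=10_000))   # about 13053 counts
```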
The spectra accumulated were transferred to magnetic tape for subsequent analysis using a modified version of the Hevesy program [9]. All spectral data were normalized to a counting distance of 10 cm for comparative purposes.

3. Results and Discussion

Quantitative results for 19 elements determined in aerosols and water samples from underground working areas and slimes dam sediments are reported in table 2. These results show that high uranium contents were found in the aerosol, and particularly in the underground water and slimes dam samples. Arsenic is present in both the aerosol and sediment samples, but before one establishes whether the arsenic is present as arsenopyrite or in another (more harmful) form, no inferences should be made. It should also be emphasized that corrections for reagent blanks (filter paper and sample vials) were applied.

Figure 1 shows three spectra taken shortly after the 3 minute irradiation for each of the three sample types. The peaks representing the analytical photopeaks indicated in table 1 are labeled. In order to compare thermal with epicadmium activation, two spectra of the same sample irradiated by these two methods and recorded at approximately the same decay time are shown in figure 2. Elements that are better determined by one or the other method of activation are indicated.

4. Conclusions

In the present study instrumental neutron activation analysis, using epithermal irradiation, is shown to be a very useful technique for the simultaneous determination of a great number of elements in aerosol and water samples from underground mining areas and in samples of the surface environs of mines.

Epithermal activation is shown to enhance the detection sensitivities of many elements such as Mn, Fe, Co, Ni, Ga, As, Br, Sr, Mo, Sb, Ba, La, Sm, Tb, Ta, W, Au, Th, and U, relative to those elements (e.g., Na, Sc, Cr, and Eu) mainly activated by thermal neutrons. In particular sodium and scandium, which often interfere with the thermal activation analysis of other less sensitive elements, are subdued in epithermal activation, which allows a greater number of elements to be determined. This aspect is particularly important in the analysis of materials with a high sodium content such as most natural waters and the terrestrial rocks from which dust (collected on aerosols) originates. It may therefore be suggested that epithermal activation analysis become the preferred technique for air and water studies of the environment.

Table 1. The elements found in underground air and water samples and slimes dam sediments from gold mines.
Target     Product          Half-life   Photopeak measured   Underground    Underground     Slimes dam
element    radionuclide                 Eγ (keV)             air samples    water samples   sediments
Na         ²⁴Na             15.0 h      1368                 Q              Q               Q
Mg         ²⁷Mg             9.45 m      1014                 P              P               P
Al         ²⁸Al             2.31 m      1779                 P              P               P
Cl         ³⁸Cl             37.3 m      1642, 2167           P              P               P
K          ⁴²K              12.5 h      1525                 Q              P               Q
Ca         ⁴⁹Ca             8.8 m       3080                 P              P               P
Sc         ⁴⁶Sc             83.9 d      889                  P              Q               Q
Cr         ⁵¹Cr             27.8 d      320                  P              N               Q
Ti         ⁵¹Ti             5.79 m      320                  P              N               P
V          ⁵²V              3.76 m      1434                 P              N               P
Mn         ⁵⁶Mn             2.56 h      847, 1811            Q              Q               Q
Fe         ⁵⁹Fe             45.1 d      1099, 1292           Q              Q               Q
Co         ⁶⁰Co             5.27 y      1173, 1333           Q              Q               Q
Ni         ⁵⁸Co             71.3 d      810                  Q              Q               Q
Cu         ⁶⁶Cu             5.1 m       1039                 P              P               P
Zn         ⁶⁵Zn             245.0 d     1115                 N              N               N
As         ⁷⁶As             26.3 h      559, 657             Q              Q               Q
Br         ⁸²Br             35.9 h      777                  P              P               P
Rb         ⁸⁶Rb             18.7 d      1077                 Q              Q               Q
Ag         ¹¹⁰ᵐAg           253.0 d     1384                 P              N               P
Sb         ¹²⁴Sb            60.9 d      1691                 P              Q               Q
Cs         ¹³⁴Cs            2.07 y      796                  N              N               P
Ba         ¹³¹Ba            11.3 d      496                  P              Q               Q
La         ¹⁴⁰La            40.3 h      1595                 P              Q               Q
Sm         ¹⁵³Sm            47.1 h      103                  P              P               P
Eu         ¹⁵²Eu            12.2 y      1408                 P              Q               Q
Tb         ¹⁶⁰Tb            72.1 d      879                  Q              Q               Q
Dy         ¹⁶⁵Dy            2.36 h      95, 280              N              N               P
Hf         ¹⁸¹Hf            42.1 d      482                  N              Q               N
Ta         ¹⁸²Ta            115.1 d     1222                 Q              N               Q
W          ¹⁸⁷W             1.0 d       686                  P              N               P
Au         ¹⁹⁸Au            2.7 d       412                  P              N               P
Th         ²³³Pa            27.0 d      300, 312             Q              Q               Q
U          ²³⁹Np            2.35 d      228, 278             Q              Q               Q

Q = quantitative value
P = presence established
N = not detected
s = seconds, m = minutes, h = hours, d = days, y = years

Table 2. The average results obtained by instrumental neutron activation analysis of aerosol, water and slimes dam sediment samples.

Element   Aerosols (ppm, E%)            Waters (ppm, E%)   Sediments (ppm, E%)
Na 1.12% 0.8  K 2.9% 11  Sc 10 2  Cr P 30  Mn 745 0.3  Fe 7.0% 10  Co 59 5  Ni 77 13  As 1.1 1  Rb 110 30  Sb P 5  Ba 21 26  La P 1.3  Eu P 25  Tb 6 15  Hf N  Ta 3.7 20  Au (0.7) 1.7  Th 14 15  U 16 0.6
40 0.4 0.14% 1.1  P 17 1.6% 2.9  0.007 10 10.7 0.9  N 258 0.5  13.3 1.0 57 0.6  11 20 2.0% 1  3.5 0.2 20 1.2  12 0.4 80 3.5  0.002 3.5 12 0.5  N 65 3.3  0.001 10 1.7 1.6  1.2 7.7 190 5  0.38 2.7 27 1.2  0.02 10 0.7 4.4  0.011 2.4 0.5 6.8  N 6.8 2.3  N 2.0 1.7  N (0.23) 1.8  0.012 2.5 6.4 4  1.6 1.2 12 1

Results are in parts per million (ppm) unless otherwise indicated
E% = counting error
P = present but not determined
( ) = estimate
N = not detected

Fig. 1. Spectra taken soon after the 3 minute irradiation: aerosol, Td = 12 min; underground water, Td = 16 min; slimes dam sediment, Td = 40 min.

Fig. 2(a). Comparison of spectra obtained for duplicate samples after thermal irradiation (Td = 1.55 d) and epithermal irradiation (Td = 0.41 d).

Figure 5. AAS-ES intercomparison results obtained on three different digestions.

Figure 6. AAS-ES intercomparison results obtained on three different digestions.

Figure 7. AAS-ES intercomparison results obtained on three different digestions.

6. Conclusion

The method of spectroscopic buffer addition to both calibration standards and samples provides an acceptable method for multi-elemental analysis of digested sediment and sediment-like materials. The method is simple, relatively fast and provides the accuracy necessary for routine "on-line" analysis of this type of environmental material.

The authors would like to acknowledge the continuous encouragement of Mr. Gerard C. Ronan, Director, Laboratory Services Branch.

References

[1] Theron, P. P., et al., J. S. Afr.
Vet. Assoc, 44, 271, (1973). [2] Boratynski, K. , et al., Vol. J. Soil Sci. , 6, 95, (1974). [3] Moselhy, M. M. , Boomer, D. W. , Bishop, J. N. Diosady, P. L., A paper presented at the TARC International Symposium III, "Trace Analysis of Environmental Materials", Aug. 1976, Halifax, Nova Scotia, Canada. [4] Cowgill, U. M., Appl. Sped., 28, 455 (1974). [5] Scott, R. H., Kokot, N. 0., Anal, Chim. Acta, 75, 257, (1975). [6] Scott, R. H., Kokot, N. 0., Anal. Chim. Acta, 76, 71 (1975). 150 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gai thersburg, Md. (Issued November 1977). HIGH EFFICIENCY SOLVENT EXTRACTION OF TRACE ELEMENTS IN AQUEOUS MEDIA WITH HEXAFLUOROACETYLACETONE Morteza Janghorbani : , Max El linger and Kurt Starke Philipps University Marburg/L., Fed. Rep. of Germany 1. Introduction Widespread application of gas-liquid chromatography to analysis of trace elements in aqueous media has been severely limited due to lack of suitable methods for converting a wide range of elements to volatile and thermally stable species. Of the converters tested, the 3-di ketone family has shown the greatest promise [l] 2 . Furthermore, it has been conclusively shown that fluorine substitution of the methyl protons enhances the volatility of the resulting chelates, allowing column operation at reduced temperatures and greatly enhanced sensitivities by use of electron capture detection. These advantages are, however, said to be offset by the poor extractabil ity of elements with hexafluoroacetylacetone (Hhfa) as compared to acetylacetone (Hacac). Unfortunately, in contrast to Hacac, no systematic study of solvent extraction of elements with Hhfa, and to a lesser extent, with trifluoroacetylacetone (Htfa), has been reported [2]. In this communication, we present data on comparative extraction of Sc 3+ with Hacac, Htfa, and Hhfa obtained with a special radiochemical solvent extraction device. 2. Instrumentation A special solvent extraction device is reported in this -work based on gamma spectro- metry with Nal(Tl) detection and multi-channel pulse-height analysis. This device allows: (1) solvent extraction studies to be carried out at trace levels by use of high specific- activity tracers, (2) capability of simultaneous extraction studies of multi-element solutions to the extent permitted by Nal(Tl) detection resolution, thus permitting studies on synergistic/antagonistic effects of various elements to be carried outon a single solution, (3) acquisition of the entire %E-pH curve on a single solution aliquot reducing contamination problems and excessive use of reagents, (4) study of extraction-time behavior at various pH values, and (5) experimentation on extraction reversibility to be conducted easily. The basic features of this simple device are shown in figure 1. It consists of a glass cylinder inside of which a Teflon plunger with side clearance of about 1 mm moves vertically by a motor and motion-converter mechanism. Two detectors in Nal(Tl), 2in x 2in, continuously monitor gamma radiation from the two phases and a combination electrode measures pH of the aqueous phase. Suitable size lead shielding between the two detectors reduces radiation cross talk (to less than 1% for 1.33 MeV radiation in the present design). In this version of the device the two detector outputs are sequentially switched into a 1024 channel multi-channel analyzer. 
Thus, after any mixing period and attainment of phase separation, the ratio of integrated counts in the photopeaks of interest between the two detectors, corrected for an instrument factor, gives the needed %E data. Changing solution pH is accomplished by addition of a small volume of acid or base as required. Shaking time can be controlled from a few seconds upward, and the extent of mixing is adjusted by controlling the speed of plunger motion with the variable-speed motor employed.

¹Present address: Environmental Trace Substances Research Center, Route 3, Columbia, Missouri 65201
²Figures in brackets indicate the literature references at the end of this paper.

Fig. 1. Solvent extraction device (all dimensions in cm): two 5 x 5 cm NaI(Tl) detectors separated by a Pb shield view the aqueous and organic phases; a pH electrode in the aqueous phase and a motor-driven Teflon plunger complete the cell.

3. Extraction Data

Figure 2 shows comparative extraction kinetic data obtained for Sc³⁺ with the three agents Hacac, Htfa, and Hhfa in chloroform. Note that these data were obtained with the solution buffered at pH 5.2. It appears that the extraction rate increases in the order Hacac < Htfa < Hhfa. Although approximately two minutes are required for attaining equilibrium for the Sc³⁺-Hacac system, 20 seconds is sufficient for Sc³⁺-Hhfa.

Fig. 2. Comparative extraction kinetics of the Sc³⁺-β-diketone systems buffered at pH 5.2.

Comparative extraction-pH curves for Sc³⁺ and the chelating agents are shown in figure 3, based on long (a few minutes) mixing times. From these data it appears that extraction efficiencies of better than 90% are obtained for all three systems. The efficiency for Sc³⁺-Hhfa decreases drastically in neutral to basic media, most probably due to hydroxocomplexation. This may also be the case for the other two systems at higher pH values. An important observation to be made from these data is the apparent discrepancy in the direction of the shift in pH½ values in the series. In the absence of complicating reactions, the Sc³⁺-Hhfa curve should lie to the left of that for Sc³⁺-Htfa. The experimental curve is shifted significantly to the right, this being due to competition from the dihydrate formation reaction.

Fig. 3. Extraction of 40 ng/ml Sc³⁺ with 0.1 M β-diketone in CHCl₃.

Fig. 4. Extraction kinetics of the Sc³⁺-Hhfa system for different pH values: pH 4.8-5.1 (buffered), pH 2.9-3.2 (unbuffered), and pH 1.4-1.5 (unbuffered).

Extraction-time plots for the Sc³⁺-Hhfa system at three pH values are shown in figure 4.
Although the curve for the solution buffered at pH 5.2 attains a plateau, as expected for a simple extraction system, those obtained at pH values of 1.5 and 3.0 exhibit a maximum at about 30 seconds, clearly indicating that: (1) significant dihydrate forma- tion takes place in acid but not neutral or basic media, (2) kinetics of dihydrate formation are significantly slower than extraction for this system, and (3) the equilibrium extrac- tion efficiencies (corresponding to approximately three minutes) closely duplicate data shown in figure 3, thus explaining the observed right-ward shift of the Sc 3+ - Hhfa curve. Spectroscopic data on the [Hhfa] • [Hhfa • 2H 2 0] system also confirm these findings. 4. Conclusions Data presented in figure 4 indicate clearly that for elements whose extraction kinetics are faster than the dihydrate formation reaction and whose extraction with Hacac is efficient, Hhfa can be utilized effectively if shaking times are optimized. This would not only allow high-efficiency extraction of such elements with Hhfa, but would also permit group separations based on this consideration. Practical application of this concept, however, should await establishment of detailed extraction data on the elements of interest. References [1] Moshier, R. W. and Sievers, R. E., Gas Chromatography of Metal Chelates, (Pergamon Press, NY, 1965). [2] Stary, J. and Hladky, E., Anal. Chim. Acta., 28, 227-235 (1963). IKJ l . ^. 156 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gai thersburg, Md. (Issued November 1977). DETERMINATION OF TRACE AND MINOR ELEMENTS IN THE COMBUSTIBLE FRACTION OF URBAN REFUSE William J. Campbell, Harold E. Marr, III and Stephen L. Law College Park Metallurgy Research Center Bureau of Mines U. S. Department of Interior College Park, Maryland 20740, USA 1. Introduction With the national focus on energy, the combustible fraction of urban refuse is being extensively considered as a fuel supplement for coal in the generation of heat and power. One concern regarding this fuel supplement is the possible environmental impact from trace elements released to the atmosphere during the combustion process. The objective of these ongoing analytical studies is to determine the concentration of major, minor, and trace elements in the combustible fractions collected at various locations in the Bureau of Mines urban refuse recycling pilot plant located at College Park, Md. The samples processed through the plant are submitted by various municipalities that are considering resource recovery as an alternative to landfill or other means of disposal. Parallel studies are also underway to identify individual major contributors by analyzing separately, high- circulation newspapers and magazines plus various other types of paper and plastic products [I] 1 . 2. Separation of Combustibles In the United States refuse-derived fuels will generally be separated from the glass and metal fractions of municipal solid waste (MSW) prior to use for generation of heat and power. Many of the MSW separation processes will include part or all of these steps in- corporated in the Bureau of Mines urban refuse pilot plant [2]. This 5-ton-per-day pilot plant consists of shredders, magnetic separators, trommels, air classifiers, cyclones, and mineral jigs, as shown schematically in figure 1. 
The products collected in the three cyclones (figure 1) consist primarily of the lighter paper and packaging materials together with plastic film and light fabrics. The heavier combustibles such as wood, leather, rubber, and heavy-gage plastics are concentrated in the heavy organic product from the secondary air classifier. Most of the putrescibles and yard waste are concentrated in the organic wastes separated out by the mineral jig. Samples of combustibles from all five collection points in the plant are included in this study; however, it is probable that some refuse-derived fuels will be limited to the cyclone products. Therefore, this fraction is emphasized. Based on pilot plant data the following percentages of the combustibles entering the plant report in the cyclone fractions: paper, 93 percent; plastics, 70 percent; fabrics, 92 percent; cardboard, 46 percent; leather and rubber, 15 percent; wood, 14 percent; and a negligible amount of the putrescibles.

¹Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Raw refuse recovery flowsheet (key: unburned refuse; air to baghouse; water to recycle system).

3. Samples and Analysis

The MSW samples for this study were collected in the District of Columbia, Baltimore County and Montgomery County, Md., and Tampa, Fla.; MSW samples from other areas of the United States are presently being processed. The data in table 1 represent average U.S. values for the combustible fraction of MSW based on a 1973 survey [3]. The composition for individual samples varies significantly depending on factors such as economic status of neighborhood, time of year, and geographic location, particularly in regards to food and yard wastes.

Table 1. Composition of combustible fraction of municipal solid waste in the U.S.

Component                  Composition,      Paper fraction,
                           weight percent    weight percent
Paper                           50
  Corrugated cardboard                             26.7
  Newspapers                                       18.0
  Other, paperboard                                13.3
  Paper packaging                                  12.6
  Office paper                                     12.2
  Magazines, books                                  7.6
  Tissue paper, towels                              5.3
  Other, nonpackaging                               2.9
  Paper plates, cups                                1.4
Yard                            17.8
Food                            16.8
Plastics                         5.2
Wood                             4.6
Leather-rubber                   3.4
Textiles                         2

The sampling procedure for the combustible fractions from the pilot plant was designed to obtain samples for energy-related measurements such as Btu value and ash content [4]. For the cyclone products the total accumulated sample collected in each cyclone is passed through the secondary shredder and the secondary air classifier to obtain a more manageable particle size. A 10 to 20 percent grab sample is then taken and progressively coned and quartered to a final sample size of approximately 3 kilograms. A similar procedure is used for the heavy organics fraction, whereas the organic fraction containing the putrescibles and yard wastes is sampled periodically during the pilot plant runs. Two hundred grams of each type of sample are then put through a Wiley mill² to achieve a minus 2-mm particle size for the analytical samples. The large-volume individual contributors--paper, magazines, plastics, textiles, etc.--are prepared by reducing the entire sample to minus 2 mm in the Wiley mill. Any noncombustible material in these samples such as staples, buttons, etc., is removed prior to size reduction in the mill. Analysis of replicate samples of the combustible fraction indicates that the reproducibility of the analytical sample is adequate for these environmental impact studies.
The largest variations were noted for those elements--Cu, Fe, Ni, Pb, and Zn--that may be present as fine wires or pieces of solder and screen in addition to their normal mode of occurrence in the combustible materials. As expected, the repro- ducibility between replicate samples was good for the large-volume individual contributors. The combustible fractions and the large-volume individual contributors were analyzed by two techniques—chemical digestion-atomic absorption spectrophotometry, and pressed pellet- energy dispersive x-ray spectrography. For the chemical-AA procedure, 4 to 5 grams of sample are digested in nitric and hydrofluoric acid with subsequent treatment by perchloric acid when necessary. The dissolved samples are brought to volume in a 100-ml volumetric flask and then analyzed by flame AA procedures. Reliability of the chemical digestion-AA method was evaluated by analyzing a National Bureau of Standards reference coal fly ash #1633. Reference to specific equipment or materials does not imply endorsement by the Bureau of Mines. 159 The pressed pellet-energy dispersive x-ray procedure was calibrated using previously analyzed samples of both the combustible fraction and individual large contributors as secondary standards. Samples and standards were prepared by blending 3.5 grams of minus 2- mm sample and 0.5 gram of Somar mix (as a binder) in a Spex mill and then pressing into a 1 1/4-inch-diameter pellet at 30,000 pounds per inch. Linear regression curves obtained by this procedure have correlation coefficients in the order of 92 to 95 percent. The deviations from the regression lines were of the same order as the spread in replication values. No interelement corrections were applied to the x-ray intensities, although there are significant variations in the composition of both the combustibles and individual contributors. Sampling of MSW is considered to be a more important source of error than the uncorrected x-ray analytical results. 4. Results Table 2 summarizes analytical data obtained on the combustible products of MSW collected during operation of the Bureau of Mines pilot plant. These data are now being extended to include other elements of environmental concern such as antimony, arsenic, bismuth, and mercury. The Oak Ridge National Laboratory coal values in table 2 represent a specific coal, whereas the eastern coal data are average values for a wide range of coals collected in the Eastern United States. Because of the wide variability in both types of fuel, small differences in concentration levels between coal and combustibles are not significant. The approximate weight fractions of the combined cyclones, heavy organics, and organic wastes are 0.76, 0.06, and 0.18, respectively. By limiting the combustibles used for fuel to the cyclones fraction only, it is possible to achieve a significant reduction in the amount of Cd, Cu, Ni, Pb, and Zn available for release to the atmosphere. Table 2 . Trace, minor , and major metals in combustible fractions and in coals. 
            Coal                    Combustibles
            ORNL     Eastern   Cyclones                  Heavy organics            Organic wastes
                               Av       Range            Av       Range             Av       Range
Ag            -        -         3      <3-20              4      <3-8               <3      <3-4
Al        14,670       -     9,600      5,500-21,000   9,900      4,900-22,000    7,400      6,000-8,600
Ba            96       94      160      <20-1,400        150      35-280            170      70-240
Be            -       1.3       <2      <2                <2      <2                 <2      <2
Bi            -        -       <15      <15               16      <15-27            <15      <15-20
Ca         5,790       -     6,500      1,500-17,500  26,000      4,400-44,000   25,000      15,000-38,000
Cd          0.64       -         4      <2-23             38      9-70               32      6-93
Co           4.6       20        3      <3-7               6      <3-12               5      4-8
Cr          26.7       25       50      10-240           180      55-320             50      30-85
Cu          12.2       14      190      20-1,400       2,800      330-6,600         690      290-1,700
Fe        17,825       -     2,000      800-5,100      5,900      1,100-22,000    3,000      2,400-4,300
K          2,480       -       820      280-2,000      1,300      510-2,600       3,600      2,000-4,600
Li            -        62        2      <2-25             <2      <2-4                2      <2-6
Mg         1,890       -     1,400      560-4,100      2,900      860-8,600       2,500      1,800-3,500
Mn            58       28      130      50-480           160      50-310            150      120-170
Na           930       -     4,300      1,500-11,000   3,100      900-6,400       9,800      6,400-13,000
Ni            23       22       14      5-35              85      16-400             35      15-50
Pb           8.2      5.9      300      90-1,600         430      110-950          1,700     310-6,500
Sn            -        -        20      <20-80            35      <20-50             50      25-110
Sr            -        -        15      <10-60            25      <10-60             30      10-65
Ti           710       -     2,100      1,100-4,500    3,100      100-5,400       2,000      850-2,800
Zn            94       25      850      150-7,000      2,400      150-9,300         560      330-880

The technical assistance and advice of the Analytical Research and Service Group is greatly appreciated, in particular Ben Haynes, John Novak, David Neylan, James McConnell, and Margret Lang. Paul Sullivan and Harry Makar of the Secondary Resource Recovery Group supplied the pilot plant samples in addition to many helpful comments.

References

[1] Campbell, W. J., Metals in the wastes we burn, Environmental Science and Technology, Vol. 10, pp. 436-439, 1976.

[2] Sullivan, P. M., Stanczyk, N. H., and Spendlove, M. J., Resource Recovery from Raw Urban Refuse, Bureau of Mines Report of Investigations 7760, 36 pages, 1973.

[3] U. S. Environmental Protection Agency, Third Report to Congress: Resource Recovery and Waste Reduction, SW-161, 1975.

[4] Schultz, H., Sullivan, P. M., and Walker, F. E., Characterizing Combustible Portions of Urban Refuse for Potential Use as Fuel, Bureau of Mines Report of Investigations 8044, 26 pages, 1975.

Part IV. PHYSICAL CHARACTERIZATION OF AEROSOLS

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

PHYSICAL CHARACTERIZATION OF AEROSOLS

K. T. Whitby
Mechanical Engineering Department
University of Minnesota
Minneapolis, Minnesota 55455

1. Introduction

During the past decade there has been a major revolution in the way aerosols are measured and in the understanding of their physical and chemical size distributions. It is the purpose of this paper to review the most important findings and discuss some of the most important implications for aerosol measurement. Size distribution characteristics and concentrations found in various locations will be illustrated from the results of a number of field studies.

2. General Characteristics of Aerosols at the Earth's Surface

Much evidence from a variety of field studies suggests that atmospheric aerosols are in general multimodal, with two to three modes being observable. The mass or volume distribution is usually bimodal with a minimum observed in the 1 to 3 µm diameter range. The particles larger than a few microns originate from natural or man-made mechanical processes. The mechanically produced particles are hereafter called "coarse particles".
The particles smaller than a few microns arise predominantly from condensation processes. These aerosols smaller than a few microns are called "fine particles". The predominant man-made source of these fine particles is combustion or the condensation of chemical or photochemical reaction products on nuclei from combustion. The fine particle range may also show two distinct modes. For example, figure 1 shows a trimodal size distribution measured 30 m from the roadway during the General Motors Sulfate Study. The smallest mode is quite distinct in this case because the source (catalyst-equipped cars) emitted most of the aerosol in the 0.02 µm nuclei mode, and the accumulation mode (middle mode) was relatively small because the background on this day was very low.

Figure 2 summarizes the nomenclature, mechanisms of formation, and removal mechanisms for the three modes. The first mode in the vicinity of 0.02 µm diameter results primarily from the direct emission of primary particles from combustion. The second submicron mode, in the 0.15 to 0.8 µm range by volume, is the result of either the coagulation of primary particles or the condensation of reaction products or water on primary particles. The third mode, or coarse particle mode, consists of mechanically produced aerosols with the upper size limited by classification due to sedimentation. There appears to be very little exchange of mass under most conditions between the fine and coarse particle ranges in the atmosphere.

Figure 2 also summarizes the various mechanisms that contribute to the formation, transformations and removal of atmospheric aerosols. Most mass is inserted in the distribution either through the accumulation mode or through the coarse particle mode. Only under unusual circumstances near large sources of combustion aerosol such as a freeway, or in a plume from a stack, is appreciable mass injected directly into the nuclei mode.

Some typical values of the concentration of fine and coarse particles in the atmosphere are shown in table 1. Because of the relative independence of fine and coarse particles, both nonurban and urban sites have yielded widely varying ratios of fine to coarse particles. On the average, however, the fine particles constitute between one-third and one-half of the total aerosol mass.

Figure 1. Trimodal volume distribution measured 30 m from the roadway during the General Motors Sulfate Study, October, 1975 (GM Milford Proving Grounds site, 8:40 to 9:00 hr EDT).

Figure 2. Schematic of the principal mechanisms of formation of a trimodal size distribution, showing the transient (Aitken) nuclei range, the accumulation range, and the mechanically generated aerosol (coarse particle) range along the particle diameter axis in micrometers; the first two ranges make up the fine particles.

Table 1. Typical values of the concentration of fine and coarse particles in the atmosphere.

                                                Concentration, µg/m³ (ρ = 1)
Location                   Condition              Fine particles   Coarse particles
Los Angeles                Grand average                37                30
Los Angeles freeway        Wind from freeway            77                59
Denver                     Grand average                16.6             232
Goldstone                  Clean                         4.2              10.4
Milford, Mich.             Clean                         1.5               3
Pt. Arguello (seaside)     Marine air                    1.1              53
St. Louis                  Very polluted               365               114
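The statement above that fine particles make up one-third to one-half of the total mass on the average can be checked directly against the rows of table 1; individual sites scatter well outside that band, consistent with the widely varying ratios noted in the text. The few lines below merely restate that arithmetic; the numbers are copied from the table and nothing else is assumed.

```python
# Fine and coarse particle concentrations (ug/m3) from table 1.
sites = {
    "Los Angeles (grand average)": (37, 30),
    "Los Angeles freeway":         (77, 59),
    "Goldstone (clean)":           (4.2, 10.4),
    "St. Louis (very polluted)":   (365, 114),
}

for site, (fine, coarse) in sites.items():
    print(f"{site}: fine fraction = {fine / (fine + coarse):.2f}")
```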
3. Size Distribution Models for Source Related and Urban Atmospheric Aerosols

Whitby [1]¹ has shown that atmospheric volume, mass and chemical size distributions can be fitted by three independent log-normal distributions. For example, figure 1 shows a typical distribution observed alongside of the roadway during the General Motors Sulfate Study. The bars are the actual data from the three sizing instruments, and the solid lines are the fitted log-normal distributions.

The trimodal log-normal fitting procedure has been applied to a large number of size distributions measured with the Minnesota Aerosol Analyzing System during the field projects in which we have participated during the last seven years. After examining this data in some detail, Whitby and Cantrell [2] have grouped the distributions into six categories. The modal parameters for these six distributions are presented in table 2, and the characteristics of each distribution are summarized below:

¹Figures in brackets indicate the literature references at the end of this paper.

Table 2. Modal parameters and integral parameters for six typical model atmospheric distributions.

                               Nuclei mode          Accumulation mode     Coarse particle mode    Integral parameters
                             DGV    σg     V        DGV    σg     V       DGV    σg     V         NT       ST      VT
Clean background              --     --     --      .35    2.1    1.5     6      2.2    5         796      40.7    6.50
Average background           .034   1.7    .037     .32    2.0    4.45    6.04   2.16   25.9      8.6E3    148     30.4
Background & aged
  urban plumes               .028   1.6    .029     .36    1.84   44      4.51   2.12   27.4      1.6E4    938     71.4
Background & local sources   .021   1.66   .62      .25    2.11   3.02    5.60   2.09   39.1      4.1E5    352     42.7
Urban average                .038   1.81   .63      .32    2.16   38.4    5.7    2.21   30.8      1.4E5    1131    69.8
Urban & sources              .032   1.74   9.19     .25    1.98   37.5    6.02   2.13   42.7      2.2E6    3201    89.4

DGV = geometric mean size by volume, µm
σg = geometric standard deviation, unitless
V = volume concentration in mode, µm³/cm³
NT = total number concentration in distribution, no./cm³
ST = total surface area concentration, µm²/cm³
VT = total volume concentration, µm³/cm³

Clean Background
  Observed only in large clean air masses
  Several hours away from combustion sources
  No nuclei mode
  Volume accumulation mode (VAC) < 2 µm³/cm³

Average Background
  Mixture of Clean Background, small amounts of aged urban plumes and local combustion aerosol
  Small nuclei mode
  Volume accumulation mode (VAC) = 5 µm³/cm³
  Volume coarse particles (VCP) independent of VAC and dependent on local sources of dust

Background + Aged Urban Plumes
  Average Background + a strong plume from a major urban area
  Small nuclei mode determined by local combustion sources
  VAC similar to that in an average urban area
  VCP determined by local sources of dust

Background + Local Sources
  Strong local combustion sources increase the volume nuclei mode (VAN) to the urban concentration of about 0.6 without much increase in VAC over background
  Distribution is very dependent on nature of sources

Urban Average
  Nuclei mode determined by local sources, primarily automobiles
  Accumulation mode determined primarily by aged aerosol from the general area; VAC = 30 on the average
  Coarse particle mode determined by local sources

Urban + Sources
  Strong local sources of combustion aerosol, e.g., automobiles, increase both nuclei and accumulation modes
  Coarse particle mode is influenced by the nature of the source; fine particle and coarse particle sources are usually unrelated
  Concentration is very variable in time

The size distributions for which the parameters are tabulated in table 2 are plotted in figures 3 and 4.
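Because table 2 lists DGV, σg and V for each mode, any of the six model distributions can be regenerated from the table alone. The sketch below evaluates the usual sum-of-log-normals form of dV/d log Dp for the urban average row; the log-normal-in-log10(Dp) functional form is an assumption consistent with the trimodal fitting described above, not a transcription of the fitting program itself.

```python
import math

def dV_dlogDp(Dp_um, modes):
    """Trimodal log-normal volume distribution, um3/cm3 per unit log10(Dp).

    Each mode is (DGV, sigma_g, V): geometric mean diameter by volume (um),
    geometric standard deviation, and volume concentration in the mode.
    """
    total = 0.0
    for dgv, sg, v in modes:
        s = math.log10(sg)
        x = math.log10(Dp_um) - math.log10(dgv)
        total += v / (math.sqrt(2.0 * math.pi) * s) * math.exp(-x * x / (2.0 * s * s))
    return total

# "Urban average" parameters from table 2: (nuclei, accumulation, coarse).
urban_average = [(0.038, 1.81, 0.63), (0.32, 2.16, 38.4), (5.7, 2.21, 30.8)]

for Dp in (0.03, 0.3, 3.0, 10.0):        # particle diameter, um
    print(Dp, round(dV_dlogDp(Dp, urban_average), 1))
```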
A typical size distribution measured in the Labadie coal fired power plant plume, located in St. Louis, is also shown in figure 4. Although both background and plume aerosol distributions measured from aircraft in St. Louis during the Midwest Interstate Sulfur Transport and Transformation Study (MISTT) program seem to have small nuclei and coarse particle modes most of the time, most of the aerosol mass is in the accumulation mode. Another characteristic that may be observed from table 2 and figures 3 and 4 is that, except for the clean background and Labadie plume distributions, the volume of aerosol in the coarse particle mode is relatively constant at about 30 µm³/cm³. It is seen that the accumulation mode volume is much more closely related to anthropogenic contributions than is the coarse particle mode.

Figure 3. Volume size distributions of four background model aerosols. Note that except for the clean background, the volume in the coarse particle mode is about the same.

The nuclei mode is an indicator of close (less than 1/2 hour transport time) sources of combustion aerosol except in those cases where photochemically produced nuclei may be observed in relatively clean air (e.g., small accumulation mode). Number concentrations of about 10⁵/cm³ of nuclei of size less than 0.01 µm have been observed in the Labadie plume during the summer of 1976. These nuclei are apparently due to homogeneous reactions in the plume. The formation rate of new nuclei in a coal fired power plant plume is only about 3.5 nuclei per cm³·sec. These nuclei contained an insignificant amount of mass compared to the mass that condenses directly on the particles in the accumulation mode during the aerosol growth in the same plume.

4. General Mode Characteristics of the Physical Size Distribution

From modal characterization of the variety of aerosol size distributions, the following general conclusions have been reached:

1. Nuclei mode. For very fresh combustion aerosols from clean combustion (e.g., alongside of a freeway), the geometric mean diameter by number (DGN) is about 0.01 µm. For more aged aerosols, DGN may approach 0.02 µm. The geometric standard deviation (SG) is usually between 1.5 and 1.7. Except for well aged aerosols (e.g., away from sources on the earth's surface or well above the earth's surface), the nuclei mode accounts for most of the aerosol number and hence the Aitken nuclei count.

Figure 4. Volume size distributions of two model urban aerosol distributions and a typical size distribution measured in the plume of the Labadie coal fired power plant on 8-14-1974. The power plant plume has only the accumulation mode.

2. Accumulation mode. Average geometric standard deviation by volume (SGV) = 2.0 and average geometric mean diameter by volume (DGV) = 0.34 µm. Also, aged aerosols have a somewhat greater SGV than fresh aerosols, the range being from about 1.8 for fresh aerosols to 2.2 for well aged aerosols.
In well aged aerosols the nuclei mode disappears into the accumulation mode by coagulation, and then the Aitken nuclei count becomes equal to the number of particles in the accumulation mode within the experimental error of the measurements (e.g., about ±30%). This situation appears to be the normal condition at altitudes greater than about 200 m and at the surface more than 50 km from sources of combustion aerosol.

3. Coarse particle (CP) mode. The average geometric standard deviation by volume of the coarse particle mode (SGV) = 2.3, and the average geometric mean diameter by volume (DGV) = 5 µm. The log-normal distribution parameters are much more variable for the coarse particle mode than for the accumulation and nuclei modes, values of DGV having been observed between 3.5 and 25 µm. The mass concentration in the CP mode is also quite variable, from a few to several hundred µg/m³. An examination of the relationship between the volume of coarse particles, VCP, and the geometric mean size of the coarse particle mode, DGVCP, shows that DGVCP is nearly constant and equal to about 5 µm up to VCP = 30 µm³/cm³. Above this value of VCP, DGVCP increases linearly with VCP.

5. Chemical Size Distributions

Improved devices for the size classification of atmospheric aerosols have produced many new insights into the distribution of the chemical elements with respect to particle size in recent years. Table 3 shows some recent data obtained by Dzubay and Stevens in St. Louis, using the virtual dichotomous sampler which separates the fine and coarse particles at 2 µm. From the table it will be noted that the elemental distribution is about the same for the urban and rural site, indicating that the sources of the aerosol are essentially the same in both cases. Consistent with their origin by condensation, only S and Pb are found predominantly in the fine particle range.

Table 3. Comparison of the 18-day-average percentage composition at an urban and rural site in St. Louis, measured between August 18 and September 7, 1975, by T. G. Dzubay and R. K. Stevens of EPA.

Site 106 (Urban)
Element   Fine (29 µg/m³)   Coarse (22 µg/m³)   Fine/Coarse
S             13%               1.3%               10
Pb             2                0.6                 3.3
Ti             1                2                   0.5
K              0.5              1.1                 0.45
Fe             1                5                   0.2
Si             1                8                   0.13
Ca             0.5              8                   0.06

Site 124 (Rural)
Element   Fine (29 µg/m³)   Coarse (15 µg/m³)   Fine/Coarse
S             12%               0.9                13.3
Pb             0.5             <0.1                 5.0
Ti            <0.1              0.3                 0.33
K              0.3              0.9                 0.33
Fe             0.3              1.2                 0.25
Si             0.5              4                   0.13
Ca             0.4              4                   0.10

Work has begun in our group to apply the trimodal log-normal models to the distributions of chemical elements measured during the ACHEX project in California and to other good data. The early results from this modeling effort suggest that, within the accuracy of the data, elements like Pb, S, and Br, which one would expect to end up in the accumulation mode, can be fitted by a log-normal distribution having the same parameters as the physical accumulation mode distribution. If this is indeed true, it supports the concept proposed by Junge that the fine particles are usually mixed at distances beyond a few km from significant fine particle sources.

Other elements such as Al, Fe, Si, and Ca, which come from the soil, are found in the coarse particle mode. There are a few elements (for example, Cl) which are found in both modes and have a bimodal distribution very similar to the physical distribution. The evidence suggests that for Los Angeles at least, this is because the coarse particle Cl comes from the sea and the fine particle Cl from auto emissions.

6.
Liquid Content of Atmospheric Aerosols Numerous recent studies of atmospheric aerosols have shown that in their normal state, at normal humidities, the fine particles contain an appreciable fraction of water or other volatile liquid. Typically the fine particles have 20% water by weight at 60% relative humidity (R.H.) and 50% water at 75% R.H. The water content usually does not approach zero until the R.H. decreases below 35%. The significant water content at normal atmospheric humidities has an important bearing on aerosol measuring and sampling procedures. Among the implications are: Changes in ambient humidity will cause changes in aerosol size and mass concentration. ° Changes in the ambient humidity to which collected samples are exposed will change the mass on the surface. ° Aerosols at humidities at which they are in the liquid state will not bounce on im- pactor stages. However, at humidities below about 25%, they may bounce. The coarse particles are usually dry and may bounce off the coarse particle stages of a cascade impactor, only to be collected on the finer stages where the more liquid fine particles have provided a sticky coating. This effect can cause serious distortions in the measured physical and chemical size distribution. ° Temperature changes inside of sampling lines or instruments can cause a significant change in the aerosol volume concentration measured. ° If the usually dry coarse particles are collected on the same surface as the wet fine particles, as in the Hi-Vol TSP measurement, the fine particles may react with the coarse particles to change the chemical composition. For this reason, chemical species measurements on TSP samples are questionable. 7. The Influence of Aerosol Characteristics on Measurement Techniques In the previous sections, the influence of such aerosol characteristics as water con- tent on the suitability of measurement techniques has already been mentioned. However, there are other physical properties which should be considered. Some of these are briefly mentioned below. The fact that the number distribution of aerosols is described approximately by dN/dlog D = constant x D p ~ 4 means that the number concentration decreases very rapidly with increasing size. When designing and operating continuous particle counting instruments, it is very important to take this into account. For example, in the case of a single particle optical counter, the usable counting range is ordinarily limited to one decade of size by the conflicting requirements of only one particle in the view volume versus adequate count statistics at the coarse end for a given design. ° Real aerosols have variable indices of refraction, and the coarse particles are often irregular in shape. The sophisticated optical techniques that depend on the aerosol being quite ideal are likely not to work \/ery well on real aerosols. ° The great range of size of atmospheric aerosols from about 0.003 to 100 pm requires that systems of instruments be used to cover the entire range. Better results can be obtained with three less ideal instruments covering overlapping but adjacent ranges than can be obtained with one expensive and elegant instrument that can only cover one decade of size. Because instrument systems, complicate data analysis, data analysis should be considered before instrument systems are designed. Also, the great size range of atmospheric aerosols means that useful data can be obtained with instruments which have only fair size resolution. 
° The perishability of atmospheric aerosols means that they must often be analyzed where they are sampled. The ability to take aerosol instrument systems to the sampling site along with a lot of other equipment is often more important than the extraordinary virtues of any one piece of equipment.

This research was supported by U.S. Environmental Protection Agency Research Grant No. 803851-011. The financial support of the agency is gratefully acknowledged.

References

[1] Whitby, K. T., "Modeling of Multi-modal Aerosol Distributions", presented in the transactions of the GAF meeting, Bad Soden, Germany, October 1974.

[2] Whitby, K. T. and Cantrell, B. K., "Size Distribution and Concentration of Atmospheric Aerosols", presented at the 82nd AIChE meeting, Atlantic City, N.J., August 1976.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

MEASUREMENT OF AEROSOL SIZE DISTRIBUTION WITH A PARTICLE DOPPLER SHIFT SPECTROMETER

Ilan Chabay
Analytical Chemistry Division
National Bureau of Standards
Washington, DC 20234

1. Introduction

A system has been developed at NBS which is particularly suited for testing the size distribution produced by aerosol generators and for calibrating the measurements of other particle size instruments. The technique involved is particle Doppler shift spectroscopy (PDSS) [1]¹. It has the virtues of being rapid, applicable to liquid or solid particles dispersed in a gas, usable over the size range 0.5 to 100 µm radius, operable at any number density up to about 10⁵ particles/cm³, and nondestructive; it does not disturb the aerosol or its size distribution and is inherently accurate and internally calibrating.

2. Discussion

The instrument measures size and size distribution by determining the slip-corrected Stokes law settling velocity and the intensity of light scattered from each falling particle. Doppler shift spectroscopy, that is, the measurement of the amplitudes and frequencies of signals from moving sources, is used to determine the falling velocity of the particles. Light scattered out of a horizontally propagating laser beam is collected by apertures and a photodetector at a small forward angle in the vertical plane. The radiation field, which has been Doppler shifted by the motion of the scatterers (the aerosol), interferes with elastically scattered radiation from stationary sources to produce beat notes in the output photocurrent at the different frequencies. A given beat frequency implies a specific velocity, which in turn implies a particle size. The relative amplitudes of the beat frequency components, after normalization by the calculated relative Mie scattering intensities and the relative residence times spent by the particles in the observation volume, imply the relative number of particles of each size. The basic scheme of measurement is indicated in figure 1.

In figure 1, v is the velocity of the particle, which is given by Stokes law as v(r) = (2ρg/9η)r², plus the slip correction factor, where ρ is the particle density, g is the gravitational acceleration, η is the viscosity of air, and r is the particle radius. The scattering vector K is defined in the drawing and is a function of the incident wave vector k₀ and the scattering angle. The frequency shift is given by Δν = (1/2π) v · K.

¹Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Scheme of measurement of aerosol size distribution with a particle Doppler shift spectrometer.
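The two relations just given fix the connection between particle size and beat frequency. The sketch below strings them together for a small forward scattering angle, taking |K| = 2 k0 sin(theta/2) and the settling velocity parallel to K, and omitting the slip correction; those simplifications, and the numerical values of density, wavelength and angle, are assumptions for illustration only.

```python
import math

def settling_velocity(radius_m, density_kg_m3, g=9.81, air_viscosity=1.81e-5):
    """Stokes-law settling velocity v = (2*rho*g/(9*eta)) * r**2 (slip correction omitted)."""
    return 2.0 * density_kg_m3 * g * radius_m ** 2 / (9.0 * air_viscosity)

def beat_frequency_hz(radius_m, density_kg_m3, wavelength_m, theta_rad):
    """Doppler beat frequency dnu = (1/2pi) v.K, with |K| = 2*k0*sin(theta/2)."""
    k0 = 2.0 * math.pi / wavelength_m
    K = 2.0 * k0 * math.sin(theta_rad / 2.0)
    return settling_velocity(radius_m, density_kg_m3) * K / (2.0 * math.pi)

# Illustrative numbers: a 2 um radius oil droplet, 488 nm light, 5 degree angle.
print(beat_frequency_hz(2.0e-6, 980.0, 488e-9, math.radians(5.0)))   # roughly 80-90 Hz
```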
Spherical particles of known index of refraction and density are produced by a Berglund-Liu vibrating orifice particle generator [2]. The particles are introduced at the top of a 1.5 m column above the scattering chamber. The column is filled with aerosol from the generator, then closed off by means of air-driven piston valves. As the particles then settle through the beam, the scattered light is observed and data are gathered. For the most part, dioctylphthalate (DOP, an oil) was used in these experiments. Recently some work has also been done on As₂O₃ particles.

These particles serve several purposes. The actual distribution of particles produced by the Berglund-Liu generator under various conditions has been examined; results of characterizing the degree of monodispersity of the generator will be discussed. The use of the particles to calibrate other instruments after passing through the PDSS will also be noted.

A polydisperse aerosol can be produced intentionally by the Berglund-Liu device. This type of output has been used to provide calibration of the PDSS instrument. Comparison of the photocurrent power spectrum of a wide distribution of particles with the Mie scattering function as calculated from first principles allows determination of the absolute size and of the resolution measured by the PDSS. The characteristic pattern of minima and maxima in the calculated Mie function and in the measured size spectrum provides a clear means of determining the presence of convective broadening and of measuring any existing slow, uniform flow rate in the chamber. The sharpness of the extrema in the measured spectra indicates the resolution (at present, it is less than 0.1 µm, with comparable accuracy). A uniform shift in the positions of the measured extrema relative to the calculated values is proportional to the velocity of uniform flow of air through the scattering volume. It is possible to design the scattering chamber so that there is essentially no convective flow.

This technique is currently designed for use with spherical particles of known, uniform index of refraction and density. It is possible to adapt the calculations and measurements to study fibers of known index and density.

Two commercial instruments, the Royco 225 with 508 plug-in unit and the Particle Measurement Systems model ASSP 100, were tested using the aerosol generator and the PDSS instrument. Aerosol from the generator was passed through the PDSS instrument, measured, then allowed to flow on directly to the Royco or PMS. This ensured that virtually the same distribution was being measured by each instrument. These preliminary tests indicated that the readings for the Royco and PMS were consistent with PDSS data, though the resolution and range of sizes measured with the commercial instruments were limited due to their inherent lower resolution compared to the PDSS.

An additional corroborative technique that is particularly useful in determining the size of relatively monodisperse aerosols of less than 5 µm radius will also be discussed. This technique consists of using the angular pattern of maxima and minima of light intensity in the 1.5 m column of flowing aerosol. The Mie pattern is quite discernible by eye and traceable by photographic microdensitometer techniques. This provides an additional check on the size calibration of the PDSS.

3. Conclusion

The PDSS instrument built at NBS has several important features.
The method of size measurement can be calibrated from basic light scattering theory in two complementary ways: one using the angular distribution of intensity of a monodisperse aerosol, the other using 177 the size variation of intensity at a single angle. The measurement of size distribution then is internally calibrated with respect to size. Measurement is rapid, independent of number density (up to a limit imposed by multiple scattering events), independent of as- sumptions regarding the width or modality of the size distribution, and capable of producing good resolution with either liquid or solid aerosols. The primary function of this instru- ment is as the basic reference and calibrating system for other aerosol sizing devices. The PDSS instrument also can be used to study the coagulation, evaporation, condensation, and coalescence of aerosols. References [1] Gollub, J. P., Chabay, I., and Flygare, W.H., Appl. Optics U, 2838-42 (1973). [2] Berglund, R.N. and Liu, B.Y.H., Env. Sci. and Tech. 7, 147-152 (1973). 178 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). INSTRUMENTAL ANALYSIS OF LIGHT ELEMENT COMPOSITION OF ATMOSPHERIC AEROSOLS Edward S. Macias Department of Chemistry Washington University St. Louis, Missouri 63130, USA 1. Introduction The monitoring and control of atmospheric aerosols has received national attention and concern in the past few years due to the adverse effects of these airborne particles on visibility, climate, and health. Techniques for the study of the elemental abundances and chemical composition of atmospheric aerosols are of prime importance in order to understand the complex processes leading to aerosol formation in the atmosphere. No single technique is presently available which provides a complete analysis of a particulate sample. Ele- mental, analysis using neutron activation analysis [I] 1 and X-ray fluorescence [2] are excellent methods for analysis of elements heavier than sodium. However, except for sulfur, these heavy elements are present only in trace quantities in typical samples of atmospheric fine particles (dia. <3.5 ym) [3,4]. Four light elements, carbon, nitrogen, oxygen, and sulfur, account for most of the fine particle mass. Convenient, fast and nondestructive techniques to measure these abundant light elements are not well developed. However, such techniques are essential to determine the mass balance of atmospheric fine particles and to understand the chemistry of these particles. In this paper a new nondestructive technique for the simultaneous determination of several light elements including carbon, nitrogen, and sulfur is described. The composition of the sample is determined by analyzing gamma rays emitted from low-lying nuclear excited states of the stable nuclei of those elements excited by inelastic scattering of protons impinging on the target. This method is rapid (5-10 min per determination), accurate, and sensitive enough for the determination of elemental concentrations in ambient aerosols with 1-4 h sampling. Filters on which aerosol samples have been collected are irradiated without pre-treatment thereby avoiding errors introduced by sample dissolution and subse- quent chemical analysis. Thus the method measures the total elemental abundance in a sample not just the water soluble fraction. 
The gamma rays emitted from carbon, nitrogen, and sulfur are all above 2 MeV (half thickness >16 g cm⁻²); therefore no sample absorption corrections are necessary. It is also not necessary to correct for the absorption of protons in the sample, for several reasons. First, a 7 MeV proton beam loses less than 150 keV on passing through the filter, and the cross section for the proton inelastic scattering reaction on the nuclides of interest does not change appreciably over this energy range. Secondly, calibration standards are prepared in a manner identical to the atmospheric aerosol samples, which negates the need for a correction. Finally, the deposit lies primarily on the front of the filter and therefore is even less affected by a slight proton energy loss.

¹Figures in brackets indicate the literature references at the end of this paper.

2. Experimental Methods

Samples of atmospheric aerosols were collected on a Pallflex² glass fiber filter with thin cellulose backing using an automated two-stage sequential filter sampler (TWOMASS) [5,6]. The filter medium was chosen for its high collection efficiency, low mass density, and low carbon, nitrogen, and sulfur content. The mass density of this filter is 2.5 mg/cm². The flowrate through the sampler is nominally 300 cm³/s. The total mass of the aerosol deposit was determined with an on-line beta attenuation mass monitor [5]. Filter samples were irradiated without the cellulose backing in the external beam facility of the Washington University 135-cm sector focused cyclotron. A 7 MeV proton beam was used for carbon, sulfur, and nitrogen analysis as a compromise between large inelastic scattering cross sections to the first excited states of ¹²C, ¹⁴N, and ³²S and small cross sections to the first excited state of ¹⁶O. Analysis of oxygen can be carried out with a higher energy proton beam in a separate bombardment, because the Compton scattering from the 6132 keV gamma ray of ¹⁶O decreases the precision of the analysis of the other elements.

Each sample, mounted in a standard 35 mm film slide holder, was irradiated with a collimated 7 MeV proton beam in a chamber maintained at 1 atmosphere of helium as shown in fig. 1. The irradiation chamber was built around a modified commercial 35 mm slide projector to automate the sample changing. The identity of the sample being irradiated was monitored with a closed circuit television camera. A 0.003 mm thick Havar foil served as a vacuum seal between the scattering chamber and the cyclotron beam tube. The samples were typically irradiated with an 80 nA beam for 10 min. The beam current was determined with a digital current integrator which measures the total charge collected on the Faraday cup.

In general, inelastic proton scattering excites a nucleus to its lowest lying excited states, which decay by the emission of gamma rays. The resulting in-beam gamma-ray spectrum includes at least one gamma ray from each element of interest, as shown in fig. 2. Gamma-ray energies for some of the elements which can be analyzed by GRALE are given in Table 1.

²Pallflex Corp., Putnam, Conn.

Figure 1. Schematic diagram of the sample irradiation chamber.
Figure 2. In-beam gamma-ray spectrum from 7 MeV proton bombardment. The upper curve is from a 123 μg atmospheric aerosol deposit on a glass fiber filter. The lower curve is from a blank filter.

Figure 3. Carbon, nitrogen and sulfur calibration curves for the GRALE technique using a known mass of methionine aerosol deposited on a filter.

Figure 4. Concentration of carbon, nitrogen and sulfur and total mass of fine particle ambient aerosol plotted as a function of time. The samples were collected in 4 hour intervals in St. Louis, Missouri, August, 1976.

Figure 5. Particulate carbon, nitrogen, sulfur and oxygen (from sulfate) concentrations from the data in fig. 4, plotted as a percentage of the total fine particle mass.

Table 1. Properties of nuclides observable with GRALE

Element      Atomic Number   Target Isotope   Isotopic Abundance (%)   Eγ (keV)
Carbon             6             ¹²C               98.89              4430
Nitrogen           7             ¹⁴N               99.63              2311
Oxygen             8             ¹⁶O               99.76              6131
Fluorine           9             ¹⁹F              100                  110, 197
Neon              10             ²⁰Ne              90.92              1630
Sodium            11             ²³Na             100                  439
Magnesium         12             ²⁴Mg              78.70              1369
Aluminum          13             ²⁷Al             100                  843, 1013, 2209
Silicon           14             ²⁸Si              92.21              1780
Phosphorus        15             ³¹P              100                 1266
Sulfur            16             ³²S               95.0               2237
Chlorine          17             ³⁵Cl              75.53              1220, 1763

Data obtained from Lederer, C. M., Hollander, J. M., Perlman, I., Table of Isotopes, 6th ed., John Wiley and Sons, N.Y. (1967).

Gamma rays produced in the proton bombardment were detected with a 60-cm³ lithium-drifted germanium Ge(Li) detector (11.75 percent efficient relative to a 3" x 3" NaI(Tl) detector for 1332 keV gamma rays), with energy resolution of 2.5 keV full width at half maximum for 1332 keV gamma rays. A large volume detector was used to maximize the efficiency for the 4430 and 6131 keV gamma rays of ¹²C and ¹⁶O, respectively. Maximum energy resolution is not essential because the prompt gamma rays from ¹²C, ³²S, and ¹⁴N are Doppler broadened. The output of the Ge(Li) detector was sent to a high-resolution, fast rise time preamp and linear amplifier which are able to process high count rates (~10,000 counts/sec) without appreciable degradation of energy resolution. The amplified signals were sent to a 13-bit 200-MHz analog to digital converter (Tracor Northern). Digital information was stored and processed in a PDP-11 mini-computer with a 28000-word memory (Digital Equipment). The spectra were analyzed immediately after each irradiation with the on-line computer. The intensity of each peak is determined from the integrated peak area after subtraction of background. The peak intensity was corrected for system dead time losses (typically 20 percent) determined from the area of a pulser peak in the spectrum produced from a 60 Hz tail pulse generator inputted at the preamp. These data were normalized to the proton beam intensity determined from the integrated current measured in the Faraday cup.
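To make the correction chain just described concrete, the short Python sketch below applies a pulser-based dead-time correction and then normalizes the result to the integrated beam charge. The function name, argument names, and numerical values are illustrative assumptions only, not part of the GRALE analysis software.

```python
def normalize_peak_area(peak_counts, pulser_counts, pulser_input_counts,
                        integrated_charge_uC):
    """Dead-time-correct a background-subtracted peak area and normalize
    it to the proton beam charge collected in the Faraday cup.

    peak_counts          -- net counts in the gamma-ray peak
    pulser_counts        -- counts recorded in the pulser peak
    pulser_input_counts  -- pulses actually injected (60 Hz x run time)
    integrated_charge_uC -- beam charge from the current integrator (microcoulombs)
    """
    # Fraction of injected pulser events that survived processing; its inverse
    # corrects for system dead-time losses (typically ~20 percent).
    live_fraction = pulser_counts / pulser_input_counts
    corrected = peak_counts / live_fraction
    # Express the result per unit of integrated beam charge so that samples,
    # blanks, and standards run at different currents are directly comparable.
    return corrected / integrated_charge_uC

# Hypothetical 10-min run at 80 nA (48 uC): 29,000 of 36,000 pulser events recorded.
print(normalize_peak_area(1.2e5, 29_000, 36_000, 48.0))
```

Dividing a sample's normalized peak area by that of a methionine standard then gives the elemental mass directly, as described in the calibration section that follows.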
The normalized peak intensities of the filter blanks were subtracted from the atmospheric aerosol results. The conversion of peak intensity into mass was carried out using standard methionine aerosols as described below. Filter blanks, atmospheric aerosol samples, and methionine aerosol standards were run under nearly identical conditions, which yielded nearly equal detector count rates. All samples were analyzed in the same way.

3. Calibration

The identification of a given element in a sample was made on the basis of peak energy, determined for the large peaks by the use of external standards before and after each run. The large known peaks were then used as internal standards for energy determination of the smaller peaks. The amount of a given element in a sample was determined from the ratio of the peak area in the sample to the area of the corresponding peak in a standard sample of methionine aerosol (C₅H₁₁NO₂S) deposited on the same type of filter. The mass of methionine deposited on the filter was determined using a beta attenuation mass monitor [5,6]. A detailed methionine calibration was carried out over a period of several days. The results of the GRALE analysis of carbon, nitrogen, and sulfur in standard methionine aerosol samples are shown in fig. 3. The linear correlation coefficient for all determinations is quite good (r > 0.9). The method was cross-checked by analyzing several atmospheric samples by GRALE followed by flash vaporization-flame photometric sulfur analysis [7] of the same sample. The results of the two techniques agree within the uncertainties of the methods. For the analysis of atmospheric aerosol samples, only a few standard methionine samples need be run in order to verify that the system is operating correctly. These data must be corrected for the relative solid angle subtended by the detector, determined by counting long-lived standard radioactive sources in the sample position of the irradiation chamber.

With the calibration curve given in fig. 3, the results are accurate to about ±15 percent. However, much of the inaccuracy is due to the normalization technique. It is anticipated that much better accuracy will be possible by normalizing to an internal standard. Experiments are in progress to perfect this normalization technique.

4. Field Measurements

A study of the ambient aerosol sulfur, carbon, and nitrogen content was carried out in St. Louis in August, 1976. Samples were collected in 4 hour intervals in two size fractions with a two-stage sampler (TWOMASS), but only the fine particle fraction (diam. <3.5 μm) was subsequently analyzed by the GRALE technique. The total mass of these samples was determined using a beta attenuation mass monitor during collection [5]. The results of this analysis for a 4 day period are shown in fig. 4. These data show that the concentrations of these elements vary substantially on a time scale of a few hours.

In fig. 5, the data are plotted as a percentage of the total aerosol mass. The oxygen concentration from sulfate aerosol only has been estimated by assuming that all of the particulate sulfur is in the form of sulfate. These data indicate that carbon, nitrogen, sulfur, and oxygen from sulfate constitute a substantial fraction of the total aerosol mass. The average percentage of each element for the entire 4 day period was found to be: carbon 19 percent, sulfur 10 percent, nitrogen 9 percent, and oxygen from sulfate 20 percent.
Taken together, these elements account for a total of 58 percent of the fine particle mass. The total mass was determined during sample collection and therefore includes a substantial amount of water. It should be expected, therefore, that water accounts for much of the remaining mass. It is interesting to note that the ratio of sulfur to nitrogen is equal to the stoichiometric ratio of these elements in ammonium sulfate.

5. Conclusions

It has been shown that the GRALE technique is a powerful nondestructive technique for analysis of abundant light elements in atmospheric aerosols. The technique requires no sample preparation and is sensitive enough for 1-4 hour sampling intervals. The main cause of error is the uncertainty in the normalization of the beam current. It is expected that the use of an internal standard will remove much of this uncertainty. Another source of error is the uncertainty in the weight of the calibration standards. It has also been shown that this technique can be used to determine the elemental concentrations constituting most of the fine particle mass.

The author gratefully acknowledges many helpful discussions regarding the project with Professors Rudolph B. Husar and Demetrios G. Sarantites. The assistance of Robert Fletcher, Thomas Fulbright, Roland Head, H.-C. Hseuh, Janja Husar, Charles Lewis, David Radcliffe, and Bernard Smith during various phases of this work has been invaluable. This work has been supported in part by the U.S. Environmental Protection Agency Field Methods Development Section under grant R803115.

References

[1] Zoller, W. H., and Gordon, G. E., Anal. Chem. 42, 257 (1970).
[2] Johansson, T. B., Van Grieken, R. E., Nelson, J. W., and Winchester, J. W., Anal. Chem. 47, 855 (1975).
[3] Dzubay, T. G., Stevens, R. K., Environ. Sci. Technol. 9, 663 (1975).
[4] Flocchini, R. G., Cahill, T. A., Shadoan, D. J., Lange, S. J., Eldred, R. A., Feeney, P. J., Wolfe, G. W., Simmeroth, D. C., Suder, J. K., Environ. Sci. Technol. 10, 76 (1976).
[5] Macias, E. S., and Husar, R. B., Environ. Sci. Technol. (in press).
[6] Macias, E. S., and Husar, R. B., Fine Particles, B. Y. H. Liu, Ed., Academic Press, N.Y., p. 535 (1976).
[7] Husar, J. D., Husar, R. B., Stubits, P. K., Anal. Chem. 47, 2060 (1975).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

XRD ANALYSIS OF AIRBORNE ASBESTOS: PREPARATION OF CALIBRATION STANDARDS¹

M. Fatemi, E. Johnson, L. Birks, J. Gilfrich and R. Whitlock
Naval Research Laboratory
Washington, DC 20375, USA

1. Introduction

The use of a novel x-ray diffraction technique coupled with electrostatic fiber alignment has been demonstrated to have considerable promise for routine measurement of the concentration of airborne asbestos collected on membrane filters [1]². The aligned fibers are mounted in a thin nitrocellulose film and are measured in transmission geometry using a broad collimated x-ray beam. By measuring the x-ray intensity with the fibers oriented normal and parallel to the plane of the incident and diffracted x-ray beams, the asbestos signal can be corrected for background due to both scattering and diffraction by other non-aligned materials in the sample.
Using a modified commercial x-ray spectrometer with a Cr-target spectrographic tube operated at 1100 watts, a 500 second 3σ detection limit³ of 0.15 μg of pure chrysotile asbestos has been achieved.

2. Discussion

The x-ray measurement of the aligned chrysotile fibers is very straightforward. However, the alignment of the fibers is affected by several physical and chemical parameters. Furthermore, unlike pure standards, which are aligned aliquots from large, preweighed quantities, the unknown field samples contain only a small quantity of asbestos, all of which must be processed and aligned. This extension of the method from large known standards to small unknowns requires a few modifications in the original sample preparation and, therefore, makes the procedure more critical. Effects of many suspected parameters on the alignment of small samples have been investigated. The most critical are found to be the following:

1. Chemical composition of the alignment medium. The alignment medium is established as a solution of parlodion in distilled amyl acetate. The suitable concentration of parlodion in amyl acetate has been determined to be between .001% and .002% w/v (10 and 20 ppm). The primary function of parlodion is fiber dispersion, but once the parlodion concentration is increased beyond .002%, good alignment is not always possible (although on occasion it has been observed).

¹Supported by the Environmental Protection Agency under Interagency Agreement EPA-IAG-D5-0651.
²Figures in brackets indicate the literature references at the end of this paper.
³The limit of detection C_L in micrograms is calculated from the formula C_L = 3√(N_B T)/(S T), where S is the sensitivity in counts per second per microgram, N_B is the background in counts per second and T denotes the counting time. This definition conforms to the recommendation by IUPAC.

2. Filter pore size and fiber retention. The effect of filter pore size on fiber retention has been found to be more striking than previously thought. In investigating this parameter, three pore sizes - 0.22 μm, 0.45 μm, and 0.8 μm - were used. Sample batches were sonicated in water-Aerosol OT solution for 40, 60, and 120 minutes respectively. In a typical case, the 60-minute sonicated samples showed an average sensitivity of 5.2 c/s per μg on the 0.22 μm filter, 4.0 c/s per μg on the 0.45 μm, and 2.5 c/s per μg on the 0.8 μm. The two-hour sonicated samples showed sensitivities of 4.2 c/s per μg on the 0.22 μm, 2.5 c/s per μg on the 0.45 μm and 1.3 c/s per μg on the 0.8 μm.

3. Radio Frequency ashing of small samples. Radio Frequency ashing of small samples is found to be the most critical aspect of sample preparation. It is necessary to contain the ashed samples in a small volume, such as a test tube, for dispersion in the alignment liquid. The extent of exposure to the Radio Frequency field varies from sample to sample depending on gas flow condition, Radio Frequency intensity, and vacuum level. This variation in exposure is not always easily controllable, and in some cases affects the amount of ashing residue from the filter itself. The greater the ashed residue from a given size filter, the worse the alignment, since the higher residue implies either poor, incomplete ashing or the transformation of the material into an unashable compound. The most striking example of this variation in the ashed residue is seen in the case of filters containing a large amount of asbestos (5-75 μg/cm²), which ash satisfactorily, and filters containing small amounts of asbestos (0-1 μg/cm²), which ash poorly.
The latter often shrink into a hardened mass which is difficult either to disperse in a liquid or to ash further. It appears that the asbestos fibers or other particles (say, from air collection) help support the filter membrane to permit better, more uniform exposure to the Radio Frequency field, which in turn reduces the ashed residue.

4. Relative humidity surrounding the alignment medium. The relative humidity surrounding the alignment medium has also been shown to be a significant factor. At room temperature (20-22 °C) the alignment is practically nonexistent at low humidities (20-25% RH). At higher humidities (35-45%), the alignment quality improves. For still higher humidities, alignment is possible but is accompanied by electrode corrosion. The optimum humidity range is therefore defined as 40-45% at 20-22 °C.

5. Contamination effects. Finally, it is important to note that the presence of soluble organic and inorganic matter in the alignment medium can greatly affect the fiber alignment. It is, therefore, essential that all rules of cleanliness be strictly adhered to, especially in the last stages of preparation, namely, prior to dispersing the clean, ashed samples in the alignment medium.

References

[1] Birks, L. S., Fatemi, M., Gilfrich, J. V., and Johnson, E. T., Quantitative Analysis of Airborne Asbestos by X-Ray Diffraction: Final Report on Feasibility Study, Environmental Protection Agency Report EPA-650/2-75-004, Jan. 1975; also NRL Report 7874, Feb. 28, 1975.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

RESPIRABLE AMBIENT AEROSOL MASS CONCENTRATION MEASUREMENT WITH A BATTERY-POWERED PIEZOBALANCE

Gilmore J. Sem
Thermo-Systems Inc.
2500 N. Cleveland Ave.
St. Paul, MN 55113

1. Introduction

In 1975, Sem and Tsurubayashi [1]¹ introduced a new, portable, battery-powered instrument, the piezobalance, for near-real-time measurements of respirable aerosol mass concentration. The instrument uses electrostatic precipitation to deposit respirable particles down to 0.01 μm diameter onto a piezoelectric microbalance mass sensor. Primary applications of the piezobalance include [2]: 1) industrial hygiene walk-through surveys to locate industrial areas requiring better smoke and dust control or requiring compliance measurements, 2) measurements in offices, stores, restaurants, arenas, and other public buildings to assist in adjustment of ventilation systems for control of tobacco smoke, and 3) outdoor measurements to characterize human exposure to respirable aerosol during activities such as walking down the street, riding an auto or subway train, barbecuing dinner in a backyard, or camping in a remote wilderness. This paper will 1) suggest a rationale for respirable or fine particle measurements of outdoor air, 2) review the design, operation, application and performance of the piezobalance, and 3) present some typical results.

2. Why Measure Outdoor Respirable or Fine Dust Concentrations?

Current Environmental Protection Agency regulations applicable to outdoor ambient air are based on "total" suspended dust concentration rather than "respirable" dust concentration. Outdoor areas are subject to a maximum yearly average of either 60 or 75 μg/m³. Daily average concentrations must not exceed 250 μg/m³ more than once per year.
The standard measurement instrument is the hi-vol sampler with a specifically prescribed design. Although no regulations currently exist, outdoor environmental scientists [3-8] often measure the "fine" or "submicron" particle fraction, containing primary and secondary combustion, condensation, and photochemical smog aerosols [3]. The larger, supermicron particles become airborne by different processes, usually involving the breakup of larger particles, such as wind-blown and automobile-generated dusts and fly ash. Recent data [3-8] strongly suggest that this is a basic, nearly universal characteristic of aged atmospheric aerosol. The particle size which divides the two types of particles is usually in the 1-2 μm range. The fundamental difference in the sources of the two aerosol fractions points toward a future regulation covering small particles, based either on the concentration of specific chemical species or the combined concentration of all species.

¹Figures in brackets indicate the literature references at the end of this paper.

Current Occupational Safety and Health Administration and Mine Enforcement and Safety Administration regulations apply to outdoor air only where employees are working, not areas frequented by the general public. The regulations are based on "respirable" dust concentration measured by a personal lapel sampler worn by the employee during his work shift. The definition and rationale of respirable dust, based on particle deposition in various parts of the human respiratory system, is discussed by Lippman [9] and others. Respirable particles are defined by the American Conference of Governmental Industrial Hygienists and the Atomic Energy Commission as particles which pass a sampling pre-cutter with 50 percent penetration at 3.5 μm aerodynamic equivalent diameter. Respirable particles penetrate deeply into the respiratory system and are thus capable of harming one's health. Since particles smaller than about 2 μm become airborne by processes much different from those which suspend larger particles [3], and since respirable particles pose the greatest potential hazard to human health, some form of measurement of these particles in the outdoor environment seems justified.

3. Why Use a Battery-Powered, Hand-carried Instrument?

Current governmental monitoring of total suspended particles in the U.S. is done at sets of stations covering each metropolitan area. Fixed-station monitoring determines changes in concentration within the urban cloud on a monthly or yearly time scale. However, such stations do not usually measure the particulate exposure of the people at street level, and the currently used hi-vol sampler can provide no information about exposures on a time scale of less than a day. The personal lapel sampler used by the Occupational Safety and Health Administration cannot be conveniently worn by people performing normal outdoor activities. Another problem of the lapel sampler for non-occupational sampling is that its accuracy at 10-100 μg/m³ levels, even for 5-10 hr samples, is not very good. With the above limitations, respirable or fine particle aerosol measurements on the street or in other public areas can best be done with a small, portable, battery-operated, real-time sensor capable of accurate measurements from 5-10,000 μg/m³. The piezobalance is a strong candidate for this measurement.

4. Description of the Piezobalance

Sem and Tsurubayashi [1] and Sem, et al. [2], recently described the piezobalance in considerable detail.
This section briefly reviews the major points of interest. Figure 1 is a photograph and schematic diagram of the piezobalance. A small pump draws 1 liter per min of aerosol into the instrument. Particles greater than 3.5 μm aerodynamic equivalent diameter are removed from the air stream by an impactor. Particles passing the impactor are carried to an electrostatic precipitator which deposits them onto a piezoelectric microbalance sensor oscillating at its natural frequency. The particles on the sensor cause the natural frequency to decrease by an amount proportional to the particulate mass. Each 10-sec frequency shift is displayed by the digital display. Each 2-min frequency shift is converted to units of mg/m³ and displayed digitally.

The piezobalance includes a simple wet-sponge mechanism for wiping the particles off the sensor when the accumulated deposit exceeds 5 μg. The cleaner operates by a manual turn of the cleaning knob. Cleaning sponges require rewetting every 1-2 days and replacement after 20-50 cleanings.

The sensor is a disk of crystalline, AT-cut quartz, about 13 mm in diameter and 0.2 mm thick. The crystal forces an electrical circuit to oscillate at a highly stable resonant frequency. The frequency decreases in direct proportion to the particulate mass added to and adhering to the sensor. The practical sensitivity is about 10⁻⁹ g.

FIGURE 1: PHOTOGRAPH AND AIR FLOW SCHEMATIC DIAGRAM OF THERMO-SYSTEMS MODEL 3500 PIEZOBALANCE RESPIRABLE AEROSOL MASS MONITOR. (Particles >3.5 μm deposit in the impactor; particles <3.5 μm deposit on the quartz crystal sensor in the corona-precipitation region.)

The shift in resonant frequency Δf (Hz) during sampling time Δt (sec) at constant sampling rate Q (m³/sec) is related to aerosol concentration C (μg/m³) by:

    C = Δf / (S Q Δt)    (1)

S is the mass concentration coefficient, expressed in Hz/μg, which accounts for crystal mass sensitivity, aerosol collection efficiency, and particle sensing efficiency. S is factory calibrated for each instrument with welding smoke and is nominally about 200 Hz/μg. Since the resonant frequency of the sensor is about 5 MHz, the measured frequency is reduced to a more convenient range by electronically beating the sensor frequency against a reference frequency 1-3 kHz higher than the sensor frequency. The resulting mixed frequency is the difference between the sensor and reference frequencies. Δf in equation 1 is the change in mixed frequency as particulate mass is added to the sensor.

For the 50-2000 μg/m³ respirable concentration range, the piezobalance uses a 2-min, direct-readout mode. When the operator presses the "measure" button, the pump draws Q = 1 liter/min (1.67 x 10⁻⁵ m³/sec) and the electrostatic precipitator is turned on. Sample time begins when the operator presses the "start" button. For concentrations below 2000 μg/m³, a 120-sec sample time is used; for 1-10 mg/m³, a 24-sec sample time. When sample time ends, mass concentration appears on the digital display in XX.XX mg/m³ format. Statistical accuracy of variable concentration measurements can be greatly improved by making 5 or more 2-min measurements at a single location.
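As a quick numerical illustration of equation (1), the sketch below converts a measured mixed-frequency shift into a mass concentration; the nominal S = 200 Hz/μg and Q = 1 L/min are taken from the text, while the frequency shift and sample time are hypothetical.

```python
def piezobalance_concentration(delta_f_hz, sample_time_s,
                               s_hz_per_ug=200.0, flow_lpm=1.0):
    """Aerosol mass concentration from eq (1): C = delta_f / (S * Q * delta_t).

    delta_f_hz    -- change in mixed frequency over the sample period (Hz)
    sample_time_s -- sample duration (s)
    s_hz_per_ug   -- mass concentration coefficient S (Hz per microgram)
    flow_lpm      -- sampling flow rate (liters per minute)
    """
    q_m3_per_s = flow_lpm / 1000.0 / 60.0   # 1 L/min = 1.67e-5 m^3/s
    return delta_f_hz / (s_hz_per_ug * q_m3_per_s * sample_time_s)  # ug/m^3

# A 2-min direct-readout sample showing a 40 Hz shift at nominal settings:
print(round(piezobalance_concentration(40.0, 120.0)))  # ~100 ug/m^3
```

The same relation underlies the low-concentration mode described next; only the way Δf is obtained differs.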
For the 5-100 μg/m³ range, an indirect readout mode is used to obtain the highest accuracy. A 120-second sample time is selected and, two seconds after pressing the "start" button, the mixed frequency displayed on the digital readout is recorded. The piezobalance is then operated in measure mode for about 15-30 min. Even though the piezobalance displays a concentration continuously after 2 min, it continues to precipitate particles onto the sensor. After the 15 to 30-min sample time, the "start" button is pressed again and the mixed frequency is again recorded after 2 seconds. Subtracting the first mixed frequency from the second gives Δf in equation 1.

The piezobalance is not a continuous monitor. It has no provision for recording data. It cannot sense particles above 10 μm, cannot measure concentrations above 20 mg/m³, and cannot measure aerosol streams above 40 °C.

5. Accuracy of Piezobalance Measurements

Sem, et al. [2], compared piezobalance and low-volume 47-mm filter measurements on several aerosols. The piezobalances had been previously calibrated using welding smoke. Arrangement of the test setup was found to be important for accurate, repeatable results. Figure 2 shows results for outdoor air, oil mist, and several combustion smokes. Nearly all piezobalance concentrations were within ±10 percent of the filter measurements. Although the operator should calibrate his piezobalance on his specific aerosol, reasonable results can be obtained without such calibration in cases where the respirable aerosol is primarily combustion- or condensation-generated. Most outdoor aerosol below 1-2 μm appears to fit this description [3-8].

The piezobalance measures aerosol at ambient humidity. Most filter sampling procedures call for drying filters before weighing. Agreement cannot be expected between piezobalance and low-volume measurements if filters are dried before weighing. Filters used must be highly hydrophobic, such as Millipore Fluoropore membrane filters. The piezobalance will not measure the correct concentration if relative humidity varies more than ±0.5 percent during a measurement - rarely a problem for 2-min measurements, but potentially serious for the 15 to 30-min low concentration mode.

FIGURE 2: COMPARISON OF PIEZOBALANCE MEASUREMENTS WITH LOW-VOLUME FILTER MEASUREMENTS ON SEVERAL AEROSOLS. THE PIEZOBALANCES WERE CALIBRATED EARLIER WITH WELDING SMOKE.

6. Outdoor Measurements

During the past year, outdoor aerosols were measured in rural and urban areas of Utah, Colorado, and Minnesota. Table 1 lists the results of such measurements. All concentrations were measured with the 5-100 μg/m³ data mode, Δt > 20 min, and S = 200 Hz/μg. Although no low-volume filter samples could be obtained within reasonable sample times at these low concentrations, the measured concentrations were consistent with each other, with volume concentrations and size distributions measured by Sverdrup, et al. [10], in the Mojave Desert in November, 1972, and with recent urban measurements. Recent rural Minnesota measurements suggest considerable variability in fine particle concentration over relatively short time periods, probably caused by local upwind combustion sources. Some 20-min rural concentrations approached urban street corner concentrations.

TABLE 1. Outdoor Respirable Aerosol Concentrations Measured in Minnesota, Colorado, and Utah. All Sample Periods Were 20 Minutes or Longer Unless Noted. (Columns: date, time, location, comments, mass concentration in μg/m³.)

-9 -75 11145 -9-75 15145 -10-75 13:45 -10-75 I4:i5 -10-75 17:30 -29-75 is: 10 -29-75 16:40 5- -29-76 15:50 19:05 19.25 19:45 \ !
20:05 5- -30-76 00:10 09:45 10:05 10:25 10:45 11:05 11:25 1 F 12:05 7 -4-76 19:15 19:35 19:55 20:15 20:35 \ r 21:20 7- -5-76 J 18:50 19:10 I9'.30 7- -6 -76 18:20 18:45 19:05 19:25 20:10 20:30 20:50 21:10 \ 1 21:30 8 -2 3-76 09:30 10:00 10:20 10:40 11:00 11:20 11:40 12:10 12:50 13:10 13:30 13:45 14:00 MONUMENT VALLEY, UT MONUMENT VALLEY, UT NEAR NATURITA CO NEAR NATURITA CO 100 Km S GRAND JUNCTION, CO 8 Km N UNDERWOOD, MN IN AUTO ON FREEWAY, ROSEVILLE -MINNEAPOLIS, MN DOWNTOWN MPLS, NE CORNER 7th-NIC0LLET, UPWIND OF BUS TRAFFIC DOWNTOWN MPLS, NW CORNER 7th-NIC0LLET, DOWNWIND OF BUS TRAFFIC IN AUTO ON FREEWAY, MINNEAPOLIS-ST PAUL, MN DOWNTOWN ST PAUL.MN, SW CORNER 7th-CEDAR DOWNTOWN ST PAUL.MN, IN PARKING RAMP IN AUTO ON FREEWAY ST PAUL-ROSEVILLE, MN CLEAR, IO°C, WIND W 3-8Km/h, DESERT CLEAR, IO°C, WIND W 3-8Km/h, DESERT CLOUDY, 5°C, WIND N 3 Km/h, SEMI-DESERT CLOUDY, 5°C, WIND N 3 Km/h, SEMI-DESERT P CLOUDY, 2°C, WIND S 0-30 Km/h, RANCH CL0UDY,-8°C, WIND NW 5Km/h, FARMYARD CL0UDY,-8°C, WIND NW 5Km/h, FARMY/ 1 j P CLOUDY, 20°C WIND SE 10-20 Km/h, LAKESHORE R CLOUDY, 20°C WIND SE 5-15 Km/h, FARMYARD P CLOUDY, 20°C, WIND CALM, FARMYARD CLOUDY, I7°C, WIND CALM, FARMYARD HAZY, 30°C, WIND E 0-3 Km/h, FARMYARD HAZY, 30°C, WIND SSE 10-20 Km/h, FARMYARD P CLOUDY, 27°C, WIND W 3-8 Km/h, FARMYARD (RAINED I cm, WIND W TO 50 Km/h AT 17:00) P CLOUDY, 25°C, WIND N 15-20 Km/h ACROSS LAKE CLEAR, 27°C, WIND SE 0-IOKm/h, URBAN FREEWAY CLEAR, 28°C CLEAR, 28°C CLEAR, 28°C CLEAR, 30°C CLEAR, 30°C CLEAR, 30°C CLEAR, 30°C CLEAR, 32°C CLEAR, 32°C CLEAR, 32°C CLEAR, 32°C CLEAR, 32°C URBAN STREET 5 5 13 14 8 13 13 17 30 31 29 31 25 22 18 22 21 18 15 15 II 8 7 6 18 20 7 8 7 3 7 3 5 4 2 2 3 6 31 20 URBAN STREET 22 URBAN STREET 25 URBAN STREET 41 URBAN STREET 33 URBAN STREET 39 URBAN FREEWAY 49 URBAN STREET 46 URBAN STREET 44 URBAN STREET 34 URBAN PARKING RAMP 90* URBAN FREEWAY 44 ^AVERAGE OF THREE 2 MINUTE SAMPLE PERIODS. 196 References [1] Sem, G. J. and Tsurubayashi , K., A New Mass Sensor for Respirable Dust Measurements, Am. Ind. Hyg. Assoc. J. 36:791 (1975). [2] Sem, G. J., Tsurubayashi, K., and Homma, K., Piezobalance Respirable Aerosol Sensor: Application and Performance, Presented at the Am. Ind. Hyg. Assoc. Annual Meeting, Atlanta, GA, May 17-21, 1976. [3] Hidy, G. M., et al. 3 Summary of the California Aerosol Characterization Experiment, J. Air Poll. Control Assoc. 25:1106 (1975). [4] Whitby, K. T. , Husar, R. B., and Liu, B. Y. H., The Aerosol Size Distribution of Los Angeles Smog, J. Colloid Interface Science 39:177 (1972). [5] Will eke, K., Whitby, K. T. , Clark, W. E., and Marple, V. A., Size Distributions of Denver Aerosol s--A Comparison of Two Sites, Atmos. Environ. 8:609 (1974). [61 Durham, J. L., et al. 3 Comparison of Volume and Mass Distribution for Denver Aerosols, Atmos. Environ. 9:717 (1975). [7] Lundgren, D. A. and Paulus, H. J., The Mass Distributions of Large Atmospheric Particles, J. Air Poll. Control Assoc. 25:1227 (1975). [8] Whitby, K. T., et al. 3 Aerosol Size Distributions and Concentrations Measured During the General Motors Proving Ground Sulfate Study, Report EPA-600/3-76-035 3 NTIS, Springfield, VA (April 1976). [9] Lippman, M., Respirable Dust Sampling, Am. Ind. Hyg. Assoc J. 31:138 (1970). [10] Sverdruo, G. M., Whitby, K. T., and Clark, W. E., Characterization of California Aerosols--II Aerosol Size Distribution Measurements in the Mojave Desert, Atmos. Environ. 9:483 (1975). 197 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. 
Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A CASCADE IMPACTION INSTRUMENT USING QUARTZ CRYSTAL MICROBALANCE SENSING ELEMENTS FOR "REAL-TIME" PARTICLE SIZE DISTRIBUTION STUDIES

D. Wallace
IBC/Berkeley¹
Irvine, California 92664 USA

and

R. Chuan
Defense Division/Brunswick Corporation²
Costa Mesa, California 92626 USA

1. Introduction

Sampling instruments using cascaded impaction type nozzles and collection plates for the determination of particle size distribution have been in use for many years and have established the validity of the classical impaction nozzle theory. In many applications, a serious limitation of this type of instrument is the long sampling time required to collect a sample of sufficient mass on the collection plate for accurate weighing with an analytical balance. Thus, phenomena with short time constants cannot be observed because of the long sampling times required; only integrated distributions are measured, without the fine structure of individual events.

A new instrument is available which uses the classical impaction nozzle in a cascaded series of stages but replaces the sample collection plates with individual quartz crystal microbalances (QCM), measuring in real time and capable of weight resolution in the nanogram range. With the sensitivity obtained with the QCM sensing crystals, sampling times are shortened to two minutes or less depending upon the sampled concentration; thus quasi-real-time measurements can be made while at the same time retaining the primary measurement advantage of the impaction nozzle technique, i.e., measurement based upon the mass and aerodynamic diameter of the particle.

2. Impaction Theory for Particle Sizing

Particle size discrimination is obtained when using impaction flow nozzles by a combination of inertial and viscous drag forces on the particles, as illustrated in figure 1. Depending upon the nozzle and plate geometry and the particle diameter and velocity on leaving the nozzle, the particle has a definite probability of striking the collection plate. The parameter characterizing the capture probability is the Stokes Number, which is given in figure 1 along with other pertinent nozzle flow parameters. Marple and Willeke [1]³ have established design criteria for the Stokes Number for 50 percent collection efficiency as a function of nozzle-to-plate spacing, Reynolds Number, etc., which facilitates cascade design. The controlling relationship between the 50 percent cutoff particle diameter D_p50 and the geometrical and flow parameters is

    D_p50 = [ (9 π μ K₅₀ N D_j³ / (4 ρ_p C Q_a)) (P_o/P_a) ]^(1/2)    (1)

where D_j is the nozzle diameter, N is the number of nozzles in a particular stage, K₅₀ is the Stokes Number for 50 percent collection efficiency, C is the Cunningham slip coefficient, μ is the gas viscosity, ρ_p is the particle density, Q_a is the volume flow through the unit at ambient pressure, and P_o/P_a is the ratio of the nozzle pressure to ambient pressure.

¹Formerly IBC/Celesco Industries Inc.
²Formerly MSD Division/Celesco Industries Inc.
³Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Impaction theory criteria.

To achieve the collection of low submicron particles, the design must include some combination of reduced nozzle diameters and high volume flow, with the implied high nozzle velocity, V_o, and attendant large pressure drop, ΔP.
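The short sketch below evaluates this cutoff relation numerically from the Stokes-number criterion of figure 1 (Stk₅₀ = ρ_p C D_p50² V_o / (9 μ D_j), with the jet velocity obtained from the stage flow at nozzle pressure). The stage dimensions, flow, and Stk₅₀ value used here are illustrative assumptions, not the actual design values of the instrument.

```python
import math

def cutoff_diameter_m(d_jet_m, n_jets, q_amb_m3_s, p_nozzle_over_p_amb,
                      stk50=0.24, rho_p=2000.0, mu=1.81e-5, slip=1.0):
    """50 percent cutoff diameter of one impactor stage from the Stokes-number
    criterion: Stk50 = rho_p * slip * d50**2 * v_jet / (9 * mu * d_jet).

    The volumetric flow expands as the pressure drops, so the jet velocity is
    computed from the flow referred to nozzle pressure (isothermal ideal gas).
    """
    q_at_nozzle = q_amb_m3_s / p_nozzle_over_p_amb
    v_jet = q_at_nozzle / (n_jets * math.pi * d_jet_m ** 2 / 4.0)
    return math.sqrt(9.0 * mu * d_jet_m * stk50 / (rho_p * slip * v_jet))

# Illustrative stage: ten 0.3 mm jets sharing 250 ml/min at 0.9 of ambient pressure.
d50 = cutoff_diameter_m(0.3e-3, 10, 250e-6 / 60.0, 0.9)
print(f"cutoff diameter ~ {d50 * 1e6:.2f} micrometers")
```

Shrinking the jet diameter or raising the flow pushes the cutoff downward, which is exactly the design trade-off against nozzle velocity and pressure drop noted above.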
The high particle velocity can lead to particle bounce and reduced collection efficiency, as reported by Rao [2], but can be essentially eliminated by the proper selection of an adhesive coating on the collection disk, as reported by Cahill [3]. A cascade impaction instrument is now available using these design criteria in a ten-stage arrangement, with calculated collection efficiency curves shown in figure 2. The 50 percent cutoff points cover the particle size range of 25 micrometers down to 0.05 micrometers.

Figure 2. Collection efficiency curves.

3. Quartz Crystal Microbalance as Weighing Technique

The use of the piezoelectric quartz crystal as a microbalance was first suggested by Sauerbrey [4], who formulated the sensitivity equations relating crystal frequency shift to mass addition. A later perturbation analysis by Stockbridge [5] correctly accounted for the addition of discrete mass increments and applies more directly to the use of the QCM in this particle collection application. The crystal sensitivity to mass for 10 MHz crystals is 713 Hz/μg. The ambient concentration of particles in the size range defined by the stage collection curve can be determined by measuring the frequency shift rate and the volume flow of air sampled:

    C = 1402.5 (Δf/Δt) / Q_a    (2)

where C is the ambient concentration in μg/m³ of a certain particle size, Δf/Δt is the frequency shift rate in Hz/min and Q_a is the volume flow of air sampled, measured at ambient conditions, in ml/min. The design volume flow is determined then by the minimum reliably measurable frequency shift rate and the minimum desired concentration. Frequency drift rates of one Hertz per five-minute interval are easily obtained. Thus, in design, one Hertz per minute was conservatively used for the minimum signal. Minimum concentrations per stage of 5 μg/m³ were chosen, leading to a volume flow of approximately 250 ml/min. Once this volume flow was chosen, the separate stage nozzles were designed for the desired particle sizes according to Eq. (1).
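Equation (2) is simply the crystal sensitivity combined with the sampled air volume: with Δf/Δt in Hz/min and Q_a in ml/min, the constant is 10⁶/713 ≈ 1402.5. A minimal sketch of that per-stage conversion, using hypothetical readings, is shown below.

```python
def stage_concentration_ug_m3(delta_f_hz, delta_t_min, flow_ml_min=250.0,
                              sensitivity_hz_per_ug=713.0):
    """Ambient concentration for one QCM stage, eq (2).

    Collected mass is delta_f / sensitivity (micrograms); dividing by the
    sampled volume (flow * time, ml converted to m^3) gives C in ug/m^3.
    In Hz/min and ml/min units this reduces to C = 1402.5 * (df/dt) / Q_a.
    """
    mass_ug = delta_f_hz / sensitivity_hz_per_ug
    volume_m3 = flow_ml_min * delta_t_min * 1e-6
    return mass_ug / volume_m3

# A 2-minute sample showing a 5 Hz shift at the design flow of 250 ml/min:
print(round(stage_concentration_ug_m3(5.0, 2.0)))   # ~14 ug/m^3
```

At the design minimum signal of 1 Hz/min, the same arithmetic gives roughly 5.6 μg/m³ per stage, consistent with the minimum concentration target quoted above.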
4. Verification of Stage Collection Efficiencies

Calibration of a cascade instrument is made difficult by the absence of absolute aerosol standards of accurately known size distributions. The aerosol generated for calibration purposes must itself be characterized as to concentration and size distribution by yet other instruments with their own particular built-in assumptions. This is particularly true of the low submicron range, which is below the range capability of commercial aerosol generators. The alternate approach of correlation with similar instruments is questionable since completely different collection techniques are used. The primary area of concern is particle collection efficiency, particularly in the submicron range where high nozzle flow velocities exist. Low capture efficiency would permit particles to bounce off the collection plate and carry over into the following stage. This would result in a broadening of the range of particle sizes collected in a stage and a greater uncertainty in the actual size distribution. Studies with a scanning electron microscope tend to verify the accuracy of the calculated collection efficiency curves shown in figure 2.

The SEM photos in figures 3 and 4 for stages 5, 6, 8 and 10 verify the cut points in these stages. Very few particles are observed to be larger or smaller than the 50 percent particle diameter dimension indicated on the photograph. Extensive agglomeration of particles occurs, with the low submicron particles forming an effective collecting surface for subsequent particles. As evidenced by the photo for stage 10, the mound of particles which couples to the vibrating crystal is orders of magnitude greater than a few particle monolayers.

5. The Instrument

In the instrument shown in figure 5, the particle size range of 25 micrometers to 0.05 micrometers is measured in ten cascade stages. Each separately removable stage is a complete unit with impaction nozzle, quartz crystal microbalance, and hybrid chip electronics. Stage electronic output is thus a frequency which is amplified by electronics in the base cabinet and can then be fed into a multiplexed printer or directly to a computer. A self-contained pump located in the base provides a constant volume flow through the stages.

Figure 3. Collected particle size verification by SEM photomicrographs (stages 5 and 6).

Figure 4. Collected particle size verification by SEM photomicrographs (stages 8 and 10).

Figure 5. Ten stage QCM cascade.

Figure 6. Vertical profile of aerosol size, Los Angeles Basin, October 16, 1975.

6. Field Data

The real-time measuring capability of the QCM Cascade makes possible transient measurements from airplanes or automobiles. Three studies exemplifying this technique were performed in 1975: (1) a vertical profile of aerosol size over the Los Angeles Basin was taken from a light plane; (2) a profile of particulate size produced by a major brush fire in the San Gabriel Mountains near Los Angeles was measured from an automobile; and (3) a light plane was used to measure power plant plume aerosol size distribution changes as a function of distance downstream from the stack.

Vertical profile of aerosol size over the Los Angeles Basin: The variation in aerosol size distribution and total concentration is shown in figure 6 as a function of altitude over the Los Angeles Basin on October 16, 1975. The measurements were taken from 50 feet to 1900 feet in altitude, with the well-defined bimodal aerosol size distribution shifting, as expected, to greater fractions of submicron particles at the higher altitudes.

Particulate size distribution produced by the San Gabriel Mountains brush fire: A brush fire in the San Gabriel Mountains of Southern California produced smoke which extended over portions of three counties in and around Los Angeles. Size distribution measurements were taken along the freeway system beginning outside the smoke area and traversing the entire smoke region into Orange County. The locations of the measurement points are indicated on the map in figure 7A, along with total mass concentrations and mass mean diameters. Lines of constant concentration are roughly indicated as interpolated from the measured data. The size distributions presented in figure 7B began on the western edge of the smoke with a significant amount of large particles caused by the high winds. Entering the heavy smoke area, a trimodal distribution was observed which was common throughout the smoke area.
Preliminary analyses indicated that the significant middle mode at 0.8 μm was responsible for light scattering giving a bluish color to car headlights.

Aerial measurement of particulate size distribution in the Four Corners Power Plant plume: Measurements of plume particulate size distribution as a function of downstream distance were taken from a light plane at the Four Corners Power Plant. These data were taken under the direction of Dr. Rudolf F. Pueschel of the National Oceanographic and Atmospheric Administration. The flight path and two of the measured distributions are shown in figure 8. The plume directly over the power plant has a relatively high MMD of 2.2 μm. Approximately 25 miles downstream, submicron sulfate aerosols have formed and the MMD has decreased to 0.45 μm.

Figure 7a. Measuring locations, total particulate concentrations and mass mean diameters during brush fire episode of November 24-25, 1975.

Figure 7b. Particulate size distribution during brush fire of November 24-25, 1975. (No. 2, western edge of smoke: C = 78 μg/m³, MMD = 2.4 μm. No. 6, nearing southeastern edge of smoke: C = 59 μg/m³, MMD = 1.2 μm. No. 3, San Fernando Valley: C = 185 μg/m³, MMD = 0.23 μm.)

Figure 8. Flight path and measured size distributions in the Four Corners Power Plant plume.

… NH₃, H₂O, NOₓ, numerous organic species, et al. [4]. As a result, the physical properties, i.e., mass and mobility, of an ion cluster are also sensitive to the atmospheric trace gases. Whatever the fate of an individual ion, a spectrum of ions of differing chemical and physical properties will be in evidence whenever a number of ions are simultaneously present. Consequently, there is no intrinsic meaning to the citation of a single ion species (or a physical property such as mobility) as characterizing the small ions under a particular set of conditions in the atmosphere.

¹This work was supported by the Division of Biomedical and Environmental Research, U.S. Energy Research and Development Administration, Washington, D.C.
²Figures in brackets indicate the literature references at the end of this paper.

The motion of an ion in the vicinity of an aerosol particle depends upon the physical characteristics of both the ion and the particle. For the sake of mathematical tractability, the particle is usually approximated as a sphere, an approximation which is well justified in many atmospheric circumstances. Two limiting, conceptually straightforward cases are relevant in treating aerosol charge acquisition. One is the macroscopic or collision-dominated case, wherein the ion mean free path, with respect to collision with the neutral gaseous molecules, is very small in comparison to the particle radius. Under such conditions, particle charge acquisition should be capable of accurate description in macroscopic terms, since the variables determining charging are accessible to macroscopic control. The other case is the free-molecular approximation, wherein the ion mean free path is considerably larger than the particle radius. When the ion mean free path is of the same order of magnitude as the particle radius, the ion is in the transition regime of kinetic theory [5].

The equations for bipolar polydisperse aerosol charging by polydisperse ions can be written as

    dP_jn^p/dt = P_j(n-1)^p Σ_i U_j(n-1)i^p I_i^p − P_jn^p Σ_i U_jni^p I_i^p
                 + P_j(n+1)^p Σ_i B_j(n+1)i^(-p) I_i^(-p) − P_jn^p Σ_i B_jni^(-p) I_i^(-p)
    + ion equations

where

    P_jn^p  = number density of particles in size-class j, carrying n charges of sign p
    I_i^p   = number density of small ion species i which is of polarity p
    U_jni^p = volumetric acquisition rate of ion species (i,p) by a j-sized, n-charged particle of polarity p
    B_jni^p = volumetric acquisition rate of ion species (i,p) by a j-sized, n-charged particle of polarity −p.

Properly speaking, P and I should be considered as random variables [6] with mean values as measured in the laboratory. In that case, the charging equations become stochastic differential equations reflecting the nonuniformity of the ion-aerosol mixture. The substantial and possibly important additional complications such a treatment would entail have not yet been addressed. Therefore, the above equations will be considered as deterministic (which means that their solution is unique for a given set of initial conditions).

For the sake of clarity, only unipolar, monodisperse aerosol charging will be considered below. This is expressed by setting I^(-p) = 0 and dropping the j and p indices. Nonetheless, the following arguments apply equally well to the bipolar case. Their consequences for polydisperse aerosol charging will be discussed below. The unipolar charging equations can therefore be written

    dP_n/dt = P_(n-1) Σ_i U_(n-1)i I_i − P_n Σ_i U_ni I_i

In the collision-dominated case, U_ni → U_ni^c = f_n^c μ_i, where f_n^c is a factor dependent solely upon characteristics of the aerosol particle and its charge state and μ_i is the electric mobility of ion species i. This gives

    dP_n/dt = (P_(n-1) f_(n-1)^c − P_n f_n^c) Σ_i μ_i I_i

Since Σ_i μ_i I_i is the ionic conductivity of the atmospheric segment under consideration, and is a macroscopic quantity which can be controlled to compensate for external conditions, a well-defined charge state can be achieved for the aerosol. By similar considerations for the free molecular case,

    dP_n/dt = (P_(n-1) f_(n-1)^f.m. − P_n f_n^f.m.) Σ_i v̄_i I_i

where f_n^f.m. is again a factor dependent solely upon particle characteristics and v̄_i is the mean ionic speed. This time Σ_i v̄_i I_i is not an obvious macroscopic variable. This implies that aerosol charging in the free molecular regime is governed by a quantity which is poorly defined in macroscopic terms.

If R is the particle radius and λ_i the ion mean free path, the transition regime is usually delimited as 0.25 < Kn_i < 10, where Kn_i = λ_i/R. To understand the difficulty of bringing an aerosol in this size range to a well-defined charge state, it is useful to express the general volumetric charge acquisition rate as

    U_ni = f_n R X v̄_i

In the collision-dominated regime we can then show that f_n = f_n^c 3e/(4kTR) (e = elementary unit of charge, k = Boltzmann's constant, T = temperature) and X = λ_i, while in the free molecular regime f_n = f_n^f.m./R and X = 1. This demonstrates that in the transition regime X is explicitly a function of ion properties and at least implicitly a function of R, while f_n = f_n^tr remains a function of particle properties alone. Then X = X(R, λ_i, ...) and the charging equations are

    dP_n/dt = (P_(n-1) f_(n-1)^tr − P_n f_n^tr) R Σ_i X(R, λ_i, ...) v̄_i I_i

Since X(R, λ_i, ...) is a function about which very little is known either experimentally or theoretically, aerosol charging in this regime is equally poorly defined.
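To illustrate how the unipolar birth-and-death equations above behave once acquisition rates are specified, the sketch below advances dP_n/dt = P_(n-1) Σ_i U_(n-1)i I_i − P_n Σ_i U_ni I_i with a simple forward-Euler step. The rate coefficients and densities are placeholder numbers, not the collision-dominated, free-molecular, or transition-regime expressions discussed in the text.

```python
def step_charge_distribution(P, acquisition, ions, dt):
    """One explicit Euler step of the unipolar charging equations.

    P           -- list, P[n] = number density of particles carrying n charges
    acquisition -- acquisition[n][i] = volumetric rate U_ni for ion species i
    ions        -- list, ions[i] = number density I_i of ion species i
    dt          -- time step (s)
    """
    # Per-particle ion-capture rate for each charge class: sum_i U_ni * I_i
    rate = [sum(U_ni * I_i for U_ni, I_i in zip(acquisition[n], ions))
            for n in range(len(P))]
    new_P = []
    for n, Pn in enumerate(P):
        gain = P[n - 1] * rate[n - 1] if n > 0 else 0.0   # promotion from class n-1
        new_P.append(Pn + dt * (gain - Pn * rate[n]))      # loss by promotion to n+1
    # Particles promoted past the last tracked class are dropped (truncation).
    return new_P

# Placeholder example: three charge classes, two ion species.
P = [1.0e4, 0.0, 0.0]                                  # initially uncharged (cm^-3)
U = [[1e-6, 2e-6], [5e-7, 1e-6], [2e-7, 4e-7]]         # assumed U_ni (cm^3/s)
I = [1.0e3, 5.0e2]                                     # ion densities (cm^-3)
for _ in range(1000):
    P = step_charge_distribution(P, U, I, dt=0.01)
print([round(x, 1) for x in P])
```

The point of the section stands regardless of the numbers used: everything hinges on knowing U_ni, and outside the collision-dominated regime those rates are not pinned down by any controllable macroscopic quantity.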
Polydisperse aerosol charging for particles which are all in either the collision-dominated or free molecular regimes may still be treated experimentally by starting with some calibration procedure wherein an ad hoc determination of the charging variables is made. However, this procedure cannot be extended to an aerosol in which the transition and either of the other kinetic theory regimes is included. The fundamental physical obstacle is that charging depends upon differing parameters, and the careful control of one, such as conductivity, does not guarantee that another, such as Σ_i v̄_i I_i, will have any clearly defined value. This difficulty is compounded by the fact that even the mathematical form of X(R, λ_i, ...) for the transition regime is not known.

3. Aerosol Measurement

If the atmospheric cluster ion spectrum were unique, or at least clearly defined, some calibration procedure which related the peculiarities of that spectrum to the charging of each of a variety of sizes of monodisperse aerosols could be devised. As has been discussed, this is not the case. This means that measurements dependent upon diffusion charging made in one set of atmospheric conditions will most likely have little quantitative relationship to those taken at either another time or place, unless the ionic conductivity was controlled and comparisons were made only for particles in the collision-dominated regime. In terms relevant to measurements made in the lower troposphere, the free molecular regime corresponds to particles of about 0.01 μm radius or smaller.

4. Illustrative Example

The operational consequences of the foregoing discussion can be very well illustrated by a brief discussion of the performance of the Electrical Aerosol Analyzer or "EAA" (Thermo-Systems, Inc., St. Paul, Minnesota) [7]. This is the field instrument, based upon the original Whitby Aerosol Analyzer, that first used the "diffusion charging mobility analysis hypothesis" for aerosol size distribution measurements. In the EAA, the aerosol stream is first passed through a charger whose design is intended to maintain a constant number density of ions. The most closely related parameter which can be controlled is the ion current, and this is what is actually regulated in this device. Since the ion current is directly proportional to the ionic conductivity, the EAA does produce a well-defined charge distribution on the largest particles of the aerosol, which are in the collision-dominated regime. General agreement has thus been found between the EAA and optical particle counters when both devices have been properly calibrated. Similarly, in a study to determine the role of cluster ion variability on the performance of the EAA [8], a statistical analysis of data showed narrow confidence intervals for parameters related to charging of particles in or near the collision-dominated regime. Conversely, that same study showed that at a 90% confidence interval the particle currents due to the transition regime particles were broad and could vary over almost an order of magnitude.

A mathematical model for the ideal EAA has been constructed and computed. This model includes six sizes of particles and six species of cluster ions and incorporates the essential features of constancy of total small ion conductivity in the charger and classification of particles according to their electrical mobility. Variability of the cluster ion spectrum produced in the charger corona is included by weighting the ion distribution among the species represented in differing ways. Ion removal is accomplished either by collision with an aerosol particle or by removal due to the constant ion current. Table 1 gives the calculational inputs and results. N(R) is the number density of sampled particles of radius R micrometers, and the headings "Successive numerical flux differences" give the differences in fluxes of charges carried by particles between successive total flux measurements according to the EAA's operational parameters. The first is zero because of the omission of particles under 0.005 μm from the model. The fourth and higher differences are each equal in the two cases presented here because the corresponding particles were computed as if they were all in the collision-dominated regime, due to the unavailability of any reasonably accurate expression for U_ni^tr that included the image force for particles of 0.05 μm radius. The ionic fractions from the corona were chosen as being representative of plausible real variations in the ion spectrum that can occur in the presence of differing fractions of atmospheric trace gases that may occur under field conditions. As a result, a 70% discrepancy in the flux differences corresponding to the transition regime charged particle flux is computed. By comparison, the "diffusion charging mobility analysis hypothesis" predicts that the numbers are equal.
Variability of the cluster ion spectrum produced in the charger corona is included by weighting the ion distribution among the species represented in differing ways. Ion removal is accomplished either by collision with an aerosol particle or by removal due to the constant ion current. Table 1 gives the calculational inputs and results. N(R) is the number density of sampled particles of R micrometers and the headings "Successive Numerical Flux Differences" give the differences in fluxes of charges carried by particles between successive total flux measure- ments according to the EAA's operational parameters. The first is zero because of the omission of particles under 0.005 ym from the model. The fourth and higher differences are each equal in the two cases presented here because the corresponding particles were computed as if they were all in the collision dominated regime due to the inavailability of any reasonably accurate expression for U^ r that included the image force for particles of 0.05 pin radius. The ionic fractions from the corona were chosen as being representative of plausible real variations in the ion spectrum that can occur in the presence of differing fractions of atmospheric trace gases that may occur under field conditions. As a result, a 70% discrepancy in the flux differences corresponding to the transition regime charged particle flux is computed. By comparison the "diffusion charging mobility analysis hypothesis" predicts that the numbers are equal. 21i Table 1 Jumerical results of mathematical model of EEA performance N(.05) = 5 x 10*+ N(1.0) = 50 5 148 N(R): N( 005 = 10 6 N(.Ol) = 7 x 10 5 N( 1) = *\0 k N(.5) = 500 Ion Index 1 2 3 Mass (AMU) 69 73 88 Mobility (cm 2 /V-sec) 2.2 2.1 2.0 Mean free pa (ym) th 0.022 0.020 0.018 Ionic fraction from corona Ionic fraction in charging region 0.0 Ionic fraction from corona Ionic fraction in charging region Case 1 0.10 0.10 0.10 109 1.8 0.015 0.10 0.062 0.065 0.068 0.076 Successive numerical flux differences 1.0 x 10 5 1.4 x 10 s Case 2 0.40 0.20 0.10 0.10 0.0 0.33 0.17 0.091 0.10 Successive numerical flux differences 1.7 x 10 5 2.3 x 10 5 1.5 0.011 0.20 0.18 2.4 x ^0 ^ 0.10 0.12 2.4 x 10 4 6 279 1.0 0.0056 0.40 0.55 0.10 0.18 5. Conclusions The foregoing discussion indicates that the physics of the process of charge acquisi- tion by an aerosol particle dictates that a rigorous condition exists for giving any aerosol a well-defined charge distribution if the particles can all be assumed to have similar physical properties. The condition is that all the particles be in the collision- dominated region (A-j/R < 0.25) and that the cluster ion conductivity be held constant. For smaller particles, no such conditions are possible nor is any laboratory calibra- tion of the process meaningful due to the variable and indeterminate nature of the cluster ions which are central to the charging process. The meaning of these results for current instrumentation is that quantitative compar- ison of data acquired at different times or places is not possible. The methods have meaning as relative measures of size distribution, in one location and over a time span, during which the atmospheric trace gases may be considered to be constant. These conclu- sions will be elaborated upon in a series of papers now in preparation. 217 References [1] Israel, H., Atmospheric Electricity, Volume I (translated by D. Ben Yaakov and Baruch Benny), Israel Program for Scientific Translations Ltd. (1971), NTIS TT67-51394/1 . [2] Whitby, K. T., and Clark, W. 
E., Electrical Aerosol Particle Counting and Size Distribution Measuring System for the 0.015 to ly Size Range, Tellus 18, 573 (1966). [3] Mohnen, V. A., Discussion of the Formation of Major Positive and Negative Ions Up to the 50 km Level, Pure and Applied Geophysics 84, 141 (1971). [4] Castleman, A. W. , Tang, I. N., Munkelwitz, H. R. , Clustering of Sulfur Dioxide and Water Vapor About Oxonium and Nitric Oxide Ions, Science 173 , 1025 (1971). Huertas, M. L., Marty, A. M. , Fontan, J., On the Nature of Positive Ions of Tropospheric Interest and on the Effect of Polluting Organic Vapors, J. Geophys. Res. 79, 1737, (1971). Kadlecek, J. A., Ion Molecule Reactions of Atmospheric Importance: Positive Ion Clusters Involving N0 2 , NH3, SO2; Ion Induced Aerosol Formation, Publication No. 263, Atmospheric Sciences Research Center, State University of New York at Albany, 1974. [5] Marlow, W. H. , and Brock, J. R. , Unipolar Charging of Small Aerosol Particles, J. Coll. Inter. Sci. 50, 32 (1975). Bird, G. A., Molecular Gas Dynamics, (Clarendon Press, Oxford, 1976). [6] Sedunov, Y. S. , Physics of Drop Formation in the Atmosphere (translated by D. Lederman and edited by P. Greenberg), John Wiley and Sons (1974). [7] Liu, B. Y. H., Whitby, K. T. , Pui , D. Y. H., A Portable Electrical Analyzer for Size Distribution Measurement of Submicron Aerosols, Air Poll. Contl. Assoc. Journal 2A_, 1067 (1974). Liu, B. Y. H. and Pui, D. Y. H. , On the Performance of the Electrical Aerosol Analyzer, J. Aerosol Science 6, 249 (1975). [8] Marlow, W. H., Reist, P. C, Dwiggins, G. A., Aspects of the Performance of the Electrical Aerosol Analyzer Under Nonideal Conditions, J. Aerosol Science ]_, 457 (1976). 218 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). THE USE OF A MODIFIED BETA DENSITY FUNCTION TO CHARACTERIZE PARTICLE SIZE DISTRIBUTIONS Alan S. Goldfarb Continental Oil Company Baltimore, MD 21226, USA James W. Gentry University of Maryland College Park, MD 20742, USA 1. Introduction The physical properties of an aerosol system are strongly dependent upon the size of the aerosol particles. In general, particulate matter is polydisperse and in order to describe a system of particles, the distribution of particle sizes must be specified. It is convenient to be able to specify the distribution of a system of particles in the form of a mathematical function. A mathematical function that has had a high degree of success in describing the distribution of particle sizes is the log normal probability function. This function is commonly used to characterize the size distribution of atmospheric par- ticles and emissions from stacks. The log normal probability function has the advantage that only two parameters, the median diameter and the geometric standard deviation, are required to define it and the cumulative distribution plots as a straight line on loga- rithmic probability coordinate graph paper [I] 1 . In addition, if the distribution by mass is a log normal function, the surface area distribution and the number distribution are also log normal with the same geometric standard deviation. 2. Discussion Inherent shortcomings of the log normal probability function are that it is a symmetri- cal function and implies the presence of particles of sizes from to infinity. 
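Before turning to those shortcomings in detail, a minimal numerical sketch (not from the paper) of the log normal description just outlined may be useful. It evaluates the cumulative fraction of particles smaller than a given diameter from the two parameters named above, the median diameter and the geometric standard deviation; the numbers used are purely hypothetical.

import math

def lognormal_cdf(d, d50, sigma_g):
    """Cumulative fraction of particles smaller than diameter d for a log normal
    distribution with median diameter d50 and geometric standard deviation sigma_g."""
    z = math.log(d / d50) / math.log(sigma_g)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical example: median diameter 1.0 um, geometric standard deviation 2.0
for d in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"d = {d:4.2f} um   cumulative percent less than size = {100 * lognormal_cdf(d, 1.0, 2.0):5.1f}")

Plotted on logarithmic probability coordinates, these cumulative values fall on the straight line referred to above.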
Particle size distribution data frequently indicate a maximum and/or minimum particle size for the sampled particulate system. Sometimes the instrument used to sample and size particles has an upper and lower limit of particle size. In other instances, the particles may have passed through a particle collecting device with 100% efficiency of collection for certain sizes. For example, a cyclone has a high efficiency of collection of large particles (>5 micrometers), and thus the distribution of particles leaving a cyclone will tend to be asymptotic towards an upper size limit. This is shown in figure 1. A membrane filter has a 100% efficiency of collection of particles larger than its pore size and a high efficiency of collection of small particles due to diffusion within the pores of the filter. Thus particles leaving the membrane filter have a narrow distribution and tend to be asymptotic towards an upper and lower size limit. This is shown in figure 2.

1 Figures in brackets indicate literature references at the end of this paper.

Figure 1. The effect of a cyclone on a log normal distribution of particles. (Abscissa: cumulative percent less than size.)

A modified beta probability function has been defined which asymptotically approaches an upper and lower size limit. The cumulative distribution of the beta probability function has the shape of an 'S' on logarithmic probability coordinate graph paper. The modified beta probability function is defined by

    f(x) = [(a + b - 1)! / ((a - 1)!(b - 1)!)] x^(a-1) (1 - x)^(b-1)    (1)

where

    x = ln(r/rmin) / ln(rmax/rmin),    0 < x < 1,

rmax and rmin are the maximum and minimum particle sizes respectively, and a and b are parameters of the function.

Figure 2. The effect of a filter on a log normal distribution of particles. (Abscissa: cumulative percent less than size.)

By varying the values of the parameters a and b, the function can be made negatively skewed, symmetrical, positively skewed, or simply the uniform distribution. Thus it has the potential for describing a wide variety of particle systems. Sampling data are fitted to the modified beta function by a trial and error procedure. Trial parameter values are used to calculate the cumulative distribution, and the fit is scored as the sum of the squares of the differences between the observed and calculated values. The procedure is repeated with new parameters until the error is minimized. The direct search method of Hooke and Jeeves [3] is used to choose the parameter values for each trial. If the maximum and minimum particle sizes are unknown, their values may also be varied in the trial and error procedure; allowing rmax and rmin to vary along with a and b improves the fit of the beta distribution to a set of experimental data. The above procedure was used to fit a beta distribution to the calculated distribution of particles leaving a cyclone and a filter. The beta distribution was found to describe the particle size distributions better than the log normal distribution. This is illustrated in figures 3 and 4.
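The following sketch (an illustration, not the authors' code) evaluates eq. (1) and the sum-of-squares objective described above. The parameters a and b are treated as integers, which is what the factorial form of eq. (1) suggests; any derivative-free minimizer, such as the Hooke and Jeeves pattern search of ref. [3], could then be applied to the objective. The size and cumulative-fraction data shown are hypothetical.

import math

def x_of_r(r, r_min, r_max):
    """Log transform from eq. (1): maps r in [r_min, r_max] onto [0, 1]."""
    return math.log(r / r_min) / math.log(r_max / r_min)

def modified_beta_pdf(r, a, b, r_min, r_max):
    """Density of eq. (1), written for integer parameters a and b."""
    x = x_of_r(r, r_min, r_max)
    if not 0.0 < x < 1.0:
        return 0.0
    norm = math.factorial(a + b - 1) / (math.factorial(a - 1) * math.factorial(b - 1))
    return norm * x ** (a - 1) * (1.0 - x) ** (b - 1)

def modified_beta_cdf(r, a, b, r_min, r_max):
    """Cumulative fraction less than size r.  For integer a and b the incomplete
    beta integral reduces to a binomial sum, so no numerical integration is needed."""
    x = min(max(x_of_r(r, r_min, r_max), 0.0), 1.0)
    n = a + b - 1
    return sum(math.comb(n, j) * x ** j * (1.0 - x) ** (n - j) for j in range(a, n + 1))

def sum_of_squares(params, sizes, observed_fraction):
    """Objective minimized in the trial-and-error fit: squared differences between
    the calculated and measured cumulative distributions."""
    a, b, r_min, r_max = params
    return sum((modified_beta_cdf(r, a, b, r_min, r_max) - f) ** 2
               for r, f in zip(sizes, observed_fraction))

# Hypothetical data: sizes in micrometers and measured cumulative fractions less than size.
sizes = [0.3, 0.5, 1.0, 2.0, 4.0]
observed = [0.05, 0.20, 0.55, 0.85, 0.98]
print(sum_of_squares((2, 3, 0.1, 8.0), sizes, observed))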
Figure 3. Comparison of fit of log normal distribution and beta distribution to particles leaving a cyclone. (Abscissa: cumulative percent less than size.)

A cascade impactor is a device that separates particles in an air stream into two or more size classifications by inertial separation. It is commonly used to measure the cumulative mass-size distribution of particles in the atmosphere and of emissions from stacks. Cascade impactors consist of consecutive stages with progressively finer orifices through which the sampled gas passes. An obstruction after each stage forces the gas to change direction before passing through the next orifice. Particles that cannot change direction with the air stream will impact on the obstruction. The progressively finer orifices cause the velocity of the gas and entrained particles to increase, and progressively finer particles are collected on each successive obstruction. Ideally, each stage of an impactor would collect all particles larger than a certain size and none smaller. The fraction of the total mass of particles that escapes being collected on the stage would then represent the cumulative mass distribution fraction for that "characteristic" size. In real impactors, there is overlap in the particle sizes collected by the different stages, and the collection efficiency of a real impactor stage as a function of particle size is described by an S-shaped curve.

Figure 4. Comparison of fit of log normal distribution and beta distribution to particles leaving a membrane filter. (Abscissa: cumulative percent less than size.)

In analyzing impactor data, it is common practice to determine a characteristic diameter of the particles collected on each stage. If the mass of uncollected particles larger than this diameter is equal to the mass of collected particles smaller than this diameter, the real impactor captures the same mass of particles as the ideal impactor. Often, the characteristic diameter is specified as that diameter for which the stage has a 50% collection efficiency. This procedure can lead to a false size distribution curve. An improved method for obtaining an estimate of the size distribution curve from cascade impactor data has been developed. The method involves assuming a functional form of the particle size distribution and using the direct search technique of Hooke and Jeeves to locate the parameters of the distribution which, when applied to an analog of the cascade impactor, will result in stage mass accumulations comparable to those accumulated by the stages of the real impactor.

The mass of particles, M_m, collected on the mth stage of the impactor is calculated by

    M_m = ∫ [ ∏ (j = 1 to m-1) (1 - S_j) ] S_m f(r, a_i) dr    (2)

where S_j and S_m are the collection efficiencies of impactor stages j and m respectively and f(r, a_i) is the initial particle size distribution. A comparison between the size distribution curve obtained by a characteristic diameter approach to interpretation of cascade impactor data and that obtained from the above described procedure is illustrated in table 1. Figure 5 is a graphical illustration of the comparison.

Figure 1.
Behavior of 203 Hg(l— -•) and e5 Zn(0 0) (1-5 ppb) added to (1) pond water, (2) sea water, (3) distilled water and (4) artificial sea water. 100 ml polyethylene containers were used. The detection limit of this method is as low as 0.05 ng Hg, the lowest value determined is about 0.5 ng Hg. The results tested with reference materials from NBS and Sagami Central Research Lab shown in table 1, are in satisfactory agreement. Table 1. Determination of Hg in standard solutions, NBS-SRM 1642 Sagami Central Research Lab Certified Found 1.18±0.05 ppb 1 .09+0.05 ppb Calculated 12.40 ppb Found 12.70±0.40 ppb Found 12.48±0.13 ppb (measured after dilution x 100) 234 c o o CO c CO lOO 50 -O"- ■dl (2) - ioo<<* <*- o o o •5 a 50 (3) ■Ok (4) 15 2 Days Figure 2. Effect of preservatives on the loss of 203 Hg(l ppb) (1) pond water, (2) sea water, (3) distilled water and (4) artificial sea water. (•) no preservatives, (a) 1 mM cysteine, (■) 1 mM cysteine + 0. IN HC1, (a) 10 ppb Au, and (0) 10 ppb Au + 0.1N HN0 3 . References [1] Dokiya, Y., Yamazaki, S., and Fuwa, K., Environ. Lett. ], 551 (1974). [2] Dokiya, Y., Yamazaki, S., Ashikawa, H., and Fuwa, K., Spectrosa. Lett. ]_, 551 (1974) [3] Yamazaki, S., Dokiya, Y., and Fuwa, K., 34th Forum of Jap. Anal. Chem. (Muroran) (1973). [4] Yamazaki, S., Dokiya, Y., Hayashi, T., and Fuwa, K., Annual Meeting of Aar. Biol. Chem., (Tokyo) (1974). [5] Watanabe, T., Dokiya, Y., Toda, S., Fuwa, K., Annual Meeting of Jap. Chem. Soc. CSagami) (1976). 235 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) SAMPLING FOR WATER QUALITY Willie R. Curtis Northeastern Forest Experiment Station Forest Service U.S. Department of Agriculture Berea, Kentucky 40403, USA 1. Introduction Many parameters of water quality are affected by man and his use of the land. Even in a natural environment, water quality is not constant over time. We are passing quickly from having a surplus of land to having competing demands for the same land. We need to predict the effects of all these demands and their interactions upon the land and its associated resources. Only when we can do this, will we be able to manage our land and water resources for the greatest benefit. Much of the needed infor- mation can be obtained through study and observation of the streamflow and water quality from small watersheds. Any land use, but especially surface mining, affects water quality. Freshly exposed rock is subject to rapid breakdown, and undergoes many chemical reactions. Sulfuric materials found in coal and associated rock strata are the principal producers of the acid found in mine drainage. Pyrite and marcasite, both sulfides of iron, oxidize when exposed to air and water to form iron sulfate and sulfuric acid. Secondary reactions between the sulfuric acid and organic and inorganic materials produce other chemicals often found in acid mine drainage. The amounts of acid, iron, and other pollutants in drainage from a particular surface mine will vary, depending upon the rate of flow, extent of area disturbed, distance from the stream system, amount and quality of precipitation, type of geologic materials encount- ered, temperature, and vegetative cover. Therefore, the chemical composition of mine drainage cannot be accurately determined from a "one-shot" sample. 
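The oxidation of pyrite and marcasite described above is conventionally summarized by the following balanced reaction; this is standard stoichiometry supplied here for clarity, since the paper itself gives no explicit equation:

    2 FeS2 + 7 O2 + 2 H2O -> 2 FeSO4 + 2 H2SO4

A representative secondary reaction of the kind referred to above is the further oxidation of the ferrous sulfate, 4 FeSO4 + O2 + 2 H2SO4 -> 2 Fe2(SO4)3 + 2 H2O, which supplies the ferric iron often found in acid mine drainage.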
Continuous or periodic sampling, for a period of time that depends upon the objectives, is required. Samples cost money. So do errors. The objective of this paper is to provide some basis for designing a sampling program that will satisfy current and anticipated needs for data on the chemical quality of water affected by surface mining. The aim is to make enough observations to obtain the desired information—no more, no less. Good technique is the basic requirement; results obtained with poor technique are of little value and may be biased. It is most important to obtain a representative sample of the flow. This may best be achieved by following these basic rules: Take samples where the water is homogeneous; avoid skimming the surface or bottom of the discharge channel; take samples at midstream at approximately one-half stream depth, if possible. Bottles used for sampling should be clean and should have-a label for identifying the sample. They should be rinsed several times with the water to be sampled before they are filled to reduce the possibility of contaminating the sample from the container. The bottle should be filled completely, leaving no air space, and sent to the laboratory for analysis as quickly as possible. 237 Methods of collecting and analyzing water samples are well documented Ql ,2 ,5] x and will not be discussed here. Rather, the remainder of this paper will be devoted to the frequency of sampling, particularly of first- or second-order streams that drain basins where there is surface mining. The frequency of measurement at each sampling site will depend on the variability of each variable and the desired error of estimate. The USGS has long used daily sampling for the usual comprehensive chemical -quality investigation [2], but monitoring the effects of surface mining on small headwater streams may not require such frequent sampling. Chemical data from six first-order streams draining both mined and unmined watersheds in eastern Kentucky have been analyzed to determine what sampling regimen is necessary to characterize the stream quality. The most commonly used estimates are the sample averages and the regression coefficients. Once the statistics have been estimated within satisfact- ory limits, additional observations will have little value except for detecting changes in water quality due to long-term natural changes or to specific changes in land use. We are interested in detecting changes in water quality that result from surface mining. To do this we need to determine either the mean concentration of some substance or a regression equation using some easily measured variable. Our objective is to make an accurate estimate of the chemical parameters with the fewest and least expensive measurements. 2. Discussion Specific conductance can be measured rather simply and at low cost. It has long been accepted as a measure of the total concentration of ionized material in solution [5]. For most natural waters the specific conductance, in micromhos per cm at 25°C, multiplied by 0.65 approximates the total dissolved solids in milligrams per liter [2], For an unmined study watershed we found values of 0.52 to 0.62, and for surface-mined watersheds, values ranging from 0.61 to 0.84. It is interesting to note that the factor increased with time after mining. The average values for Jenny Fork, an unmined watershed, are compared with those for Miller Branch and Mullins Fork, both mined watersheds, in this tabulation: Water Year 3 Jenny Fork Mi Her Br. 
Mullins Fork 1970 0.57 0.61 0.63 1971 .61 .66 .65 1972 .60 .67 .68 1973 .62 .70 .68 1974 .52 .69 .71 1975 .57 .81 .84 Water year extends from Nov. 1 through Oct. 31. Plottings of specific conductance against the concentration of ionized constituents in water from both mined and unmined watersheds generally show a linear relationship (figure 1 ) . figures in brackets indicate the literature references at the end of this paper. 238 X CO **■ in in CM CO IT >- CO **■ CD in CO CD CO 3Z o o z + < cc in CO CM CO in cc r>* LU in CO —1 CM —1 1 d ^— ii ii s >- cc CO o in CM CO o CM " < o a o CJ CO CO CO CM — V9iai sanos qbaiossiq ivioi Figure 1. Specific conductance plotted against total dissolved solids for Jenny Fork and Miller Branch for water year 1975. 239 Concentrations of many chemicals in water from both mined and unmined watersheds were found to be correlated with specific conductance. It correlates quite well with SO^, Ca, Mg, and HC0 3 (table 1). Sulfate has been found to be one of the most important indicators of stream pollution from mining activity; correlation coefficients for it range from 0.651 to 0.991. Specific conductance is poorly correlated with Zn, Al , Fe, Na, K, and Mn. Regression equations are shown in table 2. In all cases the Y-intercept is higher for the mined watershed. Table 1 Correlations between specific conductance and ionized constituents, by water years Water year HC03 S0<+ Ca Mg Jenny Fork (unmined A/atershed) 1969 .905 .883 .919 .816 1970 .843 .815 .833 .887 1971 .799 .818 .829 .736 1972 .810 .985 .991 .991 Miller Branch (mined watershed) 1969 .984 .969 .809 1970 .654 .906 .951 .899 1971 .651 .919 .682 .579 1972 .711 .878 .765 .953 1975 .743 .609 .852 .874 Table 2 Regressions of specific conductance on some dissolved constituents in streams draining mined and unmined watersheds for water year 1972 Waters hed Cons tituent Regression equation Correlation Name Status coefficient Jenny Miller Unmined Mined Ca Ca Y = -1.628 + 0.083X Y = -1.789 + 0.066X 0.989 .765 Jenny Miller Unmined Mined Mg Mg Y = -0.284 + 0.042X Y = -2.461 + 0.065X .987 .953 Jenny Miller Unmined Mined S0 4 S0 4 Y = -8.683 + 0.379X Y = -27.369 + 0.396X .985 .878 For each water year and for each chemical element tested, the 52 weekly analyses were separated into an odd-numbered and an even-numbered set. The sets were paired. This made 26 pairs, representing biweekly data, on which paired T-tests were performed. Then to eliminate bias the first value was dropped and the remaining values paired as before and tested by the T-test. No statistically significant differences were found. Therefore, sampling twice monthly appears to be adequate for headwater streams in eastern Kentucky. Tests for the seven water years from 1969 to 1975 indicate that any one of those years would have given good estimates of water quality from the unmined watershed. Seasonal differences and trends can be established with monthly sampling, as evidenced by the plot in figure 2. Moving averages of three values, with the middle value double weighted, smooth the curve and allow trends to show more clearly. This method of smoothing can be used for any sampling interval selected. 240 CM in Lf) oo CM CO az < CO oo 3 o93 IV IAIO/SQHI/\in 3aNV13nQN00 3!dl03dS Figure 2. Specific conductance for Jenny Fork, showing the relationship of different sampling intervals during water year 1972. 241 Figure 2 also shows that monthly (4-week) samples defined quite well the specific conductance of water flowing from an unmined watershed. 
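Two of the calculations used above are simple enough to sketch directly. The snippet below is illustrative only and uses hypothetical readings: it shows the conversion of specific conductance to an estimate of total dissolved solids by a site-specific factor, and the three-value moving average with the middle value double weighted that is used to smooth the plotted records.

def tds_from_conductance(conductance_umho_cm, factor=0.65):
    """Approximate total dissolved solids (mg/L) from specific conductance
    (micromhos/cm at 25 C).  The 0.65 default is the general factor cited from [2];
    the watershed-specific factors found in this study ranged from 0.52 to 0.84."""
    return factor * conductance_umho_cm

def smooth_1_2_1(values):
    """Moving average of three values with the middle value double weighted."""
    return [(values[i - 1] + 2.0 * values[i] + values[i + 1]) / 4.0
            for i in range(1, len(values) - 1)]

# Hypothetical monthly specific-conductance readings (micromhos/cm at 25 C):
readings = [180, 210, 340, 300, 260, 240, 410, 380]
print([round(tds_from_conductance(c), 1) for c in readings])
print([round(v, 1) for v in smooth_1_2_1(readings)])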
Plots of mineral constituents show essentially the same thing, i.e., that monthly sampling is generally adequate to define baseline water quality data. The same thing has been noted for the mined watersheds. Figure 1 shows how we can determine the influences of a particular land use, in this case surface mining, on the quality of water in streams that drain that land. In the lower left are data from an unmined watershed. Not only is there a difference in the slope of the regression line, but there is a large difference in actual values. Plotting the same variables for both these watersheds before mining showed a similar slope in the same value range. From plots of the 95 percent confidence interval in figure 3, it can be seen that after a certain point adding samples results in only a small decrease in the interval. For example, the 95 percent confidence interval width for Jenny Fork is 1.23 mg/1 for 12 analyses, 0.87 for 26 analyses and 0.56 for 52 analyses. Because the concentrations of many chemicals change seasonally, sampling must be spread over a year. Similar computations can be made for any variable of interest and an optimum sampling interval can be defined to meet any requirement. 242 CO CM 1/9IAI H1QIM 1VAUB1NI 33N3QUN00 CO IT) 00 LU 00 >- Q CQ CM Figure 3. Number of analyses plotted against 95 percent confidence interval width for magnesium, for Jenny Fork and Miller Branch for water year 1972. 243 3. Conclusion In general, sampling should be based on the relation between specific conductance and the concentration of ionized constituents. When the relationships are defined, sampling can be reduced to a frequency that will detect trends in ion concentration. Biweekly sampling for 1 year before surface mining should be adequate to define the regression equation if conductance is the independent variable. Monthly samples should normally be enough to detect trends. In some cases quarterly sampling may be enough. It would be better to sample through the complete range of stream discharge and to use discharge-weighted concentrations, but that procedure may be too expensive for routine water quality monitoring. Biweekly to monthly sampling on a fixed schedule will generally be adequate to define the effects of surface mining on water quality. References [1] Standard Methods for the Examination of Water and Wastewater, American Public Health Association, American Water Works Association and Water Pollution Control Federation, 13th ed. Publ. Off., Am. Public Health Assoc, Washington, DC (1971). [2] Brown, E., Skougstad, M. W., and Fishman, M. J., Methods for collection and analysis of water samples for dissolved minerals and gases, Tech. Water-Resour. Invest. B. 5, Chap. 160, U. S. Geol . Surv. (1970). [3] Hem, J. D., Study and Interpretation of the Chemical Characteristics of Natural Water, U.S. Geol. Surv. Water-Supply Pap. 1473, 2nd ed. 363 (1970). [4] MacKicham, K. A. and Stuthmanu, N. G., Preliminary Results from Statistical Analysis of Water Quality in Selected Streams of Nebraska. U.S. Geol. Surv. Open file rep. Lincoln, Neb., 35, illus., (Oct. 24, 1969). [5] Rainwater, F. H. and Thatcher, L. L., Methods for Collection and Analysis of Water Samples, U.S. Geol. Surv. Water-Supply Pap. 1454, 301, illus. (1960). 244 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). 
MONITORING BACTERIAL SURVIVAL IN SEAWATER USING A DIFFUSION CHAMBER APPARATUS IN-SITU George J. Vasconcelos Environmental Protection Agency Seattle, Washington 98103, USA 1. Introduction Efforts to simulate the natural aquatic environment have led to the development of a membrane enclosed diffusion chamber capable of allowing interaction and exchange between microorganisms in the chamber and the outside environment. Although there is considerable evidence suggesting that enteric bacteria do not survive extended periods outside the intestinal tract, the factors affecting their short-lived viability are varied and complex. To help resolve this problem, a microbiological survival chamber for in-situ environmental studies was developed. 2. Apparatus The apparatus consists of three main parts or units (fig. 1): (1) the chamber unit itself, (2) a supportive base unit to cradle the chamber, and (3) a stirring mechanism for Figure 1. Monitoring bacterial survival in seawater using a Diffusion Chamber Apparatus in-situ. 245 continuous internal agitation. The detachable chamber unit was constructed entirely of autoclavable polycarbonate to permit steam sterilization for 15 minutes at 15 lbs. psi. A small Teflon coated stirring pellet placed inside the chamber prior to sterilization pro- vided the necessary internal movement when magnetically coupled to the electric DC motor positioned below. The stirring mechanism was included in the apparatus to ensure a homo- genous cell suspension within the chamber and simultaneously enhance solution-transport through the membrane sidewalls by decreasing the internal Nernst film at the membrane-water interface. To stabilize, orient, and physically protect the membranes of the chamber, plastic guide fins and a nose cone were attached to the supportive base unit. 3. Discussion Workers in the past have utilized dialysis sacs or bags made of regenerated cellulose, cellophane, parchment and other materials to culture microorganisms in aquatic systems. Unfortunately, these membranes vary in thickness and can undergo changes in porosity upon hydration; all of which limit the size of molecules that can enter and leave the system. Because of the rapid development in microf iltration technology, dialysis bags have given way to rigid chambers and rings that support microporous filter membranes (or membrane filters) fabricated from mixed cellulose esters, asbestos, or plastic. During evaluation of the chamber diffusion, experiments were conducted using a variety of filter membranes. These included regular (150 ym) and ultra thin (25 \m) cellulosic membranes from Millipore Corp. and polycarbonate membranes (10 ym) from Nuclepore Corp. The ultra thin cellulosic membranes tested provided rapid diffusion but were too fragile for field work. Consequently, only the regular cellulosic (Millipore, HAW P-047-00) and poly- carbonate (Nuclepore, N-040-CPR-047-00) membranes were subjected to further evaluation. Seawater diffusion experiments were designed to compare the solute permeability of both membranes, with and without internal agitation, using sodium fluorescein and glucose as test substances. Standard concentrations of both substances were injected into the chambers with a syringe and withdrawn similarly at 12 hour intervals. Results showed that polycarbonate type membranes were superior to cellulosic membranes with regard to solute permeability, requiring less time to achieve a given percent exchange between the chamber contents and outside environment. 
Internal agitation lessened the time necessary for this exchange. In addition, after 10 days exposure to seawater, the cellulosic membranes became exceedingly brittle and crumbled easily upon finger contact. Since then, other investigators have confirmed that cellulosic membranes undergo biodegradation in seawater as a result of microbial enzymatic activity. To demonstrate the applicability and general usefulness of the chamber apparatus, experiments were conducted with eight representative species of bacteria all recently isolated from environmental sources. These included five opportunistic pathogens and three indi- cators of water quality. Of the three bacterial indicators, Streptococcus faecalis was found to persist longer than either member of the coliform group. The survival characteristic of this organism is thought to be* related to the electrolyte content of seawater, but no mechanism has yet been proposed. The five pathogens examined included Klebsiella pneumoniae, Staphylococcus aureus and Pseudomonas aeruginosa. With the exception of the latter organism, little variation in viable count was observed between these pathogens over a seven day period. By far the most dramatic diversity in survival was shown with Vibrio parahaemolyticus (a true marine pathogen) was compared to Escherichia coli (control) and Salmonella enteriditis . In this experiment, E. coli fatalities were much higher in comparison to the salmonella pathogen it was intended to represent. Bacterial viability in seawater is influenced by a multitude of environmental factors, among which are temperature, salinity, pH and availability of nutrients. The study was conducted over a period of a year and rearranging the data on a seasonal basis revealed that the survival of E. coli was definitely a function of water temperature. As seasonal temper- atures rose, so did fatalities, clearly showing an inverse relationship between survival and water temperature. 246 4. Conclusion If low temperatures can prolong the life of enteric bacteria in seawater, then the curtailment of wastewater disinfection during the cold winter months would seem inadvisable for many coastal regions. This would be particularly true of those areas used extensively for the production and harvesting of fish and shellfish. Considering the wide variation in the persistence of both indicator organisms and enteric pathogens, marine waters receiving treated wastes should be evaluated more closely. Because of its improved design and performance, the survival chamber described in this presentation would be a useful tool in evaluations of this type. 247 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) CLEAN LABORATORY METHODS TO ACHIEVE CONTAMINANT-FREE PROCESSING AND DETERMINATION OF ULTRA-TRACE SAMPLES IN MARINE ENVIRONMENTAL STUDIES C. S. Wong, W. J. Cretney, J. Piuze 1 and P. Christensen .Ocean Chemistry Division Institute of Ocean Sciences Victoria, B.C., Canada, V9A 3S2 and P. G. Berrang Seakem Oceanography Ltd. Victoria, B.C., Canada, V8Z 1B2 1. Introduction Reported baseline concentrations of some trace metals in open-ocean waters have been generally decreasing over the years as is indicated in table 1. 
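The seasonal comparison described above could be put on a quantitative footing by fitting a die-off rate to the chamber counts. The sketch below assumes simple first-order die-off, a common convention in survival work that is not stated in this paper, and the counts shown are hypothetical.

import math

def first_order_dieoff_rate(n0, nt, days):
    """Decay constant k (per day), assuming N(t) = N0 * exp(-k * t)."""
    return math.log(n0 / nt) / days

def t90(k):
    """Time in days for a 90 percent reduction in viable count under the same assumption."""
    return math.log(10.0) / k

# Hypothetical chamber counts (CFU/mL) at day 0 and day 7:
k = first_order_dieoff_rate(1.0e5, 2.0e3, 7.0)
print(round(k, 3), "per day;  T90 =", round(t90(k), 2), "days")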
It is likely that a large part of the data on trace metal concentrations found in the literature is worthless [l] 2 , being only indicative of the level of contamination, since careful interlaboratory calibra- tion exercises [2] have demonstrated that rigorously clean sampling, handling and analytical techniques will produce accurate data. Table 1 Baseline concentrations of some trace metals in open-ocean waters reported in the last 35 years Reference Units Cd Cu Pb Zn A. Compiled Data Sverdrup et al., 1942 [25] yg/kg present 10 4 5 Goldberg, 1965 [26] ug/£ 0.11 3 0.03 10 Brewer, 1975 [27] ug/£ 0.1 0.5- 0.03 4.9 B. Recent Original Data Zirino and Healy, 1971 [28] yg/£ - - - 1.7 Chester and Stoner, 1974 [29] yg/£ 0.07 0.8 - 1.4 Eaton, 1976 [30] y g/kg 0.06 Present address: Environment Canada, Peches et Sciences de la Mer, Quebec, Canada, G1K 7X7 2 Figures in brackets indicate the literature references at the end of this paper. 249 This paper reviews briefly the methods used by the Ocean Chemistry Division in obtain- ing trace analysis data and puts particular emphasis on the description of a pair of port- able shipboard clean laboratories, which we believe to be the first of their kind used for marine work. The discussion will pertain to the analysis of ultra-traces of metals and petroleum or petroleum-like hydrocarbons in sea water since this work is plagued by a myriad of contamination problems because of the extremely low concentrations involved. 2. A Unified Approach to Ultra-Trace Analysis in Sea Water Although workers have published fairly detailed information on specific aspects of ultra-trace analysis such as sampling, storage, or clean laboratory techniques, very few have described a unified approach to obtaining meaningful data in sea water. By a unified approach, we mean one that would include reducing the probability of contamination as much as possible in all of the following steps: - on shore laboratory manipulations including precleaning and packaging of sampling equipment for use on shipboard - obtaining samples at sea - shipboard handling and sample workup - storage and preservation of samples - instrumental methodology There has been a tendency to dissociate the sample collection from the chemical analysis [3], In ultra-trace analysis, the dirtiest step is going to most affect the accuracy, so that all of the above steps must be carefully examined for contaminating influences. Even extra- ordinary efforts of cleanliness spent on one step of the analysis may be negated by contami- nation in another. A. Contamination-free sampling and handling on an oceanographic vessel When collecting and handling sea water samples for ultra-trace analysis, one must keep in mind that contamination lurks in every corner of a ship and in the surrounding surface waters. Possible sources of metals or hydrocarbons include: - the ship itself, as most parts are made of metal, which is either exposed or coated with paints containing metals and oils or plasticizers - grease and oil on the ship - smoke, fumes, sewage, garbage, lubricants and fuel released to the air or sea by the ship - the hydrographic wire, messengers and weights - materials used in the construction of the oceanographic samplers themselves - the sea surface microlayer, which is enriched in both metals and hydrocarbons [4] Surprisingly, for a long time oceanographers did not seriously concern themselves with such problems, even though many of them had been pointed out twenty years ago [31]. 
Thus, sampling and shipboard handling techniques have been the weakest links in marine ultra- trace analysis. Improved techniques have been designed in recent years in order to curb contamination. 250 1) Sampling techniques We have employed a number of hand collection methods using small launches away from the mother ship. These methods minimize the contamination, but they cannot be used unless the sea is \/ery calm. These methods moreover yield only shallow samples and are not well suited for collecting large numbers of samples. Pumping systems of bulk samplers attached to a hydrowire can be used on a ship. Not all pumping systems or bulk samplers are suitable for obtaining sea water for ultra-trace analysis. Somewhat different sampling equipment moreover is required for ultra-trace hydrocarbons than for ultra-trace metals. a. Ultra-trace metal samplers Most pumping systems for ultra-trace metal sampling are too contaminated [5] and the clean ones, such as those using peristaltic pumps and acid-cleaned Teflon tubing, cannot be used to obtain yery deep samples. We have been using PVC 3 Niskin bottles with Teflon- coated stainless steel coil springs and with all other sources of metals removed [6]. The bottles are acid-cleaned and used with stainless steel hydrowire, messengers and weights. These samplers, however, and the newer models replacing them such as the Top-Drop Niskin [5] and the Go-Flo Sampling Bottle [7], are not entirely satisfactory because they pass through the surface open and/or they are made of PVC, a possible source of zinc and copper among other metals [8J. Clean samplers which penetrate the surface of the ocean in a closed position should soon be available for ultra-trace metal analysis. Patterson and co- workers have been testing a super-clean deep ocean water sampler [9,10]. Seakem Oceanography Ltd. has developed a Teflon and nylon sampler of different design [11]. b. Ultra-trace hydrocarbon samplers Pumping systems have been used successfully for dissolved hydrocarbon gases [12] but their utility has not been demonstrated for polycyclic aromatic hydrocarbons or other high molecular weight hydrocarbons. As in the case of ultra-trace metal analysis, bulk samplers should not pass through the surface slick open and should be built of contaminant-free materials. The Blumer Organic-Free Water Sampler [13], consisting of an aluminum pressure casing with a glass liner, is available commercially [14]. We have been using this sampler with a few modifications [15], one of which is a long inlet tube as an aid in getting samples uncontaminated by oil originating from the hydrowire or the outside surface of the sampler itself. 2) Shipboard handling techniques a. An answer to shipboard contamination - a seagoing clear laboratory In many instances, a certain number of operations should be performed on sea water samples immediately after collection. These operations include subsampling, acidification, spiking, filtration, preconcentration, extraction, eta. Traditionally, these operations have been carried out in the wet laboratories of oceanographic vessels, thus being subjected to contamination as discussed earlier. In order to avoid contamination at this stage in ultra-trace metal work, it has been suggested that sea water be handled in plastic enclosures away from other activities. A similar approach could be used in ultra-trace hydrocarbon work. 
Instead, we have chosen to use portable seagoing laboratory modules, complete with Polyvinyl chloride 251 clean room sections, which can be hoisted and bolted on to decks of different research vessels. The modular concept has been used before for GEOSECS, but only involved portable wet laboratories rather than clean laboratories. Our shipboard laboratory modules con- stitute clean enclaves in a dirty environment. b. Shipboard laboratory modules (1) Description of the Ocean Chemistry Division modules Our two shipboard laboratory modules (fig. 1) are identical in construction. Each module, including its steel lifting and support frame, has overall dimensions (4.72 m length x 2.63 m width x 2.55 m height) which were dictated in a large part by the oceano- graphic vessels available to our Division. The gross weight of each module is about 3200 kg. Steel lifting and support frames completely box in the modules. When a module is moved to or from a ship, lifting slings for a crane are attached to brackets in the four top corners and the module is hoisted from above. The rigidity of the frame prevents the module from twisting and buckling under its own weight. A steel mounting platform, which conforms to the curvature of the deck, is used with the module on the two most used oceanographic vessels. The frame, mounting platform and deck are bolted securely together. On other ships, which are used occasionally, the frame is welded to leveling brackets which are in turn welded to the deck. The walls and ceiling of each module are made of 8.26 cm wide panels which have a foamed polyurethane core bonded between an inner and outer skin of steel prefinished with a white baked-on epoxy coating. The floor consists of galvanized sheet metal over plywood. The underside is sprayed with an asphalt coating and the upperside coated with epoxy paint. The paint surface is covered with a self-adhesive Teflon-over-vinyl laminate. The outer doors are solid core with white baked-on epoxy paint over steel. The internal walls, doors and panels are constructed of wood and are coated with white epoxy paint. All of the doors are provided with rubber seals. The bench tops throughout the modules are made of wood with a skin of 316 stainless steel sink. Besides the usual 115 V, 15A circuits, each module is provided with two 208 V, 35A circuits. Conduits and electrical outlets are wall mounted and coated with epoxy paint. Incandescent lighting is used throughout. The configuration of the air systems is consistent with a desire to use a little potential working space as possible and a necessity to place air intake and exhaust ports high on the modules to protect them against sea water breaking across the deck. In the preparation room, entering air is passed through a small non-resinous fiber filter in a protected port. Exiting air is vented through a 2.8 m 3 /min (100 cfm) fan. This arrangement assures adequate air exchange in the small preparation rooms and also provides a slight negative pressure so that air would tend to travel into the preparation room from other parts of the module. The combination change room, air lock and air shower is, we believe, a reasonable compromise to reduce space use. The air shower is provided by compressed air being forced through perforations along the length of a pipe located in each corner. Air from the shower is drawn into the main air system of each module. The clean room in each module is serviced by an air filtering and recirculating system. 
Air from the change room, clean room and outside enters a small plenum above a 95 percent efficient basket filter of dimensions 61.0 cm x 61.0 cm x 305 cm. After passing through the basket filter, the air travels through an activated charcoal filter of the same dimens- ions, air-cooling coils, and a 28.3 m 3 /min (1000 cfm) squirrel cage fan. The filtered air enters the clean room from beneath the bench via an opening which provides space for mount- ing a 76.2 cm x 61.0 cm x 30.5 cm HEPA 3 filter (not yet installed). Manipulations requiring 3 High efficiency particulate air. 252 1 pm) bag filter, activated carbon filters and a propeller fan delivering air at a rate of about 20 m 3 /rnin. The handling of samples in ultra-trace work is carried out in Class 100 VLF hoods. D. Instrumental analysis Most analytical instruments used in ultra-trace work are too sophisticated to operate routinely on shipboard. Often they require more space than is readily available, more power than the ship can spare, drawers of spare parts and an electronics technician stand- ing by. Although a few days of instrumental down time may be tolerable in a shore-based laboratory, it may mean the loss of valuable data on a ship since oceanographic cruises generally operate within a tight schedule. We have adopted a policy of doing as much of an analysis as is practicable in the shipboard laboratory modules and completing them in our shore-based clean rooms, where some of our instruments, such as the GC/MS and the anodic stripping unit, are set up. ^Vertical laminar flow. 254 E. Some of our data illustrative of a unified approach to ultra-trace analysis of sea water The plot shown in figure 2 was obtained in the fluorimetric analysis of extracts of sea water of the Southern Beaufort Sea. The study zone contained a mixture of Mackenzie River water, ice water and Arctic ocean water. The plot shows a correlation (r = 0.75) between the concentration (in chrysene equivalents) of fluorescent extractable compounds (FEC) and salinity. We feel we have successfully reduced general contamination influences, which would be expected to have a levelling effect on the data, to a point where the correlation shown in figure 2 appears. In the case of trace metal analysis, a recent study [23] conducted in the waters of the Strait of Georgia, British Columbia, using clean laboratory methods, yielded the background data presented in table 2. Results compare very favorably with the more recent values in table 1, especially when one considers that the results in table 2 are for estuarine waters. Table 2 Background concentrations of some trace metals in estuarine waters of the Strait of Georgia, B.C. [23] Metal Component Measured dissolved Number of Samples 3 22 Range of Concentrations [yg/kg] Average Concentration [yg/kg] Analytical Technique ASV (TFE) C Cd 0.01 - 0.05 0.02 Cu dissolved 22 0.20 - 0.98 0.50 ASV (TFE) Pb dissolved 22 0.04 - 0.28 b 0.15 ASV (TFE) Pb total 10 0.04 - 0.13 0.07 ID d Zn dissolved 22 0.33 - 1.74 0.89 ASV (HMDE) e a Samples taken at depths ranging from 1 to 200 meters One value of 0.55 not included Differential pulse anodic stripping voltammetry with a thin film rotating glassy carbon electrode Isotope dilution mass spectrometry p Differential pulse anodic stripping voltammetry with hanging mercury drop electrode 3. Conclusion "The analytical results should characterize the original system and not one that is a modification created by the analytical processing" [24]. 
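The correlation quoted above for fluorescent extractable compounds versus salinity (r = 0.75) is an ordinary Pearson correlation coefficient; the short sketch below shows the calculation on purely hypothetical salinity and FEC pairs.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical salinity (parts per thousand) and FEC (ng/L, chrysene equivalents) pairs:
salinity = [5, 10, 15, 20, 25, 30]
fec = [40, 55, 48, 70, 85, 90]
print(round(pearson_r(salinity, fec), 2))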
255 § CN O 00 >o (T>6u) NOTlVillN3DNOD D3d 256 References [I] Hume, D. N., Fundamental Problems in Oceanographic Analysis, Analytical Methods in Oceanography, Gibb, R. P., Jr., Ed., 1-8, Advances in Chemistry Series 147, American Chemical Society, Washington, DC (1975). [2] Participants of the Lead in Seawater Workshop, Inter-laboratory Lead Analyses of Standardized Samples of Seawater, Marine Chemistry 2_, 69-84 (1974). [3] Burrell , D. C, Atomic Spectrometric Analysis of Heavy - Metal Pollutants in Water, 87-100, Ann Arbor Science, Michigan (1974). [4] Duce, R. A., Quinn, J. G., Olney, C. E., Piotrowicz, S. R., Ray, B. J., and Wade, T. L., Science 176, 161-163 (1972). [5] Segar, D. A. and Berberian, G. A., Trace Metal Contamination by Oceanographic Samplers, Analytical Methods in Oceanography, Gibb, R. P., Jr., Ed., 9-15, Advances in Chemistry Series 147, American Chemical Society, Washington, DC (1975). [6] Wong, C. S. and Berrang, P. G., Lead in Sea Water, Reference Manual for Ocean Chemistry Sampling Techniques, unpublished manual, P1-P10, Institute of Ocean Sciences, Victoria, British Columbia V9A 3S2 (1976). [7] Go-Flo Sampling Bottle Model 1080, Data Sheet No. 108-75A, General Oceanics Inc., Florida 33127. [8] Robertson, D. E., Role of Contamination in Trace Element Analysis of Sea Water, Anal. Chem. 40, 1067-1072 (1968). [9] Patterson, C. C. and Settle, D. M., The Reduction of Orders of Magnitude Errors in Lead Analyses of Biological Materials and Natural Waters by Evaluating and Controlling the Extent and Sources of Industrial Lead Contamination Introduced During Sample Collecting, Handling and Analysis, Accuracy in Trace Analysis: Sampling, Sample Handling, Analysis, LaFleur, P., Ed. Vol I, 321-351, NBS Spec. Pub. 422, U.S. Government Printing Office, Washington, DC 20404 (1976). [10] Patterson, C. C. and Schaule, B., California Inst, of Techno!., personal communication (1976). [II] Berrang, P. G., New Samplers for the Ultra-Trace Analysis of Heavy Metals and Hydro- carbons, unpublished manuscript. [12] Sigalove, J. J. and Pearlman, M. D., A Continuous Ocean Sampling and Analysis System, Undersea Technology, 24-26 (March 1972). [13] Clarke, R. C, Jr., Blumer, M. , and Raymond, S. 0., A Large Water Sampler, Rupture- Disc Triggered, for Studies of Dissolved Organic Compounds, Deep-Sea Res. 14_, 125- 128 (1967). [14] Blumer Organic-Free Water Sampler Model 1730, Data Sheet No. 173A, Benthos, Inc., Mass. 02556. [15] Cretney, W. J., Operation of the Blumer Organic-Free Water Sampler, Reference Manual for Ocean Chemistry Sampling Techniques, unpublished manual, H5-H6, Institute of Ocean Sciences, Victoria, British Columbia V9A 3S2 (1976). [16] Robertson, D. E., The Adsorption of Trace Elements in Sea Water on Various Container Surfaces, Anal. Chim. Acta 42, 533-536 (1968). 257 [17] Gordon, D. C, Jr. and Keiser, P. D., Estimation of Petroleum Hydrocarbons in Seawater by Fluorescence Spectroscopy: Improved Sampling and Analytical Methods, Tech.. Report No. 481, Bedford Institute of Oceanography, Dartmouth, Nova Scotia (1974). [18] Thiers, R. E., Separation, Concentration, and Contamination, Trace Analysis, Yoe, J. H. and Koch, H. J., Jr., Eds.«, 637-666, J. Wiley and Sons, New York (1957). [19] Pinta, M., Detection and Determination of Trace Elements, translated from French (Dunod, Paris (1962)) by Bivas, M., Israel Program for Scientific Translations Ltd., 430, Ann Arbor Science Publishers, Ann Arbor (1966). 
[20] Blumer, M., Contamination of a Laboratory Building by Air Filters, Contam. Control 4_, 13-14 (1965). [21] Tolg, G., Extreme Trace Analysis of the Elements - I. Methods and Problems of Sample Treatment, Separation and Enrichment, Talanta 1_9, 1489-1521 (1972). [22] Mitchell, J. W., Ultrapurity in Trace Analysis, Anal. Chem. 45, 492A-500A (1973). [23] Wong, C. S., Berrang, P. G., and Erickson, P. E., Data Report for Cruise 0C-76-IS-001 , Contract DSS File Ref. 5508. KF832-5-SP009B, Department of Environment, Ocean and Aquatic Affairs, Pacific Region, Ocean Chemistry Division, Victoria, British Columbia (March 1976). [24] Ciaccio, L. L., Ed., Water and Water Pollution Handbook, Vol. I, vii, Marcel Dekker, New York (1971). [25] Sverdrup, H. U., Johnson, M. W. , and Fleming, R. H., The Oceans: Their Physics, Chemistry, and General Biology, Prentice-Hall, New Jersey (1942). [26] Goldberg, E. D., Minor Elements in Sea Water, Chemical Oceanography, Riley, J. P. and Skirrow, G., Eds., Vol. I (First Edition) 163-196, Academic Press, London (1965). [27] Brewer, P. G., Minor Elements in Sea Water, Chemical Oceanography, Riley, J. P., and Skirrow, G., Eds., Vol. I (Second Edition) 415-496, Academic Press, London (1975). [28] Zirino, A. and Healy, M. L., Voltammetric Measurement of Zinc in the Northeastern Tropical Pacific Ocean, Limnol. Oceanogr. ]6^, 773-778 (1971). [29] Chester, R. and Stoner, J. H., The Distribution of Zinc, Nickel, Manganese, Cadmium, Copper, and Iron in Some Surface Waters from the World Ocean, Mar. Chem. 2, 17-32 (1974). [30] Eaton, A., Marine Geochemistry of Cadmium, Mar. Chem. 4_, 141-154 (1976). [31] Cooper, L. H. N., J. Mar. Res. 17, 128-132 (1958). 258 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) A MODIFIED PROCEDURE FOR DETERMINATION OF OIL AND GREASE IN EFFLUENT WATERS G. M. Hain and P. M. Kerschner Cities Service Oil Company Cranbury, NJ 08512, USA 1. Introduction There are currently three procedures listed in EPA's "Manual of Methods for Chemical Analysis of Water and Wastes" for the determination of oil and grease in effluent waters. The methods listed are: Storet 00550 - Oil and Grease, Total Recoverable, Soxhlet Extraction Storet 00556 - Oil and Grease, Total Recoverable, Separatory Funnel Extraction Storet 00560 - Oil and Grease, Total Recoverable, Infrared All three are based on extraction of the contaminants with a suitable solvent such as carbon tetrachloride or FreonR 113 followed by quantizing the amount of oil and grease extracted. The methods differ either in mode of extraction or the quantitative method of measurement. 2. Discussion Storet 00550 collects the oil and grease on a diatomaceous-silica filter, followed by a Soxhlet extraction of the filter cake with FreonR 113. The Freon R 113 is distilled from the extract and the residue weight is a measure of contaminants present. Storet 00556, as the title implies, uses three successive extractions with Freon R 113 using a separatory funnel. As in the previous procedure, solvent is distilled from the extract and the residue weight is a measure of contamination. Storet 00560 uses the separatory funnel procedure for extraction but the degree of contamination is determined by a standard infrared procedure. All three procedures require considerable time and manpower and suffer other limitations as well. 
The distillation step in the first two limit them to relatively non-volatile hydrocarbons boiling above 70°C, and the multiple steps give rise to sample loss. The three methods were tested by a single laboratory (MDQARL) on a sewage dosed with a No. 2 fuel oil and Wesson oil. Based on this testing, precision and accuracy statements are given with the method write up. Storet 00550 - Soxhlet extraction gave an 88 percent recovery with a standard deviation of 1.1 mg. Storet 00556 - Separatory funnel extraction gave a 92 percent recovery with a standard deviation of 0.9 mg. 259 Storet 00560 - Infrared gave a 99 percent recovery with a standard deviation of 1.4 mg. The high recovery for the infrared method is somewhat misleading. The scope of the method states, "the method is applicable to measurement of most light petroleum fuels, although loss of about half of any gasoline present during the extraction manipulations can be expected." Had the sewage been dosed with a gasoline as well as No. 2 fuel oil, recovery would have been less than the 99 percent for the IR procedure and even lower than stated for the other two. A further discrepancy found by this laboratory for the IR procedure was a reversal of peak profile in some instances at low levels of oil and grease effluent contamination. Since the procedure uses a differential IR scanning technique, it was evident that the unknown solvent extract lost something contained on the reference solvent cell which, in this case, was FreonR 113. Close analysis of the solvent showed that it contained varying minor quantities of low boiling contaminants. If care was not taken to use the same solvent in both reference cell and extract or if during the filtration of solvent extract considerable solvent was lost, the scan reversal was observed. The reversal was only observed in samples of low level oil and grease. At higher levels the effect was masked by the stronger hydrocarbon absorbances. Some effluents have a tendency to form stable emulsions with the extracting hydrocarbon, which extends the overall workup time and increases evaporation losses during filtration of extract, thereby aggravating the solvent contaminant effect. The cited limitations made it desirable to develop a modified procedure. The IR method was selected as the base method since it was more amenable to low boiling hydro- carbons. The following goals were set: 1. Reduce overall time and manpower requirement. 2. Use smaller volumes of solvent to increase concentration of extracted hydrocarbon in solvent. 3. Prevent loss of solvent to minimize light hydrocarbon loss and solvent contaminant effect. 4. Be able to handle emulsion-prone samples with a minimum of handling. The modified procedure developed has six steps. First, a one-liter sample is collected in a wide mouth jar of 1250 cm 3 capacity. Sodium acid sulfate is added at the time the sample is taken to assure an acid pH. Second, 30 cm 3 of FreonR 113 is added to the sample and the water/FreonR 113 is agitated on a Red Devil paint mixer for 3 minutes. Third, the solvent extract is allowed to settle to the bottom of the sample jar. Fourth, a small sample of extract is removed by syringe equipped with a long needle. Fifth, the syringe is connected to a filter assembly containing a MilliporeR teflon filter and sample forced through directly into an IR cell. Sixth, a differential IR scan is made over the range of 3200 cm -1 to 2700 cm" 1 . This procedure meets all the goals originally set. 
Analysis time was reduced so that 50 samples can be run per eight-hour shift by a single technician, whereas the current method requires 2 persons to finish just 10 samples in a like period. Severe emulsions can reduce the number below the 10 samples/day. Smaller volumes of solvent require adequate mixing to assure complete extraction. International Harvester describes a standard test for water tolerance in lubricants in which oil and water are intimately mixed using a Red Devil paint shaker or its equivalent 260 for 5 minutes. It was found that 30 cm 3 of Freon R 113 effectively extracted all the hydrocarbons from a 1 liter sample of terminal effluent in a single extraction by agitating with a paint shaker for 3 minutes. This gives a concentration 33.3% greater than in the current procedure where the solvent extract is diluted to 100 cm 3 before running the IR scan. The 30 cm 3 of Freon^ 113 are added directly to the sample container thus eliminating the transfer of sample to separatory funnel and rinsing of sample jar with FreonR 113, resulting in subsequent loss of solvent and light hydrocarbons. After agitation, the Freon^ 113 extract settles to the bottom of the sample jar and the water layer acts as an effective barrier to any solvent/hydrocarbon loss. The pressure filtration through the Teflon^ Millipore^ filter acts as a coalescer and not only dries the sample but breaks emulsions as well. In the most severe cases water droplets may pass through the filter and a second filtration may be required. Direct injection into an IR cell completes the sample handling. Since there is no solvent loss, all calculations are based on a 30 cm 3 solvent extraction. Standard curves are prepared at varying concentrations using standards which match the type of contaminants expected from a given terminal. Standards containing gasoline showed no difficulty from light hydrocarbon loss. The infrared spectrograph used is a Hilger-Watts Infragraph H-1200, and a special absorbance paper was designed so that ppm water can be read directly from the IR scan. The modified procedure was checked against Storet 00560 for a series to terminal effluents collected at the same time. This particular terminal did not handle gasoline, and the problem of light hydrocarbons was not present. The results proved the effectiveness of the modified procedure. Work with gasoline storage tank bottoms brought out an additional advantage of the modified procedure when large quantities of polar components are suspected. A 5 to 10 cm 3 sample withdrawn with the syringe can be transferred to a vial, a large excess of silica gel added, and the contents shaken thoroughly. A direct comparison of hydrocarbon content before and after exposure to silica gel gives a reading of total hydrocarbon vs. non-polar hydrocarbons. 3. Conclusion The modified procedure is now used by CITG0 and samples have been handled on a same- day in and out analysis basis. It is felt that this procedure may offer the EPA an alternate method for determination of oil and grease in effluent water. 261 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). VARIABILITY OF TRACE METALS IN BED SEDIMENTS OF THE PO RIVER: IMPLICATIONS FOR SAMPLING M. T. Ganzerli-Valentini , V. Maxia, S. Meloni Institute of Inorganic and General Chemistry University of Pavia Viale Taramelli 12, 27100 PAVIA, Italy and G. 
Queirazza and E. Smedile ENEL Thermal and Nuclear Research Center Environmental Laboratory of Trino Vercellese 13039 TRINO, Italy 1. Introduction Since 1971 a stretch of the Po river (the largest Italian river) (fig. 1), nearby the site where the fourth Italian nuclear power station is under construction, was submitted to environmental analysis in order to achieve a .thorough characterization of the rivers ecosystem before plant operations started. Special attention was devoted to the aquatic ecosystem controlling most of its components (water, suspended material, periphyton, benthic organisms, bottom sediments, fish). This investigation required a detailed sampling program with respect to the choice of the sampling sites and to the sampling frequency. The stretch under investigation is located in the middle course of the Po river; the pollution load brought by effluents is very high, this area being downstream from the major pollution sources (industrial, agricultural, and urban) of northern Italy. This stretch may be distinguished into 2 zones: the first, upstream from the dam of a hydroelectric power station, characterized by a decreasing river current, and a second one, downstream, with a swift river current. Despite these unfavorable conditions, the water quality is generally satisfactory, thus indicating the efficacy of the self-purifying power of the river. This paper deals mainly with problems associated with the sampling and analysis of the most significant components (water, suspended material and bottom sediments) of a riverine ecosystem affected by different pollution sources. The investigated stretch (approx. 20 Km) covers eight sampling stations (fig. 1), six upstream and two downstream of the dam. In each station there are three sampling sites, nearby the right bank (A), in the middle of the river (B) and nearby the left bank (C). 2. Experimental Surface bottom sediment samples r.2,4] 1 were collected quarterly with a "Van Veen" mechanical dredge from May 1973 to June 1974. An aliquot of the samples was freeze-dried and used for neutron activation analysis and atomic absorption spectroscopy investigation for elemental analysis. Another aliquot, oven-dried at 80°C, was analyzed for organic nitrogen (after Kjeldahl) and carbon [1,2]. Grain size measurement was carried out by a figures in brackets indicate the literature references at the end of this paper. 263 z o < t- O z < U- o (/) Z o < o o < cc < o a < 264 wet-sieving technique and the fraction below 2ym was submitted to x-ray powder diffraction analysis to investigate the mineral ogical composition. A 20-liter single sample of river water was collected weekly over a one year period at one station and submitted to chemical and physical analysis. Water samples from the 21 sampling sites (collected at the water surface and near the bottom to check any surface and vertical variations of the chemical composition) were analyzed for the physical and chemical parameters according to APHA, FWPCA and AWWA methods [3]. The chelating capacity was measured with the method described by Kunkel and Manahan [4], Suspended material was collected by means of a composite sampling device [5] and submitted to gravimetric determination and chemical analysis. Element analysis included the determination of iron, manganese, zinc, copper, nickel, lead, cadmium, cobalt, chromium, mercury, cesium, selenium, and arsenic. 
Cs, Co, Cr, As, Se and Hg were determined by neutron activation analysis at the Radiochemistry Laboratory of the University of Pavia; Co, Cr and Cs content was evaluated by an instrumental method, whereas As, Se and Hg required the use of radiochemical separations: they were carried out either with a distillation method developed by Orvini, et at. [6] or by adsorption on in- organic adsorbers or by solvent extraction according to S. Meloni, et at. [7]. Fe, Mn, Cd, Pb, Cu, Ni and Hg (only in the water samples) were determined by atomic absorption spectroscopy using flame and flameless techniques according to the nature of the sample. Fe, Mn, Zn and Ni, in acidified (HN0 3 ) water samples, were determined after precon- centration by conventional AAS. Pb, Cd and Cu in water samples were determined according to Dolinsek and Stupar [8]. Hg determination was carried out with a cold vapor technique [9]. Sediment and suspended materials were mineralized with a mixture of HN0 3 -HC10 4 and HF before AAS analysis. 3. Results Water data reported in tables 1 and 2 show the ranges, average values, standard de- viations and coefficients of variation of the measured parameters and of the elemental concentrations with respect to time (table 1) and space (table 2). A comparison of the coefficients of variation of each parameter or concentration in the two tables indicates that the time variation is much greater than the space variation. To obtain a better evaluation, on a statistical basis, of the influence of the time and space variations on the measured parameters and concentrations, their average values were compared to a single result using the t-test. The t-values for data on table 1 show that most of the parameters and concentrations have meaningful time variations for a 95 percent probability (t=2.01, 49 d.f.) except for B0D 5 , conductivity, alkalinity, chloride, sulfate, detergents, phenols, cobalt and chromium. The results for detergents, phenols, chromium and cobalt are very poor: their high variations are responsible for the fact that the t-values do not indicate any significant differences. As for the space variations the data in table 2, relative to 21 sampling sites, show a limited variation (zinc and copper) or negligible variation for a 95 percent probability (t= 2.086, 20 d.f.). The data from the grain size measurement are reported in fig. 2. The grain sizes were divided into 3 groups: sand (>63ym), silt (<63ym), sandy silt (mixed com- position). The clay mineral fraction (<2ym) was not classified as this fraction was found to be always below 2%. According to the present classification the bottom sediment samples collected from sampling sites T1(C), T2(A), T3(A), T6(A), T8(A,B,C), and T9(A,B,C) were considered sand; the samples from sampling sites T1(B), T2(B,C), T3(B,C), T4(A,B,C), T5(A,B) and T6(B) were considered sandy silt; the samples from sampling sites T1(A), T5(C) and T6(C) were con- sidered silt. The data from fig. 2 indicates that the bottom sediments at all stations from Tl to T6 are not homogeneous in respect to grain size, wereas at sampling stations T8 and T9 they are highly homogeneous and exclusively sand. The lack of homogeneity in the grain size of the bottom sediments may be related to the hydrological regime [10]. 
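The time-variability screening of the water data described above can be illustrated numerically. The paper quotes only the critical value and degrees of freedom (t = 2.01, 49 d.f.), so the one-sample comparison below, which tests the mean of 50 weekly values against the single reference determination, is an assumed reading of the procedure, and all numbers are invented.

```python
import numpy as np
from scipy import stats

# Illustrative numbers only: 50 weekly determinations of one water-quality
# parameter and a single reference result.  With n = 50 the comparison has
# 49 degrees of freedom, matching the critical value t = 2.01 (95 percent,
# two-sided) quoted in the text.
rng = np.random.default_rng(0)
weekly_values = rng.normal(loc=8.0, scale=1.2, size=50)  # e.g. dissolved oxygen, mg/l
single_result = 7.1                                      # hypothetical single determination

t_stat, p_value = stats.ttest_1samp(weekly_values, single_result)
cv = 100 * weekly_values.std(ddof=1) / weekly_values.mean()  # coefficient of variation, %

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, C.V. = {cv:.0f}%")
print("significant at the 95% level:", abs(t_stat) > stats.t.ppf(0.975, df=49))
```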
The x-ray investi- gation carried out on the sediment fraction (<2ym) shows the presence of quartz, calcite and the following clay minerals, listed according to their abundances: illite > kaolinite > chlorite = vermiculite > montmorillonite = interlayered clays > quartz = calcite. 265 Table 1. Annual range, standard deviation, C.V., and average of selected water parameters in the Po river at ENEL hydroelectric power station. Parameters Averages S.D. C.V. Annual range ■1 3 -1 Water discharge, m s Temperature, °C PH Conductivity, ymhos cm Dissolved oxygen, mg ? I" Oxygen saturation, % C D, mg 2 I" 1 B D 5 , mg 2 I" 1 Detergents, mg MBAS I" Phenols, yg C 6 H 5 0H I -1 Chelating capacity, mg Cu I' Organic nitrogen, mg N I~ Ammonia, mg N I" Nitrite, mg N I" Nitrate, mg N I" Phosphate, mg P I" Total phosphorus, mg P I" Sulphate, mg S I" Chloride, mg CI I" Silica, mg Si I Total hardness, mg CaCO^, I Total alkalinity, mg CaCO^ I -1 Total iron, mg Fe I Total manganese, mg Mn I~ Total zinc, yg Zn I~ Total copper, yg Cu I" Total nickel , yg Ni I~ Total chromium, yg Cr I" Total lead, yg Pb I" 1 Total cadmium, yg Cd I~ Total mercury, yg Hg I" Total selenium, yg Se I" Total arsenic, yg As I" Total cobalt, yg Co I~ Total caesium yg Cs I" -1 900 - - 300 - 4,300 13.5 6.6 49 3 - 25 7.6 0.2 3 7.1 - 8.2 370 62 17 260 - 465 8.0 1.2 15 5.2 - 10.1 76 9 12 54 - 104 36 35 97 9 - 237 3.9 2.7 69 1.2 - 6.2 0.01 0.02 200 <0.005 - 0.07 1.5 1.7 113 <0.1 - 4.6 0.32 0.47 147 0.13 - 2.80 1.0 0.5 50 0.4 - 3.1 0.5 0.3 60 0.2 - 1.3 0.04 0.01 25 0.02 - 0.07 1.5 0.5 33 0.3 - 2.7 0.13 0.05 38 0.03 - 0.29 0.22 0.10 45 0.10 - 0.50 15.5 7.0 45 6 - 28 11.7 3.4 29 6.3 - 18.2 1.8 1.1 61 0.2 - 4.0 173 27 16 126 - 250 119 17 14 90 - 164 1.39 1.95 140 0.20 - 9.85 0.12 0.14 117 0.02 - 0.77 55 33 60 22 - 190 18 30 167 2 - 180 21 28 133 3 - 146 88 131 149 <15 - 474 9.9 13.9 140 1.4 - 82.0 0.66 1.03 156 0.07 - 5.10 <0.1 - - - <0.2 - - - 1.98 1.70 86 <1 - 5.63 4.80 4.12 86 <2 - 12.6 3.31 1.87 56 1.9 - 9.0 266 Table 2. Space range, standard deviation, C.V., and average of selected water parameters in 21 sampling sites. Parameter Average S.D. c.v. Space range Water a- u 3-1 discharge, m s 928 a - - - Temperature, °C 17.5 0.4 2 16.9 - 18.1 pH 7.6 0.1 1 7.5 - 7.8 Redox (E h ), mV 0.44 0.04 9 0.36 - 0.49 Redox (E y ), mV 0.48 0.04 8 0.40 - 0.53 Conductivity, yS cm" 325 12 4 315 - 345 Disso' ved oxygen, mg 0~ I 6.3 0.4 6 5.5 - 6.8 Oxygen saturation, % 65 3.9 6 59 - 71 COD mg ? 
I 7.9 0.9 11 6.9 - 9.4 Detergents, mg MBAS I" 0.040 0.005 13 0.030 - 0.050 Chelating capacity, mg Cu I" 0.21 0.14 66 0.10 - 0.50 Ammon' a, mg N I 0.30 0.02 7 0.27 - 0.34 Nitrite, mg N I" 0.12 0.01 8 0.10 - 0.14 Nitrate, mg N I" 1.04 0.15 14 0.86 - 1.36 Phosphate, mg P I" 0.13 0.02 15 0.09 - 0.14 Sulphate, mg S I" 11.8 0.5 4 10.7 - 12.8 Chlor de, mg CI I" 12.1 1.1 9 10.6 - 13.5 Silica, mg Si I" 4.2 0.8 19 2.9 - 5.6 Total hardness, mg CaCO^ I 142 2.8 2 139 - 146 Total -1 alkalinity, mg CaCOo I 116 3.3 3 no - 122 Total -1 iron; mg Fe I 1.59 0.74 46 0.76 - 3.02 Total manganese, mg Mn I" 0.190 0.076 35 0.100 - 0.330 Total zinc, yg Zn I" 68 30 44 40 - 140 Total copper, yg Cu I" 18 10 56 11 - 44 Total nickel , yg Ni I" 48 12 25 35 - 70 Total lead, yg Pb I" 1 8.4 4.1 49 2.9 - 17.2 Total cadmium, yg Cd I" 0.23 0.13 57 0.13 - 0.50 Total mercury, yg Hg I" <0.1 - - - Total chromium, yg Cr I" 28.4 12.9 45 16.0 - 50.0 Total arsenic, yg As I" 1.8 0.6 33 1.0 - 2.8 Total cobalt, yg Co I" 2.4 0.9 37 1.0 - 4.1 Total caesium, yg Cs I" <2 - - - Single value The contents of organic carbon and nitrogen and of the other 13 elements determined in the bottom sediments are summarized in table 3. 267 E3 0.5 - 0.063 mm D.W. 100 1 2 3 4 100 0- 100 0J 100. 100 0J 100 100 0J 100 — -—- '■:'■:'•:'*:'■ llll 1 2 Left 3 4 bank ■I < 0.063 mm Q > 0.5 mm 12 3 4 1 2 3 4 iiii no data :■:■:■:■ 1 R Y.Y, no data iiii iVt'ij T9 T8 T6 T5 T4 T3 T2 Middle of the river Right bank C B Fig. 2 - Grain size measurement data (weight %): 1 sampling of May 197C ; 2 sampling of October 1973; 3 sampling of January 1974; 4 sampling of June 1974. T1...T9 sampling stations. A sampling site nearby the right river bank; B sampling site in the middle of the river ; C sampling site nearby the left river bank. 268 Table 3. Summary of organic carbon and nitrogen, and of element abundances in the three grain size groups of the bottom sediment. ELEMENT 2 SEDIMENT FRACTION SAND SANDY SILT Range Median or average+S.D. Range Median or average±S.D. SILT Range Median or average±S.D, Carbon, % dry weight 0.01-0.10 0.04±0.02 0.03-1.40 0.10 Nitrogen, % dry weight <0. 001-0. 010 0.001 0.001-0.026 0.004 Iron, mg/g 12-30 17±4 13-43 26±9 Manganese, mg/g 0.21-0.48 0.34+0.06 0.25-1.13 0.47 0.50-1.90 1.4±0.4 0.008-0.064 0.026+0.010 31-54 43±7 0.62-1.52 0.9±0.3 Zinc, Copper, Lead, Cadmium, Nickel, Cobalt, Arsenic, Selenium, Mercury, Chromium, Caesium, 26-85 6-10 9-27 0.7-1.8 65-100 6.1-10.9 0.2-3.4 <0.1 <0.03 90-245 1.1-1.8 58±16 7±1 16±4 1.2±0.3 81 ±8 8.6±1.4 0.9 162±40 1.5±0.2 26-690 7-195 13-160 0.8-7.5 75-174 8.0-21.2 0.1-7.9 <0. 1-0.9 <0. 03-0. 46 115-320 1.3-8.0 20 26 1.6 109+29 13±5 2.7 0.5±0.2 0.08 190±52 3.3+2.0 220-1070 104-325 70-450 3.0-9.8 85-177 12.9-23.6 0.1-13.6 0.22-1.34 <0. 03-2. 00 165-425 2.5-6.1 585+248 199+56 202±110 6.5±2 136±32 18±4 1.2 0.9±0.4 0.9 246±78 4.5±1.1 yg/g except as noted. The data are presented according to the three grain size groups and for each element the concentration range was observed. The standard deviation of the average values are also reported. The relatively low standard deviation for the element content in the sand type of sediments indicates a small variability of the element content over a one year period and from one sampling station to another: this implies a reduced capability of adsorption or desorption of the elements in the sand sediments. The elements may be classified as major and minor elements. 
Major elements are carbon, nitrogen, iron and manganese; the latter two are considered to be present as coatings of hydrous oxides on the sediment grains. All the other elements are considered minor elements and are considered to be adsorbed or coprecipitated on the surface of the sediment particles. The data (table 3) show an increasing concentration of the elements, arsenic excluded, from the sand type sediment to the silt type sediment. The increase of the minor elements may be ascribed to the increased specific surface area of the grains in the silt type sediment. The rate of increase is not the same for all the minor elements: zinc, copper, lead, cadmium and mercury show a higher increase than cobalt, nickel and cesium. This observation is consistent with the suggestions advanced by Leland et al. [11], who divided the trace elements in the bottom sediments of Lake Michigan into two classes, accumulating and non-accumulating. The accumulation effect is observed when the iron and organic carbon content increases. Differences between accumulating and non-accumulating elements in the variation of sediment contents were apparent (table 4). The data confirm that for selected sediment samples with mixed grain size composition, the averages and their variability are uniform with respect to sample numbers and sampling frequency.

Table 4. Averages and variability in concentrations (μg/g) of copper and nickel for sediment samples upstream from the dam.

Element   No. of sampling periods   No. of samples   Average ± S.D.
Copper    1                         18               52 ± 85
Copper    4                         72               64 ± 78
Nickel    1                         18               103 ± 22
Nickel    4                         72               111 ± 32

The element contents of the sediments collected at sampling sites where the fraction <63 μm was present were normalized to 100% of this fraction. An index for each element is thus obtained, and the indices have been compared for all the sampling periods. The results are shown in table 5; the last column of table 5 shows the average value of the indices and the relative standard deviation. For most of the elements the coefficient of variation is very low: this indicates that, notwithstanding the highly variable pollution load (table 1) and the large station to station variability of the element content, the time variability is limited. Only mercury and arsenic do not fit this trend. Most of the mercury analyses are reported with a high analytical error, as the measured concentration was near the sensitivity limit of the method: this may give rise to the large variation of the indices. The large coefficient of variation of arsenic may not be ascribed only to analytical error: a different uptake mechanism may occur for this element, but at the moment no valid suggestion can be advanced.

Analyses of the suspended material (5-114 mg l⁻¹ in intermediate flow conditions) show that the element content is higher than or equal to the silt sediment content: this indicates that the deposition of the suspended material is the major source of the trace element enrichment in the bottom sediments. An analysis of the distribution of the considered elements between the suspended and the dissolved materials (<0.45 μm) shows that the occurrence of the elements in the solid phase varies from 4 to 90% according to the series Cs < Co < Cr < As < Ni < Zn < Pb < Mn < Cd < Cu < Hg < Fe.

4. Conclusions

The analytical data gathered in the course of the present investigation provide useful information for planning programs of environmental control of river ecosystems lightly polluted from the chemical point of view.
Water analyses show a large time variability. The station to station variability is limited, so that the volume flowing in the stretch of interest may be considered homogeneous. The sediment analyses show, upstream of the dam, a large station to station variability mainly connected to the conspicuous variability of the granulometric characteristics of the bottom. The presence of the dam increases the deposition rate of the suspended material and consequently the element contents of the bottom sediment. Depending on the current profile in the stretch, the rate of deposition may differ along the same section T1...T6 (A,B,C), thus giving rise to sediments with different element concentrations. Conversely, at the T8 and T9 sampling stations, downstream of the dam, where the water current is swift and the deposition rate negligible, no differences in element contents were observed across the stations. The occurrence of a correlation among the different types of sediments present in the investigated stretch was tested by means of the ratio-matching method of Anders [12]. The results (fig. 3) confirm a good correlation among the sediment samples from sampling sites having similar hydrologic conditions; on the other hand, no correlations were observed among samples from sampling sites having different hydrologic conditions.

Table 5. Element contents in sediments from the Po river (normalized to 100% of the fraction <63 μm).

Element             June 1973   October 1973   January 1974   June 1974   Average ± S.D. (C.V.)
Carbon, % D.W.      1.94        1.85           1.82           1.81        1.86 ± 0.06 (3)
Nitrogen, % D.W.    0.053       0.056          0.040          0.030       0.045 ± 0.012 (27)
Iron, mg/g          46          57             55             54          53 ± 5 (9)
Manganese, mg/g     1.16        1.12           1.28           1.30        1.22 ± 0.09 (7)
Zinc, μg/g          595         619            651            745         653 ± 66 (10)
Copper, μg/g        195         268            233            196         223 ± 35 (16)
Lead, μg/g          149         207            185            204         186 ± 27 (14)
Cadmium, μg/g       8.6         8.1            8.1            9.0         8.5 ± 0.4 (5)
Nickel, μg/g        188         211            191            169         196 ± 21 (11)
Cobalt, μg/g        16          17             19             25          19 ± 4 (21)
Arsenic, μg/g       7           -              -              3           5 ± 3 (59)
Selenium, μg/g      0.7         1.04           1.29           1.52        1.14 ± 0.35 (31)
Mercury, μg/g       0.24        1.33           0.91           2.05        1.13 ± 0.76 (67)
Chromium, μg/g      266         231            194            399         273 ± 89 (33)
Caesium, μg/g       7.3         4.2            4.5            7.4         5.9 ± 1.7 (29)

With regard to the time variability, the normalized data (table 5), obtained for sediment samples collected upstream of the dam and containing the granulometric fraction <63 μm, show a negligible variability for most of the elements. Finally, the data presented herein suggest the following considerations. To obtain an estimate of the average and range of element contents in the river sediments, it is necessary first to set up a current velocity investigation and/or a granulometric analysis of the sediments, to map out the river sites where bottom sediments are to be collected. In an area characterized by sediments with quite different granulometric compositions, the evaluation of the mean level and range of the element contents implies the choice of several sampling sites, to take into account the main hydrologic conditions and consequently the different granulometric compositions. In an area characterized by sediments with similar granulometric composition, a single site provides a sample representative of the area.
Figure 3. Laser monitoring schemes for the atmosphere using heterodyne detection against the sun as a black body source and using an active or a passive space satellite.

A rather extensive number of different types of tunable lasers are presently used for atmospheric monitoring. In the visible region, dye lasers have been used primarily for laser radar systems, with flash-lamp-excited systems delivering the largest energy per pulse. The recent development of the rare gas-fluoride lasers such as KrF, XeF, etc., may provide a very efficient and high energy pump source for dye laser systems.

Parametric oscillators [2] have become much more refined in their operational characteristics in recent years. The most versatile system uses a high power, high repetition rate Nd:YAG laser to pump a LiNbO₃ crystal for tuning in the 1-5 μm region. A computer-controlled, narrow-linewidth system can cover longer wavelengths by frequency down-converting using a nonlinear crystal or possibly by Raman Stokes shifting in hydrogen or other gases. In addition, it is possible to cover the visible wavelength region.

The most conceptually simple of all infrared tunable laser devices are semiconductor diode lasers [3]. Figure 4 shows the wavelength range over which devices with smaller, but finer, tuning capability can be fabricated by selection of lead-salt alloy composition. Most diode lasers operate best near liquid helium temperature, but recent progress [4] has been made in devices which operate continuously above liquid nitrogen temperature. The spectral output of a diode laser usually occurs in a number of modes, each of whose frequency tunes continuously for about one cm⁻¹ before a mode jump occurs. A comparison of the relative resolution capabilities of a tunable diode laser and a high resolution spectrometer is shown in figure 5, together with a Doppler-limited absorption spectrum of C₂H₄. The major advantage of tunable laser sources over such spectrometers using black body radiation is the brightness of the source as much as the high resolution.

Figure 4. Gross bandgap tuning for various lead-salt alloy compounds. Also shown are approximate band centers for several important molecular atmospheric contaminants.

Figure 5. Transmission of a high-resolution laboratory grating spectrometer taken using an extremely high-resolution tunable diode laser (spectrometer slit function; 0.5-Torr C₂H₄, 30-cm cell, 80-μm slit width, near 945 cm⁻¹). Also shown, superimposed on the diode laser scan, are the Doppler-limited absorption lines of C₂H₄. Data taken by E. D. Hinkley.

A very important spectral region for the measurement of atmospheric contaminants is between 2 and 4 μm, where the fundamental hydrocarbon bands occur. A laser spectrometer [5] has been developed which uses the difference frequency between a continuous wave (cw) argon-ion laser and a cw dye laser, generated in the nonlinear crystal LiNbO₃. An example of the low pressure (Doppler-limited) spectrum and an atmospheric-pressure-broadened spectrum of methane taken with this device is shown in figure 6. Spectra of this sort are quite necessary for modeling of atmospheric spectral signatures, particularly in the presence of other contaminants and water vapor.
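For orientation (the specific lines below are illustrative and are not taken from the paper), the difference-frequency process conserves photon energy, so the generated wavenumber is simply the difference of the two pump wavenumbers:

$$
\tilde{\nu}_{\mathrm{DF}} = \tilde{\nu}_{\mathrm{Ar^{+}}} - \tilde{\nu}_{\mathrm{dye}};
$$

mixing, for example, the 488 nm argon-ion line (about 20,490 cm⁻¹) with a dye laser at 600 nm (about 16,670 cm⁻¹) yields roughly 3,820 cm⁻¹, i.e. about 2.6 μm, which falls within the 2-4 μm hydrocarbon band region covered by this instrument.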
While the difference-frequency spectrometer now uses commercially available components, the LiNbO₃ crystal does not transmit beyond 5 μm. Wide wavelength coverage with sufficient power for spectroscopic applications can be expected when suitable nonlinear crystals, which transmit in the visible and infrared beyond 5 μm, are developed.

Figure 6. Absorption of methane taken using the cw dye-laser difference-frequency spectrometer in the low pressure and high pressure cases (10 Torr CH₄, and 10 Torr CH₄ in 1 atm of air, near 2948 cm⁻¹). Spectral resolution is about 10 MHz. Data taken by A. S. Pine.

3. Conclusion

An alternate approach to a cost-effective and efficient tunable or quasi-tunable infrared source is the use of the newly developed chalcopyrite nonlinear crystals such as CdGeAs₂ and AgGaSe₂ for frequency mixing of infrared molecular gas lasers. Figure 7 shows the wavelengths covered by the sum and difference frequency combination bands of a multiwavelength carbon monoxide and carbon dioxide laser. Line densities in these regions are high enough that a coincidence within a pressure-broadened linewidth can be found for most atmospheric contaminant molecules. A significant improvement in the efficiency and average output power for second harmonic generation of a CO₂ laser has been achieved [6] using a 1 cm length of CdGeAs₂. Figure 8 shows the results of one experiment in which the average second harmonic output power increases linearly with the square of the input power, as it should theoretically, without any sign of saturation due to crystal heating. Maximum average second harmonic output power of over one watt has been demonstrated using a high repetition rate Q-switched CO₂ laser, with peak and average power conversion efficiencies of 35 and 20 percent, respectively, demonstrated in separate experiments. Present crystals, which are still quite difficult to grow with high yield, are capable of one watt average output power levels over much of the sum and difference frequency bands. Use of these systems for differential absorption measurements as well as laser radars is very attractive because of the simplicity of frequency calibration, the overall power efficiency, and the use of simple molecular gas lasers.

Figure 7. Wavelengths covered by the sum and difference-frequency combination bands of a CO and CO₂ laser (bands near 2.5-3.2 μm and 4.5-5.5 μm, a difference-frequency band near 9.2-23 μm, and the direct CO (5-6.5 μm) and CO₂ (9-11 μm) laser ranges).

Figure 8. Average second harmonic output power from CdGeAs₂ at 77 K obtained using a high repetition rate Q-switched CO₂ laser. From reference [6].

References

[1] Patel, C. K. N., Burkhardt, E. G., and Lambert, C. A., Science 174, 1173 (1974). Burkhardt, E. G., Lambert, C. A., and Patel, C. K. N., Science 188, 1111 (1975).
[2] Byer, R., Tunable Lasers and Applications, in Proceedings of an International Conference, Mooradian, A., Jaeger, T., and Stokseth, P., Eds., Springer-Verlag, publishers.
[3] Melngailis, I. and Mooradian, A., in Laser Applications in Optics and Spectroscopy, Jacobs, S., Sargent, M., Scully, M., and Scott, J., Eds. (Addison-Wesley Company, 1975).
[4] Groves, S. H., Nill, K.
W., and Strauss, A. J., Appl. Phys. Lett. 25, 331 (1974). Walpole, J. N., Calawa, A. R., Harman, T. C, and Groves, S. H., Appl. Phys. Lett. 28, 552 (1976). [5] Pine, A. S., J. Opt. Soc. Am. 64, 1683 (1974). [6] Menyuk, N., Iseler, G. W., Mooradian, A., Appl. Phys. Lett, to be published, October, 1976. 286 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) LASER MONITORING TECHNIQUES FOR TRACE GASES Will iam A. McClenny U.S. Environmental Protection Agency Research Triangle Park, North Carolina 27711, USA and George M. Russwurm Northrop Services Incorporated Research Triangle Park, North Carolina 27711, USA 1. Introduction Monitoring techniques based on laser sources have developed along lines that allow utilization of their unique qualities. These qualities include high spectral radiance and beam collimation. Three monitoring systems which use these qualities are discussed in this paper: an opto-acoustic (OA) system, a long path monitoring system, and a laser induced fluorescence system. The last two systems have been tested in field studies during the Regional Air Pollution Study [1.2] 1 . Using an opto-acoustic system based on a C0 2 laser, NH 3 , C 2 Hit, Freon 11 and Freon 12 detection has been investigated under controlled laboratory conditions. Only in a few cases can the cost and sophistication of current laser-based monitoring systems be justified for ambient trace gas monitoring. However, feasibility studies using prototype systems have established the type and quality of data which can, at this point, be obtained. Of particular interest as monitoring tools are the long path monitors that, by their very nature, do not compromise the integrity of the sample being measured. This feature is of particular importance in measuring reactive gas species such as ammonia and hydrogen chloride. All three of the techniques discussed have low minimum detectable limits for certain trace gases and they operate in real-time. 2. Optoacoustic Detection The primary goal of recent opto-acoustic research at the EPA, Research Triangle Park, has been to monitor NH 3 in the ambient air. Max and Rosengren [3] have shown a linear OA response to ammonia over the range 0.1 to 10 ppm using a resonant cell; they have projected a detection limit of 0.1 ppb based on system response parameters. Fischer and Sclwell [4] also have indicated low detection levels for ammonia and have measured a surprisingly high absorption coefficient for NH 3 at 360 torr at the R(30) line in the 9.4 \m band of the C0 2 laser [5]. The interest in NH 3 detection has led to the measurement of NH 3 absorption coefficients by personnel at Lincoln Laboratory [6]. One of the measurements repeated Fischer and Schnell's work with the same result but also determined the ammonia absorption coefficient at R(30) for atmospheric pressure and found a value of 75 cnrUtm" 1 . This value was significantly higher than previously reported values at C0 2 laser wavelengths, x Figures in brackets indicate the literature references at the end of this paper. 287 e.g. 21.9 cirr^atnr 1 for R(8) in the 10.4 pm band [7] and 36 cm^atnr 1 for the R(18) line in the 9 ym band of the C 13 2 6 laser [8]. 
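To put absorption coefficients of this size in perspective, a rough Beer-Lambert estimate can be made (the 10 cm path and 100 ppb mixing ratio below are chosen only for illustration and are not values from the paper):

$$
1 - \frac{P}{P_0} = 1 - e^{-\alpha c L} \approx \alpha c L
= (75\ \mathrm{cm^{-1}\,atm^{-1}})(1\times10^{-7}\ \mathrm{atm})(10\ \mathrm{cm})
\approx 7.5\times10^{-5},
$$

i.e. only about 0.008 percent of the incident power is deposited in the gas for 100 ppb of NH₃ over a 10 cm path. It is this small absorbed fraction, modulated at the chopping frequency, that an opto-acoustic cell registers as a periodic pressure signal.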
Values for R(28) and a number of other lines in the 9.4 μm band had previously been measured at EPA [7]; the R(28) value was found to be only 0.38 cm⁻¹ atm⁻¹ and hence effectively zero with respect to NH₃ absorption when compared to R(30). With this previous work as a basis, an experimental OA system was designed for NH₃ measurements. Main system components included a grating-tuned CO₂ laser; a variable speed chopper; a 15 cm long, 0.95 cm inside diameter stainless steel OA cell; an electret microphone (Model 5336 fabricated by Thermo Electron Corp.); a laser power meter; a lock-in amplifier; and the necessary support equipment. To obtain known concentrations of NH₃, dry air was passed over a permeation tube and then diluted to the desired concentration. The resulting sample was continuously passed through the OA cell at a flow rate of 60 ml min⁻¹. The cell was heated, and a small amount of humidified air was added to the sample prior to the cell inlet. Linearity of response to NH₃ was obtained over the range from 17 to 70 ppb, the lowest concentration used being 17 ppb. A background signal due to window (Irtran 2) absorption was equivalent to 51 ppb. A noise equivalent concentration of 1.4 ppb was obtained using a 10 second time constant at a chopping frequency of 300 Hz. Higher system sensitivity, by a factor of 10, can be obtained using a lower chopping frequency with a closed cell; however, desorption and adsorption of NH₃ from the cell walls make repeatable measurements difficult.

The remaining problem with NH₃ measurement is discrimination against interference. At low concentration levels of NH₃, absorption due to major atmospheric constituents such as H₂O and CO₂ results in OA signals comparable to those due to NH₃. Since the OA response is proportional to the product of the absorption coefficient, α, and the gas concentration, c, direct comparison of αc values indicates the extent of interference for a single-wavelength determination. At R(30) the comparison of αc values, and the resulting interference equivalents using the background window signal as a reference, is given in table 1. Shumate et al. [9] indicate that the αc values for H₂O as given in table 1 have to be considered with some caution, since an interference due to ammonia was not entirely resolved. Since variations of the H₂O and CO₂ concentrations during a measurement sequence would ordinarily be only a small fraction of their absolute concentrations, measurement at a second wavelength would allow a maximum effective interference equivalent due to CO₂ and H₂O to be measured, thereby reducing the measurement uncertainty for NH₃. Other interferences also exist for NH₃ determination [10]. Interference problems of the type referred to in reference 10 have been handled before by the use of several wavelengths along with a mathematical treatment of the data [10,11]. An alternative, simpler approach to the interference problem is the use of a selective scrubber for NH₃, one which will pass interferences and remove NH₃ quantitatively.

Table 1. Interference equivalents of H₂O and CO₂

Gas    Concentration   αc               Interference equivalent (ppb)
H₂O    5 torrᵃ         1.40 × 10⁻⁶      18.7
H₂O    10 torrᵃ        1.65 × 10⁻⁶      22.0
H₂O    15 torrᵃ        1.93 × 10⁻⁶      25.7
CO₂    330 ppm         0.77 × 10⁻⁶      10.3
NH₃    0.07 ppm        5.25 × 10⁻⁶      —

ᵃ Measurements by Shumate et al. [9] using an OA cell.

A second type of experiment, involving separation of gaseous compounds prior to detection, has also been conducted using the opto-acoustic system as a detector.
In a set of feasibility tests a portable gas chromatograph (Model 510, Analytical Instrument Develop- ment, Inc.) was interfaced with the opto-acoustic system and used to measure sub-ppm concentrations of Freon 11 and Freon 12. Cell volume was only 8 cm 3 , giving sufficient resolution of the two eluted compounds to provide complete baseline separation when a sample loop of 5 cm 3 was used. An unoptimized detection limit of 50 ppb was obtained using the R(30) line; using optimum wavelengths available with the C0 2 laser and limiting flow noise through the system, an improvement of an order of magnitude in sensitivity is pro- jected. This demonstration of feasibility shows the possibility of extending the GC-OA approach into new areas of technique development for air pollution problems. The combinat- ion of gas chromatographic and infrared absorption methods may find wide and useful applications. 3. Long Path Monitoring Two current efforts are being directed at the use of long path monitors to obtain unique measures of air quality. In one effort, measurements are being made around fixed monitoring stations to determine typical differences between long line averages over distances of up to 0.5 km and point monitor readings. Documentation of such differences provide a data base from which estimates of the accuracy of using point monitor data for mathematical modeling of the atmosphere can be determined. Significant measurements of this type have been made using a system based on a semiconductor diode laser and developed at Lincoln Laboratory by Hinkley, Ku and associates. Field tests of this system in St. Louis during the Regional Air Pollution Study (RAPS) consisted of area monitoring at sites 105 and 108 of the RAPS network of stationary monitors. The laser system is housed in a mobile van equipped with beam steering optics that can be rotated 270 degrees in the horizontal plane and ± 10 degrees in the vertical plane to locate remotely placed retro- reflectors. Results of field tests for monitoring carbon monoxide have been given elsewhere [1,12]. Analysis of the data for CO obtained during 1975 and 1976 field studies indicate the following: a. establishing zero and multipoint calibration is complicated by the existence of more than one wavelength in the output beam (multimoding) ; b. active feedback control of the laser output frequency (or frequencies in the case of multimoding) is required to fix the zero and calibration; c. under controlled conditions monitoring of CO on a real time basis can be accom- plished over distances of up to 0.5 km (1.0 km total path); d. data processing under current system constraints requires significant inter- pretation in order to insure accuracy to ± 10 percent. In a second effort, measurement of NH 3 is being attempted. Sample integrity of the ammonia is insured since the NH 3 is measured in-situ 3 i.e. in its natural state. Initial attempts to monitor NH 3 have been limited by the available power in state of the art diodes in the 9-10 ym spectral regions. The current limit of detection is approximately 10 ppb of NH 3 over a total pathlength of 200 m, the pathlength being restricted by the available laser power. 4. Laser Induced Fluorescence The Aerospace Corporation under EPA sponsorship has developed a laser-induced fluorescence monitor for N0 2 [13]. This monitor has been used in field test comparisons with a chemi luminescence N0 X monitor during the RAPS in St. Louis. 
Results show excellent agreement between the two monitors during controlled tests although the fluorescence monitor consistently reads slightly lower than the chemi luminescence monitor during ambient 289 air monitoring [2]. Other tests in smog chamber studies at the Environmental Research Center in the Research Triangle Park, North Carolina have indicated the lack of inter- ference of other nitrogen containing compounds; these tests suggest the use of such equip- ment to measure N0 2 in chamber studies [14]. References [I] Ku, R. T. and Hinkley, E. D., Long Path Monitoring of Atmospheric Carbon Monoxide, Report NSF/RANN/IT/GI-37603, Lincoln Laboratory Interim Technical Report to the National Science Foundation. [2] Birnbaum, M. and Tucker, A. W. , Field Test Comparison of Chemiluminescence and Laser Fluorescence Monitors, Unpublished report by Aerospace Corporation for the Environ- mental Protection Agency, Research Triangle Park, NC (October 1974). [3] Max, E. and Rosengren, L. G., Optics Communications, Y\_ 422 (1974). [4] Schnell, W. and Fischer, G., Rapport de la Societe Suisse de Physique, 26_ 133 (1975). [5] Schnell, W. and Fischer, G., Applied Optics, 14 2058 (1975). [6] Hinkley, E. D., Ku, R. T. , Nil!, K. W. and Butler, J. F., Applied Optics, 1_5 1653 (1976). [7] Patty, R. R., Russwurm, G. M. , McClenny, W. A. and Morgan, D. R., Applied Optics, 1_3 2850 (1974). [8] Allario, F. and Seals, R. K. , Jr., Applied Optics, T4 2229 (1975). [9] Shumate, M. S., Mengies, R. T., Margolis, J. S. and Rosengren, L. G. , Water Vapor Absorption of Carbon Dioxide Laser Radiation, Jet Propulsion Laboratory, to be published in Applied Optics. [10] Kreutzer, L. B., Analytical Chemistry, 46 235a (1974). [II] Morgan, D. R. , Spectral Absorption Pattern Detection and Estimation Techniques Using Linear Weights, Report R7SELS-024, General Electric Company, Electronics Laboratory, Syracuse, New York (May 1975). [12] Chaney, L. W. , McClenny, W. A. and Ku, R. T., Long Path Laser Monitoring of CO in the St. Louis Area, Paper #75-56-6, 68th Annual Meeting of APCA, Boston, Massachusetts, June 15-20, 1975. [13] Birnbaum, M. and Tucker, A. W., N0 2 Measuring System, EPA Report 650/2-74-059, Final Contract Report for EPA Contract No. 68-02-1225, Aerospace Corporation (May, 1974). [14] Unpublished results of smog chamber experiments, Environmental Sciences Research Laboratory, Research Triangle Park, NC. 290 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). LONG-PATH MONITORING WITH TUNABLE LASERS 1 E. D. Hinkley 2 and R. T. Ku Massachusetts Institute of Technology Lincoln Laboratory Lexington, Massachusetts, 02173, USA 1. Introduction By using a tunable laser whose signal is reflected from a distant target, differential absorption of the laser power can permit a quantitative determination to be made of the integrated pollutant concentration over the path due to a particular gaseous species. Many molecular pollutants, such as NO, N0 2 , S0 2 , CO, and 3 [l] 3 , have already been monitored in the atmosphere using this technique, employing different types of tunable lasers in the ultraviolet, visible, and infrared regions of the electromagnetic spectrum. 
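In its simplest two-wavelength form (written here for orientation; it is a standard relation rather than the authors' working formula), the path-averaged concentration follows from the ratio of the powers returned with the laser tuned on and off an absorption line of the target gas:

$$
\bar{c} = \frac{1}{2 L\,(\alpha_{\mathrm{on}} - \alpha_{\mathrm{off}})}\,\ln\frac{P_{\mathrm{off}}}{P_{\mathrm{on}}},
$$

where L is the one-way distance to the reflecting target (the factor of two accounts for the round trip) and α_on, α_off are the absorption coefficients of the gas at the two laser frequencies.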
Integrated- path measurements such as these are important for studying various computer models being proposed for region-wide prediction of pollution levels, since their basic grid size is usually around 1 km. Traditionally, pollutant concentrations have been measured by point-sampling instru- mentation. However, the limitations of these standard methods become obvious in cases where the average pollutant concentration over a large area must be determined. In this paper, we describe a tunable laser system representing a development in the direction of a versatile and reliable monitor for such in situ ambient-air measurement. Moreover, with eventual utilization of the new widely-tunable diode lasers [2], it is possible to monitor several different pollutant gases simultaneously. A multipollutant capability is extremely useful since many pollutants interact with each other, and the time evolution of their concentrations can be incorporated into the mathematical models, along with meterological and topological data, for advance prediction of air pollution levels. We have developed a tunable semiconductor diode laser system for monitoring pollutants over long outdoor paths. The laser source is one of the Pb-salt types [3,4] which have several useful properties for field applications, such as small size, ruggedness, and ease of wavelength tunability. By chemically tailoring various combinations of Pb-salt compounds we can effectively cover the infrared wavelength range from 3 to 32 ym. Many important atmospheric pollutants can be detected by lasers in this range. By using various PbS^xSex lasers, which operate in the 4-6.5 ym range, we have monitored CO, H 2 and NO over long paths. Once a laser is constructed to operate nominally and in the wavelength region where a particular pollutant has strong absorption lines, the laser can be tuned simply by varying the injection current or laser temperature. x This work was supported by the National Science Foundation (RANN) and the U. S. Environ- mental Protection Agency. 2 Present address: Laser Analytics, Inc., Lexington, Massachusetts 3 Figures in brackets indicate literature references at the end of this paper. 291 2. Discussion The essential components of the laser optical system are shown in figure 1. The diode laser is mounted in a closed-cycle cryogenic cooler, and its emission is collimated by an Al-coated parabolic mirror M-l , 12 cm in diameter. The beam is transmitted down- range to a remote retroref lector (hollow corner-cube) M-2 which reflects it back towards M-l, and then refocuses it onto the infrared detector situated behind a calibration cell. In order to minimize the effects of atmospheric turbulence on system sensitivity, a derivative spectroscopic technique is employed in which the laser is frequency-modulated at 10 kHz at the pollutant gas absorption line of interest. CM i O tr o LU CO < cr o UJ H co >- CO < o I- Q. O Figure 1. Long-path diode laser monitoring system. 292 This system was first used at our Laboratory's 300-meter test range in Bedford, Massachusetts, where an experimental detection limit of five parts per billion of CO was established [5]. An identical system was then incorporated into a mobile van which has since been utilized in St. Louis, Missouri, for atmospheric measurements of CO at various sites during the summers of 1974, 1975, and 1976, in conjunction with the Regional Air Pollution Study (RAPS) of the U. S. Environmental Protection Agency [6]. 
In addition, our mobile system has also been driven to Cambridge, Massachusetts for monitoring atmospheric NO in the vicinity of a traffic rotary. The sensitivity and accuracy of these measurements are discussed, and the long-path measurements are compared with point sampling results in order to evaluate the potential of the long-path laser monitor for providing more reliable and acceptable quantitative measures for air quality. 3. Measurement Techniques Tunable laser spectroscopic measurements in the laboratory are usually performed by propagating the laser radiation through an absorption cell. The change in laser power transmission during tuning can be used to obtain the absorption coefficients, line widths, and line shapes of the spectral lines. The experimentally-determined absorption coeffi- cients can then be used to measure an unknown pollutant concentration using the amount of laser absorption in conjunction with Beer's Law. Field measurements of ambient gases are similar to the laboratory procedures. The amount of absorption over a long atmospheric path can be related to the average pollutant concentration over that distance. In order to minimize atmospheric turbulence effects on laser beam propagation, a derivative spectroscopic technique can be employed [5]. Syn- chronous detection at a high a.c. modulation frequency, about the desired laser infrared frequency, provides the derivative of the absorption signal. Atmospheric effects are reduced by ratioing the derivative with the direct transmission signal. System "zero" is achieved by tuning the laser to line center (where the derivative/ratio signal should be zero) or by placing a retroref lector near the transmitting optics (to simulate a signal with effectively zero absorption). Calibration is achieved by placing a known concentration of pollutant-N 2 mixture in the 10 cm cell. For example, if the monitored path outside is 610 meters, a calibration gas of 1,000 ppm mixture produces the same signal as 164 ppb over the long path. Linearity is confirmed by using several mixtures of different concentrations in the calibration cell. Inaccuracies in the measurements which occur as a drift of the zero-ppm signal and changes in linearity and repeatability of the calibration points, were mainly due to variations in the laser frequency. By proper controls, we have been able to reduce these effects to achieve an accuracy of ±5 percent of the nominal reading. 4. Monitoring Results In order to evaluate the measurement technique, "zero," and calibration procedures, comparative tests were made of pollutant variability using the long-path laser monitor and an air bag sampler which was filled during a traverse of the laser path. Results will be shown to support the validity of our measurement and calibration techniques. The CO pollutant concentration was found to be quite dependent upon the location of the monitored area. For example, results will be shown for a generally low-concentration farm site in Illinois over which clouds of CO occasionally pass. In contrast, large changes in concentration were noted for an inner city site in St. Louis at various hours due largely to local traffic conditions. Significant spatial variations were also observed by means of a conventional point sampling instrument moved along a 1 km path, indicating the desirability for a long-path monitor for path-averaged pollutant measurements. 293 5. 
Conclusion

The long-path diode laser system has permitted unattended monitoring around the clock, with calibration checks reduced to two or three times a day. Although these results have demonstrated the feasibility and usefulness of long-path laser monitoring, several improvements must be made in future systems in order to increase reliability and provide legally acceptable air quality measurements.

References

[1] Hinkley, E. D., Ku, R. T., and Kelley, P. L., Laser Monitoring of the Atmosphere, Chapter 6, edited by E. D. Hinkley, Springer-Verlag, Heidelberg (1976).
[2] Hinkley, E. D., Ku, R. T., Nill, K. W., and Butler, J. F., Appl. Opt. 15, 1653 (1976).
[3] Harman, T. C., J. Phys. Chem. Solids Suppl. 32, 363 (1971).
[4] Calawa, A. R., J. Luminescence 7, 77 (1973).
[5] Ku, R. T., Hinkley, E. D., and Sample, J. O., Appl. Opt. 14, 854 (1975).
[6] Ku, R. T. and Hinkley, E. D., M.I.T. Lincoln Laboratory Interim Technical Report, Long-Path Monitoring of Atmospheric CO — 1975 RAPS Study, St. Louis, Missouri (1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

DEVELOPMENT OF A TWO FREQUENCY DOWNWARD LOOKING AIRBORNE LIDAR SYSTEM

J. A. Eckert, D. H. Bundy, J. L. Peacock
Remote Sensing Division, Environmental Monitoring and Support Laboratory, U.S. Environmental Protection Agency, Las Vegas, Nevada 89114, USA

1. Introduction

Extensive testing and operational experience gained with two existing airborne LIDAR systems have demonstrated the utility of such devices for monitoring the height of elevated inversion layers, representing mixing depths, over large geographical areas in relatively short periods of time [1,2]¹. During several field testing missions using a prototype system, both point source and urban plume measurements were obtained, suggesting even greater utility for this type of device. The two-wavelength downward looking LIDAR responds to the following specific monitoring problems:

1. The determination of mixing layer height over relatively large geographical areas in short time periods. This type of information is particularly useful in determining the behavior of the mixing layer height during morning and evening transition periods.
2. The determination of point source plume dimensions referenced to a ground coordinate data base. The information can be used not only to characterize the plume behavior but also to position sampling aircraft using in-situ techniques.
3. Obtaining dimensions and point source components of urban or other multi-source plumes. The data are of principal interest to modelers but in some cases can also be of interest in pinpointing emissions violations.

Although the three monitoring problems all utilize downward looking airborne LIDAR systems, the monitoring requirements are substantially different for each and are differentiated in the following table:

Table 1. Monitoring requirements differentiated in a matrix of the three monitoring problems, all utilizing downward looking airborne LIDAR systems

Monitoring problem               Vertical resolution   Horizontal resolution   Particle size discrimination   Real time data display
Mixing layer height              Moderate (~10 m)      Low (~100 m)            No                             No
Point source plume dimensions    High (~3 m)           High (~3 m)             No                             Yes
Urban plume characteristics      Moderate (~10 m)      Low (~100 m)            Yes                            No

¹ Figures in brackets indicate the literature references at the end of this paper.
Figure 1. Electronics subsystem flow chart.

Figure 2. Two-color CRT real-time display flow chart.

Because of the relatively high capital costs ($120K) involved in constructing LIDAR systems, as well as the substantial commitments in personnel for system design, it is desirable to construct a single general purpose system which will address itself to all three problem types. The monitoring problems are being solved through the design and construction of a two frequency downlooking airborne LIDAR system and the subsequent technique development necessary for implementing the operational system.

2. System design/laser selection

In order to meet the various monitoring requirements, minimal system requirements would include real time display capabilities, good horizontal and vertical resolution, and some ability to discriminate between distinctly different particle size distributions. In addition, certain operational constraints must be met for aircraft weight and power limitations, system reliability, and operator functions. The system consists essentially of three components: a laser, a telescope receiver, and a data handling and control system. The system can be installed in a small twin engine aircraft (C-45 and larger) and will meet all air safety criteria. The system will be flown at 10,000 feet (3,050 m) above ground, a value consistent with both eye safety requirements for the casual observer and air traffic safety requirements over large urban areas.

Vertical resolution constraints are met by having a laser with a sufficiently short pulse and a receiver and data handling subsystem capable of processing the pulses. The resolution constraints are forced by the specifications of currently available commercially built lasers. The laser procured for the system is a Nd:YAG laser transmitting both the primary wavelength (1.06 μm) and the frequency doubled wavelength (0.53 μm). The laser was designed specifically for airborne applications, operating directly off aircraft power. Pulse widths are 15 nsec for each wavelength, and this parameter controls the maximum vertical resolution for the system (3 m). The repetition rate for the laser is a maximum of 10 pulses per second which, coupled with the minimum ground speed for the supporting aircraft, dictates the horizontal resolution. The use of the two widely spaced laser wavelengths will yield some particle-size information, namely a qualitative differentiation between markedly different particle size distributions [3].

3. System design/telescope

Telescope size is dictated by the port size in the bottom of the typical aircraft (0.4 m x 0.4 m). The two previous systems used Fresnel lens refracting telescopes, which have great weight benefits in non-imaging applications; however, because of the differences in wavelength and the reduction of off-axis light, a reflecting telescope has been incorporated into the design. The device uses an F/3, 0.38 m Newtonian telescope. The two wavelengths are separated by a dichroic beam splitter, filtered, and imaged on the respective photomultiplier tubes.

4. System design/electronics subsystem

Figure 1 is a block diagram of the electronics subsystem for the LIDAR device. The four principal functions of the subsystem are: 1) conversion of the data into a digital format; 2) recording of data; 3) provision of a real-time display of the data or a subset of the data; 4) provision for overall control of the system and the operator/device interface.
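The vertical-resolution figure quoted in the laser-selection discussion above follows from the two-way travel time of the pulse, a standard lidar relation stated here only for orientation:

$$
\Delta z = \frac{c\,\tau}{2} = \frac{(3\times10^{8}\ \mathrm{m\,s^{-1}})(15\times10^{-9}\ \mathrm{s})}{2} \approx 2.3\ \mathrm{m},
$$

consistent with the roughly 3 m maximum vertical resolution quoted once receiver and digitizer bandwidths are taken into account.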
298 Output from the photomultipl ier tubes is first passed through log amplifiers because of the wide dynamic range of the detector signals (40 DB). The amplifiers are incorporated into the photomultiplier tube housings and yield minimum signal bandwidths of 50 MHZ. Prior to each laser firing, background levels are monitored, digitized, and stored for later processing. Photomultiplier tube gating and range correction (1/R 2 ) are performed simultaneously with the detection of the returning LIDAR pulse through varying the potential of four alternate dynodes as a function of time (range). Both range and photomultiplier tube response characteristics are corrected by sequentially selecting a digital value for dynode voltage from a Programmable Read Only Memory (PROM). As the laser fires, a small amount of the output light pulse is optically coupled to the detector to obtain instantaneous power. Signals are then collected, digitized and stored for a period of 20 ysec, a value which allows detection of the ground return pulse from an altitude of 10,000 feet (3,050 m). A buffer memory in the real-time processor now contains background values obtained prior to the laser firing, a value for instantaneous power, and the LIDAR return plus background. A fast digital processor using bipolar microprocessor technology subtracts the background and adjusts the data for variations in laser output. The processor generates a 256-byte record for each frequency and each laser firing. This record is operator selectable to extend over the entire 10,000-foot (3,050 m) aircraft-to-ground path (40-foot (12 m) vertical resolution) or over the lower 2,500 feet (762 m) (10 foot (3 m) vertical resolution). Operator options thus include looking over the same vertical distance with two different frequencies to obtain particle size informa- tion or observing two different vertical scales by using the different frequencies. Output from the real-time processor is next routed to a large (32K-byte) circulating solid state memory which serves multiple functions including the interfacing of the bipolar microprocessor system and an 8-bit NM0S microprocessor system which controls the data output and control aspects of the overall system. The memory operates in a first in-last out mode dumping onto a magnetic tape deck at the option of the operator. Contents of the memory are displayed on a color CRT as 64 x 2 (frequencies) vertical traces yielding a real-time display of the last 64 laser firings. The two frequencies are assigned different colors with eight levels of luminance. At the maximum repetition rate of the laser, the display incorporates 6.4 seconds of data. The real-time display is described in figure 2. The CRT display is continuously recorded on a video tape for immediate analysis of the data upon mission completion. The video tape is multiplexed with a downward looking vidicon used to obtain ground reference data. The systems controller (see figure 1) formats the magnetic tape record of the data with navigational and other reference data. The controller also performs some general housekeeping functions of the device including warning the operator of malfunctioning components or subsystems. Operator interface to the system is through a small CRT and keyboard. 
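A software analogue of the per-shot processing just described (background subtraction and normalization for shot-to-shot variations in laser output) is sketched below. The function and variable names are illustrative, and the 1/R² range correction that the instrument applies in hardware at the photomultiplier dynodes is deliberately omitted.

```python
import numpy as np

def process_lidar_return(raw, background, laser_power, ref_power=1.0):
    """Per-shot processing sketch: subtract the pre-fire background level and
    normalize for shot-to-shot variations in laser output.  The 1/R^2 range
    correction is omitted here because the instrument applies it in hardware
    at the photomultiplier dynodes."""
    signal = np.asarray(raw, dtype=float) - background   # remove sky/detector background
    signal *= ref_power / laser_power                    # normalize to a reference pulse energy
    return signal

# Example: one 256-sample record for one wavelength and one laser firing,
# the record length quoted in the text (values are invented).
rng = np.random.default_rng(0)
record = 50.0 + rng.normal(5.0, 0.5, 256)   # fake return superimposed on a background of 50 counts
profile = process_lidar_return(record, background=50.0, laser_power=1.1)
print(profile.shape, round(profile.mean(), 2))
```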
5. Summary

A two-frequency downward-looking LIDAR system is now being constructed with the following operational and physical parameters:

Physical:
  Laser - Q-switched Nd:YAG
    Pulse length: 15 nsec
    Output wavelength: 1.06 μm or 0.53 μm
    Energy per pulse: 500 mJ (at 1.06 μm), 200 mJ (at 0.53 μm)
    Beam divergence: 0.25 mrad
  Telescope - F/3, 15-inch Newtonian with dichroic beam splitter
  Detectors - two 50 mm photomultipliers, S-20 and S-1 photocathodes
  Size - 1.5 m³
  Weight - 100 kg
  Power requirements - 1.5 kW

Operational:
  Altitude - 10,000 feet (3,050 m) above ground level
  Signal rate - 10 pulses per second
  Vertical resolution - 3 m or 12 m
  Real-time display:
    Vertical resolution element - 13 m or 50 m
    Total traces - 128 (64 x 2 wavelengths)
    Minimum storage time - 6.4 seconds
  Output - 9-track magnetic tape; video tape of real-time display

When completed, the system will respond to a variety of monitoring needs involving aerosol structure in the lower atmosphere. Specific applications include measuring mixing layer heights and dimensions of point-source and urban plumes. The system is scheduled to be completed in the fall of 1977.

References

[1] Eckert, J. A., McElroy, J. L., Bundy, D. H., Guagliardo, J. L., and Melfi, S. H., Airborne LIDAR RAPS Studies, February 1974, EPA report EPA-600/4-76-028, June 1976.

[2] Eckert, J. A., McElroy, J. L., Bundy, D. H., Guagliardo, J. L., and Melfi, S. H., Downlooking Airborne LIDAR Studies - August 1975, published in the proceedings of the International Conference on Environmental Sensing and Assessment, EMSL-LV, Las Vegas, NV, September 1975.

[3] McCormick, M. P., Laser Backscatter Measurements of the Lower Atmosphere, A Thesis Presented to the Faculty of the Department of Physics, The College of William and Mary in Virginia, 1967.

REMOTE ANALYSIS OF AEROSOLS BY DIFFERENTIAL SCATTER (DISC) LIDAR SYSTEMS

M. L. Wright
Stanford Research Institute, Menlo Park, California 94025, USA

and

J. B. Pollack and D. S. Colburn
NASA-Ames Research Center, Moffett Field, California 94035, USA

1. Introduction

A remote sensing system capable of determining the chemical composition of atmospheric aerosols would be desirable for reducing the cost of long-term measurements, especially in the stratosphere, and for avoiding some of the difficulties encountered in aerosol sampling systems. A differential scatter (DISC) lidar system uses characteristic differences in the infrared backscatter spectra of aerosols to identify the chemical composition of the aerosol. Recent studies of backscatter spectra for stratospheric aerosols have shown that substantial amplitude variations occur over relatively narrow wavelength ranges [1]¹. Figure 1 shows typical backscatter spectra for several different stratospheric aerosols. This differential backscatter could be a useful mechanism for determining the composition of stratospheric aerosols.

Backscattered signals received by an actual DISC lidar system are affected not only by the differential aerosol backscatter but also by the gaseous attenuation of the atmosphere and changes in the lidar system. The selection of lidar system parameters to provide optimum performance for measurements on a specific aerosol must take into account all of these factors.
The use of the computer with the Modular Atmospheric Propagation Program (MAPP) to perform these system optimizations is described for applications to stratospheric aerosol measurements. Initial calculations indicate that a portion of the H₂SO₄ concentration range of stratospheric aerosols can be measured by the DISC technique.

Multiple operating wavelengths will be necessary in order to discriminate between the relative backscatter that is produced by the various constituents in a stratospheric aerosol. The optimum number of wavelengths and the optimum location for each of these wavelengths depend on the characteristics of the actual lidar system, the atmosphere through which the optical signals must propagate, and the expected range of variation in constituents in the stratospheric aerosol. The primary focus of the present work is a ground-based CO₂ lidar system. For this system, the optimum number of wavelengths and their locations were determined for a variety of stratospheric aerosol models representing a range of different constituents and different constituent concentrations.

¹Figures in brackets indicate literature references at the end of this paper.

Figure 1. Relative backscatter for several atmospheric aerosols.

2. Objective

The first task in the analysis is to determine the atmospheric propagation characteristics for all the possible operating laser wavelengths for the system or systems desired. Each of these lines is then evaluated for its low attenuation, freedom from interference by gaseous constituents in the path, and relative system performance at that wavelength. This evaluation results in a priority ranking for the laser lines, ranging from most to least favorable for the specified application.

After selection of a priority ranking for laser lines, the next task is to select the optimum number of wavelengths and their locations. This selection process is treated as a pattern recognition problem because the system makes decisions among various stratospheric aerosol models by choosing the model backscatter spectrum that best fits the observed experimental data.

3. Results

To determine the best overall system performance, some criteria are necessary for making comparisons of system performance with changes in the number or location of wavelengths. Unfortunately, a single performance criterion cannot be specified for operation under all possible conditions and for all possible laser systems. Several goodness measures were investigated for a variety of measurement scenarios. Different criteria gave slight differences in the number of wavelengths and their locations; the differences among these criteria are shown in figure 2. This example is for a nonisotope CO₂ laser designed to discriminate between (NH₄)₂SO₄, H₂SO₄ (75 percent), and obsidian. A fixed experimental uncertainty (noise and other error mechanisms) of 40 percent was assumed for this example. The figure shows that the capability to discriminate between the various materials improves very slowly after the number of wavelengths is increased beyond 3 to 6.

Figure 2. Measure of goodness vs. number of measurement wavenumbers for nonisotope CO₂ laser lines. Fractional error = 0.4. Materials: H₂SO₄ (75), (NH₄)₂SO₄, obsidian.
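The pattern-recognition step described in section 2 amounts to choosing, among the candidate aerosol models, the backscatter spectrum that best fits the observation at the selected measurement wavelengths. A minimal least-squares sketch follows; the relative-backscatter values are invented for illustration and are not taken from figure 1.

```python
import numpy as np

def classify_aerosol(observed, model_spectra):
    """Pick the aerosol model whose backscatter spectrum (sampled at the
    chosen measurement wavelengths) best fits the observation in a
    least-squares sense."""
    residuals = {name: np.sum((observed - spec) ** 2)
                 for name, spec in model_spectra.items()}
    return min(residuals, key=residuals.get)

# Hypothetical relative-backscatter values at three CO2 laser lines.
models = {"(NH4)2SO4":  np.array([1.0, 0.6, 0.3]),
          "H2SO4(75%)": np.array([0.8, 0.9, 0.4]),
          "obsidian":   np.array([0.5, 0.5, 0.5])}
print(classify_aerosol(np.array([0.78, 0.85, 0.45]), models))  # -> "H2SO4(75%)"
```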
Once the optimum number of wavelengths has been determined and the location of each of these wavelengths has been specified, the next task is to indicate the level of system performance that would be obtained in such a measurement. Several performance measures were considered, and an example of the system performance that can be expected will be presented.

DISC lidar systems are useful for various aerosol measurements in both the stratosphere and troposphere. For example, the DISC technique could be useful for measuring the sulfuric acid content of aerosols emitted by power plants, refineries, and automobile catalytic converters.

References

[1] Colburn, D. S. and Pollack, J. B., Infrared Backscatter Spectra for Differentiation of Stratospheric Aerosol Composition, presented at the Seventh International Laser Radar Conference, Stanford Research Institute, Menlo Park, California, November 4-7, 1975.

DIAL SYSTEMS FOR MONOSTATIC SENSING OF ATMOSPHERIC GASES

E. R. Murray, J. E. van der Laan, J. G. Hawley, R. D. Hake, Jr., and M. F. Williams
Stanford Research Institute, Menlo Park, California 94025, USA

1. Introduction

The differential absorption lidar (DIAL) technique has been established as a sensitive technique for making range-resolved and integrated concentration measurements of gaseous species [1,2]¹. Range-resolved DIAL demonstrations have been performed using visible [3-5] and ultraviolet [6,7] lasers. Extending DIAL capabilities to the infrared region of the spectrum permits access to the characteristic infrared absorption lines of hundreds of additional gaseous species with currently available high-energy, discretely tunable gas lasers.

Considerable effort has been devoted to the development of long-path infrared monitoring systems using remotely positioned retroreflectors [8-12]. Several of these systems have been assembled into fieldable configurations and have been used to obtain useful data in the field. Demonstration of the first single-ended system using topographical reflections was reported by Henningsen, Garbuny, and Byer [13]. This system, using a parametrically tunable laser, was used for remote measurement of a sample of carbon monoxide [13]. Recently, a single-ended chemical laser system has been used for remote measurement of hydrogen chloride, methane, and nitrous oxide, as reported by Murray, van der Laan, and Hawley [14]. The operation of an infrared DIAL system using aerosol backscatter was first reported by Murray, Hake, van der Laan, and Hawley [15,16].

The objective of our program has been the development and demonstration of systems for remote measurement of gases using discretely tunable, high-energy gas lasers. Results reported here were obtained with two infrared lidar systems: a lidar system based on a deuterium fluoride (DF) laser for remote measurement of integrated concentrations of HCl, CH₄, and N₂O using scattering from topographic targets, and a lidar system based on a CO₂ laser for measuring range-resolved concentration profiles of water vapor using scattered radiation from naturally occurring aerosols.
Performance predictions for both the DF and CO₂ lidars indicate that high-sensitivity, range-resolved measurements can be made of numerous gaseous species at 10 km range with commercially available components.

2. Remote Measurement of HCl, CH₄, and N₂O Using a DF Laser

A diagram of the single-ended DF laser system used to monitor HCl, CH₄, and N₂O is shown in figure 1. This system used a topographic target to provide the backscattered signal and measured the integrated concentration over the path between the lidar and the target. The DF laser beam was transmitted collinearly with the receiver axis and directed through a sample chamber positioned 300 m away. The backscattered signal from the topographic target was collected by the receiving telescope and focused onto an infrared detector. The detector signal was amplified and displayed on a chart recorder. Different concentrations of the gases being measured were injected into the sample chamber. The concentrations were measured with both the lidar and an in-situ monitor, and the values were compared.

¹Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Experimental apparatus.

Experimental parameters are shown in table 1. The laser wavelengths that were used and the corresponding absorption coefficients are shown in table 2. The absorption coefficient for HCl was measured by Bair and Allario [17], and those for CH₄ and N₂O were measured by Spencer, Denault, and Takimoto [18]. The wavelengths were established by Rao [19].

Table 1. Experimental parameters

DF laser transmitter:
  Energy - 100 to 150 mJ/pulse
  Pulsewidth - 1.0 μs (FWHM)
  Beam divergence - 1.0 mrad (FWHM)
  Typical PRF - 1/6 Hz

Receiver:
  Telescope diameter - 31.75 cm
  Field of view - 3.0 mrad (FWHM)
  HgCdTe detector:
    D* (3.7 μm) - 5 x 10⁹ cm Hz^1/2 W⁻¹
    Size - 1 x 1 mm
    Time constant - 75 ns (to 1/e point)

Table 2. Laser wavelengths and absorption coefficients

Gas    DF laser line^a,b    Wavelength (μm)    Absorption coefficient (cm⁻¹ atm⁻¹)
HCl    P₂(3)                3.636239           5.64
CH₄    P₁(9)                3.715252           0.047
N₂O    P₃(7)                3.890259           1.19

^a The subscript denotes the vibrational quantum number. ^b The number in parentheses denotes the rotational quantum number.

A summary of the HCl tests for different concentrations is shown in figure 2. The HCl concentrations were determined from the lidar data by calculating the transmission through the sample chamber using the ratio of the backscattered to transmitted signals. The lidar-measured HCl concentration is plotted as a function of the syringe-injected value. The solid line represents perfect agreement between the lidar and the syringe values. The dots are the experimentally determined data. The error bars shown represent the standard deviation of 15 data points. Generally good agreement was found between the lidar and the syringe values. This confirms that the lidar is functioning correctly and that the absorption coefficient used in the data reduction is accurate. The right vertical axis shows the equivalent product of the concentration and the path length for the individual measurements. The sensitivity of the system (the minimum detectable concentration) is defined as the concentration that equals the standard deviation. For HCl, the sensitivity was determined to be 0.05 ppm over a 1-km path, or 50 ppb over 1 km.

Figure 2. Summary of remote-measurement tests of HCl in a sample chamber (lidar-measured vs. syringe-injected concentration).
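A minimal sketch of the retrieval implied by this description: the concentration-path product follows from Beer-Lambert attenuation applied to the ratio of the backscattered to transmitted signals, using the absorption coefficients of table 2. The off-line normalization, the assumed double pass through the chamber, and the variable names are illustrative assumptions rather than the authors' data-reduction procedure.

```python
import math

# Absorption coefficients from table 2 (cm^-1 atm^-1).
ABSORPTION = {"HCl": 5.64, "CH4": 0.047, "N2O": 1.19}

def concentration_path_product(ratio_on, ratio_off, gas, two_way=True):
    """Integrated concentration (ppm*m) from the ratio of backscattered to
    transmitted signal at the absorbed (on) wavelength, normalized by the
    same ratio at a non-absorbed (off) reference wavelength.
    Assumes Beer-Lambert attenuation and, if two_way, a double pass
    through the sample chamber (lidar geometry)."""
    k = ABSORPTION[gas]                      # cm^-1 atm^-1
    transmission = ratio_on / ratio_off
    path_factor = 2.0 if two_way else 1.0
    conc_atm_cm = -math.log(transmission) / (k * path_factor)
    return conc_atm_cm * 1e6 / 100.0         # convert atm*cm -> ppm*m

print(round(concentration_path_product(0.9, 1.0, "HCl"), 1))  # ~93 ppm*m for a 10% dip
```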
The summary of the CH₄ data is shown in figure 3. Again, good agreement was obtained between the lidar and the in-situ values, thereby verifying correct system operation and the absorption coefficients used in the data reduction. For CH₄, the system sensitivity was found to be 6 ppm over 1 km.

Figure 3. Summary of remote-measurement tests of CH₄ in a sample chamber (lidar-measured vs. in-situ-measured concentration).

A summary of the N₂O data is shown in figure 4. The triangles represent data obtained with a juniper tree of 1-m diameter used as the backscattering target; the circles represent data taken using a plywood board. The two targets were used to demonstrate system operation on two different materials with different reflectivities. The foliage target was used because it provides a reflectivity typical of field operation. Both targets were placed immediately behind the sample chamber. The lidar results obtained with both targets agree well with the in-situ values. The system sensitivity for N₂O was found to be 0.24 ppm over a 1-km path.

Figure 4. Summary of remote-measurement tests of N₂O in a sample chamber (foliage and wooden targets; lidar-measured vs. syringe-injected concentration).

A rotational transition in hydrogen gas at a Raman frequency shift of 1033.4 cm⁻¹ was investigated. The ω_p pump frequency of 18787.6 cm⁻¹ (532.1 nm) was produced by a frequency-doubled, acousto-optic Q-switched, cw, krypton-arc-lamp-pumped Nd:YAG laser, and the ω_s beam at 17754.2 cm⁻¹ (563.1 nm) was provided by a rhodamine 6G jet-stream dye laser pumped by the 532.1 nm radiation. The frequency-doubled Nd:YAG laser produced pulses at a repetition rate of about 4600 pulses per second, with a pulse energy of 0.076 mJ and a peak power of 175 W. For 250 torr of H₂ gas, the observed rotational CARS signal was 1.33 x 10⁴ counts per second, from which the true rotational CARS signal was calculated to be about 1.68 x 10⁵ photons per second. The rotational CARS power calculated theoretically was in good agreement with the experimental results.

3. Conclusion

The significance of the rotational CARS experiments is that they offer a way of improving the detection sensitivity of the CARS process. One way of improving the CARS detection sensitivity is to utilize Raman transitions which have relatively large resonant third-order susceptibilities. The resonant third-order susceptibility is directly proportional to the differential Raman scattering cross section and inversely proportional to the spontaneous Raman linewidth. In general, the differential Raman scattering cross sections are larger for rotational Raman transitions than for vibrational Raman transitions, and the rotational linewidths are smaller than the corresponding vibrational Raman linewidths. Therefore, the resonant third-order susceptibilities for rotational transitions are larger than those for vibrational transitions and, since the generated CARS power is proportional to the square of this nonlinear susceptibility, a substantial increase in CARS power will be realized for experiments utilizing rotational rather than vibrational Raman transitions.
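The scaling argument in this conclusion can be made concrete with a short calculation: the resonant third-order susceptibility scales as (dσ/dΩ)/Γ, and the CARS power as its square, so the rotational-to-vibrational signal gain is the square of the ratio of those quantities. The cross-section and linewidth ratios below are invented purely to illustrate the dependence.

```python
def cars_gain(cross_section_ratio, linewidth_ratio):
    """Relative CARS power for rotational vs. vibrational transitions.
    chi3 ~ (dsigma/dOmega) / Gamma, and P_CARS ~ chi3**2, so the gain is
    the square of the susceptibility ratio."""
    chi3_ratio = cross_section_ratio / linewidth_ratio
    return chi3_ratio ** 2

# Illustrative numbers only: a 2x larger rotational cross section and a
# 3x narrower rotational linewidth would give a 36x increase in CARS power.
print(cars_gain(cross_section_ratio=2.0, linewidth_ratio=1.0 / 3.0))
```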
The CARS technique is well suited for applications in such areas as:

- air pollution measurement and monitoring;
- combustion diagnostics on flames and plasmas;
- kinetic investigation of gas-phase reactions;
- high-resolution spectroscopy (cw CARS techniques).

It is expected that the range of applications for the CARS technique will increase as the experimental research in this area expands.

HIGHLY SELECTIVE, QUANTITATIVE MEASUREMENT OF ATMOSPHERIC POLLUTANTS USING CARBON MONOXIDE AND CARBON DIOXIDE LASERS

D. M. Sweger, S. M. Freund¹, and J. C. Travis
National Bureau of Standards, Washington, DC 20234, USA

1. Introduction

Infrared gas lasers have many properties that make them suitable for applications in analytical chemistry, particularly for detection of organic compounds in the gas phase. Among those properties is a narrow bandwidth, which results in a high level of selectivity in the vibrational transitions excited. A major difficulty in the analytical application of gas lasers, however, is that they are not tunable in a continuous sense. Thus, in the absence of any other tuning mechanism, one is restricted to exact coincidences between the laser frequency and an absorption frequency in the molecule of interest. While exact coincidences are few, near coincidences are much more probable, and the analytical usefulness of lasers can be increased manyfold if some technique can be found to utilize these near coincidences.

2. Objective

"Perturbation spectroscopy," or the use of electric (Stark effect) or magnetic (Zeeman effect) fields to perturb energy levels and tune a near coincidence into exact coincidence with the laser, offers such a technique. In addition to "tuning" the molecular absorption, the same perturbation may be used to modulate the absorption and allow the use of available ac detection methods. The purpose of this paper is to summarize the results of two experiments applying perturbation spectroscopy to the quantitative determination of gases at concentrations normally encountered in polluted environments [1,2]².

3. Experimental and Results

In the first experiment, Stark spectroscopy techniques were applied to the detection of ppm levels of vinyl chloride monomer (VCM) in air. Although VCM has made headlines because of its carcinogenic properties, it has in the past received some attention among laser researchers due to its use in passive Q-switching.
VCM has several known near coincidences with CO₂ laser frequencies in the 10 μm region, and by superimposing an appropriate static electric field, an electric field sweep, and a modulating field, it is possible to Stark shift several vinyl chloride infrared transitions into exact coincidence with the laser, sweep the transition across the laser, and use phase-sensitive detection of the absorbed signal. In addition, one near coincidence with a CO laser line in the 6 μm region was found, and the two strongest absorptions, one from the CO₂ laser and one from the CO laser, were used in the experiment. Twelve calibrated samples of VCM in air were prepared by the Air Pollution Analysis Section of the Analytical Chemistry Division, NBS, and the absorption as a function of concentration was measured. Because of the sine-wave modulation, the observed signal is the first derivative of the absorption line profile, and the peak-to-peak intensity is the measured parameter. The data are normalized to one concentration and are a linear function of concentration from at least 0.1 to 1000 ppm. The line extrapolates through the origin, indicating that only one point need be taken to calibrate the apparatus. Improvements in the designs of the Stark cell and CO₂ laser should permit extension of measurements to the ppb range.

¹Present address: Los Alamos Scientific Laboratory, P.O. Box 1663, Mail Stop 565, Los Alamos, NM 87545.
²Figures in brackets indicate the literature references at the end of this paper.

The second experiment investigated the quantitative detection of nitrogen dioxide in part-per-million concentrations in nitrogen using magnetic field modulation of the molecular absorption of 1616 cm⁻¹ carbon monoxide laser radiation. The method relies on the near coincidence between the 6₁₆ → 6₁₅ transition of the ν₃ band in NO₂ [3] and the P(11) transition in the 20 → 19 band of the CO laser, and is an obvious extension of the recent laser magnetic resonance spectroscopic investigation of this molecule in the 1600 cm⁻¹ region of the infrared [4] and of similar detection schemes demonstrated for nitric oxide [5,6].

Using NBS SRM 1629 NO₂ permeation tubes, we adjusted the concentration of NO₂ in N₂ in a flowing system to any value in the range 1-200 ppm by setting the N₂ flow. The concentration range was limited at the higher concentrations by the available flowmeters. The flowmeters were calibrated to maintain an overall system accuracy of about 5 percent. The pressure in the cell was maintained at 27 torr (1 torr = 133.3 Pa) in order to eliminate pressure broadening effects while maximizing the signal. NO₂ is a paramagnetic molecule, and a magnetic field of less than 500 gauss is sufficient to alter its energy levels such that the infrared transition of interest is swept in and out of coincidence with the laser. Consequently, a modulation in the absorption of the laser light is observed by a gold-doped germanium detector. The detected ac signal is processed by conventional phase-sensitive amplification and appears as a dc signal which is proportional to the modulated absorption. The apparatus is found to have a linear response to the NO₂ concentration over the two orders of magnitude investigated. One ppm of NO₂ in N₂ is easily detected with a 1-second time constant.
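The phase-sensitive (lock-in) detection used in both experiments can be sketched numerically: modulating the transition across the laser frequency and demodulating at the modulation frequency yields a dc output proportional to the first derivative of the absorption profile, which is why the peak-to-peak derivative signal is the measured parameter. The Lorentzian line shape and all numbers below are illustrative assumptions.

```python
import numpy as np

def lorentzian(detuning, width):
    """Normalized Lorentzian absorption profile."""
    return (width / 2) ** 2 / (detuning ** 2 + (width / 2) ** 2)

def lockin_output(center_detuning, mod_depth, width, f_mod=1.0, n=4000):
    """Demodulate a sinusoidally modulated absorption at the modulation
    frequency; the result approximates the first derivative of the line."""
    t = np.linspace(0.0, 1.0 / f_mod, n, endpoint=False)
    detuning = center_detuning + mod_depth * np.sin(2 * np.pi * f_mod * t)
    signal = lorentzian(detuning, width)
    return 2.0 * np.mean(signal * np.sin(2 * np.pi * f_mod * t))

# Sweeping the static field (center detuning) maps out a derivative-shaped
# curve whose peak-to-peak amplitude scales with absorber concentration.
sweep = [lockin_output(d, mod_depth=0.1, width=1.0) for d in np.linspace(-3, 3, 7)]
print([round(s, 4) for s in sweep])
```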
4. Conclusion

The principal advantage of these techniques is high selectivity. In addition to the selectivity for absorption inherent at low pressures with a narrow-band laser, only those molecules with the same electric or magnetic dipole moment will be shifted appropriately into coincidence with the laser. Absorption by molecules that have no dipole moment, even though they may absorb strongly in the region of interest, will not interfere with the measurement, since their absorption is not modulated. Investigation of fourteen chemicals likely to be found in an industrial atmosphere (see table 1) uncovered one possible interference (acrylonitrile) with vinyl chloride on the CO₂ laser line used for the VCM measurements. Note should be made, however, that over half of these chemicals are strong absorbers in the same spectral region as vinyl chloride, so that the number of interferences is greatly diminished. Even when an interference is encountered, such a problem may be overcome by making a measurement on another laser line or at another electric field for which there is no interference. One can further increase selectivity, at a sacrifice in sensitivity, by taking a voltage scan and looking at line shapes in order to detect the presence of interferents.

Table 1. Chemicals observed in the Stark cell using perturbation spectroscopy and the CO₂ laser at fields less than 10,000 V/cm

Chemical                Signal on P(42), 10.8 μm
Freon 11                nothing
Freon 12                nothing
Freon 113               weak (<10% of VCM) at 250 V/cm
Acetone                 nothing
Acrylonitrile           strong, comparable with VCM
Chloroform              nothing
Methanol                nothing
Methyl chloride         nothing
Methyl fluoride         nothing
Tetrahydrofuran         nothing
Toluene                 nothing
Trichloroethylene       nothing
Vinyl acetate           nothing
Vinyl chloride          strong
Vinylidene chloride     nothing

Other absorptions observed among these chemicals: R(4)-R(34), 9.6 μm band, strongest on R(22); absorbs M to S, P(14)-P(26), 9.6 μm band near zero field; absorbs W, P(32)-P(42), 10.6 μm band; no absorptions in the P branch, 10.6 μm, or the R branch, 9.6 μm; many absorptions throughout the P and R branches, 10.6 μm band, with the strongest at P(28), and nothing in the P branch, 9.6 μm band; various absorptions, some VS, one of the strongest at P(26), 9.6 μm band; various absorptions in the 9.6 μm band, S on P(18), M on P(20) and P(22), and W on higher P-branch lines; absorptions P(8)-P(36), 10.6 μm band; additional strong absorptions on P(46), P(36), P(30)-P(24), P(20), P(16), P(2), and R(6), 10.6 μm band, and M on P(38), 9.6 μm band; S on R(24), 9.6 μm band.

While the sacrifice of sensitivity in favor of selectivity is not seen as a limiting factor in many applications, there are cases where it may be. Some order-of-magnitude calculations for the case of the vinyl chloride experiment indicate that at 0.1 ppm we are 4-6 orders of magnitude from a theoretical minimum detectable number of molecules. The primary source of noise appears to be the amplitude instability of the lasers, which were not designed for this type of application. One of the more promising recent developments is the construction at NBS of a prototype waveguide CO₂ laser. In addition to being compact and easy to operate, the 20 cm long waveguide laser appears to be 3-4 orders of magnitude more stable in amplitude than the 1.3 meter lasers used in these experiments. Such dramatic decreases in noise, leading to an increase in S/N, coupled with improvements in cell design, should make perturbation spectroscopy a practical method for solving some difficult problems.
References

[1] Freund, S. M. and Sweger, D. M., Anal. Chem. 47, 930 (1975).
[2] Freund, S. M., Sweger, D. M., and Travis, J. C., Anal. Chem. 48, 1944 (1976).
[3] Lafferty, W. J., private communication.
[4] Freund, S. M., Hougen, J. T., and Lafferty, W. J., Can. J. Phys. 53, 1929 (1975).
[5] Kaldor, A., Olson, W. B., and Maki, A. G., Science 176, 508 (1972).
[6] Bonczyk, P. A., Rev. Sci. Instrum. 46, 456 (1975).

Part VII. CHEMICAL CHARACTERIZATION OF AEROSOLS

CHEMICAL CHARACTERIZATION OF AEROSOLS: PROGRESS AND PROBLEMS

William E. Wilson
U.S. Environmental Protection Agency, Research Triangle Park, North Carolina 27711, USA

1. Introduction

Although substantial progress has been made in the chemical characterization of aerosols during the last five years, many problems still remain. It was recognized years ago that a major analytical problem was to determine the chemical compounds in aerosols, as opposed to elemental analysis. This remains a key problem for environmental chemists today. However, if a specific chemical species is of interest, it can usually be measured. It is useful, therefore, to consider not only what measurements are needed, but also the most appropriate technique in a variety of measurement situations. The variety of situations in which aerosols need to be characterized, as well as the types of measurements that are needed, are examined here. Some measurement techniques that should be developed to better study aerosol formation, transport, and removal processes are discussed. This talk will be limited to ambient air measurements and will not consider source measurements.

2. Discussion

Measurement situations in which aerosol characterization is needed cover a variety of conditions: time resolutions from one week to less than one second; time intervals of a few seconds to years; data availability from real-time, continuous output to analysis days or months after the event. These varying data needs give rise to a wide variety of measurement needs. A number of measurement situations and associated time scales, shown in table 1, are discussed. As new techniques and instruments are developed, it is useful to consider each specific measurement situation.

Table 1. Measurement situations and time scales associated with aerosols

Measurement situation                                       Resolution     Interval   Data output*
Compliance monitoring                                       24 hours       years      R
Episode identification                                      1 hour         days       C
Epidemiological measurements (community and personal)       24 hours       months     R
Laboratory chambers (clinical exposure and smog chamber)    1-15 minutes   hours      C
Model development and evaluation                            1 hour         months     R
Aerial measurements (plumes)                                1 second       minutes    C
Aerosol transport                                           3-6 hours      months     R
Source identification                                       1 hour         days       R
Ecological deposition                                       1 week         years      R
Deposition parameters                                       1 second       hours      R

*R = retrospective, results needed hours to days after measurements; C = continuous, real-time data output needed.

Some basic measurements of aerosols and some specific effects or properties of aerosols, given in table 2, are discussed. In considering the basic measurements, we are concerned with (1) both total and size-segregated samples, (2) the analysis of collected samples and in-situ measurements, (3) minimal disturbance of the sample volume, and (4) remote as well as point measurements. Some relatively new techniques for collecting samples, including the dichotomous sampler and the streaker, are described.
Of special concern at the present time is the analysis of sulfate compounds. For the sulfates, both particle size and the particular ion associated with the sulfate are important. A number of laboratories have tried to develop thermal techniques for separating sulfates in collected samples. These have all been inadequate due to a variety of reasons. Recently, a thermal technique using hot, dry air as the heating mechanism appears to be successful, but only for separating H₂SO₄ from other sulfates. In-situ thermal techniques using a variety of detection devices also appear to have some value in separating H₂SO₄ from other sulfates. A technique based on differences in hygroscopicity shows promise for differentiating the acid sulfates, H₂SO₄ and NH₄HSO₄, from (NH₄)₂SO₄.

The flame photometric detector, which is widely used for sensitive measurements of gaseous sulfur compounds, has also been shown to give quantitative results for certain sulfate particles. Changes in burner design, now being studied in several laboratories, show promise of providing sensitive measurements of sulfates, perhaps with differentiation of H₂SO₄ from less volatile sulfates, and with sufficient time resolution to be applicable to aircraft operations and aerosol deposition studies. The use of flame-induced radiation or ionization shows promise for other types of aerosol measurements.

Table 2. Basic measurements and specific effects or properties of aerosols

Basic measurements:
  Mass
  Volume
  Number
  Elemental composition
  Valence state
  Radical components (SO₄²⁻, NO₃⁻, NH₄⁺, PO₄³⁻, NO₂⁻, SO₃²⁻)
  Compound analysis (sulfates, nitrates, organics)
  Specific toxic components

Specific effects or properties:
  Visibility reduction
  Scattering properties
  Absorption properties
  Charge-carrying capacity
  Cloud condensation nuclei
  Ice condensation nuclei
  Hygroscopicity
  Volatility
  Acidity
  Mutagenic potential
  Carcinogenic potential

3. Conclusion

Characterization of the organic fraction of the ambient aerosol presents several problems. Most historical data give the mass of the benzene-soluble fraction. Recent studies, however, show the existence of a substantial mass of more polar organic matter that is not soluble in benzene. Studies with gas chromatography-mass spectroscopy and high-resolution mass spectroscopy have identified hundreds of organic compounds. These techniques provide more data than can be usefully handled except for very special and limited studies. We therefore need techniques that fall somewhere in between the simple and detailed techniques mentioned above. Also, more characterization studies are needed to determine the composition of organic compounds in emissions from specific sources and in rural/urban areas as a function of time. Techniques to analyze for elemental carbon, organic carbon, and organic compound types, and plans for various systems to characterize the organic fraction of ambient aerosols, are discussed.

A number of non-aerosol measurement needs have been identified in attempts to study the formation, transport, and removal of aerosol. These needs, which involve meteorological measurements, remote measurements, and measurements of gaseous species, are described briefly.

DETECTION OF INDIVIDUAL SUBMICRON SULFATE PARTICLES

Yaacov Mamane and Rosa G. de Pena
Department of Meteorology, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
1. Introduction

Significant quantities of sulfates exist as particulate matter in the atmosphere, particularly in polluted air over densely populated areas and, to a smaller degree, in rural areas and mountains far from human influence [1]¹, in Antarctica [2], and in the stratosphere [3]. Recent data [4] at selected stations throughout the U.S. indicate that the concentration of sulfates (SO₄²⁻) ranges from about 8 to 16 μg/m³ at urban sites, from 5 to 9 μg/m³ at non-urban sites, and is about 2 μg/m³ in the mountains.

A number of investigators have been concerned with airborne sulfates, but very few have dealt with individual particles. A recent critical review on the analysis of airborne sulfate [5] states that the physical and chemical nature of airborne particulate sulfate, and the size distribution, are largely unknown. The inadequate amount of data on size distribution for sulfate is limited to impactor data. But in order to collect sufficient amounts of sulfate on every stage of the impactor for bulk analysis, many hours of sampling are needed [6]. Also, the size separations provided by the different stages of an impactor are by no means clearly defined.

2. General Method

The method described in this paper, because of its high sensitivity, can be used whenever high resolution in time is required (airborne samples, low concentrations, etc.). It is basically a modified spot test and is based on a method suggested by Bigg et al. [7]. King and Maher [8] applied this method to large hygroscopic particles in the size range 0.5 to 200 pg (0.8 μm < d < 5.8 μm if a density of 2 g/cm³ is assumed). The purpose of the present work is to apply this method to smaller sizes in a more quantitative way than has been done before.

The method consists of bringing a sulfate particle into contact with a very thin film of barium chloride. In the presence of water vapor a reaction takes place. A persistent spot (Liesegang-type rings) of barium sulfate is formed and can be examined under the electron microscope (EM). Under constant relative humidity (RH) and thickness of the barium chloride film, the ratio between the diameter of the spot and the particle diameter is a constant. A "calibration curve" can be obtained for different sizes of particles, as in figure 1, providing a method for both spotting and sizing sulfate particles. Under the conditions of our experiment the size of the ammonium sulfate halo is about 1.5 times the original particle diameter. For sulfuric acid it is about twice as large. The halos formed by sulfuric acid and ammonium sulfate are shown in figures 2 and 3.

¹Figures in brackets indicate literature references at the end of this paper.

Figure 1. Calibration curve of the halo-particle diameter relation for sulfuric acid particles (regression line for 185 particles: Y = 0.06 + 3.67X - 1.00X²).

Figure 2. Electron micrograph of sulfuric acid particles (a) before and (b) after reaction with barium chloride.

Figure 3. Electron micrograph of ammonium sulfate particles (a) before and (b) after reaction with barium chloride.

The main steps of the technique consist of the following:

(a) Collect particles on electron microscope (EM) carbon-coated grids (numbered and lettered grids are the most convenient).
Examine some representative fields under the EM using the least intense beam and the smallest magnification.

(b) Deposit a few milligrams of barium chloride on the sample in a vacuum bell jar (at about 10⁻⁶ Torr) from a heated tungsten filament. For submicron particles, 0.1-1.0 μm, a layer of about 300 ± 100 Å gives the best results.

(c) Expose the grid in a closed chamber to a known relative humidity, usually 60 to 80 percent, for an hour or so. Examine again the same fields of view seen before for reaction spots.

The above technique has been used to identify submicron sulfates generated in the laboratory, including sulfuric acid, ammonium sulfate, and sodium sulfate. The particles collected on a BaCl₂ layer were exposed to a relative humidity of 70 and 80 percent, respectively, to obtain a complete reaction. This study indicated that the vapor pressure (RH) needed for the particle to react is lower than the equilibrium vapor pressure of a bulk saturated solution, as was observed by Junge (1963).

3. Results

The technique described gave positive results with particles as small as 0.04 μm diameter (figure 4). This lower limit is only the result of the method of collection (Casella impactor) and the grain of BaCl₂. There is no practical limit to the upper size. However, a thicker film has to be used for larger particles.

Figure 4. Electron micrograph of the smallest halos observed ((NH₄)₂SO₄; original particle about 0.04 μm). The other particles are BaCl₂.

The method is reproducible, as can be seen in table 1, where the results of three runs are shown.

Table 1. The relation between the halo diameter and that of the original particle, y = ax + b

Run    a              b               n
I      1.53 (± 0.11)  -0.05 (± 0.06)  172
II     1.66 (± 0.18)  -0.08 (± 0.10)  85
III    1.66 (± 0.13)  -0.03 (± 0.07)  215

x, particle diameter in μm; y, halo diameter in μm; n, number of particles. The 95 percent confidence intervals are given in parentheses.
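Given the regression coefficients of table 1 (or the quadratic calibration of figure 1), a measured halo diameter can be converted back to an estimated particle diameter. A minimal sketch under the linear calibration y = ax + b of table 1, using the run I coefficients as an example.

```python
def particle_diameter_from_halo(halo_um, a=1.53, b=-0.05):
    """Invert the linear halo calibration y = a*x + b (table 1, run I):
    x is the particle diameter and y the halo diameter, both in micrometres."""
    return (halo_um - b) / a

# A 0.5 um (NH4)2SO4 halo corresponds to a particle of roughly 0.36 um.
print(round(particle_diameter_from_halo(0.5), 2))
```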
The structure of the reaction spot is different for different sulfate particles. Figures 5 and 6 illustrate the spots obtained for H₂SO₄ and (NH₄)₂SO₄.

Figure 5. A typical reaction of an H₂SO₄ particle produced in the laboratory with a thin layer of BaCl₂.

Figure 6. A typical reaction of an (NH₄)₂SO₄ particle produced in the laboratory with a thin layer of BaCl₂.

The method also allows sulfate and sulfite particles to be distinguished. In this regard our preliminary results indicate that BaSO₃ spots are destroyed in HCl vapor while the BaSO₄ spots remain unchanged (figure 7).

Figure 7. An example of Na₂SO₃ reaction with BaCl₂ (a) before and (b) after exposure to HCl vapor.

References

[1] Junge, C. E., Air Chemistry and Radioactivity, Academic Press, New York, 382 (1963).
[2] Cadle, R. D., Fisher, W. H., Frank, E. R., and Lodge, J. P., Jr., Particles in the Antarctic Atmosphere, J. Atmos. Sci. 25, 100-103 (1968).
[3] Bigg, E. K., Stratospheric Particles, J. Atmos. Sci. 32, 910-917 (1975).
[4] Altshuller, A. P., Regional Transport and Transformation of Sulfur Dioxide to Sulfates in the U.S., J. Air Poll. Control Assoc. 26, 318-324 (1976).
[5] Tanner, R. L. and Newman, L., The Analysis of Airborne Sulfate--A Critical Review, J. Air Poll. Control Assoc. 26, 737-747 (1976).
[6] Kadowaki, S., Size Distribution of Atmospheric Total Aerosols, Sulfate, Ammonium and Nitrate Particulates in the Nagoya Area, Atmos. Env. 10, 39-43 (1976).
[7] Bigg, E. K., Ono, A., and Williams, J. A., Chemical Tests for Individual Submicron Aerosol Particles, Atmos. Env. 8, 1-13 (1974).
[8] King, W. D. and Maher, C. T., The Spatial Distribution of Salt Particles at Cloud Levels in Central Queensland, Tellus 28, 11-23 (1976).

AN ANALYSIS OF URBAN PLUME PARTICULATES COLLECTED ON ANDERSON 8-STAGE IMPACTOR STAGES

Philip A. Russell
Denver Research Institute, University of Denver, Denver, CO 80210, USA

1. Introduction

It is becoming increasingly apparent that human lungs selectively capture particles from approximately 5.0 to 0.1 μm in diameter and effectively extract surface-associated metals and organic compounds. Even relatively inert materials such as quartz may be harmful if deposited in the lungs. Thus interest in the respirable fraction of suspended particles and their identification has become of increased concern.

2. Discussion

Cascade and virtual air impactors have been used extensively in particle size separation for industrial and environmental sources. A combination of electron microscopy and energy-dispersive X-ray fluorescence spectrometry provides a powerful analytical tool for examining impacted (and filtered) substrates. The electron beam of a scanning electron microscope (SEM) may be used to examine an impaction spot or individual particle for morphology and characteristic X-ray emission. The electron beam of a transmission electron microscope (TEM) can be used to produce a selected-area electron diffraction (SAED) pattern which can be used to identify crystalline particles by chemical composition.

Urban plume particles collected on Anderson 8-stage cascade impactor substrates, LoVol filters, and paraffin-coated mylar were examined using the techniques mentioned above. Particles on Millipore and Nuclepore substrates from impactors and LoVol filters were examined in an SEM equipped with an energy-dispersive X-ray spectrometer (EDS) after coating with a thin layer of carbon. TEM-SAED investigations required the transfer of material from collection substrates to carbon-coated grids.

3. Conclusions

Examination of impactor spots on substrates of 8-stage cascade impactors demonstrated that (1) sulfur (non-mineral) is always found predominantly on the final impactor stage; (2) lead and lead bromochloride were found in stages 5 and 6 or stage 7; and (3) the amount of sulfur increased significantly during pollution episodes when air mass movement recirculated the urban plume back into the sampling area, and was not correlated with lead concentrations.

Elemental information from X-ray intensities of impact spots and quantitative estimates from LoVol samples collected simultaneously at two different sampling sites is compared. Total impact spot intensities for stages 3-7 and LoVol quantitative estimates had poor correlations for lead, moderate for mineral elements (Si, Ca, K, Fe), and excellent (r = 0.90) for sulfur. Possible explanations for the variance in correlation are (1) the wide range of sulfur and low range of lead concentrations, and (2) the predominance of mineral particles in the upper stages of the impactor, where impact spots were not analyzed.
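The comparison just described is a simple linear correlation between the stage-summed X-ray intensities and the LoVol estimates. A minimal sketch follows; the paired values are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical paired values: summed impact-spot X-ray intensity (stages 3-7,
# arbitrary units) and the LoVol quantitative estimate (ug/m^3) for sulfur.
spot_intensity = np.array([120.0, 310.0, 95.0, 480.0, 260.0, 150.0])
lovol_estimate = np.array([6.0, 9.5, 3.0, 15.0, 6.5, 7.8])

r = np.corrcoef(spot_intensity, lovol_estimate)[0, 1]
print(round(r, 2))   # ~0.9 with these invented numbers, mirroring the sulfur result
```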
The variances of the elemental compositions determined by simultaneous independent studies at the two sampling sites are similar for the different elements, suggesting that certain elements (S, Si, Ca, K, and Fe) were representative of a well-mixed plume while others (Pb) were associated with point sources near the sampling site.

Examination of individual particles impacted on various substrates of stages of cascade impactors demonstrated that (1) there was little or no difference in particle sizes from the outer edge of the substrate to the center; (2) particle sizes did decrease from stage to stage (through stage 7), but the variance, even for specific particle species, was large; (3) the measured physical effective diameters of minerals and flyash were significantly different on all stages; (4) mineral or mineral-like elements were present in all stages, while lead, lead bromochloride, zinc, sulfur (non-mineral), and carbon particles became more numerous and sometimes dominant in the last impactor stages; and (5) sulfur was always associated with auto emission particulates.

Comparisons of average diameters of mineral and flyash particles were made for different sampling periods for an impactor stage where minerals and flyash were often present. Flyash diameters determined by electron microscopy for four different dates were not different at the 5% significance level, where F = 1.195 for 3 and 34 degrees of freedom. However, the average mineral particle diameters determined by electron microscopy were the same only when the significance level was 0.1%, where F = 4.842 for 3 and ∞ degrees of freedom; thus it is unlikely that the mineral species size populations collected during different sampling periods are equivalent. There is apparently an unknown factor that results in differential segregation of mineral particles from day-to-day sampling.

TEM-SAED was used to further analyze the particles collected on stages 5 and 7 of one Anderson cascade impactor. The transferred material from the fifth stage of the Anderson impactor was predominantly mineral and flyash with some lead-rich particles. Quartz particles produced sharp SAED patterns, as did other minerals. None of the flyash or lead-rich particles observed produced diffraction patterns. Particles observed on stage 7 were very different from those observed on the stage 5 substrate. Most of the particles were carbon, but some flyash was also present. Diffraction patterns were occasionally observed for particles associated with flyash.

Sulfur-rich particles that were collected from the Los Angeles urban plume and examined by TEM-SAED were observed to react strongly with the TEM copper grid when transferred in xylene. No electron diffraction patterns were produced by any of these particles. The particles were observed to be agglomerates of smaller particles less than 0.1 μm in diameter. Fine particles of sulfur, sulfamic acid, and ammonium sulfate produced in the laboratory and transferred to copper TEM grids in xylene did not produce particles or grid corrosion like the fine particles transferred from the environmental sample. It was concluded that the seed material was probably carbon, and the surrounding material was a noncrystalline sulfur compound.
SIZE DISCRIMINATION AND CHEMICAL COMPOSITION OF AMBIENT AIRBORNE SULFATE PARTICLES BY DIFFUSION SAMPLING

Roger L. Tanner and William H. Marlow
Atmospheric Sciences Division, Department of Applied Science, Brookhaven National Laboratory, Upton, New York 11973, USA

1. Introduction

A method for sampling aerosols has been developed [1]¹ by which particles below the optical scattering region (<0.3 μm) can be separated into size-related categories by diffusion in a battery of diffusion cells (collimated hole structures) of exceptional length. The size-segregated particle fractions are efficiently collected on chemically inert, phosphoric acid-treated quartz filters, and the chemical composition of the sulfate-containing particles is determined by state-of-the-art analytical techniques.

¹Figures in brackets indicate literature references at the end of this paper.

2. Theory

The "collimated hole structure" type of diffusion battery used in this work consists of a series of cylindrical stainless steel blocks, each containing ca. 14,000 parallel cylindrical holes. When the battery is operated under conditions where the fractional penetration, fₐ, of aerosol particles of diffusion coefficient D passing through a cell of length ℓ at a volume flow q is given by eqs. (1) and (2) (the Gormley-Kennedy equation) [2], the fractional penetration of an aerosol stream passed through a series of diffusion cells is the product of the fractional penetrations through each cell. From two or more simultaneously collected ambient aerosol samples, one collected from untreated air and the other(s) after diffusion processing, the chemical content of the smaller size fraction of the aerosol (removed preferentially by diffusion to the cell walls) can be inferred by subtracting the concentrations measured with the diffusion-processed sample from those with the ambient aerosol.

fₐ = 0.819 exp(-3.657α) + 0.097 exp(-22.3α) + 0.032 exp(-57α) + 0.027 exp(-123α) + 0.025 exp(-750α)   (1)

where α = πDℓ/q   (2)
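A minimal numerical sketch of eqs. (1) and (2): per-cell penetration from the Gormley-Kennedy expression, and the product rule for a series of cells. The diffusion coefficient, cell length, and flow used in the example are illustrative values, not the actual battery dimensions.

```python
import math

def gormley_kennedy_penetration(D_cm2_s, length_cm, flow_cm3_s):
    """Fractional penetration f_a through one cylindrical diffusion cell,
    eq. (1), with alpha = pi * D * l / q as in eq. (2)."""
    alpha = math.pi * D_cm2_s * length_cm / flow_cm3_s
    terms = ((0.819, 3.657), (0.097, 22.3), (0.032, 57.0),
             (0.027, 123.0), (0.025, 750.0))
    return sum(a * math.exp(-b * alpha) for a, b in terms)

def battery_penetration(D_cm2_s, cell_lengths_cm, flow_cm3_s):
    """Penetration through a series of cells is the product of the
    per-cell penetrations."""
    p = 1.0
    for length in cell_lengths_cm:
        p *= gormley_kennedy_penetration(D_cm2_s, length, flow_cm3_s)
    return p

# Example: a particle with D = 1e-5 cm^2/s passing through two 10 cm cells
# at 5 cm^3/s per hole (all values assumed for illustration).
print(round(battery_penetration(1e-5, [10.0, 10.0], 5.0), 3))
```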
The size distributions were determined by a least-squares fit of 11 representative particle sizes from 0.02 ym through 0.4 ym diameter. The calibration of this battery is that of Sinclair, et at. [7]. Assuming the particles to be spherical, surface and volume distri- butions were computed. Volume distributions thus calculated have been combined with the fractional penetration curves to obtain distributions of the diffusion sampled aerosols collected as samples PQDBx and PQDB 2 . Appropriate averaging of individual volume distributions yields reasonable average volume distributions for each sampling period (7/26-7/30) which may be directly compared with the chemical composition data. The analytical data for sulfate (figure 1) in samples PQU and PQDBi are i n 9°od agreement indicating little sulfate mass in the very small particles (<0.05 ym). However, comparison of PQU and PQDB 2 sulfate data indicate removal of 20-50 percent of the sulfate mass in the diffusion battery, demonstrating clearly that for these rural ambient aerosol samples, 1/5 to 1/2 of the sulfate mass is present in the suboptical size range (< 0.3 ym). Comparison of the analytical data with the volume distribution data shows that most of the suboptical mass can be accounted for by sulfate, if one makes reasonable assumptions concerning the density of the small particles. The analytical data for acid and ammonium during the sampling period have also been compared with that for sulfate. Qualitatively different temporal patterns were observed for acidity compared with sulfate. Sulfate episodes during the 7/22-23 and 7/26-27 periods were not accompanied by large acid concentrations whereas during the 7/29-30 period large acid concentrations were present and showed diurnal variation. During daytime samples for 7/29 and 7/30 periods, the acid content was maximum (figure 2) and predominant in very small (< 0.05 ym) particles; during the 7/29-30 night sample it was much reduced and associated with larger particles. This contrasts with the diurnal behavior of ammonium (figure 3) which was highest during the 7/29-30 night sample than on either 7/29 or 7/30 and was associated principally with optical-sized particles. Since the ambient sulfate concentration remains nearly constant during the 7/29-30 period, there is substantial evidence for photochemically generated sulfate on this occasion. 338 L 1 , 1 1 1 r O • I 1 ro 1 )N. 1 h~ . : t : , ; i 0) CM I iCN i |V> i i i CO i CM _ 8 : ( • t fe i co r i CO in 1 w r~ ! i h~ o « 1 cx> . - - 1 : • CM . » m r 1 o q : i CM o i o. : ■ H CN i w CM | : i D i CD M >1 r-l 3 i 1 ! CM - rH Q a r-l r-l a : 1 r "1 ! _ w a u H C c i L_, L-, IjO g 8 - p 3 j 1 CM u w O 1 j 1 Jjs. - H « o CO rfl i-i •1 ! .._ h- -— a" O" • Jl i, N. i 1 i • i N- 1 1 h ( o o o o o Oh- o o o o o < o 00 CD ST » :m Q d313IAI OianO U3d SlN31VAin030NVN Nl NOIlVdlN30NO0 339 tf M w r-l CM o m o ~ Q « O ex Pj ro dt &i CM II O CN H cn >1 D r-l fa ■3 fa b H O , r-l Q r-| m § ■--1 p H o fa Q - fa :s n H fa; u • w U) t oS m Fh ,-i • § o • •. CJi E> !2i c O O ■A fa H r-l H a II <: F. p; rri g CO w 5-1 C) a a o CM o r-l Q M C) < o CD CN CN CN CM o o CO i l;- 1 o CO o o Q) -h tjTo o U -P C d) g ^ C «J -rl C O M U3 o o CN o o 340 Q W CO co w o o &< i a o H CO En fa 341 Six-hour backward trajectories from the Glasgow, Illinois site have been determined for this period by Meyers and Cederwall of our laboratory, and correlate well with both the chemical composition and size distribution data. 
Trajectories from the north or west result in lower sulfate levels and negligibly small acidity but with a significant fraction of aerosol volume in the <0.1 ym range. Trajectories passing through urban and/or power plant plumes for brief periods may result in higher sulfate concentrations which is not necessarily acidic sulfate. Stagnant air masses such as the one passing through the Glasgow site on 7/29-30 which passed through St. Louis, then over several major power plants in the previous four days, may result in \/ery high acid and sulfate levels. References [1] Marlow, W. H. and Tanner, R. L., A Diffusion Battery Method for Ambient Aerosol Size Discrimination with Chemical Composition Determination, Anal. Chem. 48, in press (1976). [2] Sinclair, D., A Portable Diffusion Battery. Its Application to Measuring Aerosol Size Characteristics, Am. Ind. Hyg. Assoc. J. 33, 729 (1972). [3] Askne, C, Brosset, C, and Ferm, M. , Determination of the Proton-Donating Property of Airborne Particles, Swedish Air and Water Pollution Research Laboratory, Gothenburg, Sweden, IVL Report B 157, 1-20 (1973). [4] Sulfate Method Vlb via Turbidimetry , Technicon Industrial Systems, Tarrytown, NY (1959). [5] Bolleter, W. T. , Bushman, C. J., and Tidwell, P. W. , Spectrophotometric Determination of Ammonia as Indophenol , Anal. Chem. 33, 592 (1961). [6] Mull in, J. B. and Riley, J. P., The Spectrophotometric Determination of Nitrate in Natural Waters, with Particular Reference to Sea-Water, Anal. Chem. Acta. 1_2, 464 (1955). [7] Husar, J. D. , Husar, R. B., and Stubits, P. K. , Determination of Submicrogram Amounts of Atmospheric Particulate Sulfur, Anal. Chem. 47, 2062 (1975). 342 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) THE IDENTIFICATION OF INDIVIDUAL MICROPARTICLES WITH A NEW MICRO-RAMAN SPECTROMETER 1 E. S. Etz and 6. J. Rosasco Institute for Materials Research National Bureau of Standards Washington, DC 20234, USA 1. Introduction The ability to probe the molecular identity of individual microparticles is of import- ance in a variety of areas. These include air and water pollution, corrosion, forensic chemistry, pathology, and the study of terrestrial soils and lunar fines. To analyze such specimens, it is usually necessary to work with particles less than 10 ym in linear dimen- sions and frequently detect as little as 10" 13 gram of a substance of interest. In the past there did not exist a practical technique for performing molecular analyses; consequently, the determination of the molecular composition of individual microparticles has been neglected. Laser-excited Raman spectroscopy applied to the chemical characterization of discrete, small particles offers valuable information to the microanalyst. If the sample is Raman active, the technique has the potential of furnishing not only the structural formula of the molecular species contained in the particle, but in addition may also yield information on the crystalline (or glassy) phase of the material under investigation. The Raman spectrum will therefore, in many cases offer a unique "finger print" of the constituent chemical species and their structural coordination for a broad range of inorganic and organic materials. 2. Discussion Earlier studies at NBS [l] 2 demonstrated the feasibility of acquiring Raman spectra from discrete, micrometer-sized particles. 
It was shown that these spectra could provide a basis for the chemical identification of small particles present in many forms of particulate matter. These investigations have led to the development [2] of a new laser Raman spectrometer system for microanalytical applications. The design of the laser Raman microprobe is optimized for the routine spectroscopic analysis of single particles less than 10 µm in linear dimensions. The instrument has largely been constructed from commercially available components. Its basic design is identical in principle to that of conventional Raman spectrometers. It makes use of a gas laser as the source of excitation; it has beam-directing, pre-filtering and focusing optics and a separate optical system for collection and transfer of the scattered radiation to a double monochromator employing photoelectric detection and a digital photon counting system. The instrument incorporates a number of unique mechanical and optical components which optimize the detection of the Raman signal from a single particle of size 1 µm and smaller.

¹Instrument development partially supported by the U. S. Air Force, Technical Applications Center.
²Figures in brackets indicate the literature references at the end of this paper.

The system is mechanically stable, allows a particle sample to be positioned precisely (to less than 0.2 µm) at the focus of the exciting laser beam, and utilizes a highly efficient ellipsoidal mirror to collect the scattered light. The system is interfaced to a minicomputer for experimental control and for automated, optimized data acquisition.

The analytical capabilities of the new instrument are illustrated by representative Raman spectra of individual microparticles of well-characterized laboratory source materials. Detection and analytical characterization are demonstrated for several classes of inorganic compounds, including oxides, carbonates, nitrates, sulfates and phosphates. Particle Raman spectra were obtained for several selected organic materials and polymers, in every case for single particles less than 10 µm in diameter.

A variety of applications are in progress at the present time. Of particular interest is the spectroscopic analysis and identification of single particles in samples of airborne particulates. The discussion will emphasize results on the characterization of respirable-size particles in urban dust samples and in fly ashes from power plants. Important in this work has been the speciation of sulfur (as in sulfate) and nitrogen (as in nitrate or ammonium) bearing particulates. A major objective of our studies is to establish a reference file of analytical-quality Raman spectra for the identification of single particles in particulate samples from environmental sources.

For analysis, the particles of interest are mounted on the polished surface of a substrate which is chosen to minimize spectral interferences. Sapphire (α-Al₂O₃), lithium fluoride, or highly reflecting metallic substrates can be used. Atmospheric aerosol can be sampled for spectroscopic analysis on the collection surface of the Raman substrate when it is inserted on the stages of any of the cascade-type impactor samplers now widely used for the field collection of airborne particles in size-stratified samples. Any given particle in the sample is observed in the spectrometer microscope (at 400X magnification) to achieve precise positioning for measurement.
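Once a particle is positioned, the automated acquisition described above amounts to stepping the monochromator and accumulating photon counts at each position. The sketch below is only a schematic illustration of such a step-scan loop; the two hardware functions are hypothetical placeholders (the abstract does not describe the actual minicomputer interface), and the defaults are illustrative values consistent with the scan parameters quoted in the following paragraphs.

```python
import random

def move_monochromator(shift_cm1):
    """Hypothetical placeholder for the call that sets the double monochromator."""
    pass

def count_photons(dwell_s):
    """Hypothetical placeholder: returns a simulated photon count for one dwell period."""
    return random.randint(0, 50)

def step_scan(start_cm1=50.0, stop_cm1=3600.0, rate_cm1_per_min=200.0, dwell_s=0.2):
    """Step-scan acquisition: the step size follows from scan rate and dwell time."""
    step_cm1 = rate_cm1_per_min * dwell_s / 60.0   # spectral step per point
    shifts, counts = [], []
    shift = start_cm1
    while shift <= stop_cm1:
        move_monochromator(shift)
        counts.append(count_photons(dwell_s))
        shifts.append(shift)
        shift += step_cm1
    return shifts, counts
```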
Routine spectroscopic measurements on particles of known compounds and on particles isolated from environmental samples have employed the green line (514.5 nm) of the argon ion laser for excitation of the Raman spectrum. For colored samples, or those contaminated with absorbing surface films, the possibility always exists that the microparticle will absorb the incident radiation. Appreciable absorption of the laser radiation invariably leads to heating of the sample. When this occurs, it can often be observed in the spectrum, as a temperature rise in the particle causes a shift of the Raman line(s) to lower frequency and simultaneously brings about a broadening of the Raman band(s). In some cases, particle modification or destruction can result from this heating. Selection of a different exciting line (e.g., 647.1 nm) has been useful for some of the colored particles examined. In general, however, it is particularly important for successful analysis to judiciously control the irradiance (power/area) of the exciting laser focused on the sample. Radiation-sensitive materials must be measured at reduced irradiance levels and commensurately longer measurement times. Spectra are acquired at rates varying from 200 cm⁻¹/min (0.2 s integration time) to 20 cm⁻¹/min (2 s integration time) over the region from 50 to 3600 cm⁻¹. The positional stability of all mechanical and optical components of the instrument is such that measurements can readily be performed over extended periods of time without loss in signal due to either beam or particle drift.

Specific examples are discussed that demonstrate the power of this new technique of microanalysis. Raman spectra are shown for single, solid particles of size <3 µm of the various types of sulfates known to exist in atmospheric aerosol. In these cases, the particle samples were prepared from laboratory source materials. The compounds studied include sodium and ammonium sulfate, together with the bisulfate salts, and calcium sulfate, each of which contributes to total particulate sulfate in air. The spectra of these microparticles reproduce those obtained from bulk samples (e.g., single-crystal or powder samples) of the same source materials. They show that the various neutralized forms of sulfuric acid can readily be characterized on the basis of their respective Raman spectra.

Reference spectra such as these have allowed us to identify small, single particles isolated for analysis from bulk samples of environmental particulate matter. Representative of these successful measurements is the particle Raman spectrum, shown in figure 1, of an "unknown" particle in a sample of urban air particulate dust. The particle analyzed is supported by a sapphire substrate of optical quality. Included with the particle spectrum, and shown as the lower trace, is the "background" spectrum arising from the weak Raman scattering of the substrate material (α-Al₂O₃). The vertical bars placed below the recording trace indicate the Raman frequencies of the SO₄²⁻ bands as observed in the spectrum of a small particle of single-crystal calcium sulfate (anhydrite). Spectroscopic characterization of a variety of sulfate minerals has shown all to be distinguishable on the basis of their fundamental Raman bands. The 8 µm particle is therefore identified as calcium sulfate (anhydrite) based upon the close similarity of its spectrum with that of single-crystal anhydrite.
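The identification step just illustrated, matching the band positions of an unknown particle against a reference file, reduces in its simplest form to comparing peak positions within a frequency tolerance. A minimal sketch follows; the reference band positions listed are placeholders for illustration only, not values from this work.

```python
# Sketch of peak-position matching against a reference file of Raman bands.
# All band positions below are illustrative placeholders, not measured values.
REFERENCE_BANDS_CM1 = {
    "calcium sulfate (anhydrite)": [1017, 675, 628, 499, 417],   # placeholders
    "ammonium sulfate":            [975, 624, 451],              # placeholders
    "sodium nitrate":              [1068, 724],                  # placeholders
}

def match_particle(observed_peaks_cm1, tolerance_cm1=4.0):
    """Score each reference compound by the fraction of its bands found in the
    observed peak list within +/- tolerance_cm1; return the best match and scores."""
    scores = {}
    for compound, bands in REFERENCE_BANDS_CM1.items():
        hits = sum(
            any(abs(obs - band) <= tolerance_cm1 for obs in observed_peaks_cm1)
            for band in bands
        )
        scores[compound] = hits / len(bands)
    return max(scores, key=scores.get), scores
```

In practice a real reference file would also carry relative intensities and bandwidths, but even a position-only comparison of the fundamental bands is often sufficient to separate the sulfate minerals discussed above.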
These studies on microparticles have recently been extended to the Raman investigation of liquid aerosols. Work in progress centers on the spectroscopic characterization of the various types of sulfate in microdroplets of laboratory-produced sulfate aerosol. Included are measurements on small single droplets of free sulfuric acid prepared from aqueous sulfuric acid solutions of known initial concentration. The aim is to establish the limit of detection for undissociated H₂SO₄ and the equilibrium concentrations of the HSO₄⁻ and SO₄²⁻ ions in acid droplets of size 5 µm and below. These measurements also involve the study of the formation of microparticles of NH₄HSO₄ and (NH₄)₂SO₄ from microdroplets of acid aerosol exposed to various trace concentrations (in the range from 1 to 100 ppm) of ammonia vapor in air. Of interest is the estimation of the amounts of sulfate and bisulfate present in the liquid NH₄HSO₄/(NH₄)₂SO₄ system and the monitoring of any time-dependent changes in the relative concentrations of these species, both in the liquid and in the solid phase, during the transformation from the microdroplet to the solid microparticle. The phenomena governing the rates of formation of solid sulfate aerosol from liquid acid aerosol are also being examined as a function of the sizes of the microdroplets under study. The Raman spectroscopic characterization of these sulfate aerosol systems is unique in that observations are made of the molecular composition of individual microscopic entities. The results derived from these studies thus present an opportunity to better understand the peculiar behavior associated with extremely small quantities of matter, which generally exist in particulate forms with high surface-to-volume ratios.

The successful analysis of micrometer-sized samples has involved the development of appropriate particle collection, handling and analysis techniques. These techniques include the use of various aerosol sampling methods, overcoating of particles to preserve integrity, procedures to remove surface contamination, and ways to mount, register and observe single particles on suitable substrates for spectroscopic analysis. The unique information obtained from the micro-Raman analysis of single particles is briefly reviewed and compared to the data on the composition of such samples gained from other microprobe techniques, such as electron probe and ion probe microanalysis.
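As a rough orientation to the HSO₄⁻/SO₄²⁻ partitioning that these droplet measurements probe, the second dissociation of sulfuric acid can be worked out in the idealized dilute-solution limit. This is only a sketch: real acid droplets are far more concentrated, activity effects are ignored, and the equilibrium constant used is a commonly quoted room-temperature value, not one taken from this paper.

```python
import math

KA2 = 1.2e-2  # mol/L; commonly quoted second dissociation constant of H2SO4 at 25 C
              # (dilute, ideal-solution value; an assumption, not from the paper)

def sulfate_speciation(c_total_mol_per_L):
    """Idealized speciation of an H2SO4 solution of analytical concentration c.
    Assumes the first dissociation is complete and activities equal concentrations."""
    c = c_total_mol_per_L
    # Ka2 = [H+][SO4^2-]/[HSO4-] with [H+] = c + x, [SO4^2-] = x, [HSO4-] = c - x
    x = (-(c + KA2) + math.sqrt((c + KA2) ** 2 + 4.0 * KA2 * c)) / 2.0
    return {"HSO4-": c - x, "SO4^2-": x, "H+": c + x}

# Example: a nominally 0.01 M solution is roughly half bisulfate, half sulfate.
print(sulfate_speciation(0.01))
```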
References

[1] Rosasco, G. J., Etz, E. S., and Cassatt, W. A., Appl. Spectrosc. 29, 396 (1975).
[2] Rosasco, G. J., Etz, E. S., and Cassatt, W. A., Particle Analysis by Laser-Excited Raman Spectroscopy, presented at the 1976 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy, Cleveland, Ohio, Feb. 29 - March 5, 1976.

Figure 1. Raman spectrum of an unknown particle in a sample of urban air particulate dust.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A COMPACT X-RAY FLUORESCENCE SULFUR ANALYZER¹

L. S. Birks, J. V. Gilfrich and M. C. Peckerar²
Naval Research Laboratory
Washington, DC 20375, USA

1. Introduction

Measurement of the sulfur concentration in particulate effluent from emission sources has become even more important recently in view of the potential necessity to use high-sulfur fuels for generating electrical power. The x-ray measurement of sulfur requires a helium-path or, preferably, a vacuum-path x-ray spectrometer. In addition, the presence of intermediate and higher atomic number elements causes some difficulty because of line interferences. Use of a high-resolution crystal spectrometer minimizes this problem.

2. Discussion

The x-ray equipment used in the laboratory to perform the analyses of air pollution particulate samples can be very sophisticated, enabling the determination of as many as 20 or 25 elements in a sample every few minutes [1,2]³. Unfortunately, such equipment requires a relatively large capital investment and is quite bulky, making it inconvenient for on-site use. In response to the need for a field instrument, the Environmental Protection Agency (EPA) initially requested the Naval Research Laboratory (NRL) to design and build a general, compact x-ray analyzer [3]. This crystal spectrometer instrument, which was delivered to the EPA laboratories in Research Triangle Park, NC, in February 1975, was limited to the measurement of atomic numbers above 23 (V) because it was an air-path instrument.

A second-generation vacuum model of that air-path analyzer was suggested by NRL as a potential on-site device, originally intended only for the measurement of total sulfur concentration. After the initial design of the prototype had been conceived, a concurrent project at NRL supported by EPA demonstrated that a high-resolution single-crystal x-ray spectrometer could determine whether the sulfur was present as sulfide or as sulfate. If both forms were present, their proportions could be determined quantitatively [4]. The success of the valence work suggested that the sulfur analyzer should be redesigned to incorporate this capability. Since the measurement of valence state required that the spectrometer be adjustable (whereas the original concept had utilized a fixed crystal and detector), a logical extension was to design the spectrometer to cover a large enough Bragg angle range to measure elements other than sulfur. The prototype instrument is shown schematically in figure 1.

¹Supported by EPA under Interagency Agreement EPA-IAG-D4-0490.
²Present address: Advanced Technology Laboratories, Westinghouse Electric Corporation, Post Office Box 1521, Baltimore, Maryland 21203.
³Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Schematic Diagram of the Sulfur Analyzer.

The excitation source is a specially designed Pd transmission-target x-ray tube which requires only air cooling at its rated power of 15 watts. The tube-target to sample distance is less than two centimeters. The divergence allowed by the collimator is 0.07 degrees. The crystal is freshly cleaved (200) NaCl, for which we estimate the rocking curve breadth (FWHM) to be less than 0.01 degrees. Thus the resolution is defined by the collimator and is approximately 1 eV (0.002 Å) at S Kα. The detector is a sealed proportional counter having 50 µm Be windows and a path length of 4.7 cm, filled with one atmosphere of Ne-CO₂; the detection efficiency is about 65 percent at S Kα. The spectrometer is manually adjustable over a 2θ range of 90 to 150°, which corresponds to a wavelength span of 3.4 Å to 5.4 Å with the NaCl crystal. Substitution of other crystals would permit the measurement of any wavelength between about 2.0 and 25 Å (although each crystal would only cover a small portion of this range). The 50 µm Be window on the detector would not be very transparent at wavelengths longer than about 8 Å. The Pd transmission-target tube was chosen because of the favorable relation between the wavelength of the Pd L-lines and the sulfur K absorption edge; it would not be very efficient for many other elements.
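The collimator-limited resolution quoted above can be checked by differentiating the Bragg relation λ = 2d·sinθ, which gives Δλ = 2d·cosθ·Δθ for a small angular spread Δθ. The quick check below uses standard tabulated values for the NaCl (200) spacing and the S Kα wavelength (they are not given in the abstract):

```python
import math

# Check of the collimator-limited resolution of the sulfur analyzer.
TWO_D_NACL_200 = 5.64   # angstroms; 2d for NaCl (200), standard value (assumption)
S_KALPHA = 5.373        # angstroms; S K-alpha wavelength, standard value (assumption)
HC = 12398.4            # eV * angstrom

theta = math.asin(S_KALPHA / TWO_D_NACL_200)            # Bragg angle for S K-alpha
d_theta = math.radians(0.07)                            # collimator divergence (from the text)
d_lambda = TWO_D_NACL_200 * math.cos(theta) * d_theta   # about 0.002 angstrom
d_energy = (HC / S_KALPHA) * (d_lambda / S_KALPHA)      # about 1 eV
print(f"d_lambda = {d_lambda:.4f} A, d_energy = {d_energy:.2f} eV")
```

Both numbers agree with the figures quoted in the text, confirming that the 0.07° collimator, rather than the <0.01° crystal rocking curve, sets the resolution.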
References

[1] Wagman, J., Bennett, R. L. and Knapp, K. T., X-Ray Fluorescence Multispectrometer for Rapid Elemental Analysis of Particulate Pollutants, Environmental Protection Agency Report No. EPA-600/2-76-033 (March 1976).
[2] Goulding, F. S. and Jaklevic, J. M., X-Ray Fluorescence Spectrometer for Airborne Particulate Monitoring, Environmental Protection Agency Report No. EPA-R2-73-182 (April 1973).
[3] Birks, L. S. and Gilfrich, J. V., Low Cost Compact X-Ray Fluorescence Analyzer for On-Site Measurements of Single Elements in Source Emissions, Environmental Protection Agency Report No. EPA-600/4-75-002 (July 1975).
[4] Gilfrich, J. V., Birks, L. S. and Peckerar, M. C., X-Ray Analysis of the Valence State of Sulfur in Pollution Samples, Final Report to the Environmental Protection Agency, in preparation.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THE X-RAY IDENTIFICATION AND SEMI-QUANTIFICATION OF TOXIC LEAD COMPOUNDS EMITTED INTO AIR BY SMELTING OPERATIONS

Peter F. Lott and Ronald L. Foster
Department of Chemistry
University of Missouri-Kansas City
Kansas City, Missouri 64110, USA

1. Introduction

The toxicity of lead and lead compounds, which may produce conditions such as anemia, nephritis, mental retardation, vision stenosis and muscular dystrophy, is well known [1,2,3]¹. Such toxicity comes about either through oral ingestion or by inhalation. Up to 50 percent of inhaled particles 1 µm or less are retained in the body; particles 2 µm or greater tend to collect on mucous membranes, from which they are ingested. The lower level for the appearance of mild symptoms of lead poisoning is observed at a blood level of 60-80 µg Pb/100 ml.

Current methods of analysis are usually limited to determining the percentage of lead as the "element," and not the lead compounds. This prevents definite source correlations and hazard evaluations, since the toxicity is usually a function of the compound involved. A solution lies in the application of x-ray diffractometry, since the Bragg-law determination of crystal plane spacings provides unique identification of crystalline specimens. The method is non-destructive and rapid. Restrictions for air particulate analysis are the relatively large sample size required to obtain a suitably thick specimen and the 1 percent lower detection limit for compound characterization.

¹Figures in brackets indicate the literature references at the end of this paper.

2. Discussion

Collection of the sample is accomplished by use of a "Hi-Vol" filtering unit. The filtered dust is further concentrated by washing off the particulates with a 5 percent solution of Duco cement in acetone, directed as a jet from a plastic squeeze bottle against the filter. The suspended particles are washed into the cup of a stainless steel Buchner-type funnel (Schleicher and Schuell 2.4 cm, #596 filter paper) mounted on a filter flask. After completion of the washing, suction is continued until the sample is dry. The concentrated sample is fastened with cellophane tape over the opening of a Philips diffraction sample holder, mounted, and scanned at 1° 2θ/min from 20° to about 50° 2θ.
X-ray radiation is obtained from a Cu tube operated at 40 kV and 25 mA and passed through a Ni filter to select the Cu Kα radiation. The pulse height selector was set on the 111 reflection plane of a lead metal film at a 2θ angle of 31.33°. Patterns were identified by use of the Fink Index [4] and Inorganic Index [4,5], followed by comparison with the Powder Index File [6]. Additional identifications were made by direct comparisons with the patterns of standard compounds (prepared in the same manner as the samples) of mineral or chemical specimens (fig. 9-12).

Comparisons indicated the certain presence of PbS and PbSO₄. Additional possible substances, based on only one or two identifiable peaks, were lead metal, PbO, and PbO·PbSO₄. The problem of direct comparison was increased by shifts in peak location of as much as 0.4° 2θ due to various causes, such as sample thickness and geometry for x-ray reflection as well as inter-compound effects. This prevented discrimination where only a few tenths of a degree 2θ separates the locations of the major peaks of two compounds. A very broad peak, observed from about 22° to 24° 2θ (fig. 1), was due to amorphous scattering from the filter paper sample backing when the sample was less than "infinitely" thick.

Figure 1. Schleicher and Schuell filter paper background scatter pattern.

The diminution of the scatter peak height and the corresponding increase of compound peak heights were measured for a series of samples of different thicknesses. A plot of the logarithm of the latter versus the scatter peak height produced a parallel series of lines that served to correct peak heights to infinite thickness in thin samples. Consequently, the patterns could be directly compared by peak height on a semi-quantitative basis for the relative abundance of each compound.

Further investigation of the earlier assigned [7] PbSO₄ peaks at 26.7° and 20.85° 2θ was made in light of their appearance in the background samples from Fulton and Maryville, MO with no supportive series of minor peaks. Quartz provided an alternative identification for the major peaks at 26.66° and 20.85° 2θ, and should be expected to be present, being a very ubiquitous compound. Additional minor peaks matched the pattern for CaCO₃, nearly as common a background compound as quartz.

The quantitative aspect of the x-ray diffractometry technique was explored using the ore concentrate. The gravimetric analysis [8] was performed by dissolving samples in concentrated HNO₃ and in 70 percent HClO₄. This resulted in Pb determinations of 71.12 percent and 71.03 percent, respectively. By comparison, the measured peak heights in the diffraction pattern predicted 86.17 percent PbS in the ore concentrate, giving a lead content of 74.62 percent, only 3.51 percent greater than the average of the chemical determinations. Statistical treatment of the data gave a standard deviation of 2.42, a 95 percent confidence limit of 2.8, a 99 percent confidence limit of 6.72, and a relative probable error of 4.2 percent. Thus, the error is within the expected range and acceptable for a quantitative method, validating the sample preparation and mounting methods.
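The step from 86.17 percent PbS to 74.62 percent Pb is simply the Pb mass fraction of galena; a one-line check using standard atomic masses (which are not quoted in the paper):

```python
# Check of the PbS -> Pb conversion quoted above, using standard atomic masses.
M_PB, M_S = 207.2, 32.07            # g/mol (standard values, not from the paper)

pbs_fraction = 0.8617               # 86.17 percent PbS in the ore concentrate
pb_in_pbs = M_PB / (M_PB + M_S)     # mass fraction of Pb in PbS, about 0.866
pb_content = pbs_fraction * pb_in_pbs
print(f"Predicted Pb content: {100 * pb_content:.2f} percent")   # about 74.6 percent
```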
Further semi-quantitative comparisons between samples necessitated the separation of the overlapped PbSO₄/quartz peaks. For binary mixtures, the ratio of the two analytical peaks at 26.7° and 20.85° 2θ, each containing contributions from the two compounds, can be discriminated as to composition by plotting the variation of the ratio versus the weight fraction of one compound [9,10].

To correct for differences in diffraction efficiency, a pure sample of each known compound was prepared, and the height of the major peak in its diffraction pattern (or of the second peak when the primary peak overlapped the pattern) was compared against quartz, the most efficient diffractor; a ratio factor was then calculated to equalize the height of the response to the weight fraction of a compound relative to the most efficient compound. No attempt was made to correct for interelement effects, which appeared to be of secondary significance.
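Schematically, the peak-ratio approach just described amounts to building a calibration curve of the peak-height ratio against weight fraction from known mixtures and inverting it for an unknown. The sketch below uses purely hypothetical calibration values, since no numerical calibration is given in the abstract.

```python
# Schematic of the binary peak-ratio method for the overlapped PbSO4/quartz peaks.
# The calibration points below are hypothetical placeholders, not measured values.
import bisect

# (weight fraction PbSO4, ratio of peak heights I(26.7 deg)/I(20.85 deg))
CALIBRATION = [(0.0, 0.55), (0.25, 0.80), (0.50, 1.10), (0.75, 1.45), (1.0, 1.85)]

def weight_fraction_pbso4(measured_ratio):
    """Invert the calibration curve by linear interpolation between bracketing points."""
    fractions, ratios = zip(*CALIBRATION)
    if measured_ratio <= ratios[0]:
        return fractions[0]
    if measured_ratio >= ratios[-1]:
        return fractions[-1]
    i = bisect.bisect_left(ratios, measured_ratio)
    r0, r1 = ratios[i - 1], ratios[i]
    f0, f1 = fractions[i - 1], fractions[i]
    return f0 + (f1 - f0) * (measured_ratio - r0) / (r1 - r0)

print(weight_fraction_pbso4(1.2))   # about 0.57 with these placeholder values
```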
3. Results

Most patterns were very similar, with the peaks of highest intensity belonging to PbS, PbSO₄ and quartz (fig. 2-6). The background comparisons from Fulton and Maryville, MO show the presence of quartz, dolomite, and calcium carbonate (fig. 7,8). A typical smelter site sample of airborne particulates from Glover, MO (number 2224) (fig. 5) reveals a pattern strikingly similar to the ore concentrate. Strong background peaks of dolomite, ZnS, quartz and a slight CaCO₃ or chalcopyrite peak are observed at levels similar to the ore concentrate pattern. The lack of significant PbSO₄ peaks rules out the blast furnace or sintering operations as possible sources, because both produce significant PbSO₄. The observed high level of PbS further discriminates against the blast furnace operation, since samples of this effluent contained no PbS. During the period when this sample was collected, the plant was on strike, and sintering operations had ceased. Ore trucking operations took place on one sampling day. The similarity with pattern 2224 reinforces the interpretation of the ore concentrate as the sample source. The slight indication of PbSO₄ is explainable as a residue from prior processing operations that was stirred up in the dust of the plant yard by the truck traffic.

Conditions prevailing with the resumption of smelting operations are observable in a second typical pattern containing PbSO₄. Sample 5117 was collected during a 24-hour period of sintering, blast furnace, and ore trucking operations. The relative intensities of quartz, PbSO₄, and dolomite are among the highest found in this study. The ZnS is rather

Figure 4. Ore concentrate pattern.
Figure 5. Typical air sample pattern, #2224.

Manganese is used widely in the steel manufacturing industry as an agent in nullifying the harmful effects of sulfur (National Academy of Sciences) [15] and accounts for 80 percent of the nationwide anthropogenic emissions (U. S. Environmental Protection Agency) [19]. Other sources of lesser importance include cement manufacturing industries and facilities engaged in the manufacturing of dry cell batteries. Although Struempler [17] attributes the presence of aerosol manganese in western Nebraska to airborne dust particles, in cities such as St. Louis it is generally assumed that soil contributions are exceeded by industrial sources [19]. The major anthropogenic source of manganese compounds in the research area is the steel industry, concentrated in the Alton-Wood River and East St. Louis areas.

Of the six distributions shown in figure 6, five show fairly uniform profiles both in average concentrations and in overall shape of the spectra. The large peak occurring between 1-2 micrometers in the Alton area is in close proximity to the second largest blast furnace in the area, which is the suspected source of that maximum. Heindryckx [8] found a similar peak at the same particle diameter near a ferromanganese plant in the industrialized region of Belgium. The uniform, relatively flat profile obtained at the remaining locations is attributable to the background urban manganese aerosol obtained at various sites up- and downwind of industrial facilities. Once again, Pere Marquette is a reference site for a nonaffected location.

4. Discussion

The preceding histograms for the three metals Cd, Pb, and Mn illustrate that a wide range of both concentrations and shapes of particle size distributions is obtained for metals which naturally occur at low levels and which can be affected by localized emission sources. For this reason, it is crucial to accurately assess the methodology involved as it pertains both to precision of measurements and to accuracy in determining size distributions.

Recent work by Dzubay et al. [6], Rao [21], and Hu [9] points out that the material used as an impaction surface has an important bearing on the resultant size distributions obtained. Teflon and aluminum foil surfaces have been shown to suffer from particle bounce, which results in a lowered capture of larger particles on early stages and a subsequent buildup on later stages and the backup filter. The net effect is a shift in the deduced MMD values to a smaller particle size. If the particle bounce phenomenon does occur, the percentage of error in the resultant size distribution will be greater if the bulk of particles present are in the upper size categories and, conversely, lower if they naturally occur in the lower size ranges, less than 5.0 micrometers. Additionally, an increase in pump flow rate will enhance particle bounce, resulting in even larger errors. Recommendations to alleviate this problem include: 1) coating of impaction surfaces with a thin layer of high-vacuum silicone grease (Dzubay et al. [6]; Rao [21]), or 2) utilizing glass fiber filters as suggested by Hu [9]. The application of silicone grease to surfaces analyzed for trace metals is not recommended, due to the introduction of contaminants in the coating process and also the incompatibility with the analytical method, specifically an acid dissolution followed by atomic absorption analysis. Consequently, for studies of this nature, glass fiber or cellulose filters with a low trace metal content should be employed while operating samplers at a low flow rate to minimize particle bounce effects.

An example of possible shifting of MMD values can be seen when comparing data from the METROMEX study to average MMD values for the St. Louis area obtained in the NASN study (Lee et al.) [12]. Cadmium and lead MMD values obtained in the St. Louis NASN study yielded average values of 1.54 and 0.70 micrometers, respectively. Averages from METROMEX for three summers revealed values of 1.34 and 0.63 micrometers, remarkably close to the NASN values. Manganese values, on the other hand, averaged 2.30 micrometers in the NASN study and 4.12 in this study. This indicates a possible downward shift in the manganese spectra of the NASN study, which is attributable to the use of aluminum foils as impaction surfaces and a flow rate of 154 l min⁻¹ instead of 28 l min⁻¹ (Lee and Goranson) [12]. A particle bounce phenomenon could explain the differences for the manganese results.
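The MMD values being compared here are obtained, in the usual way for cascade impactor data, by interpolating the cumulative mass distribution across the stage cutoff diameters. A minimal sketch follows; the bin edges and masses are hypothetical, and simple log-linear interpolation is assumed rather than a full log-probability fit.

```python
import math

# Sketch: mass median diameter (MMD) from cascade impactor data.  Mass in each
# size bin is accumulated, and the diameter at which the cumulative mass fraction
# crosses 50 percent is found by log-linear interpolation.
# Bin edges and masses below are hypothetical, not values from this study.

def mass_median_diameter(bin_edges_um, bin_mass_ug):
    """bin_edges_um: ascending upper edge of each size bin (um);
    bin_mass_ug: mass collected in each bin (first bin includes the backup filter)."""
    total = sum(bin_mass_ug)
    cum_frac, prev_frac, prev_d = 0.0, 0.0, bin_edges_um[0] / 2.0  # crude lower bound
    for d, m in zip(bin_edges_um, bin_mass_ug):
        cum_frac += m / total
        if cum_frac >= 0.5:
            # interpolate in log-diameter between the previous and current edge
            t = (0.5 - prev_frac) / (cum_frac - prev_frac)
            return math.exp(math.log(prev_d) + t * (math.log(d) - math.log(prev_d)))
        prev_frac, prev_d = cum_frac, d
    return bin_edges_um[-1]

# Hypothetical manganese-like distribution weighted toward larger particles:
edges = [0.5, 1.1, 2.0, 3.3, 7.0, 30.0]        # um
mass  = [4.0, 6.0, 10.0, 14.0, 40.0, 26.0]     # ug
print(f"MMD = {mass_median_diameter(edges, mass):.1f} um")
```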
5. Conclusion

The overall precision obtainable in atmospheric sampling of this type is best illustrated, as in figure 7, by plotting the results of two adjacent samplers operated simultaneously for an equal duration of time and comparing the values for different elements. In so doing, it becomes apparent that reproducibilities will vary according to three factors: 1) the greater the concentration of an element naturally present, the lower the standard deviation of replicate determinations is likely to be; 2) analyses made by flame atomic absorption will have greater precision than non-flame techniques, due to the inherent principles involved in each method; and 3) elements with known high volatilities may exhibit losses during the acid dissolution phase if excessive splattering occurs upon reagent addition.

Applying these factors to figure 7, the lowest standard deviations are obtained for the elements calcium, potassium, and zinc, all determined by flame atomic absorption. Calcium, present at the highest concentration, clearly shows the strongest correlation between adjacent samplers. The lower correlations shown for lead, manganese, and cadmium are attributable to: 1) the low concentrations involved, which result in minor contaminants contributing a large relative error, and 2) the inherently lower precision obtainable with flameless atomic absorption. These plots emphasize the meticulous care that must be taken during all aspects of sampling, processing, and analysis to minimize the introduction of contaminants from both reagents and the laboratory environment. Provided these parameters are satisfied, direct flameless atomic absorption determinations of Andersen cascade impactor samples are achievable after an acid dissolution of cellulose filters. The precision of replicate atomic absorption determinations averages ±5 percent, with the precision of the entire methodology largely element dependent.

Figure 7. Comparison of adjacent samplers at Pere Marquette.

I am grateful to F. F. McGurk for her help in completing many of the analyses. This work was performed under the general direction of R. G. Semonin and D. F. Gatz, whose careful review of the manuscript was appreciated. The research was supported by the U. S. Energy Research and Development Administration, Contract No. AT(11-1)-1199.

References

[1] Barnard, W. M. and Fishman, M. J., Evaluation of the use of the heated graphite atomizer for the routine determination of trace metals in water, Atomic Absorption Newsletter 12(5), 118-124 (1973).
[2] Campbell, W. J., Metals in the wastes we burn?, Envir. Sci. and Technol. 10(5), 436-439 (1976).
[3] Changnon, S. A., Huff, F. A., and Semonin, R. G., METROMEX: An investigation of inadvertent weather modification, Bulletin American Meteorological Society 52(10), 958-967 (1971).
[4] Davison, R. L., Natusch, D. F. S., Wallace, J. R., and Evans, C. A., Jr., Trace elements in fly ash: dependence of concentration on particle size, Envir. Sci. and Technol. 8(13), 1107-1113 (1974).
[5] Dulka, J. J. and Risby, T. H., Ultratrace metals in some environmental and biological systems, Anal. Chem. 48(8), 640A-653A (1976).
[6] Dzubay, T. G., Hines, L. E., and Stevens, R. K., Particle bounce errors in cascade impactors, Atmospheric Environment 10, 229-234 (1976).
[7] Ediger, R. D., Atomic absorption analysis with the graphite furnace using matrix modification, Atomic Absorption Newsletter 14(5), 127-130 (1975).
[8] Heindryckx, R., Comparison of the mass-size functions of the elements in the aerosols of the Gent industrial district with data from other areas: some physico-chemical implications, Atmospheric Environment 10, 65-71 (1976).
[9] Hu, J. N. H., An improved impactor for aerosol studies - modified Andersen sampler, Environ. Sci. and Technol. 5(3), 251-253 (1971).
[10] Johnstone, H. F. and Coughanowr, D. R., Absorption of sulfur dioxide from air, Industrial and Engineering Chemistry 50(8), 1169-1172 (1958).
[11] Lee, R. E. and Goranson, S., National air surveillance cascade impactor network. I. Size distribution measurements of suspended particulate matter in air, Envir. Sci. and Technol. 6(12), 1019-1024 (1972).
[12] Lee, R. E., Goranson, S. S., Enrione, R. E., and Morgan, G. B., National air surveillance cascade impactor network. II. Size distribution measurements of trace metal components, Envir. Sci. and Technol. 6(12), 1025-1030 (1972).
[13] Lee, R. E. and von Lehmden, D. J., Trace metal pollution in the environment, J. of the Air Pollution Control Assoc. 23(10), 853-857 (1973).
[14] National Academy of Sciences, Lead: Airborne Lead in Perspective, 330 pp. (National Academy of Sciences Printing Office, Washington, DC, 1972).
[15] National Academy of Sciences, Manganese, 191 pp. (National Academy of Sciences Printing Office, Washington, DC, 1973).
[16] Segar, D. A. and Gonzalez, J. G., Evaluation of atomic absorption with a heated graphite atomizer for the direct determination of trace transition metals in sea water, Anal. Chim. Acta 58, 7-14 (1972).
[17] Struempler, A. W., Trace element composition in atmospheric particulates during 1973 and the summer of 1974 at Chadron, Neb., Envir. Sci. and Technol. 9(13), 1164-1167 (1975).
[18] U. S. Environmental Protection Agency, Scientific and Technical Assessment Report on Cadmium (Office of Research and Development, Washington, DC, 1975).
[19] U. S. Environmental Protection Agency, Scientific and Technical Assessment Report on Manganese (Office of Research and Development, Washington, DC, 1975).
[20] von Lehmden, D. J., Manganese in fly ash, U. S. Environmental Protection Agency, Research Triangle Park, NC, unpublished data, 1973.
[21] Rao, A. K., An experimental study of inertial impactors, Ph.D. Thesis, Dept. of Mechanical Engineering, University of Minnesota (1975).

Part VIII. AIR POLLUTION MEASUREMENT

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

AMBIENT AIR QUALITY MONITORING

George B. Morgan
Environmental Monitoring and Support Laboratory
U. S. Environmental Protection Agency
Las Vegas, Nevada 89114, USA

1. Introduction

The monitoring of ambient air quality is of paramount importance for determining ambient levels of pollutants so that they can be related to adverse effects on man and his environment.
The monitoring of air quality is also necessary to establish a quantitative relationship between air quality and pollutant sources, to determine the efficacy of control measures, and to determine compliance with standards.

2. Reasons for Monitoring - Discussion

Air quality monitoring is defined as the systematic collection of physical, chemical, biological, and related data pertaining to ambient air quality, pollution sources, meteorological parameters, and other factors that influence, or are influenced by, ambient air quality. This systematic approach is very complex. Without a systematic approach based on stated objectives and guidelines, proper ambient air quality assessment cannot be accomplished. Failure to recognize and to take into account the complexities associated with ambient air quality has led to the present situation, in which we cannot document the relationship between sources and exposure or between exposure and effects. Major questions which must be considered in an air quality monitoring system are as follows:

1. What are the objectives of the air monitoring network?
2. What is the area for which measurements are required?
3. What is the proper mix and location of fixed stations, moveable stations, and airborne stations, and what is the role of modeling in achieving the objectives?
4. What level or degree of error is acceptable?
5. What is the importance of exposure monitoring as related to the pollutants that are air oriented or to those that occur in other media, including the food chain?
6. What related meteorological data must be collected with air quality data?
7. What is the importance of sample averaging times to the network design?
8. What are the effects of physical and chemical transformations on the sampling location and network design, for example, for monitoring ozone or sulfates?
9. What quality assurance program is necessary to assure that data are representative and legally and scientifically defensible?

Once an air monitoring network has been properly designed, it can furnish data to be used for one or more of the following purposes:

1. to establish or revise standards;
2. to demonstrate that adequate progress is being made toward attainment of the standards;
3. to demonstrate that compliance with standards is maintained;
4. to furnish information during high pollution episodes or accidental discharges and provide guidance on the choice of subsequent actions;
5. to define air pollution problems for periodic determination of priorities for resource allocations, and to develop control programs.

Present air monitoring must be redesigned to provide an effective capability to detect and quantify ambient levels of toxic or non-regulated pollutants which may pose a threat to human health and welfare. In order to accomplish this, a priority list of toxic pollutants must be assembled. For those pollutants which transcend the media, an air monitoring program must be developed in conjunction with other appropriate monitoring programs so that total exposure to important receptors may be quantified. Pollutant priorities for monitoring and control must result from demonstrated effects, from the probability that projected benefits will be commensurate with resources expended, and, last but not least, from public concern. The following factors should be considered when establishing pollutant priorities:

1. Severity of known or suspected effects on human health, including neurotoxic, mutagenic, teratogenic and carcinogenic effects.
2. Severity of effects on soil, plants, animals, and structures.
3. Persistence of the pollutant in the environment, and accumulation in man or his food chain.
4. Conversion into more toxic substances (e.g., SO₂ to sulfuric acid).
5. Ubiquity and environmental levels, which can be estimated from an emissions inventory.
6. Size and type of human population exposed.
7. Availability of adequate control technology.
8. Availability of adequate methods for measuring the pollutant in the environment.
9. Legal mandates.

3. Types of Monitoring

Air monitoring activities may be divided into the following categories: (a) permanent fixed-site (trend) monitoring; (b) ambient source-linked monitoring; (c) exposure monitoring; and (d) biological monitoring.

A. Permanent fixed-site (trend) monitoring

Permanent fixed-site (trend) monitoring is necessary to judge the attainment and maintenance of the present Ambient Air Quality Standards through the State Implementation Plans (SIP's). Included in the SIP's are air quality maintenance, land use planning, transportation controls, prevention of significant deterioration, episode prediction and control, etc. Monitoring of trends at permanent sites involves the measurement of pollutants and their effects over extended periods of time. These data are primarily for the evaluation of conditions over time, whether at the source, in industrial areas, urban areas, rural areas or geophysical baseline areas.

B. Ambient source-linked monitoring

Ambient source-linked monitoring involves relating ambient air quality to sources through modeling, considering other pertinent supporting data such as meteorology, demography, and topography. Computer models are becoming extensively used to provide a mathematical relationship between air emission sources and resulting air quality. These models are normally validated on the basis of air quality measurements from a limited number of fixed and mobile monitoring stations and time periods, and can then be used to extrapolate or predict the variations of pollutant concentrations at locations and times which are not measured directly. Once models are validated, they provide a basic tool for assessing the effectiveness of abatement strategies for immediate or long-term problems. They are also the only method for evaluating the impact of proposed new sources. Much of research monitoring falls into the ambient source-linked category. Projects that typically require significant research monitoring support include studies of the movement, distribution, fate, pathways and effects of a specific pollutant entering a given environmental medium, and assessment of the effectiveness of experimental control systems or procedures.

C. Exposure monitoring

Exposure monitoring should give a true picture of the impact of air pollution control on the reduction of exposures and adverse effects on human health and welfare. An exposure monitoring system which can provide such information is the preferred way of reporting progress toward cleaning the air and thus carrying out the intent of the Clean Air Act. Exposure monitoring involves measurement of pollutant concentrations at locations where exposure may occur. The primary requirements for such systems are:

1. Identification of critical receptors at risk and the pollutants which produce the adverse effects.
2. Development of siting requirements and numbers of sites, including mobile and remote measurement systems, based on the time and variability of pollutant concentrations and the relevant receptor population.
3. Integration of networks for estimating receptor exposure with networks for other purposes, e.g., standards attainment and maintenance.

D. Biological monitoring

Biological monitoring can be an important part of an ambient air monitoring program. For many pollutants the major pathway from source to receptor is through the air. The monitoring of tissues and biological fluids collected from receptor plants, wildlife, domestic animals and humans can indicate levels, patterns and trends of atmospheric pollutants or their metabolites. Incidences of pollution can often be detected by unusual changes or mortality in animal populations caused by pollutants such as DDT, mercury, cadmium, lead, fluorides, and arsenic. Valuable information on lead, cadmium, mercury and arsenic exposure may be furnished by the analysis of selected human tissues or fluids such as hair, teeth, fingernails and blood.

4. Components of an Ambient Air Quality Monitoring Network

To provide valid data on pollutant concentrations, an ambient air quality monitoring network must have the following essential components: a central laboratory facility, a manual sampling network, an automatic monitoring network, and support facilities. The central laboratory facility is required for any air monitoring network; no matter how sophisticated the facilities are in the field, a central laboratory is necessary.

The manual network is the most efficient first step in establishing a total network. This system can provide basic data upon which a comprehensive network can be designed. The manual network will provide 24-hour integrated measurements of total suspended particulates, SO₂, and NO₂. A most important feature of the manual network is that valid data can be rapidly obtained with relatively simple and inexpensive field sampling equipment.

The automatic network will provide continuous in-situ measurements of pollutants. This is really the only feasible way to obtain peak and diurnal concentrations of pollutants. In the long run, the automatic network is the most accurate and economical means for measuring the usual spectrum of pollutants of concern. This network will require some major types of support equipment. A fully equipped mobile laboratory is required for calibration of field stations, for quality control functions, for pollutant profile studies, and for interrelating monitoring systems. Meteorological support equipment is needed at all automatic stations. An automatic network for multipollutant monitoring represents an extensive investment and merits special consideration regarding site location, station design, instrument selection, instrument calibration, and data acquisition and handling.

Valid pollutant concentration measurements cannot be made without adequate instrumentation; thus the selection of instruments for individual pollutants must be given due attention. Instruments should meet certain guaranteed performance specifications for accuracy, sensitivity, zero and span drift, freedom from interferences, response time, maintenance requirements, etc. Fortunately, considerable information and expertise is available based on previous instrument evaluation programs. (The U.S. EPA equivalency document, 40 CFR 53, February 1975, is one such source.) Such information may be used to provide guidelines on the instrument performance to be expected for a given pollutant and for performance specifications to be included in procurement contracts.
An automatic station providing continuous measurements of several air pollutants generates a copious amount of data. The acquisition, handling, storage, retrieval and utilization of such data require a large investment in data systems and manpower. Automatic data acquisition systems are necessary for handling such large amounts of data.

5. Quality Control

A quality control program involves taking all of the necessary steps to assure that the monitoring data, and the supporting information upon which decisions are based, are legally and scientifically defensible. A quality control system which utilizes reference methods is necessary if data generated by one network or one laboratory at a given time are to be comparable to similar data produced elsewhere at another time. It is only through such a program that the accuracy and precision of the data are known. It is necessary that all operational phases of a monitoring network system be considered in a quality control program; for example, sampling, sample handling and storage, sample preparation, sample analysis, instrument performance, data calculations, data validation, data reporting, and data evaluation. In addition, any such quality control program must be applied in field operations to the selection of the sampling site, verification that the sampling site is adequately representative of the area, size of sample collected, sampling rate, and frequency of sampling. All of these parameters must be specified so that the resulting data fulfill the goals and objectives of the monitoring network system. Guidelines and operational manuals for implementing and operating a routine quality control program must be available for the following areas: laboratory construction, laboratory operation, supplies and equipment, personnel, training, data acquisition and analysis, interlaboratory calibration, intralaboratory quality control, and report preparation.
As newer techniques and hardware become more available and enhance our ability to monitor our environment, we will be faced with the question of "what is the most cost-effective combination of fixed, mobile contact and remote sensors for a specific monitoring problem?" Another area where advances are yet to come is in the development of monitoring methods for assessing exposure-dose relationships. In the past, environmental monitoring has been carried out in response to an already existing hazardous condition. Future monitoring systems must be able to detect potential problems and monitor the appropriate parameters before they reach crisis proportion. Some possibilities which might be explored are the use of biological exposure indicators as trend monitors to predict changes, and the development of personal exposure meters, such as biochemical measurements which integrate the total exposure of an individual to a pollutant or class of pollutants. Another example is the fluorescent film technique for ozone exposure. When we achieve accurate, valid and broadly applied exposure monitoring, we then shall have made a major step toward achieving the ability to truly and rationally evaluate the management of our air resources. 385 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). APPLICATIONS OF REMOTE MONITORING TECHNIQUES IN AIR ENFORCEMENT PROGRAMS Francis J. Biros Technical Support Branch Division of Stationary Source Enforcement Environmental Protection Agency 401 M Street, SW Washington, DC 20460, USA 1. Introduction The Clean Air Act, as amended, June 1974, (42 U.S.C. 1857 et. seq.) authorizes the development of effective and practical processes, methods and prototype devices for the control of air pollution. This authority is granted to EPA to facilitate the achievement of the purposes of the Act which include the protection and enhancement of the quality of the nation's air resources so as to promote the public health and welfare and the productive capacity of its population. "Processes, methods and devices" have been developed by EPA and private industry to inter alia measure atmospheric emissions of pollutants (criteria and non-criteria) from stationary sources as well as to measure ambient concentrations of such pollutants. These pollutant measuring methods have been based on many principles ranging from manual to automatic, extractive to in-situ, and remote to proximate. The purpose of this discussion is to evaluate the utility of remote monitoring techniques (exclusive of visible emissions observations by trained observers) in stationary source enforcement programs based on a consideration of the possible program applications and their scope. In this process, advantages and disadvantages of remote monitoring technology will be defined and suggestions will be presented relating to possible future activities in the development and application of these techniques. Major elements of a control agency's program designed to improve environmental quality include the establishment of appropriate standards, identification and inventory of those sources to which the standards apply, notification of affected sources, conduct of surveil- lance activities and, finally, administrative or judicial enforcement of standards, if necessary, to ensure compliance. 2. 
Discussion Generally, the objectives of stationary source air enforcement monitoring are to locate affected sources; determine their compliance status; where necessary, develop compliance schedules; monitor increments in compliance schedules; and monitor to ensure final compliance. A special objective associated with case development monitoring is the establishment of evidence for administrative or judicial proceedings. Most of these ob- jectives can be achieved only by use of data obtained in field surveillance procedures employing pollutant emission measurement techniques such as remote monitoring. Remote techniques may be used therefore, in monitoring activities directly related to enforcement of Federal and state/local agency regulations including: (a) Determination of stationary source compliance status, i.e., whether a source (presently or previously on a compliance schedule) is complying with emission limitations; (b) Case development activities, i.e., collection of data specifically to support administrative or judicial actions; and 387 (c) Enforcement surveillance activities, i.e., screening studies to evaluate unknown status of sources. There are many enforcement related activities in which remote monitoring techniques may prove to be valuable; these include, but are not limited to, the following: (a) Evaluation of emission requirements for major sources when developing or revising regulations ; (b) Verification of the adequacy of control strategies to meet NAAQS; (c) Development of representative emission factors; and (d) Development and validation of long and short-term air quality modeling procedures. Control agency enforcement monitoring activities potentially encompass literally hundreds of thousands of sources. Thus the need for accurate, precise, cost-effective monitoring methods such as remote techniques is apparent. For example, enforcement activities under §110 of the Clean Air Act involve approxi- mately 200,000 stationary source facilities. To date, emphasis has been placed on approxi- mately 22,000 major sources which represent 85 percent of total stationary source pollution. Under §111, standards of performance for new stationary sources, Federal standards (some delegated to certain states) have been promulgated for 24 source categories. To date, approximately 350 sources have been monitored and it can be expected that at least 1,500 sources per year will come under the ambit of these regulations. Finally, §112 provides for emission standards for hazardous air pollutants. Under promulgated rules, approximately 800 fixed sources and 3,300/year transitory sources require enforcement monitoring on a routine basis. Remote monitoring techniques offer a number of operational and functional advantages which are of interest to air enforcement and regulatory programs. Cost-Effectiveness - Although the initial capital costs are higher for remote instru- ments than for other types of monitors, the operational costs are less because of the mobility of the instruments which allows coverage of more sources and areas in a shorter period of time than manual test equipment. Remote monitors are also less manpower intensive inasmuch as some may be operated by a single individual in the field, whereas manual instruments may require a team of three to five engineers. Unannounced and Non-interference Monitoring - Remote Monitoring techniques provide a most effective tool for compliance monitoring, even at night with active systems, without entry into the premises of the source. 
In addition, remote monitoring interferes with normal plant operations to a lesser degree than monitoring with manual methods. Rapid Response - In emergency episodes involving environmental pollutants, the highly mobile and flexible remote monitors can be used to assess the document the extent of the emergency more rapidly than stationary in-situ monitors. Enforcement countermeasures can, therefore, be instituted more rapidly. A number of disadvantages to the use of remote monitors can be identified. These include: • Inability to Measure Mass Emission Rates - Most regulations are written in the form of mass emission limitations. Therefore, documentation of violation requires determina- tion of mass emissions rates for a particular facility. Most remote monitors will provide measurements in relative concentration terms. Conversion to mass emission rates requires a second measurement, in-situ or remote, using instrumentation such as a laser Doppler velocimeter. 388 High Initial Costs - Remote monitoring instruments are generally more costly than the extractive or in-situ system. Similarly, active remote systems are more expensive than passive systems. It can be assumed, however, that cost of instrumentation would be reduced as more are produced and used in the field. Limited Application Under Certain Conditions - Adverse weather conditions such as fog, heavy rain, or extremely high particulate content in the atmosphere can affect the measurement capability of remote monitoring instruments. • Complicated Calibration Procedures - Remote techniques are more difficult to calibrate than extractive or in-situ devices because of the atmospheric background influence. Large calibration cells to simulate long atmospheric paths or test ranges with cali- brated stack emission generators may be required. 3. Conclusion Because of their significant advantages as well as the general acceptance by the scientific community of the validity and adequacy of the technical principles underlying remote monitoring instruments, however, the enforcement and regulation development use and application of remote monitoring can be expected to increase in the future. More remote instruments will need to be made operational and provided to field enforcement personnel for compliance monitoring purposes. In addition, successful use of these techniques in an enforcement case will be an essential initial step to administrative and judicial acceptance of the remote instrument technique as an enforcement monitoring tool. 389 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). INDIVIDUAL AIR POLLUTION MONITORS S. C. Morris Brookhaven National Laboratory Upton, New York 11973, USA and M. Granger Morgan Carnegie Mellon University Pittsburgh, Pennsylvania 15213, USA 1 . Introduction In the field of community air pollution, emphasis has been placed on general measures of air quality based on fixed sampling stations. Traditionally, epidemiological studies have used data from one or several fixed stations as an estimate of the exposure received by a neighborhood or by an entire city. The dearth of appropriate data makes estimation of the error introduced by using ambient air quality as a surrogate for population exposure difficult. Rough calculations place it between a factor of two and a factor of seven. 
As a further complication, the magnitude and even the direction of the error varies by pollutant and is not likely to be consistent from one city to another. As the averaging time for exposure decreases, the value of fixed monitor data as a surrogate for population exposure is likely to decrease. Nonetheless, studies using fixed station surrogates have demonstrated a correlation between air pollutants and health effects sufficient to form the basis for a public policy decision to begin air pollution controls.

2. Discussion

There is little consensus among epidemiologists, however, on the specific quantitative relationship between air pollution exposure and health effects. Consider the experimental difficulties: air pollution varies with time and place, people move around, and the population receives not a single exposure but a range of exposures with different concentration-time histories. In addition, individual susceptibility varies with age, health, previous exposure history and genetic factors. The weakest link in attempts to determine exposure-response relationships for air pollution is accurate, quantitative estimation of population exposure. An approach to improving this situation is the use of individual air pollution monitors.

In the workplace environment, exposure to air pollutants is regulated in terms of individual doses received by workers. Because of this different legal and regulatory environment, there has been considerable emphasis on developing individual monitors for use in the occupational field. Both the National Institute for Occupational Safety and Health and the Bureau of Mines have successful programs of research support in instrument development. A large number of commercial firms market individual monitoring instruments for occupational applications. Few of the instruments that have been developed under these programs are directly applicable to use in ambient air. Difficulties involve such factors as sensitivity, dynamic range, and running time. With the exception of a modest and sporadic program run by the Environmental Protection Agency, no program of Federal support currently exists for developing individual air pollution monitoring instruments for ambient air. The current absence of a significant market for such instruments inhibits their development by private firms. Some informal efforts have been made to use available individual monitors developed for the workplace to measure public exposures, and a major study using a Bureau of Mines type instrument has been initiated by a group at the Harvard School of Public Health with support from the National Institute of Environmental Health Sciences [1].

In July of 1975 the Biomedical and Environmental Assessment Group at Brookhaven National Laboratory convened a workshop of health effects and instrumentation specialists to consider the role of individual air pollution monitors in air pollution health effects studies, and to develop an assessment of research needs in this field. Two early conclusions of the workshop group were 1) "the importance of population exposure estimates in air pollution epidemiology makes it imperative that future epidemiological studies include exposure estimates more representative of what people actually breathe," and 2) "the use of individual air pollution monitors is a necessary factor in the design or performance of definitive studies of the health effects of air pollution" [2].
3. Conclusion

The Brookhaven workshop produced a first order ranking, by promise, of a number of candidate instrumentation technologies. For each candidate instrumentation technology, a first order estimate of the research needs was developed. It seems clear that a number of acceptable prototype devices could be produced within three years. The uncertainties about the ultimate performance of the various candidate technologies are still sufficiently large, however, to preclude an immediate focusing on one or a few approaches.

Work supported by the U.S. Energy Research and Development Administration, Division of Biomedical and Environmental Research.

References

[1] Speizer, F., An Epidemiological Approach to Health Effects of Air Pollutants, Proceedings of the Fourth Symposium on Statistics and the Environment, National Academy of Sciences, Washington, DC, 1976, in press.

[2] Morgan, M. G., and Morris, S. C., Individual Air Pollution Monitors: An Assessment of National Research Needs, Brookhaven National Laboratory, BNL 50482, Upton, NY, 1976.

Figures in brackets indicate the literature references at the end of this paper.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

INTERCALIBRATION OF NITRIC OXIDE/NITROGEN DIOXIDE/OZONE MONITORS

D. H. Stedman and R. B. Harvey
Department of Atmospheric and Oceanic Science
The University of Michigan
Ann Arbor, Michigan 48104, USA

1. Introduction

In the process of a measurement program we have tested the photostationary state relationship:

J(NO2)[NO2] = k3[NO][O3]    (1)

where J(NO2) is the rate of photolysis of NO2 and k3 is the known rate of reaction between NO and O3. These tests have been carried out successfully in urban polluted environments and in rural air masses. The success of these tests indicates that the above relationship can be used to non-invasively test air monitoring data as the measurement takes place. A number of new techniques developed are described below.

2. Discussion

A. Optimized J(NO2) detector

A schematic diagram of the J(NO2) detector is shown in figure 1. A solution of the error equations gave operating points: initial [NO2] = 26 ppm, flow time = 0.7 second. An automated switching valve for zeroing is included. Figure 2 shows a comparison between the measured J(NO2) and an absolutely calibrated uv photometer. The solid curve is the calculated response characteristic based on absolute photometry.

Figure 1. Schematic diagram of the J(NO2) detector.

Figure 2. Comparison between the measured J(NO2) and an absolutely calibrated uv photometer.

B. Mobile NO/NOx detector

We have constructed, in a mobile unit, an NO/NOx chemiluminescent monitor capable of measuring accurately on scales as sensitive as 1 ppb full scale with a ten second time response. This detector is the one whose sensitivity and accuracy were to be tested using the photostationary state equation (eq. 1) and other methods.

C. Flow independent ozone calibration

Figure 3 shows schematically a method for measuring the ozone content of a photochemical ozone source by means of its stoichiometric reaction with NO.
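As a purely illustrative sketch of the arithmetic involved (the readings below are invented and are not from the authors' apparatus), the titration rests on the 1:1 stoichiometry of NO + O3 -> NO2 + O2, so the ozone delivered to the reaction volume appears directly as a decrement in the NO reading:

```python
# Gas-phase titration: NO + O3 -> NO2 + O2 (1:1 stoichiometry).
# With NO in excess, every O3 molecule reaching the reaction volume
# removes one NO molecule, so the O3 mixing ratio seen by the detector
# equals the drop in the NO reading; no flow measurement is needed,
# which is why the calibration is flow independent.
# The numbers below are invented for illustration only.

def o3_from_no_decrement(no_without_o3_ppb, no_with_o3_ppb):
    """O3 mixing ratio (ppb) in the sampled mixture, from the NO drop."""
    if no_with_o3_ppb <= 0.0:
        raise ValueError("NO must remain in excess for a valid titration")
    return no_without_o3_ppb - no_with_o3_ppb

# Hypothetical readings from the NO channel of the chemiluminescent
# detector: 412 ppb with the ozone source off, 268 ppb with it on.
print(o3_from_no_decrement(412.0, 268.0), "ppb O3")   # -> 144.0 ppb O3
```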
Strictly, this method intercalibrates the NO and O3 detectors such that any systematic errors are equal and of the same sign. Incidentally, it also gives a calibration of the NOx converter for pure NO2 systems. Ozone is also sometimes calibrated by photometry over a 2 m path.

Figure 3. Gas phase titration apparatus.

D. Rapid response permeation tube calibrator

We also calibrate the NO/NOx detector using NO2 permeation wafers. The success of this calibration has been much improved by a rapid in-situ calibration of the permeation rate using a pressure technique. Figure 4 shows a schematic of the method and figure 5 shows traces taken in 100 minutes and in 10 minutes of the flow calibration. Notice that apart from increased noise the 10 minute calibration is satisfactory at a permeation rate as low as 25 ng/min.

Figure 4. Permeation tube calibration and flow dilution system.

Figure 5. Traces taken in 100 minutes and 10 minutes of the flow calibration.

The overall result of the above techniques is a redundant calibration. Table 1 shows the species independently calibrated.

Table 1. Calibration methods for NO/NO2/O3.

Permeation tube               NO2
Standard tank                 NOx
Flow independent calibrator   O3, NO
Photometry                    O3

We thus have five measured independent parameters to fix three variables. This should be enough, but instrument malfunction in the field can cause any of these data to be in error. Further, the above calibrations are invasive in that their use ties up the instrument away from its measurement role. Thus we used our detectors as calibrated above to test eq. (1). Under all conditions of reasonable fetch of the impinging air mass the equation was found to hold within the limits of data measurement. Table 2 lists conditions under which departures have been observed due to improper fetch.

Table 2. Departures from the photostationary state will be observed:

a) When the impinging air is in a region of heterogeneous J, i.e., the detector is in patchy shade or within ~60 s of flow time from a heavily shaded area.

b) When a source of NOx (road, heater vent, etc.) is within 60 s of air flow.

c) If an inlet system residence time is greater than ~2 s.

d) If a detector has a response time long compared to typical fluctuations (<60 s in a clean area).

Thus, for a properly sited monitoring station in an open area, the equation can always be used. Note that at night either [NO] or [O3] must be zero, as is observed.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A REACTIVE GAS GENERATOR

Wing Tsang and James A. Walker
Institute for Materials Research
National Bureau of Standards
Washington, DC 20234

1. Introduction

In an earlier paper [1] we have described the construction and operation of a reactive gas generator. In particular, we have demonstrated its utility in the continuous and accurate generation of low levels of formaldehyde, acetaldehyde, and acrolein. As mentioned in the earlier report, this is actually a general purpose instrument with capability of generating a wide variety of thermally stable reactive gases.
Possible uses include calibration of analytical instruments, the evaluation of analytical methodology and the carrying out of toxicity investigations. We now validate this claim by using the reactor to generate sulfur dioxide, hydrogen cyanide and hydrogen chloride continuously and at levels that are of interest in pollution and occupational health contexts.

The reactive nature of these gases is well known. This renders unsatisfactory the usual static methods [2] for sample preparation. Obviously, a dynamic method where the required samples can be prepared immediately prior to use can be expected to circumvent some of these problems. This has been the basis of practically all present day approaches to this problem and is best exemplified by the use of sulfur dioxide permeation tubes for calibration purposes. A complete discussion of this and other related methods can be found in a recent text [2].

Our instrument represents a fusion of two of the methods mentioned in the text [2]. It involves the generation of a dilute concentration of a selected large organic molecule in an inert medium through the diffusion cell technique and then the complete pyrolysis of this molecule to produce the reactive gas of interest. The "parent" molecule of choice is one that undergoes pyrolytic decomposition exclusively via the reaction:

parent molecule -> hydrocarbon + reactive compound.

Thus, knowledge of the concentration of the parent molecule gives the concentration of the reactive gas. Furthermore, from the stoichiometry of the reaction one can deduce the quantity of reactive compound from the amount of hydrocarbon present. Calibration of the latter is straightforward. The presence of this internal standard is an added and extremely attractive feature. Finally, we note that this method has infinite dynamic range. In contrast, the concentration of static samples is always fixed.

2. Experimental

Front and top views (with instrument open) of the reactive gas generator can be found in figures 1 and 2. Operation of the instrument involves flowing inert gas (N2, He, etc.) at a pressure of about 1.5 atm and a flow rate of 20-200 cm3/min past the diffusion cell (where the parent molecule diffuses into the stream), through a buffer cell (to remove downstream pressure fluctuations) and into the pyrolyzer where the decomposition reaction is carried out. The newly formed reactive gas is now ready for use.

Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Front view of the reactive gas generator.

Figure 2. Top view (with instrument open) of the reactive gas generator.

The relationship between the concentration [C] of reactive gases generated and the physical parameters of the system is

C = (D A / Q L) ln[P/(P - Pv)]    (1)

where D = diffusion coefficient, A = diffusion tube cross-section, L = diffusion tube length, P = pressure in the diffusion cell, Q = flow rate of carrier gas and Pv = partial pressure of the diffusing vapor. The reactive gas concentration can be changed by suitable adjustments of the physical parameters of the system. As a practical matter these are the flow rate, diffusion tube area and temperature. The latter arises from the exponential dependence of the vapor pressure on temperature, and since we usually operate in the range P >> Pv this is directly reflected in the concentration. In addition there is also a ~3/2-power dependence of the diffusion coefficient on temperature.
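To make the dependence in eq. (1) concrete, the following minimal sketch evaluates it for a set of placeholder parameters; the dimensions and property values below are illustrative assumptions, not the actual NBS cell's parameters, and the result is a mole-fraction concentration.

```python
import math

# Concentration delivered by the diffusion cell, eq. (1):
#   C = (D * A / (Q * L)) * ln(P / (P - Pv))
# For Pv << P this reduces to C ~ D*A*Pv/(Q*L*P), so the vapor pressure
# (and hence the cell temperature) controls the output.
# All parameter values below are illustrative placeholders.

def diffusion_cell_concentration(D, A, L, Q, P, Pv):
    """Mole-fraction concentration of the parent vapor in the carrier.

    D  diffusion coefficient of the vapor in the carrier, cm^2/s
    A  diffusion tube cross-section, cm^2
    L  diffusion tube length, cm
    Q  carrier flow rate, cm^3/s (at cell pressure and temperature)
    P  total pressure in the diffusion cell (any unit)
    Pv partial (vapor) pressure of the diffusing compound (same unit)
    """
    return (D * A) / (Q * L) * math.log(P / (P - Pv))

# Illustrative numbers: D = 0.08 cm^2/s, A = 0.08 cm^2, L = 5 cm,
# Q = 100 cm^3/min, P = 1.5 atm, Pv = 0.002 atm.
C = diffusion_cell_concentration(0.08, 0.08, 5.0, 100.0 / 60.0, 1.5, 0.002)
print(f"{C * 1e6:.1f} ppm of parent vapor in the carrier")
```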
Overall this means that temperature is probably the most important variable with respect to the proper operation of the system. The diffusion cell temperature is thus controlled to better than ±0.1 K. The working temperature range of the instrument is between ambient and 200°C. For most of the substances that we have tested this means a dynamic range of about two orders of magnitude. Note that since the flow rate and temperature can be externally set, the reactive gas concentration can be "dialed." It should be noted that since the mass thru-put (μg/min) is independent of flow rate for most applications, the temperature is the more useful variable.

The compounds of choice for the present application are those which decompose unimolecularly into molecular fragments and which do not have significant side reactions. The avoidance of surface or chain induced decomposition processes is of prime importance. These processes are frequently irreproducible and thus completely unsuitable for the present purposes. This is the reason for the use of the gold reactor. An excellent source for possible "parent" molecules is the review by Benson and O'Neal [3]. A more thorough discussion of the factors to be considered in the choice of compounds can be found in an earlier paper [4].

The unimolecular nature of these reactions dictates that the extent of conversion is dependent only on the reaction time (t), temperature (T) and the thermal stability of the appropriate compound. The relation is

ln(Ci/Cf) = t A exp(-E/RT)    (2)

where A and E are Arrhenius parameters for the unimolecular decomposition reaction [3], characteristic of the compound of interest, and Ci and Cf are the initial and final concentrations. In the present context interest is focussed on the region of high conversions, or where Ci/Cf is greater than 50. Thus, Ci is very close to the concentration of reactive gas. Under these conditions the concentration of products is relatively insensitive to changes in reaction variables. Thus these variables need not be strictly controlled, and the operational procedure to locate the minimum necessary temperature is to increase the pyrolysis temperature until the product yield reaches a maximum and/or the parent molecule disappears.

3. Results

The experimental results will be presented in terms of the particular reactive molecule that is generated.

A) HCN: The parent compound for hydrogen cyanide generation is ethyl cyanoformate. The decomposition reaction is

C2H5OCOCN -> C2H4 + HCN + CO2

Gas chromatographic analysis with a Porapak P-S column and flame ionization detection yielded, over the entire range (a factor of 30), an ethylene to hydrogen cyanide area ratio of 6.29 ± 0.09. Although the constancy of the area ratio is suggestive, the lack of any literature values on the relative sensitivity of flame ionization detection to hydrogen cyanide versus hydrocarbons prevents the use of this number to establish the stoichiometry of the reaction. Accordingly, absolute determinations of HCN concentration have been carried out by collecting HCN in NaOH solution and measuring CN- concentration using a specific ion electrode. This is compared with the concentration of ethylene as determined by gas chromatography. Over the entire range, one to one production of ethylene and HCN is achieved, thus confirming the postulated stoichiometry. Maximum thru-put (100 percent conversion) in terms of HCN output/min as a function of cell temperature (30-95°C and 8.8 psig) obeys the following relationship.
log10[μg HCN/min] = (-2316 ± 29)[1/T] + (8.24 ± 0.09)    (3)

This covers a thru-put range of 4-100 μg/min and the standard deviation, or "settability," over this range is 4 percent. The use of the internal standard will improve the accuracy to ±1.5 percent. For flow rates of 20-200 cm3/min (He) the temperature range of the pyrolyzer is 600-680°C. Continuous operation of the generator for 168 hours indicates that the maximum concentration variation is less than 3 percent.

Certain commercial materials and equipment are identified in this paper in order to specify adequately the experimental procedure. In no case does such identification imply recommendation or endorsement by the National Bureau of Standards, nor does it imply that the material or equipment is necessarily the best available for the purpose.

B) SO2: The parent compound for sulfur dioxide generation is trimethylene sulfone. The decomposition reaction is

(CH2)3SO2 -> cyclo-C3H6 (-> propylene) + SO2

In an earlier report Cornell and Tsang [5], using an evaporative method [2], have demonstrated the utility of trimethylene sulfone as a source for SO2. In particular, they show that in the range covered (10-100 ppm and 20-40 cm3/min flow rate) equivalent amounts of C3 hydrocarbons and sulfur dioxide are produced. The present results will demonstrate that this compound is compatible with the new generator. Indeed, it is shown that the use of the diffusion technique offers a considerable increase in flexibility, so that the range covered can easily be extended. All of the present results have been obtained with a diffusion cell head containing a column with a 1/4 in diameter. This is in contrast to the 1/8 in diameter columns used for the other compounds.

Gas chromatography with a Porapak T column and helium ionization detection is used for analysis. Because of an interfering water peak, which eluted on the tail of the SO2 peak, the Porapak T column was preceded by a methyl silicone column. Quantitation of the experimental results is based on bottled samples of propylene (206 ppm, measured) and sulfur dioxide (1000 ppm, stated). It is assumed that in the range covered the detector is linear. Over the entire thru-put range (a factor of 100) the ratio of hydrocarbon to SO2 yield is 1.024 ± 0.04. Together with the earlier results this is a satisfactory demonstration of the postulated stoichiometry. The larger than usual uncertainty is actually an artifact that arose from the existence of a small water impurity which impinged on the tail of the sulfur dioxide peak. This is especially important at low thru-put. Thus there is an actual drift in the measured concentration ratio, ranging from 0.97 at a thru-put of 100 μg/min to 1.09 at the 1 μg/min level. It is suspected that the former number is more likely to be the correct one and that had it been possible to correct for the water peaks a lower uncertainty limit would have been obtained. We have also attempted to measure sulfur dioxide concentration using a sulfur dioxide specific ion electrode following the prescription given by Orion Research. Unfortunately, the results showed wide scatter (±50 percent) and have not been used.

Maximum thru-put (100 percent conversion) in terms of SO2 output/min as a function of temperature obeys the following least squares relationship.

log10[μg SO2/min] = (-3145 ± 41)[1/T] + (9.16 ± 0.1)    (4)

The standard deviation is ±5 percent. These data cover a thru-put range of 1 to 100 μg/min and extend over the temperature range of 70-170°C.
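The temperature-setting relations of eqs. (3) and (4) are easy to exercise numerically. The short sketch below evaluates them at the ends of the reported temperature ranges, taking the reciprocal temperature in kelvin; this approximately reproduces the reported 4-100 and 1-100 μg/min thru-put spans.

```python
# Thru-put relations of the form  log10(ug/min) = B*(1/T) + A,
# with T the diffusion-cell temperature in kelvin (eqs. 3 and 4).
RELATIONS = {
    "HCN (eq. 3)": {"B": -2316.0, "A": 8.24, "range_c": (30.0, 95.0)},
    "SO2 (eq. 4)": {"B": -3145.0, "A": 9.16, "range_c": (70.0, 170.0)},
}

def throughput_ug_per_min(B, A, temp_c):
    """Maximum thru-put (ug/min) at cell temperature temp_c in deg C."""
    return 10.0 ** (B / (temp_c + 273.15) + A)

for label, p in RELATIONS.items():
    lo, hi = p["range_c"]
    print(f'{label}: {throughput_ug_per_min(p["B"], p["A"], lo):.1f} ug/min '
          f'at {lo:.0f} C, {throughput_ug_per_min(p["B"], p["A"], hi):.0f} '
          f'ug/min at {hi:.0f} C')
```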
The generator is operated at 1.5 atm. Over a period of 168 hours of continuous operation the variation in concentration is less than 3 percent. For flow rates of 20-200 cm3/min the pyrolyzer temperature should range from 600-660°C.

C) HCl: The parent compound for HCl generation is cyclohexyl chloride. The decomposition reaction is

C6H11Cl -> C6H10 + HCl
C6H10 -> C2H4 + C4H6

In the temperature range where these pyrolytic studies are carried out, the conversion to ethylene and butadiene is less than 10 percent. Verification of the postulated stoichiometry has been carried out using gas chromatography with a Porapak P-S column and flame ionization detection for the hydrocarbon, and a Cl- specific ion electrode for the quantitation of HCl yields. The ratio of cyclohexene (including the small quantity converted to ethylene and butadiene) to HCl is (1.007 ± 0.03) to 1 and covers a concentration range of a factor of 25. Maximum thru-put in terms of HCl output per minute as a function of cell temperature follows the least squares relationship.

log10[μg HCl/min] = (-2239 ± 32)[1/T] + (7.81 ± 0.09)    (5)

The generator pressure is 10 psig and the temperature range is 30-90°C. In this temperature range the thru-put covers 2 to 50 μg/min. The standard deviation is 4 percent. With flow rates in the 20-200 cm3/min (He) range the pyrolyzer temperature must be set between 580-650°C. Continuous generation over a period of 168 hours shows a concentration variation of less than 3 percent.

This work is supported in part by the Office of Air and Water Measurement and the Center for Fire Research of the National Bureau of Standards.

References

[1] Tsang, W. and Walker, J. A., Anal. Chem., in press.

[2] Nelson, G. O., Controlled Test Atmospheres, Principles and Techniques (Ann Arbor Science Publishers, Ann Arbor, Michigan, 1971).

[3] Benson, S. W., and O'Neal, H. E., Kinetic Data on Gas Phase Unimolecular Reactions, Nat. Stand. Ref. Data Series, Nat. Bur. of Stds. (US), 21, 645 pages (Feb. 1970).

[4] Tsang, W., Journal of Research of the National Bureau of Standards, Vol. 78A, 157 (1974).

[5] Cornell, D., and Tsang, W., Anal. Chem., 46, 933 (1974).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

SEMICONDUCTOR GAS SENSOR EQUATIONS FOR PREDICTING PERFORMANCE CHARACTERISTICS

S. M. Toy
Interscience Laboratory
Palo Alto, California 94086, USA

1. Introduction

Three semiconductor gas sensor equations have been derived so far to characterize and predict the Taguchi (TGS) type gas sensor performance characteristics [1,2,3,4,5].

2. Experimental and Results

The first equation relates the gas sensor resistance to gas concentration and is given in the form

R = Ri C^n    (1)

where R is the semiconductor resistance in K ohm, Ri is a constant in K ohm, C is the gas concentration in parts per million (ppm), and n is the straight line slope of a log-log plot.

The equation parameters found for CO, hydrogen, methanol, and CO in 100 percent relative humidity (100% RH) are listed in Table 1 and are valid under the test conditions investigated and where a linear relationship is found (see figure 1).
Table 1. TGS semiconductor sensor equation parameters (R = Ri C^n)

Gas sample     Concentration, C (ppm)   Ri (K ohm)   n eq     n exp
CO, dry        20 to 200                182.4        -0.98    -0.98
CO, 100% RH    10 to 200                561.0        -0.98    -0.955
Hydrogen       50 to 100                164.2        -0.98    -0.98
Methanol       20 to 100                141.4        -0.98    -0.98

Figures in brackets indicate literature references at the end of this paper.

Figure 1. Current ratio, ΔI(200)/ΔI, vs dry CO concentration.

A comparison between the semiconductor equation values (R eq) and the experimental test data (R exp) is listed in Table 2. For CO in 100 percent RH air (low emission air), there is good agreement between results. The experimental resistance, R, was determined by an approximate expression provided by the vendor, Vm = 0.7 R adj/(R adj + R), where Vm is the measured voltage reading taken across a variable resistance, R adj, adjusted to an appropriate value (i.e., 300 ohm), and R is the calculated semiconductor sensor's resistance. An alternate means to determine R was to measure the sensor's DC current and voltage.

Table 2. Comparison between semiconductor sensor equation values and experimental values for CO in 100% RH in the range 10 to 200 ppm

CO concentration (ppm)   R eq (K ohm)   R exp (K ohm)   R/R200 eq   R/R200 exp
10                       58.0           65.6            17.5        18.9
15                       39.5           38.8            12.7        15.7
22                       27.2           29.3            8.8         11.9
30                       20.0           21.8            6.1         8.9
42                       14.3           15.0            4.6         6.1
45                       13.4           15.0            4.2         5.7
54                       11.2           12.4            3.6         5.0
60                       10.1           8.4             3.6         3.4
200                      3.1            2.5             1.0         1.0

Standard gas mixtures were prepared in 19.9 liter glass bottles (distilled water bottles). Sensors were inserted into the bottles. A few flow tests were run at 1 SCFH (standard cubic feet per hour).

By a reiteration process, the R values were obtainable with n = -0.98 and Ri = 561, which allows one to study the various gas effects on this semiconductor gas sensor in terms of one common equation given as

R = sum over i = 1 to x of Ri Ci^(-0.98)    (1a)

where R1, R2, ..., Rx are the constants found for the individual gases investigated under the test conditions reported. This general equation therefore allows one to predict the semiconductor sensor's performance in both single and mixed gases by examining the equation parameters with a suitable computer program. This program will provide a better technical basis for selecting, evaluating and specifying these types of gas sensors for many safety applications.

The second equation relates gas sensor resistance to gas inlet temperature and is found to be

dR/dT = -Ri^2 C^(2n) (dV/dT)    (2)

where dR/dT is the change in the sensor's resistance, dR, with a change in gas inlet temperature, dT; Ri is a constant in K ohms for the ith gas; C is the gas concentration in ppm; and dV/dT = a is the gas inlet temperature coefficient in volts per degree. It has its origin in differentiating, with respect to temperature T, an approximate experimental equation relating the gas sensor's resistance to voltage [1,2]. By substituting the required CO gas sensor's parameters for humid (100 percent RH) air, the equation reduces to the following expression:
To obtain C*, the concentration, ppm, with the gas inlet temperature effect included solve for C*. The gas inlet temperature error introduced to the sensor's readings as predicted by this equation is in good agreement with the experimental test data. See figure 2. It also is in agreement with the reports in the literature that the gas inlet temperature affect is small at the high CO concentration ranges {i.e., 200 ppm CO). By plotting the predicted CO concentration error in ppm versus normal gas sensor concentration reading in ppm, one observes that the ppm error remains constant at 2.5 ppm beyond about 15 ppm and a peak error occurs at 4 ppm at about 5 ppm. See figure 3 and Table 3. The results provide an explanation for the reported negligible gas inlet temperature error reported at the higher CO concentration ranges, i.e., 200 ppm. For example, at 1000 ppm the gas inlet temperature error is 2.5/1000 or 0.25 percent error. However, at the lower CO concentration i.e., 10 ppm, the percent error is 28 percent. See figure 4 and Table 4. 408 aP o cc CC cc I— < cc UJ Q_ cc 20 30 40 CO METER READING. MV Figure 2. Percent gas inlet temperature error vs CO concentration. 409 '3 N0llVHiN33N03 03 Nl UOUUB Q313IQ3Ud Figure 3. Gas inlet temperature error effect on TGS sensor CO reading in ppm. 410 /u 1 1 1 ~r ~~r i I 1 II 5 60 — o LL. CONCENTRATION CJI CO HUMID CO (100 RH) AT = 24°C - 8 40 — E CC CO CC CC LU S 30 =3 1— < CC LU Q_ LU •" 20 LU CO < CO PERCENT o ^^^^ 1 1 I I I ~~~~1 10 20 30 40 50 CO CONCENTRATION C. ppm AT GAS INLET TEMPERATURE, T, 60 Figure 4. Percent gas inlet temperature vs CO concentration. 411 Table 3 Calculated gas inlet temperature effects on TGS type sensor reading: Humid CO (100% RH and AT = 24°C R = 561C- - 98 AR = -1206. 5C" 1 - 96 (R + AR) C (ppm) K ohm K ohm K ohm 140.2 Log C* a .614 C* (ppm) 2 284.4 -144.2 4.1 5 115.8 -51.46 64.4 .959 9.1 10 58.7 -13.2 45.5 1.11 12.8 15 39.5 -5.97 33.5 1.25 17.8 30 20.0 -1.53 18.47 1.51 32.3 40 15.0 -0.87 14.1 1.63 42.6 50 12.1 -0.56 11.5 1.72 52.5 60 10.1 -0.36 9.73 1.796 62.6 70 8.7 -0.29 8.41 1.86 72.4 80 7.6 -0.22 7.38 1.92 83.2 90 6.8 -0.178 6.6 1.97 93.3 100 6.2 -0.145 6.0 2.01 102.3 200 3.1 -0.037 3.06 2.31 203.7 300 2.1 -0.016 2.08 2.48 302.0 log C* = [log (R + AR)/561 . (-0.98)] Table 4 Calculated gas inlet temperature error effects on TGS type sensor readings in ppm and percent error C (ppm) AC IC* - C) (ppm) Percent error 2 2.1 105.0 5 4.1 82.0 10 2.8 28.0 15 2.8 18.7 30 2.3 7.7 40 2.6 6.5 50 2.5 5.0 60 2.6 4.3 70 2.4 3.4 80 3.2 4.0 90 3.3 3.7 100 2.3 2.3 200 3.7 1.8 300 2.0 1.0 The third equation relates the gas sensor heater voltage to the sensor resistance and is expressed as V = R R' e (3) where R = semiconductor resistance, K ohms, V = applied voltage across semiconductor sensor, volt (equivalent to heater voltage), 412 R = individual sensor constant, e n = the straight line slope on a log-log plot. Three individual semiconductor sensor's resistance vs. voltage were tested in humidi fied air and the test data revealed the above mentioned relationship. The individual sensor constants were determined and n was found to be - 0.15. See figure 5 for details. TTT TT GO T — r GO I I I I I I I : o - n CD cc 03 CC II o o m > O T II II II > CD <= CC 03 CC oo en o oo m ID 03 o GO GO LO p^ CM CO CO *a- ^ ^* ^fc 3fc 4fc ^ X • O 0) £5 o -t-> o CO +-< c 3 li N 0) (0 bo ■ • i-i C c 2 u-, O o 0» Z o E >*- o c o JC o c c 0l N-* o >» Q (A -Q LL. i_ ■V CO O Q. 
NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A STUDY OF VERTICAL DIFFUSION IN THE ATMOSPHERE USING AIRBORNE GAS-CHROMATOGRAPHY AND NUMERICAL MODELLING

R. S. Crabbe
National Aeronautical Establishment
Ottawa, Canada

1. Introduction

Air pollution studies have become a practical matter since passage in 1967 of the U.S. Federal Clean Air Act, e.g., the meteorology program of the EPA [6]. Many of the familiar pollutant sources are located near the ground and emit contaminants more or less continuously throughout the daylight hours. In these cases, the mean crosswind distribution of diffusing material in the plume has been observed to be Gaussian at mesoscale distances [8]. The mean vertical concentration distribution is less accessible to the ground observer and measurements are scarce. In many cases it is assumed to be Gaussian as well (the so-called Gaussian plume model).

The adoption of a Gaussian profile for the vertical concentration distribution is, however, suspect and has recently been invalidated by the studies of Lamb, Chen and Seinfeld [7]. The problem is that the vertical eddy diffusivity is not constant with height but rather increases, owing to the characteristic scaling of the vertical turbulence scales with distance above ground. Lamb and his co-workers have demonstrated that gradient-transfer theory, which models this height behavior, gives a prediction of the mean concentration field downwind of a steady source in the atmospheric boundary layer which is in much closer agreement with their detailed Lagrangian prediction than the Gaussian plume model. It therefore becomes an attractive method of predicting vertical diffusion in the atmosphere over mesoscale distances.

In this paper, the predictions of this hypothesis are compared to measurements in a tracer-gas plume made with an aircraft-mounted gas chromatograph (G.C.) incorporating a short pre-column or adsorber, as described in Elias [4]. Crosswind sampling was performed to measure the mean crosswind-integrated concentration profiles, which are predicted by the two-dimensional diffusion-advection equation of gradient-transfer theory. Good agreement is demonstrated between measurement and theory. On the other hand, the Gaussian plume model fails to predict the data, thus supporting the conclusions of Lamb and his co-workers.

2. Gradient-Transfer Calculations

In gradient-transfer theory, the upward flux of material down the mean crosswind-integrated concentration gradient, ∂χ/∂z, is equal to -K(z) ∂χ/∂z, where K(z) is the vertical eddy diffusivity. The conservation of a passive scalar contaminant is then expressed by:

∂χ/∂t + U(z) ∂χ/∂x = ∂/∂z [K(z) ∂χ/∂z]
∂χ/∂z = 0 at z = 0, zi
χ(0, z) = Q δ(z - zSRC)/U(zSRC)    (1)

Figures in brackets indicate the literature references at the end of this paper.

where U(z) is the mean wind speed and Q is the strength of the source located at a height zSRC. x is measured downwind and z vertically. Unidirectional wind with height is assumed here, appropriate to convective daytime conditions.
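Equation (1), with the reflecting boundaries above, can be integrated numerically by marching downwind. The following minimal sketch (not the author's code) uses a simple explicit finite-difference scheme with idealized placeholder U and K profiles, merely to show the structure of such a calculation; it does not implement the profiles of eqs. (2)-(4) below.

```python
import numpy as np

# March the steady-state form of eq. (1),
#   U(z) dchi/dx = d/dz [ K(z) dchi/dz ],
# downwind with an explicit finite-difference scheme.  Zero-flux
# (reflecting) boundaries are applied at z = 0 and z = z_i.  The wind
# and eddy-diffusivity profiles are idealized placeholders.

z_i, z_src, Q = 1000.0, 1.5, 2.0           # mixing depth (m), source height (m), source (g/s)
nz = 201
z = np.linspace(0.0, z_i, nz)
dz = z[1] - z[0]

U = 5.0 + 0.0 * z                           # uniform 5 m/s wind (placeholder)
K = 1.0 + 10.0 * np.minimum(z, z_i - z) / z_i   # crude tent-shaped K(z), m^2/s

chi = np.zeros(nz)                          # crosswind-integrated concentration
src = np.argmin(abs(z - z_src))
chi[src] = Q / (U[src] * dz)                # discrete delta: Q*delta(z - z_src)/U(z_src)

x_end = 5000.0                              # march out to 5 km
dx = 0.4 * U.min() * dz**2 / (2.0 * K.max())    # explicit stability limit
nx = int(x_end / dx)

for _ in range(nx):
    flux = np.zeros(nz + 1)                 # K dchi/dz at cell faces
    flux[1:-1] = 0.5 * (K[1:] + K[:-1]) * (chi[1:] - chi[:-1]) / dz
    chi += (dx / U) * (flux[1:] - flux[:-1]) / dz   # zero flux at z = 0, z_i

print(f"steps: {nx}, ground-level chi at x = {x_end/1000:.0f} km: "
      f"{chi[0]:.2e} (crosswind-integrated units)")
```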
Equation 1 is valid for vapors and for particulates whose terminal velocity is less than ~0.03 w, where w is the fluctuating vertical wind speed [2,3], if turbulent deposition is neglected. zi denotes the height of the inversion base. The profiles of vertical eddy diffusivity and mean wind speed in eq. 1 are modelled by:

for z0 <= z <= zN:
K(z) = (u* zi/15) tanh(15 k z/zi) {1.35 (z/zN)^n + [a/φ(z/L)][1 - (z/zN)^n]}    (2)

for z0 <= z <= 2zN/3:
k U(z)/u* = integral from z0 to z of φ(z'/L) dz'/z'    (3)

for z > 2zN/3:
k U(z)/u* = integral from z0 to z of φ(z'/L) dz'/z' - (g/(f u*)) integral from 2zN/3 to z of (1/T)(∂T/∂y) dz'    (4)

In these expressions, u* is the friction velocity, (-uw)0^(1/2), u being the wind-speed fluctuation along x and 'o' denoting conditions at "ground level". zN and φ are empirical functions of the atmospheric stability parameter, z/L, given in Businger et al. [1], where L is the Monin-Obukhov length. a (>0) is such as to preserve continuity of ∂K/∂z at zN. k = 0.35 is von Karman's constant, n = 0.15 is an empirical constant and z0 is the surface roughness length. f is the Coriolis parameter, g is gravity, and ∂T/∂y is the lateral ambient temperature gradient. Eqs. 2 to 4 apply to a convectively-unstable mixing layer capped by an inversion lid at zi and containing a region from zN to zi where the vertical heat flux is increasingly negative. Their derivation is given in Crabbe [2].

3. Experimental Procedure

The three tracer-gas experiments described in this report were flown in the fall of 1975 during midday periods when approximately stationary meteorological conditions existed. Gaseous sulfur hexafluoride (SF6) was released from a height of 1.5 m at the upwind end of the test range (fig. 1) at a rate of 1-3 g/s depending upon the mean wind speed and distance to the sampling station(s). The range consists of open fields and scattered wood lots with an assumed value of 0.3 m for z0. The elevation varies from 300 to 400 ft MSL in the interior to between 200 and 300 ft in the river valleys. Table 1 summarizes the experiments.

4. Comparison of Theory to Measurement

Calculated values of χ(x,z) from eq. 1 are compared to airborne G.C. measurements in figures 8 to 10. For exp. 3, both the prediction using the modelled eddy diffusivity, eq. 2, and that using the polynomial fit to the T-33 values of K(z) are plotted. For the latter, the inversion base was placed at the height where K = 0, which was slightly greater than the preferred value of zi based on the measured temperature profile (fig. 4).
Also shown are the predictions of the Gaussian plume model in exps. 1 and 3. Those denoted "ground-based observation" were calculated from the expression

χ = (2/π)^(1/2) (Q/(U σz)) exp(-z^2/2σz^2)    (5)

using the 10-m wind speed with σz taken from the Pasquill-Gifford graph [5]. Those denoted as "fitted" used a value of σz calculated from the above expression using the measured surface value of χ and the vertically-averaged wind speed (6 and 11 m/s, respectively, in exps. 1 and 3).

Overall, it is evident that the present gradient-transfer model yields a prediction of the G.C. data very much superior to that of the Gaussian plume model, even when fitted values of σz and vertically-averaged mean wind speeds are used. Since neither would, in general, be accessible to the ground observer, the apparent success of the fitted Gaussian profile is misleading. The success of the gradient-transfer hypothesis is a result of its incorporating the height dependence of K(z), which is lacking in the Gaussian plume model. In this context, the success of both the aircraft-measured and theoretical K-profiles in predicting the G.C. data suggests the present diffusion model is adequate for mesoscale dispersion.

5. Conclusion

A study of vertical diffusion in the atmosphere using airborne G.C. and numerical modelling has been described. In the tracer-gas experiments reported here, the predictions of gradient-transfer theory are superior to those of the Gaussian plume model, in agreement with the conclusion of Lamb, Chen and Seinfeld [7]. The overall agreement suggests that quite economical computer predictions are possible for vertical diffusion over mesoscale distances by using a simple but physically plausible modification to surface-layer similarity values of K(z) to account for the presence of an overlying stable layer. For example, a typical calculation out to 200 km with the present model requires about 30 seconds on the IBM 360.

References

[1] Businger, J. A., et al., Flux Profile Relationships in the Atmospheric Surface Layer, J. Atm. Sci., 28, #2, (March 1971).

[2] Crabbe, R. S., Some Environmental Measurements of the Vertical Spread of Pollutants from Low-level Sources, Natl.
Research Council of Canada, LTR-UA-28, (April 1975); and Examination of Gradient-transfer Theory for Vertical Diffusion over Mesoscale Distances Using Instrumented Aircraft, Natl. Res. Council of Canada, LTR-UA-37, (August 1976).

[3] Csanady, G. T., Diffusion of Heavy Particles in the Atmosphere, J. Atm. Sci., 20, (1963).

[4] Elias, L., et al., On-site Measurement of Atmospheric Tracer Gases, Geophysical Research Letters, 3, #1, (January 1976).

[5] Gifford, F. A., Use of Routine Meteorological Observations for Estimating Atmospheric Dispersion, Nuclear Safety, 2, #4, 47-51, (June 1961).

[6] Hosler, C. R., The Meteorology Program of the Environmental Protection Agency, Bull. Am. Meteor. Soc., 56, #12, (December 1975).

[7] Lamb, R. G., Chen, and Seinfeld, Numerico-empirical Analyses of Atmospheric Diffusion Theories, J. Atm. Sci., 32, (1975).

[8] Pasquill, F., Atmospheric Diffusion, 2nd edition, (1974), (John Wiley & Sons); and Atmospheric Diffusion, (1962), (Van Nostrand, N.Y.).

[9] Hanna, S. R., A Method of Estimating Vertical Eddy Transport in the Planetary Boundary Layer Using Characteristics of the Vertical Velocity Spectrum, J. Atmos. Sci., 25, 1026-1033, (November 1968).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

IN-SITU QUANTITATION OF BACKGROUND HALOFLUOROCARBON LEVELS

L. Elias
National Aeronautical Establishment
Ottawa, Canada

1. Introduction

The technique described in this paper, originally developed for use with electrophilic tracer gases in atmospheric dispersion studies [1], has been adapted to the measurement of halofluorocarbon levels in ambient air. One advantage of the in-situ determinations obtained is that the method avoids the risk of the sample loss associated with other storage and retrieval procedures and, more significantly, allows for on-the-spot checking of interesting or unusual results that may arise. An integral part of the method is the calibrations, which are performed in the field during the course of the analyses.

2. Apparatus and Materials

A single column GC equipped with two 8-port gas sampling valves and an Ni-63 electron capture detector is employed, shown schematically in figure 1. A metered volume of sample air enters the GC through a 6 ft x 1/16 in o.d. stainless steel inlet tube and is drawn through a short precolumn, or adsorber, located outside the oven. Flow rate through the adsorber circuit is throttled to 60 cm3/min on the exit side of the air pump. The subsequent valving sequence allows for purging the adsorber with N2, isolating and heating the adsorber, and transferring the released vapors to the column via the N2(I) carrier stream. The total time involved in an analysis is generally less than 10 minutes.

The adsorber used in this study is depicted in figure 2. It is a 1-1/2 in long x 1/4 in o.d. Pyrex tube vertically mounted between two heavy-walled steel arms attached to the GC oven. Swagelok fittings welded onto the steel arms as shown permit ready changing of the adsorber, which is connected by means of Vespel ferrules. An aluminum heating block, grooved to accommodate the precolumn, and a similar cooling block are moved transversely on a guide rail to heat and cool the adsorber, respectively, to 200°C and 0°C.

For use with SF6 and CF2Cl2 the adsorber was packed with 60/80 mesh charcoal, while for CF2Br2, CFCl3 and CCl4 Chromosorb 102 was suitable. For the work to date various columns have proven adequate: Molecular Sieve 5A for SF6, Porapak Q for CF2Cl2, and Carbowax 400/Porasil C for the others. The columns were 1/4 in o.d. by about 3 ft in length and operated at 80°C.
For use with SF 6 and CF 2 C1 2 the adsorber was packed with 60/80 mesh charcoal, while for CF 2 Br 2 , CFC1 3 and CC1 l+ Chromosorb 102 was suitable. For the work to date various columns have proven adequate, Molecular Sieve 5A for SF 6 , Porapak Q for CF 2 C1 2 , Carbowax 400/Porasil C for the others. The columns were 1/4 in o.d. by about 3 ft. in length and operated at 80°C. figures in brackets indicate the literature references at the end of this paper. 435 Air Inlet Column Pump N 2 (I) Plug Fig. 1 Schematic of GC dk 5 Glass Wool Packing IP in lii 1 GC Oven Fig. 2 Adsorber 436 The GC system was calibrated in two ways, depending on whether the calibration standard was stored as a compressed gas or at normal pressure. When the standard was a compressed gas, an auxiliary 6-port valve and sample loop installed in the N 2 (I) carrier line permitted the injection of a known volume of the standard to enter the column, either via the adsorber or bypassing the adsorber [1]. In the case of a calibrating mixture at ordinary pressure, injections were made with a gas-tight syringe through a septum installed in the air inlet line. For this purpose, the air drawn through the inlet during the calibration was filtered through a charcoal scrubber to provide a clean air reference. 3. Preparation of Calibration Standards Standard mixtures of 1 ppb (v/v) of the gaseous compounds in nitrogen were prepared in Fisher Lecture-Spheres by a two-stage dilution procedure. In this procedure, a 6-port valve fitted with a sample loop is used to transfer a known volume of the pure compound into the first bottle by means of a N 2 stream under about 10 psig pressure; the bottle (also of known volume) is then pressurized to several hundred psig with N 2 and subjected to a heat lamp for 1 hour or more to promote mixing. A measured volume of the stock mixture (of the order of 100 ppm in concentration) is similarly transferred to a second Lecture-Sphere, which is then pressurized to about 1500 psig and also heated. Cross-checks between stan- dards of differing concentrations prepared in this manner yielded analyses within 5 to 10 percent of the expected ratios in the cases of SF 6 , CF 2 C1 2 and CF 2 Br 2 . Although mixtures of SF 6 and CF 2 C1 2 appeared to be stable with time, the integrity of no mixture was relied upon one month after its preparation. Standard mixtures of a few ppb of CFC1 3 and CC1 1+ , as well as CF 2 Br 2 (all liquids at 20°C) were prepared by an alternative two-stage dilution procedure. The vessels in this case were 1 liter Pyrex volumetric flasks fitted with a septum and containing a quantity of glass boiling beads to create turbulent mixing when shaken. The flasks were made by cutting the necks just above the volume mark and sealing a short length of 1/4 in o.d. tube onto each. A threaded sleeve, made from one-half of a 1/4 in Swagelok union was slipped over the tube, the latter flared, and a septum held in place against it by the Swagelok nut. The flasks were flushed with N 2 by removing the septum and inserting a 1/16 in stain- less steel purge line. The first stage dilution consisted of injecting 1 yl of the liquid halocarbon into the N 2 -filled flask, shaking the flask to obtain thorough mixing, then injecting an aliquot of the mixture by means of a 100 pi gas-tight syringe into the second flask, again followed by mixing. 
The precision limits of the standards so prepared appeared to be within the reproducibility of the analyses attained by syringe injections of the standards, i.e., about 10 percent. The agreement between syringe samples of CF2Br2 standards prepared in the Lecture-Spheres and in the glass flasks was again within these precision limits, corroborating the procedures employed. Standards prepared in the glass flasks were kept no longer than two days.

4. Laboratory Testing

A simple internal check on the overall cleanliness of the GC system, and a prerequisite to successful air sampling, was the test of "collecting" a few hundred cm3 of N2(I) carrier. Such a test normally resulted in a completely flat baseline. On one occasion, however, the persistence of residual peaks was traced to a leaky GC valve and the consequent diffusion of laboratory air into the adsorber.

The efficacy of the adsorber for use with a particular halofluorocarbon was tested by comparing the peak areas derived from standard samples injected directly into the column with those obtained from collection on the precolumn. It was also important to determine the maximum volume of air which could be passed through the precolumn before a collected vapor began to elute. This volume was determined by first collecting a standard sample on the adsorber and then purging with purified air via the inlet probe for increasing times until the analyzed peak was reduced in size appreciably. The adsorber packings mentioned allowed a sample volume well in excess of 1000 cm3 to be processed. It is estimated that the minimum detectable concentration for the above compounds is well below 1 part per trillion.

5. Field Measurements

In addition to the SF6 and CF2Br2 plume dispersion study reported on at this Symposium [2], some initial results have been obtained on levels of CF2Cl2, CFCl3 and CCl4 in the Ottawa-Hull area, a light-to-moderately industrialized region with a population of less than 1/2 million. The results are summarized in table 1.

Table 1. Background levels near Ottawa

Compound   Concentration (ppt)   Relative location
SF6        0.2 - 0.4             Above 1500 ft.
CF2Br2     < 1                   Upwind, downwind
CF2Cl2     150 ± 8               Upwind
CFCl3      85 - 104              Upwind, 10 miles; downwind, 5, 13, and 24 miles
CCl4       122 ± 28              Upwind, downwind

The few measurements of CF2Cl2 were made outside the laboratory in early spring of this year under light NW winds. The CFCl3 and CCl4 determinations were made in late summer with the GC mounted in the laboratory van. Air sampling was carried out downwind of the capital region to a distance of 50 miles or more and upwind to 10 miles of the outskirts. A total of 50 samples were analyzed over a period of 3 days; in one series, CF2Br2 was released at a rate of 7 lb/h to serve as a tracer for the city plume. Winds were generally SW at 5 to 10 knots. In the case of CFCl3 a distinct trend was observable: within about 10 miles of the outskirts, readings were as much as 25 percent larger downwind than those observed upwind, and remained slightly higher to distances as far as 25 miles. The CCl4 data, on the other hand, showed no such trend, for the most part fluctuating randomly within about 10 percent of the mean, but occasionally varying much more.

It may be possible from such data to estimate the contribution to the total halofluorocarbon burden from a particular locale. Work is continuing along these lines.

References

[1] Elias, L., McCooeye, M.
, and Gardner, G., On-Site Measurement of Atmospheric Tracer Gases, Geophysical Research Letters, 3, (1), 17-20, January 1976.

[2] Crabbe, R. S., A Study of Vertical Diffusion in the Atmosphere Using Airborne Gas-Chromatography and Numerical Modelling, Presented at the 8th Materials Research Symposium, Gaithersburg, Maryland, (September 1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ORIGIN AND RESIDENCE TIMES OF ATMOSPHERIC POLLUTANTS: APPLICATION OF 14C

L. A. Currie and R. B. Murphy
Analytical Chemistry Division
National Bureau of Standards
Washington, DC 20234, USA

1. Introduction

As man's activities yield continually increasing rates of production of atmospheric pollutants, it becomes imperative for us to improve our knowledge concerning their recycling by nature. Two of the critical parameters in this process are: (a) residence times (or the inverse, removal rates); and (b) the relative contributions of natural and anthropogenic sources, both to current pollutant burdens and to regional pollution episodes. Residence times are of concern because they are indicative of the time required to compensate for a transient increment in pollutant output. They relate also to the distributional breadth of locally emitted pollutants. Exchange between the northern and southern hemispheres, for example, must be taken into account if residence times are on the order of one year or more [26].

Figures in brackets indicate the literature references at the end of this paper.

The relative production rates from man's activities as opposed to natural processes provide basic input for the evaluation of the various potential control strategies. When combined with data on transport, atmospheric reactions and natural sinks, such source data permit one to effectively construct the overall cycle for a given chemical pollutant.

Unique information concerning the above parameters can sometimes be obtained through isotopic measurements. Following a brief look at some pertinent applications of isotope ratios and molecular tracers, we shall address ourselves to the problem of carbonaceous pollutants. Experiments to discriminate among sources of such species, via 14C/12C ratio measurements, will be discussed in the final section of the paper.

2. The Role of Tracers and Isotopes

Budgets of contaminating species may be estimated by compiling data for all known natural and anthropogenic sources. The assumption of steady state may be tested via a suitable record of measured concentrations along with a tabulation of all known removal processes, including transport. The residence time may then be calculated from the steady state concentration and the production rate [12]. The principal weakness of such an approach, of course, is the possible existence of unknown sources and sinks, which results in spurious estimates for residence times and production rates. The search for additional sinks to explain the relatively constant atmospheric concentration of carbon monoxide led to the so-called "CO sink anomaly," as new sources were instead discovered [20].

Molecular tracers, along with stable and radioactive isotopes, can often provide critical information to supplement uncertain budget estimates which derive from production inventories. Work in our laboratory has taken this latter route: the determination of
atmospheric transport parameters [15] was the goal of our measurements of 37Ar [24]; the elucidation of the relative contributions of man and nature to atmospheric hydrocarbons is the goal of our present investigation [8].

The utility of molecular tracers lies chiefly in the quantitation of transport, uptake and exchange, while isotopic measurements can provide data concerning pollutant sources. A number of molecular species not normally occurring in nature and with relatively long residence times in the environment (sulfur hexafluoride, chlorofluoromethanes, 13CD4) have been successfully employed as tracers [7a]. Isotope ratios in the light elements (C, O, N) can be useful both for geophysical source characterization and for the investigation of sinks for pollutants involving (chemical) exchange. Some recent work involving stable isotopes includes: (1) 14N and 15N for the identification of sources of atmospheric nitrogen compounds [18]; (2) 34S and 36S for the tracing of atmospheric sulfur [19]; and (3) 13C/12C and 18O/16O for the assignment of sources of carbon monoxide, as a function of both location and season [28]. The last study indicated five major sources of carbon monoxide, all characterized by distinct isotopic compositions.

The application of radiocarbon (14C) to the resolution of the relative magnitude of the fossil fuel contribution to the carbon dioxide balance was first advanced by Suess [29]. The "Suess Effect," which is visible as a perturbation in recent radiocarbon dates, has been utilized by a number of investigators to assess the projected magnitude of the "Greenhouse Effect" on the Earth's climate [4,9].

3. Carbonaceous Pollutants

Aside from carbon dioxide, the principal species of concern include CO, CH4, gaseous and particulate hydrocarbons, and the halocarbons (chlorofluorocarbons and carbon tetrachloride, in particular). Significant questions remain as to the nature of the sources and sinks for all of these contaminants. Carbon monoxide, for example, has been assigned residence times ranging from 0.1 year [30], based upon radiocarbon measurements [16], to 2.7 years from atmospheric budget considerations [22]. Similarly, the presumed major cycle for carbon monoxide in the troposphere has changed at least three times since 1969. The anthropogenic CO contribution is of some concern, as the annual production rate increased by more than a factor of two between 1966 and 1974 [27]. Though this (anthropogenic) CO had been believed to be the principal component, a recent compilation suggests that natural sources account for 90 percent of the total production [20].

Tropospheric methane, which arises largely from natural processes, is of special interest because its oxidation is currently believed to be the major source of CO, thereby linking the carbon and hydrogen cycles in the atmosphere [10]. Sources and sinks for halocarbons and hydrocarbons continue to be evaluated in light of the potential destruction of stratospheric ozone by the former [23], and the production of urban and rural photochemical oxidants by the latter. As with CO, the halocarbons were originally believed to be primarily anthropogenic, but recent assessments have suggested the existence of very significant natural sources of carbon tetrachloride [1] and methyl chloride [11]. Consequently, the fluorocarbon contribution to the total atmospheric halocarbon balance may be as little as 25 percent [7].
On a worldwide scale, naturally emitted "reactive" hydrocarbons exceed those produced by man by more than a factor of six [20]. The abundance of these species, together with their large photochemical reaction rates with hydroxyl radicals [31] and the recurrence of rural pollution episodes, continues to raise questions regarding the relative importance of vehicular vs botanical emissions [21,17]. Progress in identifying sources contributing to the insoluble organic fraction of atmospheric particulates has been made through the application of high vacuum pyrolysis-gas chromatography [13].

Measurement of the ratio ¹⁴C/¹²C in selected samples of various atmospheric hydrocarbon species should permit us to deduce the fractions in each case due to natural sources. The success of such an approach depends upon the assumption that anthropogenic carbon is principally derived from fossil fuels, and can therefore be considered "dead" in comparison with natural carbon, which arises primarily from recent biological activity. Previous measurements of radiocarbon ratios have been carried out for atmospheric methane by Libby [13a], Bishop [5], and Bainbridge [3], all cited in [10]. Tropospheric radiocarbon has also been measured in CO by MacKay [16] and in particulates by Clayton [6] and Lodge [14]. The necessity of collecting large amounts of carbon added to the difficulty of such measurements and led to the possibility of non-representative sampling. Such problems may be overcome with the use of our present system.

4. Collection and Measurement of Radiocarbon in Atmospheric Hydrocarbons

The many problems of sampling and counting discussed above are now avoided in the low-level radiocarbon measurement system at the National Bureau of Standards, which is unique in several respects. First, we utilize very small gas proportional counters, which hold as little as 15 ml of fill gas and require 10 mg of carbon or less. Figure 1 illustrates a typical small counter, constructed of high-purity quartz, with its associated high-purity copper shield. The counter background is currently about 0.15 count per minute (cpm), and its dead volume is minimized by the use of a reentrant seal at the end to support the center wire. The second distinctive feature of our system is the ability to discriminate not only on the basis of pulse energy (pulse height analysis) but also by means of the pulse shape. This permits us to distinguish between different classes of particles because of the dependence of the relationship between pulse height and track length (pulse shape) on the type of event. Pulse shape is assessed by sampling each counter event during the first 10 nanoseconds; integration and differentiation then yield a signal whose amplitude is inversely proportional to the rise time. We have the capability of displaying two-dimensional spectra from counting experiments, with the pulse height defined on the x-axis and the pulse shape parameter on the y-axis. Two such spectra are shown in figure 2, where the pulses corresponding to short-range events in the counter lie in a narrow band at the 45-degree diagonal between the two axes. Longer range discharges will generally lie below the diagonal with respect to the x-axis. Events above the 45-degree line may for the most part be ascribed to electrical noise. Such noise is a serious problem, particularly in very small counters; it may arise as a result of electrical field distortion at the ends of the counter wall. Pulse shape discrimination is therefore extremely beneficial in such minicounter systems.
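The two-parameter discrimination just described can be pictured with a small sketch. The normalization, band width, and thresholds below are hypothetical and are not the NBS electronics; this is only a schematic of the classification logic (short-range events along the diagonal, long tracks below it, electrical noise above it):

```python
# Schematic event classification using the two measured parameters described
# above: pulse height (energy) and a pulse-shape amplitude that is inversely
# proportional to the rise time.  Both are assumed normalized to the same
# full scale (0-1); 'band' is a hypothetical half-width of the diagonal band.

def classify_event(pulse_height, shape_amplitude, band=0.15):
    if shape_amplitude > pulse_height + band:
        return "electrical noise (above diagonal)"
    if shape_amplitude < pulse_height - band:
        return "long-range track (below diagonal)"
    return "short-range event (on diagonal)"

for height, shape in [(0.30, 0.32), (0.70, 0.20), (0.10, 0.55)]:
    print(height, shape, "->", classify_event(height, shape))
```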
An additional advantage to the use of pulse shape spectroscopy lies in the ability to separate contributions from certain contaminating radionuclides, such as ³H or radon. These nuclides often occur with the sample, the chemical reagents or apparatus, or even in the construction materials of the counter.

Regarding the choice of counting gas, we have found that while carbon dioxide may be used in the minicounter, its pulse shape characteristics are relatively poor and it exhibits considerable sensitivity to trace chemical impurities. Performance appears to be optimal when methane or methane-noble gas mixtures are used. The current approach is therefore to reduce the CO₂ formed from combustion to CH₄, after purification. For this purpose we apply the procedure of Anand and Lal [2] with some minor variations.

The feasibility of measuring carbonaceous pollutants has been reviewed by one of us [8] recently, where it was concluded that as little as 10 milligrams of carbon would be adequate in our system for source discrimination. It was shown that all species of interest except the halocarbons could be collected in a reasonable sampling period with transportable apparatus, without having to rely upon air liquefaction plants or similar facilities. This conclusion is based on typical concentrations in polluted air of 2 mg/m³ for the total nonmethane hydrocarbons, 0.1 mg/m³ for total particulates, 20 mg/m³ for carbon monoxide, and 8 × 10⁻⁴ mg/m³ for carbon tetrachloride (the most abundant halocarbon).

Figure 1. Photograph of the small quartz counter (15 ml volume) and associated shield. The shield, along with the inner wall of the counter, is constructed of OFHC copper.

Figure 2. Background spectra: pulse shape vs pulse height. Fig. 2a shows a high-gain spectrum (most pulses at amplifier saturation), with short-range events due to a small amount of ³H lying along the diagonal (dashed line). Fig. 2b, obtained with the 15 ml counter at relatively low gain, shows isolated noise pulses lying above the diagonal.

For the collection of gaseous species of interest, we utilize a large stainless steel trap which holds about 3 kilograms of NaX zeolite, grade 13A ("molecular sieve"); this trap may be cooled to the temperature of dry ice (-78 °C) conveniently in the field. Preceding the NaX zeolite trap are similar ones containing CaA, NaA, and KA zeolites, respectively, all similarly cooled. The entire system is evacuated to ca. 1 × 10⁻³ torr and baked out at 200-250 °C before use. The selectivity of the molecular sieves for hydrocarbons, CO, and CH₄ over oxygen and nitrogen is such that enrichment factors of up to 10³ to 10⁵ are projected. A noncontaminating, electrically powered stainless steel bellows pump is utilized to draw air through the zeolite traps. The maximum flow rate of the system is 0.155 m³/minute, thus enabling collection from 100-200 m³ of air in a reasonable amount of time. The collected organic pollutants are desorbed from the zeolite by carefully controlled heating in a vacuum. In a sense the sieve functions as a chromatographic column, and several fractions are removed from the vacuum system during the course of the programmed heating cycle.
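The sampling-time argument can be made concrete with a short calculation. The 10 mg target, the concentrations, and the flow rate are those quoted above; treating the quoted species concentrations as if the required 10 mg were collected directly (ignoring the carbon mass fraction of each species) is a deliberate simplification:

```python
# Rough estimate of the air volume and pumping time needed to collect ~10 mg
# of each species at the typical polluted-air concentrations quoted in the
# text, at the stated maximum flow rate of 0.155 m^3/min.

required_mass_mg = 10.0
flow_m3_per_min = 0.155

concentrations_mg_per_m3 = {
    "nonmethane hydrocarbons": 2.0,
    "total particulates": 0.1,
    "carbon monoxide": 20.0,
    "carbon tetrachloride": 8e-4,
}

for species, conc in concentrations_mg_per_m3.items():
    volume_m3 = required_mass_mg / conc
    hours = volume_m3 / flow_m3_per_min / 60.0
    print(f"{species}: {volume_m3:.0f} m^3, about {hours:.1f} h of pumping")
```

The carbon tetrachloride entry comes out to well over a thousand hours, consistent with the statement above that the halocarbons are the one class not collectible in a reasonable period with the transportable apparatus.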
Following GC/MS identification of those species which are present, preparative gas chromatography is employed to provide sufficient quantities of separated components for combustion and conversion to counting gas.

The collection of atmospheric particulate matter will also take place; this is especially valuable for the higher molecular weight hydrocarbons. In a preliminary measurement of a typical urban particulate sample we obtained an upper limit of 20 percent natural (contemporary) carbon, in approximate agreement with the work of Clayton, et al. [6]. Also under investigation is the possibility of collecting intermediate molecular weight (C₄-C₅) hydrocarbons from the atmosphere by means of rainwater. With a total organic carbon concentration of roughly 12 milligrams/liter, little sample is required for a minicounter fill. For the study of individual species perhaps 10-20 liters would be desired. The organic compounds of interest are removed simply by flushing the rainwater with ultrapure helium [25], followed by trapping with liquid nitrogen. The species of interest are then further separated gas chromatographically and converted to counting gas as described above.

Preliminary data concerning the relative utility of the various methods of sampling have been obtained from an experiment conducted during a pollution alert in August 1976 at the NBS atmospheric monitoring station in Gaithersburg, Maryland. We are currently commencing comparative studies in urban areas, represented by downtown Washington, DC, and rural areas in West Virginia and Pennsylvania.

Thanks are due W. F. Libby and B. Weinstock for helpful discussions, and G. Ritter for programming assistance in the preparation of figure 2. Partial support by the Office of Air and Water Measurement of the National Bureau of Standards is gratefully acknowledged.

References

[1] Altshuller, A. P., Average Tropospheric Concentration of Carbon Tetrachloride Based on Industrial Production, Usage, and Emissions, Environ. Sci. and Technol., 596 (1976).
[2] Anand, J. S. and Lal, D., Synthesis of Methane from Water for Tritium Measurement, Nature, 201, 775 (1964).
[3] Bainbridge, A. E., Suess, H. E., and Friedman, I., Isotopic Composition of Atmospheric Hydrogen and Methane, Nature 192, 648 (1961).
[4] Baxter, M. S. and Walton, A., A Theoretical Approach to the Suess Effect, Proc. Roy. Soc. (London) A318, 213 (1970).
[5] Bishop, K. F., Dalafield, H. I., Eggleton, A. E. J., Peabody, C. O., and Taylor, B. T., The Tritium Content of Atmospheric Methane, Proc. Sym. Tritium Phys. Biol. Sci., Vienna, Austria, May 1961, 1, 55 (1962).
[6] Clayton, G. D., Arnold, J. R., and Patty, F. A., Determination of Sources of Particulate Atmospheric Carbon, Science, 122, 151 (1955).
[7] Covert, D. A., Charlson, R. J., Rasmussen, R., and Harrison, H., Atmospheric Chemistry and Air Quality, Reviews of Geophysics and Space Physics, 13, 765 (1975).
[7a] Cowan, G. A., Ott, D. G., Turkevich, A., Machta, L., Ferber, G. J., and Daly, N. R., Heavy Methanes as Atmospheric Tracers, Science 191, 1048 (1976).
[8] Currie, L. A., Noakes, J., and Breiter, D., Measurement of Small Radiocarbon Samples: Power of Alternative Methods for Tracing Atmospheric Hydrocarbons, R. Berger and H. Suess, Eds., 9th Internatl. Radiocarbon Conf., Univ. of Calif., Los Angeles and San Diego, June 1976.
[9] Dugas, D., Increase of Exchangeable Carbon in the Earth's Reservoirs from Combustion of Fossil Fuels, RAND-P-3990 (1968).
[10] Ehhalt, D.
H., The Atmospheric Cycle of Methane, Tellus 26, 58 (1974).
[11] Grimsrud, E. P., and Rasmussen, R. A., The Analysis of Fluorocarbons in the Troposphere by Gas Chromatography-Mass Spectrometry, College of Engineering, Research Division, Washington State Univ., Pullman, WA 99163 (1975).
[12] Junge, C. E., Residence Time and Variability of Tropospheric Trace Gases, Tellus 26, 4 (1974).
[13] Kunen, S. M., Burke, M. F., Bandurskii, E. L., and Nagy, B., Preliminary Investigations of the Pyrolysis Products of Insoluble Polymer-Like Components of Atmospheric Particulates, Atmos. Environ. 10, 913 (1976).
[13a] Libby, W. F., personal communication.
[14] Lodge, J. P., Jr., Bien, G. S., and Suess, H. E., The Carbon-14 Content of Urban Airborne Particulate Matter, Intl. J. Air Poll., 2, 309 (1960).
[15] Machta, L., Argon-37 as a Measure of Atmospheric Vertical Mixing, Noble Gases, R. E. Stanley and A. A. Moghissi, Eds., ERDA TIC: CONF-730915 (1973).
[16] MacKay, C., Pandow, M., and Wolfgang, R., On the Chemistry of Natural Radiocarbon, J. of Geophysical Research, 68, 3929 (1963).
[17] Maugh, T. H., II, Air Pollution: Where Do Hydrocarbons Come From? Science, 189, 277 (1975).
[18] Moore, H., Isotope Measurement of Atmospheric Nitrogen Compounds, Tellus, 26, 169 (1974).
[19] Nielsen, H., Isotopic Composition of the Major Contributors to Atmospheric Sulfur, Tellus, 26, 213 (1974).
[20] Rasmussen, K. H., Taheri, M., and Kabel, R. L., Global Emissions and Natural Processes for Removal of Gaseous Pollutants, Wat., Air and Soil Poll. 4, 33 (1975).
[21] Rasmussen, R. A., What Do the Hydrocarbons from Trees Contribute to Air Pollution? J. of the Air Pollution Control Assn. 22, 537 (1972).
[22] Robinson, E., and Robbins, R. C., Sources, Abundance, and Fate of Gaseous Atmospheric Pollutants, Final Report, PR-6755, Stanford Research Institute (1968).
[23] Rowland, F. S. and Molina, M. J., Chlorofluoromethanes in the Environment, Reviews of Geophysics and Space Physics, 13, 1 (1975).
[24] Rutherford, W. M., Evans, J., and Currie, L. A., Isotopic Enrichment and Pulse Shape Discrimination for Measurement of Atmospheric Argon-37, Anal. Chem., 48, 607 (1976).
[25] Saunders, R. A., Blachly, C. H., Kovacina, T. A., Lamontagne, R. A., Swinnerton, J. W., and Saalfeld, F. E., Identification of Volatile Organic Contaminants in Washington, DC Municipal Water, Water Res. 9, 1143 (1975).
[26] Schmidt, U., Molecular Hydrogen in the Atmosphere, Tellus 26, 78 (1974).
[27] Seiler, W., The Cycle of Atmospheric CO, Tellus 26, 116 (1974).
[28] Stevens, C. M., Krout, L., Walling, D., Venters, A., Engelkemier, A., and Ross, L. E., The Isotopic Composition of Atmospheric Carbon Monoxide, Earth and Planetary Science Letters 16, 147 (1972).
[29] Suess, H. E., Radiocarbon Concentration in Modern Wood, Science 122, 415 (1955).
[30] Weinstock, B., Carbon Monoxide: Residence Time in the Atmosphere, Science 166, 224 (1969).
[31] Winer, A. M., Lloyd, A. C., Darnall, K. R., and Pitts, J. N., Jr., Relative Rate Constants for the Reaction of the Hydroxyl Radical with Selected Ketones, Chloroethenes, and Monoterpene Hydrocarbons, J. Phys. Chem. 80, 1635 (1976).

Part IX. CHEMICAL CHARACTERIZATION OF INORGANIC AND ORGANOMETALLIC CONSTITUENTS

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).
CHEMICAL CHARACTERIZATION OF INORGANIC AND ORGANOMETALLIC CONSTITUENTS

Robert S. Braman
Department of Chemistry
University of South Florida
Tampa, Florida 33620, USA

1. Nature of the Analytical Problem

The field of environmental microanalysis is not without its challenges. The first and perhaps the most important one is to define the objectives and nature of the task facing an investigator. Typical objectives of analytical studies might be: to develop a system for monitoring a specific pollutant source, perhaps for several components; to monitor at locations far removed from immediate pollution sources; to analyze for pollutants at remote locations; to study chemical reactions of specific pollutants in the environment; and to study in the laboratory the effects of selected pollutants on selected biological systems. When going beyond these general objectives one is faced with: identifying the specific chemical compound or compounds for which analytical data are sought; determining the concentration and range of concentrations at which they are present in samples; and determining the needed accuracy and precision. Designed into the analytical plan may be factors such as desired analysis frequency, synoptic survey considerations, diurnal, seasonal, and weather effects, and perhaps the impact of anthropogenic sources.

In a reasonably large number of circumstances, the identity and concentration of specific chemical forms of elements have been well known for a long time and excellent methods of analysis are available. This is true of such common ions as nitrate, nitrite, chloride, sodium, etc. Characteristically, such ions are either present in comparatively high concentration in samples or their chemistry does not permit more than one form under environmental conditions. In the latter case total element analyses are sufficient. This situation is generally not the case for most of the transition metals and metalloids, although excellent methods are available for these too at higher trace concentrations. In many circumstances one may have only meager data on total element concentration, obtained by methods which could be totally unreliable because of the circumstances of sample acquisition. It is not uncommon that an investigator has nothing to go on other than pure speculation. This individual then finds himself defining the problem in the process of investigating it. It appears to this writer that the great majority of environmental analysis problems should be of the individual-component type for the present.

A clear distinction exists between monitoring methods and investigative methods. Monitoring implies a continuous operation, the data from which are used to assess some environmental impact. Monitoring is expensive and places severe strain on the reliability of individuals and equipment. Thus, the implementation of monitoring for a specific component is a serious matter. The most reliable monitoring methods are the less complex ones, which minimize operator error. Many of the newer, more complex and often slower methods for various forms of the chemical elements may perhaps be considered for monitoring, but their use must first be justified by showing that selective analysis produces data which can be related to environmental effects more reliably than the easier techniques for total elemental composition. For the most part, then, most of us in the environmental speciation field are dealing with investigative work with little immediate concern for monitoring.
We are concerned with the other problem areas: pollution studies near sources, remote location analyses, chemical transformation studies, and controlled laboratory experimental studies.

A wide range of sample types and concentrations is encountered in environmental studies. Each sample type (air, water, soil-sediment, biota) requires a different sample modification step in analysis and each has its own concentration range. Whatever the analytical problem, we are generally faced with the necessity of obtaining good analytical precision in the analysis of from 1 ng to 1 microgram of analyte per sample. Air particulate analytes may range from 1 pg to 100 micrograms/m³. The partitioning of 4-10 ng/m³ of arsenic into As(III), As(V) and the methylarsenic compounds is illustrative of the problem which can arise when speciation is a goal. Modification of the acquired sample size is about the only factor which can be changed to meet the need for improved concentration limits of detection.

Remote sampling for analysis, for example at the polar regions, presents the maximum need for low limits of detection (LD) in air analysis. Many cubic meters of air must be sampled in order to obtain a sufficient sample for analysis of particulate trace metals. Water samples are often similar. Nonpolluted samples often require parts-per-trillion (ppt) concentration limits of detection. Mercury in open ocean sea water is present at concentrations on the order of 10 ppt. The speciation of that amount of mercury into Hg⁰, HgCl₂, CH₃HgCl and CH₃HgCH₃ forms has yet to be solved. Fortunately biota are inclined to preconcentrate metals, somewhat selectively, and so their analysis is less challenging. Parts-per-million (ppm) concentrations are more common. The major problem is sample preparation so as to obtain essentially quantitative removal of all analytes sought. Homogenizing and extraction are the most used techniques. Sediments and soils are generally similar to biota in concentration. Nevertheless, the chemical forms are often non-volatile, non-soluble and non-extractable without altering the chemistry of the analyte sought. Much more work on the analysis of these solids is needed.

2. Sample Processing for Analysis

Sample processing for analysis is in two parts, acquisition and modification. The object is to be able to present to the detector, in a form suitable to the measurement, the analyte or analytes extracted from the sample. The analyte must be present in the active volume of the detector in a concentration or sample size above the limit of detection. The ultimate concentration limit of detection is achieved when the entire analyte is present in the active volume of the detector. Many analysis methods include a preconcentration step simply to achieve this purpose.

Direct analysis without modification of the sample is the case for some analytical methods. Long-path absorption spectrophotometry is a notable example. Atmospheric ozone, hydrocarbons, nitrogen oxides and sulfur dioxide have been measured down to ambient concentrations with this approach. Direct analysis without preconcentration is used in the analysis of sulfur compounds in air after gas chromatographic separation. Here the limit of detection of the detector is sufficient to meet the limit of detection required by the sample concentration. Direct spectrophotometry can be applied to the analysis of solutions after conversion of the analyte to a form more recognizable from the matrix, i.e., to a colored complex.
Determinations of silica, phosphate ion, nitrite ion, nitrate ion and certain metal ions have been done thereby to parts-per-billion (ppb) concentrations in natural waters.

Nearly all other sample processing involves preconcentration techniques, often combined with a phase transfer such as from a gaseous form in air to an absorbed state on a filter or in a scrubbing solution. Frequently these sample modification steps offer opportunities for greatly improving both the specificity and the limits of detection of an analysis system. Filters such as used in collecting air particulates exhibit a very high degree of preconcentration with a minimum of complexity. Scrubbing solutions for air analysis are probably next in popularity and offer opportunities for the use of specific chemical reactions to achieve specificity. Nevertheless, a further preconcentration of the scrubbing solution may be needed to reach detectable concentrations of analytes. In air analysis, active-surface sampling tubes or tapes offer excellent preconcentration and selectivity possibilities. If solution chemistry is contemplated, solution volumes may generally be kept small and reagent contamination may be minimized. This approach has been recently used for the determination of mercury forms and the methylarsines in air by techniques developed in the laboratory. Sulfur also shows considerable promise. Cryogenic sampling has been largely used in organic analysis and it could be useful for collecting inorganic or organometallic compounds. It is not very selective and samples require separation.

Separation techniques are almost always necessary for purposes of eliminating interferences in all speciation-type analyses. Gas or liquid chromatography is most often used and many commercial models are available that are more or less complex. It should be noted that chromatography does not usually preconcentrate samples; it only separates components, and in fact it actually decreases the sample concentration presented to the detector. Consequently, preconcentration is usually a necessary step prior to and separate from the actual separation.

Separation by volatilization out of a large sample volume is widely used as a preconcentration method. Hydride formation by analyte reaction with a strong reducing agent such as sodium borohydride in the analysis of arsenic is a typical example. The hydride may be cold trapped into a very small volume, thus effecting a large preconcentration ratio. A growing number of elements may be preconcentrated by hydride evolution, including As, Bi, Sb, Ge, Sn, Se, Te, and some compounds of Si and Pb. Organometallic hydrides have also been produced for some of these elements. Preconcentration by solvent extraction and by co-precipitation are classical methods which also continue to be used and improved.
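The gain from a preconcentration step can be reduced to one line of arithmetic; the sketch below uses a 1 ng detector limit and a 100 ml processed sample as illustrative round numbers of the kind quoted later in this paper:

```python
# Concentration limit of detection after preconcentration: if essentially all
# of the analyte from a processed sample volume can be delivered to the
# detector, then LOD_conc = LOD_absolute / sample_volume.

def concentration_lod_ppb(absolute_lod_ng, sample_volume_ml):
    """ng per mL is numerically ppb for dilute aqueous samples."""
    return absolute_lod_ng / sample_volume_ml

# 1 ng detector limit, 100 mL of sample reduced (e.g. by hydride cold-trapping)
# to a volume small enough to introduce completely -> 0.01 ppb.
print(concentration_lod_ppb(1.0, 100.0))
```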
There is often confusion between the concentration limit of detection of a method and the fundamental limit of detection of the detector. For example, a limit of detection of 1 ng per sample gives a concentration limit of detection of only 0.1 ppm if 10 microliters is the maximum sample size. Of course, if needed, preconcentration may improve the concentration limit of detection. It is in the area of the combination of separation, sample preconcentration, and detection that new methods for environmental analysis are being developed. Some of the more used or readily available techniques have been selected and will be treated in some detail in the following sections.

4. Detection Systems-Solid Surface Analyses

A. Electron spectroscopy

Electron spectroscopy is as yet the only technique available for the direct examination of solid surfaces for the chemical form of trace metals present. Identification of chemical forms depends upon the number of compounds present, their concentration and the chemical shift available. The detection of seven different forms of sulfur in air particulate by ESCA [1,2]¹ is a good demonstration of a probably favorable case. Czuha and Riggs [3], in an excellent study of their ESCA approach to trace metal detection, demonstrated a limit of detection of 10 ng for silver on acrylic acid grafted to a polyethylene surface. Since they found that some 90% of the silver had penetrated to beyond the ESCA analysis depth, the potential limit of detection must be approximately 1 ng. Specificity of ESCA is only moderately good and a complex matrix probably would require some pretreatment. Removal of heavy loads of organic or inorganic materials may also be needed to avoid "burying" the trace materials sought.

¹Figures in brackets indicate the literature references at the end of this paper.

B. X-ray fluorescence spectroscopy

This method is more sensitive than ESCA and is used more for quantitative analysis. Quantitative multielement trace element analyses are easily achieved but the method cannot be used for detection of chemical forms. Sample modification is needed for this and would involve deposition of a particular separated analyte onto a surface for analysis. Limits of detection depend upon the element; examples are 0.02 ng (Zn, U) and 5 micrograms (Al) [4,5].

5. Detection Systems-Solution Measurements

A. UV-vis absorption

Absorption spectroscopy is deserving of substantially more consideration in application to environmental analysis. Based upon the absorption of ions or complex compounds, the approach holds excellent promise for speciation uses. There are several appealing features of spectrophotometry. Literally thousands of compounds have been investigated for one application or another and a large background of literature exists from which to select specific complexing agents. A good current review listing a large number has been presented by Boltz and Mellon [6]. Limits of detection are often considered poor for spectrophotometry, at least in comparison with some other instrumental methods. Nevertheless, it is easily demonstrated that the methods may be applied to ppb concentrations of analytes. Table 1 of Beer's law calculations gives the sample weight and concentration which are needed for the molar absorptivities listed, using a cell path of 1 or 10 cm, for an absorbance A = 0.005. Performance with preconcentration is also given.

Table 1. Beer's law calculations (A = 0.005)

                            No preconcentration        Analyte preconcentrated
                            (10 ml volume)             from 1 liter to 10 ml
1 cm cell,   ε = 10⁴        500 ng (0.05 ppm)          500 ng (0.5 ppb)
             ε = 10⁵         50 ng (5 ppb)              50 ng (0.05 ppb)
10 cm cell,  ε = 10⁴         50 ng (5 ppb)              50 ng (0.05 ppb)
             ε = 10⁵          5 ng (0.5 ppb)             5 ng (0.005 ppb)
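The entries in table 1 follow directly from Beer's law, A = εbc. The sketch below reproduces them for an assumed molar mass of 100 g/mol; the molar mass is not stated in the text and is introduced here only to turn molarities into nanogram quantities:

```python
# Beer's law: A = epsilon * b * c.  Table 1 asks what analyte mass in a 10 mL
# final volume gives A = 0.005, and what that mass corresponds to as a
# concentration of (a) the 10 mL itself and (b) an original 1 L sample
# preconcentrated to 10 mL.  A molar mass of 100 g/mol is assumed.

A = 0.005
MOLAR_MASS = 100.0        # g/mol, illustrative assumption
FINAL_VOLUME_ML = 10.0

for path_cm in (1.0, 10.0):
    for epsilon in (1e4, 1e5):
        c_molar = A / (epsilon * path_cm)                              # mol/L in the cell
        mass_ng = c_molar * (FINAL_VOLUME_ML / 1000.0) * MOLAR_MASS * 1e9
        ppb_direct = mass_ng / FINAL_VOLUME_ML                         # ng/mL = ppb
        ppb_preconc = mass_ng / 1000.0                                 # referred to 1 L
        print(f"{path_cm:>4} cm cell, eps = {epsilon:.0e}: {mass_ng:.0f} ng, "
              f"{ppb_direct:.2g} ppb direct, {ppb_preconc:.3g} ppb after 1 L -> 10 mL")
```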
This method of analysis is related to the high-performance liquid chromatography (HPLC) methods in that UV absorption detectors are used in the latter. A major disadvantage is that spectrophotometric methods usually are applied to the analysis of one component at a time. If 10 different components must be measured, 10 different methods would probably be needed. The technique has been automated extensively.

B. High performance liquid chromatography

Ultraviolet absorption detectors are popular for trace HPLC work. Although largely applicable to UV-absorbing organic complexes of metal ions, inorganic complexes can probably also be detected with good sensitivity. A great variety of inorganic separations have been reported in a recent review by Walton [7]. An approach to improving limits of detection is the use of highly absorbing (or fluorescing) derivatives [8]. For a molar absorptivity of 14,000 the predicted limit of detection is on the order of 10 ng per sample [9], probably typical of the ultimate for the HPLC techniques.

C. Fluorescence spectroscopy

Largely used in organic analysis, this technique has also been applied to the analysis of metal ions down to the ppb range. The need for separations in procedures is considerable, to avoid interferences. The method is potentially useful for speciation analyses since specific compounds are detected. Perhaps a typical example of detection limits is the determination of lead and cadmium by Hefley and Jaselskis [10], who report a limit of detection of 2.24 ng/ml or 2 × 10⁻⁸ M Cd.

D. Ion selective electrodes

Ion selective electrodes have the theoretical advantage that measurements may be made on a sample without disturbing its composition. An electrode is specific for a selected ion. There are many practical limitations. Concentration limits of detection are reasonably good. Many electrodes studied have been used down to 10⁻⁶ to 10⁻⁸ M analyte concentrations and thus are satisfactory for direct measurements on aqueous samples in the ppm to ppb range [11]. Preconcentration by ion-exchange chromatography in HPLC prior to use of electrodes can be used to reach limits of detection and to effect necessary separations. An example is the separation of nitrate and nitrite ions by HPLC followed by detection using a nitrate-selective liquid ion exchange membrane electrode [12]. Note that both ions were detected by the same "selective" electrode. The active volume of the detector was 50 microliters and the limit of detection was 0.1 to 0.3 ng for the nitrate and nitrite ions respectively.

The active volume of an ion selective electrode is not easy to define unless it is used in some fixed cell holder as in the previously described case [12]. From the active volume and concentration limit of detection one can estimate the sample size limit of detection. If we assume a 0.1 ml active volume with analyte at 10⁻⁸ M for cupric ion, for example, then this 0.1 ml sample will contain 0.06 ng, clearly a trace element sample size. A reasonably large number of ions may be sensed by electrodes already developed, including Na⁺, Mg²⁺, Ca²⁺, Cu²⁺, Cl⁻, F⁻, Br⁻, CN⁻, I⁻ and NO₃⁻, with more or less good selectivity.
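The 0.06 ng figure quoted above is simple stoichiometric arithmetic, sketched here with the atomic weight of copper (63.5 g/mol) supplied as the only outside number:

```python
# Mass of analyte contained in an electrode's active volume at a given molar
# concentration: mass = concentration * volume * molar mass.

def mass_ng(concentration_molar, volume_ml, molar_mass_g_per_mol):
    return concentration_molar * (volume_ml / 1000.0) * molar_mass_g_per_mol * 1e9

# 0.1 mL active volume of 1e-8 M Cu2+ (atomic weight 63.5) -> about 0.06 ng.
print(mass_ng(1e-8, 0.1, 63.5))
```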
Copper ion selective electrodes have received considerable attention. Jasinski, Trachtenberg and Andrychuk [13] have demonstrated that "uncomplexed" copper in ocean water may be directly determined at approximately 1 ppb concentrations. They noted that measurements inshore indicated that complexing agents were binding some of the copper. Andrew [14], in an elegant piece of work, demonstrated that fathead minnows and Daphnia magna responded to the toxicity of "uncomplexed" copper as indicated by a copper selective electrode.

E. Anodic stripping voltammetry and differential pulse polarography

A number of electrochemical methods may be applied to environmental analyses. The more sensitive ones are listed in table 2, taken in part from reference [15].

Table 2. Electrochemical methods applied to environmental analyses

Method                                     LD (reversible systems)   LD (irreversible systems)
Classical dc                               5 × 10⁻⁶ M                5 × 10⁻⁶ M
Normal pulse                               5 × 10⁻⁷ M                5 × 10⁻⁷ M
DPP                                        8 × 10⁻⁹ M                10⁻⁷ M
Rotating disc and hydrodynamically
  modulated disc electrodes                5 × 10⁻⁸ M                -

ASV methods have variable sensitivity and have been applied to concentrations as low as 0.005 ppb for some metals [5]. Nevertheless, the theoretical ability to operate selectively on an ion via potential control has not been well established in practice for environmental samples.

6. Detection Systems-Measurements in Vapor State

Nearly all of the detectors in this group must be used with a separation system, one which eventually presents the analyte to the detector in vapor form. An exception might be the direct analysis of solutions in plasmas capable of vaporizing the solvent. Even in this case the analyte is in vapor form during detection. These detectors have little capability for differentiating between one chemical form or another of an element. Nevertheless, owing to their considerable sensitivity, and with separations, they have been widely used in environmental work.

A. Flame ionization and electron capture detectors

These classical GC detectors are comparatively non-specific. Volatile organic complexes should be detectable but such applications have been few. The main environmental analysis use of the ECD has been the determination of methylmercury-type compounds in biological samples. Limits of detection range from 10⁻⁹ to 10⁻¹² g in small active volumes for good cases. A recent application of the FID in combination with HPLC exhibited a limit of detection of 3 to 4 ng per sample [16]. Although this value was for organic compounds, the organic complexes of metals should be similarly sensed.
Detection limits are generally in the 10" 11 to 10" 12 g/sample range with the flameless methods. Concentration limits of detection are generally in the ppb range. Although used mainly for total element analyses, the technique may be used in separation systems. Major limitations are lifetime of the high temperature furnace and difficulties in atomization of elements which "TTave tendencies to form stable oxides. D. Emission methods Methods in this group are similar with differences due to the method of producing electrical discharge or plasma. All are approximately equivalent in the type of specificity obtained, atomic line or diatomic band emission spectra are observed. These detectors have the advantage of providing for readily verifiable positive signals, i.e. emission lines of the analyte. Inductively coupled plasma sources have excellent stability and multielement detection capability. Limits of detection are in the ppb range [18]. Direct current discharges in helium carrier gas have been used to determine inorganic and organometallic As, Ge, and Sn compounds after separation of hydrides or methyl hydrides in a U-trap [19]. Limits of detection are near 1 ng per sample providing a 0.01 ppb concentration limit of detection for 100 ml samples. The method is under study for use in detection of Pb, Bi , Se and Te. D.C. discharges have also been used in the speciation of mercury in air [20]. A microwave stimulated plasma used with a GC system has been applied to the analysis of arsenic and methylarsenic compounds after reduction to the arsines [21]. Limits of detection are near 0.02 ng per sample with concentration limit of detection in the ppb range. Atomic fluorescence methods have limits of detection near 0.1 ng per sample. 7. Conclusions The great variety of detectors available for analyzing ng amounts of analyte provides a choice of combination methods for environmental analyses. It appears as if future effort is needed largely in developing selective chemistry for separations or preconcentration. Methods have already been developed for the methyl forms of those elements which are biometh- ylated in the environment. Success here is attributable to separation chemistry. 457 The next problem to attack is separation and/or determination of the oxidation states of transition metals of interest. This will likely be done with some combination of specific complexation followed by HPLC separation and elemental detection. More difficult tasks will be detection of labile metal ion complexes and the analysis of sediments. References 1] Craig, N. L. , Harker, A. B. , and Novokov, T., Atmos. Environ., 8, 15 (1974). 2] Novakov, T. , Joint Conference on Sensing Environmental Pollutants (Proc), 2nd conf . , 197, 15A (1973). 3] Czuha, M., Jr., and Riggs, W. M. , Anal. Chem. 47, 1836 (1975). 4] Elder, J. F., Perry, S. K. , and Brady, F. P., Environ. Soi. Technol. , 9_, 1039 (1975). 5] Dulka, J. J. and Risby, T. H. , Anal. Chem. 48, 640A (1976). 6] Boltz, D. F. and Mellon, M. G., Anal. Chem. 48, 216 (1976). 7] Walton, H. F. , Anal. Chem. 48, 52R (1976). 8] Jupille, T. H. , American Laboratory (5), 85 (1976). 9] Borch, R. F. , Anal. Chem. 47, 2437 (1975). Hefley, A. J. and Jaselkis, B., Anal. Chem. 46, 2036 (1974). Buck, P. P., Anal. Chem. 48, 23R (1976). Schultz, F. A., and Mathis, I. E., Anal. Chem. 46, 2253 (1974). Jasinski, P., Trachtenburg and Andrychuk, D., Anal. Chem. 46_, 364 (1974). Andrew, R. W. 
, Proceedings of Workshop on Toxicity to Biota of Metal Forms in Natural Waters, Great Lakes Research Board, Ch. 6 (1975).
[15] Miller, B. and Bruckenstein, S., Anal. Chem. 46, 2026 (1974).
[16] Szakasits, J. J. and Robinson, R. E., Anal. Chem. 46, 1648 (1974).
[17] McLafferty, F. W., Knutti, R., Venkataraghavan, R., Arpino, P. J. and Dawkins, B. G., Anal. Chem. 47, 1503 (1975).
[18] Fassel, V. A. and Kniseley, R. N., Anal. Chem. 46, 1110A (1974).
[19] Braman, R. S. and Foreback, C. C., Science, 182, 1247 (1973).
[20] Braman, R. S. and Johnson, D. L., Environ. Sci. Technol., 8, 996 (1974).
[21] Talmi, Y. and Bostick, Anal. Chem. 47, 2145 (1975).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ANALYTICAL TECHNIQUES FOR THE STUDY OF THE DISTRIBUTION AND SPECIATION OF HEAVY METALS IN AQUATIC SYSTEMS

H. J. Tobschall, N. Laskowski¹ and K. Kritsotakis
Department of Geosciences
University of Mainz
Saarstrasse 21, D-6500 Mainz
Federal Republic of Germany

¹Present address: Department of Geochemistry, University of Cape Town, Rondebosch, Cape, South Africa.
²Figures in brackets indicate the literature references at the end of this paper.

1. Introduction

The awareness of the hazards deriving from elevated concentrations of heavy metals in aquatic systems has increased the activity in the development of trace analytical techniques. However, knowledge of the total concentration of a trace metal in a particular aquatic compartment is not sufficient to permit an accurate interpretation of both its biological effects and its geochemical reactions. The chemical behavior of a metal in the aquatic environment depends on its forms: those in solution and those present in colloidal and particulate phases. For example, the toxicity of copper to some marine organisms is controlled by the formation of copper organic complexes [1]². Speciation influences the participation of an element in geochemical processes such as precipitation-dissolution and adsorption-desorption. Obviously, an understanding of these processes is essential to determining the pathways by which heavy metals are transported through the aquatic media. Additionally, only by characterizing trace metal species and the reactions by which they are transformed can we hope to understand the changes in biological and geochemical processes which are initiated by man's activity. The kind of data needed includes the type and number of natural and synthetic organic compounds, their chemical stability and their capacity to react with metals. Furthermore, we have to improve our understanding of interfacial phenomena and apply our knowledge of the competitive equilibria in aqueous solutions to interfacial regions (particularly to the sediment-water interface) if we are eventually to be able to quantify the processes and mechanisms whereby heavy metals are released, transported and accumulated in both natural and polluted aquatic systems.

The objective of this paper is to consider a series of selected analytical tools which might be capable of providing data closing the information gap outlined above. The analytical techniques are illustrated by discussing our study on the distribution of Ni, Cu, Zn, Ag, Cd, and Hg in sediments and waters of a highly polluted cut-off channel of the Rhine River near Mainz [5,6].

2. Discussion

Recent work to identify chemical species of metals has centered around two basic concepts.
One is to elaborate techniques which can distinguish the different chemical forms present in a water sample; the other is to develop chemical models of the aquatic environment based on the general chemistry of the relevant elements.

Techniques of the first concept lead to a chemical speciation which is operationally defined. They include analytical measurements following various types of physical or chemical separation such as filtration, centrifugation, dialysis, chromatography, and solvent extraction. Furthermore, measurements of the response of a particular species to various chemical perturbations, such as oxidation of dissolved organic constituents and pH shifts, are applied. At present, anodic stripping voltammetry (ASV) has received considerable attention from water chemists because of its great sensitivity, precision and resolution. Additionally, it offers new dimensions with respect to the determination of metal species in water, since some forms (e.g., certain organic complexes) are not reducible at the mercury electrode whereas other forms are. Most known organic complex formers in natural waters are weak acid salts. Thus it is probable that metal-organic interactions occur under neutral or alkaline (i.e., under environmental) conditions, but that under strongly acid conditions the metals exist as free hydrated species. The difference in peak current under environmental conditions and under experimental acid conditions may be interpreted as representative of the amount of metal ions present in "nonlabile" organic complexes of unknown nature.

Studies on the proportion of heavy metals present in ionic forms have been performed by coupling an ion exchange technique with dialysis [2]. A major advantage of this procedure is a reduction in adsorption and contamination problems. We are using this technique to obtain information on the percentage of ionic forms of Cd, Cu, Pb, Zn, and Fe in natural waters. A very promising technique for determining the total concentrations of heavy metals in waters has been published by Kinrade and Van Loon [4]. Using two chelating agents, ammonium pyrrolidinedithiocarbamate (APDC) and diethylammonium diethyldithiocarbamate (DDDC), the metals Cd, Co, Cu, Fe, Pb, Ni, Ag, and Zn can be extracted simultaneously.

References

[1] Davey, E. W., Morgan, M. J., and Erickson, S. J., A Biological Measurement of the Copper Complexation Capacity of Seawater, Limnol. Oceanogr. 18, 993-997 (1973).
[2] Hart, B. T., Determination of the Chemical Forms of Selected Heavy Metals in Natural Waters and Wastewaters, Progress Rep. 1, A.W.R.C. Research Project 74/60, Caulfield Institute of Technology (1976).
[3] James, R. O. and Healy, T. W., Adsorption of Hydrolyzable Metal Ions at the Oxide-Water Interface, J. Colloid and Interface Science 40, 42-81 (1972).
[4] Kinrade, J. D. and Van Loon, J. C., Solvent Extraction for Use With Flame Atomic Absorption Spectrometry, Anal. Chem. 46, 1894-1898 (1974).
[5] Laskowski, N., Die Gehalte der Elemente Ni, Cu, Zn, Rb, Sr, Y, Zr, Nb, Ag, Cd und Hg in Korngrößenfraktionen der Sinkstoffe und Sedimente des Ginsheimer Altrheines. Ein Beitrag zur Geochemie eines fluviatilen Gewässers mit intensiver urban-industrieller Belastung, Diploma thesis, Department of Geosciences, University of Mainz, 130 pp., unpublished (1975).
[6] Laskowski, N., Kost, Th., Pommerenke, D., Schafer, A., and Tobschall, H. J., Abundance and Distribution of Some Heavy Metals in Recent Sediments of a Highly Polluted Limnic-Fluviatile Ecosystem near Mainz, West Germany, in Nriagu, J. O. (Ed.), Environmental Biogeochemistry 2, 587-595, Ann Arbor Science Publishers (1976).
[7] O'Connor, T. P. and Kester, D. R., Adsorption of Copper and Cobalt from Fresh and Marine Systems, Geochim. Cosmochim. Acta 39, 1531-1543 (1975).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

INORGANIC SPECIATION OF COPPER IN ESTUARINE ENVIRONMENTS¹

David C. Burrell and Menq-Lein Lee
Institute of Marine Science
University of Alaska
Fairbanks, Alaska 99701, USA

¹Contribution no. 292 from the Institute of Marine Science, University of Alaska, supported in part by U.S. Energy Research and Development Administration Contract No. AT(45-1)-2229.
²Figures in brackets indicate the literature references at the end of this paper.

1. Introduction

This report is primarily concerned with some initial work on the nature of the inorganic complexes of copper present in seawater and also on the partition of this element between suspended sediment and water under estuarine conditions. The latter work relates to Alaskan fjord-estuaries, which offer ideal environments for the study of the nature and behavior of heavy metals under predominantly inorganic marine conditions. Anthropogenic contamination is absent, natural dissolved and particulate organic contents are very low at most times of the year, and very large amounts of glacially derived silicate particulate sediment accompany the fresh water inflow during the summer months.

2. Discussion

It is necessary first to consider briefly the role of organic complexation in marine waters, since there has been a good deal of speculation on this topic in recent years. Interest here has, in fact, centered on copper, probably because this has been the most comprehensively studied transition element and also because copper forms high-stability complexes with a number of organic ligands. Various field and laboratory investigations have suggested that upward of 50% of the total copper in unimpacted seawater may be present in this form. Identification has largely been attempted via separation and analysis of "organic fractions" defined by various solvent extraction systems, by difference between oxidized and untreated sample aliquots, or by indirect bioassay procedures. No one copper organic complex has been identified to date, however.

It is believed that the various seawater equilibrium models incorporating organic ligands which have been published to date are also largely unrealistic. Total dissolved organic carbon contents are frequently overestimated, for example. But the major errors are introduced both because of the assumptions inherent in attempting to project ideal conditions to the natural environment, and through the choice of exotic organic ligands. The latter is a consequence of the general paucity of thermodynamic data for naturally occurring organic compounds in saline solution. A complexometric titration of Alaskan fjord water has given a "complexation capacity" for copper of 2 × 10⁻⁷ M. This would approximate to a dissolved organic carbon content in the range 3 × 10⁻³ mg C/l, which is undetectable by conventional techniques. This isolated experiment is probably unrepresentative and does not per se preclude complexation; however, we consider [1]² that the current evidence for a major role for organically complexed metals in "average" seawater is unconvincing.
There is considerable controversy at the present time regarding sorption-desorption reactions of heavy metals at the fresh-marine junction. This cannot be considered in any detail here, but it is instructive to consider the definition of the "particulate" and "soluble" fractions. The generally used convention calls for partition using a 0.4 μm membrane filter, an operational convenience without much scientific basis. It was initially considered that this would be a reasonable working definition for use in these fjords, since particle flocs in excess of 1 mm are produced when the salinity exceeds 4 to 5 ppt. A marked correlation between suspended sediment load and "soluble" trace metal concentrations has long been noted, however, and we have found [2] this relationship to hold when the particulate material is fractionated into a series of size divisions down to 0.1 μm. It is likely that the filtering action regenerates some fine-grained, primary-sized silicate material which is subsequently analyzed as part of the soluble fraction. There is also increasing evidence that some stabilized colloidal-size material may not be flocculated following injection into the higher ionic strength medium, so that a wide size spectrum of particulate metal species may coexist in seawater. We have no evidence for the release of sorbed copper from suspended sediment following mixing with seawater but, at the same time, it would seem an almost impossible task to define a meaningful soluble fraction.

As a generalization, it might be expected that the fraction of free metal ion should increase in passing from fresh to marine water because of the decrease in effective complex formation constants with increasing ionic strength. In fact, for many metals, including copper, the free ion concentration (as measured polarographically, for example) decreases in seawater as a direct consequence of complex ion formation with the major anions. The following ligands might be expected to be important: OH⁻, Cl⁻, CO₃²⁻ (and possibly SO₄²⁻ for some metals). The major inorganic carbon anion in seawater is HCO₃⁻, but no unequivocal bicarbonato complex with the common heavy metals has yet been identified. Since the carbonate activity exceeds that of all the trace metals in solution, this distribution is unlikely to be limiting, however. The fact of inorganic complexation in seawater has long been appreciated, but very few individual associations have been directly identified. The nature and abundance of many sets of inorganic metal complexes have been calculated, most commonly via thermodynamic equilibrium models, in the same fashion as noted above for the organo-metallic species. The same caveats apply. Table 1 is a compilation from the recent literature (in order of publication) of suggested inorganic copper species computed for matrices approximating seawater.

Table 1. Equilibrium copper species computed for seawater (% total metal in solution): percentages of Cu²⁺, CuCO₃⁰, Cu(CO₃)₂²⁻, CuCl⁺, CuCl₂⁰, CuOHCl, CuOH⁺ and Cu(OH)₂⁰ as reported by five published models (literature data) [1].
The primary reasons for this are an absolute lack of stability constant data for many of the complexes of interest, and differing approaches for computing the activity coefficients needed to relate thermodynamic constants to the saline medium. It would appear that there is an urgent need for the direct qualitative and quantita- tive identification of the major inorganic forms of the heavy metals existing in seawater. We are currently conducting such a program to obtain this speciation information for copper, and can give here preliminary data for chloro and hydroxo species. These data have been determined from pH and pCl titrations, following the oxidation peak shifts via anodic stripping voltammetry. The relevant Nernst relationship for a ligand L and peak height potential E p is: Cu 2+ + xL + Cu L^-x RT AE p = -§1 In (e x [L] x ; The experiments have been conducted in perchlorate solution to match the ionic strength of seawater and the NaCl used (0.7 M) has been purified most carefully to remove traces of other halogen ions. Employing this ionic medium convention, [L] represents the concentration of ligand not complexed with copper. Previous workers in this field have reported opera- tional difficulties due to over-lapping of the copper and mercury stripping signatures; we have effected excellent separation using thin-film, glassy carbon electrodes under closely controlled analytical conditions. Non-linear titration curves (AE p vs pCl or pOH) confirm the formation, under these conditions, of a series of hydrolysis and higher chloro species. Previous work on zinc by Bradford [3] determined that these latter complexes formed insuf- ficiently rapidly at the electrode surface to be observed. This does not appear to be the case with copper. Table 2 lists our initial values for the relevant formation constants: Table 2 Uncorrected stability constants for copper complexes (I = 0.7) log Bi log e 2 log e 4 OH 6.4 15.0 CI 3.2 6.4 8.9 It must be emphasized that these data have been obtained from uncorrected projections of titration curve tangents and, at this stage, are order-of-magnitude estimates only and subject to revision [4]. For a system of i controlling ligands, the fraction of copper present as any one species Cu L-j is given by: >1x< L 1> X i + zie ix (L i ) x 463 Using the $i values cited above for chloro and hydroxo species, together with litera- ture data (log 3 l5 3 2 = 6.8, 10.0) for carbonato complexes, it is possible to calculate the proportion of species present in seawater of pOH, pCl and pC0 3 equal to 6, 0.26 and 3.6 respectively. Thus, table 3 provides a representative estimate. Table 3 Approximate % fraction of inorganic copper species calculated for seawater (pH 8; pCl 0.26; pC0 3 3.6) Major (%) Minor (% x ~\0 h ) CuCl + 10.0 CuCl^" 98.8 Cu0H + 3.0 CuClo 1.0 CuC0° 1.6 Cu(0H)° 0.14 Cu(C0 3 )2 _ 0.06 Cu 2+ 0.01 It may be seen that some 99% of the total dissolved inorganic copper is present as CuCl^" and that this order of magnitude predominance will not be greatly affected by refinements of the stability constant data. These calculations have not, however, taken account of potential mixed ligand complexes (notably 0HC1 ) , and further experimental work is required in this area. The importance of copper chloro complexation in seawater has not generally been appreciated (see table 1). In the absence of definitive formation constant data, carbonato complexation seems to have been intuitively favored. 
The importance of copper chloro complexation in seawater has not generally been appreciated (see table 1). In the absence of definitive formation constant data, carbonato complexation seems to have been intuitively favored. A re-evaluation of the sorption characteristics of this metal would be of considerable interest (cf. the behavior of HgCl₄²⁻). Carbonato complexation may be supposed to predominate in natural fresh waters. Table 4 illustrates a computation for waters having [CO₃²⁻] of 2 × 10⁻⁶ M at pH 7:

Table 4. Approximate % fraction of inorganic copper species calculated for freshwater ([CO₃²⁻] = 2 × 10⁻⁶ M; [OH⁻] = 10⁻⁷ M)

Species         %
CuCO₃⁰         51.5
Cu(OH)₂⁰       43.4
Cu²⁺            3.1
CuOH⁺           1.2
Cu(CO₃)₂²⁻      -

Fresh waters have quite variable carbonate ion contents, and the effect of this on the potential free copper ion concentration should be noted.

References

[1] Burrell, D. C., A review of the chemical speciation of copper in seawater, Mar. Sci. Commun. (in press, 1976).
[2] Burrell, D. C., Trace metal associations in fjord-estuarine environments, Unpublished report to U.S. Energy Research and Development Administration, Institute of Marine Science, University of Alaska (1975).
[3] Bradford, W. L., The determination of a stability constant for the aqueous complex Zn(OH)₂ using anodic stripping voltammetry, Limnol. Oceanogr. 18, 757-762 (1973).
[4] Lee, M-L. and Burrell, D. C., The chloro and hydroxo speciation of copper in a saline medium, MS in preparation.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

USE OF ION-SPECIFIC ELECTRODES IN STUDYING HEAVY METAL TRANSFORMATION IN AQUATIC ECOSYSTEMS

S. Ramamoorthy and D. J. Kushner
Department of Biology
University of Ottawa
Ottawa, Ontario, Canada

1. Introduction

The mobility of trace metals in fresh surface waters and their availability for interconversions, such as methylation into toxic forms, depend on competitive physicochemical reactions with the various compartments of the ecosystem. The following chemical mechanisms partition trace metals in natural waters [1]¹:

(1) Solution as ionic species and ionic associations.
(2) Complexing or chelating with dissolved organic molecules.
(3) Adsorption on sediments.
(4) Precipitation on solids.
(5) Incorporation into biological materials.
(6) Incorporation into crystal structures of mineral grains.

The compartment comprising metal ions complexed with dissolved organic molecules, No. 2, is especially important in determining the biological interactions of these metals, including their toxicity. Organic-bound Hg compounds such as methylmercury are several orders of magnitude more neurotoxic than inorganic Hg compounds. However, the rate of methylmercury production depends on the concentration of free Hg²⁺ in the ecosystem [2,3]. Lead alkyl compounds are much more poisonous than free Pb²⁺, and biological methylation of lead was recently shown to occur under certain natural conditions [4]. The increased toxicity of these organometallic compounds may be due to their lipid solubility and hence transport across cell membranes. In contrast, Cu²⁺ and Cd²⁺ are detoxified when these ions become chelated [5-8]. Though all effects of complex formation on the biological activity of toxic metals are not understood, the above considerations show that it is very important to determine the chemical state of the toxic trace metals in surface waters. This may be more specifically illustrated by considering the possible fates of Hg²⁺ ions in water.

¹Figures in brackets indicate the literature references at the end of this paper.
2. Discussion

The forms in which metal ions are present in natural waters depend on pH, Eh, the presence of chelators, and other factors. The pH of natural waters is usually around 7, and Hg2+ and Cu2+ ions are likely to be present in hydrolyzed forms, Hg(OH)2 and Cu(OH)2, if chelators are not present. However, in the presence of chelators there will be competition between the OH groups and the chelators for the heavy metal ion. The stability constants of Hg2+-organochelator complexes, for example, are several orders of magnitude higher than those of the Hg2+-hydroxy complexes; hence, Hg2+ will preferentially bind to chelators instead of being hydrolyzed. Moreover, the pH-Eh relationship for the distribution of the various charged forms of mercury, Hg°, Hg+ and Hg2+, will be determined by the amount of Hg2+ bound to chelators. The equation

    E = 850 + 30 log (1/k)    (1)

where E = potential in mV necessary for oxidation of Hg° to Hg2+ and k = the binding constant representing the strength of binding between Hg2+ and ligand, shows that the higher the value of k, the lower will be the value of E at which other forms are converted to a form of Hg2+ able to combine with the ligand. The biological availability of Hg2+ for methylation and other reactions within the biota is determined by the magnitude of the k factor. If the organic ligands present in natural waters regulate the availability of the trace metals to the biota, knowledge of the abundance and metal binding strengths of such ligands may be essential in understanding the biological and toxicological effects of such metals. The complexing capacity of natural waters has been measured by determining the extent to which trace metals are complexed and held in bound forms. Several methods have been used to measure this capacity [9-14].

Mercury entering natural waters is largely taken up by bottom sediments, and this explains the low levels of mercury in water compared to the relatively high amounts in bed sediments of the same ecosystem. The mechanisms for release of mercury from sediments are not well known, although it is clear that sediment types play an important role in making mercury available for microbial production of methylmercury. For example, mercury bound to sulphides in sediment is not available for bacterial transformation unless preceded by an oxidation step [15,16], whereas organic-bound mercury and mercury bound on mineral grains are readily available for bacterial methylation [17]. Jernelov [18] and Langley [19] demonstrated that the rate of methylation in sediments does not correlate with the total mercury content. The correlation of heavy metal sorption-desorption characteristics of sediments with variables such as pH, time, sediment type (grain size, specific surface area, cation exchange capacity, and organic matter) and natural/synthetic leachates is important in assessing the role of the sediment compartment (No. 3) in regulating the release of these metal ions into natural waters for further transformations.

Electrodes specific for a number of different metallic cations have been developed in the last few years. We have found that these provide rapid, convenient, and sensitive ways of measuring very small amounts of free toxic metal ions. The ions so far studied (Hg2+, Pb2+, Cu2+, and Cd2+) are all important pollutants in Canadian waters.
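As a purely numerical illustration of eq (1) as reconstructed above (E = 850 + 30 log(1/k), with the ligand term taken as unity), the following short Python sketch shows how strongly the required oxidation potential falls as the Hg2+-ligand binding constant k increases; the k values are arbitrary and the form of the equation is an assumption, not the authors' exact expression.

    import math

    def oxidation_potential_mV(k):
        # Eq (1), illustrative form: E = 850 + 30 log10(1/k),
        # where k is the Hg2+-ligand binding constant.
        return 850.0 + 30.0 * math.log10(1.0 / k)

    for k in (1e2, 1e5, 1e10, 1e20):
        print(f"k = {k:.0e}   E = {oxidation_potential_mV(k):7.1f} mV")

Larger binding constants thus correspond to lower potentials at which mercury becomes available in a ligand-bound Hg2+ form, which is the point made in the text.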
These solid state electrodes can detect low levels of metal ions, with the following limits of detection: Hg2+ = 1 µg/l, Pb2+ = 20 µg/l, Cu2+ = 2 µg/l and Cd2+ = 10 µg/l; calibration plots for these cations showed a perfect Nernstian response. In an uncomplexed medium, the optimum pH for measurement of [Hg2+]free is 2-6, since Hg2+ undergoes hydrolysis at alkaline pH's. In contrast to the Hg2+ ion, the other cations hydrolyze only at pH values above 8.0 in an uncomplexed medium. The specific-ion electrodes used were previously shown to give good Nernstian responses to their respective cations in highly complexed media, to show greater sensitivity, and to be capable of detecting very low levels of M2+ ions in strongly complexed media, due to efficient buffering of these cations [20]. Electrodes can be calibrated directly in terms of concentration, provided the total ionic strengths of the metal ion solutions used for calibration and the test solutions are roughly similar. When the total ionic strength is below 10^-3 M (which is usually the case with metal ions in natural fresh waters), the difference in total ionic strength between standards and test solution can be as large as fivefold without resulting in serious errors.

The lower limit of detection for the ion-specific electrodes may be, in some cases, higher than the levels of heavy metals found in natural environments. The metal binding constant calculated under equilibrium conditions varies only with the activity of the metal ion. The activity of such a metal is equal to its concentration below 10^-3 M (the activity coefficient of each metal ion is unity below 10^-3 M). Therefore, the calculated constants, and hence the metal speciation, are applicable to concentrations below 10^-3 M without any correction. Moreover, natural waters have a constant ionic background (about 10^-4 M) despite the trace levels of heavy metal ions, thus making the extrapolation of data to trace levels valid.

The aquatic ecosystems studied ranged from simple two-component systems, such as river sediment and water, to multi-component systems such as river water containing dissolved micro- and macrosolutes (of different size fractions) and particulate matter. Studies were also carried out on heavy metal speciation in natural waters sustaining algal growth.

A. Mercury sorption and desorption characteristics of some Ottawa River sediments

Bed sediments of the Ottawa River downstream of the Gatineau River confluence are predominantly sandy, with variable organic contents, chiefly wood chips. Mercury sorption by these sands was studied at constant temperature using a mercuric-ion specific electrode, varying the added [Hg2+] and pH. Sorption rates are highest for organic-rich sands, but the variation in particle size is apparently not sufficient to reveal sorption trends related to this parameter. Sorption data were fitted to a linear form of the Langmuir equation, from which sorption maxima and mercury bonding coefficients were derived. The maxima do not vary between samples, whereas the bonding coefficient relates most closely to organic content. Mercury sorption was little affected by pH. Desorption rates are low: less than ... M Hg was leached from sediment after agitation in distilled water for 70 hours, and a similar amount in fulvic acid solution. The mercury-sediment bonding is evidently much stronger than that between fulvic acid and mercury, irrespective of the organic content of the sediment. The above studies have been extended to Pb, Cd and Cu with a variety of fresh sediments.
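The linearized Langmuir treatment referred to above can be sketched in a few lines. The data in the Python fragment below are hypothetical and the routine is an illustration of the arithmetic only, not the authors' computation: plotting C/q against C gives a straight line of slope 1/q_max and intercept 1/(b*q_max), from which the sorption maximum q_max and the bonding coefficient b follow.

    import numpy as np

    # Hypothetical equilibrium data: C = dissolved Hg2+ (ug/l), q = sorbed Hg (ug/g).
    C = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
    q = np.array([0.9, 1.9, 3.1, 4.6, 6.0])

    # Linearized Langmuir isotherm:  C/q = C/q_max + 1/(b*q_max)
    slope, intercept = np.polyfit(C, C / q, 1)
    q_max = 1.0 / slope               # sorption maximum
    b = 1.0 / (intercept * q_max)     # bonding (affinity) coefficient

    print(f"q_max = {q_max:.1f} ug/g,  b = {b:.3f} l/ug")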
Desorption of heavy metals in the presence of Cl- and NTA, and the metal displacement order, were also studied.

B. Heavy metal binding components of river water

Ion-specific electrodes were used to measure the heavy metal binding capacity of river waters near Ottawa. Binding capacity was measured in unfiltered water and in water passed through filters retaining particles (0.45 µm) and macromolecules of molecular weight (MW) 45,000, 16,000 and 1,400. In the most studied water samples, almost all the Hg2+-binding ability passed through the smallest filter. Filters of different pore sizes retained substantial fractions of the binding ability towards the other heavy metal ions. Binding strengths and conditional binding constants were calculated for each metal ion and the low MW components of the Ottawa River water. Binding in Ottawa River water was not due to HCO3- or CO3^2- ions; in the Rideau Canal, and probably in other bodies of water, such ions caused a substantial amount of binding. After complete ashing of Ottawa River water and reconstitution with deionized water, almost all the metal binding ability was lost; thus, an organic compound(s) is responsible for binding. The binding pattern towards different metal ions of fulvic acid isolated from soil was different from that of unfiltered or filtered Ottawa River water. Fulvic acid is not the sole binding component of this water. These experiments suggest a way of assessing the importance of fulvic acid and other humic substances in heavy metal binding by natural waters.

C. Heavy metal binding by algae growing in natural waters

Algae can secrete many extracellular substances, including polypeptides, polysaccharides and low molecular weight compounds. Many of these have the ability to bind heavy metal (HM) cations. In addition, the external layers of algae might be, and in some cases have been shown to be, important heavy metal binding sites. Ion-specific electrode techniques have been applied to laboratory cultures of blue-green algae and also to natural algal blooms in lakes and rivers. In the summer of 1975, Leamy Lake near Hull, Quebec showed an important bloom of blue-green algae, mainly Aphanizomenon flos-aquae and Anabaena spiroides, which persisted eight weeks. Samples were taken at two-week intervals until the bloom disappeared. Heavy metal binding was determined in the lake water with algae suspended in it and in fractions obtained after filtration through a 0.45 µm filter as well as through ultrafilters of different pore sizes, down to those retaining substances of molecular weight greater than 500. The different suspensions and solutions were assayed for ability to bind Hg2+, Pb2+, Cu2+, and Cd2+ ions. Chemical analyses for total organic and inorganic carbon, Cl-, phosphate and other ions were carried out on the water samples. A similar procedure was followed with samples taken above, in and below a thick bloom of the filamentous green algal species Ulothrix aequalis located in the Rideau River just above the Rideau Falls, by which this river enters the Ottawa River. The bloom of blue-green algae caused very high metal binding, approximately one hundred times that of water entering the lake. In contrast, the green algal bloom from the Rideau River seemed to contribute little to metal binding by these waters. Attempts were made to correlate the heavy metal binding with the species composition of each bloom and the chemical composition of the water.
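A binding (complexing) capacity of the kind discussed in section B is commonly extracted from an electrode titration by extrapolating the linear, post-saturation part of the free-metal response back to the abscissa. The Python sketch below is a minimal illustration of that extrapolation with hypothetical numbers; it is not the authors' data or procedure.

    import numpy as np

    # Hypothetical titration of a river-water sample with Cu2+ (all in uM):
    added = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # total Cu added
    free  = np.array([0.02, 0.05, 0.2, 0.9, 1.8, 2.8, 3.8])  # free Cu2+ at electrode

    # Fit the post-saturation branch (here, the last four points), where free
    # metal rises linearly with further additions; the x-intercept estimates
    # the binding capacity of the sample.
    slope, intercept = np.polyfit(added[3:], free[3:], 1)
    capacity = -intercept / slope
    print(f"Estimated binding capacity ~ {capacity:.1f} uM Cu")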
It is evident from the results thus far obtained that the kinds of algae causing a bloom can make a great difference in the metal-binding potential of each bloom.

References

[1] Gibbs, R. J., Science, 180, 71-73 (1973).
[2] D'Itri, F. M., The Environmental Mercury Problem (CRC Press, Ohio, 1972).
[3] Bisogni, J. J., Jr. and Lawrence, A. W., Tech. Report No. 63 (Cornell Univ., Ithaca, NY, 1973).
[4] Wong, P. T. S., Chau, Y. K. and Luxon, P. L., Nature 253, 263-264 (1975).
[5] Stiff, M. J., Wat. Res. 5, 585-599 (1971).
[6] Water Pollut. Res., Dept. Environ., London, England, Her Majesty's Stationery Office, p. 36-41.
[7] Fleischer, M., Sarofim, A. F., Fassett, D. W., Hammond, P., Shacklette, H. T., Nisbet, I. C. T., and Epstein, S., Environ. Health Perspect., 253-323 (May, 1974).
[8] Friberg, L., Piscator, M., Nordberg, G., and Kjellstrom, T., Cadmium in the Environment, 2nd Ed. (CRC Press, Chemical Rubber Co., Ohio, 1974).
[9] Allen, H. E., Matson, W. R., and Mancy, K. H., J. Water Pollut. Control Fed., 42, 573-581 (1970).
[10] Kunkel, R., and Manahan, S. E., Anal. Chem. 45, 1465-1468 (1973).
[11] Manahan, S. E., and Jones, D. R., Anal. Lett. 6, 745-753 (1973).
[12] Davey, E. W., Morgan, W. M. J., and Erickson, S. J., Limnol. Oceanogr. 18, 993-997 (1973).
[13] Chau, Y. K., and Lum-Shue-Chan, K., Wat. Res. 8, 383-388 (1974).
[14] Chau, Y. K., Gachter, R., and Lum-Shue-Chan, K., J. Fish. Res. Bd. Can. 31, 1515-1519 (1974).
[15] Jernelov, A., Conversion of mercury compounds, Chemical Fallout, pp. 69-73 (Charles C. Thomas, Springfield, Ill., 1969).
[16] Gillespie, D. C., and Scott, D. P., J. Fish. Res. Board Can. 28, pp. 1807-1808 (1971).
[17] Wood, J. M., Environmental Pollution by Mercury, Advances in Environmental Science and Technology, Volume 2, pp. 39-56 (R. L. Metcalf and J. N. Pitts, Eds., Wiley-Interscience, New York, 1971).
[18] Jernelov, A., Address to the Conference on Mercury in the Environment, Ann Arbor, Mich. (1972).
[19] Langley, D. G., Mercury Methylation, Am. Chem. Soc. 162nd Nat. Meeting, Div. of Water, Air and Waste Chem., 11, pp. 184-186 (1971).
[20] Orion Ionalyzer Instruction Manuals for Specific-Ion Electrodes: 1974, IM 94-53/4701; 1968, IM 94-29/869; 1968, IM 94-48/071; 1972, IM 94-82/276.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ELECTROCHEMICAL STUDIES OF THE METHYLMERCURY CATION

Richard A. Durst
Analytical Chemistry Division
National Bureau of Standards

F. E. Brinckman and Kenneth L. Jewett
Inorganic Materials Division
National Bureau of Standards

John E. Doody, F.S.C.1
Christian Brothers College
Memphis, Tennessee

1. Introduction

Even before the discovery of the high toxicity and the environmental impact of methylmercury, organomercury compounds had been the subject of numerous electrochemical studies [1-7]2. Aside from the general agreement [8,9] that the reduction process occurs in two one-electron steps, producing first the organomercury radical and second, metallic mercury plus the reduced organic radical, there has been little agreement on other aspects of the reduction mechanism or the observed electrochemical phenomena.
Much of the disagreement is due to the sensitivity of the reduction processes to subtle changes in pH, ionic strength, depolarizer concentration, anion effects, etc., which lead to such electrochemical irregularities as prewaves, peak "splitting", and unexplained "reduction" peaks during the anodic scan.

Our studies of the electrochemistry of methylmercury were undertaken for two purposes: to gain a better understanding of the mechanisms of methylmercury redox processes and to use this information to develop an electrochemical procedure for the selective determination of methylmercury compounds. At this point in time, our preliminary data have resulted in partial success toward both of these objectives. We have gained new insights into the complexity of the mechanisms involved and have also demonstrated the feasibility of using the first reduction step as the basis for the determination of methylmercury ions.

2. Experimental3

The cyclic voltammetry was performed using a PAR 170 Electrochemistry System with both glassy carbon and hanging mercury drop electrodes. The pulse polarography was performed using a PAR 174 Polarographic Analyzer with the dropping mercury electrode. All supporting electrolytes were purified by continuous controlled-potential electrolysis using ESA 2014P Reagent Cleaning Systems. The pH of the samples was monitored with a combination pH electrode mounted in the voltammetric cell and with digital readout on a Radiometer PHM64 Research pH meter. A view of the voltammetric instrumentation is shown in figure 1.

Figure 1. View of the electrochemical instrumentation.

Solutions of inorganic ions were prepared from analytical reagents obtained commercially. Methylmercury chloride and dimethylmercury were also commercial materials, checked for purity by infrared and nuclear magnetic resonance spectrometry prior to use. Both methylmercury nitrate [10] and methylmercury perchlorate [11] were synthesized by literature methods. Prior to preparation of their solutions, the solid materials were analyzed for residual by-product contaminations (e.g., Ag+ or Br-) and found to be free of such impurities. All methylmercury salt solutions were checked for proton-containing impurities by NMR before dilutions [12]; in general, all methylmercury reagents were estimated to be >98 percent pure.

Preparation of the organomercurials required the use of a grease-free high-vacuum line and a recirculating N2 drybox. Methylmercurials were weighed in the drybox prior to dissolution in Pyrex bottles fitted with Teflon caps previously washed with dilute HNO3. Care was taken to maintain these ca. 10^-2 M stock solutions in 0.01 M supporting electrolyte under N2 and away from fluorescent lights [13]. For each voltammetric study, a fresh aliquot of the stock solution was delivered directly into the PAR electrode vessel containing 50 ml of the supporting electrolyte under N2.

1 NBS Guest Worker, 1976-1977, NSF Faculty Research Participant.
2 Figures in brackets indicate the literature references at the end of this paper.
3 In order to specify the procedure adequately, it has been necessary to identify commercial materials and equipment. In no case does such identification imply recommendation or endorsement by the National Bureau of Standards, nor does it imply that the material or equipment is necessarily the best available for the purpose.

Initially, studies were performed using the glassy carbon electrode (GCE).
These studies gave us our first indication of some of the complexity that was to follow. In figure 2 is shown a voltammogram (two 20 mV/s cycles scanned from +0.8 to -1.8 V vs. SCE) of an approximately 20 ppm solution of methylmercury chloride (MeHgCl) in 0.01 M KNO3. The first cycle shows an irreversible reduction of MeHg+ to the radical MeHg· (drawn-out peak at about -0.85 V) followed by the second reduction step to mercury metal at about -1.55 V. The reverse scan of both cycles shows no significant oxidation process until potentials more anodic than +0.1 V vs. SCE are applied. The mercury metal is stripped (oxidized) from the glassy carbon as shown by the doublet peaks at about +0.25 V and +0.45 V vs. SCE. During the second cathodic cycle, a new peak appears at +0.05 V, corresponding to the reduction of mercuric ions produced during the previous anodic scan. Interestingly, the reduction peak of MeHg+ shifts anodically to about -0.6 V and has the appearance of a more reversible reduction. It appears from this shift that the MeHg+ is more easily reduced at the mercury-plated glassy carbon electrode than on glassy carbon itself. There is no change in the second reduction step.

Figure 2. Cyclic voltammogram of methylmercuric chloride on the glassy carbon electrode.

Another interesting phenomenon was observed (fig. 3) when the GCE was rotated at 675 rpm and the potential scanned at 50 mV/s from +0.6 V to gradually increasing negative potentials (0.1 V steps from -0.4 V to -1.6 V). Up to scan number 7 (-1.0 V), very little reaction occurs. At scan #7, a slight mercury oxidation peak occurs at +0.1 V, indicating that a small amount of MeHg+ was reduced to the metal during the cathodic scan. At scans #8 and above, a new reduction process occurs in the region more negative than -0.5 V, but it only appears during the anodic scan! As expected, the mercury oxidation peaks increase as the cathodic scans increase in negative terminal values, thereby producing more of the reduction product (mercury). The anomalous reduction process occurring during the anodic scan is apparently produced by the reduction of some adsorbed depolarizer. It was not possible, however, to elucidate the nature of the species from the available data.

Figure 3. Cyclic voltammogram obtained with the rotating glassy carbon electrode (CH3HgCl in 0.01 M KNO3; 675 rpm; 50 mV/s).

These studies, and subsequent ones on a mercury-plated GCE, showed that some very interesting phenomena were occurring during the anodic scan, which were dependent upon pH, scan rate, methylmercury concentration, surfactants, etc., and these effects were only observable by cyclic voltammetry on a mercury electrode. Consequently, the remaining studies described below were performed on a hanging mercury drop electrode (HMDE). Of the hundreds of voltammograms run on the HMDE under a wide variety of conditions, several particularly interesting ones have been selected to illustrate the most significant results and the sensitivity of the electrochemical reactions to changes in the experimental conditions.
In one series of experiments, solutions of MeHgNO3 in 0.01 M NaClO4 were run at several scan rates (50, 100, 200, 500, 2000 mV/s), pH from 2.1 to 11.2, and cyclic potential scans from +0.1 to -1.7 V vs. SCE (except where hydrogen ion reduction interfered). The pH was adjusted by the addition of HNO3 purified by sub-boiling distillation or GAF high-purity NaOH. Since the solutions were not buffered (in order to avoid interactions, e.g., complexation, adsorption, etc., with buffer anions), the voltammograms in the neutral region may be subject to significant pH variations in the vicinity of the electrode surface during redox processes involving hydrogen ions. Future studies will examine the effect of buffering.

In general, one type of voltammogram was observed at pH values lower than about 4, another type at pH greater than 6, and very interesting but poorly reproducible effects in the intermediate pH region. In the pH <4 region, voltammograms of the type shown in figure 4 were observed. Both reduction peaks were observed (ca. -0.5 V and -1.3 V vs. SCE) during the cathodic scan. A prewave (sometimes a doublet) was usually observed before the first reduction peak. The size of the prewave was time dependent and is probably caused by slow adsorption of the cation. During the reverse scan (anodic), a reduction "hump" was sometimes observed in the region between -0.5 and -0.8 V and may be related to the production of molecular hydrogen during the cathodic scan. At higher scan rates, a small oxidation peak occurs at ca. -0.45 V, corresponding to the expected oxidation of MeHg· to the cation, MeHg+.

In the neutral-to-basic range (pH 6-12), the most significant difference is the appearance of a very pronounced reduction peak at about -1.2 V during the anodic scan (fig. 5, 50 mV/s scan). This "inverse" peak is very dependent on the methylmercury concentration, the scan rate (decreasing with increasing scan rate, as shown in fig. 5), and surfactants (it disappears upon addition of Triton X-100), and it only appears when the second reduction step has occurred. The height of this peak is poorly reproducible even under virtually identical experimental conditions, which further complicates its interpretation. The only other differences between the basic and acidic cyclic voltammograms are the disappearance of the "hump" and a shift in the potential of the first reduction peak to more negative values as the pH is increased. This shift is consistent with the formation of hydroxyl complexes with the methylmercury cation.

One other experimental parameter which has an enormous effect on the cyclic voltammograms is the methylmercury concentration. Above 10^-4 M, there appears to be much more interaction between the reactants, products and electrode surface, which produces considerable "fine structure" in the form of peak doublets and higher-order multiplets, especially at potentials more negative than -1.0 V vs. SCE. In the intermediate pH range, there is a considerable amount of peak formation, disappearance, shifting, doubling, and blending which defies a unified description. The only peak which shows reasonable stability at all pH values and most other operating conditions is the first reduction peak. However, even this peak shows a shift to more positive potentials as the concentration of methylmercury is increased.

Figure 4. Cyclic voltammogram in acidic medium on the hanging mercury drop electrode.

Figure 5. Variations in cyclic voltammograms caused by changes in scan rate.
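The pH dependence of the first reduction peak noted above is what would be expected if the peak potential tracks complexation of the cation. The short Python sketch below is illustrative only: it assumes a reversible one-electron reduction, a single labile MeHgOH complex, and an arbitrary formation constant (log beta = 9.4), and simply evaluates the textbook shift expression dE = -(RT/F) ln(1 + beta[OH-]).

    import math

    R, T, F = 8.314, 298.15, 96485.0

    def peak_shift_mV(log_beta, pOH):
        # Shift of the reduction peak (mV; negative = cathodic) for a labile
        # 1:1 hydroxo complex of a cation undergoing one-electron reduction.
        oh = 10.0 ** (-pOH)
        return -1000.0 * (R * T / F) * math.log(1.0 + 10.0 ** log_beta * oh)

    for pH in (4, 7, 10):
        print(f"pH {pH:2d}: shift ~ {peak_shift_mV(9.4, 14 - pH):7.1f} mV")

With these assumed numbers the shift grows from a few millivolts at pH 4 to a few hundred millivolts at pH 10, i.e., the peak moves cathodically with increasing pH, in the same direction as the observed behavior.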
Finally, to summarize the cyclic voltammetry results, it appears that the first reduction step is reversible on a mercury electrode and suitable for the determination of methylmercury. The second reduction peak is "usually" larger and sharper (less tailing) than the first reduction peak, indicating some adsorption character along with perhaps catalytic or other regeneration of the electroactive species. The "inverse" peak observed during the anodic scan is highly dependent on kinetic factors and appears to be an electrode surface phenomenon involving the formation of a new electroactive species or a desorption process. Some possible mechanisms are proposed in the next section.

3. Reaction Mechanisms

As indicated above, although the literature on this subject still contains conflicting statements concerning reversibility, prewaves, and adsorption effects, it is generally agreed that the reduction mechanism involves two one-electron reductions:

    CH3Hg+ + e-  ->  CH3Hg·                              (1)
    CH3Hg· + e-  ->  CH4 + Hg°                           (2)

At high scan rates, the first reduction process approaches reversibility, as indicated by the development of the corresponding anodic peak. At slow scans, the peak disappears, suggesting a slow chemical reaction of the methylmercury radical. Several possibilities exist for this following chemical step:

    2 CH3Hg·  ->  (CH3Hg)2  ->  (CH3)2Hg + Hg°           (3)
    CH3Hg· + nHg°  ->  CH3(Hg)·(n+1)  ->  (CH3)2Hg(n+2)  (4)

These steps may involve catalysis by, or chemisorption onto, the mercury electrode.

The second reduction in acidic solution is likely to be the generally accepted reduction to methane and mercury, reaction (2) above. This reaction effectively removes the intermediate products from further activity, and thus the "inverse" peak is not seen in acidic solutions except at higher methylmercury concentrations, where the H+ concentration at the electrode surface is reduced. In basic solution, the following side-reaction may be postulated:

    CH3Hg· + e-  ->  [CH3Hg]-  --(+ CH3Hg+)-->  (CH3Hg)2  (5)

and the reduction of this dimer, or its disproportionation product, dimethylmercury, may be the species reduced during the anodic scan. This latter possibility was tested by comparing the reduction of dimethylmercury to that of methylmercury. As can be seen in figure 6, the dimethylmercury reduction produces a very drawn-out peak at ca. -1.1 V vs. SCE. When methylmercury chloride is added to this solution, the usual reduction peaks are superimposed on the dimethylmercury voltammogram. Although there is a distinct difference of about 100 mV between the two peak potentials (at ca. -1.1 and -1.2 V vs. SCE), which would indicate that the "inverse" peak is not due to dimethylmercury reduction, the overall ill-defined nature of these peaks and the possibility of shifts caused by adsorption do not allow us to rule out the dimethylmercury reduction mechanism.

In summary, the overall mechanisms postulated for the observed electrochemical reactions of the methylmercury cation are shown in figure 7. This scheme is based on cyclic voltammetric studies alone. Further work is planned by other techniques to help elucidate the mechanisms.

4. Quantitative Analysis

Based on the foregoing observations, it was decided that the first reduction step provided the best electrochemical reaction for the voltammetric determination of methylmercury. Using differential pulse polarography, preliminary measurements indicate that this technique will be applicable to ppb levels of methylmercury and above.
It is unlikely, based on our present data, that this technique will provide precise data below the ppb level. Studies are in progress to improve both the sensitivity and precision of the technique, and to determine methylmercury in the presence of other organomercurials and a number of interferences, including metal ions and complexing agents.

This work was supported in part by the Environmental Protection Agency, Office of Energy, Minerals and Industry.

Figure 6. Comparison of dimethylmercury and methylmercury cation voltammograms ((CH3)2Hg in KNO3/KCl; same solution + CH3HgCl; 50 mV/s; HMDE).

Figure 7. Proposed mechanisms for the observed electrochemical behavior of the methylmercury cation.

References

[1] Kraus, C. A., J. Amer. Chem. Soc., 35, 1732 (1913).
[2] Benesch, R. and Benesch, R. E., J. Amer. Chem. Soc., 73, 3391 (1951).
[3] Benesch, R. and Benesch, R. E., J. Phys. Chem., 56, 648 (1952).
[4] O'Donnell, M. L., Schwarzkopf, A., and Kreke, C. W., J. Pharm. Sci., 52, 659 (1963).
[5] Hush, N. S., and Oldham, K. B., J. Electroanal. Chem., 6, 34 (1963).
[6] Toropova, V. F., Saikina, M. K., and Khakimov, M. G., Zh. Obshch. Khim., 37, 47 (1967).
[7] Jensen, F. R., and Rickborn, B., Electrophilic Substitution of Organomercurials, McGraw-Hill, New York, pp. 137-142 (1972).
[8] Fleet, B. and Jee, R. D., J. Electroanal. Chem., 25, 397 (1970).
[9] Heaton, R. C. and Laitinen, H. A., Anal. Chem., 46, 547 (1974).
[10] Johns, I. B., Peterson, W. D., and Hixon, R. M., J. Amer. Chem. Soc., 52, 2820 (1930).
[11] Goggin, P. L. and Woodward, L. A., Trans. Faraday Soc., 58, 1495 (1962).
[12] Hatton, J. V., Schneider, W. G., and Siebrand, W., J. Chem. Phys., 39, 1330 (1963).
[13] Jewett, K. L., Brinckman, F. E., and Bellama, J. M., ACS Symp. Ser. 18, Marine Chemistry in the Coastal Environment (T. Church, Ed.), pp. 304-318 (1975).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

AN ELEMENT-SPECIFIC TECHNIQUE FOR THE ANALYSES OF ORGANOMETALLIC COMPOUNDS

Y. K. Chau and P. T. S. Wong
Canada Centre for Inland Waters
Burlington, Ontario

1. Introduction

Biological methylation of elements to their methyl derivatives has been documented for several decades. Its environmental significance has only recently been the subject of increasing concern, since the discovery of mercury methylation in sediments. Biological transformation of several elements has been reported [1-6]1. In many cases the compounds are transformed into volatile forms. Here we describe a technique, primarily developed to study the biological transformation of elements at their environmental concentration levels. This technique utilizes the combination of gas chromatography and atomic absorption spectrometry (AAS) to provide the capability of separation and specificity for the analyses of low-boiling organometallics in gaseous and liquid forms.

1 Figures in brackets indicate the literature references at the end of this paper.

2. Instrumentation

A MicroTek Gas Chromatograph 220 and a Perkin Elmer Atomic Absorption Spectrophotometer 403 were used to build the system. Any GC instrument with temperature programming capability and any AAS with a background correction accessory can be used.
The coupling of these two instruments was made by a transfer line of stainless steel tubing, 2 mm o.d., connected to the GC column outlet at one end and to the silica furnace tube of the AAS at the other end. For alkyl derivatives of mercury, a Teflon transfer line of similar size was used to reduce the possibility of decomposition and adsorption of the mercurials on the stainless steel tubing. The transfer line was wrapped with heating tapes. In all the analyses the background corrector was used, and an integrator was used to measure peak areas.

3. The Sample Trap

The gaseous sample was collected in a sample trap made of a glass U-tube, 6 mm dia., 26 cm long, packed with 3 percent OV-1 on Chromosorb W (80-100 mesh), and immersed in a dry ice-methanol bath at ca. -70 °C. The sample was drawn through the trap and metered by a peristaltic pump at 130-150 ml/min. The trap was connected to a 4-way valve installed between the carrier gas inlet and the injection port of the GC. After warming up the trap to ca. 80-90 °C in a hot water bath, the sample was swept to the GC column by diverting the carrier gas in the 4-way valve. Samples in liquid form (organometals absorbed in solvents) can be directly injected into the GC column through the injection port.

4. The AA Furnace Tube

The furnace tube was made of silica tubing, 7 mm i.d., open at both ends. Two types of furnace design were used.

Type A: The furnace tube was 4 cm long. The GC effluent was introduced to the center of the furnace tube through a side arm. Hydrogen was added at the same point, the burning of which improved the atomization of the elements. The furnace tube was wrapped with asbestos and Chromel wire, and heated electrically to ca. 980-1000 °C, regulated by a variable transformer (fig. 1).

Figure 1. Silica furnace type A.

Type B: For certain elements, notably As and Se, whose absorption lines (194 nm and 196 nm respectively) are in the far UV region (where organic solvents and hydrocarbons absorb), a pre-combustion section made of silica tubing (3 mm o.d., 10 cm long, with a side arm for air supply) attached to the furnace was found necessary to burn off the interfering organics. Both sections of the furnace were heated to ca. 980-1000 °C (fig. 2).

Figure 2. Silica furnace type B.

5. Operation Parameters

The following is a listing of instrument parameters for the analyses of a number of alkyl metallic compounds. The GC column used was glass, 6 mm dia., 1.8 m long, packed with 3 percent OV-1 on Chromosorb W, 80-100 mesh, unless otherwise specified. Gas supply: N2, 40 psi; H2, 20 psi; air, 30 psi. Sensitivity on the AAS was set at 0.25 A full scale (4X expansion).

Pb alkyls (Me4Pb, Me3EtPb, Me2Et2Pb, MeEt3Pb, Et4Pb) [7]:
GC: carrier gas N2 70 ml/min; temperature program, initial 50 °C for 2 min, then 15 °C/min until 150 °C; injection port temp. 150 °C; transfer line 150 °C.
AAS: 217 nm line, lamp current 8 mA, spectral band width 0.7 nm, furnace type A, H2 135 ml/min.

Se alkyls (Me2Se, Me2Se2) [8]:
GC: N2 70 ml/min; temperature program, initial 40 °C for 2 min, then 15 °C/min until 120 °C; injection port 225 °C; transfer line 120 °C.
AAS: 196 nm line, lamp current 15 mA, spectral band width 2 nm, furnace B, H2 120 ml/min, air 130 ml/min.
As alkyls (MeAsH2, Me2AsH, Me3As):
GC: column 10 percent OV-1 on Chromosorb W; carrier gas N2 30 ml/min; temperatures, oven 25 °C, injection port 25 °C, transfer line 100 °C.
AAS: 193.7 nm line, lamp current 14 mA, spectral band width 0.7 nm, furnace B, H2 120 ml/min, air 130 ml/min.

Hg alkyls (MeHgCl, EtHgCl):
GC: column 5 percent DEGS on Chromosorb W; carrier gas N2 80 ml/min; temperatures, oven 145 °C, injection port 150 °C; transfer line, Teflon, 150 °C.
AAS: 253.6 nm line, lamp current 15 mA, spectral band width 0.7 nm, furnace A, H2 260 ml/min, mixed with air at 90 ml/min.

Cd alkyl (Me2Cd):
GC: N2 70 ml/min; temperatures, oven 70 °C, injection port 80 °C, transfer line 80 °C.
AAS: 228.8 nm line, lamp current 8 mA, spectral band width 0.7 nm, furnace A, H2 135 ml/min.

6. Standardization

Most of the alkyl compounds in this study were obtained from Alfa Chemicals (Beverly, Mass.). The mixed alkyls of lead were provided by Ethyl Corp. (Ferndale, Mich.). Methylarsine and dimethylarsine were prepared according to Braman et al. [9]. The method was calibrated by injecting a known amount of a standard (about 10 ng as the element) into the GC, through the injection port or through the sample trap, and measuring the peak area.

7. Results and Discussion

The retention times established with the standard compounds were used for the identification of the compounds. As an atomic absorption spectrophotometer was used as the detector, the response was related to the element; different alkyl compounds of the same element gave the same absorption response. The hydrogen introduced into the furnace tube generated a small burning jet, which was found to improve the sensitivity. Since a conventional flame was not used in the atomization, the baseline was stable and higher scale expansion could be used if desired. With 4X expansion, the detection limit was about 0.1 ng of the element. Figure 3 shows the recorder tracings of several alkyl compounds.

Figure 3. Recorder tracings of several alkyl metal compounds (Pb alkyls, 5 ng Pb; Me2Se2, 16 ng Se; Me2Se, 10 ng Se; Me2Cd, 10 ng; MeHgCl and EtHgCl, 20 ng Hg each).

This technique has been applied to studies of the methylation of lead [5] and selenium [6] in the environment.

References

[1] Wood, J. M., Science 183, 1049 (1974).
[2] Braman, R. S. and Foreback, C. C., Science 182, 1247 (1973).
[3] Huey, C. W., Brinckman, F. E., Grim, S., and Iverson, W. P., Proc. Int. Congress on Transport of Persistent Chemicals in Aquatic Ecosystems, Q. N. LaHam, Ed. (NRC, Canada, 1974), pp. II-73 to II-78.
[4] Huey, C. W., Brinckman, F. E., Iverson, W. P., and Grim, S., Abstract-Program, Int. Conf. Heavy Metals in the Environment, Toronto, 1975, C-214.
[5] Wong, P. T. S., Chau, Y. K., and Luxon, P. L., Nature (London) 253, 263 (1975).
[6] Chau, Y. K., Wong, P. T. S., Silverberg, B. A., Luxon, P. L., and Bengert, G. A., Science 192, 1130 (1976).
[7] Chau, Y. K., Wong, P. T. S., and Goulden, P. D., Anal. Chim. Acta 85, 421 (1976).
[8] Chau, Y. K., Wong, P. T. S., and Goulden, P. D., Anal. Chem. 47, 2279 (1975).
[9] Braman, R. S., Johnson, D. L., and Foreback, C. C., First NSF Trace Contaminants Conf., Oak Ridge, 359 (1973).
NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

CHROMATOGRAPHY - ATOMIC SPECTROSCOPY COMBINATIONS. APPLICATIONS TO METAL SPECIES IDENTIFICATION AND DETERMINATION

Douglas A. Segar1 and Adrianna Y. Cantillo2

1. Introduction

Determination of the concentration of a trace metal in an environmental sample is often a difficult task. The identification of the chemical species of the metal in such a sample is much more difficult, so difficult that such analyses have rarely been performed. Nevertheless, the chemical form of an element is critically important in determining the properties of the element in an ecosystem. For example, methylmercury is more toxic and fat soluble than inorganic mercury compounds, whereas the cupric ion is more toxic to algae when it is not chelated.

Most analytical techniques that are capable of identifying a metal species, particularly metal-organic compounds, require a sample with a high concentration of the species in a simple matrix, because of their lack of sensitivity and specificity. Within the last several years, relatively simple and inexpensive analytical systems have been introduced which interface chromatographic separations with the high sensitivity of atomic spectroscopy detectors. The simplest of this new family of chromatography - atomic spectroscopy techniques is the combination of gas-liquid chromatography and flame or flameless atomic absorption spectroscopy. The availability of this simple technique has enabled many laboratories to determine, simply and routinely, the concentrations in environmental samples of volatile organometallics such as the alkyl compounds of lead, selenium, arsenic, and cadmium. In only a short period, the data obtained have dramatically improved our knowledge of the occurrence and importance of alkylated metals in the environment.

Several other chromatography - atomic spectroscopy combinations have been reported in the literature. The most promising of these is perhaps the combination of a high-pressure liquid chromatograph with a flameless atomic absorption spectrophotometer. The principal advantage of this combination compared to the gas chromatography systems is its potential for analysis of the nonvolatile metal-organic compounds. Very little is known about the occurrence, transformations, and effects of these compounds in the environment despite their much greater abundance than volatile metal species.

2. Discussion

The flameless atomic absorption high-pressure liquid chromatograph can be used in two modes. In one mode, the eluant flow through the chromatographic column is continuous and the flameless atomic absorption detector samples a fraction of the eluant stream every several minutes. In a second mode, the flow of the high-pressure liquid chromatograph is either stopped between injections to the flameless atomizer or flows at a rate such that virtually all of the column effluent is passed into the atomizer. The first mode is useful where it is desired to save a portion of the column effluent fractions for further characterization of the eluted species. The second mode is used when the quantity of material separated is limited.

1 National Oceanic and Atmospheric Administration, National Ocean Survey/Office of Marine Technology, Engineering Development Laboratory, 6001 Executive Boulevard, Rockville, Maryland 20852, USA.
2 National Oceanic and Atmospheric Administration, Atlantic Oceanographic and Meteorological Laboratories, 15 Rickenbacker Causeway, Miami, Florida 33149, USA.

Preliminary investigations of the copper compounds in coastal sea waters have been carried out using gel filtration on molecular exclusion gels (Sephadex) and microporous glass beads. Several different fractions of the metals are eluted from the columns. However, adsorption (particularly with the Sephadex) and reactions apparently caused by ionic strength, composition, and pH changes during passage through the column have made interpretation of the data difficult. It has not been possible to obtain trace metal- and organic-free sea water for use as an eluant. Elution has, therefore, been carried out using dilute acid solutions, and the high molecular weight complexes, which proceed down the column faster than the ionic salts, experience a different physicochemical environment. As the complexes pass ahead of the ionic form with which they are in equilibrium, they will dissociate. The net effect of the separation, if adsorption does not occur, is that nondissociable excluded metal-organic compounds pass through the column with the void volume or the volume characteristic of their molecular weight. Excluded dissociable metal-organic compounds cause the peak at the salt volume to be broadened and shifted forward (see fig. 1). The degree of broadening and shift is a complex function of the stability constant of the complex, its rate of dissociation, the stoichiometry, and the characteristics of the column, including the flow rate.

In general, the preliminary studies have shown that both dissociable and nondissociable compounds of copper are present in the sea water samples (New York Bight) which had been filtered and acidified to a pH of about 1. An aqueous extract of a marine sediment sample from the same area showed predominantly nondissociable excluded compounds. It should be emphasized that the compounds that are eluted in the column void volume need not necessarily be metal-organic but could equally be colloidal and inorganic in nature.

Figure 1. Metal complex separations from sea water by microporous glass bead molecular exclusion chromatography (concentration in eluant vs. elution volume, showing the void volume and salt volume and the positions of totally excluded metal, nondissociable excluded compounds, dissociable excluded compounds, and inorganic salts).

Successive additions of 40 and 200 ppb of ionic copper to the sea water sample led to successive shifts of the broad dissociable-complex peak towards the salt volume. However, the bulk of the added copper was still eluted ahead of the salt, indicating that the excess complexing capacity of this water sample was high.

3. Conclusion

The potential applications of the flameless atomic absorption high-pressure liquid chromatograph are diverse. Many column solid phase materials and eluants, sample types, and separation conditions are possible. However, the applications of molecular exclusion chromatography appear to be possibly the most exciting. Several different experiments are possible which offer the promise of quantitative characterization of the stability constants and kinetics of dissociation of metal complexes (with naturally-occurring organics) and of the complexation capacity of natural waters. This information is vital to the understanding of trace metal toxicity and biological uptake in natural waters. One of the possible experiments can be carried out as follows.
The natural water sample is passed continuously through a column of molecular exclusion support (microporous glass beads) until the metal concentration in the effluent is equal to the concentration at the input. A spike of the water sample with additions of either metal or a chelating agent is then passed onto the column, and elution is continued with the unspiked natural water sample. The position and magnitude of the peak of higher or lower metal concentration in the effluent may then be related to the complexation processes occurring in the natural water sample. This approach is similar to the method of stability constant determination for metal-organic complexes proposed by Hummel and Dreyer3. The flameless atomic absorption high-pressure liquid chromatograph will permit such experiments to be carried out in natural waters with concentrations of metals and organic complexing agents within the natural range. More important, however, is the ability to investigate the response of organic matter to metal additions (and vice versa) in natural water samples without first being required to separate and identify the organic compounds involved.

3 Hummel and Dreyer, Biochim. Biophys. Acta 63, 530 (1962).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

CHANGES IN THE CHEMICAL SPECIATION OF ARSENIC FOLLOWING INGESTION BY MAN

Eric A. Crecelius
Battelle-Northwest
Sequim, Washington 98382, USA

1. Introduction

Of the chemical forms of arsenic to which man is normally exposed, trivalent arsenicals are the more toxic. Little information is available on the changes in the chemical forms of arsenic that occur within the body. However, reviews and summaries on the biochemistry of arsenic in man indicate the following: 1) arsenic which enters the blood stream is excreted mainly in the urine; 2) arsenic has a biological half-life of 30-60 hours; and 3) arsenic is excreted in urine in several forms, including arsenite (As+3), arsenate (As+5), methylarsonic acid (MAA), dimethylarsinic acid (DMAA), and other organically bound arsenic compounds. Other information on the behavior of arsenic in the human body that is either contradictory or unsupported includes such suggestions as: 1) As+3 accumulates in the body; 2) As+3 is oxidized to As+5 in the body and then excreted; and 3) As+3 and/or As+5 are methylated in the body. Because As+3 is much more toxic than As+5, MAA and DMAA, processes that convert As+3 to other forms may be part of the body's normal protective response of detoxification.

The purpose of this study was to determine in a semiquantitative manner the chemical species and excretion rates in urine following ingestion of known arsenic species. The results were expected to provide insight into the possible arsenic reactions and their rates within the human body.

2. Experimental

Three different chemical forms of arsenic were ingested: 1) As+3-rich wine; 2) As+5-rich drinking water; and 3) crab meat that contained an unidentified organo-arsenic compound. For several days after the ingestion of the arsenic-containing material, urine samples were analyzed for As+3, As+5, MAA and DMAA. The species of arsenic were selectively reduced to volatile arsenic compounds and then detected using an emission spectrometer equipped with a helium plasma excitation source [1].

1 Figures in brackets indicate the literature references at the end of this paper.
3. Results

In the As+3-rich wine ingestion experiments, within 5-10 hours after ingestion the urinary levels of As+3, As+5, MAA and DMAA had each increased by about a factor of five. The levels of As+3 and As+5 rapidly decreased and were near normal 20 hours after ingestion. However, the MAA and DMAA levels remained elevated, reaching their maximum levels about 40 hours after ingestion, then gradually approached normal levels 85 hours after ingestion. Approximately 80 percent of the ingested arsenic was excreted in the urine within 61 hours. The major species of arsenic excreted was DMAA, which accounted for about 50 percent of the 63 µg of arsenic ingested. Arsenite and As+5 each accounted for 8 percent and MAA for 14 percent. Several urine samples were analyzed for organically bound arsenic (non-reducible arsenic) by first digesting with hot acids and then analyzing for As+5. This procedure indicated that organically bound arsenic accounted for an insignificant amount of the arsenic in these samples.

In the As+5-rich water ingestion experiment, the level of urinary As+3 remained at the normal level of 1 to 2 ppb; however, As+5 did show a marked increase during the first 8 hours after ingestion. The greatest arsenic excretion occurred 10 to 30 hours after ingestion, as the DMAA form. Both the As+3 and As+5 ingestion experiments indicate that inorganic arsenic can be excreted from the body, but the majority is methylated in the body and excreted as DMAA.

In the crab ingestion experiment, 340 grams wet weight of canned Dungeness crab was eaten. During the following three days, no elevated level of As+3, As+5, MAA or DMAA was detected in urine. However, when urine samples (collected 10-20 hours after ingestion) were treated with hot 2 N NaOH, high levels of DMAA were detected (200-300 ppb). The presence of high concentrations of DMAA only after NaOH treatment suggests that the arsenic is in an organic compound that cannot be broken down to inorganic arsenic, MAA or DMAA in the human body. Arsenic in crab muscle is apparently unavailable to humans because of its chemical form. The small amount of data available indicates that other marine shellfish and fish contain this same arsenic compound, which is not broken down by cooking, mild acids or human digestion.

Children living near the Tacoma, Washington copper smelter have been shown to often have high urinary and blood arsenic levels. The chemical species of arsenic in urine samples from these children showed a pattern of elevated levels of both As+3 and DMAA, indicating ingestion and/or inhalation of As+3 and excretion of DMAA.

References

[1] Braman, R. S. and Foreback, C. C., Science 182, 1247 (1973).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THE DETERMINATION OF LEAD IN AQUEOUS SOLUTIONS BY THE DELVES CUP TECHNIQUE AND FLAMELESS ATOMIC ABSORPTION SPECTROMETRY

Haleem J. Issaq
NCI Frederick Cancer Research Center
Frederick, Maryland 21701, USA

1. Introduction

The use of the Delves cup technique for the determination of lead in blood by atomic absorption spectrometry has been previously reported [1].1 The method was elaborated by Fernandez and Kahn [2], and modified by other investigators [3,4,5].
Although the Delves cup technique represents a reliable and rapid microsampling method for the determination of lead in blood by atomic absorption spectrometry, it has failed to give satisfactory results for the determination of lead in aqueous solutions [1,4,5] and in urine [4]. The reasons given for this failure are varied and sometimes contradictory. Delves [1] showed that the lead absorption signal in aqueous solutions is less than that in whole blood. He attributed this to the occlusion of the analyte in a matrix that may be more or less volatile than the analyte itself. Another explanation was given by Olson and Jatlow [4], who attributed the loss of signal in aqueous solutions to the penetration of the aqueous standards into the cup surface, which leads to broad absorption peaks. Another explanation [5] suggested that the lack of signal in aqueous solutions is due to the capillary migration of the solution over the edge of the cup, thus leading to low and imprecise signals. This explanation has been accepted by other workers in the field [6,7]. The purpose of this paper is to offer an experimentally based explanation of the decreased lead absorption signal when aqueous solutions containing lead are analyzed.

2. Experimental

Apparatus: A Perkin-Elmer Model 403 atomic absorption spectrophotometer with a deuterium background corrector, a strip chart recorder and a lead hollow cathode lamp (Westinghouse) was used. The Perkin-Elmer Delves Microsampling System and nickel cups were used without any modifications. The 283.3 nm line was employed for the detection and measurement of the Pb signal.

Reagents: All reagents, unless otherwise stated, were of analytical grade. Deionized water was used in all sample preparations. The stock lead solution (1000 ppm certified atomic absorption standard) and activated charcoal were obtained from Fisher Scientific. Eppendorf microliter pipets with disposable plastic tips were used for sample handling.

Procedure: The bottoms of the cups were covered with activated charcoal, then introduced into the flame until no signal was observed. Following this conditioning, 10 µl of a 0.4 µg/ml aqueous lead solution were pipetted into the cups, which were then dried at 140 °C on a hot plate. The cups were removed and allowed to cool to room temperature, after which 20 µl of hydrogen peroxide were added. The cups were re-dried, and the samples were then analyzed and the signals recorded. Care was exerted in handling the cups so that no charcoal was spilled. The usual procedure for lead determination in blood was then followed [2].

1 Figures in brackets indicate the literature references at the end of this paper.

3. Results and Discussion

A comparison of lead absorption signals, with and without activated charcoal in the cups, is given in figure 1. The results show that the signal from cups containing charcoal is higher than that from cups without charcoal (CWC). Furthermore, the signal in CWC is broader and less uniform than that in the charcoaled cups. It has been suggested [4] that the broad peak in the CWC case is due to penetration of the aqueous standard into the cup surface. It is a known fact that charcoal surfaces are highly absorbent, yet the signal in the charcoaled cups is of narrow peak width and more intense than that in CWC. This suggests that penetration of the sample into the cup surface may not account for the decreased sensitivity. Barthel et al. [5], in contrast, argued that decreased signals occur as a result of capillary creepage of the sample from the cup.
Since water has a surface tension of 71.97 dynes/cm at 25 °C [8], it is difficult to conceive that water would creep up the cup. In addition, when blood samples are analyzed for lead, hydrogen peroxide is added to the dried blood in the cup, a froth is generated which fills the cup to the rim, and a considerable residue is left on the walls of the cup after drying. It is thus clear, from the observations reported here, that Pb losses due to creeping or penetration phenomena do not account for diminished Pb signals when aqueous solutions are analyzed.

Figure 1. Comparison of aqueous lead signals (4 ng) without charcoal (a) and with charcoal (b).

The lead signal in blood samples is higher than that for an equivalent concentration in aqueous solution clearly because of a more efficient mechanism of generating free lead atoms. It is our belief that the carbon left in the cup after incomplete oxidation with hydrogen peroxide (clearly indicated by the smoke peak that precedes the signal peak) offers a more favorable reducing atmosphere for the generation of lead atoms in the flame. Figure 1 supports this hypothesis: the lead signal in a charcoaled cup is more intense than in the CWC case. It had been presumed [4] that coating a cup with albumin prevents the penetration of the solution into the cup. Albumin (a good source of carbon) affords the same vital characteristic as blood and charcoal, a favorable reducing atmosphere in the flame. It was shown [2] that the use of a dilute bovine albumin solution (2 percent w/v) as the diluent for lead standards produced no change in sensitivity when the graphite furnace was used. This is understandable, since the graphite tube itself offers a very favorable reducing atmosphere for lead ions. It is suggested here that if charcoal is employed, the signal of lead in aqueous solutions is equivalent to that in blood having the same lead concentration. This is shown in figure 2.

Figure 2. Comparison of lead signals (4 ng) in aqueous solutions added to charcoal (a) and to blood (b).

The graphite tube atomizer can be successfully used for the analysis of lead in aqueous solutions, blood and urine. However, each sample must be treated individually, i.e., dried, charred and atomized, whereas with the Delves cup 20 samples can be treated with hydrogen peroxide at the same time and then atomized individually. While the Delves technique is faster, the tube is more reproducible and more sensitive. The procedure for the analysis of clinical or biological samples by the tube is as follows: 20 µl of hydrogen peroxide are pipetted into the tube followed by 10 µl of the sample to be analyzed, heated for 40 s at 70 °C, charred for 40 s at 400 °C, then atomized for 5 s at 2100 °C. A caution is offered here: the drying temperature should be below 110 °C to prevent sputtering of the sample and to prevent the H2O2 from drying too fast. Sputtering leads to irreproducible results, and fast drying prevents complete oxidation of the sample. Addition of the sample to the hydrogen peroxide is preferred to adding the H2O2 to the sample, because the blood (for example) is then pipetted into a pool of H2O2 which oxidizes it all, whereas if H2O2 is added to the blood, the bottom of the blood droplet faces the graphite and might not digest completely. When analyzing for lead in aqueous solutions, the hydrogen peroxide is not needed.
References

[1] Delves, H. T., Analyst, 95, 431 (1970).
[2] Fernandez, F. J., and Kahn, H. L., At. Absorption Newsletter, 10, 1 (1971).
[3] Ediger, R. D., and Coleman, R. L., At. Absorption Newsletter, 11, 33 (1972).
[4] Olson, E. D., and Jatlow, P. I., Clin. Chem., 18, 1312 (1972).
[5] Barthel, W. F., Smreck, A. L., Angel, G. P., Liddle, J. A., Landrigan, P. J., Gehlbach, S. H., and Chisolm, J. J., JAOAC, 56, 1252 (1973).
[6] Ediger, R. D., Perkin-Elmer Corp., Norwalk, CT, private communication (1974).
[7] Beaty, R., Perkin-Elmer Corp., Gaithersburg, MD, private communication (1974).
[8] Handbook of Chemistry and Physics, 50th Edition, 1969-70, pp. 5-30.
[9] Fernandez, F. J., Atomic Absorption Application Study No. 512, Perkin-Elmer Corp., Norwalk, CT.
[10] Issaq, H. J., and Zielinski, W. L., Jr., Anal. Chem., 46, 1328 (1974).

Part X. THE STATUS OF REFERENCE MATERIALS FOR ENVIRONMENTAL MEASUREMENT

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THE STATUS OF REFERENCE MATERIALS FOR ENVIRONMENTAL ANALYSIS

John K. Taylor
Analytical Chemistry Division
National Bureau of Standards
Washington, DC 20234, USA

1. Introduction

The growth of modern analytical chemistry has generated a parallel demand for well characterized reference materials for the evaluation of analytical data and for the further development of measurement procedures. The emergence of instrumental techniques, often comparative in nature and hence completely dependent on calibration materials, has further accentuated this need. In the environmental area, regulatory measures are requiring trace analysis of air, water, and process and waste materials for exotic substances at concentration levels that would have sounded fantastic only a few years ago. Because of quality assurance requirements for these measurements, their traceability to national standards is fast becoming a legal requirement.

The present situation was anticipated, in part, as early as 1966 when the NBS program for development of Standard Reference Materials was initiated. Progress has been slow, partly because of the level of funding available, but largely because of the need to solve problems concerned with the accurate measurement of low concentrations of reactive or labile materials and with stabilizing them for the long-term storage required of Standard Reference Materials.

2. General Considerations

The ideal SRM would duplicate in all respects the samples routinely analyzed, both with respect to form and composition. Unfortunately, this is rarely achieved. Both air and water samples are complex in nature and consist of dynamic systems which are entirely unsuitable as reference materials. Accordingly, the best that can be achieved is to simulate the analytical problems, as far as practical, within the accuracy and stability requirements for reference materials. Stabilizers may need to be added in certain cases, while freeze-drying and/or radiation sterilization may be required in some situations. The fuel oils, coals, and fly-ash SRM's are exceptions to the situations described above. However, the materials supplied as SRM's may differ in detail from those undergoing analysis at any particular time.
Furthermore, the SRM provides no insight into the sampling operation which may impose major problems. Accordingly, the results obtained using SRM's need to be interpreted by analytical chemists versed in environmental measurements. 3. NBS SRM Program The National Bureau of Standards now offers 63 SRM's which are either directly or indirectly applicable to environmental analysis. Some of these have been specifically developed for evaluating measurements of ambient or emission levels mandated by air quality standards. However, most of the SRM's are more general in nature, hence applicable to a variety of analytical situations. 503 The enactment of the Occupational Safety and Health Act created additional needs for Standard Reference Materials. While some of the contaminants are the same as those speci- fied in the air quality standards, with only minor differences in the levels of interest, many substances peculiar to occupational exposure are encountered for the first time. The analytical requirements include both space monitoring and time-weighted-average exposures of workers, measured at the breathing zone. Clinical measurements of blood, sera, and urine are also required. Accordingly, an entirely new breed of SRM's is needed, in addition to extensions of existing ones. Table 1 lists the analyzed gases which are currently available as SRM's. These are binary mixtures, certified to one relative percent and stable for a period of 6 months to one year. In the case of the nitric oxide mixtures, the certified composition is further restricted to tank pressures in excess of 2.8 kPa (400 psig). Table 1 Analyzed gas mixtures SRM No. 1604a 1607 1609 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1673 1674 1675 1677 1678 1679 1680 1681 1683 1684 1685 1686 1687 2611 2612 2613 2614 2619 2620 2621 2622 2623 2624 2625 2626 Type Oxygen in nitrogen Oxygen in nitrogen Oxygen in nitrogen Methane-air Methane-air Methane-propane-ai r Sulfur dioxide-nitrogen Sulfur dioxide-nitrogen Sulfur dioxide-nitrogen Sulfur dioxide-nitrogen Propane in air Propane in air Propane in air Propane in air Propane in air Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon monoxide in nitrogen Carbon monoxide in nitrogen Carbon monoxide in nitrogen Carbon monoxide in nitrogen Carbon monoxide in nitrogen Nitric oxide in nitrogen Nitric oxide in nitrogen Nitric oxide in nitrogen Nitric oxide in nitrogen Nitric oxide in nitrogen Carbon monoxide in air Carbon monoxide in air Carbon monoxide in air Carbon monoxide in air Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Carbon dioxide in nitrogen Nominal Values 2 , 2 , 2 , CH^ , CH^ , CH4 , so 2 , so 2 , so 2 , so 2 , C3 H 8 C 3 H 8 C3 H 8 C 3 H 8 C 3 H 8 C0 2 , C0 2 , C0 2 , CO, CO, CO, CO, CO, NO, NO, NO, NO, NO, CO, CO, CO, CO, C0 2 , co 2 , co 2 , co 2 , co 2 , co 2 , co 2 , co 2 , 8' % % % 1 .5 ppm 212 ppm 20.95 Mol. % 1 ppm 10 ppm 4 ppm; C 3 H 480 ppm 950 ppm 1400 ppm 2500 ppm ,2.8 ppm ,9.5 ppm , 48 ppm , 95 ppm , 475 ppm 0.95 Mol 7.2 Mol. 14.2 Mol 9.74 ppm 47.1 ppm 94.7 ppm 484 ppm 957 ppm 50 ppm 100 ppm 250 ppm 500 ppm 1000 ppm 1 ppm 10 ppm 18 ppm 45 ppm 0.5 Mol. 1.0 Mol. 1.5 Mol. 2.0 Mol. 2.5 Mol. 3.0 Mol. % 3.5 Mol. % 4.0 Mol. % 1 ppm 504 Standard Reference Materials for ambient-level measurements of S0 2 and N0 2 are issued as permeation tubes and listed in table 2. 
The permeation rates for SO2 are certified over the temperature range of 20 to 30 °C, while those for NO2 are certified only at 25 °C. Permeation tubes can provide reliable test mixtures, but their high temperature coefficients (approximately 10 percent per degree Celsius) require accurate temperature control. Generated gas concentrations are also directly dependent on the flow rate of the diluent gas. Accordingly, the accuracy of the generated test gas will, in the final analysis, depend on the calibration of the thermometer and flow meter used and on the care with which these measurements are made; an illustrative calculation is sketched below.

Table 2. Permeation tubes

SRM No.   Type               Nominal Permeation Rate per Minute (25 °C)
1625      SO2 Tube (10 cm)   2.8 µg
1626      SO2 Tube (5 cm)    1.4 µg
1627      SO2 Tube (2 cm)    0.56 µg
1629      NO2 Device         0.5 to 1.5 µg a

a Individual rates between the limits shown.

It has not been feasible, up to the present time, to provide low concentrations of reactive substances such as sulfur dioxide and nitrogen dioxide as compressed gases, due to stability problems. The use of specially treated aluminum cylinders now appears to be a promising remedy, and this approach is presently being investigated. Gas mixtures would certainly be preferred by many of the present users of permeation tubes. However, the convenience, simplicity, and flexibility of concentrations that can be produced by permeation tubes suggest that they will continue to find considerable usage as calibration devices for the foreseeable future.
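Because the concentration delivered by a permeation device follows directly from the permeation rate, its temperature sensitivity, and the diluent flow, the calibration arithmetic is easy to set down explicitly. The Python sketch below illustrates it for an SO2 tube of the nominal 2.8 µg/min class listed in table 2. The exponential form used for the 10-percent-per-degree temperature sensitivity, the 24.45 l/mol molar volume (25 °C, 101.3 kPa) and the function names are assumptions of this illustration; in practice the certified rate and its temperature dependence should be taken from the SRM certificate.

    # Illustrative only: concentration of the test gas generated by a permeation
    # tube diluted with clean air.  Certified values should come from the SRM
    # certificate; the temperature correction below is a simple assumed form.

    MOLAR_VOLUME = 24.45   # litres per mole of an ideal gas at 25 C and 101.3 kPa
    MW_SO2 = 64.06         # g/mol

    def permeation_rate(rate_25c_ug_min, temp_c, coeff_per_deg_c=0.10):
        """Approximate permeation rate at temp_c, assuming ~10 %/C sensitivity."""
        return rate_25c_ug_min * (1.0 + coeff_per_deg_c) ** (temp_c - 25.0)

    def generated_ppm(rate_ug_min, diluent_flow_l_min, mol_wt=MW_SO2):
        """Volume concentration (ppm) of the generated gas stream."""
        conc_ug_m3 = rate_ug_min / diluent_flow_l_min * 1000.0
        return conc_ug_m3 * MOLAR_VOLUME / mol_wt / 1000.0

    # An SRM 1625-class tube (2.8 ug/min nominal) diluted with 2 l/min of air:
    rate = permeation_rate(2.8, temp_c=25.0)
    print(round(generated_ppm(rate, diluent_flow_l_min=2.0), 2))  # about 0.53 ppm

The same arithmetic makes plain why the temperature and flow measurements dominate the uncertainty: a one-degree error in tube temperature or a few percent error in the diluent flow propagates directly into the generated concentration.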
A group of SRM's classified as analyzed liquids is listed in table 3. There are four fuel oils certified for their sulfur content and another in which a number of trace elements are certified. Reference fuels, consisting of gasoline containing certified concentrations of lead, are also available. In an entirely different vein, two samples of water certified for their mercury concentrations at the ng/ml and µg/ml levels are also offered.

Table 3. Analyzed liquids

SRM No.   Type                     Nominal Values
1621      Residual fuel oil        S, 1.05 wt %
1622      Residual fuel oil        S, 2.14 wt %
1623      Residual fuel oil        S, 0.268 wt %
1624      Distillate fuel oil      S, 0.211 wt %
1634      Residual fuel oil        Trace elements
1636      Reference Fuel           Pb, 12, 21, 28 and 773 µg/g (12 vials, 3 of each)
1637      Reference Fuel           Pb, 12, 20, and 28 µg/g (12 vials, 4 of each)
1638      Reference Fuel           Pb, 773 µg/g (12 vials)
1641      Water                    Hg, 1.49 µg/ml
1642      Water                    Hg, 1.18 ng/ml
1643      Trace Metals in Water    19 trace elements

A variety of solid materials of environmental interest have also been developed and certified. Table 4 contains a listing of those SRM's presently available and includes coal and fly ash certified for sulfur and trace elements. SRM 1571 Orchard Leaves is not strictly an environmental material, but it simulates such materials and hence has found considerable use in methodology studies. Likewise, SRM 1577 Bovine Liver has peripheral interest because trace elements are certified in a biological matrix. A sample of powdered lead-based paint, SRM 1579, is of interest in connection with the pica problem.

Table 4. Solid materials

SRM No.   Type                                         Elements Certified
1571      Orchard Leaves                               Lead, mercury and 17 others
1577      Bovine Liver                                 Lead, mercury and 10 others
1579      Powdered Lead-Based Paint                    Lead, 11.87 wt %
1631      Coal, Sulfur in                              Sulfur, ash
1632      Coal, Trace Elements in                      Lead, mercury and 12 others
1633      Coal Fly Ash, Trace Elements in              Lead, mercury and 10 others
1645      River Sediment                               Matrix elements, trace elements, nutrients
1648      Urban Particulate Matter                     Matrix elements, trace elements, organic constituents
4350      River Sediment, environmental radioactivity  10 radionuclides

A series of SRM's developed for use with OSHA-related standards is given in table 5. The filter samples were developed to simulate the amount of material collected during an 8-hour work period using the personal sampler concept. The urine samples are freeze-dried materials which can be reconstituted to give samples containing normal and elevated concentrations of fluoride and also of mercury.

Table 5. Work-place atmosphere analysis SRM's

SRM No.   Type                                         Certified for
2661      Liquids on charcoal                          Benzene
2662      Liquids on charcoal                          m-Xylene
2663      Liquids on charcoal                          p-Dioxane
2664      Liquids on charcoal                          Ethylene dichloride
2665      Liquids on charcoal                          Chloroform
2666      Liquids on charcoal                          Trichloroethylene
2667      Liquids on charcoal                          Carbon tetrachloride
2671      Freeze-dried urine certified for mercury     Hg
2672      Freeze-dried urine certified for fluoride    F
2675      Beryllium on filter media                    Be
2676      Metals on filter media                       Cd, Pb, Mn, Zn
2677      Quartz on filter media                       SiO2

The foregoing tables have listed the SRM's presently available. Additional standards are under development, or the feasibility of their preparation and certification is under investigation. A special publication may be obtained from the Office of Standard Reference Materials, National Bureau of Standards, Washington, DC 20234, which lists the SRM's currently available, their cost, and ordering information [1].

4. Use of SRM's

The role of SRM's in a measurement system, both general and specific uses of SRM's, and selected fields in which they have made significant contributions are discussed in a recent NBS publication [2].

The NBS SRM's were designed to serve as primary standards, rather than as working standards or for day-to-day calibration. Their best use is in conjunction with the evaluation of methodology or in the development of new measurement techniques. Commercial suppliers could use them to check their own measurement routines and thus provide a means of traceability of working standards to NBS. No matter what the application, SRM's should never be used until the analytical system in which they are employed has been demonstrated to be in a state of quality control. Under such circumstances a limited number of measurements using SRM's can evaluate the compatibility of analytical measurements from a given laboratory with those generated elsewhere.

References

[1] Catalog of NBS Standard Reference Materials, NBS Special Publication 260, U.S. Government Printing Office, Washington, DC 20402.
[2] Cali, J. P., et al., The Role of Standard Reference Materials in Measurement Systems, NBS Monograph 148, National Bureau of Standards, Washington, DC 20234 (January 1975).

1 Figures in brackets indicate the literature references at the end of this paper.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md.
(Issued November 1977) REFERENCE TYPE SAMPLES FOR WATER/WASTE ANALYSES IN EPA John A. Winter U.S. Environmental Protection Agency Cincinnati, Ohio 45268, USA 1. Introduction The quality assurance program of the U.S. Environmental Protection Agency is focussed in three major laboratories of the Office of Research and Development. These Environmental Monitoring and Support Laboratories (EMSL), are located at Cincinnati, Ohio, Las Vegas, Nevada, and Research Triangle Park, North Carolina, and are responsible for water and waste, radiation, and air analyses, respectively. EMSL-Cincinnati 's Quality Assurance Program for water and waste analyses has four functions: 1) To provide necessary manuals and guidelines on sampling, sample preservation, analytical methodology and quality control, 2) to provide samples for the within-laboratory quality control efforts, 3) to validate EPA's analytical methods through formal studies and 4) to evaluate the performance of EPA and state laboratories and certify them as required by law. Of these four functions, the latter three require use of reference- type samples to accomplish their objectives. As shown in table 1, EPA's use of reference-type samples in direct support of its Quality Assurance Program distinguishes these samples from NBS ' use of standard reference materials to support the basic measurement system. This paper will describe the design and development of EPA's reference-type samples. Table 1 Use-comparison of reference-type samples in NBS and EPA Certification Reference Methods Standard Reference Materials Basic units ot measure Interlab Q.C. Intralab Q.C. Method validation studies Selected analytical methods National Bureau of Standards U. S. Environmental Protection Agency 2. Sample Design Because routine samples are received in a fully-diluted form, it would be ideal to provide reference samples as natural water samples which the analyst could test directly without dilution or special treatment. However, as shown in table 2, requirements for reference-type samples are difficult to satisfy with natural or full-volume samples. 509 Table 2 Sample requirements Homogeneous replicate series (precision) Exact known concentrations for multiple test parameters (accuracy) Separate measure of bias and interference Use in a variety of waters Evaporation, leakage control and stability Preparation, storage and shipment EMSL-Cincinnati concluded that whenever feasible, these needs could be best met by samples prepared as liquid concentrates using the purest-available chemicals, then sealing and preserving them in all-glass ampuls. When diluted to volume with distilled and natural water by the analyst, the samples contain exact concentrations of the desired constituents. The advantages of this sample form in meeting EPA's objectives will be discussed in the following sections. A. Precision Precision, the most widely-used characteristic of analytical performance is meaningful only when other conditions are held constant or randomized. By preparation of a sample concentrate from pure, uniform chemicals and solvents, by continuous mixing of solutions and by preparation of samples as a single batch in a single day, we have good assurance of replicate sample homogeneity which is necessary to measure between-analyst error. B. Accuracy Accuracy, another important characteristic, is measureable only when chemicals and solutes can be obtained with the high levels of purity required to prepare replicate samples having known true values for each desired property. 
With the rapid increase in use of instrumental and automated analyses, accuracy has become even more important because the mechanical and electronic character of these methods achieves repeatability and reproducibility which are greatly improved over manual methods yet with no sense of true values. Consequently, their accuracy has the most potential for error. To measure accuracy, exact amounts of constituents are weighed, dissolved and brought to volume with ultrapure water or other solvent to form the sample concentrate. Multiple parameters are present in each concentrate to simulate natural samples. Later, when the analyst dilutes the sample concentrate to volume according to instructions, parameter levels are obtained which are established as the true values. Prior to release of samples, these true values are verified by repeated analyses over time. If not verified, the samples are discarded and the problem solved before remaking. The true values are never adjusted to the analyzed values. This concept is a key factor in EPA's sample design because it permits the establishment of values independent of bias and interference. True values and accuracy would be difficult or impossible to measure with natural or full volume samples. C. Bias and interference It is necessary to distinguish the inherent bias in the method from the positive or negative interferences in water samples. Measurement of method bias requires analysis in ultrapure water, of samples with exactly known values, whereas testing for interferences 510 requires analyses in a range of natural waters. These varied test water conditions can only be satisfied, practically, through addition of an increment to distilled and natural waters. D. Variety of test waters Evaluation of methods, techniques, instruments or analysts requires testing with different waters. Use of the spike technique is the most practical approach. Each lab- oratory selects a natural water for test under his own conditions. In an interlaboratory study, each analyst selects his own natural water for spiking and the method is evaluated in as many water conditions as there are analysts, far exceeding the types of water prac- tical to prepare as full volume samples. E. Evaporation, leakage and stability Use of all-glass ampuls avoids evaporation and leakage which are common to screw-cap or friction-stopper containers. Further, after filling and sealing, many sample concen- trates can be preserved by steam sterilization, without loss or leakage. Finally, with no problem of contamination from paper, rubber, metal or plastic liners or caps the all-glass ampuls can be used with acid preservation or organic solvents. F. Ease of preparation, storage and shipment Sample concentrates in volumes of 5-50 ml are easily prepared using small, fast and relatively inexpensive, filling, sealing and labeling equipment. Ampuls are light, easily handled and stored. For example, a 20 ml ampul requires only about 1/16 of the space of a full-volume, one liter sample. Further, storage and shipping expenses, two other cost factors that have been accelerating yery rapidly, are reduced drastically. 3. Constituents in Samples The parameters prepared in the reference-type samples relate directly to EPA's respon- sibilities for analyses under the current water laws. A. P.L. 
92-500 The 1972 FWPCA Amendments Section 304 NPDES System limits discharge of some 71 pollutant parameters including: metals, solids, general organics, nitrogen, phosphorus, halogens, oil and grease, phenols, radioactivity, algicides, surfactants, chlorinated organics, and indicator bacteria. Section 307 Toxic Substances cites 65 pollutants as serious toxicants for which limits are to be set for waste discharges. These include many specific organic compounds from industry. Section 311 permits or regulates discharge of oil and hazardous substances in U.S. navigable and coastal waters. B. P.L. 92-532 marine protection, research and sanctuaries act of 1972 P.L. 92-532 marine protection, research and sanctuaries act of 1972 prevents or regu- lates dumping materials which would adversely affect human health, welfare, or marine environment. These materials include: radioactive, chemical and biological warfare compounds, persistent inert floating or suspended material, metals, organohalides and oils, organosilicones , cyanides, fluoride, chlorine, titanium dioxide wastes, petrochemical and other organics, biocides, oxygen-consuming materials and the toxic substances/hazardous materials listed in Sections 307 and 311 of P:L. 92-500. 511 C. P.L. 93-523 safe drinking water act The interim regulations set maximum permissible levels for seven metals, turbidity, nitrate-nitrogen, chlorinated hydrocarbon pesticides, herbicides and coliform bacteria. Based on program priorities, reference- type samples have been prepared for metals, demand, nutrient, mineral and physical parameters, and chlorophyll. Samples are in pre- paration for chlorinated hydrocarbon pesticides, polychlorobi phenyls, herbicides, bottom sediment, sewage sludge, suspended solids, cyanide and phenols. 4. Other Considerations in Preparation of Reference- type Samples The chemical compounds used must consider: 1. The form in which the parameter is found as a pollutant. For example, cyanide might be present as free cyanide, ferrocyanide, organic cyanide and cyanate. 2. The form must be analyzable by the test method. For example, cyanide can be measured as total cyanide, cyanide amenable to chlori nation, or cyanide measured by Roberts- Jackson method. 3. The form must be soluble and stable at the levels tested so as to be practical for preparation and under conditions of use. 4. The form must be available in a pure form for calculation of true values. Water, acetone, methanol, hexane and other organics are used as solvents for these samples. Generally, water is the solvent for inorganics and the organic solvents for specific organics. Obviously, the choice of solvent is based on the solubility of the chem- ical compounds but since samples are prepared as concentrates, the compound must be soluble enough for preparation at 100-500 times the final concentration in the dilute samples. The solvent also must not interfere with the method of analysis. This is particularly critical for specific organics which are separated and analyzed by gas chromatography. Here an organic solvent must not elute at the same rate as the solute or its peak will overlap and prevent a valid measure. The levels and number of samples respond to: 1. Measure of the range of concentrations normally tested by the method. For example, if the method is set up for 0.1-10 mg/1 , samples are prepared in this range. 2. Verification of the minimum detectable level reported for a method. 3. Applicability of the method to high concentrations found in specific wastes. 4. 
Applicability of method to limits set by discharge permits or water supply limits. 5. The number of concentrations necessary to fit the program need. The number may range from a single concentration for quality control to paired samples at the three or four levels necessary to establish equations of the line for measures of precision and accuracy. 5. Quality Control Practices in Preparation The chemical compounds used are selected from the purest available materials that are practical. For instance, if the purest aluminum is available only as a solid bar, it would be impractical for weighing small amounts and pure aluminum wire might be substituted. For other elements, a primary standard or analytical reagent grade chemical is chosen based on the availability and solubility of the salt or oxide. 512 The chemicals are dissolved in ultrapure water, water/acid solution or chromatography- grade organic solvents. The ultra-pure water is prepared by passage of distilled water through a recirculating purification unit composed of a prefilter, activated carbon, mixed- bed resin and final filter. Only redistilled reagent grade acids are used. To prevent false assumptions or calculation errors, the calculations and preparation plans are worked out independently by two people and checked by the laboratory chief. A print-out analytical balance is used to weigh chemicals. The paper tape print-out of weighings is placed directly in a laboratory notebook as a true record and to avoid trans- position of numbers. The preparation of a sample concentrate from stock solutions is critical because a number of volumetric measurements by pipet and by flask must be made as exactly as possible. A flow-chart of the preparation plan is drawn up and copies given to team members to pre- vent mix-ups in the series of dilutions. One person on the team designated as the observer, is responsible for monitoring measurements and verifying that pi pets and volumetric flasks are the proper sizes. Most importantly, the observer must verify that the measured volume of each stock solution is correct, is added in proper sequence and is placed in the proper make-up flask. Since a typical sample contains a number of chemical constituents, the sequence of addition and use of sufficient volumes of dilution water are critical, to prevent precipitation or other changes. Ultrapure water is equilibrated at 20°C overnight to assure accurate measure of solution volumes. A concentrate is made up in separate volumes in four, six, or ten liter volumetric flasks then combined in a single large volume. The prepared sample concentrate is continuously mixed as it is pumped through the filling and sealing machine via a Teflon line and glass syringe assembly into cleaned borosilicate glass ampuls. After the ampuls are filled, sealed and labelled, analyses are performed over a 90 day period to verify the stability and homogeneity of the series. Since any laboratory can exert systematic error, the prepared sample series is sent to at least two referee laboratories for analyses as unknowns. Multiple methods of analyses are used if available, but analyses must be made on the fully-diluted samples and must be performed in part by EPA's methods of analyses so that EMSL is assured the true values are obtainable. Data must agree within the expected limits. Differences must be resolved before the samples are considered ready for use. 6. 
Summary In EPA, the Quality Assurance Program for water and waste analyses requires reference- type samples for method selection, method validation, intralaboratory quality control, performance evaluation and certification functions. The samples are prepared as concentrates in sealed glass ampuls using water or an organic solvent. Aliquots are added to distilled and natural waters by the analyst and the recovery of the spike determined. Use of true values permits the measure of accuracy, bias and interference. The samples are used as knowns or unknowns and incorporated in intra- laboratory or interlaboratory evaluations as needed. The organic and inorganic chemical, biological and microbiological parameters respond to EPA's needs under the three water laws. 513 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). THE PREPARATION AND ANALYSIS OF A TRACE ELEMENTS IN WATER STANDARD REFERENCE MATERIAL J. R. Moody, H. L. Rook, P. J. Paulsen, T. C. Rains, I. L. Barnes and M. S. Epstein Analytical Chemistry Division Institute for Materials Research National Bureau of Standards Washington, DC 20234, USA 1. Introduction The determination of trace metal ions in natural waters is a difficult procedure which is complicated by more than just the low elemental concentrations being sought. The subject of valid sampling and noncontaminating storage of samples will not be examined here although their collective influence on an analysis can be easily appreciated. Because of its high sensitivity and high sample throughput it should not be surprising that most water chemists use atomic absorption spectrometry (AAS) or one of its variants such as graphite furnace atomic absorption spectrometry (GFAAS). Regardless of the method used, most analytical procedures are relative; that is, the signal developed must be compared to or calibrated by the signal developed by a standard under the same conditions. Unfortunately, experience has shown that a pure aqueous standard cannot compensate for interferences or changes in sensitivity caused by the analyte matrix. Indeed, the procedure of standard addition of an aqueous standard to a sample aliquot is seldom satisfactory. These problems are referred to in general as the matrix effect. What the water chemist needs is an artificial water reference material containing all of the trace elements of interest as well as the major matrix ions found in natural waters. By the calibration of chemical procedures and instrumentation with such a reference material, much of the uncertainty in these procedures is removed. Thus, although data from any given laboratory will not become any more precise, they will become more accurate. Meaningful comparison of interlaboratory data is a function of the accuracy of the data being compared. Any improvements in the accuracy of the data base will assist the water scientist in the better understanding of trace metal chemistry in water systems. Proposed SRM 1643 represents the best standard of this type which can be made at this time. It contains 19 trace elements of importance at levels approximating those found in natural fresh water estuaries [I] 1 . In addition it contains gold (stabilizing agent) [2] as well as the major matrix cations of Na, K, Ca, and Mg. 
Table 1 lists those elements present in proposed SRM 1643 together with their target concentrations and the corresponding natural water levels as determined by polling available Department of Interior data [1]. While the proposed SRM cannot be a perfect match to all natural water samples, it does represent the best possible approximation to a natural water sample that has been filtered and acidified.

1 Figures in brackets indicate the literature references at the end of this paper.

Table 1. Proposed composition of Standard Reference Material 1643
(concentrations in ng/g unless otherwise noted)

Element   Natural Water   Proposed SRM 1643
Au             —               10
Hg             —                1.1
Ag             2.3              3.5
Al            76               80
As            70               75
Ba            40               40
Be             0.12            20
Cd             9                8
Co            15               20
Cr            11               16
Cu            13               15
Fe            52               75
Mo            53              105
Mn            29               30
Ni            13               50
Pb            21               25
Sr           215              190
V             39               50
Zn            60               65
Se             —               12
Na            19 a             10 a
K              2 a              2 a
Ca            27 a             27 a
Mg             7 a              7 a

a Concentration, µg/g.    — Not determined.

2. Stability Studies

Two of the most important concerns in the preparation of an SRM are those of analysis and stability. Although it is obvious that an unstable standard would be useless, proving the stability of a material is not easy. A preliminary stability study was carried out with radioactive tracers to determine possible losses to container walls or interelement effects. For this study a preliminary test solution was prepared by carefully diluting known amounts of ten selected metals (As, Be, Cd, Cr, Cu, Hg, Mn, Pb, Se, and Zn). All of these elements were at or below the µg/g concentration level. The solution was stabilized with 0.5 M HNO3 and 10 ng/g of gold (to stabilize the mercury). All storage containers were thoroughly cleaned in dilute nitric and dilute hydrochloric acids. Five radioactive tracers were added to one liter of the stock solution. The tracer concentrations were low enough that the elemental concentrations in the stock solution were not affected. After a number of analyses over a period of 255 days, no losses were detected, with the possible exception of cadmium. However, the tracer studies yielded no information on potential blank problems, either from the reagents or from the containers. Fortunately, a concurrent study of a number of different container materials shed some light on potential blank problems and suggested a reasonable procedure for cleaning a large number of containers [3].

To provide additional information on stability and potential blank problems, as well as to identify potential analytical problems, the trial sample of a multiple-element standard was analyzed twice over a 17 week interval. The analytical determinations of this trial sample were performed by neutron activation analysis (NAA), by isotope dilution-spark source mass spectrometry (ID-SSMS), and by AAS. The repeat analyses showed no significant increase or loss in concentration for any element over this time interval.

3. Preparation

Concentrated solutions of each element were prepared by dissolving known amounts of spectrographically pure metal (carbonates or other salts were used for some elements). By accurate dilution of these standard solutions, a test solution containing 17 trace elements plus mercury, gold, and the four matrix elements Na, K, Ca, and Mg was made. In certain instances, most notably for Be, it was necessary to elevate the concentration over the normal value to assure a reasonable chance for analysis by two independent methods. Analysis of this sample by NAA and AAS indicated no potential problems at these concentration levels.
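The preparation described above is, in accounting terms, a chain of weight (gravimetric) dilutions from concentrated single-element standards down to ng/g levels in the final batch. The short Python sketch below shows only that bookkeeping; the stock concentration, aliquot weights, and the assumption that a 55-gallon batch weighs roughly 208 kg are invented for illustration and do not reproduce the actual NBS preparation records.

    # Hypothetical bookkeeping for a weight-based dilution chain; concentrations
    # are in ng of element per g of solution, weights in grams.

    def dilute(conc_ng_per_g, aliquot_g, final_batch_g):
        """Concentration contributed by aliquot_g of a stock to a batch of final_batch_g."""
        return conc_ng_per_g * aliquot_g / final_batch_g

    stock  = 1_000_000.0                                              # a 1000 ug/g single-element stock
    master = dilute(stock, aliquot_g=100.0, final_batch_g=1000.0)     # 100 000 ng/g master solution
    final  = dilute(master, aliquot_g=52.0, final_batch_g=208_000.0)  # 25 ng/g in the full batch

    print(master, final)   # 100000.0 25.0

Working by weight rather than by volume keeps this arithmetic exact at every step of the chain.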
The polyethylene containers for SRM 1643 were carefully cleaned with dilute acid and pure water. A single 55 gallon polyethylene drum and a polyethylene stirring paddle were cleaned in a similar manner. To prepare SRM 1643, we carefully added weighed amounts of pure water and pure HNO3 [4] to the 55 gallon drum with stirring. Three master solutions, which together contained all 24 elements, were carefully prepared by weight dilution of the original concentrated standard solutions. Aliquots of these three dilute master solutions were then added to the water and HNO3 in the 55 gallon drum with constant stirring. The resulting solution was stirred for several days; then, to ensure mixing vertically throughout the drum, aliquots were removed from a spigot at the bottom and poured into the top of the drum. Finally, after the mixing was judged to be complete, the solution was transferred into 230 clean one liter polyethylene bottles which were serially numbered. All of the dissolutions, dilutions, and other manipulations were carried out in a Class 100 clean laboratory using techniques designed to avoid contamination of the samples. Selected samples were taken for certification analyses and long range stability studies.

4. Analysis of Proposed SRM 1643

Three techniques (NAA, ID-SSMS, and AAS and related methods) are being used for most of the analytical work on proposed SRM 1643. Not all elements will be analyzed by each method, but no less than two independently determined values will be obtained for each element. It is beyond the scope of this paper to present the analytical methods of each technique; instead, comments will be reserved for a review of the analytical program. To date, the only element for which no analytical results have been obtained is Fe. For all other elements the analysis program is in various stages of completion. For some elements this program is essentially complete, while for others the program is only partially completed. For example, in table 2, preliminary results are presented for Cu, Pb, Mo, and Mn. Note that two mass spectrometric values were obtained for Pb, one by isotope dilution SSMS and the other by isotope dilution thermal emission mass spectrometry. No isotope dilution values were obtained for Mn.

Table 2. Preliminary analytical results (ng/ml)

Element   Approximate Concentration   IDMS      GFAES   GFAAS   NAA    Polarography
Cu                15                  16.4 a     16      14.2    16.6       —
Mo               105                 110        110       —     100         —
Mn                30                   —         28      27.5    30.0       —
Pb                25                  20.3 a     20.9    19.6     —        21

a IDMS by thermal emission mass spectrometry; others by SSMS.    — Not determined.

In addition, no NAA results have been obtained for Pb, whereas results have been obtained for Cu, Mo, and Mn. Nearly every element is being analyzed by graphite furnace AAS, and seven elements have been analyzed by graphite furnace AES. Furthermore, results have been obtained by polarography, and a few preliminary results have been obtained by the plasma emission spectrometer. In analyzing the preliminary data tabulated in table 2, it is necessary to recall the relative merits of each method for each element. A complete statistical review of the data has not yet been attempted since the analytical program is incomplete. However, the inter-method agreement for the data presented in table 2 is believed to be within the error limits of the respective methods. With the exception of Pb and Ba, good agreement has been obtained with respect to the approximate (theoretical) concentrations. For Pb the results are low by 20 percent but appear to be stable.
For Ba, a serious discrepancy exists between the theoretical value and the analytical results obtained thus 'far by three methods. By contrast, mercury may be slowly gaining in concen- tration due to migration of mercury vapor in the room air through the polyethylene bottle and into the solution. Thus, in summary, for most of the elements attempted, the analytical program will probably be sufficient to lead to the certification of 18 elements. For the moment the problem with Ba is unresolved and Hg is also in doubt though for different reasons. References [1] Kopp, J. F. and Kroner, R. C. , Trace Metals in Waters of the United States, A Five Year Summary of Trace Metals in Rivers and Lakes of the United States, Report of the U.S. Dept. of Interior (Oct. 1, 1962 - Sept. 30, 1967). [2] Moody, J. R., Paulsen, P. J., Rains, T. C, and Rook, H. L., The Preparation and Certification of Trace Mercury in Water Standard Reference Materials, in Proc. of 7th IMR Symposium, Accuracy in Trace Analysis, NBS Special Publ . 422, (P. D. LaFleur, Ed.) U.S. Government Printing Office, Washington, DC 20402, pp. 267-273. [3] Moody, J. R. , and Lindstrom, R. M. , Unpublished data, Publication in Progress. [4] Kuehner, E. C. , Alvarez, R. , Paulsen, P. J., and Murphy, T. J., Production and Analysis of Special High-Purity Acids Purified by Sub-Boiling Distillation, Anal. Chem., 44, 2050-2056 (1972). 518 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). THE STANDARD FINEPARTICLE Brian H. Kaye Institute for Fineparticles Research Laurentian University Sudbury, Ontario 1. Introduction The characterization of fineparticles is a topic of concern for many environmental engineers in problem areas such as smoke abatement-monitoring, pesticide application, industrial health hazards from inhaled dust and evaluation of suspended debris and other particles in water systems. In many of the theoretical discussions of the various procedures used to characterize fineparticles, in which dense smooth spheres are used as hypothetical fineparticles moving through the characterization equipment, it appears as if the charac- terization data generated by the instrument can be interpreted in terms of fineparticle magnitude parameters from physical theory without recourse to calibration procedure. In practice, however, the working scientist does not have access to all the relevant physical properties of the fineparticles to be characterized and it is necessary to calibrate the equipment using reference material. There are two main problems associated with the organization of a system to provide standard fineparticles for calibration procedures. The first problem is one of storing and handling fineparticles. By definition a fineparticle system is one in which surface forces compete with macroscopic forces to determine the behavior of the system. When a fineparticle system is stored the surface activity tends to change either through agglomeration of parti- cles, anealing of surface defects or the absorption of gases and/or moisture onto the sur- face of the system. The other problem which arises from the handling of powders is that segregation by size can occur in the handling process. In the next section physical problems associated with the organization of so-called fineparticle banks for the calibration of powder systems is discussed. 
The second difficulty involved in the provision of fine particles for calibration purposes arises from a semantic confusion in the definition of the behavior of many fine- particle systems. The art of fineparticle characterization has evolved rapidly over the last thirty years with scientists from many formal disciplines contributing to the develop- ment of the various characterization instruments described in the technical literature. One of the results of this multidisciplinary development of the subject is proliferation and confusion of terminology. One of the great needs of the subject of Fineparticle Science is the adoption of clearly structured unambiguous terminology [l] 1 . Closely associated with this problem are two other major sources of confusion in fineparticle characterization studies. These are: 1. The failure to specify information needed in a characterization study, 2. An inadequate understanding of the limitations of the interpretative hypotheses used to transform raw data obtained from a characterization study into fineparticle descriptor parameters. This latter problem is often obscured by the fact that data transformations in a fineparti- cle characterization instrument are carried out internally using dedicated electronics and descriptor parameters and statistics displayed in converted form without warning of the various assumptions implicitly incorporated into the data transformation procedures. If a figures in brackets indicate the literature references at the end of this paper. 519 "standard" fineparticle used to calibrate a fineparticle characterization instrument func- tions differently in the sensitive zone of the instrument, as compared to the operative system fineparticles to be calibrated, then unsuspected implicit data transformation proce- dures in the characterization study can lead to hopeless confusion in the use of the descrip- tor parameters to discuss operational problems. It is not possible to discuss in depth the many problems of this kind which arise in fineparticle science, we use the language of set theory and the symbolism of Venn diagrams to illustrate the problems of choosing appropriate fineparticle reference systems. 2. Ideal and Operative Fineparticle Reference Material When discussing the provision of standard fineparticles from a bank of reference material, it is useful to differentiate between two types of reference material. We will define them in this communication as ideal reference material and operative reference mate- rial . An ideal reference material is usually composed of dense smooth spherical particles, the characteristics of which can be measured directly using different characterization procedures and employing different physical measurement technology. Thus, a smooth dense glass sphere is an ideal reference material. A' glass bead can be passed through the hole of a sieve, or it can be sedimented in a viscous fluid to obtain a settling velocity. The calculated magnitude from the settling velocity can be directly correlated to the magnitude observed through the microscope and the magnitude of the opening in the sieve used in the experiment. As will be discussed in the next section, the problem with ideal standards is that they can give problems in interpreting data for fineparticles measured in an operative environment. (Note that in this communication, when we refer to an operative environment for a fineparticle, we are discussing the system of interest to the engineer who has requested characterization data. 
Thus, smoke particles in the atmosphere are in an operative environ- ment and when they are sampled and placed under the microscope they are functioning in a characterization environment.) Some of the first attempts to provide ideal standards uti- lized glass beads. However, glass bead standards suffer from the problems of having bubbles in the bead structure and individual beads often depart from complete sphericity when examined closely. Alternatively, attempts were made to use spherical metal powders such as bronze and tin manufactured by an atomization process. Again, such powders tended to have doublets in them produced by fusion at a point of contact during the atomization process and again were not completely spherical. Such standard powders were made available in one of the first attempts to provide a reference bank of material on a commercial basis by Stanford Research Institute in the early 1 960 ' s [2]. This type of standard is still being made available by the two commercial companies operating fineparticle reference material banks [3,4]. Many of the uses of glass bead standards for the calibration of measurement instru- mentation have been replaced by the use of latex spheres. These were originally available from the Dow Chemical Company but now are available from Dow Diagnostics and some of the companies involved in selling fineparticle characterization instrumentation [5,6,7,8]. The other type of reference system which has been used is the so-called operative reference system which attempts to develop non-spherical particle standards by means of comparative testing utilizing so-called round robin tests among many laboratories. Such tests have been organized in the past by such organizations as the American Society for Testing Materials and the Committee for Particle Size Analysis of the British Society for Analytical Chemistry. The problem with this type of material is that one batch of powder has to be set up as the initial reference material; it has then to be subdivided and quan- tities sent for characterization by various laboratories. Any future use of the material is then destructive and one is faced with the dwindling supply of the reference material which is eventually exhausted and the process has to begin again. Various physical problems arise in the storage and handling of both ideal and operative standards. For example, if one is using Dow latex materials to calibrate an instrument then places the latex particles in the beam of an electron microscope and the energy levels of the electron beam are not properly controlled, the latex particles can be distorted by heat from the energy of the electron beam. On the other hand, if the particles are to be placed in an electrolyte, such as is used in instruments of the Coulter Counter, or Cello-0-scope type, then electrolytes can alter the structure of the particles. 520 With operative standards, such as cement powders used in permeability calibration experiments, the inadvertent admission of moisture to the system can result in the deteri- oration of the powder when it is stored. At Laurentian University we are working on several techniques to improve the methods of handling operative reference powder systems. In parti- cular, a new type of storage container which can also act as a randomizing system shows promise of being a useful system [9]. 
Industrial organizations wishing to avail themselves of operative and ideal reference material must face the fact that the proper organization and operation of systems is expen- sive and that even after they have acquired subsamples from a parent population of reference powders, they must exercise stringent control over operation and handling of such reference material. Developments in the near future of these systems can be anticipated as the in- dustrial community comes to the realization that many of the methods for characterizing fineparticle systems are not primary methods which can be calibrated from physical theory alone but which require this type of reference material. It may well be that any initial attempt to establish a national centre for fineparticle reference systems may also have to undertake an educational program to advise users of such systems of the pitfalls inherent in the handling of fineparticle reference systems. The need for this educational program arises directly from the fact that engineers concerned with the operative environment of a fineparticle system are not always trained in the necessary surface physics and statistical descriptive theory associated with adequate storage and handling of fineparticle systems. 3. Semantic Problems Associated with the Provision of Reference Material for Fineparticle Characterization To illustrate some of the semantic problems of 'standard reference materials', let us consider the simple problem of comparing characterization data from a microscope examination, settling velocity studies in a viscous fluid and electrozone evaluation of a particle. In the first technique, the particle to be characterized is observed directly under the micro- scope. When particles are studied by sedimentation technology, the fineparticle is placed in a viscous fluid and allowed to sediment under the interacting forces of gravity and viscous drag. The terminal velocity reached by such a particle can be related to the magni- tude of the fineparticle. It is usual practice to interpret the settling velocity of the particle in terms of the Stokes' Diameter of the particle which is that radius calculated by assuming that the particle is equivalent in its behavior to a dense smooth sphere which has the same settling velocity of that observed for the particle under study. In the electrozone counters, such as the Coulter Counter equipment, the change in the electrical properties of a cylinder of electrolyte, with and without the fineparticle under study, is correlated, to the magnitude of the particle. If a smooth hard sphere is used in all three experiments, the measured diameter of the sphere will very closely correlate, provided that the flow \ regime of the falling particle in a viscous fluid is in the laminar flow region. Examine a typical problem which could arise in environmental science in which one needed to characterize agglomerates of spherical particles being emitted from a furnace. (These particles could be carbon particles or agglomerated fly ash particles.) Assume that various types of particles as shown in figure 1 needed to be evaluated. Consider now the effect of the presence of agglomerates of individual particles on the data generated by the three characterization procedures that have been briefly reviewed. First of all one notes that the vocabulary for describing the shape of agglomerates is very underdeveloped and that one is reduced to such terms as "straight" and "twigged" to describe the various agglomerates of this diagram. 
If one regards the agglomerates of this diagram as forming the population to be characterized, then the particles which have the same property, as determined by the various methods, can be grouped. If the set of particles which have the same projected length is organized, then particles such as those indicated by the linkage in the diagram would be identical. On the other hand, if one attempts to draw up a classification of particles according to their Stokes' Diameter (i.e., measured settling velocity), the same particle can have different diameters depending on the initial orientation of the fineparticle when placed in the viscous fluid (see figure 1), and apparently very different particles can be identical in terms of the Stokes' Diameter parameter (see figure 2).

Figure 1. The descriptor classification of non-spherical particles can group together, according to one descriptor parameter, particles which differ greatly when described by an alternate descriptor parameter; this aspect of fineparticle characterization is illustrated graphically for agglomerates of spherical particles. (The diagram shows subunit number used as a classification parameter, and the descriptor 'projected diameter' linking two quite different agglomerates; many variations in structure between the extremes shown, such as one particle at the bottom of a cluster, are possible.)

Figure 2. Many strangely different particles can be identical when described by a single characteristic parameter; calibration of a characterization instrument using spherical smooth dense particles will result in this kind of descriptor group membership. (The diagram shows a set of particles of the same size by the descriptor parameter 'projected length', possible members of the set of particles of identical Stokes' Diameter, and possible members of the set of all particles of the same volume.)
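The Stokes' Diameter referred to above is obtained by equating the measured terminal settling velocity of a particle to that of a smooth dense sphere settling in the laminar-flow regime. The short Python illustration below carries out that conversion using the ordinary Stokes settling relation; the densities, viscosity and velocity are invented values chosen only to show the arithmetic, not data from any of the instruments discussed here.

    # Stokes' Diameter from a measured terminal settling velocity, assuming the
    # laminar (Stokes) flow regime.  All numerical values are illustrative only.
    import math

    def stokes_diameter(v_settle, rho_particle, rho_fluid, viscosity, g=9.81):
        """Diameter (m) of the smooth dense sphere having the same settling velocity."""
        return math.sqrt(18.0 * viscosity * v_settle / ((rho_particle - rho_fluid) * g))

    # e.g. a particle of density 2500 kg/m3 settling at 0.1 mm/s in water at 20 C
    d = stokes_diameter(v_settle=1.0e-4, rho_particle=2500.0,
                        rho_fluid=998.0, viscosity=1.0e-3)
    print(round(d * 1.0e6, 1), "micrometres")   # about 11 micrometres

A useful check on any such calculation is that the particle Reynolds number for the computed diameter remains well below unity, so that the laminar-flow condition mentioned earlier actually holds.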
If, however, one is interested in the possibility of the particles damaging the human lung, then one is interested in a combination of the settling velocity of the particle and its dimensions which control the ability of the fineparticle to lodge in the structure of the lung. Thus, industrial hygiene studies call for a combination of the descriptors involving both settling velocity and dimensional structure of the particle. To clearly summarize the problem of appropriate standards when using different methods of analysis, one can turn to the language of set theory and use the symbolism of Venn Dia- grams. Basically, the population of fineparticles to be described can be represented by the inclusive set shown in figure 3. When split into sets having the same descriptor parameter, intuitively one expects to be able to split the original population into clearly defined groups. However, the descriptor sets into which the parent population splits can have membership requirements which cause an overlap of the set if one changes from the analytical descriptors to the operative parameter descriptor sets based on Stokes' Diameter. Thus, descriptor sets in a characterization study using projected diameters in microscope studies will not give correct membership for the operative parameter descriptor sets. The fact that different methods of characterization will give identical subsets for populations of dense hard spheres is equivalent to saying that the difference between descriptor sets based on different physical measurements for non-spherical particles degenerate to zero for the case of fineparticles in which all particles are dense smooth spheres. Thus, a situation could be envisioned in which scientists wished to calibrate an instrument to be used to charac- terize asbestos fibers in a study of industrial dust hazards from such a dust. If such an instrument was calibrated using an ideal standard spherical particle so that all particles were described in terms of Stokes' Diameter of the equivalent smooth dense glass sphere, then the velocity with which the particles would settle in the factory area may be predicted. One could be very wrong in predicting the hazard to the worker from particles lodging in his lung. To obtain such shape dependent information by determining "lodgeability" another method of analysis would have to be used or one would have to calibrate the original equip- ment with asbestos fibers of known environmental hazard. Whichever way one goes, he faces a long and expensive calibration procedure. 524 T Particles to be characterized. Four subsets of particles with decreasing projected diameter (microscope measurement). All particles in micron range (projected diameter) 10 - 2C> All particles in micron range (projected diameter) 20 - 30/t/n All particles with Stokes' Diameter in the range 20-30 / A/n (This group contains particles of different shape which are outside the range 10 - 30/jwby projected diameter . ) Figure 3. From a simple perspective, any characterization procedure should be capable of uniquely classifying a population of fineparticles into well-defined subsets. In practice, for non-spherical dense smooth parti- cles, membership in any subgroup of a given 'size' range is not a well- defined concept. Descriptor groups formed by different physical methods for characterizing particles which are identical for spherical groups are not identical for non-spherical particles. 4. 
4. Summary

A growing sophistication in the characterization of fineparticle systems is leading to the demand for standard fineparticles. However, it is easier to conceive of the notion of a standard fineparticle than it is to provide real systems. A great deal of work needs to be carried out on the provision of ideal standards and their storage, but such a program would have to concurrently advise people that there may not be a direct link between the parameters given by their instrument when calibrated with ideal standards and the operative performance of the fineparticles that they are studying. The alternative route of providing operative standards for industrially important powders is even more difficult to organize, since any standard powder established needs to be carefully handled and characterized and is always a dwindling resource which has to be replenished at great time and effort at a later date.

References

[1] The Laurentian Handbook of Powder Science and Technology, published by the Institute for Fineparticle Research, Laurentian University, Sudbury, Ontario.
[2] Particles in the Bank, The Stanford Research Institute Journal, Third Quarter, 1961, Vol. 5, p. 133.
[3] Duke Standards Company, 445 Sherman Avenue, Palo Alto, California 94306.
[4] Particle Information Service, 600 South Spring Rd., Los Altos, California 94022.
[5] Dow Diagnostics, The Dow Chemical Company, P.O. Box 68511, Indianapolis, Indiana 46268.
[6] Alliet, D. F., A Study of Available Particle Size Standards for Calibrating Electrical Sensing Zone Methods, Pow. Tech. 13, 3-7 (1976).
[7] Robillard, F. and Patitsas, A. G., Determination of the Particle Diameter of Dow Latex 642-6 by 4 Independent Methods, Pow. Tech. 9, 247-255 (1975).
[8] Something Old, Something New, Something Borrowed, Something Blue. A general interest article describing the use of Dow Latex particles, published in Industrial Research, August 1976, pp. 46-49.
[9] Kaye, B. H., The Various Techniques for Sampling Powder Systems. In press.
[10] Allen, T., Particle Size Analysis, 2nd Edition, Chapman and Hall, Great Britain, 1976.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

THIN FILM STANDARDS FOR X-RAY AND PROTON-INDUCED X-RAY FLUORESCENCE

D. N. Breiter, P. A. Pella, and K. F. J. Heinrich
Analytical Chemistry Division
National Bureau of Standards
Washington, DC 20234

1. Introduction

Small samples such as air particulates and water residuals are increasingly being analyzed by multielement x-ray techniques. They are usually presented for analysis on thin substrates ranging in thickness from 100 ng/cm² to several mg/cm². The use of thin samples minimizes, but does not eliminate, the corrections required for x-ray self-absorption and/or proton energy loss effects. The accuracy of x-ray and proton-induced x-ray fluorescence spectrometry in many applications depends highly on the development and application of accurate and useful thin-film standards.

2. Discussion

Thin-film reference materials can be homogeneous single- or multi-element film deposits or particulates collected on or in thin backing materials. These materials, when well characterized for elemental composition and homogeneity, can serve several functions in x-ray fluorescence and proton-induced x-ray fluorescence spectrometry.
Such materials can directly improve the accuracy of certain calibration procedures. They can also be useful in interlaboratory comparisons [1]¹. Another area of application is the improvement of the overall accuracy of the measurement process by aiding in the testing of mathematical models for physical processes (e.g., x-ray self-absorption, proton energy loss, particle size effects) and background-peak stripping routines.

¹Figures in brackets indicate the literature references at the end of this paper.

Homogeneous thin film standards can be prepared by a variety of techniques including vacuum evaporation or sputtering [2], filtration of powders from air or liquid suspension, and evaporation of solutions deposited directly or via nebulization onto thin backing materials [3,4,5]. Films may also be cast from polymer material which has been doped with suitable organometallic compounds [6]. Particulate standards can be prepared by filtration of particles onto thin backing materials from air or liquid suspension [3,7]. These particles may be finely ground and sieved minerals, spiked particulates [3], or ground Standard Reference Materials [8].

The use of dried deposits of multielement solutions on wettable filter papers allows considerable flexibility in the composition of the standard and an excellent quantitative transfer from well-characterized solutions. Homogeneity is good as long as the filter paper is completely wetted before evaporation takes place and is large enough so that edge effects are not significant. Unfortunately, some of the thinnest available films of polyester and polycarbonate are not wettable. Use of such thin films (~1 mg/cm²) is desirable to minimize the errors due to the absorption of light-element characteristic x-rays in the film and proton energy loss in the case of proton-induced x-ray fluorescence. Although the use of nebulizers in the deposition of multielement solutions allows one to use nonwettable surfaces, accurate quantitative transfer of the solution to the thin backing filter is not possible. However, one does have excellent knowledge of the elemental ratios, which provide relative x-ray efficiencies. With the addition of an absolute standard for a single element, such as a vacuum-deposited metal foil, absolute calibration can be achieved [5].

While thin deposits of pure elements can be made by vacuum evaporation or sputtering and have been used by many laboratories, they have generally not been weighed to better than 3%. These deposits are generally 50-200 μg/cm² on a ~1 mg/cm² backing. If accurately weighed by the best techniques available, such deposits could possibly serve as primary standards against which other deposits could be measured by x-ray spectrometry. A recent approach to the manufacture of absolute element standards is the casting of polymer films in which organometallics have been dissolved [6]. Further measurements of homogeneity, stability, and element content will be necessary to assess the suitability of these materials as thin film standards.
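As a minimal sketch, with made-up elements and numbers, of the scale-fixing idea described above for nebulized deposits: the elemental mass ratios in the deposit are known from the solution, and measuring one element against an absolute single-element standard fixes the absolute scale for all the others.

```python
# Sketch (hypothetical numbers): solution deposition by nebulization gives
# well-known elemental mass ratios but an unknown total deposit.  Measuring
# one element against an absolute single-element standard (e.g. a weighed
# vacuum-deposited foil) fixes the scale for all the others.

# mass ratios in the deposited solution, relative to Fe (known from solution prep)
mass_ratio_to_fe = {"Fe": 1.00, "Cu": 0.40, "Zn": 0.25, "Pb": 0.10}

# absolute areal density of Fe obtained by comparison with the absolute standard
fe_areal_density = 55.0   # ug/cm^2, assumed result of that comparison

for element, ratio in mass_ratio_to_fe.items():
    areal_density = ratio * fe_areal_density
    print(f"{element}: {areal_density:.1f} ug/cm^2")
```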
Thin-film metal standards containing various metal impurities are not susceptible to the radiation damage which can affect standards based on hydrocarbon or organic matrices. They could be formed by rolling precharacterized metals; however, uniformly rolling thin foils which are several microns thick is difficult. If one is looking at characteristic x-rays of an energy high enough that the absorption correction errors due to thickness uncertainties are sufficiently small, then one can weigh the foils and obtain useful mass-per-unit-area values for the elemental constituents. The same consideration holds for the cast polymer films mentioned previously. A second approach to thin metal foils is the characterization of already existing foils for homogeneity and composition. While use of commercially made foils results in a limited choice of elemental constituents, foil which has already been made in large quantity by industrial processes is likely to be homogeneous if one examines different portions of a piece of foil taken from a much larger factory-rolled sheet.

We have obtained commercially manufactured thin aluminum foil (6.3 μm, ~2 mg/cm²) which is relatively pure. We have examined four square sections (~5 cm²) from a piece of this foil for homogeneity by energy-dispersive x-ray fluorescence, with a tungsten tube, a molybdenum secondary target, and a molybdenum filter. Homogeneity results were obtained for a 2000-second run at 40 kV and 66 mA for aluminum and for the impurities iron and gallium, which are present at roughly the 6 μg/cm² (3000 ppm) and 0.1 μg/cm² (50 ppm) levels. The numbers of counts and the estimated standard deviations of these counts obtained for aluminum, iron, and gallium under the above conditions were 28704 ± 539 (1.9%), 12019 ± 139 (1.2%), and 1333 ± 36 (2.7%), respectively.

While homogeneous thin-film standards are very useful for the calibration of x-ray spectrometric systems, particulate standards which are formed by the deposition of particles from air or liquid suspension onto or into thin backing materials are most applicable to the more specialized study of correction models for particle size and composition effects. Particulate standards have been made from finely ground rocks and minerals [3]. Quartz particulates (spiked with several elements) have been made by heating a mixture of quartz particles and aqua regia in which metals have been dissolved until all nitric and hydrochloric acids are evaporated, all nitrates are decomposed, and nitrogen oxides volatilized [3]. An organic particulate standard was developed at the National Bureau of Standards by grinding the Orchard Leaves SRM (1571) and filtering the particles via liquid suspension in cyclohexane onto a membrane filter [8]. Particles are usually fixed on backing materials by a thin hydrocarbon film such as paraffin. Since particulate standards generally serve more as a research tool than for calibration, their development should rest largely on the specific research need in question. Particulate standards do, however, serve as an important calibration tool in applications where the particulate standard is very similar to the unknown sample being analyzed.
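One common way to read homogeneity counts such as those quoted above for the aluminum foil is to compare the observed relative scatter with the scatter expected from counting statistics alone. The short sketch below does this for the quoted numbers, on the assumption (ours, not a statement in the paper) that the ± values can be treated as observed standard deviations over the four sections.

```python
# Sketch: comparing the observed scatter of the foil measurements quoted above
# with the relative uncertainty expected from counting statistics alone
# (sqrt(N)/N for N counts).  Treating the quoted +/- values as observed
# standard deviations is our assumption, not a statement from the paper.
from math import sqrt

measurements = {
    # element: (mean counts, quoted standard deviation)
    "Al": (28704, 539),
    "Fe": (12019, 139),
    "Ga": (1333, 36),
}

for element, (counts, sigma) in measurements.items():
    observed_rsd = 100.0 * sigma / counts
    counting_rsd = 100.0 * sqrt(counts) / counts   # Poisson expectation
    print(f"{element}: observed {observed_rsd:.1f}%  vs  counting statistics {counting_rsd:.1f}%")
```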
Another approach to the manufacture of particulate standards is the deposition of glass particles made from synthetic glasses, which can be generated by fusion techniques with a great variety of elemental components. By means of heating and aerosol separation techniques, controllable size distributions of spherical particles with precharacterized elemental concentrations can potentially be deposited on a variety of substrates. Finely ground glass particles from the trace-elements-in-glass standard reference materials (SRM 610; traces at the 500 ppm level) were found to adhere well to an adhesive material which was applied to polycarbonate membranes. Ratios of elemental intensities measured by energy-dispersive x-ray fluorescence spectrometry were found to be repeatable, and particles were seen to be on the surface of the adhesive by scanning electron microscopy. Absolute standards require the more difficult quantitative transfer of particulates to the support membrane. This particular material would not be useful for calibration because of the large number of interfering peaks; however, it might be useful as a material for testing overlap and background stripping routines. Glasses made from elements with non-interfering characteristic x-ray peaks could provide particles with controlled size distributions. Small particles would be useful for calibration purposes, while larger particles could be used in establishing models for particle size effects. In contrast with the spiked quartz particles mentioned previously [3], particles made from fused synthetic glasses have elemental concentrations which are uniform throughout the particle.

Further use and development of accurate and appropriate thin film standards will directly increase the accuracy obtainable in a variety of x-ray spectrometric applications.

References

[1] Camp, D. C., Van Lehn, A. L., Rhodes, J. R., and Pradzynski, A. H., Intercomparison of Trace Element Determinations in Simulated and Real Air Particulate Samples, X-Ray Spectrometry 4, 123-137 (1975).
[2] Graham, M. J., and Bray, C. S., The Use of Evaporated Metal Film Standards in Thin Layer X-Ray Fluorescence Analysis of Mixed Oxides, J. Sci. Instrum. (J. Physics E), Series 2, Vol. 2 (1969).
[3] Pradzynski, A. H., and Rhodes, J. R., Development of Synthetic Standard Samples for Trace Analysis of Air Particulates, ASTM Special Technical Publication 598, pp. 320-336 (ASTM 1976).
[4] Baum, R., Walter, R. L., Gutknecht, W. F., and Stiles, A. R., Solution Deposited Standards Using a Capillary Matrix and Lyophilization, Proc. EPA Symp. on XRF Analysis of Environmental Samples (Ann Arbor Press, 1976) (to be published).
[5] Giauque, R. D., Garrett, R. B., and Goda, L. Y., Calibration of Energy Dispersive X-Ray Spectrometers for Analysis of Thin Environmental Samples, Proc. EPA Symposium on XRF Analysis of Environmental Samples (Lawrence Berkeley Laboratory), LBL-4481.
[6] Thin Polymer Films as Calibration Standards for X-Ray Fluorescence Analysis, Adv. X-Ray Analysis, Vol. 20 (1976-77).
[7] Semmler, R. A., Draftz, R. D., and Pureta, J., Thin Layer Standards for the Calibration of X-Ray Spectrometers, Proc. EPA Symp. on XRF Analysis of Environmental Samples (Ann Arbor Press, 1976) (to be published).
[8] Pella, P. A., Kuehner, E. C., and Cassatt, W. A., Development of a Particulate Reference Sample on Membrane Filters for the Standardization of X-Ray Fluorescence Spectrometers, Advances in X-Ray Analysis 19, pp. 462-472 (1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

REFERENCE MATERIALS FOR AUTOMOTIVE EMISSION TESTING

Theodore G. Eckman
General Motors Corporation
Milford Vehicle Emission Laboratory
Milford, Michigan 48042, USA

1. Introduction

The automobile industry has been actively engaged in the measurement of exhaust gas characteristics for more than twenty years. Some of this research was a result of the air quality problem that was becoming evident in the Los Angeles basin.
In the late 1960's the test results took on added significance, since during that time the California and Federal regulatory statutes came into force: California in 1966 and Federal in 1968.

With compliance testing came a new array of difficult problems for the automotive industry. Perhaps the most obvious was, and still is, the very basic one of engineering the product to meet the standards set forth. However, less visible, but of equal significance, were the problems encountered in measuring the exhaust constituents at the levels necessary for compliance with these standards. Existing instruments demanded refinement, new ones were developed, and test procedures were written. These efforts were undertaken in order to lend assurance to both the automobile manufacturer and the government that a given emission test gave an accurate appraisal of actual constituent levels in an exhaust gas sample. Also, the state and Federal agencies charged with the enforcement of emission standards had test facilities similar to the industry's. Not only were we required to share our test data with these agencies, but it was also reasonable to expect that data reproducibility would be acceptable between the test facilities. Automobile emission compliance testing (which is called "Certification") involves testing the same cars at both the manufacturer's test site and, particularly in the case of Federal standards, the government's test site.

The manager of an emission test location is often faced with a myriad of problems, not the least of which is sometimes answering the question of why his test results are not the same as those obtained by EPA in Ann Arbor or CARB in Los Angeles. To say that, at present, correlation is perfect among test locations would be incorrect. However, it is true that correlation between test facilities has improved greatly over the years. This increase in correlation came by way of investing time and funds to improve areas such as instrumentation, test procedures, adherence to driving schedules, data reduction, and so forth.

An area that was neglected to some degree, in spite of its obvious importance, was that of the gas standards. No test, no matter how well performed, can be any more accurate than the gas standards that are used to define the various analyzers' responses to the gases they are intended to quantify. If, unknown to you, your gas standards are 10 percent off, then, equally unknown to you, the test results will also be 10 percent off. This, of course, neglects the possibility of cancellation of errors, which is hardly a textbook method advocated to insure good results. Several calibration gas cross-check (correlation) programs between the various testing facilities evolved in an attempt to improve correlation. To a degree they were successful; several laboratories adjusted their gas concentration values to equal the mean. However, what these programs did not address was any consideration of accuracy, and rightly so, since no primary standards existed that could be inserted into the programs. This led us to search for very accurate standards to be provided by a party not involved in either vehicle manufacturing or emission compliance testing. The National Bureau of Standards (NBS) had for years enjoyed a position of prominence in the world-wide standards community. It is for that reason that the automobile manufacturers and EPA turned to NBS as a natural choice to fill the much needed standards role.
2. Discussion

In 1972, two significant events happened to initiate the availability of emission test calibration gas standards. First, a conference was held here in Gaithersburg which was attended by a diversity of interests in vehicle emission standards: the automobile industry, specialty gas vendors, NBS, and EPA. The purpose of the conference was to establish what gases and concentrations were needed and what problems were likely to be encountered in the issuance of these blends. The second event of significance was the signing of an interagency agreement between EPA and NBS whereby the Bureau was funded by EPA to perform the research and the production of the first sets of standards.

The necessary gas standards, agreed upon principally by the automotive industry and EPA, reflected the test levels anticipated for the Certification of 1975 model cars. Certification test programs for a particular year normally start about one year before model introduction. For 1975 models, this meant that the standards were needed no later than the fall of 1973. In addition to this, several months were needed to correlate existing standards with the Standard Reference Material (SRM) gases. This established the target at January 1973.

Current Federal and California standards require a vehicle manufacturer to limit the levels of carbon monoxide, hydrocarbons (reported as propane), and nitric oxide from the exhaust of the cars it manufactures. These standard levels are expressed in terms of grams per mile. In addition to this, carbon dioxide is measured as a constituent for the determination of fuel economy. Early emission test standards were stated in terms of simple concentration, for example, 275 ppm HC and 1.5 percent CO. However, this did not take into account the volume of exhaust gas produced by various size vehicles. For example, a small vehicle, while it may have constituent concentrations twice those of a large car, may actually contribute less to air pollution if its exhaust flow is less than half that of the large car. Therefore, a measurement method was developed, appropriately enough called "mass testing", to allow the emission standards to be converted from a concentration basis to a true mass basis.

If one were faced with the task of determining the mass of a particular gas in a mixture of contained gases, it would be a relatively simple task if one knew: (a) the volume of the enclosure; (b) the concentration of the gas of interest; and (c) the density of the gas of interest. By knowing these three things the weight can be determined. With this in mind, a search was made for some kind of instrument or device that could store the exhaust from a vehicle for a finite time duration. It also had to be able to accommodate different exhaust flows, since a driving schedule was constructed which was generally representative of typical urban driving. The method chosen involves the passage of the vehicle's exhaust in its entirety through a Constant Volume Sampler (CVS), which is basically a fixed displacement pump which moves a constant volume per unit time. For exhaust emission testing this volume is comprised of varying amounts of filtered air and the entire exhaust of the test vehicle and is corrected to reference conditions (528°R and 760 mm Hg). The vehicle is operated on a chassis dynamometer. Vehicle speed changes, required by the test schedule, cause changes in the exhaust flow rate.
As this exhaust flow rate changes, the ambient air contribution to the total CVS volume changes in the opposite direction. If, for example, a CVS demands 300 cfm and the vehicle is at idle producing only 20 cfm, the remainder, or 280 cfm, would be ambient air. During a moderate speed cruise when the vehicle produces, for example, 50 cfm, the ambient contribution will drop to 250 cfm to maintain the 300 cfm total demand of the CVS. Various CVS pump parameters are monitored throughout the test to determine the exact volume for the test. The Federal test driving schedule is 11.1 miles long, and rather than store the entire volume of exhaust and CVS makeup air, small representative samples are stored in plastic bags for concentration analysis at the end of the test.

After these concentration measurements are made, enough is known to determine the mass levels. We know the volume of the sample, the concentrations, and the densities of the constituents. From this we calculate the mass of each pollutant generated during the test. By dividing by the mileage we know the grams per mile. Again, it is important to recognize that it is not necessary to know what the volume of exhaust was during the test cycle, how much was exhaust, and how much was CVS added air. What is important is to know the total volume. As more air is added, the concentration decreases, and vice versa. This example, of course, relates only to one specific type of vehicle. The actual certification process is very extensive, covering a fleet of, in the case of General Motors, hundreds of cars, some of them driven 50,000 miles and tested every 5000 miles. The number of vehicles a manufacturer must test is determined by selection rules made by EPA and California. So, one can see that the levels of CO, CO₂, NO, and hydrocarbons that we measure are not true tail pipe values but are diluted by the test equipment in order to obtain mass instead of concentration readings.

At this point it was shown that a need existed for automotive emission reference standards. This need was met by the Bureau with the issuance of the following Standard Reference Materials.

Propane in Air. Nominal values (ppm): 3, 10, 50, 100, and 500. Available February 1973.
Carbon Dioxide in Nitrogen. Nominal values (percent): 1.0, 7.0, 14.0. Available February 1973.
Carbon Monoxide in Nitrogen. Nominal values (ppm): 10, 50, 100, 500, 1000. Available January 1974.
Nitric Oxide in Nitrogen. Nominal values (ppm): 50, 100, 250, 500, 1000. Available November 1974.

The Bureau had some initial reservations with respect to sales projections for these SRM's. However, as of May 1 of this year, a total of 930 cylinders of Standard Reference Materials had been sold at approximately $300 each, totalling over a quarter of a million dollars in sales.

We all know that with the passing of time, things change; however, we continue to need new SRM's. Contributing to this need are several factors. Two major ones are new legislation and the continuing decrease in exhaust constituent levels required by more stringent automobile emission standards. The most immediate concern is the new fuel economy regulations.
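Before taking up the fuel economy application, the mass-testing arithmetic described above can be put into a short numerical sketch. The CVS volume, bag concentrations, and the use of simple ideal-gas densities at the reference conditions are all illustrative assumptions; the regulatory procedure prescribes its own constants and corrections.

```python
# A minimal sketch of the mass-testing arithmetic: total CVS volume times the
# measured bag concentration times the constituent density, divided by the
# miles driven, gives grams per mile.  Volumes, concentrations, and the
# ideal-gas densities used here are illustrative only.

R = 0.08206            # L*atm/(mol*K)
T_REF = 293.15         # 528 deg R = 68 deg F = 20 deg C, in kelvin
LITERS_PER_FT3 = 28.317

def density_g_per_ft3(molar_mass):
    """Ideal-gas density at the reference conditions, in g per cubic foot."""
    return molar_mass / (R * T_REF) * LITERS_PER_FT3

cvs_total_volume_ft3 = 6900.0          # assumed total CVS volume for the test
miles_driven = 11.1                    # Federal test schedule length

bag_concentrations_ppm = {"CO": 85.0, "NO": 22.0}        # assumed bag readings
molar_masses = {"CO": 28.01, "NO": 30.01}

for gas, ppm in bag_concentrations_ppm.items():
    grams = cvs_total_volume_ft3 * (ppm * 1e-6) * density_g_per_ft3(molar_masses[gas])
    print(f"{gas}: {grams:.2f} g total, {grams / miles_driven:.3f} g/mile")
```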
An interesting use of mass emission grams-per-mile results is the calculation of the fuel consumed during the test. Basically, fuel economy can be determined by taking the results of an exhaust emission test and using what is termed the "carbon balance" method of fuel economy determination. The basic premise of this method is that any carbon that enters an engine in the form of fuel (gasoline) exits at the tail pipe as various carbon-containing compounds. These compounds (HC, CO, CO₂) each contain a known percentage of carbon. Since these compounds are routinely measured during an exhaust emission test, the total amount of carbon burned during the test is known. By assuming that the total carbon exhausted came from the fuel, and knowing the amount of carbon in a gallon of gasoline, one can calculate the volume of fuel used.

Recent fuel economy legislation enacted by Congress has two features which bear pertinent mention at this point. First, passenger automobile fleet fuel economy figures for various years have been set. The Energy Act has mandated a production-weighted fuel economy of 18 mpg for 1978, 19 mpg for 1979, 20 mpg for 1980, and 27.5 mpg in 1985. The Act also provides for a penalty of $5 per 1/10 mpg below the applicable fuel economy standard, times the total model year production. If an automobile manufacturer were to produce 5 million cars in a year and missed the Federal fuel economy standard by just a tenth of a mile per gallon, the manufacturer would be required to pay $25,000,000 in fines; a 2/10 miss, $50,000,000; and so on. It seems needless to point out that we take this very seriously. A great amount of effort is currently being expended at General Motors in an effort to meet these standards. Our fuel economy results must be of the highest quality so that the values we report are as accurate as possible.

As mentioned previously, fuel economy is determined by measuring the amount of carbon emitted by the engine, carbon in the forms of CO, hydrocarbons, and CO₂. By far the largest amount (greater than 98 percent) of the carbon is in the form of CO₂. This puts a heavy burden on analyzing this compound accurately. Presently available Standard Reference Materials of carbon dioxide in nitrogen are 1.0 percent, 7.0 percent, and 14.0 percent. However, mass testing, due to its dilution characteristics, leaves us with carbon dioxide concentrations that seldom exceed 3.0 percent. Therefore, it is necessary to better define the 0-4 percent range with SRM concentrations of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, and 4.0 percent. Fortunately this need has almost been met. The funds to do the research and development phase were provided to NBS this last spring by the Motor Vehicle Manufacturers Association. The SRM's have been produced and currently are undergoing statistical batch analysis before release, which the Bureau anticipates will be in November.

Standard Reference Materials involving methane are also needed. In addition to meeting Federal emission standards, the automobile manufacturers must also conform to California standards. California recently has allowed a "methane credit" on hydrocarbon measurements. What this simply means is that a certain fixed amount of hydrocarbons, which California feels is representative of the methane content in exhaust, may be deducted from the measured hydrocarbons. Methane is a hydrocarbon that does not contribute to photochemical smog, and it is for this reason that California has made the allowance. Standard Reference Materials of 5, 10, 20, 40, and 80 ppm methane are needed for CVS-type testing. In addition, higher ranges would be desirable for engine concentration development work.
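As a rough numerical sketch of the carbon-balance calculation described earlier in this section: the gram-per-mile inputs, the average hydrogen-to-carbon ratio assumed for the hydrocarbon emissions, and the grams of carbon per gallon of gasoline are all illustrative assumptions, not regulatory constants.

```python
# A sketch of the carbon-balance fuel economy calculation: the carbon emitted
# per mile (from HC, CO, and CO2) is converted to gallons of gasoline per mile
# using an assumed carbon content per gallon, and inverted to miles per gallon.

C, H, O = 12.011, 1.008, 15.999

grams_per_mile = {"HC": 1.5, "CO": 15.0, "CO2": 450.0}   # assumed test results

carbon_fraction = {
    "HC":  C / (C + 1.85 * H),        # assuming an average H/C ratio of 1.85
    "CO":  C / (C + O),
    "CO2": C / (C + 2 * O),
}

carbon_g_per_mile = sum(grams_per_mile[k] * carbon_fraction[k] for k in grams_per_mile)

carbon_g_per_gallon = 2420.0          # assumed carbon content of a gallon of gasoline

gallons_per_mile = carbon_g_per_mile / carbon_g_per_gallon
print(f"carbon emitted: {carbon_g_per_mile:.1f} g/mile")
print(f"fuel economy:   {1.0 / gallons_per_mile:.1f} miles per gallon")
```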
An area allied to emission testing, but not involving the actual exhaust stream, concerns the fuel used for the tests. Two classes of chemical compounds contained in fuel are under scrutiny: lead and sulfur. We are all aware of the somewhat controversial role that lead has played in the area of automobile emissions. The only way the industry could meet current standards on most vehicles was through the use of oxidizing catalysts (hydrocarbons and carbon monoxide oxidized to carbon dioxide and water). Tests unequivocally demonstrated that alkyl lead bearing compounds in fuel poisoned the catalyst, and it is for that reason that lead has been removed from fuel. However, test facilities cannot always be sure that gasoline used for emission test purposes is truly lead free. Therefore, lead-in-fuel standards were needed. Fortunately, these have been provided by the Bureau, and standards of 0.03, 0.05, 0.07, and 2.0 g/gal are currently available. In addition to lead, sulfur has posed somewhat of a problem, although not the problem that EPA had originally believed it to be. However, it is present in fuel and it is routinely measured. Standard Reference Materials in the range of 0.02 to 0.06 percent by weight are needed.

3. Conclusion

In conclusion, a need has existed in automotive emission testing laboratories for Standard Reference Materials provided by the Bureau. This need has been ably met in several areas; however, we look for continued support from the Bureau in assisting us with the very difficult task of measuring emission levels by providing standards of the very highest accuracy and credibility.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

LONG TERM INVESTIGATION OF THE STABILITY OF GASEOUS STANDARD REFERENCE MATERIALS

E. E. Hughes and W. D. Dorko
Analytical Chemistry Division
National Bureau of Standards
Washington, DC 20234, USA

1. Introduction

A program to produce gaseous Standard Reference Materials has been in existence for many years, but only since 1972 has it constituted a significant part of the Standard Reference Materials Program. Standards for calibration of calorimeters for assessing the heating value of natural gas were issued for a time in the 1950's. Standards of carbon dioxide in nitrogen at atmospheric levels, concentrations of oxygen in nitrogen from 1 ppm to 21 percent, and several low concentrations of methane in air were made available after 1968. The quantities of gas per sample were limited, as were the sales. In 1972 the National Bureau of Standards embarked upon a program to supply standards for automobile emission measurements to the Environmental Protection Agency and to the industries involved, principally gas suppliers and automobile manufacturers. The interest in this program and the resulting high sales volume of SRM's helped to define the true needs for gaseous SRM's, and the program has been steadily expanded since then.

The acceptance of these SRM's was not instantaneous, and they were subjected to intensive scrutiny by the users. We have cooperated fully with the users in order to obtain as much confirmatory information as exists concerning the accuracy of these standards and, equally important, their stability. We have conducted a project within the gaseous SRM program concerned solely with the stability of gas mixtures and, in particular, with the stability of samples which have been purchased and used for known periods of time by other government agencies and by industry.
We have encouraged the return of standards for periodic recheck of concentration from those laboratories which we feel are unusually competent in gas analysis and which presumably have treated the standards with the care necessary to prevent contamination. In addition, we have retained from each individual lot of SRM's a number of samples which are periodically analyzed in our laboratories. The cumulative results and observations of this project are reported below.

2. Stability of Gas Mixtures

Instability of a gas mixture contained in a cylinder generally refers to a decrease in concentration of the component of interest, although in isolated cases instability may be seen as an increase in the component. Instability may result from physical adsorption of materials on the walls of the container, a chemical reaction between the component and the cylinder materials, or a reaction between the component and a second gas phase component. In most cases, instability results from interaction between the container and the component, and the only significant gas phase instability encountered thus far is that between nitric oxide and traces of oxygen. Further, instability seems to be quite dependent on the conditions within the individual cylinder and is not usually predictable either from a knowledge of the cylinder material or of the gaseous components. Variations in traces of moisture, grease, or rust on the walls of the cylinder can cause varying degrees of instability. The impossibility of reproducing the surface conditions within a cylinder during fabrication and treatment makes it impossible to produce cylinders which will all exhibit the same degree of instability with regard to a particular gas mixture. Stability, however, may be achieved if the surface is treated in such a manner that it is completely inert to the particular gas mixture. This treatment may be only a thorough drying and evacuation for relatively non-reactive compounds, or a robust chemical treatment for more reactive species. Stability of carbon dioxide mixtures in steel cylinders may be achieved simply by using a clean and dry container. Carbon monoxide, on the other hand, can only be stored reliably in steel cylinders if the interior surface is completely coated with an inert wax. Cylinder materials other than ordinary steel may be substituted to achieve stability.

Stability is best studied by comparison of the concentration of a component in a cylinder to a sample of known concentration. The essential point in this procedure is assuring the accuracy of the sample of known concentration. The procedures for assuring accuracy and the precautions which are observed have been reported elsewhere [1]¹. While reference to known mixtures is adequate, it is not always convenient. Much information can be obtained concerning stability by analysis on a relative basis. Long term intercomparisons of a group of samples at or near the same concentration will reveal "drift" in the concentration of a sample by a change in the ratio of that sample to other samples in the group. If a large number of samples have been filled by transfer from a single source, an analysis of each sample will reveal the degree of instability of any sample by agreement or lack of agreement with the average. If this series of analyses is performed at some extended time after the original transfer of material, then it is possible to predict the stability of each sample for a considerable period of time in the future.
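A minimal sketch of the relative-ratio bookkeeping just described, using invented concentrations, is given below: a group of cylinders filled from one source is re-analyzed later, and a cylinder whose ratio to the group average has shifted is flagged as unstable.

```python
# Sketch of relative (ratio-based) stability checking.  All concentrations
# are invented for the illustration; the 2 percent flagging threshold is an
# arbitrary choice for the example.

analyses = {
    # cylinder: (analysis at fill time, analysis at a later recheck), arbitrary units
    "cyl-1": (100.2, 100.0),
    "cyl-2": ( 99.8,  99.9),
    "cyl-3": (100.1,  96.4),   # this one has drifted
    "cyl-4": ( 99.9, 100.1),
}

for when, idx in (("initial", 0), ("recheck", 1)):
    mean = sum(v[idx] for v in analyses.values()) / len(analyses)
    for cyl, values in analyses.items():
        ratio = values[idx] / mean
        flag = "  <-- possible instability" if abs(ratio - 1.0) > 0.02 else ""
        print(f"{when:8s} {cyl}: ratio to group mean = {ratio:.3f}{flag}")
```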
Such prediction is possible because of the nature of the concentration vs. time curve for unstable gas mixtures. Figure 1 is a representation of some observations of the instability curves of several different gaseous mixtures. There is always an initial negative slope to the concentration line, the magnitude of the slope indicating the degree of instability. Thus, for a mixture such as sulfur dioxide in steel cylinders, the instability is evident after a very short period of time. The early data for the particular sample of carbon monoxide shown would not have revealed the long term instability with such clarity. It should be noted that the curves are based on analysis of a single mixture of each type. Other analyses of mixtures of the same compounds would have given curves similar in nature but quite varied in slope because of the differences between each cylinder. The gas mixtures of major interest are obviously those already issued as SRM's or those which are contemplated as potential SRM's. These include mixtures of carbon dioxide in nitrogen, carbon monoxide in nitrogen and air, propane in air, methane in air, nitric oxide in nitrogen, and sulfur dioxide in nitrogen.

¹Figures in brackets indicate the literature references at the end of this paper.

Se > As > Zn > Sb > Mo > Ga > W > Pb > V > U > Ba > Cu > Cr > Be > Mn > Fe. These observations agree with the results of analyses of particle fractions collected with an in-stack cascade impactor at the same generating unit [11,12]. The data in table 2 have been used to calculate concentrations of selected toxic elements that can be transported by fly ash particles to a lung target cell, the pulmonary alveolar macrophage. Assuming the macrophage phagocytizes 10 particles with the average chemical and physical properties of Fraction 4, the resulting concentrations of most toxic elements would be substantially higher than concentrations found in normal lung tissue.

Table 2 (continued)

C. INAA Only

Element   Fraction 1   Fraction 2   Fraction 3   Fraction 4
          INAA(a)      INAA         INAA         INAA
As        13.7±1.3     56±14        87±9         132±22
Ce        113±4        122±5        123±6        120±5
Cs        3.2±0.1      3.7±0.2      3.7±0.2      3.7±0.2
Dy        6.9±0.3      8.5±0.9      8.1±0.3      8.5±0.8
Eu        1.0±0.1      1.2±0.2      1.2±0.2      1.3±0.4
Ga        43±12        116±52       140±23       178±90
Hf        9.7±0.4      10.3±0.3     10.5±0.3     10.3±0.5
La        62±3         68±4         67±11        69±3
Mo        9.1±2.5      28±1.4       40±5         50±9
Nd        45±4         47±4         49±7         52±6
Rb        51±3         56±4         57±3         57±8
Sb        2.6±0.1      8.3±0.4      13.0±0.7     20.6±0.7
Sc        12.6±0.5     15.3±0.6     15.8±0.6     16.0±0.2
Se        19±2         59±2         78±2         198±20
Sm        8.2±0.3      9.1±0.4      9.2±0.4      9.7±0.4
Sr        410±60       540±140      590±140      700±210
Ta        2.06±0.09    2.3±0.2      2.5±0.3      2.7±0.1
Tb        0.90±0.05    1.06±0.06    1.10±0.07    1.13±0.06
Th        25.8±0.6     28.3±0.6     29±1         30±2
U         8.8±1.9      16±3         22±4         29±4
V         86±44        178±17       244±18       327±40
W         3.4±0.2      8.6±1.6      16±2         24±2
Yb        3.4±0.4      4.1±0.4      4.0±0.2      4.2±0.3

(a) INAA values are the weighted averages of three determinations.
Uncertainties given are the largest of: twice the weighted standard deviation, the range, or our estimate of the accuracy.

References

[1] Bertine, K. K., and Goldberg, E. D., Science 173, 233 (1971).
[2] Gordon, G. E., Zoller, W. H., and Gladney, E. S., Trace Substances in Environmental Health-VII, D. D. Hemphill, Ed., pp. 167-174 (Univ. of Missouri, Columbia, 1973).
[3] Linton, R. W., Loh, A., Natusch, D. F. S., Evans, C. A., Jr., and Williams, P., Science 191, 852 (1976).
[4] Natusch, D. F. S., and Wallace, J. R., Science 186, 695 (1974).
[5] Kaakinen, J. W., Jorden, R. M., Lawasani, M. H., and West, R. E., Environ. Sci. Technol. 9, 862 (1975).
[6] McFarland, A. R., Fisher, G. L., Prentice, B. A., and Bertch, R. W., A Fractionator for Size-Classification of Aerosolized Solid Particulate Matter (Radiobiology Laboratory, University of California, Davis, 1976), Annual Report, UUC 472-123, in press.
[7] Ragaini, R. C., Heft, R. E., and Garvis, D. G., Neutron Activation Analysis at the Livermore Pool-Type Reactor for the Environmental Research Program, Lawrence Livermore Lab., Rept. UCRL-52092 (July 1976).
[8] Gunnink, R., and Niday, J. B., The Gamanal Program, Lawrence Livermore Laboratory, Rept. UCRL-51061, Vols. 1-3 (1973).
[9] Certificates of Analysis, Standard Reference Materials SRM 1632 and 1633, Office of Standard Reference Materials, National Bureau of Standards, U.S. Dept. of Commerce, Washington, D.C. 20234.
[10] Ondov, J. M., Zoller, W. H., Olmez, I., Aras, N. K., Gordon, G. E., Rancitelli, L. A., Able, K. H., Filby, R. H., Shah, K. R., and Ragaini, R. C., Elemental Concentrations in the National Bureau of Standards' Environmental Coal and Fly Ash Standard Reference Material, Anal. Chem. 47, 1102 (1975).
[11] Ragaini, R. C., and Ondov, J. M., Trace Contaminants from Coal-Fired Power Plants, Proc. Int. Conf. Environ. Sens. Assess., Las Vegas, NV, 1975, p. 17-2; Lawrence Livermore Lab., Rept. UCRL-76794 (1975).
[12] Ondov, J. M., Ragaini, R. C., and Biermann, A. H., Wet Scrubbers vs. Electrostatic Precipitators, Lawrence Livermore Laboratory, Rept. UCRL-78359 (1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

COLLABORATIVE TESTING OF A CONTINUOUS CHEMILUMINESCENT METHOD FOR MEASUREMENT OF NITROGEN DIOXIDE IN AMBIENT AIR

John H. Margeson
Environmental Monitoring and Support Laboratory
U.S. Environmental Protection Agency
Research Triangle Park, North Carolina 27711, USA

and

Paul C. Constant, Jr., Michael C. Sharp, and George W. Scheil
Midwest Research Institute
425 Volker Boulevard
Kansas City, Missouri 64110, USA

1. Introduction

A continuous chemiluminescent method for measurement of NO₂ in ambient air was subjected to a collaborative test. The method involves calibration of a monitoring instrument by gas phase titration, NO + O₃ → NO₂ + O₂. Ozone atmospheres, analyzed by the 1 percent neutral buffered potassium iodide method, were used to analyze an NO cylinder, which was then used to calibrate the NO response of the instrument and to generate NO₂ for calibration of the NO₂ response.

2. Objective

The objective of the collaborative test was to determine the bias and precision of the method. Volunteer collaborators were used. All collaborators were asked to follow the method description provided to them and to analyze their NO cylinder prior to coming to the test site.
The at-home analysis of the cylinder was done to minimize biasing the test results.

3. Experimental

The collaborative test was carried out by having 10 collaborators sample ambient air, and the same ambient air spiked with a reliable source of NO₂, for four days (September 23-27, 1974) at a common site in Kansas City, Missouri; NO₂ concentrations of 60 to 308 μg/m³ were sampled. The integrity of the NO₂ spike was maintained by (a) using a high ambient air flow rate, 50 liters/min, to minimize NO₂ residence time in the sample generation system and the possibility of losses due to reaction with water vapor, and (b) using Teflon parts in the sample generation system. The integrity of the spike was confirmed by monitoring the NO₂ concentration in the ambient air with and without the spike.

4. Results

The results of the test show that, based on one-hour average concentrations, the within-laboratory standard deviation is 6 percent of the concentration over the range 60 to 308 μg NO₂/m³, and the between-laboratory standard deviation is 14 percent of the concentration over the same range. The results also show that the method has an average bias of -5 percent over the above concentration range. The lower detectable limit of the method was determined to be 22 μg/m³. Additional information can be obtained from EPA report No. 650/4-75-013. This report is available from: U.S. Environmental Protection Agency, Office of Forms and Publications, Highway 70 Warehouse Facility, Durham, North Carolina 27711.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

COLLABORATIVE TESTING OF EPA METHOD 11

Joseph E. Knoll and M. Rodney Midgett
Quality Assurance Branch
Environmental Monitoring and Support Laboratory
Research Triangle Park, North Carolina 27711, USA

and

George W. Scheil
Midwest Research Institute
Kansas City, Missouri 64110, USA

1. Introduction

The Quality Assurance Branch, Environmental Monitoring and Support Laboratory, has been engaged in a systematic program to standardize or validate EPA source test methods, which are used to determine compliance with Federal emission standards. Of fundamental importance to this program is collaborative testing. By this process, the method is observed in the hands of typical users, practical field problems are brought to light, and information about precision and accuracy is obtained. This information is then compiled, made available to the public, and is valuable for developing quality assurance programs, compiling users' manuals, and assessing compliance measurements.

2. Experimental

EPA Method 11 determines the hydrogen sulfide content of petroleum refinery fuel gases and process gases. H₂S is collected as the cadmium salt and is measured iodimetrically. Originally, a CdSO₄/Cd(OH)₂ mixture was used as the absorbing medium, but this also collected thiols, which caused a serious interference. Investigations at EPA's Environmental Monitoring and Support Laboratory and at Midwest Research Institute resulted in the development of an absorbing solution that circumvented this problem. It consists of 0.016 M CdSO₄/H₂SO₄ at pH 3.0 and has been used in the measurements reported in the present study.
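The iodimetric determination itself is ordinary back-titration arithmetic; the sketch below uses invented volumes and normalities and is not taken from the Method 11 text.

```python
# Sketch of the back-titration arithmetic behind "measured iodimetrically":
# excess standard iodine reacts with the collected sulfide, the remainder is
# titrated with thiosulfate, and the difference gives the H2S caught.
# Volumes, normalities, and the sampled gas volume are invented.

iodine_ml, iodine_normality = 50.0, 0.010          # standard iodine added
thio_ml, thio_normality = 38.6, 0.010              # thiosulfate used in back-titration
sample_volume_dscm = 0.010                          # dry standard cubic metres sampled

meq_consumed = iodine_ml * iodine_normality - thio_ml * thio_normality
mg_h2s = meq_consumed * 17.04                       # 17.04 mg H2S per milliequivalent

print(f"H2S collected: {mg_h2s:.2f} mg")
print(f"concentration: {mg_h2s / sample_volume_dscm:.0f} mg/dscm")
```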
Selection of collaborators was accomplished as follows: Qualification samples were submitted to 22 laboratories and 10 were selected at random from among the 15 laboratories that submitted results free from gross errors. The collaborative test took place at the Midwest Research Institute in Kansas City, Missouri. The collaborators furnished their own equipment, followed the revised test procedure and made all measurements in duplicate. Two separate kinds of gas samples were employed. The first consisted of three cylinders of H 2 S in research grade methane. These cylinders were analyzed by the bottled gas manufacturer, both before and after the test. The second type of sample was prepared using a source simulator. This system was used to generate known levels of H 2 S in commercial natural gas, while also adding quantities of interferences and other compounds normally found in refinery fuel gases. These consisted of constant amounts of propylene (4 percent), S0 2 (50 ppm), methyl mercaptan (100 ppm) and ethyl mercaptan (56 ppm). Regulation was achieved using glass capillary flow controllers and calibrated flowmeters. The primary calibration of H 2 S levels in the source simulator was by the calculated dilution factors. Further checks were made using a potentiometric titration with standard silver nitrate and by flame photometric gas chromatography analysis. 575 3. Results Each of the ten collaborators measured the cylinder samples in duplicate and repeated them three days later. These results are listed in Table 1. Using the source simulator, five runs were made at a level of approximately 200 mg H 2 S/m 3 , five runs at approximately 400 mg H 2 S/m 3 and six runs at a level of approximately 100 mg H 2 S/m 3 . The sixth run also contained carbon oxysulfide (COS) at a level of 100 ppm. These results appear in Table 2. Table 1 H 2 S concentration (mg/m 3 ) determined by collaborators for standard cylinder samples Cylinders No. 1 103 mg/m 3 No. 2 350 mg/m 3 No. 3 184 mg/m 3 Collaborators 1 2 3 4 5 6 7 8 9 10 Test #1 157.7 118.9 116.0 117.1 88.9 82.8 93.1 90.4 90.8 93.0 92.3 93.0 92.9 89.9 100.1 100.6 87.3 87.9 100.9 90.6 364.1 377.4 204.5 199.5 366. 354. 193.3 197.3 331.0 333.0 171.0 168.0 334.9 335.7 181.7 179.5 330.2 323.6 172.1 173.6 342.0 339.0 177.0 179.0 343.9 332.4 228.9 227.2 176.9 172.4 297.1 314.8 166.5 168.4 329.0 332.5 177.7 170.1 Test #2 No. 1 103 mg/m 3 No. 2 350 mg/m 3 No. 3 184 mg/m 3 117.3 107.2 329.8 346.8 192.4 190.1 101.1 97.4 309.4 305.6 186.8 184.3 94.9 94.3 328.0 338.0 175.0 183.0 93.4 91.0 333.2 333.9 179.7 178.1 88.8 89.9 317.4 318.6 170.1 169.7 95.0 95.0 338.0 334.0 180.0 174.0 100.9 102.1 189.3 182.8 83.8 90.3 355.7 284.3 335.1 286.4 155.8 153.2 111.1 106.8 312.5 312.8 173.9 174.7 95.5 94.1 330. 333, 184.5 180.1 576 Table 2 H 2 S concentration (mg/dscm) determined by collaborators at test levels A, B, C RunS./ H2S present Collaborators Level 1 2 3 4 5 6 7 8 9 10 la 180 187.9 271.0 217 h/ 246^ 184.2 181.7 180 191.83 122.2- 144. 5^' 170.79 219.1 lb 199.2 277.9 181.0 181.7 182 179.72 176.43 221.5 2a 180 191.9 210.0 174 182.8 177.3 174 180.34 114.2-^ 164.02 205.8 2b 203.9 214.2 177 176.0 173.8 177 173.29 150. 
l^ 7 172.82 205.6 A 3a 188 215.1 221.9 194 196.6 190.3 195 175.55 144.4 183.88 224.6 3b 208.2 204.1 194 193.2 189.3 196 187.19 157.4 188.94 216.3 4a 191 213.8 212.2 195 , 236^ 198.4 193.7 197 230.466 164.7 183.91 193.6 4b 216.0 203.8 197.6 195.9 199 190.750 204.8 186.81 217.7 5a 192 223.2 211.1 197 200.7 194.2 200 202.66 167.0 184.25 224.3 5b 218.5 214.1 - 202.4 194.7 204 196.13 220.8 191.05 227.1 la 342 375.3 356.3 338 334.8 328.4 337 347.99 312. 6^ 315.15 364.3 lb 333.7 351.2 346 333.3 327.7 339 337.70 321.98 369.9 2a 380 406.7 399.0 376 375.0 364.8 375 439.55 331.6 290.51 392.6 2b 407.7 340.3 364 374.0 362.1 386 417.51 314.5 354.18 401.1 B 3a 378 384.2 392.2 369 379.7 365.3 377 439.61 366.3 342.31 388.6 3b 395.1 388.4 373 373.5 361.0 376 426.05 356.3 354.03 394.1 4a 426 447.7 . 420.4 399 395.1 386.1 402 416.55 416.9 372.88 411.2 4b 432.8 446.6 393 402.2 383.7 405 408.72 393.2 384.65 409.4 5a 426 434.2 436.4 397 404.1 382.4 400 424.32 378.5 384.43 420.8 5b 415.5 427.2 403 399.4 390.0 405 422.01 371.7 375.27 417.3 la 100 109.3 114.0 111 115.99 108.6 113 118.61 110.8 109.61 182.66 lb 108.5 116.7 112 113.7 109.4 113 116.34 110.6 115.31 144.4 2a 100 107.4 118.2 114 114.3 111.1 11-3 119.86 102.4 50.19*' 144.9 2b 105.0 117.7 113 111.6 111.8 112 112.98 103.0 143.05 C 3a 99 115.3 121.7 113 lli.l 109.5 111 117.73 99.4 102.20 142.9 3b 110.9 122.2 112 110.7 109.4 119 113.48 100.4 115.13 143.8 4a 97 127.0 123.3 110 112.0 110.6 114 118.95 109.7 114.48 138.9 4b 123.9 114.4 114 111.9 110.4 115 115.22 109.9 118.31 143.9 5a 102 128.4 120.4 113 116.9 115.5 114 128.71 113.7 123.08 150.3 5b 127.6 118.6 116 115.4 115.6 113 116.74 107.9 118.03 146.1 6a 6b 100 123.1 117.9 117 111.5 L13.5 120 120.76 106.6 121.52 151.1 122.6 121.3 117 112.8 112.5 116 112.99 105.8 125.24 129.0 a/ Each collaborator ran two trains simultaneously each run. b/ Error during sampling or analysis. Sample deleted. 577 Only two percent of the measurements were not carried through successfully. -Other points were rejected as a result of applying Grubbs' test for multiple outliers [l]. 1 This test is based on criteria tabulated for ratios of the sum of squares of deviation for a reduced sample (obtained by omitting the possible outliers) to the sum of the squares for the entire sample. The method has the advantage that it does not suffer from problems of repeated application or from masking. Use of Grubbs' criteria resulted in the elimination of only three of the 118 cylinder gas sample data points. These are underlined in Table 1. Measurements made at the beginning of the test agreed with those made three days later, since mean values agreed statistically and there was little change in precision. This indicated that day and experience were not significant factors in the results and that polling the data was justified. Analysis of the pooled data revealed a negative bias that was a function of concentration within the range studied. It had an average value of -4.8 percent and may be attributed to incomplete absorption of hydrogen sulfide from the gas sample or to incomplete recovery of cadmium sulfide from the impingers. The standard deviation was also an increasing function of concentration. The coefficient of variation and the within laboratory and between laboratory coefficients of variation were also calculated and are listed in Table 3. The bias, which is also listed, has a magnitude that is approximately equal to the coefficient of variation. 
Table 3. Precision and accuracy summary

                                           Standard gas cylinders   Source simulator samples
Total coefficient of variation, %                  7.0                       6.6
Within lab coefficient of variation, %             4.3                       4.6
Between lab coefficient of variation, %            5.5                       4.7
Relative bias, %                                  -4.8                      +2.8

Source simulator measurements were made at approximately the same concentration levels as the standard cylinder gases (Table 2), but with five runs per level. Application of Grubbs' criteria eliminated 17 data points. Analysis of variance was performed and is summarized in Table 3. These values are very similar to the values determined from the standard cylinder gas data. It may therefore be concluded that no deterioration in precision resulted from the interferents present in the source simulator samples. However, when the average bias was calculated, the pattern obtained was unlike that which appeared in the standard gas cylinder measurements. The bias was positive at low H₂S concentrations and became negative at higher concentrations. The range of the bias did not exceed the precision of the data. The positive bias values obtained at low H₂S concentration may reasonably be attributed to the interfering substances present in the source simulator samples.

4. Conclusion

The primary results of this collaborative test are shown in Table 3. The precisions are good and the biases are small and stable. Method 11 has been shown to yield accurate measurements over a range of H₂S concentrations and in the presence of substantial interference levels. The method functioned well under simulated field conditions.

¹Figures in brackets indicate the literature references at the end of this paper.

References

[1] Grubbs, F. E. and Beck, G., Extension of Sample Sizes and Percentage Points for Significance Tests of Outlying Observations, Technometrics 14, 847-854 (November 1972).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

EVALUATION OF INTERLABORATORY COMPARISON DATA BY LINEAR REGRESSION ANALYSIS

Donald E. King
Ontario Ministry of the Environment
Laboratory Services Branch
P.O. Box 213, Rexdale, Ontario M9W 5L1

1. Introduction

The Ontario Ministry of the Environment Laboratory Services Branch has been increasingly involved in interlaboratory comparison studies with municipal, industrial and commercial laboratories in Ontario whose data could be used in assessing routine water and waste water quality. The objective has been not only to determine that differences exist in some instances, but to attempt to define the nature of the disagreement so as to provide advice as to the possible corrections required. The approaches presently under development permit the use of linear regression techniques by ensuring that intercomparison samples cover the range of analytical concentration of interest. However, in the course of applying linear regression analysis to such data it has become apparent that this technique, if misapplied, can easily lead to improper conclusions, such as to the presence or absence of bias between laboratories. At the same time it is able to direct attention to several potential areas of interlab incompatibility.

By rights, linear regression analysis should not be applied when comparing methods or laboratories where the precision of observation, or analytical measurement, is similar.
However, its ready availability and simplicity make it attractive to the average scientist or engineer who has little or no formal training in statistics. The more elegant techniques are unknown to most of us and may require information and appreciation of purpose not readily available.

Linear regression is a statistical tool, and like all tools it must be applied with care in recognition of the assumptions inherent to it. Since the inclusion of linear regression programs in even hand-held calculators increases the possibility of misuse, the purpose of this paper is to demonstrate the effects of such misuse and thereby provide a mechanism for checking the validity of linear regression findings when applied to the evaluation of interlaboratory comparison data.

2. Review of Linear Regression

In order to point out the problems in interpreting regression and correlation data, it is important to appreciate their derivation. Therefore the following is a review aimed at clarifying the difference between what linear regression gives us and what in fact is required in order to assess interlab differences.

Given paired data: (X₁,Y₁), (X₂,Y₂), ..., (Xₙ,Yₙ)
Average: X̄ = ΣX/n;  Ȳ = ΣY/n
Deviation: x = (X − X̄);  y = (Y − Ȳ)
Variance: s_x² = Σx²/(n−1);  s_y² = Σy²/(n−1)

The following review is simplified by discussing the data (X,Y) in terms of deviations (x,y) from the averages (X̄,Ȳ). For calculation purposes,

Σx² = ΣX² − nX̄²;  Σy² = ΣY² − nȲ²;  Σxy = ΣXY − nX̄Ȳ.

Linear regression assumes that the variability in the Y data set can be 'explained' as partially dependent upon a corresponding change in the X data. The reference point for these deviations is at (X̄,Ȳ). It is assumed that none of the scatter seen in a plot of Y versus X is caused by uncertainty in the X or independent variable. Given a deviation x and a suitable coefficient b, the corresponding deviation ŷ = bx can be estimated. The remaining deviation in y is 'unexplained' by regression, and b is calculated to minimize this residual variance.

Residual 'unexplained' deviation: (y − ŷ) = (Y − Ŷ)
Residual variance: s_y·x² = Σ(y − ŷ)²/(n−2)

This is clearly a variance based on the moving average Ŷ rather than Ȳ. By substituting ŷ = bx this equation can be rewritten

s_y·x² = Σy²(1 − r²)/(n − 2), where r² is shorthand for (Σxy)²/(Σx²Σy²).

Correlation coefficient r: by comparing the equations for s_y² and s_y·x² it can be seen that r² is the fraction of Σy² 'explained' by using b. The correlation coefficient r is frequently used to determine whether sufficient y-data scatter has been explained to justify the statement that b ≠ 0. It tends to unity as data scatter is reduced.

Regression coefficient b: b = ŷ/x = Σxy/Σx². The use of b minimizes the residual variance but is overly sensitive to error in points distant from (X̄,Ȳ).

Variance of r: s_r² = (1 − r²)/(n − 2)
Variance of b: s_b² = s_y·x²/Σx²; therefore s_b² = s_r²(Σy²/Σx²) = s_r²(s_y²/s_x²)

The equation ŷ = bx can be written Ŷ = a + bX, where a = Ȳ − bX̄, or 'inverted' as X̂ = a′ + b′Y, where b′ = 1/b and a′ = −a/b.¹ The intercept coefficient a is the value of Ŷ at X = 0, whereas a′ is the value of X̂ at Y = 0. Since this line rotates about the point (X̄,Ȳ), an error or uncertainty in b magnifies the error or uncertainty in a.

¹The symbol ^ indicates the assumed dependent variable for regression purposes, and the symbol ′ indicates the equation coefficients for calculating X rather than Y.
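The formulas of this section can be transcribed directly into a short computation; the data below are invented for the illustration.

```python
# Direct transcription of the section 2 formulas (illustrative data):
# b = Sxy/Sxx, a = Ybar - b*Xbar, r^2 = Sxy^2/(Sxx*Syy),
# residual variance s_yx^2 = Syy*(1 - r^2)/(n - 2).

X = [2.0, 5.0, 9.0, 14.0, 20.0, 27.0]
Y = [2.4, 5.1, 9.8, 14.9, 20.3, 28.6]

n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
Sxx = sum((x - xbar) ** 2 for x in X)
Syy = sum((y - ybar) ** 2 for y in Y)
Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))

b = Sxy / Sxx
a = ybar - b * xbar
r2 = Sxy ** 2 / (Sxx * Syy)
s2_yx = Syy * (1.0 - r2) / (n - 2)

print(f"b = {b:.4f}, a = {a:.4f}, r^2 = {r2:.5f}, residual variance = {s2_yx:.4f}")
```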
3. Interlaboratory Comparison

Many intercomparisons are poorly designed. One or two samples are distributed, analyzed, and the resulting data are averaged. Based on some criterion, outliers from the mean are identified, but there is no way to identify the nature of the discrepancy. The Youden technique is better in that, if properly applied, it can discriminate between random and systematic error. There are situations where this assessment could be wrong. Since there are two sources of systematic error, one in the blank determination and one in slope calibration, it is possible for the two errors to cancel in certain ranges of concentration.

Some studies compare two laboratories based on a series of samples and use the t-test to demonstrate that no (statistically) significant difference exists between the averages X̄ and Ȳ, and therefore no difference between laboratories. Yet regression analysis may demonstrate a slope difference of 5 to 10 percent.

Usually when comparing laboratories we can expect similar levels of precision of observation. However, other factors may affect the overall data scatter in unknown ways. Since linear regression puts extra weight on points far from (X̄,Ȳ), one or two large errors may grossly affect the estimates of b and a, particularly when little data is available. Suspect points should therefore be deleted in the initial evaluation. (If the line is found to pass through these points they will in fact confirm what otherwise might have been an unusual finding.)

The residual scatter in a plot of Y versus X about the regression line is measured by s²y.x. This will be the sum of several variances. Thus

s²y.x = s²ox + s²oy + s²Δ

where s²ox is the observational variance of lab X, s²oy is the observational variance of lab Y, and s²Δ is the variance due to sample handling, etc.

When natural samples are being split in the field, or sample homogeneity is not certain, s²Δ can be quite significant. Usually one knows one's own analytical or observational variance, e.g., s²ox. If sy.x >> sox then either soy or sΔ is dominant and it is proper to assume initially that X is the independent variable (unless sΔ may be entirely assignable to lab X). If sy.x ≈ √2·sox then sΔ may be negligible and the two labs will have equal precision. If sy.x ≈ sox then X is the most significant source of data scatter, and therefore should not have been chosen as the independent variable.

Data compatibility depends more upon freedom from systematic error than upon comparable precision of analysis. We expect, and hope, to find no significant difference between laboratories at zero concentration, and a ratio of 1:1 between paired results over the concentration range of interest. However, since regression analysis provides estimates of X̄, Ȳ and b, any error in the estimation of 'b' will magnify the estimated difference 'a' between labs at the detection limit. Unfortunately this is the area in trace analysis where data compatibility is most important and most difficult to achieve, so the estimates must be valid. In the following discussion emphasis is placed on the potential for error in 'b' and 'a' resulting from misapplication of the linear regression technique. The variances s²y.x and s²b are of no value in estimating the extent of such error, since they measure precision, not accuracy.
4. Interlaboratory Regression

When comparing two laboratories over a range of concentration on the basis of a limited amount of paired data, it is usually invalid to assign all of the data scatter to lab Y (often lab Y is the other guy), and yet linear regression does exactly this. In the absence of other information, 50 percent of the time the wrong data set will be arbitrarily defined as the dependent variable, and therefore the calculated regression coefficient will be invalid. Obviously there are two regression lines, both of which must be examined. These are:

Y on X regression: Ŷ = a1 + b1·X
X on Y regression: X̂ = a2' + b2'·Y

The latter equation can be 'inverted' for comparison with the former. Thus

X on Y, inverted: Ŷ = a2 + b2·X

The relationship between b1 and b2 is as follows. Since

b1 = Σxy/Σx² and b2 = 1/b2' = Σy²/Σxy,

thus b1·b2 = Σy²/Σx² = s²y/s²x, and b1/b2 = (Σxy)²/(Σx² Σy²) = r².

Therefore, since r² < 1, b1 is always less than b2. If, in fact, it were proper to assign all scatter to the Y data set, then b1 would be a valid estimate of the slope relating labs Y and X, and b2 would be in error high by 100(1 − r²) percent. If the opposite were true, b2 would be correct and b1 would be low by the same amount. Of course the truth lies somewhere between these extremes, as shown in figure 2. In fact, if the data scatter can be equally assigned to both laboratories, then both b1 and b2 will be in error by an equal amount and √(b1·b2) will be the best estimate; if r² = 1, then b1 = b2 = √(b1·b2).

If sΔ is negligible or evenly attributable to both data sets, then a difference of about 2× in the analytical precision of the laboratories is sufficient to define the better lab as the independent variable and make b1 the better estimate.

There is a way to resolve this dilemma. The most valid estimate will be the one which is most independent of the data set selected. Thus if the total set is randomly or otherwise divided into subsets, and b1, b2 and √(b1·b2) are calculated for each subset, one of them will be appreciably more 'stable' over all subsets.

5. Division Into Subsets

The following data (Tables 1 and 2) were used to prepare figure 1 to demonstrate the difference between the Y on X and X on Y (inverted) regression lines. If this data set is subdivided in any one of four ways, other estimates of b1 and b2 can be obtained (see fig. 3).

Figure 1. The least squares line of best fit minimizes Σ(y − ŷ)² given the deviations x = X − X̄. (Plot of ŷ = bx, where b = Σxy/Σx².)

Figure 2. For any set of data two regression lines can be calculated. Their validity depends on the ratio of the respective precisions of observation. (Figure 2 consists of four separate panels, a) to d).)

Figure 3. Use of data subsets may verify regression line validity. In this case b2 varies by less than 2 percent whereas b1 varies over a range of 18 percent. (Figure 3 consists of four panels: first 20 data pairs, last 20 data pairs, high 20 data pairs, low 20 data pairs.)
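As a sketch of the comparison described above, the two slope estimates b1 and b2 and their geometric mean can be computed for the full data set and for subsets to check which is the more 'stable'. The Python below uses arbitrary illustrative data and hypothetical helper names; it is not taken from the paper.

```python
# Sketch of the b1, b2 and sqrt(b1*b2) comparison described above.

def slope_estimates(X, Y):
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    Sxx = sum((x - xbar) ** 2 for x in X)
    Syy = sum((y - ybar) ** 2 for y in Y)
    Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
    b1 = Sxy / Sxx          # Y on X slope
    b2 = Syy / Sxy          # X on Y slope, inverted (1/b2')
    bg = (b1 * b2) ** 0.5   # geometric-mean slope, best when scatter is shared equally
    return b1, b2, bg

def subset_slopes(X, Y, k=2):
    # crude stability check: split the paired data into k consecutive subsets
    size = len(X) // k
    return [slope_estimates(X[i * size:(i + 1) * size], Y[i * size:(i + 1) * size])
            for i in range(k)]

# Arbitrary illustrative values:
X = [10, 20, 30, 40, 50, 60]
Y = [12, 18, 33, 39, 48, 62]
print(slope_estimates(X, Y))
print(subset_slopes(X, Y, k=2))
```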
Table 1. Raw data in mg/l (Lab A = data set Y; Lab B = data set X; high values marked *)

       a)           b)           c)           d)
   A      B     A      B     A      B     A      B
  44*    48    38     36    24     28    57*    53
  27     33    57*    65    35     30    17     25
  42*    51    75*    66    82*    77    34     40
  49*    41    63*    60    50*    46    55*    49
  30     37    52*    55    41     44    83*    74
  46*    52    43     39    30     27    78*    75
  21     23    46*    51    22     26    41     46
  38     36    50*    45    38     37    38     44
  17     21    57*    52    70*    64    60*    58
  35     32    70*    64    45     47    22     26

Table 2. Slope and intercept estimates

Subset    b1        b2        a1 (mg/l)   a2 (mg/l)     Ȳ       X̄
1)       1.058     1.226      -2.98       -10.59       45.0    45.4
2)       1.181     1.228      -7.98       -10.13       46.1    45.8
3)       1.117     1.370      -4.69       -19.19       59.3    57.3
4)       1.370     1.249      -1.62       -10.47       31.8    33.4
All      1.134     1.228      -6.14       -10.40       45.6    45.6

The four subsets examined were: 1) columns a) and b); 2) columns c) and d); 3) high data, marked by asterisk; 4) low data, remaining.

Except in case 3) the X on Y inverted regression estimates are very stable and the slope estimates fall in a range of about 2 percent. This suggests that the overall precision of data submitted by lab B is worse than that from lab A. The Y on X regression is not a good predictor of the blank difference between labs and is lower in the slope estimate by about 8 percent. In case 3) √(b1·b2) = 1.237 and the corresponding intercept estimate is -11.58, in fair agreement with the b2 estimates for the other subsets. This suggests that in the higher range the precision of the two labs is more nearly equal.

In this example r varies from 0.89 for sets 3) and 4) to 0.98 for set 2). The residual standard deviation varies from sy.x = 4.1 to 5.7, so that the standard deviation for lab A probably falls between 1 and 3 mg/l whereas lab B probably falls in the range 3 to 5 mg/l. These estimates can be compared to known values if available.

Obviously when comparing two laboratories only relative biases can be detected. No knowledge of the absolute accuracy is gained unless the data set contains known standard reference materials. There should always be a plausible explanation for any significant bias observed between laboratories. Since most laboratories concentrate on slope control, it is not usual to observe severe slope differences. Blank control tends to be neglected.

6. Multilab Intercomparison

In some recent studies conducted by the Quality Control Laboratory of the Ontario Ministry of the Environment, a mix of natural, standard, and spiked samples (6 or more) has been distributed to as many as 10 to 15 laboratories and analyzed for 6 to 9 parameters. The intercomparison design is intended to permit linear regression analysis, but successful interpretation of the study depends upon the development of a concise format for presenting the massive amount of data generated, in a form readily appreciated by the participants. If the data are plotted and a linear regression equation given, most readers will be led to the appropriate deduction. But how can this be done when perhaps 100 diagrams are needed?

A procedure is being developed, however, for attacking this type of study, based on 'difference linear regression', which is outlined below. As noted already, regression requires that the X data set be more precise than the Y data. Since the standard deviation of an average is better than that of the individual data, the X data set is chosen to be the average reported by all participants on a given sample. What we want is to examine the difference between the individual laboratory's data and any suitable reference value, such as the average for each sample. Therefore why not calculate D = (Y − X) and regress D on X, so that if d = (D − D̄) = (y − x), then b3 = Σxd/Σx².
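A minimal sketch of this 'difference regression' is given below in Python, using the eight illustrative data pairs tabulated in Table 3 in the next subsection; the helper names are hypothetical. With these data the slope and intercept come out at approximately -0.136 and 7.74, the D on X values listed in Table 3.

```python
# Sketch of the difference regression described above: D = Y - X regressed on X.
# The data are the eight illustrative pairs of Table 3 below.

X = [10, 30, 12, 32, 60, 80, 62, 82]
Y = [20, 20, 25, 25, 70, 70, 75, 75]
D = [y - x for x, y in zip(X, Y)]       # per-pair differences

n = len(X)
xbar = sum(X) / n
dbar = sum(D) / n
Sxx = sum((x - xbar) ** 2 for x in X)
Sxd = sum((x - xbar) * (d - dbar) for x, d in zip(X, D))

b3 = Sxd / Sxx                          # slope of D on X  (approx. -0.136 here)
a3 = dbar - b3 * xbar                   # intercept        (approx.  7.74 here)
print(b3, a3)
```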
Figure 4. Comparison of Y on X, X on Y, D on X and D on Y data plots and regression lines. (Figure 4 has two parts, a) and b).)

Table 3. Data used for plots of Y vs. X, X vs. Y, D vs. X and D vs. Y

Y = 20  20  25  25  70  70  75  75
X = 10  30  12  32  60  80  62  82

X̄ = 46.0, Ȳ = 47.5; paired-t = 0.39
r = 0.927; r(D,X) = -0.362; r(D,Y) = 0.015

Ŷ = 7.74 + 0.864X     (Y on X)              X̂ = -8.956 + 1.157Y    (Y on X, inverted)
Ŷ = 1.225 + 1.0060X   (X on Y, inverted)    X̂ = -1.218 + 0.9941Y   (X on Y)
D̂ = 7.74 - 0.136X     (D on X)              D̂ = 1.218 + 0.00594Y   (D on Y)

Figure 5 shows Y on X and D on X plots from a recent intercomparison study. Difference plots occupy much less space for the same amount of information. (Note that the data set was chosen to demonstrate X precision poorer than the Y precision; therefore the valid regression is X on Y or D on Y.)

Figures 6, 7 and 8 demonstrate how three laboratories compare for nine parameters relative to reference averages for each of six samples, calculated from data submitted by six participants. (One laboratory submitted duplicate results for some parameters.) Only six parameters were analyzed directly by each laboratory; the other three were calculated from the data supplied to permit comparison and to demonstrate interchange of nitrate and ammonia due to sample aging. The vertical scale covers differences D ranging between ±10 percent of the maximum X given. Even without the data, relative precision and accuracy can be assessed readily and interactions between parameters can be observed.

7. Summary

Linear regression techniques can be successfully used to compare laboratories provided the precautions described are taken. The use of difference linear regression permits multi-lab, multi-parameter, mixed sample and standards intercomparisons to be evaluated, whereas at present data from such studies, when attempted, sit unevaluated. Proper evaluation should clarify and identify possible sources of bias and should not leave this task to the participant alone.

Figure 5. Interlaboratory comparison of a series of natural samples analyzed for ammonia. Note the clarity of presentation in the difference plot. (Panel annotations: Y = 0.004 + 1.02X, r = 0.995, s = 0.013 mg/l; D = 0.004 + 0.018X, r(D,X) = 0.156, s(D.X) = 0.013.)
Figure 6. Multi-sample, multi-parameter comparison of three laboratories versus calculated reference values by difference regression.

Figure 7. Multi-sample, multi-parameter comparison of three laboratories versus calculated reference values by difference regression.

Figure 8. Multi-sample, multi-parameter comparison of three laboratories versus calculated reference values by difference regression.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

POTENTIAL ENFORCEMENT USES OF EMISSION TEST COLLABORATIVE STUDIES

Louis Paley
Division of Stationary Source Enforcement
Environmental Protection Agency
Washington, DC 20460, USA

and

Walter S. Smith
Entropy Environmentalists, Inc.
Research Triangle Park, NC 27711, USA

1. Introduction

Over twenty-eight documents have been written by EPA or its contractors on the subject of collaborative studies of stationary source emission measurement methods. These delineate the results of evaluations conducted on nine of EPA's reference test methods (Nos. 2, 3, 5, 6, 7, 8, 9, 10 and 104). The results of these studies have been used for various purposes by the scientific and legal community. The purpose of this paper is to describe three major ways in which enforcement agencies can utilize the results of collaborative tests.

The primary application of these results for enforcement is in estimating the reliability of test methods. Since most agency determinations of the compliance status of a source are based on emission measurement data, the degree of reliability of these data is of the utmost importance.

2. Discussion

The between-laboratory standard deviation is the statistic provided by the collaborative studies which best quantifies a method's reliability. This term estimates the degree of agreement to be expected among different laboratories that independently collect and analyze "identical" samples. The between-laboratory standard deviation is frequently called the standard deviation of reproducibility, or just "reproducibility". A value twice that of the "reproducibility" indicates the limit within which one can have 95% confidence in the final value.
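A minimal sketch (Python, hypothetical helper name) of how these figures are used follows: the 95% confidence limit is twice the reproducibility, and, as described in the next paragraphs, a single-run reproducibility is divided by the square root of the number of runs when the standard is based on an n-run average. It is an illustration of the stated arithmetic, not an EPA procedure.

```python
# Sketch of the reproducibility arithmetic described in this section.

from math import sqrt

def confidence_limit_pct(single_run_reproducibility_pct, n_runs=1):
    # adjust the single-run reproducibility to an n-run average, then double it
    reproducibility = single_run_reproducibility_pct / sqrt(n_runs)
    return reproducibility, 2.0 * reproducibility   # (n-run reproducibility, 95% limit)

# Method 5 example discussed below: 12.1% single-run, three-run average
print(confidence_limit_pct(12.1, n_runs=3))   # -> approx. (7.0, 14.0) percent of the mean
```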
Compliance with the New Source Performance Standards (NSPS) and the National Emission Standards for Hazardous Air Pollutants (NESHAP) is based upon an average of more than a single run. Since EPA collaborative results have been based on a single run or sample, these results must be adjusted by dividing them by the square root of the number of runs required to be averaged in the standard. For example, the reproducibility of Method 5 (particulate matter) collaborative test results at a municipal incinerator was 12.1%. Conversion of that value to a three-run basis is accomplished by dividing the reported value by √3, thus yielding a three-run reproducibility factor of 7.0%. These values and the associated values for the 95% confidence level are presented below for several reference test methods.

Table 1. Reproducibility values and associated values for the 95% confidence level of EPA reference test methods(a)

Method   Parameter                                   Reproducibility      95% Confidence Level
                                                     (% of mean value)    (% of mean value)
2        Volumetric flow                                   3.2                   6.4
5        Particulate matter (incinerator)                  7.0                  14.0
6        SO2 (power plant)                                 2.4                   4.8
7        NOx (power plant)                                 2.7                   5.4
8        Sulfuric acid mist (sulfuric acid plant)         38.2                  76.4

(a) Quantities recalculated from those given in the collaborative test reports in order to reflect the number of runs required by the NSPS and NESHAP.

Once enforcement personnel have a good estimate of the reliability of a method as it is applied to a specific category of sources, they can make a more intelligent judgment of the compliance status of a specific source. As an example of this application, consider the submittal of valid Method 5 test data by the operator of a municipal incinerator. Agency personnel can be 95% confident that if a second test team conducted a valid test at the facility, while it was under identical emission conditions, the second set of results would be within 14% of those obtained by the first team.

Most of the collaborative tests have indicated that the reference test methods are very reproducible. However, in the case of the sulfuric acid mist test and the early particulate tests, for example, the results appeared to indicate that Method 8 is not very reproducible. It is unclear to what degree this result is affected by limitations of the method, poor quality assurance procedures of testers, the spatial and temporal variations of the emission stream during the tests, or other problems. Until tests which have this degree of uncertainty are carefully repeated to evaluate and pinpoint this variability, enforcement agencies must be very cautious when using these data.

A second application of the collaborative tests is to identify any problem areas associated with the design and use of reference test methods. The most important findings of this type were the necessity for having highly reliable methods which are clearly and thoroughly written, and for the implementation of effective quality assurance procedures as an integral part of all emission testing programs. The collaborative studies showed that at that time a crude state of the art existed among some "average" stack sampling laboratories. In some cases, obvious gross errors were the norm. The most recent test demonstrated that the presence of qualified on-site observers and the use of meaningful quality assurance procedures can reduce the probability of most of the gross errors.

3. Conclusion

Because of the identified problem areas in the methods, proposed revisions for these methods were published in the June 8, 1976, Federal Register.
The majority of the revisions centered around (1) more detailed explanations; (2) more detailed calibration procedures; and (3) detailed quality assurance procedures.

Another valuable use of collaborative studies is to give the tolerances one should consider when setting revised or additional standards for existing sources. When such a standard is written it should contain (or reference) a specific testing method. As in the case of EPA's NSPS and NESHAP standards, the standard should take the method's reproducibility factor into account. For example, if the agency decides that a certain type of source needs to be restricted to ten pounds per hour in order to obtain suitable ambient air quality, and the measurement method has a 10% reproducibility factor, then the standard should be set at 8.3 pounds per hour in order to ensure (with 95% confidence) attainment of the desired ambient air quality.

Part XII. AIR POLLUTION, WATER ANALYSIS

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

ION CHROMATOGRAPHY - A NEW ANALYTICAL TECHNIQUE FOR THE ASSAY OF SULFATE AND NITRATE IN AMBIENT AEROSOLS

James D. Mulik, Ralph Puckett, Eugene Sawicki
Environmental Sciences Research Laboratory
U.S. Environmental Protection Agency
Research Triangle Park, NC 27711, USA

and

Dennis Williams
Northrop Services Corporation
Research Triangle Park, NC 27711, USA

1. Introduction

There are many methods currently available for the assay of sulfate and nitrate in ambient aerosols. These methods appear to be inadequate because of poor sensitivity and/or selectivity, or are cumbersome to use. H. Small, T. S. Stevens and W. C. Bauman [1]¹ recently reported a new technique called ion chromatography (IC). The technique uses ion exchange chromatography, eluant suppression and conductimetric detection, and is ideal for the analysis of sulfate and nitrate in ambient aerosols. Ion exchange chromatography with conductimetric detection has been attempted in the past for the assay of anions and cations, with little success because of the high background produced by the electrolyte or eluant used. Dr. Small uses a novel technique of eluant suppression, which allows the use of a universal conductivity detector. Eluant suppression is carried out by means of a second ion exchange column that reduces or suppresses the unwanted eluant ions without affecting the eluting ion species. This communication describes the first successful application of ion chromatography to the analysis of sulfate and nitrate in ambient aerosols.

2. Experimental

A Model 10 Ion Chromatograph (Dionex Corp., Palo Alto, Calif.) was used for the analysis of water soluble sulfate and nitrate in ambient aerosols. A schematic of the flow system is shown in figure 1. The flow system consists of a separator or analytical column, a suppressor column, four solvent reservoirs, an injection valve with a 0.5 ml sample loop, two Milton Roy fluid pumps, a conductivity detector, and a valving system to direct the flow through various parts of the instrument. The system uses air-activated Teflon slider valves throughout.

¹Figures in brackets indicate the literature references at the end of this paper.

Figure 1. Ion chromatograph flow system. (Schematic components: regenerant timer and valve, flush valve, sample loop, regenerant pressure manifold, separator and suppressor columns, analysis/regeneration alternate paths, waste lines.)
For the analysis of sulfate and nitrate, the separator column (500 mm × 3 mm i.d.) contains a strong basic anion exchange resin and the suppressor column (250 mm × 6 mm i.d.) contains a strong acid ion exchange resin. For the data described herein the eluant is 0.003 M NaHCO3 / 0.0024 M Na2CO3 prepared in distilled deionized water. The water should have a conductivity of less than 10⁻⁵ ohm⁻¹ cm⁻¹. The separator column separates anions in a background of carbonate eluant, which then pass into the suppressor column where the sample anions and the carbonate eluant are converted to their acid forms and pass unretarded into the conductivity detector. The carbonate is converted into carbonic acid, which has a very low conductivity (eluant suppression), whereas the nitrate and sulfate are converted to nitric and sulfuric acid, which have a high conductivity, thus producing the sensitivity that heretofore has been unattainable.

3. Results

The minimum detectable level (MDL) of sulfate and nitrate was found to be 0.1 µg/ml; the detection limit can be improved by going to a larger sample loop. The relative standard deviation is 3 percent for sulfate and 1 percent for nitrate at the 95 percent confidence level for 10 replicate injections at the 5 µg/ml level.

Our initial experiments with the IC technique involved the assay of hi-vol glass fibre filter strips (3/4 in × 8 in) containing known amounts of sulfate and nitrate. The strips were extracted by soaking in 25 ml distilled water overnight. The strips were washed three times with 5 ml of distilled water through a Buchner funnel and the final volume brought to 50 ml. Aliquots of the 50 ml were taken for injection into the ion chromatograph. In most cases it was necessary to make dilutions. The concentrations found by IC compared to the actual spiked concentrations are shown in table 1.

Table 1. Glass fiber hi-vol filter strips impregnated with sulfate and nitrate standards

                  NO3⁻ (µg/strip)                    SO4²⁻ (µg/strip)
Sample #     Added     Found    % Diff. (IC)    Added     Found    % Diff. (IC)
6195         1381      1441       +4.0           477       477       -6.0
6196         1381      1286       -7.0           477       467       +2.1
6197         1381      1385       +0.1           477       438       -1.5
7089         1116       936      -16.0          3857      3553       -7.8
7090         1116      1100       -1.4          3857      3769       -2.3
7091         1116       993      -11.0          3857      3786       -1.8
8189          229       218       -5.0          5078      4530      -10.8
8190          229       240       +5.0          5078      5183       +2.1
8191          229       201      -12.0          5078      5251       +3.4
5184         Blank      0.61       --           Blank      7.3        --

The large negative bias shown in several samples is probably due to the extraction process and not to the analytical technique. It emphasizes the need for an improved extraction procedure.

Several common anions that might possibly interfere with the IC method for the analysis of sulfate and nitrate are shown in table 2. The concentration of sulfate and nitrate was 5 µg/ml, with the interfering anions at the same concentration.

Table 2. Common anions that may interfere with the IC method for analysis of SO4²⁻ and NO3⁻

Anion        Interference with SO4²⁻    Interference with NO3⁻
Fluoride            None                       None
Chloride            None                       None
Bromide             None                       None
Nitrite             None                       None
Sulfite             None                       None
Silicate            None                       None
Carbonate           None                       None
Phosphate           None                       None
Sulfide             None                       None

Our earlier research [2] showed that bromide and phosphate would interfere because they could not be resolved from the nitrate with the 250 mm anion analytical column. Bromide and phosphate are no longer interferences, as shown in table 2, because we are now using a 500 mm anion analytical column which resolves the bromide and phosphate from the nitrate.
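As an illustration of how the spiked-strip comparisons in table 1 can be reduced, the Python sketch below converts an extract concentration back to a per-strip amount and computes the percent difference from the spiked value. The dilution handling (found = extract concentration × dilution factor × 50 ml final volume) and the helper names are assumptions for illustration, not the authors' documented formula; the numbers are hypothetical.

```python
# Hedged sketch of the recovery comparison in Table 1 (hypothetical values).

def found_per_strip(conc_ug_per_ml, dilution_factor=1.0, final_volume_ml=50.0):
    # amount recovered from one strip, assuming a 50-ml final extract volume
    return conc_ug_per_ml * dilution_factor * final_volume_ml

def percent_difference(added_ug, found_ug):
    return 100.0 * (found_ug - added_ug) / added_ug

found = found_per_strip(conc_ug_per_ml=13.8, dilution_factor=2.0)   # 1380 ug/strip
print(found, percent_difference(1381.0, found))                     # approx. -0.07 %
```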
Several water extracts of actual ambient aerosols were assayed by the IC procedure. These extracts were also analyzed by the methyl thymol blue colorimetric procedure for sulfate and by the copper-cadmium reduction colorimetric procedure for nitrate. The comparison of the sulfate and nitrate analyses by the IC technique with those of the two colorimetric procedures is shown in table 3. The excellent agreement between the IC procedure and the two colorimetric procedures for nitrate and sulfate indicates that IC is certainly a promising method for both sulfate and nitrate.

Table 3. Comparison of the ion chromatographic (IC) method with colorimetric methods for sulfate and nitrate in actual ambient aerosols

                       SO4²⁻ (µg/ml)                         NO3⁻ (µg/ml)
Sample        Ion                Methyl              Ion                Cadmium
Number        Chromatography     Thymol Blue         Chromatography     Reduction
1               65.1               66.2                 9.23              14.8
3               79.4               72.9                21.2               20.4
5               29.2               30.9                 6.4                7.0
7               52.9               53.2                23.6               23.5
9               35.5               35.3                 9.3                9.6
11              88.8               82.1                 4.3                4.5
13              23.6               25.1                 5.2                5.9
15              35.9               36.5                17.5               18.2
17              47.6               48.7                12.3               12.4

4. Conclusion

Although additional research is needed on interferences, repeatability, and accuracy, sufficient information has been presented herein to demonstrate that the ion chromatographic technique can measure sulfate and nitrate in ambient aerosols with a sensitivity and selectivity that heretofore has been unattainable. One of the most attractive features of the IC method is that it has the potential of becoming a multipollutant analyzer. The development of ion chromatography for the assay of other pollutants such as NH4⁺, gaseous NH3, aliphatic amines, sulfite, nitrite, phosphate, halides, SO2 and NO2 is also under investigation.

References

[1] Small, H., Stevens, T. S., and Bauman, W. C., Anal. Chem. 47, 1801 (1975).
[2] Mulik, J., Puckett, R., Williams, D., and Sawicki, E., Analytical Letters 9(7), 653 (1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THE USE OF A GAS CHROMATOGRAPH-MICROWAVE PLASMA DETECTOR FOR THE DETECTION OF ALKYL LEAD AND SELENIUM COMPOUNDS IN THE ATMOSPHERE

Donald C. Reamer, Thomas C. O'Haver and William H. Zoller
Chemistry Department
University of Maryland
College Park, Maryland 20742, USA

1. Introduction

Considerable interest in the determination of organo-metallic compounds has been generated by studies which indicate that organic mercury and arsenic compounds are present in natural waters and in the atmosphere. Organic forms of heavy metals are generally more toxic to man due to increased solubility. Atmospheric levels of organic mercury have been found in the ng/m³ range [1]¹ and organic arsenic in the sub-ng/m³ range [2]. Arsenic, mercury, lead, and selenium are now known to be methylated by microbial organisms. The high atmospheric enrichment of elements such as selenium has prompted investigators to propose a vapor phase for these enriched elements, their volatility resulting in their possible release from natural or anthropogenic processes [3]. Selenium is one of the elements found to have an enrichment factor of 10,000. While biomethylation of selenium is known to occur, there is little information as to the species present in the atmosphere. The origin of organic lead compounds (lead additives to gasoline) is well known, but there is still a need for more data to ascertain the environmental effects of this pollutant.

¹Figures in brackets indicate the literature references at the end of this paper.
2. Discussion

During atmospheric sampling, the high concentration of organics and water present in the atmosphere relative to that of organo-metallic species can cause interferences, requiring analytical techniques that can satisfactorily measure down to these levels while maintaining a relatively interference-free performance. Several instrumental methods have been employed for the determination and identification of organo-metallic species: gas chromatography-atomic absorption for atmospheric organic lead [4], membrane probe emission detection for organic mercury and arsenic in air and water [1,2,5], and gas chromatography-microwave plasma detection (GC-MPD) for organic arsenic environmental samples [6]. Electron capture has been used as a GC detector for organo-metallic analysis, but suffers from poor sensitivity to some metal species and instability due to fouling by contaminants from column bleed.

The GC-MPD has proven its sensitivity and selectivity for a number of metal species, primarily in chelated form. This paper describes the applicability of a GC-MPD system with wavelength modulation for background correction to the analysis of atmospheric samples of alkyl lead and selenium species.

The GC-MPD system is basically the same as that described by McCormack, Tong, and Cooke [7], with some instrumental modifications. The system comprises a transparent quartz capillary which contains the effluent from the GC. The capillary is contained within a 3/4-wave cylindrical microwave cavity. The argon plasma contained within the capillary is generated by a 200-watt Microtron microwave generator. The signal is viewed at right angles to the quartz capillary, which is parallel with the entrance slit of the monochromator. The wavelength modulation system, which is similar to that described by Epstein and O'Haver [8], consists of a torque motor and a 1 in × 1 in × 1/8 in quartz plate mounted at the entrance slit within the monochromator. The desired wavelength of the monochromator is modulated, via the vibrating quartz plate, over a small wavelength interval, and the resulting ac component of the signal is measured by a frequency-selective ac detector system. Wavelength modulation was used to remove two types of background interference: broad-band emissions resulting from organic molecular species in the sample, and residual impurities in the argon and background plasma emissions.

The high concentration of organics in atmospheric samples, relative to that of alkyl metals, makes it difficult to achieve good chromatographic separation prior to the sample entering the plasma region. For selenium analysis at 196 nm, the 193 nm atomic carbon line generates a sloping continuum-type background interference, which can be substantially reduced with wavelength modulation. For the analysis of organic lead compounds at 405.78 nm there are few interfering emission lines, so wavelength modulation is not as necessary. The selectivity ratio of the selenium 196 nm emission line, when compared to methanol, is approximately 6 × 10³ in the dc mode and 6 × 10⁵ in the ac mode. The selectivity ratio of the lead 405.78 nm emission line, when compared to methanol, is approximately 1 × 10⁴ in the dc mode and 8 × 10^ in the ac mode. Consequently, complete resolution of the solvent, organic species and alkyl-metal species is not required in the ac mode. The GC conditions can be selected somewhat to enhance selectivity and sensitivity. The detection limits, defined as the amount of the element which produces a signal twice the size of the background noise, were approximately 12 pg for dimethyl selenide and 40 pg for tetraethyl lead in the ac mode, and 7 pg for selenium and 26 pg for lead in the dc mode.

Approximately one percent hydrogen was added to the argon during the tetraethyl lead analysis to minimize a memory effect which occurred within the quartz capillary. Some of the lead, after being decomposed in the hottest portion of the plasma, plated onto the quartz walls. When the next aliquot was injected, the solvent removed some of this lead from the walls, producing an erroneous lead signal following elution of the solvent. The intensity of repeated injections was also reduced as a result of this lead plating. Hydrogen was found to be better than nitrogen or oxygen in reducing this lead-depositing effect.

The sorbent collection system for tetraethyl lead employed a Teflon cartridge tube filled with ten percent SE-52 on 80/100 mesh Chromosorb-P. The collection tube was maintained in a methanol-dry ice slurry during sampling and stored in dry ice prior to analysis. Samples of ambient air were drawn through a 0.2 µm Nuclepore filter and the sample cartridge at a flow rate of 0.3 m³/h. The filter retained lead particulates, and the vapor-phase tetraethyl lead that passed through was quantitatively retained by the sorbent. The recovery of the organic lead was accomplished by a two-step procedure. Since low-power microwave plasmas are sensitive to water, it was necessary to remove the water by freeze drying. A miniature freeze-drying system was used to desorb the trapped gases and the water from the sorbent. The drying procedure removed the water and volatile lead species. Following this step the aqueous sample was extracted with benzene. The recovery and accuracy of this method were evaluated by collecting simulated air samples which were spiked with 200 ng of tetraethyl lead. The overall efficiency of the method ranged from 83 percent to 108 percent at this concentration level.

Gas and particulate samples were collected for two-hour intervals. The particulate lead samples were digested for six hours on a hot plate in a solution of 25 ml water, 1 ml of perchloric acid and 25 ml of a 4:1 mixture of nitric acid and hydrochloric acid. The Nuclepore filters and standards were analyzed by flameless atomic absorption using a Varian Model 63 carbon rod furnace. The GC-MPD optima for tetraethyl lead analysis are shown in table 1. The results of the samples collected in the Baltimore Harbor Tunnel are presented in table 2. The organic lead values are in the same range as those reported by other methods [9,10]. The average percentage of organic lead to particulate lead is approximately 1.9 percent.
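Two of the numerical checks described above translate directly into short calculations: the detection-limit criterion (signal equal to twice the background noise) and the recovery of the 200-ng spikes. The Python below is a hedged sketch with hypothetical helper names and illustrative numbers; it is not the authors' data-reduction code.

```python
# Sketch of the detection-limit criterion and spike-recovery check described above.

def detection_limit_pg(background_noise, sensitivity_per_pg):
    # amount giving a signal twice the background noise, for a linear response
    return 2.0 * background_noise / sensitivity_per_pg

def percent_recovery(found_ng, spiked_ng=200.0):
    return 100.0 * found_ng / spiked_ng

print(detection_limit_pg(background_noise=0.6, sensitivity_per_pg=0.05))  # 24.0 pg
print(percent_recovery(found_ng=183.0))                                   # 91.5 %
```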
The GC conditions can be selected somewhat to enhance selectivity and sensitivity. The detection limits, defined as the amount of the element which produces a signal twice the size of the background noise, was approximately 12 pg for dimethyl selenide and 40 pg for tetraethyl lead in the ac mode and 7 pg for selenium and 26 pg for lead in the dc mode. Approximately one percent hydrogen was added to the argon during the tetraethyl lead analysis to minimize a memory effect which occurred within the quartz capillary. Some of the lead, after being decomposed in the hottest portion of the plasma, plated onto the quartz walls. When the next aliquot was injected, the solvent removed some of this lead from the walls producing an erroneous lead signal following elution of the solvent. The intensity of repeated injections was also reduced as a result of this lead plating. Hydrogen was found to be better than nitrogen or oxygen in reducing this lead depositing effect. The sorbent collection system for tetraethyl lead employed a Teflon cartridge tube filled with ten percent SE-52 on 80/100 mesh Chromosorb-P. The collection tube was main- tained in a methanol-dry ice slurry during sampling and stored in dry ice prior to analyzing. Samples of ambient air were drawn through a 0.2 ym Nuclipore filter and the sample cartridge at a flow rate of 0.3 m 3 /h. The filter retained lead particulates, and the passed vapor phase of tetraethyl lead was quantitatively retained by the sorbent. The recovery of the organic lead was accomplished by a two-step procedure. Since low power microwave plasmas are sensitive to water, it was necessary to remove the water by freeze drying. A miniature freeze drying system was used to desorb the trapped gases within the sorbent and the water. The drying procedure removed the water and volatile lead species. Following this step the aqueous sample was extracted with benzene. The recovery and accuracy of this method was evaluated by collecting simulated air samples which were spiked with 200 ng of tetraethyl lead. The overall efficiency of the method ranged from 83 percent to 108 percent at this concentration level . Gas and particulate samples were collected for two-hour intervals. The particulate lead samples were digested for six hours on a hot plate in a solution of 25 ml water, 1 ml of perchloric acid and 25 ml of a 4:1 mixture of nitric acid and hydrochloric acid. The Nuclipore filters and standards were analyzed by flameless atomic absorption using a Varian Model 63 carbon rod furnace. The GC-MPD optima for tetraethyl lead analysis are shown in table 1. The results of the samples collected in the Baltimore Harbor Tunnel are presented in table 2. The organic lead values are in the same range as those reported by other methods [9,10]. The average percentage of organic lead to particulate lead is approximately 1 .9 percent. 610 Table 1 Operating parameters: GC-MPD optima for tetraethyl lead analysis GC system: Pb: Se: 10% SE-52 on 80/100 mesh Chromosorb-P (2 ft x 1/8 in) 15% Carbowax 20M on 60/80 mesh Chromosorb-W (10 ft x 1/8 in) Quartz capillary: Carrier gas : Carrier gas flow rate: Column temperature: Injection temperature: Microwave power: Monochromator: Slit height: Spectral bandpass: Wavelength: Modulation interval: Photomultiplier tube: 1.0 mm i . d . , 6.0 mm o . d . 
Table 2. Results of air analyses from the Baltimore Harbor Tunnel

Sampling time    Volume of air       Tetraethyl lead      Particulate lead
                 sampled (m³)        (µg/m³)              (µg/m³)
12 - 2 pm            0.7                 --                19.1 ± 1.7
 2 - 4 pm            0.6              0.8 ± 0.023          20.6 ± 1.6
 4 - 6 pm            0.6              0.13 ± 0.02          19.8 ± 1.4
 6 - 8 pm            0.6              0.07 ± 0.013         14.9 ± 1.2
 8 - 10 pm           0.6              0.12 ± 0.014         16.3 ± 1.1

3. Conclusion

In conclusion, the GC-MPD is well suited for atmospheric organo-metallic analysis due to its sensitivity and selectivity for volatile metal-containing species. The limitation of this GC detector lies in the preparation of the sample for introduction into the plasma. Due to its susceptibility to overloading, the plasma can be quenched if the concentration of sample or solvent is too high. Preliminary results indicate that the addition of hydrogen to the plasma has minimized the other problem in this system, the development of carbon and other metal deposits. This study has shown that the GC-MPD system with wavelength modulation can be used for atmospheric alkyl lead analysis. The instrument has also been shown to have the capability of detecting minute amounts of volatile selenium compounds, but a quantitative collection system has yet to be fully developed.

References

[1] Johnson, D. L. and Braman, R. S., Environ. Sci. and Technol. 8, 1003 (1974).
[2] Johnson, D. L. and Braman, R. S., Chemosphere 6, 333 (1975).
[3] Zoller, W. H., Gladney, E. S. and Duce, R. A., Science 183, 198 (1974).
[4] Chau, Y. K., Wong, P. T. S. and Saitoh, H., J. of Chromat. Sci. 14, 162 (1976).
[5] Soldano, B. A., Bien, P., and Kwan, P., Atmospheric Environ. 9, 941 (1975).
[6] Talmi, Y., Anal. Chim. Acta 74, 107 (1975).
[7] McCormack, A. J., Tong, S. C., and Cooke, W. D., Anal. Chem. 37, 1470 (1965).
[8] Epstein, M. S. and O'Haver, T. C., Spectrochimica Acta 30B, 135 (1975).
[9] Purdue, L. J., Enrione, R. E., Thompson, R. J., and Bonfield, B. A., Anal. Chem. 45, 527 (1973).
[10] Harrison, R. M., Perry, R., and Slater, D. H., Atmospheric Environ. 8, 1187 (1974).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

THE DUTCH NATIONAL AIR POLLUTION MONITORING SYSTEM - A FOCAL AND REFERENCE POINT

T. Schneider
Rijks Instituut voor de Volksgezondheid
Nederland

1. Introduction

Over the last few years environmental monitoring has become an important issue in the Netherlands. Scientific, technological and political thinking has been focussed on it; instruments and techniques have been developed. The Netherlands has one of the highest densities of population in the world, together with major industrial centers. Meteorological conditions and wind direction in this part of the world are extremely variable, and so are pollution conditions. Not only will the emission in a given air space vary from day to day and hour to hour, but so will its chemical composition and place of origin. Complicating the matter is the fact that this country's neighbors, Belgium and West Germany, are also heavily industrialized. So, in addition to the pollution generated internally, the Netherlands must often cope with that coming across international boundaries, from the industrial concentrations in the Ruhr area and northern Belgium.
It is not surprising, therefore, that the history of monitoring and surveillance is a long one in the Netherlands. Using a multitude of methods and laboratories, a national picture was not exactly easy to obtain.

In the late sixties a decision was taken to create a national air pollution monitoring network to achieve comprehensive surveillance throughout the country. It was designed to provide ultimately the following information: a reliable estimate of the quantity, composition and origin, international as well as national, of the pollution; data for establishing trends in the degree of pollution from year to year and the influence of zoning decisions, growth of industry, traffic and pollution on these trends; evaluation of the effectiveness of abatement strategies; and information for short-term warning purposes, to allow forecasting of undesirably high degrees of pollution under unfavorable dispersion conditions.

2. Discussion

The network consists of 217 monitoring stations, of which 109 form a regular base-line grid covering the entire country. The remaining stations cover special topographical features, such as densely inhabited areas, large industrial centers and areas along the national borders. Each station is equipped to monitor, on a regular basis, one or more polluting components in the atmosphere and to transmit the data to its regional center. Some of the stations also monitor meteorological factors such as wind velocity and wind direction. In the first phase the network is equipped with monitors for SO2 and wind speed and wind direction only. The capabilities of approximately 80 stations will be expanded to measure in addition CO, NO, NO2, and O3. The first monitors for the measurement of these pollutants have recently passed the tests, and the first series will be installed this year in one of the regions (South Holland) of the network. Each measuring instrument is calibrated by remote signal from the computer in a regional center.

At each region information is collected every minute; minute values and hourly averages are calculated. The central communication unit, located in the National Institute of Public Health (R.I.V.) in Bilthoven, automatically collects information from the regions and stores it on magnetic tapes and discs. It calculates and presents hourly averages on a line printer. It checks pollution levels in the regions. Threshold values are set, and monitoring stations exceeding these values are automatically indicated by lights on an illuminated map. All these and additional information can be presented on a CRT display, a logging typewriter, an analog recorder
Apart from this mobile addition to the network, the effect of air pollution on plants is studied at 30 stations of the national network. Special indicator plants, specific for one or more pollutants are used. Among the indicators used are spinach as an indicator for ozone and S0 2 , Petunia as an indicator for ethylene, tobacco as an indicator for ozone, and grass and lettuce as an indicator for PAN. Ozone damage on tobacco plants especially was found all over the country, which for ozone indicates the importance of a surveillance over a large area. A prominant role in the whole of the national system is reserved for the final refer- ence point of the system, the dynamical calibration unit in Bilthoven. Here all instru- mental methods to be used in the Dutch network (and others in between) have been examined. Although most of them were rejected at first we finally succeeded in convincing a few manufacturers of the advantages of the development of air pollution monitors that could pass even the severe tests specified by the RIV. These tests were executed on the calibra- tion unit built especially for this purpose. The unit consists of an air cleaning device (compressor and heatless dryer with dust filters) that deliver dry gas with a dewpoint below -60°C and without C0 2 . CO and NO. The massflow is selectable between and 50 K liters/min. A specially constructed evaporation block is used to supply water vapor to the carrier gas. The "pollution" is added using permeation tubes (S0 2 , H 2 S, CH 3 SH, N0 2 ); permeation tubes and photolysis of N0 2 for NO or UV radiation of 2 and comparison with N0 2 from a permeation tube for 3 ; or cylinders traceable to NBS (CO, NO and C 2 H t+ ) . The permeation tubes are kept at a temperature of 25 ± 0.03°C. The flow of the mixing gas is determined by a volumemeter with optical regulation (inaccuracy ^ 0.5 percent). Taking into account the uncertainties in the permeation rate and in the air flow, the inaccuracy of the total system is for S0 2 - H 2 S - vinylchloride and CO ^ 1%; for N0 2 < 2%, for NO (photolysis) ^ 3% and 3 (gasphase titration) ^ 4%. Specifications that are determined at the calibration unit are among others: lower detectable limit, inaccuracy, precision, interference, sensitivity, lag time, rise and fall time, zero drift and span drift. External factors that are determined are: influence of temperature and humidity. The control of the most important characteristics—zero drift and span drift—should be performed regularly at the monitoring station itself. For this purpose a portable calibration unit has been developed. With this unit a dynamic multi-point calibration can be performed in the field which uses a mixture of known amounts' of S0 2 , supplied by per- meation tubes, with known volumes of cleaned air. The flow rate is controlled by two thermostated critical orifices in conjunction with a vacuum pump. The overall error in the calculated test gas concentration is ^ 4 percent. 614 3. Conclusion Details such as those for the calibration and reference systems given above could also be outlined for the necessary maintenance and servicing systems used in connection with the Dutch national network. The final interest lies in the receipt of the signals of reliable measurements of all the stations at the national center in Bilthoven. Once a reliable monitor has been developed a transmission network can be constructed to carry the signals from a station to the regional and national centers. 
In the end it is just as important, if not more so, that a reliable system remains a reliable system. To reach that goal a load of hard but rewarding work has to be performed. 615 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) ANALYSIS AND CALIBRATION TECHNIQUES FOR MEASURING AIRBORNE PARTICULATES AND GASEOUS POLLUTANTS I. Delespaul, H. Peperstraete and T. Rymen Chemistry Department Belgian Nuclear Research Centre Mol , Belgium 1. Introduction A. Heavy metals network Since May 1972, the daily average levels of Ba, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn have been measured in airborne dust, collected in fifteen stations [l^] 1 . The lay-out of this network is given in figure 1. The large cities of Antwerp, Brussels, Charleroi and Liege have three stations each. Background levels are measured in Oostende (seaside) and Botrange (inland). A pilot station at Mol is used for further control of the network. NETHERLANDS LUXEMBURG Figure 1. Belgian network for the determination of heavy metals (ZM-network) . figures in brackets indicate the literature references at the end of this paper. 617 2. Experimental Samples are collected on a 24-hour basis, using high-volume samplers with cellulose filters. Filters are changed manually each morning at 9:00 a.m. Emission spectrometry has been retained as the analytical technique because it is a good compromise between detection limit, accuracy, speed and price. The set-up is composed of a direct reading spectrometer (MBLE-SM 150), a multisource (ARL-6900 F), a measuring unit (MBLE-PV 8700), a PDP 11/20 computer with interface and teletype, and a direct link to the central IBM 370 multi-user computer. Sample preparation starts by cutting a 5 cm 2 portion from a loaden filter and wetting it with a solution containing 10 mg NaCl (buffer) and 2 micrograms In and Pd (Internal standards). These filter strips are ashed at 440 °C for 2 hours, and mixed with 100 mg graphite powder. A fraction of this mixture is pressed in a L 4006 graphite electrode from NCC, and excited in a 7.5 A D.C. arc for 30 seconds. The calibration of the spectrometer is made with synthetic reference samples obtained either by dry mixing of oxides or by absorbing aliquots of solutions of the metals onto filter paper. Drift on the calibration is cross-checked by repeating previously analyzed samples. The accuracy is controlled by intercomparison with neutron activation, atomic absorption spectrometry and x-ray fluorescence [3,4]. These calibration data are given in table 1. Table 1 Emission spectrographs calibration data Element Wave length Detection Upper limit Deviation 3 on Accuracy limit (10 -8 cm) (ng/m 3 ) Ba 5535 5 Cd 2288 4 500 8.5 1.06 Co 3453 5 1000 — 1.03 Cr 4254 1 150 18.9 0.89 Cu 3247 1.5 1250 8 1.04 Fe 3734 12 25000 9.1 1.13 Mn 2576 7 1150 7 1.01 Ni 3492 2.5 1500 28.6 0.99 Pb 4057 20 5000 8.1 0.96 V 4379 4 600 13.8 0.96 Zn 1 2138 10 Zn 2 3345 60 50000 12.6 0.96 Deviation = lOoW — - — n = number of duplicates , _ 1st reading - 2nd reading 1st reading + 2nd reading Expressed as the ratio of the emission spectrometric result to the mean of the results of the other methods. of working range (ng/m 3 ) duplicate samples (%) 1000 500 1000 150 1250 21.5 8.5 18.9 8 25000 1150 1500 5000 600 9.1 7 28.6 8.1 13.8 60 50000 12.6 618 3. 
3. Results

Interpretation is made on a yearly basis, starting from the measurements stored in the data bank. The cumulative frequency distribution is calculated for each station. The results make it possible to specify the variability of the daily levels, and their average and highest level over a long period. An example of the visualization of the general situation for Pb over the Belgian area is given in figure 2.

Figure 2. Median values and 95 percentiles of the daily lead levels in Belgium during the period 5/73-4/74.

A. Gaseous inorganic pollutants

Gaseous pollutants are sampled to provide information on local pollution levels in urban and industrial areas. The sampling period is of the order of 0.5 to 1 hour and sampling is usually performed with several samplers simultaneously. The general approach consists in locating the possible sources by simultaneous sampling up-wind and down-wind, by operating a continuous monitoring station in the area, and by completing the campaign with an "at random" sampling for statistical evaluation. Campaigns were organized for measuring SO2, NO, NO2, NH3, Cl⁻ and F⁻ for periods ranging from two months to two years, with an average daily amount of 30 to 60 samples. The most commonly applied analytical techniques are summarized in table 2. The sampling device is shown in figure 3.

Table 2. Analysis of inorganic gaseous pollutants

Pollutant   Analytical method             Sampling volume   Sampling time   Detection limit (ng/m³)   Instrumental method
SO2         colorimetry [5]                    30 l             1/2 h              15                 coulometry & flame photometry
NO & NO2    colorimetry [6]                    12 l             1/2 h              18                 chemiluminescence
NH3         colorimetry(c)                     30 l             1/2 h              15                 --
Cl⁻         colorimetry(d)                     60 l             1 h                 5                 coulometry
F⁻          ion selective electrode            60 l             1 h                 0.5               --

(c) Colorimetric method with Nessler reagent.
(d) Colorimetric method with diphenylcarbazone.

Figure 3. Sampling device for inorganic pollutants. (Components: 1 - sample inlet; 2 - impinger; 3 - dryer; 4 - pump; 5 - timer; 6 - rotameter.)

Calibration of methods and apparatus is achieved with a multi-component calibration device (fig. 4). Permeation tubes or pulsed injection are used as pollutant sources. Permeation tubes are made on a routine basis for SO2, NO2, Cl2, H2S, CH3SH and NH3 and have permeation rates in the range of 0.2 to 6 µg of pollutant per minute. The stability of the permeation rate at constant temperature is of the order of 2 percent. Pulsed injection introduces 0.050 or 0.100 ml amounts of pollutants into a gas stream at a rate of 10 injections per minute.

Figure 4. Multi-component calibration device. (Components include: ventilation channel, solenoid valves, flow controller, three-way solenoid valve, mass flowmeter, manometer, temperature registration, permeation tube, thermostatic column, dust filter, aerosol generator, condensor, overflow, mixing chamber, manifold, thermostat, cryostat, rotation pump and water container.)

4. Conclusion

A. Gaseous organic pollutants

The well known differences among organic pollutants with regard to their adverse health effects, either direct (vinyl chloride, polynuclear aromatic hydrocarbons [7-10]) or indirect (via their participation in photochemical smog formation [11-14]), obviously raise the necessity of tracing them individually. Moreover, because of the very low permissible levels, extremely sensitive quantitative analysis and reliable sampling techniques are required.
As a rule, pollution measurement campaigns were performed in two steps: a preliminary campaign, intended to establish an order of priority for the pollutants present in the area to be controlled, and the actual quantitative campaign, performed by sampling in the field and analysis in the laboratory, in conjunction with an automatic control station in the field.

Samples are collected by aspiration of air into "multi-wall" sampling bags that were previously checked for adsorption and memory effects. The sampling set-up is shown in figure 5. The sampling time is normally set at 30 minutes. In the automatic field station, an integrated air mixture representing a 30-minute sample is analyzed. During the preliminary campaigns, preconcentration of the pollutants on activated charcoal [15,16] was used a few times, alternately with direct sampling.

Pollutants are analyzed by gas chromatography. During the preliminary campaign, conditions and instruments are set for a sensitive detection of all the pollutants which are suspected to be present. While this information is accumulating, gas chromatographic conditions are progressively adjusted for fast, sensitive and accurate quantitative analyses. Examples are given in table 3.

Figure 5. Set-up for direct sampling. (Components: 1 - aluminum cylinder; 2 - sampling pipe; 3 - hermetic seals; 4 - sampling bag; 5 - automated pumping system with timer. Five litres of air are sampled in 30 minutes.)

Table 3. Instrumental conditions for organic pollutants

Area: Vinyl chloride plant
Prior pollutants: ethylene; vinyl chloride; 1,1-dichloroethane; 1,2-dichloroethane; dichloromethane; 1,1,1-trichloroethane; trichloroethylene (°); tetrachloroethylene (×)
GC conditions:
- Alumina F-1 column, 5 ft × 1/4 in, operating at 110 °C; carrier gas: N2, 60 ml/min; detection: flame ionization, 150 °C.
- Tricresyl phosphate 5% on Chromosorb W, AW, 80-100 mesh, 7 ft × 1/4 in, operating at 60 °C; carrier gas: N2, 15 ml/min; detector: flame ionization, 150 °C.
- Tricresyl phosphate 5% on Chromosorb W, AW, 80-100 mesh, 7 ft × 1/4 in, operating at 65 °C; carrier gas: N2, 30 ml/min; detectors: (°) flame ionization, (×) electron capture (63Ni), 150 °C.

Area: Paint industry
Prior pollutants: benzene; toluene; ethyl benzene; (m+p)-xylene
GC conditions: 1,2,3-tris(2-cyanoethoxy)propane 15% on Chromosorb W, AW, 8 ft × 1/4 in, operating at 65 °C; carrier gas: N2, 60 ml/min; detector: flame ionization, 150 °C.

Calibration of the instruments is carried out with standard mixtures prepared from gases or liquids. The first step consists of the injection of a known volume of liquid or gas into an evacuated 1-litre flask. After evaporation of the liquid, the flask is filled with nitrogen up to atmospheric pressure. A suitable volume of this primary mixture is then injected into a line while nitrogen is flowing through and entering a sampling bag. The total volume of nitrogen used to fill the bag is measured with a wet gas meter. Calibration with these mixtures proved to be better than 5 percent, based on 20 consecutive preparations over a period of 6 months. The over-all efficiency of the analytical procedure is checked by duplicate measurements and by correlation between analyses in the laboratory and in the automatic field station. For the area of the vinyl chloride plant the reproducibilities of duplicate results are: ethylene 3.9 percent, vinyl chloride 4.9 percent, 1,2-dichloroethane 9.5 percent.
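The two-step dilution just described amounts to a simple volume-fraction calculation. The Python below is a hedged sketch of that arithmetic, assuming ideal mixing and neglecting the volume contributed by the aliquot itself; the helper names and numbers are illustrative, not the authors' values.

```python
# Hedged sketch of the two-step calibration-mixture arithmetic described above.

def primary_mixture_ppm(injected_ml, flask_volume_l=1.0):
    # known volume of pure gas injected into an evacuated 1-litre flask,
    # then filled with N2 to atmospheric pressure
    return 1e6 * (injected_ml / 1000.0) / flask_volume_l     # volume fraction in ppm

def bag_concentration_ppm(primary_ppm, aliquot_ml, bag_volume_l):
    # aliquot of the primary mixture diluted into a bag of measured N2 volume
    return primary_ppm * (aliquot_ml / 1000.0) / bag_volume_l

primary = primary_mixture_ppm(injected_ml=0.5)                              # 500 ppm
print(bag_concentration_ppm(primary, aliquot_ml=10.0, bag_volume_l=50.0))   # 0.1 ppm
```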
4. Conclusion

Finally it should be mentioned that, when combined with the information about sampling location and meteorological conditions, the results obtained always allowed the identification, among a set of plants, of the major sources of the detected pollutants.

This work was sponsored by the Belgian Ministry of Public Health in the form of contracts and was carried out in cooperation with the official authorities.

References

[1] Ministry of Public Health - S.C.K./C.E.N. Mol, Onderzoek naar de niveaus van luchtverontreiniging door zware metalen, Jaarverslag 1974.
[2] Ministry of Public Health - S.C.K./C.E.N. Mol, Onderzoek naar de niveaus van luchtverontreiniging door zware metalen, Jaarverslag 1975.
[3] De Regge, P., Lievens, F., Delespaul, I., and Monsecour, M., Intercomparison of Neutron Activation Analysis with Other Instrumental Methods for the Elemental Analysis of Airborne Particulates, International Symp., Vienna, March 15-19, 1976.
[4] Kretzschmar, J. G., Lievens, F., De Rijck, Th., Verduyn, G., and Delespaul, I., The Belgian Network for the Determination of Heavy Metals, submitted for publication in Atmospheric Environment.
[5] West and Gaeke, Anal. Chem. 28, 1816 (1956).
[6] Saltzman, Anal. Chem. 26, 1949 (1954).
[7] Levine, S. P., Hebel, K. G., Bolton, J., and Kugel, R. E., Anal. Chem. 47, 1075A (1975).
[8] Kennaway, E. L., Brit. Med. J., 1, 564 (1924).
[9] Cook, J. W., et al., J. Chem. Soc., 395-405 (1933).
[10] U.S. Environmental Protection Agency, Air quality data for 1968, Pub. APTD-0978 (1972).
[11] Leighton, P. A., Photochemistry of Air Pollution, Academic Press, New York (1961).
[12] Haagen-Smit, A. J., and Wayne, L. G., in Air Pollution (A. C. Stern, ed.), 2nd ed., Vol. 1, Academic Press, New York, p. 111 (1967).
[13] Altshuller, A. P., J. Air Poll. Contr. Assoc., 16, 257 (1966).
[14] Masters, G. M., Intro. to Environ. Science and Tech., J. Wiley & Sons, Inc., 1974.
[15] Grob, K., and Grob, L., J. of Chromatography, 62, 1-13 (1971).
[16] Burghardt, E., and Jeltes, R., Report H 216-11, IG-TNO, Delft (1974).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

FACTORS GOVERNING THE CONTENTS OF METALS IN WATER

D. J. Swaine
CSIRO Division of Mineralogy, North Ryde, New South Wales, Australia

1. Introduction

If the current deluge of data in environmental science is to be meaningful, certain fundamental aspects must be considered. In the case of heavy metals in natural waters, more needs to be known about (a) the meaning of the content of a metal in water, (b) the factors governing the state of metals in water, (c) reactions at the sediment-water interface, and (d) how to sample, preserve the sample, and analyze, in order to obtain real results even at levels of the order of 1 in 10⁹. The total content of a metal in water is made up of a soluble part (inorganic and organic; usually defined as that passing a 0.45 µm filter), an insoluble part (particulate matter) and that associated with living matter. A variable proportion of the total amount associated with the particulate matter is adsorbed on clay, on organic matter, on iron and manganese oxides, and on thin films of these oxides on minerals or rock particles. These various forms of a metal in water may differ chemically, ranging from simple complexes to highly dispersed colloids.
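The operational partitioning of the total metal content described in the introduction can be written out explicitly. The sketch below is only a bookkeeping illustration, with hypothetical zinc values, of the dissolved (passing a 0.45 µm filter), particulate and biotic fractions.

```python
from dataclasses import dataclass

@dataclass
class MetalPartition:
    """Operationally defined fractions of a metal in a water sample, µg/l.

    dissolved   : passes a 0.45 µm membrane filter (inorganic + organic forms)
    particulate : retained on the filter (adsorbed on clays, oxides, organic matter)
    biotic      : associated with living matter
    """
    dissolved: float
    particulate: float
    biotic: float

    @property
    def total(self) -> float:
        return self.dissolved + self.particulate + self.biotic

    def fraction_dissolved(self) -> float:
        return self.dissolved / self.total if self.total else 0.0

# Hypothetical zinc sample: 3.2 µg/l dissolved, 5.1 µg/l on particulates, 0.4 µg/l biotic.
zn = MetalPartition(dissolved=3.2, particulate=5.1, biotic=0.4)
print(zn.total, round(zn.fraction_dissolved(), 2))
```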
The solubility and the form a metal may have in water depend on several factors, perhaps the most important ones being pH, oxygen concentration, salinity and the nature of the major ions present.

2. Discussion

The significance of these factors has been established by laboratory experiments on the extraction of different rocks by saline solutions (A. M. Giblin and D. J. Swaine, unpublished work). In most natural waters of low salinity, pH and oxygen concentration are the main factors. Adsorption has a major role in the distribution of trace constituents between water and solids. What happens in a particular case depends on the metal, the nature of the solids present and on the chemical environment, typified by pH, oxygen concentration, salinity and the kind of ligands present. In many natural waters, a relatively high proportion of the total metal content may be adsorbed on the particulate matter, but this depends on the factors mentioned above, and in general the status of a metal must be considered in terms of a dynamic system.

Reactions at the sediment-water interface are relevant to the supply of metal ions to natural waters. In the uppermost layer of a sediment there is a zone of intense biological activity, centered on the bacterial breakdown of organic matter. This in turn causes changes in pH, oxygen concentration and organic matter, and increases in the levels of carbon dioxide. Metal ions may compete for bisulphide ions and organic matter of humic acid or fulvic acid types. Some metals in recent sediments were found to be preferentially bound to humic acid, while others were mainly associated with sulphide phases [1]¹. Perhaps the depletion of organic matter, with a consequent decrease in biological activity, causes the consolidation of the sediment, resulting in stabilization of pH and oxygen concentration. Movement of metals to the sediment-water interface is thereby markedly reduced. A better understanding of the supply of metal ions from sediments to water requires research on the kinetics of reactions of metals in sediments. Such studies are essential for a proper evaluation of the effects of metal-rich effluents discharged into natural waters. The dilemma posed by the accumulation of metals in sediments cannot be completely resolved until we have estimates of the rate at which a metal will move from the sediment into the overlying water.

3. Conclusion

The analytical chemistry of metals in waters is difficult, and should take into account geochemical aspects dealing with the origin, composition, distribution and migration of metals in aquatic systems. In sampling, it is essential to realize that the system is dynamic and that geochemical cycles involving inputs from man's activities are imposed on a natural background. Such a notion is similar to the use of blank determinations in analysis. Just as the level of a metal in a blank should be very low, so the level of metals in many natural waters may also be low. The various factors already discussed are pertinent to sampling. It is imperative that the analytical chemist be involved at all stages, namely sampling, analysis and assessment of results; otherwise meaningful results for total contents and for species cannot be expected. The dual role of many metals, namely essentiality and possible toxicity, can only be properly assessed by careful and reliable analysis, and likewise legislation for water-quality parameters should rest ultimately on the work of the analytical chemist.
A plea is made for consideration of the meaning of the content of a metal in water and of the various factors governing the state of metals in water, so that data will be more realistic.

References

[1] Nissenbaum, A. and Swaine, D. J., Geochim. Cosmochim. Acta 40, 809 (1976).

¹ Figures in brackets indicate the literature references at the end of this paper.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

EFFECTS OF WATER SOLUBLE COMPONENTS OF REFINED OIL ON THE FECUNDITY OF THE COPEPOD, Tigriopus japonicus¹

Colin Finney and Anthony D'Agostino
New York Ocean Science Laboratory
Montauk, NY 11954, USA

1. Introduction

Crude and refined oils spilled in open waters often remain unnoticed and their impact unmeasured. However, when they occur in shallow waters and are wind-driven to shore, the biota of intertidal zones invariably becomes heavily impacted [1]². Considerable information has accumulated on the severity of kills resulting from massive spills [2,3,4,5]. There is evidence that certain micro-organisms metabolize oil [6], but the photosynthetic ability of phytoplankton [7] and seaweeds is impaired [8]. Likewise, there is abundant but contrasting information on the tolerance of several intertidal invertebrates [9,10,11], but no data on the effect of petroleum products on trophic relationships and fecundity of marine organisms. Recently, Kontogiannis and Barnett [12] studied the acute toxicity of "crude oil" on Tigriopus californicus. They concluded that the oil killed mainly by preventing the diffusion of oxygen, but speculated that the observed mortality may have been caused also by toxic water soluble substances.

Tigriopus spp. are herbivorous harpacticoid copepods ubiquitous to intertidal zones and tidal pools [13]. Most species are easily reared in the laboratory; one strain has been kept for well over 15 years feeding on Tetraselmis maculata and other micro-organisms [14]. The relatively short duration of its life cycle and its consistent reproductive behavior through many successive generations make these species ideally suited for bioassaying the effects of water soluble fractions of refined oils on growth and fecundity. T. japonicus was reared in a microcosm shared by two algal prey species and several types of bacteria. This trophically linked assemblage was stressed with lethal and sublethal concentrations of the water soluble fractions of fuel oils.

2. Materials and Methods

The water soluble fractions (WSF) of #2 and #6 American Petroleum Institute reference fuel oils were extracted by dispersing twenty-five cm³ of the oils in 250 cm³ of sterile charcoal-treated and filtered seawater. The mixtures were agitated at 25 strokes/min for 24 hours in a 4 °C ambient temperature chamber to prevent bacterial degradation. The oil and water mixtures were allowed to settle in separatory funnels for 24 hours at 4 °C. The water phases containing the WSF were recovered and filtered through 0.22 µm membrane filters. These stock solutions were considered to contain 100% of the extractable WSF, and from these appropriate dilutions were made for inclusion in the test media.

¹ Present address: College of Marine Studies, University of Delaware, Lewes, Delaware 19958.
² Figures in brackets indicate literature references at the end of this paper.
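The dilution series prepared from the WSF stock can be laid out programmatically; the set sizes and percentage levels used in the sketch below anticipate the dose-response design given in the following section (90 tubes in 6 sets of 15, one control set and five WSF levels). The helper is only an illustration: the volumes it returns are simple v/v fractions of the 100 % stock, not the authors' actual pipetting scheme.

```python
def dose_response_layout(wsf_percent_levels=(100, 75, 50, 25, 10),
                         replicates=15, tube_volume_ml=10.0):
    """Lay out one dose-response test: a control set plus one set per WSF level.

    Returns (set_label, wsf_percent, ml_of_WSF_stock_per_tube) tuples, one per tube.
    """
    layout = []
    sets = [("control", 0.0)] + [(f"WSF {p}%", float(p)) for p in wsf_percent_levels]
    for label, pct in sets:
        stock_ml = tube_volume_ml * pct / 100.0   # v/v dilution of the 100 % stock
        layout.extend((label, pct, stock_ml) for _ in range(replicates))
    return layout

tubes = dose_response_layout()
print(len(tubes))            # 90 tubes, grouped into 6 sets of 15
print(tubes[0], tubes[-1])   # ('control', 0.0, 0.0) and ('WSF 10%', 10.0, 1.0)
```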
Details on the maintenance of the stock cultures of Tigriopus, its food algae, and the criteria and procedures used for the standardization of the test method were published elsewhere [15]. Briefly, gravid females of T. japonicus were placed into test tubes (screw-capped, 20 x 125 mm, Pyrex) containing 10 cm³ of a test medium consisting of charcoal-treated sea water of salinity 30 ‰ enriched with micronutrients, essential metals and vitamins [15], the food algae Rhodomonas lens and Isochrysis galbana, and the appropriate quantities of WSF of #2 and #6 fuel oil extracted in sea water. A single dose response test consisted of 90 tubes, grouped into 6 sets of 15 replicates each. One set of 15 tubes functioned as controls; the remaining 5 sets contained serial dilutions of WSF. The parent females were removed as soon as the F1 nauplii had hatched, usually within two days. The resulting progeny was expected to develop into ovigerous females within 21 days, corresponding to the mean generation time for T. japonicus when reared under optimal laboratory conditions [15]. The test period lasted 48 days and allowed the development of two complete generations of Tigriopus. Three tubes from each set were harvested on days 3, 6, 12, 24, and 48. The contents were fixed with a drop of 3% (v/v) formaldehyde solution. Tigriopus was sorted into the different developmental stages and counted.

The number of bacteria found in the microcosms varied between 6.1 x 10⁵ and 9.9 x 10⁵ cells/ml. Seven types were consistently identifiable by their characteristic colonies developed on 2216 Marine Agar, cell morphology, motility and gram staining. The initial food algae inoculum gave 78 x 10³ cells/ml in the test media and was expected to provide ad libitum feeding for the duration of the experiment, provided that the WSF did not inhibit further growth of the algae. Algae counts ranged from 5.1 x 10³ to 8.5 x 10³ cells/ml at the 48th day and end of the test period in both the controls and the WSF-stressed microcosms. The only significant deviations occurred in test tubes containing high concentrations of WSF of #6 fuel oil. In these, the algal and bacterial counts exceeded the above by one or more orders of magnitude, probably because there was no predation of algae or bacteria since Tigriopus died shortly after inoculation.

In order to facilitate numerical treatment, tabulation and graphical representation of the data, it was found convenient to group the 12 different developmental stages of T. japonicus into 7 events, to which, in turn, were assigned numerical values corresponding to the fraction of the life cycle which they represented (table 1). This expedient permitted scoring of the populations found in the experimental tubes and the computation of a single developmental index value which simultaneously expressed both survival and growth [15].

Table 1. Values assigned to grouped developmental events in the life cycle of T. japonicus

Developmental events               Code        Fractional value of life cycle
1st filial generation              F1          0.00
  Naupliar stages 1-3              F1, N1-3    0.09
  Metanaupliar stages 4-6          F1, N4-6    0.20
  Copepodite stages 1-5            F1, C1-5    0.37
  Adult (copepodite stage 6)       A           0.50
  Gravid female                    ♀           0.78
2nd filial generation              F2          1.00
  Naupliar stages 1-3              F2, N1-3    1.09

3. Results

WSF of #2 fuel oil proved very toxic. At concentrations of 100 and 75% (v/v) the females were moribund within 2-3 days. Similarly, at 50 and 25%, although the gravid females survived 3 or more days, no hatching took place.
At 10% nauplii hatched successfully, but survival and growth were suppressed. After 48 days the population was made up of mature adults, but there were no gravid females (table 2).

Table 2. Development of T. japonicus stressed with water soluble fractions of fuel oil

                              Developmental index value (b) at elapsed time in days
Culture conditions            3        6        12       24       48
WSF (a) of #2 oil, % (v/v)
  10                          9        —        32       56       50
  25                          (c)
  50                          (c)
  75                          (d)
  100                         (d)
WSF of #6 oil, % (v/v)
  10                          9        20       35       50       81
  25                          9        9        20       37       50
  50                          9        9        9        20       20
  75                          9        9        9        9        9
  100                         (d)
Controls (e)                  9 ± 0    20 ± 7   29 ± 5   46 ± 6   70 ± 9

(a) Water soluble fraction extracted in sea water.
(b) Developmental index value, calculated mean of 3 replicates.
(c) Nauplii unhatched, gravid female alive.
(d) Gravid female dead, nauplii unhatched.
(e) Mean and ± S.D. of 30 replicates.

The WSF of #6 fuel oil was relatively less toxic. At 100% (v/v) gravid females aborted the egg sacs within 3 days or less and died; at 75% the nauplii were able to hatch but seldom developed past the third naupliar stage. Development stopped at the 4th or 6th naupliar stage at 50% and continued to mature females at 25%. Tigriopus cultured with 10% WSF of #6 fuel oil in the medium showed growth comparable to or better than the controls; on the 48th day nauplii of the F2 generation reached stages 4-6, while the control tubes carried F2 nauplii in stages 1-3 (table 2).

4. Discussion

Water soluble extracts of crude and refined oil are toxic to marine life. The pelagic copepods Acartia clausii and Oithona nana died within 3 to 4 days following exposure to 1 µl of oil per liter of sea water [10]. The response of barnacle larvae was less certain. Smith [16] reported that crude oil had no discernible effect on the survival of nauplii of Elminius modestus, but Spooner [17] found for this species a 1 hour TL50 value of 100 ppm. Crude oil in concentrations of 0.5 to 5% (v/v) was toxic to oyster and sand shrimp. Tigriopus japonicus was less tolerant of the water soluble extract of the #2 than of the #6 fuel oil. This response was expected, since Notini and Hagstrom [18] showed that light fuel oil spilled on an intertidal Fucus community eliminated all gammarids, while a similar spill of heavy fuel oil only slightly reduced the number of epiphytic species associated with this seaweed. The toxicity of the water extract of the #2 fuel oil is attributed to its relatively high content of low boiling aromatic hydrocarbons. Fuel oil #2 may give as much as 400-600 µl of WSF per liter of sea water [19]. A 48-hour extraction of #6 fuel oil gave 5.53 ppm of WSF in sea water [20]. Obviously, both the type of oil and the method used for the extraction may influence the quantity of WSF in sea water. Assuming that the extraction of #6 fuel oil for this study gave a concentration of WSF of the same order of magnitude as that reported by Bender et al. [20], then the results would indicate that Tigriopus could tolerate approximately 0.553 ppm of WSF without loss of fecundity. The growth and fecundity of Tigriopus was dependent on the continued well-being of the algae and bacteria sharing the same stressed microcosm; hence these also were tolerant of the water extract of #6 fuel oil. The significantly larger population densities of bacteria and algae observed in those test tubes which were carrying concentrations of WSF lethal to Tigriopus suggest that the algae and bacteria were more tolerant than Tigriopus.
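The tolerance estimate at the end of the discussion is a one-line calculation that can be made explicit. The sketch below simply scales the literature WSF concentration cited for #6 fuel oil by the v/v dilution of each test level; only the 5.53 ppm figure comes from the text, the rest is illustrative.

```python
def wsf_concentration_ppm(stock_ppm, dilution_percent):
    """Concentration of water-soluble fraction in a test medium prepared as a
    v/v dilution of the WSF stock."""
    return stock_ppm * dilution_percent / 100.0

stock = 5.53  # ppm of WSF reported for a 48-hour extraction of #6 fuel oil [20]
for pct in (100, 75, 50, 25, 10):
    print(f"{pct:3d} % -> {wsf_concentration_ppm(stock, pct):.3f} ppm")
# The 10 % medium, the highest level showing no loss of fecundity, corresponds to ~0.553 ppm.
```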
5. Conclusion

The test method utilized in this study permitted the measurement of growth and fecundity of a harpacticoid chronically stressed by sublethal concentrations of water soluble extracts of fuel oil. Concomitantly, since the fecundity of the harpacticoid depended on the sustained availability of nutritionally adequate food algae and/or bacteria [14], the reproductive behavior of the herbivore reflected the viability of the prey species as associated entities in a trophically linked complex. Successful production of the F2 generation should be the principal, and perhaps only, criterion for the identification of potentially toxic concentrations of pollutants.

References

[1] Straughan, D., Factors causing environmental changes after an oil spill, J. Petroleum Technology, 250-254 (March 1972).
[2] Ottway, S. M., A review of world spillages, 196-1971, Oil Pollution Research Unit, Orielton Field Center, Pembroke, Wales (1972).
[3] Rutzler, K., and Sterrer, W., Oil pollution damage observed in tropical communities along the Atlantic seaboard of Panama, Bioscience, 20, 222-224 (1970).
[4] Bellamy, D. J., Clarke, P. H., John, D. M., Jones, D., Whittick, A., and Darke, T., Effects of pollution from the Torrey Canyon on littoral and sublittoral ecosystems, Nature, 216, 1170-1173 (1967).
[5] O'Sullivan, A. J. and Richardson, A. J., The Torrey Canyon disaster and intertidal marine life, Nature, 214, 448-450 (1967).
[6] Ahearn, D. G. and Meyers, S. P., eds., The microbial degradation of oil pollutants, Workshop, Georgia State University, Atlanta, Publ. No. LSS-SG-73-01, Center for Wetland Resources, Louisiana State University, Baton Rouge (1973).
[7] Nuzzi, R., Effects of water soluble extracts of oil on phytoplankton, in Proceedings of Joint Conference on Prevention and Control of Oil Spills, American Petroleum Institute, Washington, D.C., pp. 809-813 (1973).
[8] North, W. J., Neushul, M., Clendenning, J., and Clendenning, K. A., Successive biological changes observed in a marine cove exposed to a large spillage of mineral oil, in Symp. Poll. Mar. Micro-Org. Prod. Petro., Monaco, pp. 335-354 (1965).
[9] Horn, M. H., Petroleum lumps on the surface of the sea, Science, 168, 245-246 (1970).
[10] Mironov, O. G., Viability of some Crustacea in seawater polluted with oil products, Zool. Zh., 68 (1), 1731-1735 (1969).
[11] Chipman, W. A. and Galtsoff, P. S., The effects of oil mixed with carbonized sand on aquatic animals, U.S. Fish and Wildlife, Special Report, 1, 1-52 (1949).
[12] Kontogiannis, J. E. and Barnett, C. J., The effect of oil pollution on survival of the tidal pool copepod, Tigriopus californicus, Environ. Pollut., 4, 69-79 (1973).
[13] Fraser, J. H., The occurrence, ecology, and life history of Tigriopus fulvus (Fischer), J. Mar. Biol. Ass. UK, 20, 523-536 (1936).
[14] Provasoli, L., Conklin, D. E., and D'Agostino, A., Factors inducing fertility in aseptic Crustacea, Helgolander wiss. Meeresunters., 20, 443-454 (1970).
[15] D'Agostino, A. and Finney, C., The effect of copper and cadmium on the development of Tigriopus japonicus, in Pollution and Physiology of Marine Organisms, J. Vernberg, ed., Academic Press, NY, pp. 445-463 (1974).
[16] Smith, J. E., Torrey Canyon - Pollution and Marine Life, Cambridge University Press, pp. 1-196 (1970).
[17] Spooner, M., Some ecological effects of marine oil pollution, Proceedings, Joint Conference on Prevention and Control of Oil Spills, American Petroleum Institute (Washington, DC, 1969).
[18] Notini, M.,
and Hagstrom, A., Effects of oils on a Baltic littoral community as studied in an outdoor model test system, NBS Special Publication 409, 251-254, U.S. Department of Commerce, Washington, DC (1974).
[19] Gordon, D. C., and Prouse, N. J., The effects of three oils on marine phytoplankton photosynthesis, Marine Biology, 22 (4), 329-333 (1973).
[20] Bender, M., Hyland, J., and Duncan, T., Effects of an oil spill on benthic animals in the lower York River, Virginia, Marine Pollution Monitoring (Petroleum), NBS Special Publication 409, 257-259, U.S. Department of Commerce, Washington, DC (1974).

Part XIII. CHEMICAL CHARACTERIZATION OF AEROSOLS

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

A COMPARISON OF ELECTRON MICROSCOPE TECHNIQUES FOR THE IDENTIFICATION OF ASBESTOS FIBERS

C. O. Ruud, P. A. Russell, C. S. Barrett and R. L. Clark
Denver Research Institute, University of Denver
Denver, Colorado 80208, USA

1. Introduction

It is fairly well accepted that airborne and waterborne asbestos poses a potential health hazard. Sources of asbestos in the environment include asbestos and other mineral mining and beneficiation operations, manufacturing of asbestos-containing consumer products, fugitive dust sources such as unpaved roads, consumer use of asbestos-containing products, and natural weathering of minerals. Natural asbestos minerals are found in many portions of the country, since asbestiform mineralization occurs in rocks that are ubiquitous to all mountainous regions and to a large percentage of the earth's crust. Although many varieties of asbestos minerals exist, only six are generally recognized as having commercial importance. These are chrysotile, which is a sheet silicate serpentine mineral, and amosite, tremolite, actinolite, crocidolite and anthophyllite, which are chain silicate amphibole minerals and which originate in entirely different geologic formations from chrysotile. Chrysotile constitutes about 95 percent of the world's production of asbestos.

Analytical techniques that have been proposed for the determination of concentrations of asbestos in the environment include x-ray diffraction, differential thermal analysis, infrared spectrometry and other methods which do not provide fiber size information. The methods which provide size information are optical, scanning electron and transmission electron microscopy. Because fibers found in the natural environment are often less than a micron in diameter, and because there is no definite information on the dependence of toxicity on fiber size, any valid technique for asbestos fiber concentration determination must be able to provide reliable fiber species identification and accurate size measurements. This limits the methods of examination to electron microscopy. These include transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The scanning electron microscope (SEM) with energy dispersive (ED) x-ray microanalysis has been suggested as a means of identifying asbestos fibers. The SEM has the resolution necessary to detect very small fibers, on the order of 100 angstroms in diameter; however, even with an x-ray microanalysis attachment the identification of fibers is not positive, and x-ray microanalysis requires a fiber at least 400 Å in diameter.
SEM's with field emission electron sources, however, may be able to produce adequate x-ray spectra from smaller fibers. Ambiguities of asbestos fiber identification may also arise from x-rays produced by adjacent or adhering particles, from uncertainties in determining the exact chemical composition of an asbestos mineral due to its chemical change in the environment, or from the fact that a given mineral can exist over a wide range of compositions. Furthermore, the position and attitude of a fiber in the electron microscope can give apparent variations in composition.

On the other hand, many researchers consider the transmission electron microscope (TEM) coupled with selected area electron diffraction (SAED) the only reliable method for identifying asbestos fibers. This method has some disadvantages. However, its overriding advantage is that for the most part it is specific with respect to the identification of chrysotile or amphibole fibers, and it permits accurate size measurement of the fiber even when that size is on the order of a few hundred angstroms in diameter. The highly magnified shadowgraph obtained in transmission electron microscopy is nearly always an accurate representation of the length and width or diameter of the fiber. Chrysotile fibers are usually circular bundles of fibrils or round single fibrils. Often the fibrils can be distinguished by the fact that they are tubular, and the hollow center can be seen in the electron microscope image. This tubular appearance is unique to chrysotile but not always present; therefore, if a fiber does not appear to be hollow, this does not rule out that it is chrysotile. Amorphous material can be attached to the surface and fill the tubes, thereby giving the appearance, as far as density is concerned, that the fiber is solid. At any rate, it is well to have an identification method in addition to morphology for chrysotile, and it is imperative for the amphibole minerals, since non-asbestos materials can appear in the electron microscope to be fibrous, i.e., they may have a 3:1 length-to-width ratio. Also, many chain silicate non-asbestos minerals fracture in the same general way as the asbestos minerals, so that morphology does not lead to a reliable identification. The most effective additional identification method is selected area electron diffraction.

2. Methods and Material

This laboratory has mainly been concerned with distinguishing fibrous submicroscopic asbestos from non-asbestos minerals. In order to compare the relative ability of SEM with ED and TEM with SAED to distinguish the asbestos and non-asbestos fibers, we selected several silicate minerals from our collection of rather well characterized minerals. These were ground to fine particles, and in every case, even for the non-asbestos minerals, fibers were observed in the electron microscope. Six asbestos minerals and seven non-asbestos sheet and chain silicates were studied, and the results from the two electron microscope methods were compared.

3. Discussion

The SEM-ED elemental spectra of the asbestos minerals were indistinguishable from those of certain of the non-asbestos minerals. In every case, the TEM-SAED patterns showed characteristic layer line and spacing patterns for the asbestos minerals which were always distinguishable from the non-asbestos minerals. TEM-SAED analysis is therefore considered the only technique capable of identifying and sizing asbestos particles collected in a low concentration environment.
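For the morphological screening mentioned above, the conventional 3:1 length-to-width criterion can be expressed as a trivial check on dimensions measured from the TEM shadowgraph. The helper below only illustrates that aspect-ratio test and, as the discussion stresses, is no substitute for SAED identification; the threshold and example dimensions are illustrative.

```python
def is_fiber(length_um, width_um, min_aspect_ratio=3.0):
    """Apply the 3:1 length-to-width criterion to dimensions measured from a
    TEM shadowgraph.  Counting rules and size cut-offs vary between protocols;
    this helper is only an illustration of the aspect-ratio test."""
    if width_um <= 0:
        raise ValueError("width must be positive")
    return (length_um / width_um) >= min_aspect_ratio

# A 1.2 µm x 0.03 µm fibril easily passes; a 0.9 µm x 0.5 µm cleavage fragment does not.
print(is_fiber(1.2, 0.03), is_fiber(0.9, 0.5))
```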
A more complete treatment describing the procedures and results of this study will be published in Micron.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

DETERMINATION OF REDUCING AGENTS AND SULFATE IN AIRBORNE PARTICULATES BY THERMOMETRIC TITRATION CALORIMETRY

L. D. Hansen, D. J. Eatough, N. F. Mangelson, and R. M. Izatt
Thermochemical Institute
Brigham Young University
Provo, Utah 84602, USA

1. Introduction

While extensive determinations of the elemental composition of airborne particulates have been made, very little information is available concerning the oxidation states of the elements in these particulates. Also, very little is known about the compounds existing in the particulates. Specific information on the oxidation states and compounds is extremely important in an environmental sense, because the behavior of an element in the environment can differ markedly with oxidation state and with the other species with which it is combined. Toxicity and movement in the biosphere, for example, are both affected by the oxidation state and chemical combination of an element. Specifically, we are interested in the speciation of S, N, and As in airborne particulates.

2. Methods

For the studies we wished to do, it was desirable to have a rapid, inexpensive, broadly applicable, and sensitive method for the determination of specific species in airborne particulates. The classical types of spectrophotometric methods do not lend themselves well to this type of analysis because of the complexity of the samples involved. Electron spectroscopy for chemical analysis (ESCA) has successfully been applied to this problem, but it is slow, the equipment is expensive, and the procedure is complex enough that it cannot be used as a routine method. Since there did not seem to be any other options available for a direct analysis of the solid particulate, we elected to extract the sample and then analyze the supernatant solution. The extractant chosen was 0.1 M HCl, 0.005 M FeCl3, for reasons that have been given previously elsewhere [1]¹.

The analysis of the extractant solution for specific species could be accomplished by a variety of methods; however, the one chosen should be as free as possible from interferences and should give unequivocal evidence of the identity of the species as well as quantitative data on the amount present. Thermometric titrimetry seemed to best fill these requirements. A thermometric titration of a 0.1 M HCl, 0.005 M FeCl3 extract of a sample of airborne particulates with K2Cr2O7 solution provides enough data to identify (from the ΔH value for the reaction) and quantitatively determine (from end-points) a number of reducing agents with a sensitivity in the nanomole range. To date, we have done extensive studies on S(IV) and Fe(II) and preliminary work on S(II), S°, As(III), alcohols, aldehydes and olefins [1]. In order for another substance to interfere with a species of interest, it must have the same ΔG and ΔH values for reaction with Cr2O7²⁻. No such interferences have been found.

¹ Figures in brackets indicate literature references at the end of this paper.

If species specific reagents are available, direct injection enthalpimetric determinations of non-oxidizable species can conveniently be combined sequentially with the thermometric titration to determine such species.
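A minimal sketch of how one segment of such a thermometric titration might be interpreted is given below: the end-point volume fixes the equivalents of reducing agent, and the heat evolved up to the end-point fixes ΔH per equivalent, which is then matched against reference values. The reference ΔH numbers and instrument readings in the example are placeholders, not values from this work.

```python
def identify_reducing_agent(heat_cal, endpoint_ml, titrant_normality,
                            known_dH_kcal_per_eq, tolerance=1.0):
    """Interpret one segment of a thermometric titration with dichromate.

    heat_cal              heat evolved up to the end-point, in calories
    endpoint_ml           titrant volume at the end-point, in ml
    titrant_normality     equivalents of Cr2O7(2-) per litre of titrant
    known_dH_kcal_per_eq  {species: reference delta-H, kcal/eq}; reference values
                          must come from calibration or the literature.

    Returns (equivalents_found, delta_H_kcal_per_eq, best_matching_species_or_None).
    """
    equivalents = endpoint_ml / 1000.0 * titrant_normality
    dH = -(heat_cal / 1000.0) / equivalents       # kcal released per equivalent (exothermic < 0)
    best, err = None, tolerance
    for species, ref in known_dH_kcal_per_eq.items():
        if abs(dH - ref) <= err:
            best, err = species, abs(dH - ref)
    return equivalents, dH, best

# Placeholder reference table and readings (illustrative only):
refs = {"Fe(II)": -8.0, "S(IV)": -3.0, "strong unknown reductant": -11.5}
print(identify_reducing_agent(heat_cal=0.80, endpoint_ml=0.50,
                              titrant_normality=0.20, known_dH_kcal_per_eq=refs))
```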
We have used BaCl2 to determine sulfate sulfur and have recently developed a method using sulfamic acid to determine nitrite nitrogen. Details of the methods for S(IV) and sulfate have been published [1].

3. Results

Thus far we have identified and determined significant amounts of S(IV), sulfate, Fe(II), As(III), Sb(III), and nitrite in samples of airborne particulates from various locations [2,3]. We have also found a reducing agent of unknown identity in a sample from New York City which is much more strongly reducing than Fe(II) and which reacts with Cr2O7²⁻ with liberation of more heat than Fe(II). We conjecture that it is an oxidizable nitrogen compound (not nitrite) which has not yet been detected by any other method. Being as powerful a reducing agent as it is, it almost certainly has implications for human health.

We have done extensive work on the oxides of sulfur in airborne particulates collected along the Wasatch Front in Utah. A map of the area showing our sampling sites is given in figure 1. High-Vol samples were collected from the workroom environment (reverberatory furnace and converter) at the smelter and at several sampling stations in Utah and Salt Lake valleys. The copper smelter located near the southern tip of the Great Salt Lake is the major stationary source of sulfur oxides in this area. The major industrial sources of metal containing aerosols are the copper smelter and a steel mill located near Provo, Utah.

Figure 1. High-Vol collection sites and major point sources in the study region.

The samples obtained at Magna, Kearns, Salt Lake City and Cedar Fort were collected on days when the wind direction was from the smelter towards the sampling site. The samples obtained at Provo and at the two locations north of the steel mill were collected when the wind direction brought the emissions from the steel mill across the sampling site. The Springville and Ogden data are probably representative of normal rural and urban environments in the study area when not influenced by the plume from the two major metallurgical facilities. Table I gives the results of our analyses of the High-Vol samples for sulfite and sulfate. Samples of the particulates collected from inside the smelter have also been analyzed by ESCA, and S(IV), S(VI) and S(II) were shown to be present [5].

Table I. Concentration of sulfite and sulfate in collected particulates
Location         Distance, km   No. of samples   Part. size, µm   SP (a), µg/m3   S(IV), % SO3²⁻   S(VI), % SO4²⁻
Smelter          —              7                >7.0             1450±690        2.1±0.4          2.0±1.1
                                                 1.1-7.0          680±750         1.9±0.5          6.6±4.2
                                                 <1.1             500±630         1.5±0.4          16.0±15.8
                                                 TSP              2630±2000       1.9±0.4          5.9±4.2
Magna            4              3                >7.0             31±9            1.2±1.0          17.7±1.8
                                                 1.1-7.0          47±11           2.8±1.0          30.7±5.2
                                                 <1.1             29±4            2.1±0.9          28.9±9.8
                                                 TSP              107±22          2.3±1.0          26.2±4.1
Kearns           15             2                >7.0             22±1            1.1±0.3          16.7±0.5
                                                 1.1-7.0          34±8            2.7±0.3          19.5±15.1
                                                 <1.1             24±3            1.1±0.6          22.5±12.2
                                                 TSP              80±12           1.8±0.1          19.9±9.9
Salt Lake City   26             2                >7.0             21±2            1.9±0.1          16.7±9.7
                                                 1.1-7.0          33±1            2.8±0.1          21.5±1.3
                                                 <1.1             24±5            1.7±1.2          17.4±0.4
                                                 TSP              78±8            2.2±0.4          18.9±3.0
Cedar Fort       44             1                >7.0             21              1.3              19.9
                                                 1.1-7.0          28              2.5              46.5
                                                 <1.1             15              1.7              19.1
                                                 TSP              64              1.9              31.4
Ogden            62             2                >7.0             43±11           0.5±0.1          5.4±2.7
                                                 1.1-7.0          67±30           0.5±0.4          8.0±0.7
                                                 <1.1             49±18           0.2±0.2          10.7±4.2
                                                 TSP              159±59          0.4±0.2          8.2±1.7
Steel Mill       7 (b)          3                >7.0             30±9            0.9±0.3          12.9±2.8
                                                 1.1-7.0          25±1            1.2±0.3          43.1±6.6
                                                 <1.1             15±4            0.9±0.2          17.5±2.8
                                                 TSP              70±13           1.0±0.1          24.8±2.6
Springville      20 (b)         1                >7.0             59              0.42             6.6
                                                 1.1-7.0          46              0.48             35.8
                                                 <1.1             49              0.39             19.8
                                                 TSP              154             0.43             19.5

(a) SP is the suspended particulates, either total, TSP, or the indicated size fraction.
(b) Kilometers from the steel mill.
(Note: Uncertainties in all tables are standard deviations.)

The concentration of sulfite in the particulates apparently remains constant at about 2 percent as long as the majority of the particulates are derived from the copper smelter plume. The sulfite concentration in aerosols produced by the steel mill facility is about 1 percent by weight. Background levels of sulfite for the area appear to be about 0.5 percent of the total particulate mass. In contrast, concentrations of sulfate in the ambient air near the smelter are much higher than the concentrations found in particulates inside the smelter facility (near the converter and reverberatory furnaces) and are comparable to levels found in ambient aerosols influenced by the steel mill emissions. The background levels of sulfate (Ogden and Springville) appear to be highly variable.

Sulfite ion has been shown to form complexes with Fe³⁺, Hg²⁺, Ag⁺, and Cu²⁺ in aqueous solution [6,7]. The most stable complex is formed by Fe³⁺. An excellent correlation exists between Fe and SO3²⁻ concentrations in the particulates collected from inside the smelter. A Mössbauer spectrum of a composite sample indicates the iron is present in the trivalent state. These data suggest Fe(III) and SO3²⁻ form a complex in the aerosol. The reaction of SO2 gas with metal oxides in particulates to form metal sulfite complexes may be an important reaction in terms of sulfite toxicity, since it provides a mechanism by which sulfite, together with metal ion catalysts, can be deposited deep in the lungs.

4. Summary

To summarize our work on sulfur oxide speciation, sulfite species which are stable to air oxidation have been shown to exist in aerosols produced by a copper smelter. The data suggest the sulfite exists as an Fe(III) complex. Formation of these sulfite complexes is not correlated with either SO2 gas or sulfate. There have been ambiguities in past epidemiological studies on the combined toxicologic effects of SO2 and particulates. These differences may, in part, be due to differences in the actual sulfur species present in the aerosols, suggesting that complete sulfur speciation in aerosols and the toxicity of newly identified species, such as the metal-sulfite complexes, should be investigated further.
Samples of flue dusts from the flue pipes of two copper and three lead smelters have also been analyzed for reducing agents and sulfate by the methods given above [4]. The results are given in table 2. Sulfite and Fe(II) were identified in the dust samples from both copper smelters and in the sample from one lead smelter. As(III) was identified in one sample only. The assignment of the species in samples 2 and 3, which is oxidized by Cr2O7²⁻ with a ΔH value of -11.5 kcal/eq, is not confirmed by other measurements but is believed to be Sb(III).

In summary, thermometric methods of analysis are rapid, inexpensive, and broadly applicable to speciation studies of environmental samples.

This work was supported in part by the Energy Research and Development Administration and by the National Institute for Occupational Safety and Health.

Table 2. Species analyzed thermometrically in a 0.1 M HCl, 2.5 mM FeCl3 extraction of each sample, given as wt% of the total sample

Smelter type   S(IV) as SO3²⁻   S(VI) as SO4²⁻   Fe(II)        As(III)      Sb(III)(?)
Cu             1.54 ± 0.10      5.0 ± 1.9        2.1 ± 0.2     11.7 ± 0.7   n.d.
Cu             0.69 ± 0.10      15.5 ± 0.6       3.5 ± 0.3     n.d.         n.d.
Pb             n.d.             7.5 ± 0.5        12.9 ± 0.4    n.d.         1.35 ± 0.13
Pb             n.d.             1.7 ± 0.7        n.d.          n.d.         1.14 ± 0.13
Pb             0.85 ± 0.11      6.9 ± 3.0        2.3 ± 0.4     n.d.         n.d.

n.d. = not detected above background levels.

References

[1] Hansen, L. D., Whiting, L., Eatough, D. J., Jensen, T. E., and Izatt, R. M., Anal. Chem. 48, 634 (1976).
[2] Smith, T. J., Eatough, D. J., Hansen, L. D., and Mangelson, N. F., Bull. Environ. Cont. Tox., in press.
[3] Hansen, L. D., Eatough, D. J., Mangelson, N. F., Jensen, T. E., Cannon, D., Smith, T. J., and Moore, D. E., Proc. Int. Conf. Environ. Sensing Assessment, Las Vegas, Nev., September 1975, Paper #23-2.
[4] Eatough, D. J., Mangelson, N. F., Hill, M. W., Izatt, R. M., Hansen, L. D., and Christensen, J. J., Final Report to NIOSH on Analysis of Copper and Lead Smelter Dust Samples, 19 March 1976.
[5] The ESCA spectra were obtained by Robert G. Meisenheimer, Lawrence Livermore Laboratory, Livermore, California.
[6] Hansen, L. D., Eatough, D. J., Whiting, L., Bartholomew, C. W., Cluff, C. L., Izatt, R. M., and Christensen, J. J., Transition Metal-SO3²⁻ Complexes: A Postulated Mechanism for the Synergistic Effects of Aerosols and SO2 on the Respiratory Tract, in Trace Substances in Environ. Health, VII, University of Missouri Press, 1974, p. 393.
[7] Sillen, L. G. and Martell, A. E., Stability Constants of Metal-Ion Complexes, Special Publication No. 17, The Chemical Society, London, 1964.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

DETERMINATION OF ACIDIC AND BASIC SPECIES IN PARTICULATES BY THERMOMETRIC TITRATION CALORIMETRY

D. J. Eatough, L. D. Hansen, R. M. Izatt and N. F. Mangelson
Thermochemical Institute
Brigham Young University
Provo, Utah 84602, USA

1. Introduction

Recent reports on studies of the pH of rain water in the northeastern United States [1-3]¹ and in isolated storm events around fossil fuel burning stations [4-6], a copper smelter [7] and an urban center [8] indicate that SO2 (and presumably NOx) [7,9] emissions result in an increase in the acidity of precipitation. In these studies there is a direct relationship between measured sulfate concentration and pH, although the impact of the point source compared to ambient levels on both sulfate and pH is not always predominant [5].
The role SO2, H2SO4 and related particulate emissions may play in the long range transport of acidity is not well understood [9-13]. While it is often assumed that H2SO4 plays a primary role in determining the acidity of such precipitation [1,2,9], a recent study by Frohliger and Kane [11] indicates that acid rains in the northeast are buffered by a weak acid and not by H2SO4. They assume that only one buffering agent is present, although the agent is not identified by their study. Total acidity and pH measurements by Brosset [13] in precipitation samples collected over Sweden are interpreted to show that hydrolysis of Al³⁺ and/or Fe³⁺ controls the acidity of these samples in spite of high SO4²⁻ concentrations.

Acidic species in particulates are not much better understood. Raman spectroscopy [14], thermal degradation techniques [15,16], and the conversion of H2SO4 to a perimidylammonium sulfate [15] have been proposed for the determination of sulfate, bisulfate and sulfuric acid species, but have not been shown to be applicable to the study of ambient aerosol species. The most widely used techniques for identification of acid sulfate species in the environment involve the semi-quantitative sensing of neutral and acid sulfate aerosols by nephelometric [17,18] and infrared [19] techniques, and the quantitative extraction of H2SO4 from NaHSO4 and NH4HSO4 by organic solvents [20,21]. These results suggest that aerosols originating from coastal areas are acidic in nature, while those originating from continental or urban sources are weakly acidic or neutral with respect to HSO4⁻ species. Reported studies, however, do not investigate the acidity of particulates beyond these sulfate species. Thus it appears that techniques which give more detailed information for the study of acidic species in both particulates and precipitation would be valuable. We report here initial studies on the development of one possible technique.

2. Procedure

We have completed preliminary studies on acidic and basic species in flue dust samples collected from two copper and three lead smelters and in a single Hi-Vol New York City particulate sample. Acidic and basic components of the aqueous extracts (1 mg sample/ml H2O) of these samples were characterized as follows:

a. The pH of the water extract solution was measured.

b. Parallel pH and thermometric titration data were obtained for all samples by titrating aliquots of the water extract with HClO4 to pH = 1.5 and with NaOH to pH = 12.5.

¹ Figures in brackets indicate the literature references at the end of this paper.
The pH and calorimetric titration curves for the aqueous extract of the flue dust sample, Lead-1 , are given in figure 1 and the data are summarized in table 1. The assignments are based on comparisons with available literature data [23-25] and on x-ray fluorescence spectroscopic analyses for Ca and higher atomic number elements. The concentration of Ca 2+ is consistent with the elemental data. The assignment of the group with pK = 7.8 and aH-j = -0.2 to phosphate is reasonable. It should be noted that the pH titration curve in the region from pH = 9 to pH = 5 for sample Lead-1 is very different from that for the more acidic flue dust samples. Several species with AHj values ranging from 6 to 12 kcal/mol are seen for the acid samples. These species are transition metal ions which hydrolyze in this pH region. Such species are not discernable in the acid-base data for Lead-1 and are indeed not present as confirmed by the elemental analysis. Table 1 Species identified in the acid-base titrations of the aqueous extraction of flue dust sample lead-1 Probable species £K AHi (kcal/mol ) mmol/g wt% Ca 2+ 10.6 13.1 1.26 4.9 ± 0.5 ?ol~ 7.8 -0.2 0.31 2.9 ± 0.5 soi; - , POl}" <3.6 > 0.0 2.5 --- 644 LEAD SMELTER DUST, pH =8.9 2000 1000 No OH 1000 2000 HCiq^ — - NANOEQ/mg OF SAMPLE Figure 1. Calorimetric and pH titration data for the aqueous extract of sample Lead-1 lead smelter dust. The most striking result from the aqueous extraction studies is the apparent wide range of acidities in the dust samples as summarized in table 2. The two copper smelters give an acidic pH for the extraction. Large quantities of metal ions are also extracted. This is apparent from the x-ray fluorescence analyses and was also verified by the acid- base titrations where hydrolysis of Fe(III), Ca(II), Pb(II), Zn(II) and Fe(II) was readily apparent. Based on the determined pK and AH values, the species which act as buffers for the extraction solution at the extraction pH are probably hydrolyzed Fe(III) and AT (III) species. The extraction pH values for the lead smelters vary from acidic to basic. It is most probable that the species which control the pH of the extraction solutions for lead dust samples 1, 2, and 3 are Ca(II); Fe(III) and H 2 SO^; and Pb(II), Zn(II), and Cu(II); respectively. 645 Table 2 Species analyzed in an aqueous extraction of flue dust samples Acid buffer group at extraction pH Extraction pH mmol/g £K AH(kcal/mol ) 4.23 0.10 3.9 -1.3 4.39 5.09 <3 -0.9 8.9 1.26 10.6 13.1 3.79 5.13 3.7 -0.9 7.2 0.46 7.2 8.6 Elemen tal concentrations determined by x-ray fluorescence, wt% Ca Fe Cu Zn Pb As_ Cd 0.8 0.9 7.4 2.6 n.d. a 1.9 n.d. 0.8 0.8 2.6 1.6 0.2 5.9 n.d. 3.0 0.02 0.03 0.04 n.d. n.d. n.d. 0.9 0.4 0.3 1.0 1.2 0.6 1.0 0.1 \ 0.2 0.8 0.4 1.8 0.07 0.9 Smelter type Copper - 2 Copper - 3 Lead - 1 Lead - 2 Lead - 3 Copper - 2 Copper - 2 Lead - 1 Lead - 2 Lead - 3 n.d. - not detected above background or sensitivity levels It has been previously reported that benzaldehyde will selectively extract H 2 S0 4 in aerosol samples [20,21]. Possible presence of acid sulfate species in sample Lead-2 was checked by extracting 35 mg of the sample with 2 ml of benzaldehyde. The insoluble mater- ial was then removed by centrifugation and 1.0 ml of the benzaldehyde was extracted with 10.0 ml of H 2 0. The resulting aqueous solution was then analyzed for total sulfate and for acid species by calorimetric titration with NaOH. 
The calorimetric titration curve indicated that a strong acid (ΔH = -12.9 ± 1.2 kcal/mol) and two weak acids (ΔH = -5.7 ± 1.2 and -1.0 ± 0.7 kcal/mol) were present. The weak acids were identified as Cu²⁺ and/or Zn²⁺ and Ca²⁺, based on the measured ΔH values for the reaction of these ions with NaOH. The results of these determinations, together with data from the aqueous extraction of the same sample, are summarized in table 3. Twelve percent of the sulfate in the sample was extracted by benzaldehyde. Mass balance from the data in table 3 indicates this sulfate was extracted as bisulfate salts of Ca²⁺ and Cu²⁺ and/or Zn²⁺. These bisulfate salts quantitatively account for the observed pH (table 2) in the aqueous extraction. There is, therefore, no evidence for H2SO4 in the sample. Strong acid was not extracted from any of the other flue dust samples by benzaldehyde.

Table 3. Comparison of results for SO4²⁻, H⁺, Ca²⁺ and Cu²⁺ + Zn²⁺ in H2O and benzaldehyde extractions of the Lead-2 smelter dust sample, given in mmol/g of sample

Solvent        SO4²⁻          H⁺             2×(Cu²⁺ + Zn²⁺)   2×Ca²⁺
H2O            1.79 ± 0.08    ---            0.38 ± 0.10       0.58 ± 0.04
Benzaldehyde   0.22 ± 0.03    0.22 ± 0.04    0.06 ± 0.02       0.13 ± 0.02

The calorimetric and pH acid-base titration curves for the titration of an aqueous extract of the New York City particulate sample are given in figure 2, and the results obtained from these data plus independent determinations of sulfate are summarized in table 4. The initial pH of the extract solution was 5.2, indicating the aerosol is not strongly acidic. The elemental data, table 4 and table 5, also indicate that the particulate sample does not contain sulfate as an ammonium or a transition metal salt. The low levels of V and Ni suggest the sample does not contain emissions from oil burning point sources. The increased solubility of iron and lead in the acid as compared to the aqueous extract is the only substantial change in elemental extraction noted with the marked change in acidity of the extractant solution. The extraction pH of this sample is controlled by the carboxylic acid groups. The data also indicate that this sample would not have much buffering capacity if exposed to moderate levels of acids, i.e., SO2 or NOx.

This work was supported in part by the Energy Research and Development Administration and by the National Institute for Occupational Safety and Health. Appreciation is expressed to T. J. Kneip, New York University, for furnishing the New York particulate sample.

Figure 2. Calorimetric and pH titration data for the aqueous extract of a New York City particulate sample (pH = 5.2; abscissa in nanoeq/mg of aerosol, NaOH and HClO4).
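The mass/charge balance argument for bisulfate salts can be checked directly from the table 3 entries; the short calculation below uses those published values, with the 2× factors for the metal ions carried over from the table's column headings.

```python
# Quantities recovered in the benzaldehyde extract of sample Lead-2 (table 3).
H_plus    = 0.22   # mmol/g H+ (= meq/g)
two_Cu_Zn = 0.06   # meq/g, 2 x (Cu2+ + Zn2+)
two_Ca    = 0.13   # meq/g, 2 x Ca2+
sulfate   = 0.22   # mmol/g SO4(2-), i.e. 0.44 meq/g of negative charge

cation_eq = H_plus + two_Cu_Zn + two_Ca
anion_eq  = 2 * sulfate
print(cation_eq, anion_eq)   # 0.41 vs 0.44 meq/g: balanced within the quoted uncertainties
# A balance this close, with H+ matching SO4(2-) mole for mole, is what supports reading
# the extracted material as Ca and Cu/Zn bisulfate salts rather than free H2SO4.
```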
Table 4. Acid and base species determined in the aqueous extract (1 mg sample/ml H2O) of a New York City particulate sample

Species              ΔHi (kcal/mol)   pK     µmol/g sample    wt%
Carboxylic acid      0.85 ± 0.07      4.7    50 ± 10          ---
Phenol and ammonia   9.10 ± 0.5       9.3    23 ± 5; 23 ± 5   0.04 ± 0.01
Alkyl amine          12.65 ± 0.05     10.9   120 ± 80         ---
SO4²⁻                ---              ---    1340             12.9
HSO4⁻, H2SO4         not present

Table 5. Sulfate and sulfite values determined thermometrically and elemental values determined by x-ray fluorescence spectroscopy for aqueous and 0.1 M HCl extractions of the New York City particulate sample, given as wt% of the total sample

Solvent     S(IV) as SO3²⁻   S(VI) as SO4²⁻   S as SO4²⁻   Ca     V      —      Fe     Ni     Cu     Zn     Pb     Br
H2O         ---              12.9             17.2         1.80   0.03   0.03   0.03   0.04   0.04   0.35   0.17   0.36
0.1 M HCl   0.16             9.8              17.5         2.09   0.07   0.04   3.75   0.05   0.25   0.39   2.16   0.58

References

[1] Position Paper on Regulation of Atmospheric Sulfates, U.S. Environmental Protection Agency, EPA-450/2-75-007 (Sept. 1975).
[2] Likens, G. E., and Bormann, F. H., Acid Rain: A Serious Regional Environmental Problem, Science, 184, 1176 (1974).
[3] Cogbill, C. V., and Likens, G. E., Acid Precipitation in the Northeastern United States, Water Resour. Res., 10, 1133 (1974).
[4] Li, T.-Y., and Landsberg, H. E., Rainwater pH Close to a Major Power Plant, Atmos. Environ., 9, 81 (1975).
[5] Granat, L., and Rodhe, H., A Study of Fallout by Precipitation Around an Oil-Fired Power Plant, Atmos. Environ., 7, 781 (1973).
[6] Jones, H. C., Tenn. Valley Auth. Rep. E-EB-74-1 (1974).
[7] Larson, T. V., Charlson, R. J., Knudson, E. J., Christian, G. D., and Harrison, H., The Influence of a Sulfur Dioxide Point Source on the Rain Chemistry of a Single Storm in the Puget Sound Region, Water, Air, Soil Pollut., 4, 219 (1975).
[8] Hogstrom, U., Residence Time of Sulfurous Air Pollutants from a Local Source During Precipitation, Ambio, 2, 37 (1973).
[9] Likens, G. E., and Bormann, F. H., Science, 188, 958 (1975).
[10] Newman, L., Science, 188, 957 (1975).
[11] Frohliger, J. O., and Kane, R., Precipitation: Its Acidic Nature, Science, 189, 455 (1975).
[12] Tabatabai, M. A., and Laflen, J. M., Nitrogen and Sulfur Content and pH of Precipitation in Iowa, J. Env. Qual., 5, 108 (1976).
[13] Brosset, C., Air-Borne Acid, Ambio, 2, 2 (1973).
[14] Stafford, R. G., and Chang, R. K., Absolute Raman Scattering Cross Sections of Sulfate and Bisulfate and Their Application to Aqueous Aerosol Monitoring, International Conference on Environmental Sensing and Assessment, 23-5, Las Vegas, NV (Sept. 1975).
[15] Maddalone, R. F., Thomas, R. L., and West, P. W., Measurement of Sulfuric Acid Aerosol and Total Sulfate Content of Ambient Air, Env. Sci. Tech., 10, 162 (1976).
[16] Huntzicker, J. J., Izabelle, L. M., and Watson, J. G., The Continuous Monitoring of Particulate Sulfate by Flame Photometry, International Conference on Environmental Sensing and Assessment, 23-4, Las Vegas, NV (Sept. 1975).
[17] Vanderpol, A. H., Carsey, F. D., Covert, D. S., Charlson, R. J., and Waggoner, A. P., Aerosol Chemical Parameters and Air Mass Character in the St. Louis Region, Science, 190, 570 (1975); Atmos. Environ., 8, 1257 (1974).
[18] Charlson, R. J., Vanderpol, A. H., Covert, D. S., Waggoner, A. P., and Ahlquist, N. C., Sulfuric Acid - Ammonium Sulfate Aerosol: Optical Detection in the St. Louis Region, Science, 184, 156 (1973).
[19] Cunningham, P. T., and Johnson, S. A., Spectroscopic Observation of Acid Sulfate in Atmospheric Particulate Samples, Science, 191, 77 (1976).
[20] Leahy, D., Siegel, R.,
Klotz, P., and Newman, L., The Separation and Characterization of Sulfate Aerosol, Atmos. Environ., 9, 219 (1975).
[21] Tanner, R. L., Leahy, D., and Newman, L., Separation and Analysis of Aerosol Sulfate Species at Ambient Concentration Levels, 1st Chemical Congress of the North American Continent, Mexico City (December 1975).
[22] Izatt, R. M., Hansen, L. D., Eatough, D. J., Jensen, T. E., and Christensen, J. J., II. Recent Analytical Applications of Solution Calorimetry, in Analytical Calorimetry, Vol. 3, Plenum Press, N.Y. (1974).
[23] Sillen, L. G., and Martell, A. E., Stability Constants of Metal-Ion Complexes, Special Publication No. 17, The Chemical Society, London (1964).
[24] Christensen, J. J., Eatough, D. J., and Izatt, R. M., Handbook of Metal Ligand Heats and Related Thermodynamic Quantities, Marcel Dekker, New York (1975).
[25] Izatt, R. M., and Christensen, J. J., Heats of Proton Ionization, pK, and Related Thermodynamic Quantities, in Handbook of Biochemistry, 2nd edition, H. A. Sober, Ed., Chemical Rubber Co., Cleveland, OH (1970).
[26] Hansen, L. D., Whiting, L., Eatough, D. J., Jensen, T. E., and Izatt, R. M., The Determination of Sulfur(IV) and Sulfate in Aerosols by Thermometric Methods, Anal. Chem., 48, 634 (1976).

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977).

SINGLE-PARTICLE ANALYSIS OF THE ASH FROM THE DICKERSON COAL-FIRED POWER PLANT

John A. Small¹ and William H. Zoller
Department of Chemistry, University of Maryland
College Park, Maryland 20740, USA

1. Introduction

The scanning electron microscope equipped with a Si(Li) x-ray detector is becoming increasingly important in the identification and analysis of particulate emissions from anthropogenic sources [1-3]². To date, most of the work in single-particle analysis of emission sources has been confined to morphological studies and qualitative elemental analysis. In addition, these studies have not included a detailed investigation of the emissions from a single source. This paper contains the results of a single-particle study of both the exhaust system and the plume of the Dickerson power plant in western Maryland. The samples were collected from four different locations in the exhaust system of the plant and at a distance range of 0-5 km in the gas plume. The different samples were analyzed by scanning electron microscopy and x-ray microanalysis for particle morphology and semi-quantitative elemental composition [4]. In addition, the samples were also analyzed by instrumental neutron activation analysis for bulk element composition.

The Dickerson power plant is operated by the Potomac Electric Power Company and is located approximately 50 km northwest of Washington, DC. The locations of the different sampling ports are shown in fig. 1. The plume particulate samples were collected with an instrumented aircraft.

2. Morphological Study

The results from the morphological study of the different ash samples are shown in figures 2-8. Figure 2 is a secondary electron image of a particle identified as partially burned coal. This particle shape was observed in all in-plant samples and was concentrated in the bottom and in the economizer ashes, where it accounted for 75 percent and 30 percent of the observed particles, respectively.
In addition, this particle type accounted for 5 percent of the precipitator ash and over 80 percent of the particles collected on the first stages of the in-stack impactor. The particle size was generally larger than 100 µm. The partially burned coal forms when the pulverized coal remains in the burners long enough to melt the alumino-silicate minerals. The material was mainly aluminum and silicon with small amounts of potassium, calcium, iron, and titanium.

1 Current address: Analytical Chemistry Division, National Bureau of Standards, Washington, DC 20234, USA.
2 Figures in brackets indicate literature references at the end of this paper.
3 Elements lighter than atomic number 11 cannot be analyzed using the solid state detector.

651

Figure 1. Locations of the different sampling ports at the Dickerson Generating Station (boiler, hoppers, and stack).

Figure 2. A secondary electron image of a particle identified as partially burned coal.

652

Figure 3. An example of an ash particle formed from the high-iron regions of the coal (scale bar 20 µm). This particle type is characteristic of particles which are mainly iron with very small amounts of silicon and aluminum.

Figures 4 and 5. Examples of particles that are formed from regions of the coal which are high in iron and have modest amounts of silicon and aluminum.

653

Figure 6. Secondary electron image of a smooth sphere.

Figure 7. On in-plant ashes, this material was in the form of amorphous surface structures on large ash particles (scale bar 2 µm).

Figure 8. In this plume sample, the material was more crystalline in structure and often formed individual particles.

654

Figure 9. Plots of the average elemental concentrations normalized to aluminum (element ratios to Al from SEM x-ray analysis) for the various collection locations (B.A., E.A., PPT, S.P., P).

Figures 3, 4, and 5 are examples of ash particles which are formed from the high-iron regions of the coal such as pyrite inclusions. The particle type shown in figure 3 is characteristic of particles which are mainly iron with very small amounts of silicon and aluminum. The particles are generally spherical in shape with diameters greater than 50 µm and were found mainly in the economizer ash. The shape also was observed to some extent in the bottom and in the precipitator ash samples. The surface texture is the result of the rapid cooling and solidification of the material in the flue-gas stream. Figures 4 and 5 are examples of particles that are formed from regions of the coal which are high in iron and have modest amounts of silicon and aluminum. These particles are spherical with diameters larger than 10 µm and were found primarily in the economizer and precipitator ash samples. During the solidification of these particles, the light-element fraction separates from the iron matrix so that the main particle is predominantly iron and the surface structure is predominantly aluminum and silicon. Figure 6 is a secondary electron image of a smooth sphere. This shape was the most predominant shape observed in the field samples. It was observed most often in the precipitator ash, stack ash, and plume ash, where it accounted for 80 percent, 80 percent, and 100 percent, respectively, of the observed particles. The size of the particles varied from 0.5 µm to over 100 µm in diameter. The elements Al, Ca, Fe, K, S, Si, and Ti were determined by x-ray analysis of the spheres. The particles are formed from the melting and solidification of the silicate mineral phase of the coal.
The final morphological classification is shown in figures 7 and 8. This classification was relatively rare, accounting for less than two percent of the total number of particles observed. The material was Ca and S with smaller amounts of Si, Al , K, and Fe. On in- plant ashes, this material was in the form of amorphous surface structures on large ash particles (see fig. 7). In the plume samples shown in figure 8, the material was more crystalline in structure and often formed individual particles. This shape may be the result of sulfur dioxide gas reacting with ash particles to form calcium sulfate crystals which grow as the material ages in the plume. 655 3. Elemental Analysis The results from x-ray microanalysis of the different samples were used in three ways. First, they were used to determine the variability in composition of the different ash particles. Second, they were used to study elemental distributions and correlations in the ashes collected at the different locations in the plant and plume. Finally, the results from single-particle analysis were compared to the results from bulk-sample analysis by instrumental neutron activation. The elemental concentration of the different particles was determined by comparing the characteristic x-ray peaks for the particles to the peaks generated by a series of flat polished standards. The resulting ratio was then corrected for atomic number effects, matrix absorption and secondary fluorescence [5]. The oxygen concentration was determined by difference. The resulting concentration estimates are semi-quantitative since no corrections were made for particle shape and size. The elements observed in the ash samples include Al , Ca, Fe, K, Mg, P, Si, and Ti . The frequency of occurrence for the different elements is given in table 1. It shows that almost all particles have Si, Al , and Fe, about half have K, Ti , S, and Ca and only a few have Mg and P. The variation in elemental concentration of the different ash particles is shown in table 2. This table separates the particles by collection location and reports an average and standard deviation. The different elements, observed in particles collected from that location, indicate that on a single-particle basis the ash samples are relatively inhomogeneous. Table 1 Frequency of occurence of detectable amounts of various elements observed by SEM/X-Ray analysis Element Si Al Fe K Ti S Ca Mg P Percent of particles 97 97 89 56 55 53 48 13 9 Table 2 Average concentration (in wt. %) ± standard deviation for elements analyzed by SEM/XRF Sample Bottom Ash Economizer Ash Electrostatic Precipitator Ash Suspended Particles Airplane Si Al K Ca Ti Fe 6.7± 5.6 5.3±3.6 0.4 ±0.5 0.7 ±1.0 0.48±0.34 0.52±0.28 14.0±24.0 12.0±14.0 7 . 1 ±4 . 2.5 ±2.3 0.76±0.48 0.64±0.19 56.0±29.0 9.5± 5.2 7.3±3.8 2.4 ±2.9 1.9 ±1.1 1.6 ±1.4 0.58±0.52 8.0±13.0 13. 0± 6.0 9.5±4.0 0.88±1.2 1.1 ±0.5 0.70±0.40 0.57±0.36 4.7± 3.3 9.2± 5.8 5.9+3.1 1.5+1.9 1.0 ±1.2 1.3 ±1.4 0.46±0.42 2.6±2.3 656 The distribution of the different elements is shown in figure 9 which contains the plots of the average elemental concentrations normalized to aluminum for the various collection locations. B.A. is bottom ash, port one (see fig. 1), E.A. is economizer ash, port two, ppt is electrostatic precipitator ash, port three, S.P. is suspended stack particles, port four and P. is plume ash. 
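As an illustration of the data reduction described above, the following is a minimal sketch of the two bookkeeping steps: converting measured peak ratios into semi-quantitative wt% (k-ratio times a combined atomic-number/absorption/fluorescence correction, with oxygen assigned by difference) and forming the aluminum-normalized averages plotted in figure 9. The correction factors and particle values are illustrative placeholders, not data from this study.

```python
import numpy as np

def particle_concentrations(k_ratios, zaf):
    """Semi-quantitative wt% for one particle.

    k_ratios: measured peak ratios (particle / flat polished standard) per element.
    zaf: combined atomic-number, absorption, and fluorescence correction per element
         (placeholder values here, not factors from this study). No particle shape or
         size correction is applied, mirroring the semi-quantitative treatment above.
    """
    wt = {el: 100.0 * k_ratios[el] * zaf[el] for el in k_ratios}
    wt["O"] = max(0.0, 100.0 - sum(wt.values()))   # oxygen assigned by difference
    return wt

def al_normalized_averages(particles):
    """Average element/Al concentration ratio over the particles from one location."""
    elements = {el for p in particles for el in p if el not in ("Al", "O")}
    return {el: float(np.mean([p[el] / p["Al"] for p in particles
                               if el in p and p.get("Al", 0) > 0]))
            for el in elements}

# Illustrative use with made-up numbers (not measurements from this study):
zaf = {"Si": 1.10, "Al": 1.20, "Fe": 0.95}
particles = [particle_concentrations({"Si": 0.20, "Al": 0.09, "Fe": 0.05}, zaf),
             particle_concentrations({"Si": 0.25, "Al": 0.10, "Fe": 0.02}, zaf)]
print(al_normalized_averages(particles))
```

The same element/Al ratios, averaged per collection location, are what allow the SEM/XRF results to be compared with the bulk INAA values despite the large uncertainty in the absolute single-particle concentrations.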
All the elements studied except iron have a minimum value for the aluminum ratio in the suspended stack particles and a maximum value for both the bottom and plume ash samples. The plots for the elements potassium and possibly calcium have a second minimum in the economizer ash and decrease steadily to a minimum in the plume ash. In addition to elemental distributions, the concentrations can also be used to study correlations between elements. The most consistent correlation is between silicon, aluminum and possibly titanium. This correlation is expected since these elements are all associated with the mineral phase of the coal. Calcium and sulfur also correlate fairly well but only in the precipitator, stack and plume ash. This may reflect the growth of calcium sulfate on particle surfaces. A comparison of results from x-ray microanalysis and macroanalysis by instrumental neutron activation is given in table 3 for the elements Al , Ca, Ti , and Fe. The absolute concentrations are substantially different for the two methods of analysis which reflects the uncertainty in single-particle quantitative analysis. The two techniques with concen- tration values normalized to aluminum are in much better agreement. Most of the values agree within the uncertainty limits of the techniques and elemental trends such as the high Fe/Al ratios for the economizer ash are observed in both analytical methods. Table 3 Values for elemental concentrations and Al ratios from INAA and SEM/XRF analyses Element Bottom ash Al Ca Ti Fe Concentra tions in wt. % Al ratios INAA X-ray INAA X-Ray 12.0 6.7 < 0.96 0.48 0.08 0.07 0.60 0.52 0.05 0.09 9.3 14.0 0.8 2.0 Economizer ash Al Caa Ti Fe Precipitator ash Al Ca Ti Fe 9.4 0.70 15.0 13.0 0.96 0.69 12.0 7.1 0.64 56.0 7.3 1.6 0.58 8.0 0.07 1.6 0.07 0.05 0.92 0.09 8.0 0.04 0.08 1.1 657 Table 3 (continued) Stack ash Al Ca Ti Fe Plume ash Al Ca Ti Fe 4.0 9.5 0.4 0.70 0.1 0.10 0.24 0.57 0.06 0.06 2.3 4.7 0.58 0.47 b b c c b b 0.07 0.10 b b 0.05 0.078 b b 0.43 0.44 a No Ca values from SEM/XRF. Absolute concentrations could not be compared since airplane values are in yg/m 3 . c Values from background-corrected airplane plume filter APF28 taken 0-8 km from Dickerson stacks. 4. Conclusion The results from the single particle study of the Dickerson power plant provided information on particle shape and origin, sample homogeneity and elemental composition which was not available from bulk analysis. In addition, for a select number of elements, the semi-quantitative analysis of a relatively small number of ash particles was used to predict elemental trends and correlations which were in good agreement with results from bulk analysis. In these applications, scanning electron microscopy is an excellent compli- mentary technique to bulk analysis for the investigation of anthropogenic emission sources. References [1] Waller, R. E., Brooks, A. G., Cartwright, J., An Electron Microscope Study of Particles in Town Air, J. Air Wat. Poll. 7, 799 (1963). [2] McCrone, W. C, Delly, J. G. , The Particle Atlas, 3 2nd ed., (Ann Arbor Science Publishers, Inc., Ann Arbor, 1973). [3] Moyers, J., Progress Report, ATM Sciences Laboratory, Dept. of Chemistry, University of Arizona, (1974). [4] Small, J. A. , An Elemental and Morphological Characterization of The Emissions from the Dickerson and Chalk Point Coal-Fired Power Plants, Ph. D. Thesis, University of Maryland (1976). [5] Colby, J. 
W., Magic IV A Computer Program for Quantitative Electron Microprobe Analysis, Unpublished report, Bell Telephone Laboratories, Allentown, Pennsylvania (1974). 658 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). LASER-RAMAN MONITORING OF AMBIENT SULFATE AEROSOLS R. G. Stafford, R. K. Chang, and P. J. Kindlmann Yale University Department of Engineering and Applied Science New Haven, Connecticut 06520, USA 1. Introduction Presently there is a need for an analytical instrument which is capable of accurately measuring the concentration of sulfates in aerosols at ambient conditions. Ambient sulfate concentrations have ranged from 2 yg/m 3 to 54 yg/m 3 , depending on the location [1.2] 1 . The major sulfur constituents in aerosols are believed to be (NHi t ) 2 S0i + and H 2 S0 4 [2-4]. The greatest percentage of these compounds lies in the accumulation mode (0.1 - 1 ym diameter) of an aerosol size distribution [5]. Experimental methods now employed for the measurement of aerosol pollutants involve a collection phase where interparticle reactions may occur on the collecting surfaces, especially since the particle concentrations are considerably enhanced by the collection process [6,7]. The data resulting from these methods are therefore not necessarily indica- tive of the original aerosol composition. Considering the chemically reactive nature of atmospheric aerosols and gases, a method which can measure in situ the concentration of sulfate in aerosols would provide a more reliable analytical approach. We wish to report on the progress attained in utilizing a laser-Raman scattering technique to measure directly, without filter collection, the concentration of sulfate compounds in aerosols. During the initial phase of this project, we investigated the feasibility of measuring the cemcentration of sulfate and bisulfate in aqueous aerosols by the Raman scattering technique [7,8]. We have now successfully detected laboratory gen- erated (NHi + )2S0 4 aerosols in a continuously flowing system at concentrations down to less than 10 ppb (1 ppb ^4 yg/m 3 of S0L7 anion). 2. Experimental Two experimental setups will be described in this section. The first is a cw laser- Raman system which was used to measure the concentration of sulfates down to the low ppb range. The second is a pulsed laser-Raman system which was used to investigate the advan- tages that should be gained by going from a cw to a pulsed laser system. These advantages and associated problems will be treated in the following section. For the cw laser approach, the experimental setup was a modification of the standard 90° system used in our original laser beam single-pass system [7]. In the standard system, the Raman signal is collected in a direction perpendicular to the plane defined by the direction of the incident laser beam and its polarization, with a single pass of the laser beam through the sample region [9-11]. The new arrangement is shown in figure 1. The o incident cw laser beam at 4880A and 340 mW is from a Spectra-Physics Model 165 argon- krypton laser. The laser beam is partially trapped by an optical cavity formed by concave mirrors Ml and M2. These mirrors have high reflectivity, multi-layer dielectric coatings. 
This multi-pass scheme is a modification of an optical cell originally devised by Hill and figures in brackets indicate literature references at the end of this paper. 659 Hartley [12]. Ours is a simplified version using concave spherical mirrors rather than elliptical mirrors. The two mirrors are positioned so that their equal radii of curvature are coincident. In this configuration, a beam incident from outside the cavity and passing near the coincident radii of curvature will be partially trapped and forced to undergo many bounces before escaping out the other side. The focal region contains two images, each of which intersects many passes of the laser beam. A third concave mirror M3 redirects the Raman scattered photons that would normally be lost back towards the collection optics. The Raman photons are collected by lens LI and imaged by lens L2 onto the entrance slit of a Spex Model 1400 double monochromator. The monochromator was equipped with Jobin-Yvon o holographically ruled gratings, 1800 grooves/mm, blazed for 3500 - 7000 A. An RCA C31034 gallium-arsenide photomultiplier tube (PMT) was kept at -20°C for photon counting. The dark count rate was approximately 7 Hz with a 4 mm x 10 mm photocathode. The Raman signal is then amplified and monitored by a photon counter. The sample region was contained in a specially designed cell which directed the test sample through a nozzle into the laser beam. A flowing system was maintained by exerting positive pressure at the nozzle entrance and pumping at the exit port. Both standard gases and aerosols were used as test samples. The gas samples were pure N 2 , 10 ppm of N0 2 in N 2 , and normal laboratory air. The aerosols were generated by a commercial unit (Sierra Instru- ments Model 7330 fluid atomization aerosol generator) from water solutions of (NH 1+ ) 2 S0i + and Na 2 S0 4 . The molarities of the solutions used were 1 M, 0.1 M, 0.01 M, and 0.001 M. The aerosol generator contained an impactor which effectively removed all particles with diameters greater than 2 ym. The exiting aerosols were collected on a substrate and analyzed with an electron microscope to confirm that no aerosols with diameters larger than 2 ym were being admitted into the sample region. The mass flow rate from the generator was calibrated using a liquid nitrogen cold trap which collected the particles that passed through the impactor. The pulsed system utilized a high-power ruby laser (Spacerays) operating in the normal mode condition in a single-pass configuration. The setup is shown in figure 2. The laser beam diameter was stopped down to 1/4 inch and had a 0.5 msec halfwidth. The maximum energy was 4.1 J. The beam was focused into the sample region by a 1 cm focal length aspheric lens. The spot size diameter was 120 ym. After passing through the sample region, the beam was directed onto a scattering plate. Two silicon photodiodes, PD1 and PD2, monitored the scattered laser signal. PD1 operated in a saturation mode to deliver a gate pulse to the photon counter, so that photon counts from the PMT were accepted only at times coincident with the laser pulse. PD2 integrated the laser signal to give a voltage propor- tional to the total laser energy. The energy measurement electronics were calibrated with a TRG Model 100 thermopile. Mirrors Ml, M2, and M3 of figure 1 have not yet been incorpo- rated into the pulse of figure 2. 3. Results and Discussion The multiple-pass optical cavity is the chief source of the enhancement of our signal/ noise ratio. 
This enhancement allows a consequent increase in sulfate concentration measurement sensitivity. In the single-pass scheme with the 4880 Å line at 340 mW and a monochromator slit width of 1 mm and height of 10 mm, we obtained 5700 Hz from the PMT for the 2330.7 cm^-1 line of atmospheric N2. Including mirrors M1, M2, and M3 as shown in figure 1, we achieved 5.22 x 10^5 Hz, a gain of approximately 92. For these conditions, the background count rate at the 981 cm^-1 line of sulfate was approximately 25 Hz with ambient air flow from the aerosol generator. As an initial test of our calibration of the system with atmospheric N2, we sampled two gases: atmospheric CO2 (0.0314 percent ambient concentration [13]) and a standard commercial mixture of 10 ppm of NO2 in N2. Both concentrations were easily detected and measured with respect to ambient N2.

660

Figure 1. Schematic of a laser-Raman scattering instrument used for the detection of laboratory-generated sulfate aerosols. The Raman signal induced by a cw laser is enhanced by a multi-pass optical cavity.

661

Figure 2. Schematic of a laser-Raman scattering instrument utilizing a pulsed versus cw laser. A gate pulse from a photodiode (PD1) is used for coincident photon counting from the photomultiplier (PMT).

662

Before measuring the Raman scattering from (NH4)2SO4 aerosols, we first calibrated the mass flow rate of the aerosols passing through the impactor of the aerosol generator. We found that only approximately 1.0 percent (±50 percent) of the original solute from the 1 M solution was getting past the impactor. This yielded a molecular concentration of sulfate at the nozzle of the sample cell of approximately 8.2 ppm (±50 percent). The Raman spectrum of the ν1 line of SO4^2- at 981 cm^-1 for these (NH4)2SO4 aerosols [14-16] in the flowing system is shown in figure 3. The slitwidth of 1 mm for our monochromator corresponds to approximately 14.1 cm^-1 at the 981 cm^-1 position, which is also approximately the halfwidth of the Raman line. We are therefore capturing the significant portion of the Raman photons for this transition. Since the concentrations of ambient N2 and the laboratory-generated sulfate aerosols are known, and the Raman cross section for N2 is also known [17], a rough estimate can be made of the effective Raman cross section for the 981 cm^-1 line of (NH4)2SO4 aerosols. Previously, we had determined the cross section [7] for the completely dissociated sulfate anion in aqueous solution.

Figure 3. Raman scattering spectrum of (NH4)2SO4 aerosols. The ν1 line at 981 cm^-1 for the SO4^2- anion was monitored. The 4880 Å line from a cw argon-krypton laser was used. The spectrum was scanned at an effective 1 Hz rate. The sulfate molecular concentration was approximately 8.2 ppm (±50 percent).

664

Figure 4. Graph of the sulfate Raman signal versus sulfate concentration (ppb), normalized to the Raman signal for ambient N2.
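The numbers above and the pulse parameters given in section 2 lend themselves to quick consistency checks, sketched below: the cavity gain as the ratio of the two N2 count rates, the average intensity of the ruby pulse (compare the value quoted in the following paragraph), and the scaling relation by which an effective sulfate cross section follows from the signal and concentration ratios. The N2 cross section and the sulfate count rate used here are placeholders for illustration (the former should be taken from ref. [17]); they are not values reported in this paper.

```python
import math

# Cavity gain: multi-pass versus single-pass count rate for the N2 line.
gain = 5.22e5 / 5700.0                                   # ~92, as quoted above

# Average intensity of the normal-mode ruby pulse (parameters from section 2).
energy_J, halfwidth_s, spot_diam_cm = 4.1, 0.5e-3, 120e-4
intensity_W_cm2 = (energy_J / halfwidth_s) / (math.pi * (spot_diam_cm / 2.0) ** 2)
# ~7.3e7 W/cm^2, i.e. roughly 73 MW/cm^2

# Effective Raman cross section of the 981 cm^-1 sulfate line by ratio to N2:
#   sigma_SO4 ~ sigma_N2 * (S_SO4 / S_N2) * (n_N2 / n_SO4)
# with S the background-corrected count rates and n the mixing ratios.
def effective_cross_section(sigma_N2, S_SO4, S_N2, ppm_SO4, ppm_N2=0.78e6):
    return sigma_N2 * (S_SO4 / S_N2) * (ppm_N2 / ppm_SO4)

sigma_est = effective_cross_section(sigma_N2=5.5e-31,   # placeholder, cm^2/sr; see ref. [17]
                                    S_SO4=250.0,        # hypothetical sulfate count rate, Hz
                                    S_N2=5.22e5,
                                    ppm_SO4=8.2)
print(f"gain ~ {gain:.0f}, intensity ~ {intensity_W_cm2 / 1e6:.0f} MW/cm^2, "
      f"effective sigma ~ {sigma_est:.1e} cm^2/sr")
```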
The accuracy in the sulfate concentration calibration measurement was approximately ±50 percent. Flowing (NHi + ) 2 SOi + aerosols were tested with the pulsed ruby laser system as shown in figure 2. The maximum averaqe laser intensity that the sulfate aerosols were subjected to was approximately 73 MW/cm 2 / This corresponds to a 4.1 J pulse of 0.5 msec halfwidth and 120 ym spot size. However, a normal mode pulse from a ruby laser is typically composed of a pulse train of many spikes rather than presenting a smooth profile [21]. In our case these spikes had halfwidths of 0.5 ysec. The peak intensity in the focal region is there- fore much greater than the average intensity. Under these conditions, the aerosol might be strongly perturbed. The Raman scattering and heating of single fine particles down to 0.7 ym diameter by a focused cw laser has been investigated by Rosasco et at. [22]. They encountered no heating problems for NaN0 3 mounted on a substrate. In our flowing system, the sulfate aerosols have no substrate to dissipate heat easily. To check for possible heating effects, the sulfate line was monitored with increasing laser intensity. No 665 abnormal increase in background fluorescence was seen up to an average intensity of 73 MW/cm 2 . In addition, the anti-Stokes component of the 981 cm" 1 line was checked. Presence of a strong anti-Stokes signal would indicate heating of the aerosols to high temperatures by the laser [23]. Since no such effect was seen, we do not believe that the aerosols are being sufficiently perturbed to interfere with a concentration measurement. The PMT and photon counter were checked for saturation effects which would affect linearity. These can arise from the 0.5 usee spike behavior of the laser normal mode pulse. Raman photons might arrive in bunches closer than the pulse pair resolution time of the photon counter. However, no such saturation was seen up to the full intensity pre- viously mentioned. Now that we are assured that using a high-power pulsed laser causes no serious experi- mental problems, we intend in the future to construct an optical cell similar to that used in the cw system. If the previous optical enhancement factor of 92 can be reached with the ruby laser, then the measurement sensitivity for sulfate concentration should be consider- ably improved. 4. Conclusion Our experimental program for detecting sulfate aerosols in situ has accomplished to date the following: 1) A Raman spectrometer using a cw laser for detecting flowing sulfate aerosols has been constructed with a sensitivity significantly greater than previously achieved. 2) Laboratory generated (NHi + ) 2 S0 1+ aerosols in a flowing system have been measured at concentrations down to the 10 ppb range. 3) The feasibility of using a pulsed ruby laser system to partially eliminate interference effects and to increase the detection sensi- tivity has been discussed and some favorable initial experimental results have been obtained. Work supported in part by the Northeast Utilities Service Company, Hartford, Connecticut. References [1] Schoff, R. A., New England Consortium on Environmental Problems, Publication No. RPTR-75-1 (September 1974). [2] Keesee, R. G. , Hopf, S. B., and Moyers, J. L., International Conference on Environ- mental Sensing and Assessment, Vol. 2, 23-6 (IEEE, New York, 1976). [3] Junge, C. E. and Ryan, T. G., Quart. J. Roy. Meteorol. Soo. 84, 46 (1958). [4] Brosset, C, AMBIO 2, 2 (1973). [5] Stevens, R. K. and Dzubay, T. G., IEEE Trans. Nucl. Sci. 
NS-22 , 849 (1975). [6] Forrest, J. and Newman, 1,,'APCA Journal 23, 761 (1973). [7] Stafford, R. G. and Chang, R. K. , International Conference on Environmental Sensing and Assessment, Vol. 2, 23-5 (IEEE, New York, 1976). [8] Stafford, R. G. and Chang, R. K., Seventh International Laser Radar Conference, Menlo Park, California (November 4-7, 1975). [9] Fouche, D. G. and Chang, R. K. , Appl Phys. Lett. ]8, 579 (1971). [10] Penney, C. M., Goldman, L. M., and Lapp, M., Nature Phys. Sci. 235 , 110 (1972). 666' [11] Fenner, W. R. , Hyatt, H. A., Kellam, J. M., and Porto, S. P. S., J. Opt. Soc. Am. 63, 73 (1973). [12] Hill, R. A. and Hartley, D. L., Appl. Opt. V3_, 186 (1974). [13] American Institute of Physios Handbook (McGraw-Hill Book Co., New York, 1972). [14] Herzberg, G., Molecular Spectra and Molecular Structure II (Van Nostrand, New York, 1966). [15] Ananthanarayanan, V., Indian J. Pure Appl. Phys. ]_, 58 (1963). [16] Wright, M. L. and Krishnan, K. S., Stanford Research Institute Report, EPA-R2-7 3-219, 1973. [17] Fouche, D. G. and Chang, R. K. , Appl. Phys. Lett. 20, 256 (1972). [18] Kerker, M. , The Scattering of Light and Other Electromagnetic Radiation (Academic Press, New York, 1969). [19] Gelbwachs, J. A., Bimbaum, M., Tucker, A. W. , and Fincher, C. L., Opto-Electronics 4, 155 (1972). [20] Fouche, D. G., Herzenberg, A., and Chang, R. K. , J. Appl. Phys. 43, 3846 (1972). [21] Ross, D., Lasers, Light Amplifiers, and Oscillators (Academic Press, New York, 1969). [22] Rosasco, G. J., Etz, E. S., and Cassatt, W. A., Appl. Spectry. 29_, 396 (1975). [23] Lapp, M. and Hartley, D. L., Combustion Sci. and Tech. 13, 123 (1976). 667 . NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). CHEMICAL CHARACTERIZATION OF PARTICULATES IN REAL TIME BY A LIGHT SCATTERING METHOD C. C. Gravatt Analytical Chemistry Division National Bureau of Standards Washington, DC 20234, USA 1. Introduction This paper describes a new light scattering method for the characterization of partic- ulate matter as to its chemical composition. Specifically, it is possible by this method to distinguish carbon, absorbing and metal containing particles from other materials, and in some instances to distinguish among various types of carbon containing and metallic particles. This technique can be applied either to the characterization of individual particles or to aerosols containing many particles. In many situations some degree of chemical characterization of particulate matter would be advantageous, and the present technique is especially attractive since it can be accomplished without the collection and subsequent chemical analysis of a sample. This is a light scattering method in which the scattered radiation is monitored in such a way as to provide information related to the chemical composition of the particle or aerosol. The following is a brief introduction to the important optical parameters to be used in this discussion. The total index of refraction, m, of any substance can be considered as a complex quantity m = n - ik (1 ) with real component n and imaginary component k. Both n and k are functions of the wavelength of the radiation, A, and the absorption spectra of a substance gives the wave- length dependence of k. 
Light scattering is sensitive to both n and k, and this dependence is most pronounced in those cases where the size of the particle and the wavelength of the light are of similar magnitude. Consider a beam of light of intensity i_0 and wavelength λ_0 traveling in the direction given by a unit vector k_0 and scattered by a sample at O. A ray scattered in the direction given by unit vector k is measured to have intensity i by a detector located a distance R from the sample. The scattering angle, θ, is the angle between the directions defined by k_0 and k. The scattering plane is that plane containing the sample and the unit vectors k_0 and k. For the purposes of the present discussion, it is also necessary to define the state of polarization of the incident radiation, and the following notation will be adopted. If the polarizer is oriented such that the direction of polarization is in the scattering plane, then the scattered radiation is represented by i∥. If it is oriented such that the direction of polarization is normal to the scattering plane, then the scattered radiation is represented by i⊥.

2. Single Particle Case

Consider that there is a single particle located at O and that the scattered intensities i∥ and i⊥ are measured as a function of θ. It is found that the scattered intensities are also a function of the particle size, index of refraction, and shape, and that this dependence can be expressed quantitatively by the theory of Mie. In order to investigate the effect of index of refraction on the scattering behavior of particulates, the scattering curves (i∥ or i⊥ vs. θ) have been calculated for a wide range of indices of 669
In addition to the above applications in air, it should be possible to perform analogous studies in liquids, such as water. It would be necessary to reevaluate the most effective angular range since the index of refraction of the background would be different for the liquid case, but the general phenomena is the same. Thus one could characterize particulates in liquids as to their chemical composition and perform tracer studies with dye containing particulates. 3. Multi-particle Case In this section the chemical characterization technique will be extended to a device that monitors an aerosol containing many particles rather than measuring a single particle at a time. The specific application to be discussed is a smoke detector (fire-produced- aerosols) but several additional applications are possible. For an instrument which will measure the light scattered by an aerosol it will be necessary to consider two additional factors which were not of importance in the single particle case. These factors are the total number of particles in the beam, and the particle size distribution of the sample. The optical configuration of the instrument is as follows. The light source is either an incandescent bulb or a medium pressure mercury arc. An incident light system is of straightforward design and produces a reasonably well collimated, linearly polarized beam of uniform intensity and of approximately 1 cm 2 in cross section. ' The 55° (i„ ) scattered light detector system has an aperture of approximately ±15° and employs a photodiode-op-amp combination as the light detector.' The output of this detector is a quantity proportional to I„ where \, = T, {"V* a + N n I„ n } (2) figures in brackets indicate the literature references at the end of this paper. 670 N a is the number of absorbing (carbon-like) particles of diameter D, Nn is the corresponding number of nonabsorbing particles, I/ /a and I// n are the scattering intensities integrated over the angular range 40° to 70° for the absorbing and non-absorbing particles respectively. Although not explicitly shown, all the N and I ;/ terms are functions of the particle diameter and the sum extends over all particle sizes in the sample. l„ is relatively large for an aerosol containing non-absorbing particles and is relatively small for an aerosol containing a large number of carbon-like particles. There is also a second scattered light detector system normal to the plane of polari- zation centered at an angle of 90° with an aperture of ±10°. This system also uses a photodiode-op-amp combination as the light detector. The output of this detector is a quantity proportional to lj where E {Nalia + Vxn} (3) The N terms are the same as defined above, and the U terms are the scattered intensities normal to the plane of polarization integrated over the angular range 80° to 100°. Ij. is approximately proportional to the total number of particles in the beam, independent of whether they are absorbing or not. The theoretical studies have shown that almost any 20° angular range in ij. can be effectively used as a term proportional to the total particle density. In order to evaluate the effect of particle size distribution it was necessary to theoretically study the response of the instrument to changes in the particle size distri- bution. The distribution used in these studies was the Junge universal distribution N = AD" b (4) which gives the number of particles, N, of diameter, D, as a function of two parameters, A and b. 
The parameter b has been measured for a number of aerosols and is generally on the order of 4. The larger the b the more strongly the distribution is shifted to favor the small particles. Theoretical analysis for a number of cases has shown that the smoke detector response, l„ /I x , is a valid measure of the carbon content of an aerosol provided the particle size distribution has a b value of 6 or less. The actual response of the smoke detector was measured for a number of aerosol samples. It was shown that the instrument is effective in distinguishing fire-produced aerosols and other types of absorbing materials from non-absorbing aerosols. References [1] Dave, J. V., IBM Report 320-3237,, Palo Alto, Calif. (May 1968). 671 Part XIV. LABORATORY ACCREDITATION - PANEL DISCUSSION NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). THE NATIONAL VOLUNTARY LABORATORY ACCREDITATION PROGRAM T. R. Young National Bureau of Standards Washington, DC 20234, USA 1. Introduction A summary of the evolution of the National Voluntary Laboratory Accreditation Program (NVLAP) helps to identify issues that may need consideration when contemplating the estab- lishment of laboratory accreditation programs. NVLAP was established by the Department of Commerce on February 25, 1976, after several years of study and communication with affected parties in state and Federal government and in the private sector. NVLAP provides a national voluntary system whereby laboratory accreditation programs can be established for specific product areas having need. The purpose of such programs is to examine and accredit the professional and technical competence of testing laboratories that serve regulatory and non-regulatory product evaluation and certification requirements. Detailed procedures and definitions utilized by NVLAP are contained in Title 15, Code of Federal Regulations, Part 7 [l] 1 . Additional descriptive material regarding NVLAP including background, a review of procedures and discussion of NBS's supporting technical role is contained in proceedings of briefing sessions held in June 1976 in Los Angeles and Washington, DC [2]. 2. Discussion Proposed establishment and procedures for NVLAP were initially announced in the Federal Register in May 1975 [3]. The public review period that followed, including two public hearings, provided over 150 substantive critiques from Federal and state agencies, professional and trade organizations, industries, testing laboratories and individuals. This public review raised principle issues resulting in significant revisions of the proposed procedures. The proposed procedures provided for appointment by the Secretary of Commerce of laboratory accreditation boards for each class of technology for which a need for accredited testing laboratories is determined. Thereafter, establishment of accreditation programs for testing of products within the class of technology would be considered upon request. This presumed the existence of general and concurrent needs within classes of technology for accreditation of laboratories to test all, or many, of the various products within the class. The public review of the proposed procedures indicated that this assumption is not generally valid. Needs for laboratory accreditation tend to vary with the product that is tested. 
Thus needs for laboratory accreditation applicable overall to classes of technol- ogy or technical disciplines may be difficult to assess and to justify. Therefore, the proposed procedures were revised to allow initiation of accreditation services on a product by product basis as needs are determined. To counter the potential for establishment of innumerable and possibly duplicative separate services that might result from this approach, the revised procedures allow the grouping of similar or related products when initiating accreditation services. In addition, the procedures provide for review of applications for accreditation with reference to the applicant's prior accreditations. Only those examina- tion requirements not met in previous accreditations would be applicable to the laboratory's additional accreditation. figures in brackets indicate literature references at the end of this paper. 675 The second principle issue that arose during public review of the proposed NVLAP concerned participation of the private sector in development of accreditation services. The proposed procedures provided for an advisory committee of private and public members for each board of accreditation. A board, consisting of Federal employees, would be required to consult with its advisory committee when developing accreditation criteria. However, only the board would make final recommendations regarding accreditation criteria and operational aspects of the program. Many comments received during public review indicated that a voluntary program should provide for more direct and equal involvement of the private sector, particularly in regard to development of accreditation criteria. Accordingly, the procedures were revised to provide for accreditation criteria committees in lieu of separate boards and advisory committees. These criteria committees, composed of government and private sector members, develop and recommend accreditation criteria to be promulgated by the program. The third significant issue concerned coordination with other laboratory examination and accreditation programs in the private and government sectors. Although, the proposed procedures vaguely referred to support of coordination efforts, they lacked general or specific provisions regarding promotion or arrangement of coordination with other programs that may exist or be in development. Many comments, during public review, addressed this deficiency. They considered a major benefit of NVLAP to be its focus for coordination of laboratory examination and accreditation programs. Such coordination could work to reduce the duplication of laboratory examination efforts, reduce confusion regarding the criteria used and promote reciprocal recognition of laboratory assessment programs, domestically and internationally. In response, the procedures were revised to include such coordination as a goal of the program. For initiation of laboratory accreditation programs under NVLAP, provisions were added requiring consideration of existing programs in the public or private sectors. These provisions include consultation with government agencies that may be impacted by programs established under NVLAP. In particular, Federal regulatory agencies may halt, by written objection, the establishment of programs that would adversely affect their existing or developing laboratory accreditation programs. Other revisions in the program's procedures resulted from NVLAP's exposure to public review. 
Correspondence and transcripts of public hearings obtained in this review, together with a summary and analysis of such response is available for inspection and copying at the Department's Central Reference and Records Inspection Facility. References [1] Federal Register, Vol. 41, Number 38, Feb. 25, 1976 (pg. 8163-8168). [2] NVLAP Proceedings, American Council of Independent Laboratories, 1725 K Street, N.W. Washington, DC 20006. [3] Federal Register, Vol. 40, Number 90, May 8, 1975 (pg. 20092-20095). 676 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977). THE NATIONAL LABORATORY CERTIFICATION PROGRAM FOR WATER SUPPLY Charles Hendricks Environmental Protection Agency Washington, DC 20460, USA 1. Introduction In 1970, the Department of Health, Education and Welfare undertook a study of public water systems: - 36% of the tap samples contained one or more bacteriological or chemical constituents exceeding the limits in the Public Health Service Drinking Water Standards. - 11% of surface water samples exceeded the recommended organic chemical limit of 200 parts per billion. - 77% of the plant operators were inadequately trained in fundamental water microbiology. - 46% of the plant operators were deficient in chemistry relating to their plant operation. - 79% of the systems were not inspected by State or county authorities in the year prior to the study. - 85% of the systems did not analyze a sufficient number of bacteriological samples. For these and other compelling reasons, Congress enacted the Safe Drinking Water Act in 1974, "to assure that the water supplied to the public is safe to drink." The basic structure of the Act, as it pertains to public water systems, calls for the establishment of: (1) maximum contaminant levels for contaminants in drinking water; and (2) criteria and procedures, including quality control measures, to assure compliance with such maximum contaminant levels. To implement the National Laboratory Certification/Quality Assurance Program, EPA has proposed for discussion a three-step framework to certify all water supply laboratories by the effective date of the Revised Primary Regulations (March 1979). Also included were specific alternatives to the basic plan and either the three-step framework or any one of the alternatives could be adopted. The June 1976 draft of the Criteria and Procedures Manuals are currently in the review process and the second draft is expected in late October 1976. 2. Technical Requirements and Evaluation Criteria EPA believes that adequate quality control must be an integral part of the day to day operations of laboratories analyzing drinking water samples to help insure the availability to the public of a safe and dependable supply of drinking water. Accordingly, EPA has developed a set of manuals in chemistry (organic and inorganic), radiochemistry and micro- biology which set forth fundamental techniques, training standards, and evaluation criteria 677 for the operation of analytical laboratories responsible for analyzing samples from public water systems. EPA recognizes that there are many excellent State certification programs for microbiology currently in operation, and it is EPA's intention that only those portions of the National program be incorporated where they may be needed to augment present State programs. 3. 
Regulatory Basis for a National Laboratory Certification/ Assurance Program Under the present Implementation Regulations (Section 142.10(b)(3) and (b)(4) of the Interim Primary Drinking Water Regulations - Implementation {Federal Register, January 20, 1976), there are two requirements for a laboratory certification/quality assurance program. First, the State must be able to supervise the analysis of samples from public water systems. In order to do so, States must have access to laboratories capable of accurately measuring the contaminants regulated by primary drinking water standards. Second, the State must undertake a program to certify all laboratories conducting analyses for public water systems. In order to attain and maintain primary enforcement responsibility, both of these requirements are mandatory under the Interim Primary Drinking Water Implementation Regulations. 4. Proposed Implementation Plan The Implementation Regulations key the requirement for the certification of State principal laboratories to a date for the establishment of a National Quality Assurance Program. It is important to recognize that the implementation of the National program must take into account the availability of resources and a commitment to carry out the assigned responsibilities. To fulfill EPA's initial responsibility under paragraph (b)(4) of the Implementation Regulations, EPA will approve designated State principal laboratories on an interim basis using the Criteria and Procedures Manuals as guidelines. After the promulgation of the Revised Primary Drinking Water Regulations (September 1977) and the establishment of the National Laboratory Certification Program, EPA will begin to certify State principal laboratories upon request. All State principal laboratories will be certified by March 1979 using the revised Criteria and Procedures Manuals which will be based upon the Revised Primary Regulations. The revised Criteria and Procedures Manuals will be developed in sufficient time to be released with the Revised Primary Regulations. EPA's proposed framework for the implementation of the National Laboratory Certi- fication/Quality Assurance Program is as follows: Step 1 - Between now and the promulgation of the initial laboratory certification manuals EPA will grant "interim approval" to State principal laboratories if the State determines that they are in compliance with the requirements of the Interim Primary Regulations and with the State laws in effect when the State applies for primacy. In addition, the State must designate an official to serve as the State Laboratory Certification Officer and submit a plan for developing a process to certify local laboratories. Thus, processing of State requests for primacy will not be delayed. Step 2 - Between Fall 1976 and September 1977 (anticipated promulgation of Revised Primary Drinking Water Regulations) EPA will grant "approval" of the State principal laboratories after visitation and review using the initial laboratory certification manuals. Whether the manuals will serve as guidelines or as mandatory requirements will be addressed after the public review of the program. In addition, the States will be required to begin a certification program for local laboratories; local laboratories could be "approved" if the laboratories were in compliance with State laws in effect when the State attains primary enforcement responsibility. 
678 Step 3 - Between September 1977 and March 1979 (anticipated effective date of the Revised Drinking Water Regulations) Prior to September 1977, EPA/Office of Research and Development will lead efforts to produce a revised laboratory certification/quality assurance manual consistent with the Revised Drinking Water Standards. Thereafter, EPA would formally certify State principal laboratories. In order to qualify for certification, the State principal laboratories must be in conformance with all of the requirements in the revised manuals. All State principal laboratories would be certified by March 1979. The States will also be required to formulate certification programs for local laboratories based on the technical requirements of the revised manuals or their equivalent as developed by the State. Depending upon the option chosen by EPA, the local laboratories must be certified before the effective date of the Revised Primary Drinking Water Regulations. 679 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) LABORATORY ACCREDITATION C. Eugene Hamilton Dow Chemical Midland, Michigan 48640, USA By definition, Laboratory Accreditation is a system by which to examine, evaluate and assure an acceptable level of competence of a laboratory to test, analyze or inspect a material and to provide valid reliable data. With this definition in mind, the problems, frustrations and progress toward developing water laboratory practice guidelines within the ASTM D-19 Committee on Water is presented. The first procedural step in ASTM is the agreement upon a written scope statement for the planned activities. The following scope statements were developed for the Laboratory Practices Section Activities: o to develop standard practices detailing the requirements and guidelines by which laboratories engaged in the analysis and characterization of water and water related material can be evaluated for reliable performance and operation. o to develop standard practices and guidelines for laboratories engaged in the analysis and characterization of water, waste water, and water related material to enable them to provide valid reliable data. These two statements differ conceptually between "reliable performance and operation" and "valid reliable data". This conceptual difference has not been unanimously resolved. Operationally the scopes are very similar, since "valid reliable data" is the best measure, if not the only measure of laboratory performance. The section developed five interrelated necessary parts of laboratory practices which will assure "valid reliable data". Namely, ^-Physical Facilities^ Personnel Training | I Quality Control Records No one part is more or less important than any other; each part must be adequate to fulfill the purpose of the laboratory. From the results of the D-19 member questionnaire on current laboratory practices, at the end of this paper, a priority was established to develop standard practices for the parts in this order: 1. Records, Documentation, Chain of Custody, etc. 2. Quality Control 3. Personnel and Training 4. Facilities (including instrumentation) Written documentation was missing for many common (good) practices of the laboratories. That is, most laboratories practiced procedures which yield "valid reliable data" but also most of these laboratories did not document these actions or programs. 
681 Thus a Standard Practice for Laboratory Records has been prepared, but not yet approved by the subcommittee or the main Committee D-19 on Water or ASTM ballots. The other segments are being prepared for future consideration. These Standard Practices will help a person, or a laboratory determine the conformity of its practices for Records, Personnel, Training, Facilities and Quality Control, which will produce "valid reliable data". Laboratory Activity Survey Total What is the scope of your laboratory activity? Water analysis: industrial process water waste water: Sanitary Industrial high purity water drinking water other Water-formed deposit analysis Evaluation of water treatment materials Other Describe the breadth of your testing service. Waters (industrial process and drinking); inorganic parameters organic parameters microbiological parameters pesticide/chlorinated hydrocarbons marine radiological toxicology Waste & effluent waters:' biological inorganic parameters brines organic parameters microbiological parameters marine pesticide/chlorinated hydrocarbons toxicology radiological parameters Water-formed deposit analysis: spectrographic x-ray microscopic wet chemical radiological organic residue analysis Ion exchange material: capacity cross-linkage 88 63 110 59 71 35 63 57 11 121 82 56 48 32 21 31 67 116 44 83 50 26 46 34 18 43 23 40 73 10 38 45 15 682 Membranes 21 Flocculants 34 Activated Carbon 35 3. Is you laboratory an: in-plant lab 34 central services lab 57 independent testing lab 23 company affiliated testing lab 26 university affiliated testing lab 4 regulatory agency lab 20 specialty service lab e.g., NMR; EM; GC/MS; eta. 17 other 12 4. What is your personal relationship with the laboratory? administrator 57 supervisor 71 customer (user) 8 other (worker) 18 5. Do you sell your laboratory services? Yes 64 No 77 6. Do you utilize on stream in-line monitoring for process control and/or effluent quality? Yes 68 No 65 7. How is data provided by your lab used: process control 63 identify treatment requirements 89 monitoring treatment product performance 74 identify and quantify pollutants 105 monitor effluent quality 104 NPDES permit 77 waste treatment plant design 44 drinking water quality 64 design parameters 55 research 86 other 7 8. Please indicate the number and distribution of your laboratory staff. 18.8 permanent 3.9 part-time 10.6 professional 9.5 technician 9. Do you have a formal training program for your laboratory analysts? Yes 61 No 80 10. Do you have a formal internal quality control program? Yes 100 No 50 11. Please check appropriate items your program covers. % Qf Work Covered precision studies 40 accuracy studies 35 683 recovery studies "blind" standard analysis identified standard analysis replicate sample analysis "spiked" sample analysis 12. How often do you prepare calibration curves? each time test is run weekly monthly other 36 28 47 36 24 % of Analyses Covered 69 31 40 44 13. Do you service you instrumentation in-house? Yes No 84 76 14. Do you utilize manufacturers and/or outside service for the following: Atomic Absorption Units 98 Infrared Spectrophotometers 60 UV Spectrophotometers 64 Gas Chromatographs 65 Analytical Balances 109 Other 9 15. Which of the following areas of analysis do you feel require our priority attention for development of guidelines? If more than one - please rank in order of importance. 1st 2nd 3rd A. industrial process water quality B. waste water quality C. drinking water quality D. 
other 29 20 28 75 26 4 26 25 14 13 — — 684 NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977) CERTIFICATION OF WATER, AND WASTEWATER LABORATORIES: A PROFESSIONAL CHEMIST'S VIEW H. Gladys Swope Waste Management and Pollution Control 214 N. Allen Street Madison, WI 53705, USA 1. Introduction Under the National Pollutant Discharge Elimination System (NPDES) which is part of Public Law 92-500, both industry and municipalities are required to meet certain permit requirements before discharging a waste into a watercourse. The most recent Water Quality Safety Act, Public Law 92-523, requires that certain chemical analyses, in addition to bacteriological analyses, are required of all water supplies serving the public. Under NPDES such requirements necessarily require that analyses be made. It is very easy to obtain a number by any analytical method you so choose, but unless the person doing the work is not only qualified but honest, careful and understands the meaning of the results obtained, how good is this number? There is even more involved than just the analysis. Taking of the sample itself and the manner of collection all have a bearing on whether the permit or law is met. Therefore, it is necessary for those of us who are professional chemists to try to see that anyone taking a sample or making a lab- oratory analysis be required by law to do it properly. The questions involved are: First, are the people who make such analyses qualified, and what are the requirements for qualification? The next question is quality assurance. Perhaps the personnel are qualified but does the laboratory take time to be sure that the results are correct? Are the solutions standardized properly? Are the instruments working correctly, etc.? Are duplicates run to check the precision of the analyst? Every labor- atory should make an analysis with the thought in mind that this work should be good enough to stand up in court. 2. Discussion Over two years ago questions were asked of the Committee on Environmental Improvement of the American Chemical Society as to what was being done about laboratories that were springing up to run water and waste analyses by people who were not qualified. It was known that almost, if not all states, required certification of laboratories running bacteriological analysis of water, but what about chemical analysis? A letter was sent out in November of 1974 by the American Chemical Society to all of the' state Boards of Health and to the environmental or sanitary engineer in each state asking if they certified laboratories, both public and private, and whether they had any requirements for the qualifications of chemists. It was a most pleasant surprise that not only did everyone reply, but in most cases they wrote fairly detailed letters. Most, however, had no arrangements for certifying laboratories and certainly not for the qualifications of chemists in these laboratories, but in almost ewery case bacteriological laboratories testing for milk and water, had to be certified. 685 The words, certification, accreditation, registration, informal and formal approval, may or may not have equivalent meanings. 
The replies to the survey indicated that the following states had no uniform approval procedure for environmental laboratories or for analytical chemists: Alabama,¹ Alaska, Arkansas, Colorado, Delaware, District of Columbia, Florida, Idaho, Illinois, Indiana, Kansas, Louisiana, Maine, Maryland, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oregon, Pennsylvania, Rhode Island, South Dakota, Tennessee, Texas, Virginia, Washington, West Virginia, Wyoming, and Wisconsin.²

The following states have some form of approval for environmental laboratories:

California (mandatory registration)
Connecticut (mandatory registration, except for air analyses)
Hawaii (voluntary certification)
Iowa (approval of laboratories that do analyses of water supply and wastewater samples)
Massachusetts (voluntary certification of laboratories in the fields of water bacteriology and water chemistry)
North Carolina (certification of water treatment laboratories and laboratories that conduct chemical analysis of wastewaters)
Oklahoma³ (mandatory certification for county health laboratories)
Ohio (approval or certification of laboratories that do water and sewage analysis)
South Carolina (control program for chemical laboratories which analyze samples for water treatment plants)
Utah (certification; a very well documented program)
Vermont (state regulations provide for approval, but lack of funds prevents implementation)
Virginia (informal approval of independent laboratories doing tests in connection with stream discharge permits)

The following states have introduced legislation to approve environmental laboratories: Arizona (license), Illinois (approval), Kansas, Montana, Pennsylvania (accreditation), and Wisconsin (water and milk analyses only). Two states have introduced legislation for some form of approval of analytical chemists: California (licensing) and Ohio (registration).

¹ Since this survey, a paper was published by Dr. John E. Regnier, Director of the Alabama Environmental Health Administration Laboratory. Water and waste samples are analyzed by the state, which has been developing a consolidated environmental laboratory capability.
² Since this survey, a law has been passed in Wisconsin for the certification of laboratories making water and milk analyses.
³ Since the survey, the Oklahoma Water Resources Board certification has emphasized the evaluation of laboratory performance through reference sample analysis.

Twenty states have indicated the need for some form of regulation of analytical laboratories and/or analytical chemists: Alabama, Alaska, Colorado, Delaware, Florida, Idaho, Illinois, Kansas, Minnesota, Missouri, Montana, Nebraska, Nevada, New Jersey, Oklahoma,⁴ Pennsylvania, Rhode Island, South Carolina, Tennessee, and West Virginia.

Hawaii has mandatory licensing of personnel performing analyses of pollutants affecting health and safety and voluntary certification of laboratories.

There is a practical problem, however, in the case of small industries and small municipalities (populations of 10,000 or less). They may not have the funds to pay for a full-time chemist. Therefore, arrangements will have to be made whereby either the person making the analysis is trained properly by a qualified chemist, or simplified analyses, geared to the less technical person and approved by the proper authority, may be substituted for the EPA methods.
Even then, the person using a simplified method should be trained by a professional chemist.

Since the original survey, a paper appeared in the January 1976 issue of Environmental Science and Technology [1],⁵ in which Dr. J. E. Regnier reported on a survey he sent to the various states, in which he was primarily interested in workload, staffing, budgets, and facilities. He received 27 replies, representing 17 states and three federal laboratories. In the February 25, 1976, issue of the Federal Register, the Department of Commerce discussed a procedure for a national voluntary laboratory accreditation program [2]. Howard J. Sanders of Chemical & Engineering News [3] had a special report in the March 31, 1975 issue discussing the licensing and registration or certification of chemists.

It appears there are many pros and cons regarding the certification of laboratories and the licensing or certification of chemists, but in the interests of chemists themselves and to protect the public from people who are not qualified to do chemical work, there should be some legal basis for protecting chemists, their laboratories, and the public.

References

[1] Regnier, John E., Operating Environmental Laboratories, Env. Sci. & Tech. 10, 28-33 (January 1976).
[2] Department of Commerce, Procedures for a National Voluntary Laboratory Accreditation Program, Federal Register 41, No. 38, 8163-8168 (February 25, 1976).
[3] Sanders, Howard J., Do Chemists Need Added Credentials?, Chem. & Eng. News 53, 18-27 (March 31, 1975).

⁴ Since the survey, the Oklahoma Water Resources Board has an annual certification program for all laboratories that analyze and submit data to the agency.
⁵ Figures in brackets indicate the literature references at the end of this paper.

NATIONAL BUREAU OF STANDARDS SPECIAL PUBLICATION 464. Methods and Standards for Environmental Measurement, Proceedings of the 8th IMR Symposium, Held September 20-24, 1976, Gaithersburg, Md. (Issued November 1977)

LESSONS TO BE LEARNED FROM CLINICAL LABORATORY ACCREDITATION

William F. Vincent
Director, Cyto Medical Laboratory
Bloomfield, Connecticut 06002, USA
Consultant on Laboratory Quality Control
Laboratory Division, Connecticut State Department of Health
Hartford, Connecticut 06101, USA

1. Introduction

The question of laboratory accreditation, certification, or licensure is one which has been actively discussed by governmental officials and analysts in recent years. This decade has often been characterized as the "age of the consumer," and, concurrently with consumer demands for better quality and quality assurance for both products and services, a rising interest has developed among consumer groups, professional and trade organizations, and governmental agencies at all levels in establishing programs which will effectively monitor the quality of the products and services received by the consumer. Additionally, as increased emphasis is placed on laboratory data to be used as a monitoring device, it becomes obvious that there has to be a reasonable way to assure that laboratory analyses are acceptably accurate and reproducible. The answer obviously is a program for laboratory "accreditation."

2. Discussion

As we discuss the topic of laboratory accreditation, there are several questions that must be asked and eventually answered.

First, is there really a need for laboratory certification?
To answer this, one must measure the accuracy, precision, and reproducibility of the laboratory data being generated by a given laboratory or laboratories. There have been a sufficient number of proficiency testing surveys by governmental agencies, complaints to state officials, and other examples of inferior laboratory analyses to allow one to draw the general conclusion that a form of laboratory accreditation is needed in order to protect the public and even the laboratory community.

Second, if accreditation is needed, who should carry out the program of laboratory accreditation? There are several possible alternatives: governmental agencies (State or Federal), private professional associations, or combinations of these.

Third, to what degree should accreditation be practiced? A program could be developed that accredits the laboratory as an entity, or the laboratory could be accredited by specialty. One could even go as far as to accredit a laboratory on a test-by-test basis. There is also the question of personnel qualifications. As part of the accreditation process, just the education, experience, and training of key personnel such as the director and supervisor could be evaluated. An alternative is to insist that all personnel in the laboratory who are performing analyses meet certain minimal requirements.

Fourth, should accreditation be voluntary or mandatory; that is, should every laboratory by virtue of law or statute be required to be accredited?

Fifth, who should pay for the accreditation program? Accrediting programs can be very expensive. Should the individual laboratory pay the entire bill, or should the program be supported by government funds?

An excellent way of beginning to answer these questions is to take a close look at the accreditation of laboratories performing clinical analyses. It was not unreasonable that the clinical laboratory would be the first to fall under close scrutiny in terms of the quality of work performed. A clinical laboratory analysis can have a very direct effect on the health of the patient. Even a small misjudgment on the part of clinical analysts has led to the death of a patient. Because of the severe implications, a great deal of emphasis has been placed, especially in recent years, on monitoring the quality of the tests performed in the clinical laboratory. With few exceptions (notably the private physician performing tests on his or her own patients), almost every clinical laboratory in the nation is regulated by a state agency, the Federal government (Medicare or Interstate), or both.

In recent years, relatively good programs have been developed for laboratory registration and licensure that could serve as a model for programs to be developed for environmental laboratories. However, whenever analysts from non-clinical areas sit down to talk about laboratory accreditation, they like to believe that they are the first to do so. The truth of the matter, as this writer sees it, is that the clinical laboratory was subjected to accreditation procedures such as proficiency testing, inspections, personnel evaluation, and quality assurance programs long before the average environmental analyst knew what the words meant. Rather than trying to "reinvent the wheel," we would be well advised to take a very close look at the clinical field. In some cases, there are programs which can be adapted with very little effort.
For example, in Connecticut, a program of proficiency testing of environmental laboratories was started using the same format, computer programs, and so forth, as were used for evaluating the proficiency of clinical laboratories performing clinical chemistry.

The Federal Safe Drinking Water Act authorizes the Environmental Protection Agency to establish guidelines for the certification of laboratories performing bacteriological, chemical, organic, and radiological analyses in support of the monitoring activities mandated by the Federal act. The EPA Work Group on Laboratory Certification was charged with the responsibility of preparing guidelines for use in the certification of laboratories. To a limited extent, the draft guidelines do reflect the lessons that were learned from clinical laboratory accreditation.

The EPA guidelines stress that the program should be conducted by the state agency; that is, the state should have "primacy." As a consequence, state agencies will be actively encouraged to develop certification programs which are "equal to or more stringent than" the Federal guidelines. Unlike the EPA guidelines, the Federal Interstate Licensure Program for clinical laboratories gives very little encouragement to the states to develop adequate programs. The Federal reluctance to surrender licensure to the states eventually led to some rather estranged feelings between many of the states and the Center for Disease Control.

The EPA guidelines, while they place heavy emphasis on the role state agencies hopefully will play, do not provide for equivalency status for private professional accreditation programs. It is worth noting this fact, as it may lead to a much better Federal/State relationship. In the case of the Federal Clinical Laboratory Improvement Act of 1967 governing laboratories engaged in interstate commerce, very little encouragement was given to state agencies. However, the act contained a separate section dealing with private organizations which made it easier for a private organization to substitute its accreditation program for the Federal licensure than it was for a state agency to do so. As a rule of thumb, if an accreditation program is to be mandatory, its implementation is best left in the hands of a governmental agency.

The subject of personnel qualifications has been closely related to laboratory accreditation in the clinical area. The controversy surrounding this subject, while existent, has not been as great as that involving state agency status as an accrediting body, as mentioned earlier. The biggest problem has been just how far one goes in incorporating personnel qualification regulations into an accrediting program. For the clinical laboratory, there have been almost as many different approaches as there have been programs! Some of these are:

(a) A program which does not consider the experience, training, and education of laboratory personnel but stresses, rather, inspection and proficiency testing of the facility as an entire entity.

(b) Certification of only the laboratory director. He or she may be required to have a doctorate in medicine, chemistry, bacteriology, etc., and, subsequent to graduation, four or more years of full-time clinical laboratory experience. In some cases, the director may have to be board-certified by a national accrediting body such as the College of American Pathologists or the American Academy of Medical Microbiology.

(c) Certification of the laboratory director and the supervisors.
(d) Certification of all laboratory personnel, including laboratory trainees. Many persons feel that this is the best approach and should be conducted nationally in order to give technologists the mobility to move from one state to another as job opportunities present themselves.

While certain minimal requirements with regard to personnel education, training, and experience appear to be desirable and necessary, there has to be a reasonable and gradual approach if one is to prevent basically well qualified laboratory personnel from being excluded from the field or severely limited in terms of job opportunities. One of the biggest problems that has confronted clinical laboratory certification has been precisely this type of situation. For example, under Federal Medicare regulations, certain minimal requirements have been set for medical technologists. Individuals not meeting those requirements must pass an examination by the end of 1977. While this appears to be an equitable approach at first glance, it has created many problems, with the consequence that many very capable technologists may be unemployed at the end of 1977. Many of these individuals were trained in a specific area such as hematology, bacteriology, or clinical chemistry and know very little about the other clinical laboratory specialties. The Federal regulations, however, require that they pass an examination in all areas, which for many will be virtually impossible. Consequently, as accreditation programs are developed for environmental laboratories, care must be taken to prevent this type of situation. The best way to do this is to limit the input from professional societies, which will favor regulations and personnel qualification criteria favoring their own membership.

3. Conclusion

Now that mandatory laboratory accreditation is around the corner, thanks to the Safe Drinking Water Act, government agencies and professionals must begin to look closely at the problem. As mentioned earlier, many lessons can be learned from the trials and tribulations of the clinical laboratory programs. Laboratory accreditation, in order to achieve its goal, must evolve gradually and realistically rather than as a massive undertaking. As experience with the latter approach has shown, one may end up with unrealistic and even detrimental criteria for accreditation.

NBS-114A (REV. 7-73)   U.S. DEPT. OF COMM.   BIBLIOGRAPHIC DATA SHEET

1. Publication or Report No.: NBS SP-464
4. Title and Subtitle: Methods and Standards for Environmental Measurement; Proceedings of the 8th Materials Research Symposium Held September 20-24, 1976
5. Publication Date: November 1977
7. Author(s): William H. Kirchhoff, Editor
9. Performing Organization Name and Address: National Bureau of Standards, Department of Commerce, Washington, D.C. 20234
12. Sponsoring Organization Name and Complete Address: Same as item 9
13. Type of Report & Period Covered: Final
15. Supplementary Notes: Library of Congress Catalog Card Number: 76-608384
16. Abstract:
This book presents the Proceedings of the 8th Materials Research Symposium on "Methods and Standards for Environmental Measurement" held at the National Bureau of Standards, Gaithersburg, Maryland, on September 20 through September 24, 1976. The Symposium was sponsored by the NBS Institute for Materials Research in conjunction with the Office of Air and Water Measurement. The volume contains extended abstracts of the invited and contributed papers in topics of concern at the time of the symposium: accuracy, the analysis of trace organic compounds in water, multielement analysis, the physical and chemical characterization of aerosols, in situ methods for water analysis, the application of laser technology to atmospheric monitoring, ambient air quality monitoring, the chemical characterization of inorganic and organometallic constituents, reference materials for environmental measurement, and, finally, environmental laboratory certification and collaborative testing.

17. Key Words: Accuracy; Aerosol; Air; Collaborative Testing; Laboratory Accreditation; Laser Technology; Multielement Analysis; Pollutants; Speciation; Standard Reference Materials; Trace Organics; Water
18. Availability: Unlimited. Order from Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402, SD Cat. No. C13.46:464
19.-20. Security Class (this report / this page): Unclassified
21. No. of Pages: 659