Data Quality Assurance Guidelines For Marine Environment Programs

Prepared Under the Supervision of Robert J. Farland
Office of Ocean Engineering
Rockville, Md.
March 1980

U.S. DEPARTMENT OF COMMERCE
Philip M. Klutznick, Secretary
National Oceanic and Atmospheric Administration
Richard A. Frank, Administrator
Office of Ocean Engineering
William O. Barbee, Acting Director

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402

CONTENTS

Preface — The purpose and benefits of Data Quality Assurance vii
Acknowledgments x
1. Introduction 1-1
1.1 Definitions for DQA 1-1
1.2 Definition of key terms used in the DQA guidelines 1-1
1.2.1 DQA Function 1-1
1.2.2 DQA Element 1-2
1.2.3 Level 1-2
1.3 Organization and use of this document 1-3
2. Preparation of a DQA plan 2-1
2.1 General considerations 2-1
2.2 Considerations for developing the DQA plan 2-1
2.3 Detailing the DQA plan 2-4
3. Determination and application of DQA level 3-1
3.1 Selecting a DQA level 3-2
3.1.1 Criterion 1: End use of the data 3-2
3.1.2 Criterion 2: Relationship of accuracy required to the state-of-the-art technology 3-3
3.1.3 Criterion 3: Degree of complexity of measurement program 3-3
3.1.4 Recommended operator performance for DQA level 3-4
3.2 Significance and impact of DQA level 3-4
4. DQA Functions 4-1
4.1 Preparation of the program 4-1
4.1.1 Objective of program 4-2
4.1.2 Audit requirements 4-2
4.1.3 Expected operational conditions 4-3
4.1.4 Resource allocation 4-3
4.2 Data acquisition system design 4-3
4.2.1 Methodology and system considerations 4-4
4.2.2 Availability of personnel to operate selected equipment 4-4
4.2.3 Suitability of system 4-4
4.2.4 Reliability 4-5
4.2.5 Logistics 4-5
4.2.6 Retrievability 4-5
4.2.7 Measurement performance 4-5
4.2.8 Availability of instrument and cost (existing inventory items, off-the-shelf equipment, or new development equipment) 4-6
4.3 System certification 4-6
4.3.1 Existing in-house equipment items 4-6
4.3.2 Off-the-shelf equipment to be procured 4-7
4.3.3 New development 4-8
4.3.4 Personnel training and qualification 4-9
4.3.5 Bench calibration and test evaluation 4-9
4.3.6 Intercomparisons 4-11
4.3.7 Field testing 4-11
4.3.8 Acceptance testing 4-12
4.3.9 Documentation 4-12
4.3.10 Configuration control 4-12
4.3.11 Backup equipment and spare parts 4-13
4.4 Data processing planning 4-13
4.4.1 Verification of computer programs 4-13
4.4.2 Data reduction and validation planning 4-14
4.4.3 Personnel training 4-14
4.4.4 Data collection and format 4-15
4.4.5 Data checking 4-15
4.4.6 Data transfer standards 4-16
4.4.7 Support data collection and storage 4-16
4.5 Total error budget verification 4-17
4.5.1 Random error sources and limits of random error of various components 4-17
4.5.2 Systematic error sources and limits of systematic error of various components (estimated) 4-17
4.5.3 Total system error 4-17
4.6 Audit (preoperations stage) 4-18
4.7 Field operations 4-19
4.7.1 Personnel training 4-19
4.7.2 Equipment transportation 4-19
4.7.3 Standardization of operating procedures 4-20
4.7.4 Site selection 4-20
4.7.5 Sampling considerations 4-21
4.7.6 Predeployment checking and calibrations 4-21
4.7.7 Measurement comparisons 4-21
4.7.8 Ongoing checking and calibrations 4-22
4.7.9 Postdeployment checking and calibrations 4-23
4.7.10 Quality control and system audit 4-23
4.7.11 Data and sample collection, recording, protection, and transmission 4-23
4.7.12 Chain of custody 4-24
4.8 Analytical laboratory operations 4-24
4.8.1 Personnel training 4-24
4.8.2 Calibration 4-25
4.8.3 Establishing precision and accuracy of laboratory system 4-26
4.8.4 Standardization of testing 4-26
4.8.5 Interlaboratory and intralaboratory comparisons 4-27
4.8.6 Quality control and system audit 4-27
4.8.7 Functional checks 4-28
4.8.8 Data transferral check 4-29
4.8.9 Quality control check samples 4-29
4.9 Data reduction and analysis 4-29
4.9.1 Personnel training 4-29
4.9.2 Data processing 4-29
4.9.3 Reduction checks 4-30
4.9.4 Standard program computer analysis checks 4-30
4.9.5 Quality control and system audit 4-30
4.9.6 Support data collection and storage 4-30
4.10 Data validation 4-31
4.10.1 Review of support data 4-31
4.10.2 Statistical analysis of data 4-31
4.10.3 Data evaluation 4-32
4.10.4 Preparation of the uncertainty statement for all data 4-32
4.11 Audit 4-33
4.11.1 Review of Data Quality Assurance plan 4-33
4.11.2 Audit of actual operations 4-34
4.12 Data reporting 4-35
4.12.1 Preparation of report 4-36
4.12.2 Archiving 4-36

Tables
Table 3.1. Inputs for choice of DQA level 3-5
Table 3.2. Criterion 3: Degree of complexity of measurement program 3-7
Table 3.3. Examples of the intensity of application of selected DQA actions for DQA levels 3-9
Table A-1. Assurance levels of human error A-7

Figures
Figure 1-1. Sequence of DQA elements 1-3
Figure 2-1. Considerations for developing the DQA plan 2-5
Figure 2-2. General function diagram for preparation of DQA plan 2-12
Figure 2-3. Process for detailing DQA plan 2-13
Figure 3-1. Function diagram for preparation of DQA plan — Level I DQA plan C-7
Figure 3-2. Function diagram for preparation of DQA plan — Level II DQA plan C-9
Figure 3-3. Function diagram for preparation of DQA plan — Level III DQA plan C-11
Figure 3-4. Decision tree for determination of level 3-3

PREFACE

Scientific investigations require the collection, verification, and analysis of data. The validity of the conclusions arrived at as a consequence of these activities is directly related to the provable quality of the data used. Traditionally, the basis for evaluation of data quality has been the reputation of the principal investigator responsible for the measurement program. Today, many marine measurement programs involve many individuals or several organizations, so it usually is not possible for a principal investigator to control personally all aspects of the data process; data from such programs can no longer be judged solely on the reputation of the investigator. If these data are to be used with confidence by others, they must be evaluated objectively and accompanied by a statement that defines their quality in terms of the goals of the measurement program, the accuracy achieved, and the sources of error associated with the data.
The preparation and execution of a Data Quality Assurance (DQA) plan is the means for defining the effects of measurement systems and their operators during the collection, processing, analysis, and reporting of data. In preparing a DQA plan, all aspects of the data process that can affect the data should be considered. Some of these aspects are measurement methodology, analytical techniques and procedures, instrument selection and certification, the effect of instrument operators, field and laboratory operations, and data processing and evaluation.

A properly designed DQA plan leads to:
a. Data that were collected by using the procedures formulated to control and define the error associated with the data;
b. Data that have been examined to verify that they satisfy the measurement program objectives; and
c. Information to support statements defining the quality of the data.

When a DQA plan has been used to guide the operations of a measurement program, processes have been followed that are normally considered to be good scientific practice. Some examples of how DQA can increase the credibility and quality of data are described below.

Instrumental Inadequacies

Nearly all data collection requires the use of properly calibrated instruments. The performance of an instrument during laboratory calibration often is quite different from its performance in the field. Experience also has shown that the uncertainties in instrumental accuracy and precision stated in the manufacturer's literature or technical manuals should be verified before the instrument is used in a measurement program.

The preparation (and eventual use) of a DQA plan requires consideration, identification, and verification of instrument performance in laboratories and under actual usage conditions. As instrument systems become more complex and greater varieties are manufactured, the potential for error will increase unless careful evaluations are made. An investigator must be satisfied that his instruments have operated properly and that the data obtained are accurate to within stated limits.

Personnel Error

The effects of human fallibility cannot be removed from the data collection process, even though the current trend towards sophisticated data collection instrumentation does reduce human input. Human errors range from the simple to the complex. Errors such as misreading or improperly manipulating instruments and inadvertently transposing digits arise from causes ranging from momentary inattention to lack of training or general ineptness. Some studies 1, 2 on the effects of human errors in the data process find that at least 20 percent (and a maximum of 80 percent) of inadequate instrument performances result from human error. Even given such a wide range of values, a manager would be prudent to consider the potential effect of human errors in data programs. A DQA plan provides a way to evaluate these and other effects of operator performance.

Data Comparability

Normally, an organization develops its own in-house procedures, especially for routine or repetitive operations. Frequent checks may be made against internal standards (or by using alternative procedures) to assure internal consistency of results. Past comparisons of data made among several organizations, however, have shown little consistency, even though each participant was confident about the validity of his own data. Comparison provisions in a DQA plan therefore can reveal the existence of problems of this kind.
Review and Correction of Deficiencies

A DQA plan should provide for the review needed to identify procedural inadequacies or other deficiencies in the data process that can result in unacceptably large error sources or uncertainties. A DQA plan should provide for recording and documenting details of operations as they occur. This may include the recording of what appear to be trivial items, such as lot numbers of chemical reagents or expendables. Thus, in programs in which DQA plans are used, suspect data sets can be identified as coming from questionably performing measurement systems. When such data sets can be corrected, amended, or flagged as unusable, data from acceptable sets can be used with confidence.

Legal Issues

When data are called for in legal proceedings, the quality of the data is of primary importance. In the course of such proceedings, data submitted as evidence first must be shown to be relevant. Second, it must be established that the technique used for collecting the data was reliable in a legal sense. Federal statutes clearly state that decisions must be based on reliable data. For legal purposes, reliability can be established by careful documentation of where, by whom, and how the sample was taken, and of the standards used. In many cases, both Government and industry have lost legal decisions because of questionable data. A well prepared DQA plan should provide for the chain of documentation required for supportive evidence in legal proceedings.

1 Meister, D., 1964. "The Problem of Human-Initiated Failures," Proceedings of the 8th National Symposium on Reliability and Quality Control, January 9, 1964, pp. 234-239.
2 Meister, D., 1964. "Methods of Predicting Human Reliability in Man-Machine Systems," Human Factors, Baltimore, Md., 6 (6).

Management

A DQA plan can be considered a management tool during both its planning and its execution phases. The preparation of the DQA plan can, and should, be an integral part of all program planning. This aspect of DQA can prevent problems during program execution because all phases are considered systematically. During program operations, continual information on both personnel and equipment performance is possible. Areas requiring management intervention or improvement of approach become evident through frequent reference to the DQA plan.

SUMMARY

DQA actions not only benefit the originating organizations, but they also serve the marine community at large. If investigative work is performed using a DQA plan and the resulting data are accompanied by a complete and accurate data quality statement, confidence in the data sets is established and secondary users can assess the data for applicability to their own uses. Each year the input to national and international data banks is increasing. If the data are of unknown or unspecified quality, even though they may be of the highest order of precision and accuracy, users of these banks may reject them and repeat the measurements themselves to obtain the degree of certainty they require. The degree of precision or accuracy of existing data is often satisfactory; confidence in the implied or stated precision and accuracy is often equally important. A DQA plan helps to establish measurement procedures that will yield data of the quality required for the project. During the execution of the program, the DQA plan provides a means for controlling the factors that affect data quality.
Finally, a DQA plan provides the basis for stating the quality of the data obtained under a measurement program.

ACKNOWLEDGMENTS

This Data Quality Assurance Guidelines document was developed under the auspices of the Office of Ocean Engineering, National Oceanic and Atmospheric Administration. This document was initially prepared under contract by D'Appolonia Consulting Engineers, Inc., 10 Duff Road, Pittsburgh, PA 15235.

Reviews or contributions of the following individuals at various stages during the preparation of this document are gratefully acknowledged: R. D. Angelari, D. G. Ballinger, M. Basileo, B. C. Belanger, R. L. Booth, J. Coucheron, L. Friedman, C. W. Holly, A. Jarvis, H. H. Ku, L. J. Ladner, B. R. Malo, W. H. Oldaker, B. P. Polanin, and J. D. Westhoff.

Assistance in the general editing and in the preparation of the final manuscript was furnished under contract by Paul E. Lehr, 10815 Lombardy Road, Silver Spring, MD 20901. Special acknowledgment is made to Joan Peifer for her dedication and diligence during the typing of several versions of this document.

Inquiries regarding material contained in the DQA Guidelines should be made to the Office of Ocean Engineering, National Oceanic and Atmospheric Administration, Rockville, MD 20852.

1. INTRODUCTION

For every scientific or technical investigation a means must be provided to assure that the data used are sufficiently accurate to support the objectives of the investigation. A mechanism to attain such accuracy is the Data Quality Assurance (DQA) plan. Such a plan provides for the selection and certification of measurement devices, the specification of procedures to be used in both field and laboratory operations, and processing and evaluation of the data collected. Evaluation of the data includes the specification of acceptable limits of accuracy and the definition of the causes of errors. The DQA plan also provides for continuous monitoring of the data collection process to see that specified procedures are followed, that error analyses are being used to control data quality, that data meet program objectives, and that data quality is known at all times.

This document provides information and guidance so that data collection activities in marine measurement programs can be effectively planned, controlled, and reported. Recommendations are made on the type and organization of activities needed for collecting data compatible with the intent of such programs. The guidance in this document is general; it can be used to develop a DQA plan for any type of measurement program. It is presented in a logical sequence for developing a DQA plan. Once a DQA plan is completed and has been reviewed by management, it becomes an integral part of the measurement program and provides a systematic way to control the uncertainty of the collected data.

1.1 DEFINITIONS FOR DQA

Quality assurance is a set of coordinated actions such as plans, specifications, and policies used to assure that a measurement program can be carried out in a reliable and cost-effective manner. Quality control is not quality assurance; it is the routine use of procedures designed to achieve and maintain a specified level of quality for a measurement system. A quality assurance plan will include quality control procedures.

A DQA plan describes a sequence of coordinated activities to be used as part of a specific data acquisition program. By using a DQA plan, a program manager assures that data with error bounds that meet the program objectives are obtained.
The plan includes an uncertainty statement on the accuracy and precision of the data. Preparation of a complete DQA plan involves both the planning and operations phases of a marine measurement program; the plan itself covers the operations phase only.

1.2 DEFINITION OF KEY TERMS USED IN THE DQA GUIDELINES

Key terms used in DQA are defined below. Additional terms are defined in Appendix B.

1.2.1 DQA Function

A DQA function is a specific activity undertaken during the course of a marine measurement program. Each can individually affect the quality of the data obtained. Examples of such functions are personnel training, equipment transportation, calibration, and data processing.

1.2.2 DQA Element

A DQA Element is a group of functions related either by their effect upon the overall DQA plan or by their period of occurrence during the development and use of the DQA plan. Each element can be considered individually during the preparation and execution of the DQA plan. The DQA planner should note that the DQA elements are separated into two basic groups: Planning and Operations.

Planning Elements are used to collect necessary background information for developing the DQA plan, which consists of Operations Elements only. Planning Elements can be used as guidance by the DQA planner as he considers all the factors that can affect the quality of the data acquired during the operations phase of the measurement program. It is recommended that the DQA planner document the information from the Planning Elements if there is a possibility that DQA decisions written into the Operations Elements may be questioned. (Under this criterion, Level III operations would require that the Planning Elements of the DQA plan be documented. Measurement programs conducted at DQA Level I or II need not have the Planning Elements documented.)

Operations Elements deal with the actual collection and subsequent processing of the data. When developed and put into written form by the DQA planner, the Operations Elements form the DQA Plan for the measurement program. The managers of the measurement program use these as the Data Quality Assurance procedures to be followed while gathering and processing the data the measurement program was designed to obtain. Figure 1-1 lists the planning and operations elements appropriate for marine measurement programs.

1.2.3 Level

The objectives of a marine measurement program determine the content of the DQA plan prepared for the program. A DQA level is defined as the intensity of application of DQA functions to a specific measurement program. Level is determined by the program objectives, the end use of the data, the sophistication required in the data-collection process, and the complexity and resources of the measurement program. The three DQA levels are described in chapter 3.

1.3 ORGANIZATION AND USE OF THIS DOCUMENT

The information needed to develop a DQA plan is presented as follows: Chapter 2 discusses how a DQA plan is prepared and what should be contained in it. Chapter 3 describes the selection of DQA levels for a specific measurement program. The chapter also describes the three DQA levels, the criteria for differentiating between them, and the minimum DQA functions recommended for each level. Chapter 4 discusses the detailing of DQA elements and functions for a specific DQA plan.
Figure 1-1. — Sequence of DQA Elements. Planning Elements: Preparation of Program; Data Acquisition System Design; System Certification; Data Processing Planning; Total Error Budget Verification; Audit (Preoperations Stage). Operations Elements: Field Operations; Analytical Laboratory Operations; Data Reduction and Analysis; Data Validation; Audit; Data Reporting.

2. PREPARATION OF A DQA PLAN

The DQA plan for a marine measurement program specifies how the data collection and processing systems, personnel, and facilities are to be used to obtain the data to meet the program objectives. The plan includes an uncertainty statement about the data to be collected and details the procedures to be used to achieve the allowable error objective of the measurement program. Procedures to be considered in developing the plan are related both to the planning and operations portions of the measurement program. The general considerations for preparing a DQA plan and the steps needed for outlining and detailing the plan are discussed in this chapter.

2.1 GENERAL CONSIDERATIONS

The DQA plan should:
a. Parallel the course of events in the measurement program. The measurement program plan (often referred to as the technical plan, operations plan, mission profile, or cruise plan) should provide the sequence of events. Figure 1-1 shows the recommended sequence of events for a marine measurement program. For many programs, the DQA and measurement program plans can be merged into a single document.
b. Serve as a management tool for control of the measurement program. Thus, program management must review the DQA plan.
c. Identify existing organizational procedures appropriate for the program. (The use of existing procedures decreases the need to commit unnecessary resources, and organizational personnel involved in the DQA plan will most likely be familiar with them.)

2.2 CONSIDERATIONS FOR DEVELOPING THE DQA PLAN

The following preliminary steps should be considered prior to detailing the DQA plan. (Fig. 2-1 shows these steps.)

Step I — Select DQA Planner

The manager of the measurement program should appoint an individual to prepare the DQA plan. This person should be directly responsible to program management. The DQA planner must have technical knowledge of the objectives of the measurement program and must be familiar with the operational procedures to be used.

Step II — Review DQA Guidelines

The DQA planner should read these guidelines to: become familiar with the objectives of DQA; become familiar with the steps in developing the DQA plan (ch. 2); understand DQA levels (ch. 3); and understand the functions in the DQA plan (ch. 4 and fig. 2-2).

Step III — Review the Measurement Program Plan

The DQA planner should review the measurement program plan to determine: the objectives of the measurement program; the parameters to be measured; the accuracy required for all measurements; and the numbers and qualifications of the personnel needed to conduct the data collection activities.

Step IV — Outline Objectives of Measurement Program

Using the review of the measurement program plan in Step III, the planner should outline the objectives of the measurement program as follows:

List the data required, the quantities to be measured or sampled, and the parameters involved. This is done by organizing the information gathered in Step III.

Determine the end use of the data — an important consideration in determining DQA level (Step V).
The planner should determine whether the measurement program is being conducted to provide in-house data only, whether the data are to be published as part of a scientific program, and whether the data are to be introduced into legal proceedings.

Step V — Determine DQA Level

Chapter 3 contains descriptions of three DQA levels, recommended minimum DQA functions for each level, and criteria for determining what level is appropriate for a specific measurement program. In general, Level I is used when a program provides in-house data only, Level II when program data are to be published, and Level III when data may be used as evidence in legal proceedings.

Tables and figures in chapter 3 can be used to help in the selection of the DQA level for a program. Table 3-1 (sec. 3.1.1) lists the selection criteria and suggests appropriate "answers." Table 3-2 (sec. 3.1.3) lists the program complexity criteria and factors that affect control of the measurement program. Figure 3-4 is a decision tree for determining DQA level. It can be used in conjunction with tables 3-1 and 3-2. Figures 3-1, 3-2, and 3-3* present the minimum DQA functions recommended for the level selected.

*Figures 3-1, 3-2, and 3-3 appear after appendix C.

Step VI — Set Assurance Level Objective

The assurance level is a measure of the procedures used in a measurement program to control human errors. Determination of the assurance level can be made by considering the effects of operator performance upon data quality. The following factors should be considered: Can operator errors be detected in the program data? Are these data sensitive to operator performance? Are available operators capable of performing the work tasks required in the measurement program? Will an extensive training program be needed to assure that operators can perform their work tasks?

To verify that the operators are correctly performing assigned tasks, "pass-fail" tests can be developed. (Pass-fail tests provide a qualitative measure of whether the operator can correctly complete the task, not how much of an error he might introduce.) The pass-fail tests measure operator performance, which can be expressed as a percentage (or assurance level). Section 3.1.4 recommends operator assurance levels for each DQA level.

The assurance level selected for a measurement program should consider whether: the operator can introduce undetectable errors; the collected data are sensitive to operator performance; the data are to become part of legal proceedings (if so, it is especially important to demonstrate operator proficiency); and new instruments are being developed as part of the measurement program. If the answer to any one of the above is yes, the minimum recommended operator assurance level for the DQA level should be increased. If the answer to more than one is yes, the assurance level should be further increased.

The operator assurance level desired in the measurement program should be estimated at this point. It should be fixed only after the DQA level has been selected for the program.

Step VII — Select Applicable DQA Functions

Figures 3-1, 3-2, and 3-3 present the minimum DQA functions recommended for each level. Considering the DQA level selected in Step V, the DQA planner should:

1. Use figure 3-1, 3-2, or 3-3 to determine the functions to be considered. He should then read the detailed description of each function (ch. 4) before making final selections.
2. Determine whether the recommended functions completely satisfy the needs of the measurement program. If not, additional functions (those not highlighted in the figure for the level selected) should be considered and, if applicable, should be added to the recommended minimum DQA functions.

Step VIII — Prepare Specific Function Diagram

The DQA planner should prepare a specific function diagram for the measurement program. This diagram will also serve as a framework for detailing the DQA plan. To prepare the function diagram the planner may:

a. Refer to figure 1-1 and select the elements that will be part of the measurement program. Selection will depend on DQA level and the activities to be included.
b. Prepare a block diagram containing all elements. Elements should be repeated as necessary for each measurement in the program. The resulting block diagram should be a flow chart of the sequence of activities for the measurement program. Figure 1-1 can serve as a basis for this block diagram. The final diagram may include several parallel blocks for an element, such as field operations, if various parameters are to be measured simultaneously. Also, the same element, such as analytical laboratory operations, can be repeated to show parameters that can be measured independently.
c. List the functions to be applied for each element. Figure 2-2, General Function Diagram for Preparation of DQA Plan, can be used as an example of how a specific function diagram can be organized.
d. Prepare specific descriptions for all DQA functions selected. The descriptions should relate directly to the specific measurement program and should be as detailed as possible at this stage of the DQA plan development.

Step IX — Review Existing Procedures and Work Methods for Applicability

The DQA planner should review existing in-house procedures and work methods to determine if any are applicable to the measurement program. These could include: test methods, instrument operation instructions, calibration requirements and procedures, personnel training programs and testing methods, data processing procedures, and equipment transportation instructions. The procedures selected can be noted on the specific function diagram (prepared in Step VIII) for reference when detailing the DQA plan.

Step X — Allocate Resources for DQA Plan Detailing and Execution

Using the specific function diagram, the DQA planner should estimate the resources (funding, personnel, facilities, equipment, and time) required to detail and execute the DQA plan. The estimate should consider:

a. Whether additional personnel, such as auditors, are required for execution of the plan.
b. Whether the organization has facilities available for calibrating or testing instruments, for storage of samples, and for similar activities.
c. Whether equipment needed for executing the DQA plan is available. This could include standards and instruments for intercomparison programs.

Program management should review and validate the estimate. If the currently available resources are not sufficient, several alternatives are possible:

a. Provide additional resources.
b. Establish an acceptable resource level and modify the measurement program requirements and DQA plan.
c. Change the accuracy specifications of the measurement program if this will result in reduced expenditures but will still satisfy program objectives.
d. Decrease DQA level, e.g., by reducing a Level II measurement program to Level I, if program objectives can still be met.
e. Reduce the overall scope of the measurement program so that the number of parameters to be measured is decreased.

Figure 2-1. — Considerations for Developing the DQA Plan (flow chart of the steps in section 2.2: Review DQA Guidelines; Review the Measurement Program Plan; Outline Objectives of Measurement Program (sec. 4.1.1); Determine DQA Level (ch. 3); Set Assurance Level Objective (sec. 3.1.4); Select Applicable DQA Functions (ch. 4); Prepare Specific Function Diagram (ch. 4); Review Existing Procedures and Work Methods for Applicability; Allocate Resources for DQA Plan Detailing and Execution; Detail DQA Plan (sec. 2.3).)

2.3 DETAILING THE DQA PLAN

A detailed DQA plan should include the following:

a. All test and calibration procedures for instrument certification and field and laboratory operations.
b. Requirements for personnel training.
c. The pass-fail tests to be performed during operations to determine the operator assurance level.
d. Recordkeeping requirements, formats, and means for data processing.

The following steps, shown in figure 2-3, are recommended for detailing the DQA plan.

Step 1 — Detail Objectives of Measurement Program

The outline of the objectives of the measurement program developed in Step IV of section 2.2 should be completed. Depending on the detail arrived at in section 2.2, it is recommended that only one revision be considered — the total allowable error objective for the measurement program together with an estimate of the errors associated with the instrument and data processing systems to be used. It may be necessary to repeat Step 1 after considering Steps 3 and 4 below.

Step 2 — Review DQA Level Adequacy

The DQA level selected in Step V of section 2.2 should be reviewed. Proceed as follows:

a. If the DQA level is adequate, proceed to Step 3.
b. Consider whether the detailing of the measurement program objectives (Step 1 above) or completion of Steps VII, VIII, and X of section 2.2 indicates a need to change the DQA level. Such a change could be required because of a change in the resource allocation (Step X, sec. 2.2) or in the error source estimate arrived at in Step 1 above.
c. If the DQA level is inadequate, determine a new level using the considerations discussed in Step V of section 2.2 and chapter 3. (See Steps 2, 2A, 2B, and 2C of fig. 2-3.)
d. Upgrade or downgrade the objectives of the pass-fail test to the operator performance assurance level recommended if the DQA level is changed. However, the operator assurance level need not be downgraded simply because the DQA level is decreased. The decision to downgrade depends on why the level was decreased. For example, if the resource allocation resulted in downgrading the DQA level, it might be possible to retain the assurance level to preserve the quality of operator performance. (The reevaluation of assurance level is Step 2B of fig. 2-3.)
e. If the DQA level is changed, review the DQA functions selected for the measurement program as in Step VII of section 2.2 (Step 2C of fig. 2-3) and prepare a new specific function diagram using the directions in Steps VIII and IX of section 2.2 (Step 2C of fig. 2-3).

Step 3 — Complete Preparation of Program Element

The detailing of the DQA functions in the specific function diagram for the Preparation of Program element should be completed as follows:

a. Establish audit requirements for DQA Level II or III programs. Audits will be made by program personnel for Level II programs and by independent auditors for Level III programs. (Audit requirements are discussed in sec. 4.1.2.)
b. Estimate the environmental conditions expected during the measurement program, because they can affect the performance of the measurement system. The error source estimate prepared in Step 1 should be reevaluated after these estimates are made.
c. If the error source estimate is affected by the environmental conditions, review the adequacy of Step 2.
d. Review the resource allocation prepared in Step X of section 2.2 and revise it if affected by Steps 1, 2, or 3 of this section.

Step 4 — Determine the Status of the Measurement System

If the measurement system has been used before and is prescribed for the measurement program under consideration (Step 4A, fig. 2-3), the DQA planner should:

a. Detail the procedures for the Data Acquisition System Design and System Certification elements of the DQA plan.
b. Review the Data Acquisition System Design element to assure that the measurement system meets the program objectives.
c. Detail the DQA functions in the System Certification element.
d. Detail the DQA plan developed for the previously used measurement system so that all information is provided to complete the Data Acquisition System Design and System Certification elements.

If the measurement system has not been used before, or is new instrument development, follow Step 4B of figure 2-3. The following must be considered for "new" systems:

a. The details needed to perform the System Certification element will not be known until the Data Acquisition System Design element is complete.
b. The DQA planner should prepare an outline as guidance for working on the Data Acquisition System Design element.
c. The System Certification element in the DQA plan should be outlined after completing the Data Acquisition System Design element.
d. After the System Certification element is completed, detail the functions in this element that will affect subsequent operations elements, such as calibration procedures.

Steps 4, 5, and 6 may require detailing in the DQA plan as the elements are completed if the measurement system, data processing procedures, and field and analytical laboratory operational procedures are under development. (See Steps 4B, 5B, and 6B in fig. 2-3.) In this case, the DQA plan must be detailed after the respective planning elements have been completed so that all information on items such as operating procedures and calibration requirements is available to the operations personnel during the execution of the program.

Step 5 — Determine Whether Data Processing Procedures Are Available

If the measurement system is known, detail the data processing procedures for the DQA plan. Include the following actions:

a. Review the data processing procedures identified in Step IX, section 2.2.
b. Develop new or augmented procedures if existing procedures do not satisfy all the functions in the Data Processing Planning element.

If the measurement system is new (4B, fig. 2-3), the requirements of the Data Processing Planning element should be outlined so the DQA planner can determine what information is needed from the System Certification element. (Information will be needed on items such as data format and how the data can be processed.) After the System Certification element is completed, the Data Processing Planning element can be detailed in the DQA plan.

Step 6 — Determine Availability of Field Operations and Analytical Laboratory Operations Procedures
If the measurement system is prescribed at the beginning of the DQA plan preparation (4A, fig. 2-3), field and analytical laboratory operations can be detailed during initial preparation of the DQA plan. The DQA planner should provide detailed procedures on:

a. Field and analytical laboratory operations.
b. Training required to prepare personnel to perform these operations.
c. Calibration and checking activities.

The procedures should contain detailed information on equipment to be used, instructions for equipment operation, instructions for performance of tests, calibration requirements, recordkeeping, and requirements for handling samples. Existing in-house procedures should be specified provided they meet the requirements of the field and analytical laboratory operations and include the information cited above. If in-house procedures are incomplete, the missing information should be added.

If a new or untried measurement system or data processing procedure (4B and 5B, fig. 2-3) is to be used, the initial DQA plan should contain an outline of the field and analytical laboratory operations procedures that will be needed, and a list of the measurements to be made. After the System Certification and Data Processing Planning elements are completed, the procedures for the field and analytical laboratory operations elements should be detailed for the DQA plan. The latter procedures must be completed before the operations elements are used in the measurement program.

Step 7 — Prepare Other Elements for DQA Plan

For Level III programs, the Total Error Budget Verification element should be executed after the Data Processing Planning element. The purpose of the total error budget analysis is to verify that the measurement system will meet the error limit objectives of the measurement program before data are actually collected. The DQA plan for the Total Error Budget Verification element should detail the information needed to perform the analysis and the mathematical methods to be used. These details should include:

a. The total error objective and error source estimate from the Preparation of Program element.
b. The identity of the records needed from the System Certification element, such as calibration records.
c. The sources and magnitudes of errors from the Data Processing Planning element.
d. The methods to be used to combine errors for comparison against the total error objective. (See appendix A.)

If the DQA plan is detailed as the planning elements are executed, this element should be detailed in the DQA plan only after the operations procedures (6B, fig. 2-3) are outlined.

The Data Reduction and Analysis, Data Validation, and Data Reporting elements should also be prepared at this time. If the measurement system to be used is known at the beginning of the DQA plan preparation, these elements can be detailed in the DQA plan before any work is done on the planning elements. If the measurement system was not known, these elements can be detailed in the DQA plan only after the operations procedures are outlined.

After the operations elements are completed, the data processing elements can be detailed. The data processing elements are (fig. 2-2): Data Reduction and Analysis, Data Validation, Audit (when appropriate for the DQA level decided upon), and Data Reporting.
The procedures listed in the DQA plan must include: incoming data format, identification of computer programs or processing techniques, instructions for use of computer codes or processing procedures, output data formats, and pass-fail tests of operator performance.

Step 8 — Plan and Develop Audits as Required by DQA Level

The audits required by the DQA level established should be detailed in the DQA plan. These include audits of the DQA plan, operations, and data described in sections 4.6 and 4.11. The detailing of the audits should specify:

a. The audits to be performed (sec. 4.1.2).
b. The personnel to perform the audits: for Level II programs, identify the responsible operations personnel; for Level III, the independent auditor should be identified.
c. That the outline for each audit should show: the work tasks to be observed, the applicable procedures, the records to be reviewed, and the requirements for and testing of operator performance.
d. When audits are to be performed. This depends on the schedule of the measurement program. The audit schedule should be adjusted to the measurement program schedule.

If the measurement system is known to be operational, the audits can be planned, detailed, and scheduled immediately. If the measurement system is new or under development, the initial version of the DQA plan should state that audits are to be made as the procedures shown in 4B, 5B, and 6B of figure 2-3 are detailed.

Step 9 — Include Possible Points for Feedback When the DQA Plan Is Implemented

Figure 2-2 indicates two points for feedback. The first point occurs when the last planning element is completed; the second after the Audit element is completed.

If the last planning element, the Total Error Budget Verification element, indicates that the allowable error objective has not been met, e.g., for a Level III program, the planning elements must be reviewed. The point at which the review (feedback) is to be started depends upon the sources and magnitudes of the errors. If the unacceptable error source is attributable to data processing, the Data Processing Planning element is to be reconsidered. If the selected instrument system is incapable of meeting the error objective, a new system should be selected (Data Acquisition System Design element) and subsequently tested (System Certification element). If the instrument system cannot be changed or improved, the objectives of the measurement program should be examined to determine whether the total allowable error objective could be changed without affecting the end use of the data. It is important to note that program objectives, allowable error, end uses of the data, availability of resources, and so on must be changed together; changing one without changing another could result in a serious mismatch.

Detailing of the Total Error Budget Verification element is not required for Level I and II programs. But, no matter what DQA level has been assigned to a program, the total allowable error objective and estimated error sources should be compared against the results of the System Certification element to verify that the error objective is reasonable before starting the operations elements.

The second feedback point shown in figure 2-2 is used to provide a means for correction during execution of the operations elements. If the audits for Level II and III programs indicate corrective actions are required, the DQA plan should outline them.
Corrections could include: changes in working procedures, changes in data format or processing, changes in calibration frequency, and changes in instrumentation or calibration (indicated by intercomparison). Similar changes may be made in Level I programs solely on operational experience rather than on formal audits.

If changes occur during the execution of the operations elements, the DQA plan should be revised. The DQA plan should specify the mechanisms for effecting changes or corrections, such as: who has the authority to approve changes, what procedures would be affected if a calibration requirement or a test method is changed, what record forms could be affected, and what operations personnel must be notified of changes.

Step 10 — Review DQA Plan and Resource Allocation

If a measurement system was acceptable during the detailing of the DQA plan, the plan should be completed before execution of the Data Acquisition System Design element. If the measurement system was new or under development, the DQA plan is not to be completed until after the planning elements are completed and the operations elements have been detailed.

In either case, the DQA planner should review the completed plan to determine whether it meets the objectives of the measurement program in terms of whether:

a. All types of data to be collected have been included.
b. The DQA functions selected consider the entire course of events of the measurement program.
c. The uncertainty statement for the data can be prepared from the collected data and the records of data collection.
d. The collected data satisfy the end use of the data.
e. The measurement system satisfies the total allowable error objective.

The second part of this step is a review of the resource allocation prepared during the completion of Step X (sec. 2.2) and Step 3 above. Determine whether the resources allocated are adequate to meet the requirements of the completed DQA plan. If not, the alternatives discussed for Step X should be considered, and the DQA plan revised to match available resources.

Step 11 — Submit DQA Plan to Measurement Program Management

The completed DQA plan should be submitted to the measurement program manager to determine whether it meets the objectives of the measurement program. Concurrence of management is used to notify all program personnel that the DQA plan represents management policy for the program.

Figure 2-2. — General Function Diagram for Preparation of DQA Plan (flow chart showing the planning and operations elements of figure 1-1, with the DQA functions of chapter 4 listed under each element, the total error budget acceptance check, and the two feedback points described in Step 9.)

Figure 2-3. — Process for Detailing DQA Plan (flow chart of Steps 1 through 11 of section 2.3, with branches 4A/4B, 5A/5B, and 6A/6B depending on whether the measurement system, the data processing procedures, and the field and analytical laboratory operations procedures are already available or must first be developed.)
3. DETERMINATION AND APPLICATION OF DQA LEVEL

The three DQA levels (see definition in appendix B) described in this chapter are sufficiently broad in scope to be applicable to most marine measurement programs.

Level I — The minimum DQA functions that lead to data that can be presented with an uncertainty statement. Most functions associated with this level are related to the execution of a measurement program.

Level II — DQA functions that provide direct control over activities which could affect the numerical value of the data. These functions are performed to assure that recorded data are within the limits of the error budget.

Level III — DQA functions that are primarily related to the determination (computation) of the numerical values of the error statement accompanying the data. Provisions are included for audits of the measurement program by independent reviewers to assure that the DQA plan is used correctly and that data values and the uncertainty statement are valid.

The minimum DQA functions recommended for each level are shown in figures 3-1, 3-2, and 3-3.* The DQA level selected should provide data quality appropriate for the objectives of the measurement program under consideration. The DQA planner should consider all functions for each level to be sure that the level selected is appropriate for the particular measurement program. When a specific measurement program requires DQA functions beyond those recommended as a minimum for the DQA level selected, such functions should be added by the planner.

*Figures 3-1, 3-2, and 3-3 appear after appendix C.

The DQA level designation for a specific measurement program can be used as a descriptor in the uncertainty statement. Thus, when data are reviewed by a secondary user, the DQA descriptor indicates the intensity of DQA effort during data collection.

The DQA level selected places demands on the measurement program resources in terms of funding, manpower commitments, equipment availability, and facilities. The resources needed must be determined by the DQA planner.

In selecting a DQA level for a specific measurement program, the following points should be considered:

a. The end use of the data. DQA level selection depends only on the end use of the data, not on the investigator or investigating organization.
b. Organizational complexity (the number of individuals, groups, organizations, etc.) associated with the measurement program.
c. Combined operations measurement programs, which may require the selection of different DQA levels for different parameters, depending on the intended end use of each parameter. The same DQA level need not be applied to all data unless required by the measurement program objectives.
d. The percentage or limits of error desired or required for a specific measurement program, which are not the primary consideration in determining DQA level. However, the relation between the required accuracy and the state-of-the-art technology is important.
e. The size or scope of the measurement program, which does not in itself determine DQA level.

The remainder of this chapter gives the planner a set of criteria for selecting a DQA level, an evaluation of the impact of the various DQA levels upon a measurement program, and the recommended minimum DQA functions for each DQA level.

3.1 SELECTING A DQA LEVEL

Fundamentally, the selection of DQA level is associated with the following three criteria (in descending order of importance):

1. end use of the data;
2. the relation between the accuracy required and the current state of the art and technology; and
3. degree of complexity of the measurement program.
For each criterion there are several points to consider. Sections 3.1.1, 3.1.2, and 3.1.3 discuss the criteria. Table 3-1 outlines the main points of these sections. This table can be used as a guide for answering the considerations pertinent to each criterion. The decision tree shown in figure 3-4 can then be used to select the DQA level.

3.1.1 Criterion 1: End Use of the Data

The major consideration is that the data meet the measurement program objectives. To decide the end use of the data, consider the following:

a. Are the data collected for use primarily by the measurement program personnel?
b. Will dissemination of the data be limited?
c. Will the data be subjected to independent peer review?
d. Must the data be available as evidence in legal proceedings?

A general review of the objectives of the measurement program defines the starting point in the decision tree shown in figure 3-4. If indicators point toward LEGAL, it is recommended that only Level III be considered. If a measurement program does not fall into the LEGAL category, the next criterion should be reviewed.

Figure 3-4. — Decision Tree for Determination of Level (the tree proceeds from the end use of the data (No One (project group), Peers, or Legal) through the relationship to the state of the art and technology (Simplistic, Sophisticated, or New Development) and the degree of complexity (e.g., number of persons, groups, parameters) to a recommended DQA level; levels are indicated by Roman numerals.)

If the data may be used in legal proceedings, a Level III program should be instituted. Level III requires the greatest DQA activity and provides sufficient checking and documentation so that the data are fully defensible. Generally, data can be considered as earmarked for legal use if they are to be available for use in response to requirements of statutes and regulations, or as evidence in a court of law. An additional legal consideration may be contract requirements. Level III should also be considered for measurement programs affecting public safety. Examples of Level III programs are environmental studies for nuclear powerplants, studies establishing baseline environmental information for activities such as monitoring, and programs collecting data to verify compliance with pollution criteria. Level II programs typically include nonregulatory engineering design studies and scientific experiments. Level I programs include background or preliminary studies that may be used for planning future work.

If the LEGAL descriptor is inappropriate, the DQA planner should consider (using fig. 3-4) whether the data must satisfy PEERS (a peer group) or NO ONE (internal usage). PEERS include other marine organizations, coordinated design activities among various groups, or the presentation of the data in scientific journals or at meetings. Since most marine measurement programs will be reviewed or used by members of the marine community as well as by the measurement program team, the PEERS category is used most frequently. The NO ONE category should be reserved for measurement programs in which data dissemination will be limited, and accuracy, precision, and credibility are not of primary concern. Reconnaissance studies to provide preliminary information leading to more substantive investigations are typical of limited dissemination (NO ONE) measurement programs.
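As a rough aid in applying these criteria, the logic of the figure 3-4 decision tree can be sketched in a few lines of code. The sketch below is illustrative only and is not part of the guidelines: it assumes the criterion answers of table 3-1 have already been chosen, the function and argument names are invented for the example, and the two open branches noted in section 3.1.3 are returned as a choice left to the planner.

```python
def recommend_dqa_level(end_use, technology, complexity):
    """Sketch of the figure 3-4 decision tree (not a substitute for the figure).

    end_use:    "legal", "peers", or "no one"        (criterion 1, sec. 3.1.1)
    technology: "new development", "sophisticated",
                or "simplistic"                      (criterion 2, sec. 3.1.2)
    complexity: "high" or "low"                      (criterion 3, sec. 3.1.3)
    """
    if end_use == "legal":
        return "III"   # legal data must be fully defensible; only Level III is considered
    if technology == "new development":
        return "III"   # new development programs are recommended for Level III
    if end_use == "no one" and technology == "simplistic" and complexity == "low":
        return "I"     # in-house data, proven system, easily managed program
    if complexity == "high" or technology == "sophisticated":
        return "II or III (planner decides)"   # the open branches noted in sec. 3.1.3
    return "II"        # data intended for peers, within the current state of the art


# Example: published (peer-reviewed) data from a proven system in a small program.
print(recommend_dqa_level("peers", "simplistic", "low"))   # -> II
```

Any program whose inputs fall outside these simple cases should be resolved by working through figure 3-4 together with tables 3-1 and 3-2 directly.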
3.1.2 Criterion 2: Relationship of Accuracy Required to the State-of-the-Art Technology

The consideration here is control to achieve the required accuracy, precision, or assurance in the data quality. The question is not the accuracy itself, but how difficult it will be to achieve the desired accuracy. For a specific measurement program, the DQA planner should list what state-of-the-art sensors, instruments, and software are available. An additional factor is the experience of the personnel who will be using the system. If the instrument system is not complex and personnel who can operate the system are available, Levels I and II should be considered by the DQA planner.

If the measurement program involves NEW DEVELOPMENT, i.e., it either requires instrumentation beyond the current state of the art or further development of existing instrumentation, it is recommended that Level III be selected. The use of Level III for NEW DEVELOPMENT programs is to assure that the objectives of the measurement program can be met, evaluated, and defended.

If the proposed measurement program is within the current state of the art, the system can be either SIMPLISTIC or SOPHISTICATED. The term SIMPLISTIC is used when system performance has been well proven and personnel are experienced in its use. Equipment in a SIMPLISTIC system probably is not sensitive to calibration drift, to environmental influences, or to careless handling or operating. If the equipment requires continuous attention to maintenance, sensitivity, or operation, the program should be considered SOPHISTICATED.

Based upon criterion 2, a firm recommendation to select Level III is made in the case of development programs. Criterion 3, below, is used for determining DQA level for programs identified as SIMPLISTIC and SOPHISTICATED.

3.1.3 Criterion 3: Degree of Complexity of Measurement Program

A measurement program may be classified as either HIGH or LOW in complexity. To determine this complexity, the DQA planner should examine the considerations listed in tables 3-1 and 3-2. (Table 3-2 is essentially a checklist that may be used to determine program complexity.)

In general, large numbers of people or organizations and large numbers of sites, stations, or parameters to be measured indicate a HIGH complexity measurement program. Conversely, few people, few sites, and few parameters to be measured indicate a LOW complexity program. At this point, the decision tree of figure 3-4 can be used to select an appropriate DQA level for the program. For programs in which some factors have been classified as LOW and some as HIGH, it is recommended that the higher DQA level be selected. Two branches of the decision tree recommend that, for specific measurement programs, either DQA Level II or DQA Level III may be appropriate; the DQA planner should decide the level to be used.

Table 3.1. — Inputs for choice of DQA level. (See Figure 3-4, Decision Tree for Determination of DQA Level.)

END USE OF DATA (SEC. 3.1.1):
Consider —
— Who or what must the data satisfy?
— What, if any, are the legal, contractual, regulatory, or data archive requirements?
— What direct use will be made of these data?
— Is the accuracy or credibility of the data critical in a design affecting human safety?
— Is the parameter to be measured affecting the quality of the environment?
Answers —
— Legal — Data must be fully defensible.
— Peers — Data intended for dissemination outside the measurement program team.
— No One (Measurement Program Team) — Data intended for reconnaissance or feasibility studies where accuracy, precision, and credibility are not of great concern, and
— Limited dissemination is intended.

RELATIONSHIP OF REQUIRED ACCURACY TO STATE OF THE ART OR TECHNOLOGY (SEC. 3.1.2):
Consider —
— How complex are the proposed techniques, sensors, instruments, systems, or software compared to the current state of the art?
— How experienced is the measurement program team with the proposed techniques, hardware, and software?
Answers —
— New Development — The proposed techniques, hardware, and software are untested, unproven, or yet to be developed, or
— The measurement program is an extension of the state of the art.
— Sophisticated — Techniques, sensors, systems, or software are proven, but more than routine care must be used to achieve the desired potential, or
— The experience of the measurement program team in using the techniques, sensors, systems, or software is limited.
— Simplistic — Techniques, sensors, systems, or software are relatively simple, tested, and proven and the team is experienced in their routine use, and
— The uncertainty objectives are easily attainable by routine use because of relative insensitivity of the system to errors.

DEGREE OF COMPLEXITY OF MEASUREMENT PROGRAM (SEC. 3.1.3, TABLE 3.2):
Consider —
— Number of personnel involved.
— Number of "independent" groups involved.
— Logistic complexity in number of locations, data transfers, and communications.
— Number and end use of parameters.
Answers —
— High — Program management is difficult; large numbers of personnel or many "independent" working groups are involved, or
— The data must pass through many transfer points, or
— Processing and analysis is complex, or
— A relatively large number of different or independent parameters are to be measured, or individual (independent) data are to be combined to produce new output data.
— Low — Effective management is easily attained; only one or two "independent" groups are involved, and
— The data passes through few transfer points, and processing and analysis are relatively straightforward.

Table 3.2. — Criterion 3: Degree of complexity of measurement program

Number of persons involved: Few — LOW; Many — HIGH
Number of independent groups involved: Few — LOW; Many — HIGH
Logistics (number of sites or stations): Few — LOW; Many — HIGH
Number of data or sample transfers: Few — LOW; Many — HIGH
Communication: Easy — LOW; Difficult — HIGH
Number of independent parameters: Few — LOW; Many — HIGH
Complexity of independent parameters: Simple — LOW; Complex — HIGH
Ability of program management to monitor entire program: Easy — LOW; Difficult — HIGH

3.1.4 Recommended Operator Performance for DQA Level

It is recommended that the performance of operators be assured in accordance with the DQA level selected for the measurement program. Appendix A discusses the evaluation of operator performance in terms of "pass-fail" criteria. The purpose of these tests is to verify that operators are performing assigned tasks without affecting the resulting data; these tests are not a means for determining operator qualification. Following are the levels of operator performance recommended for each DQA level:
Level I — 68 percent up to less than 95 percent,
Level II — 95 percent to 99 percent, and
Level III — greater than 99 percent.
The pass-fail criteria recommended above can be increased if desired by the DQA planner.
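The recommendations of sections 3.1.1 through 3.1.4 can also be summarized, for illustration only, as a short program. The sketch below (Python) encodes the two firm rules stated in the text (a LEGAL end use or a NEW DEVELOPMENT program leads to Level III) together with the recommended operator-performance ranges; the intermediate branches shown are one plausible reading of figure 3-4 and are not a substitute for the planner's judgment. The function and argument names are hypothetical.

# Illustrative only: encodes the firm recommendations of sections 3.1.1-3.1.4
# (LEGAL or NEW DEVELOPMENT leads to Level III) and leaves the remaining
# branches, as the text does, to the judgment of the DQA planner.
# Function and argument names are hypothetical.

OPERATOR_PERFORMANCE = {            # recommended pass-fail assurance, sec. 3.1.4
    "I":   "68 percent up to less than 95 percent",
    "II":  "95 percent to 99 percent",
    "III": "greater than 99 percent",
}

def candidate_dqa_levels(end_use, technology, complexity):
    """end_use: 'LEGAL', 'PEERS', or 'NO ONE' (criterion 1)
    technology: 'NEW DEVELOPMENT', 'SOPHISTICATED', or 'SIMPLISTIC' (criterion 2)
    complexity: 'HIGH' or 'LOW' (criterion 3, tables 3-1 and 3-2)"""
    if end_use == "LEGAL":
        return ["III"]               # data must be fully defensible
    if technology == "NEW DEVELOPMENT":
        return ["III"]               # beyond, or an extension of, the state of the art
    if end_use == "NO ONE" and technology == "SIMPLISTIC" and complexity == "LOW":
        return ["I"]                 # limited dissemination, routine system
    if technology == "SOPHISTICATED" or complexity == "HIGH":
        return ["II", "III"]         # the planner decides between the two
    return ["II"]                    # peer-reviewed program, proven system

# Example: a peer-reviewed program using proven instruments at a few sites.
for level in candidate_dqa_levels("PEERS", "SIMPLISTIC", "LOW"):
    print("Level", level, "-", OPERATOR_PERFORMANCE[level])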
The assurance actually achieved in operator performance should be included in the uncertainty statement accompanying the data.

3.2 SIGNIFICANCE AND IMPACT OF DQA LEVEL

In establishing these DQA levels for marine measurement programs, the extremes of DQA were considered. Minimum DQA effort, Level I, is required if the measurement program objectives require that the collected data include only an estimate of the error associated with the data in the uncertainty statement. Level I incorporates only the DQA functions considered to be the minimal requirements of routine operational practice that can still provide data of known, although essentially estimated, quality.

Level III, the maximum DQA effort, would be required if it is expected that the data will be challenged. Level III data must be totally defensible, with evidence of a comprehensive calibration program during the collection of data, complete documentation of results, and a subsequent error analysis. Level III requires careful management of the error budget so that a total assessment of the error can be made and defended.

To reduce the uncertainty associated with Level I data, additional DQA functions, such as a more comprehensive calibration program, may be used. Such additions may change the program from Level I to Level II, which provides the means to control the quality of data. Level III includes additional calibration efforts such as intercomparison programs and independent audits of the measurement program.

Table 3-3 shows examples of selected DQA actions as they apply to the three levels of DQA.

Table 3.3. — Examples of the intensity of application of selected DQA actions for DQA levels

Support data documentation:
Level I — Minimal; ship's log, notebooks, annotated data records.
Level II — Comprehensive. Maintain calibration and personnel records.
Level III — Comprehensive. Data audited and error analyzed. Complete record continuity.

Error reporting:
Level II — Error recorded and compared against error budget.
Level III — Error determined on a discrete basis. System error reported.

Effect of calibration upon measurement program:
Level I — Need not stop operations. May continue with error measured.
Level II — Stop operations if over error budget. Correct equipment. Verify QC performed.
Level III — If chronic problems exist, management solution required. Modify uncertainty statement or incorporate specific deviations from error budget.

Calibration requirements:
Level I — Acceptance and bench testing; ongoing calibration for field operations.
Level II — Pre- and postdeployment calibrations, QC audit.

Additional resources needed:
Level I — Minimum additional resources required over basic operation.
Level II — May require extra personnel for data collection and documentation.

Typical end uses:
Level I — Reconnaissance or background data for feasibility studies. Used as basis for planning other measurement programs.
Level II — Engineering and scientific studies. Also for instrument evaluation.
Level III — Baseline and design studies. Regulatory requirements. New system development. Data may be used in legal proceedings.

An example of the effect of DQA level on documentation of a specific measurement program activity is described below.

Example: Support Data Documentation Requirements

Support data, not usually submitted in the final data report, include calibration records, equipment performance evaluations, and operating logs. This example shows how, as the DQA level increases, additional support data are required.

Level I: Minimum documentation is required. The records submitted with the data need not include working procedures or personnel training records. The records to be permanently maintained include:
1. Ship's Log Book — contains entries of time of sampling or measurement, positioning of ship, depth, weather, personnel, sea state.
2. Notebooks — contains name of operator, depth and location of sampling or measurement, calibrations, dates of operations, problems, maintenance history, occurrence of unusual phenomena.
3. Equipment List — contains names of equipment manufacturer, serial numbers, calibration records, operations and maintenance manuals, history of transfer.
4. Annotated Recordings — an example is a gravity trace on which the operator indicates date, time, chart speed, power supply, operator, weather, location, and other pertinent information. Disturbances or unusual conditions at the time of observation also must be noted.

Level II: In addition to the minimum documentation of Level I, Level II requires that records of all measurement program activities be maintained. Examples follow:
1. Standardized Field and Observation Book — lists all steps needed for assuring quality of data, such as procedures and recording requirements. Entries should be made at the time data are collected showing that each step of the procedure has been followed.
2. Calibration — document all calibration procedures, including standards to be used, accuracy and precision expected, environmental conditions of validity, and recommended calibration frequency.
3. Training — document training procedures and keep records of the application and evaluation of the training program. Keep a record of operators, their qualifications, and operator-performance checks.
These documents should be available for review by measurement program personnel to verify proper performance of quality-related functions.

Level III: At this DQA level, all documents pertaining to the measurement program are audited by an independent reviewer. The reviewer considers or examines the following factors and records:
1. Coverage — use of the DQA plan and the completeness and continuity of documentation throughout the measurement program should be verified. All documents should be well organized and readily available for the auditor's review. The auditor should be able to follow the progress of the program through its documentation.
2. Document Review — all documents should be reviewed to determine if specific environmental conditions during operations have affected the data or if calibration results have been unsatisfactory. If data have been so affected, have such effects been accounted for either by correcting the data or by annotating the records?
3. Reporting — an audit report should be prepared. Such a report should list the information reviewed and the discrepancies observed. The audit report should describe the program reviewed, evaluate the personnel involved and the program conformance to the DQA plan, describe any items that have been unsatisfactory, and recommend corrective action. Specific items of an audit are described in 3a, b, and c below.
a. Data Examination — review support data to evaluate unusual data values. May include a review of data processing.
b. Correction — the audit report should recommend means to correct discrepancies.
c. Error — examine documentation used to determine both system error (such as drift in instrument performance) and discrete errors (such as one-time operator errors).
d. Determine whether the data were corrected for such errors and if corrections were noted in the records.

The characteristics of the three DQA levels of intensity can be summarized as follows: At Level I few actions are taken to provide a quantitative measure of error associated with the data; the DQA functions recommended provide for minimum control of the data-collection process. A Level II DQA program includes sufficient control to ensure a high degree of assurance about the validity of the results. Level III provides for taking every conceivable action during the data-collection process to assure that the accuracy of the data can be substantiated.

4. DQA FUNCTIONS

In this chapter, the various elements and functions associated with the preparation of a DQA plan for marine measurements are described in detail. The sections in this chapter are identical with and are set down in the same sequence as the elements shown in figure 2-2, the General Function Diagram. The DQA functions included have been developed by consideration of the following factors:
a. Operation of a measurement program.
b. Computation and control of total error objective.
c. Determination of the numerical value of the data.
d. Documentation and preparation of the final report and accompanying uncertainty statement.
e. Control activities performed by measurement program personnel to assure that recorded data are within the limits of the error budget.
f. Assurance activities by independent personnel to audit the operation and to identify activities in the measurement program that require correction.
Each subsection in this chapter begins with a brief introduction to the DQA element and continues with detailed discussions of each function associated with that element. Each function is defined and, where appropriate, the effect of DQA level on the function is described.

4.1 PREPARATION OF THE PROGRAM

The preparation of a marine measurement program includes the development of a technical plan (often referred to as the operation plan, the mission profile, or the cruise profile) and a DQA plan. For many marine measurement programs, the DQA plan and technical program can be merged into a single document. This Preparation of the Program element is used to identify the items needed to develop a DQA plan for a specific measurement program. To prepare the DQA plan, the measurement program objectives must be determined. This determination requires the preparation of the following: statement of the measurement objectives, enumeration and description of data to be collected, statement of the end use of data, determination of the DQA level, and establishment of the total allowable error objective (including an estimate of error sources). Each of these items is discussed in subsequent subsections. Determination of the functions discussed in this element should be completed before initiating any other activity for the measurement program.

It may be necessary to revise some of these determinations during the course of developing and executing the program if actions to meet requirements of specific subsequent functions cannot be taken as planned. For example, if instruments to meet program goals cannot be obtained, it may be necessary to modify the measurement program objectives, the DQA level, or even the entire DQA plan.
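The total allowable error objective mentioned above can be compared with the estimated error sources as soon as preliminary estimates exist. The sketch below (Python) is a minimal illustration, assuming one common convention: independent random components combined by root-sum-square and systematic bounds added linearly. The parameter, the component names, and the numerical values are invented; the actual method of combination belongs to the total error budget verification element (sec. 4.5).

import math

# Hypothetical error-source estimates for a single parameter, in the
# parameter's own units.  The combination rule shown (root-sum-square of
# random components plus a linear sum of systematic bounds) is illustrative.
random_components = {"sensor": 1.5, "recording": 0.5, "processing": 0.3}
systematic_components = {"calibration offset": 1.0, "mooring motion": 0.8}

random_total = math.sqrt(sum(e ** 2 for e in random_components.values()))
systematic_total = sum(abs(e) for e in systematic_components.values())
estimated_total = random_total + systematic_total

total_allowable_error = 4.0   # the objective set for this parameter, same units

print(f"estimated random error     : {random_total:.2f}")
print(f"estimated systematic error : {systematic_total:.2f}")
print(f"estimated total error      : {estimated_total:.2f}")
if estimated_total <= total_allowable_error:
    print("estimated error is within the total allowable error objective")
else:
    print("objective cannot be met as planned; revise the plan or the objective")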
In completing this element, the audit requirements are set, the expected operational conditions are es- timated, and the measurement program resources are allocated. 4.1.1 Objective of Program The objectives of a specific marine measurement program should be established and stated. This state- ment, which should define clearly and explicitly the problems to be solved, is the basis for determining DQA activities. The statement, which can be a list of objectives and constraints, should consider the data required and the desired end uses of the data. The four points that establish the measurement program objective are discussed below. Each should be considered in developing the DQA plan. 1 . Data requirements — What quantities are to be measured or sampled and what parameters are in- volved? Are the parameters physical, chemical, biological, or geologic? 2. Use of data — Why are the data to be collected? Are the data for in-house use, will there be wide dissemination, will they be used in legal proceedings? 3. DQA level(s) — Using knowledge of the complexity of the measurement program and the end use of the data, choose the DQA level. (See ch. 3.) The selection of DQA level will indicate DQA functions to be considered in the DQA Plan. Note.The entire measurement program need not be conducted at the same DQA level. Different measurements may justify the use of different levels depending upon the end use of the data. (This point is discussed in ch. 2.) 4. Total allowable error objective and error source estimate — a total allowable error objective should be determined for each parameter to be measured. The uncertainty and limits acceptable for each error must be clearly defined. The estimation of error is an important indicator for the preparation of the DQA plan; it is suggested that Appendix A be reviewed at this point. 4.1.2 Audit Requirements In preparing the DQA plan, the planner should determine what audits or reviews of the measurement program will be needed to verify that the plan is followed correctly and the program objectives are met. Since audits require the commitment by management of personnel, time, equipment, and funds, audits should be indicated by the DQA planner only when firm requirements can be shown. Suggested scheduling points for audits are shown in the General Function Diagram, figure 2-2. These are the recommended minimum audits for Level III programs. Depending on the measurement program, audits should verify that: The instrument system has been assembled and tested so that the system will perform as intended. Calibration records and operating procedures for the instrument system have been documented. The data processing planning is complete and properly documented, and the processing will not af- fect the data. The error objective has been specified and is documented in the planning elements. The field operations and analytical laboratory operations are in accordance with the DQA plan. The data processing is conducted as described in the planning element. 4-2 Audits by independent personnel are recommended for all Level III measurements programs. It is further recommended that quality control and system audits be made during field and analytical laboratory operations of Level II measurement programs. Although measurement program personnel will make Level II audits, an allocation of resources is still required. Audit personnel, for both levels, should be technically qualified in the areas they audit. Results of audits should go directly to the program management. 
Audits benefit program management only if comprehensive. Audits must identify potential or actual deficiencies and must also examine the application of procedures, the performance of personnel, and the processing of data. 4.1.3 Expected Operational Conditions The DQA plan for a measurement program must anticipate a variety of operational conditions at the sites of observation. Vibration and shock, wind velocity, temperature, humidity, rain, fog, salt spray, waves, and surface and bottom currents are some of the conditions that should be considered. Ex- tremes of operational conditions should be determined from existing records or estimated so that the data gathering equipment selected can withstand the expected environment and measure the parameter within desired error bounds. The DQA planner can use the limits of variation of operational condi- tions to determine the relation between expected errors and the total allowable error objective. For ex- ample, the measurements from a wind direction measurement sensor mounted on a buoy are greatly influenced by the oscillations of the buoy, which, in turn, depend on wind, wave, and current direc- tions. The errors in wind measurements associated with large oscillations can be computed and com- pared with the allowable error objectives. Thus, measurements made under environmental extremes can be accepted or rejected in advance. 4.1.4 Resource Allocation In preparing the DQA plan, the resources needed to meet the measurement program goals must be assessed. DQA functions should be reviewed to assure optimal allocation of funding, personnel, man- hours, equipment, facilities, and time. After this estimate of the resources required has been prepared, it should be compared with available resources to determine if objectives of the measurement program can be met. Resource allocation depends, to some degree, on DQA level. For example, to conduct DQA activities for a Level I measurement program requires only a small addition of resources above those needed for basic operations. For a Level II program, additional personnel, equipment, and funding may be re- quired for quality control and documentation. For Level III programs, increased resources in the form of additional staffing and equipment are needed for independent audits. If the estimate of resource allocation is not compatible with the resources available, the program objectives may have to be revised. Changes in the data to be collected, the acceptable uncertainty of the resulting data, the DQA level, or the error objective must be considered. (See ch. 2 for discussions of several points in the DQA planning process where resource allocation is considered.) 4.2 DATA ACQUISITION SYSTEM DESIGN The design of the data acquisition system should take into account the equipment and techniques available to the measurement program, the availability of trained personnel to operate the selected system, and the ability of the system to withstand the expected operational environment. This planning element requires consideration of system reliability, logistics, retrievability, and measurement perfor- mance. 4-3 The functions within this planning element can be used as guidance in determining the requirements for the data acquisition system and the systems available to meet the requirements. It may be necessary to reexamine or rewrite material relating to these functions after the System Certification element has been developed. 
4.2.1 Methodology and System Considerations This function includes the selection of equipment and accessories, data collection methodology, and software requirements for retrieval and transfer of data. Selection of the system should be based on the total allowable error budget; during the selection the various error sources involved with different components of the system must be considered. For example, a current measurement program requires the selection of current meter type, platform and mooring line system, and a data retrieval and transfer system that can be expected to meet the allowable error objective. Methodology and system considerations are important to overall planning and formulation of the marine measurement program. Equipment cost, cost of replacement parts and maintenance, resolu- tion, repeatability, operational suitability, downtime, adjunct equipment requirements, data format, storage, retrieval, and susceptibility to hostile environments must all be considered in the selection of a system. An additional factor at this stage of planning is consideration of functions under the System Certification element. Questions should be answered on how calibration is to be performed and, in general, information developed here should include details for use in developing the System Certification element. This function is equally important to all DQA levels. 4.2.2 Availability of Personnel To Operate Selected Equipment System selection must be based, in part, on the availability of personnel capable of operating the system or on the cost and lead time needed to train personnel to operate equipment. If requirements for data quality are such that the qualifications of operators are critical, the measurement program should be planned to assure the availability of competent operators. In some cases, the DQA planner may develop a training plan suitable for personnel assigned to the vessel. This may require formal or on-the-job training for assigned personnel, hiring of additional per- sonnel, or assignment of special operators to accompany equipment units. For example, many marine instruments are sensitive to operational practices not always specified by the manufacturer. Where possible, operators with significant experience in using instruments under similar conditions may be consulted to help in the development of training plans. The requirement that assigned personnel be able to change or repair equipment may also be an important consideration in selecting the correct type of personnel, equipment units, or systems. 4.2.3 Suitability of System The selected system must be suited to the environment in which it is to be used. It must respond as re- quired under laboratory predeployment, postdeployment, and routine calibration, under program operational conditions and environments, and under typical siting configurations. It must perform to specification upon deployment after packaging, storage, and shipment, and must withstand the operational environment without performance degradation. For example, data should be within the desired error limit when the sensor is affected by motions such as heave, pitch, roll, surge, and sway. The unit must also be suitable for the plan of operation, e.g., a thermometer that takes 10 minutes to stabilize is not acceptable for a measurement program requiring 20 readings per hour. The equipment and its interface with data storage, analysis, and use must be compatible. Planning should provide for routine field checks to assure proper equipment performance. 
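The thermometer example above amounts to a simple compatibility check between instrument stabilization time and the sampling interval required by the program. The sketch below (Python) is illustrative only; the function and argument names are hypothetical.

# Illustrative check of instrument response time against the sampling
# requirement of the measurement program (see the thermometer example above).
def suitable_for_sampling(stabilization_minutes, readings_per_hour):
    """True if the instrument settles quickly enough for the required rate."""
    minutes_available_per_reading = 60.0 / readings_per_hour
    return stabilization_minutes <= minutes_available_per_reading

# A thermometer needing 10 minutes to stabilize cannot support 20 readings per
# hour (only 3 minutes are available per reading) but could support 4 per hour.
print(suitable_for_sampling(10, 20))   # False
print(suitable_for_sampling(10, 4))    # True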
4-4 4.2.4 Reliability The reliability of a measurement system is the probability that the system will perform its intended function for a prescribed period of time under the specified operating conditions. Reliability is impor- tant in at-sea operations, because of the complexity and sophistication of sampling, analysis, automatic recording, and telemetering systems. Equipment failures that cannot be corrected on board can be very expensive, because of the high cost of running the ship and a possibly serious interruption in the measurement program. Data interpretation often depends on the availability of continuous measurements for trend analysis. As equipment becomes more complex, the potential for failure increases. In planning, system reliability should be evaluated using considerations of design, manufacturing, testing, system main- tenance procedures, and the necessity for spare parts. The DQA planner should consider the cost of downtime, the logistics of repair, and the cost of maintenance personnel. Thus, evaluation of the measurement system must consider human-machine interaction as well as the mechanical parts of the system. Even the most capable operators may become frustrated with unreliable equipment, thus further affecting the reliability of the data. As a minimum, the following actions should be planned for assessing the reliability of equipment for a given program. (A clear, concise record of the results of these actions should be available for review.) a. Inspect and test incoming equipment for adherence to design or contract specifications. b. Perform reliability tests for equipment (burn-in or demonstrated performance acceptance tests) under simulated or actual operating conditions. c. Instruct, train, and test personnel in procedures for preventive maintenance and unscheduled repair of the equipment, and for the control of equipment during operations. This can be done by use of literature, films, posters, and reliability information bulletins. d. Consider maintainability (including cost, inventory of spare parts, maintenance personnel, logistics, and cost of downtime) at time of purchase. e. Provide for records of maintenance, breakdowns, and analyses, and their use to initiate corrective actions. 4.2.5 Logistics The term "Logistics" as used in this document refers to a group of activities that includes purchase, maintenance, and moving of equipment, material, and personnel to and from measurement locations. All logistics activities affect the accuracy, quality, and timeliness of the data. The DQA planner must consider these effects in developing the plan. 4.2.6 Retrievability The data acquisition system design should include provisions for the ready retrievability of both primary and support data. If a system malfunctions, the data recorded prior to the malfunction should be recoverable. Storage, retrieval systems, and data transfer systems should be designed to prevent the possible erasure or elimination of data before data transfer has been verified. The DQA plan should provide for periodic checks to assure the validity of the stored information. 4.2.7 Measurement Performance Measurement performance of a system depends on data quality, accuracy and resolution, and response time of equipment. Data quality is related to the reproducibility and repeatability of an experiment. 
Response time, the time interval between an event and the system's response to the event (which may include the total time associated with data collection, transmission, processing, and reporting to the terminal), is related to the rate at which a parameter is measured; it must be compatible with the sampling rate or sampling frequency objective of the data collection program. Reproducibility is the precision, usually expressed as a standard deviation, of measurements of the same sample made at different laboratories or locations. Repeatability is the precision, usually expressed as a standard deviation, of measurements of the same sample made at different times at a given laboratory or location.

The data acquisition system must be designed to achieve the measurement performance required to satisfy the program objectives. As the data are collected, they should be reviewed or compared to assure they are within the objectives of the measurement program.

4.2.8 Availability of Instrument and Cost (Existing Inventory Items, Off-the-Shelf Equipment, or New Development Equipment)

The timely availability of instruments is an important consideration in the design of the data acquisition system for a measurement program. Available in-house instruments that can provide data of the quality required are normally first choice, because of cost considerations. Despite the initial cost advantage of in-house equipment, its efficiency should be compared to that of equipment available for purchase or under development. Final selection should be based on data quality requirements, capital and operating costs for the system, and availability of resources.

If items are not on hand in-house, they may have to be purchased. The criteria governing the purchase of off-the-shelf equipment should be based on available resources, data quality requirements, past performance, and maintainability of the system (sec. 4.3.2). For programs requiring advanced technology, an equipment or instrument development program may be necessary. This process is described in the following section.

4.3 SYSTEM CERTIFICATION

This DQA plan element deals with control and calibration procedures for any equipment considered for use in the program. The primary consideration for this element is certification that the system will satisfy the objectives of the measurement program.

System certification will vary from program to program. Certification must deal with personnel training and qualification, bench calibration and test evaluation, intercomparisons, field testing, acceptance testing, documentation, configuration control, backup units, and equipment reliability and maintainability. The selection of the DQA functions to be considered for the System Certification element depends on the DQA level; the application of a specific function may change with DQA level.

4.3.1 Existing In-House Equipment Items

The following DQA functions should be considered in connection with existing in-house equipment.

4.3.1.1 Availability

Determine the availability of equipment and backup. The measurement program schedule is a factor here. Lists of spare parts should be obtained and availability of the parts determined.

4.3.1.2 Evaluation

Evaluate the suitability of the equipment to meet the objectives of the measurement program (e.g., accuracy, precision, data output format, operational limitations, sampling frequency, sensitivity to environment, effect of operator on data quality, reliability, durability, and cost).
Past performance records or manufacturer's specifications may serve as starting points for evaluation. Records of other similar investigations should be obtained for comparison. 4.3.1.3 Checkout (Requisitioning) Obtain the unit, together with operator's manuals, prior calibration records, and spare parts; use documentation such as requisitions, endorsed checkout lists, and signed transmittals and expected return date to maintain an inventory record and to assist other investigators in scheduling their programs. 4.3.1.4 Recommissioning Recommission or refurbish the unit as necessary, make dry run and performance tests to check functioning of the equipment to specifications. The above Availability, Checkout, and Recommissioning DQA functions should be considered for all DQA levels. The Evaluation function should be written for Level II and Level III programs. 4.3.2 Off-the-Shelf Equipment To Be Procured When procuring off-the-shelf equipment, follow existing organizational procurement regulations. Contracts for the procurement of materials and equipment should specify DQA requirements. Before procuring service contracts, each program manager should review chapter 4 (DQA Functions) and determine which DQA elements and functions are relevant. Bidders on a service contract should sub- mit a DQA plan for review and approval by the program manager. When evaluating a bidding con- tractor's documents, preparing contract documents for purchase of off-the-shelf equipment, and cer- tifying a system, the six DQA functions described below should be considered. 4.3.2.1 Availability Determine the availability, cost, and time of delivery of equipment. 4.3.2.2 Evaluation Evaluate the capability of equipment to meet the measurement program objectives. Accuracy, preci- sion, data output format, operational limitations, durability, reliability, and cost should be considered. Evaluation may be based on manufacturer's specifications or information from others experienced in its use. Special testing also may be made a requirement of development contracts. 4.3.2.3 Source Inspection A source inspection can be made before the purchase of equipment or the award of a service contract. Provisions for factory inspection of the equipment or for maintenance service for a trial period may be made part of the request-for-bid document. The purpose of the source inspection is to determine the ability of a vendor to provide the equipment or its servicing. A review of the vendor's quality assurance program is appropriate. 4.3.2.4 Purchase Inspection After the decision has been made to purchase any piece of equipment or instrument system, it may be advisable to monitor the supplier's work to assure a satisfactory system. The monitoring may vary from unannounced inspection visits to a full-time inspection on the site. The choice depends on the 4-7 complexity of the system being purchased and its significance to overall data quality. For example, if the results of an entire data set are dependent upon the proper functioning of a few components, it would be advisable to arrange for full-time quality control inspection during assembly of the critical items. 4.3.2.5 Performance and Reliability Tests Extensive system performance and reliability tests should be conducted. This procedure is especially critical for complex monitoring systems that contain several sensing instruments for different pur- poses. Time requirements for such testing must be considered in the DQA plan. 
4.3.2.6 Procurement Quality Control All procured equipment or parts should be inspected to verify conformity to the purchase order or contract requirements upon delivery of the equipment. Functional tests and physical inspections should be completed before final acceptance. All the foregoing DQA functions associated with off-the-shelf procurement should be considered for Level II and Level III programs. As a minimum for Level I measurement programs, the availability and purchase inspection functions should be considered by the planner. Care must be taken not to rely solely upon precedent or past performance when using equipment recommended by other users. 4.3.3 New Development In the development of a new system, consideration of the following functions is recommended. 4.3.3.1 Design Requirements and Specifications For new system development, decisions must be made as to the expected accuracies for various compo- nents of the system, how long the equipment will be in operation, the environmental conditions under which the system will be used, how it can be designed and manufactured, who will operate it, how it will be maintained, what will be required to support the system (power, platform stability, electrical signal), safety precautions, and the operational logistics involved. In large and complicated projects, it may be necessary to develop model equipment to study its performance under controlled laboratory conditions. These model studies should be considered before modifying the final design. Final design documents should contain comprehensive design requirements, drawings, and specifications. All drawings should show dimensions and configuration information for parts, details, and complete assembly. All dimensions should include acceptable tolerance limits. Specifications should describe characteristics of the product, such as strength and properties of materials, structural integrity, and fatigue and environmental limitations. This function should be used at all DQA levels. For example, a Level I program may require only an informal working document; a Level II program will require complete documentation of all design calculations, assumptions made, limitations, pertinent drawings of all components with their tolerance limits, and material specifications; and a Level III program will require that all designs, calculations, drawings, and dimensions of all details are checked, verified, and documented. 4.3.3.2 Inspection of Components All components or instruments should be inspected before assembly of the final product. Functional tests and physical inspections should be made. This inspection function should be used for all Level II and Level III programs. 4-8 4.3.3.3 Instrument Operating Procedures Documents and manuals on the operation and maintenance of the system, its components, auxiliary requirements (e.g., power units), backup units, and on training requirements for operating personnel should be prepared and be available for use by all program personnel. This function should be considered for all DQA levels, but documentation may vary with DQA level. For example, a Level I program may require minimal documentation, while for a Level III program, a well documented step-by-step procedure should be prepared. 4.3.3.4 Performance Test Reliability Prior to commissioning the system, an extensive performance test should be conducted and the reliability of the system should be established. 
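Where design documents specify dimensions with tolerance limits (sec. 4.3.3.1), the component inspection described above can be recorded as a simple comparison of measured values against those limits. The sketch below (Python) is illustrative only; the part names, nominal values, and tolerances are invented.

# Illustrative component-inspection check against design tolerance limits
# (secs. 4.3.3.1 and 4.3.3.2).  Part names, nominals, and tolerances invented.
tolerances = {                       # part: (nominal, plus-or-minus tolerance), mm
    "pressure-case length": (250.0, 0.5),
    "end-cap diameter":     (90.0,  0.1),
}
measurements = {
    "pressure-case length": 250.3,
    "end-cap diameter":     90.2,
}

for part, (nominal, tol) in tolerances.items():
    measured = measurements[part]
    status = "accept" if abs(measured - nominal) <= tol else "reject"
    print(f"{part}: measured {measured} mm against {nominal} +/- {tol} mm -> {status}")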
The performance test should include testing of opera- tion of the system and its components in a laboratory under simulated operating conditions. The resulting data should be carefully examined and compared with reference data. Results of this testing should establish whether the system will meet program objectives. 4.3.3.5 Calibration of All Components All instrument components and subunits of the system that affect data quality should be calibrated whether or not previously calibrated by the manufacturer. This is very important for controlling total system error and for assuring the proper functioning of the system. This function should be considered for all DQA levels. For example, a Level I program may require calibration of all components with only minimum supporting documentation. A Level II program re- quires supporting documentation reviewed by program supervisors, and a Level III program requires, in addition, an independent review of the supporting documentation. 4.3.4 Personnel Training and Qualification Training operating personnel is preferable to having them learn only by experience on the measure- ment program. All personnel involved in any activity that affects the system certification should have training, qualifications, and experience in their appointed tasks. All must be able to perform their work so that data quality remains acceptable. A review of the training and qualifications of the person- nel who certify the data acquisition system should also be required for those whose work directly af- fects data quality (e.g., calibration personnel, bench chemists). This function should be considered for all DQA levels. A Level I program may require that personnel be trained (with or without certification), a Level II program should require a comprehensive training program with certification, and a Level III program may require periodic recertification and main- tenance of performance certificates as support documents. Section 3.1.4 outlines minimum assurance levels for personnel. Appendix A 2.1.7 presents one technique for evaluating the effectiveness of train- ing in terms of personnel performance during operations. 4.3.5 Bench Calibration and Test Evaluation All instruments in the data acquisition system, no matter how obtained, should be calibrated, tested, and evaluated in the laboratory before testing or deployment in the field. The results of a comprehen- sive test program can be used to determine the suitability of an instrument, anticipated calibration stability and required calibration frequency, a tentative plan for preventive maintenance and spare parts packages, and data that can be used to substantiate the expected magnitude of the instrument error. The instrument should be thoroughly checked and evaluated, and its performance documented 4-9 under both static and dynamic test conditions that approximate the expected operational environment. The validity of the vendor's stated accuracy should be examined and, if necessary, modified to reflect actual performance capabilities. Laboratory tests must be used to determine the accuracy of equip- ment. A detailed calibration plan should be provided for controlling the accuracy of measurement and test equipment and calibration standards used. The plan should provide for the following items as a minimum: a. Field or laboratory calibration of the entire system. For example, if a platform system is used for measuring current, the laboratory calibration plan should simulate field conditions as closely as possible. b. 
Realistic calibration intervals for measurement and test equipment, and designation of satisfac- tory calibration sources for each calibration standard. The following factors should be con- sidered in determining calibration frequency. The more severe the effect each of these factors has on the resulting data, the more frequently calibration should be performed. — What is the effect upon instrument performance and the data collected if the instrument is not calibrated? — If the instrument drifts from stated calibration limits, is the drift evident during operation of the instrument? — Is the calibration drift predictable and can the resulting data be corrected for the drift? — Is a standard calibration frequency recognized in the marine community or recommended by the equipment manufacturer? — Is the instrument sensitive to severe operating conditions or improper handling so that ran- dom or unpredictable changes in calibration can occur? c. A listing of all required calibration standards with proper nomenclature and identification num- bers assigned. d. Establishment of the traceability of calibration standard to those available at the National Bureau of Standards or to other recognized national or international fundamental standards. e. The environmental conditions (e.g., temperature, density, salinity, relative humidity, pressure) under which the calibrations will be performed. f. Written calibration procedures for measurement and test equipment and calibration standards, and document control numbers for reference purposes. g. Description of a calibration record system, including samples of labels, decals, and record cards. This function should be considered for all DQA levels. A Level I measurement program may require only the calibration of essential components of the system, and the resulting error analysis may be based on an estimate of the performance of the system. A Level II program may require a thorough calibration of the system with complete error analysis. The calibration process for a Level III program may require an independent review of calibration procedures and the maintenance of all calculations and calibration documentation as support data. 4-10 4.3.6 Intercomparisons Intercomparisons, whether made during system certification or during actual operations, are used to evaluate the performance and variability of measurement systems under controlled conditions. An in- tercomparison program may be conducted in the laboratory or field. Intercomparisons are generally conducted by deploying and testing a set of equipment against a known parameter or transfer standard to evaluate the error associated with each instrument. Intercomparisons used in marine programs can greatly reduce errors that otherwise might be overlooked. An example of an intercomparison would be the simultaneous lowering of two different STD* probes (one newly developed and the other a transfer standard) into the sea. This might result in different measurements if the newly developed instrument is depth dependent and the transfer stan- dard is not. Intercomparison programs should be used for the following purposes: Development of new test methods. Comparison of results of alternative test methods. Standardization of calibration, sampling, and testing procedures. Monitoring of data quality. Resolution of conflicts. Intercomparisons increase data credibility and confidence in the system certification. This function is specifically recommended for Level III programs. 
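A paired intercomparison of the kind described above can be summarized with elementary statistics. The sketch below (Python) assumes temperature readings taken simultaneously by a newly developed probe and a transfer standard at several depths; the data are invented and serve only to show how a mean difference, its spread, and any trend with depth might be reported.

import statistics

# Illustrative paired intercomparison (sec. 4.3.6): readings from a newly
# developed probe and a transfer standard lowered together.  Data invented.
depth_m        = [10,    50,    100,   200,   500]
new_probe_degC = [18.41, 14.22, 11.08, 8.61,  6.05]
standard_degC  = [18.40, 14.25, 11.15, 8.72,  6.21]

differences = [n - s for n, s in zip(new_probe_degC, standard_degC)]
mean_diff = statistics.mean(differences)
spread = statistics.stdev(differences)

print(f"mean difference: {mean_diff:+.3f} degC, standard deviation: {spread:.3f} degC")
# A difference that grows with depth, as in these invented values, points to
# the depth-dependent error discussed above and should be resolved before
# the system is certified.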
4.3.7 Field Testing In the laboratory, a single environmental parameter can be simulated over a wide range of values and its effects on instrument performance can be studied under either static or simulated dynamic condi- tions. Such laboratory-obtained information often provides an explanation for anomalous behavior during field tests or actual operations. However, the combined effects of all operating conditions on the performance of an integrated system, such as a buoy, are very difficult to observe in the laboratory; therefore, field testing is required. Field testing should be done under the conditions expected during data collection to permit the observation of the combined effects of all environmental factors on the performance and reliability of the system. Field testing should be performed both at dockside and at a site similar to one of actual operation. Before deployment of any untested system, a thorough operational check and evaluation should be performed at dockside. If a suitable facility is available, system performance should be checked either under controlled conditions or by obtaining comparative measurements of the environment. Measure- ments may be taken by a variety of methods with accuracies ranging from order of magnitude com- parisons to detailed specific comparisons. When dockside conditions are inadequate for complete evaluation of the system, testing at the site of actual operations (or a similar site) may be necessary. If field testing is not done, unsatisfactory system operation may not become evident until after data collection begins. This DQA function is recommended for Level II and Level III programs and for any new system development. For Level III programs, complete documentation should be maintained on the field testing, test readings, and any modifications to the system. *STD— Salinity-Temperature-Depth 4-11 4.3.8 Acceptance Testing Acceptance testing is used to determine whether equipment performs satisfactorily. Tests may be made under field or laboratory conditions. All equipment, whether it is an existing in-house item, a purchased off-the-shelf item, or a newly developed item, should be tested before acceptance for use in the program. Accelerated acceptance tests are generally required for newly developed equipment units for which long-term performance is specified. This DQA function applies to all DQA levels. 4.3.9 Documentation System certification of all equipment should be documented in detail. Such documents may include contracts, design drawings and specifications, field and laboratory test evaluation results, calibrations, acceptance testing, and all other records developed during system certification. Documents important to system certification include: Sampling procedures for use in field and laboratory operations. Calibration procedures for use in field operations. Procedures to be used for analysis of collected samples or data. Data collection and reporting procedures for use in field or laboratory operation. Personnel records that include information on training, qualification, and certification procedures for field, laboratory, data reduction, and analysis work. Auditing procedures for use during the operations phase of the measurement program. Sample shipping and storage procedures to be used during the operations phase of program. Computational and data validation procedures to be used in analyzing collected data. The documentation function is recommended for Level II and Level III programs. 
Complete documentation should be maintained for a Level III program. 4.3.10 Configuration Control Configuration control is the systematic process of keeping detailed records on equipment design fac- tors, the physical arrangements of equipment and monitoring systems, operating procedures, and changes in engineering design and operating procedures during the measurement program. The DQA planner decides the scope of configuration control that is to be applied to the measurement program. Configuration control may be grouped into two types. In the first, a history of changes is maintained throughout the life of the program. This history can be used during problem-solving investigations either during the course of the program or after it has been completed. This information is useful, because even small changes to the equipment while in use may significantly affect the data collected. Examples of the kinds of changes that should be recorded are: Original and updated schematics for the instrument system and its subsystems. Lists and flow charts of all procedures used throughout the program. Serial and model numbers of interchangeable components. 4-12 The second type of configuration control is used to provide information for engineering design and operation of the initial system when several identical or nearly identical systems are planned. This in- formation is important for large, complex monitoring programs, particularly when data sensor out- puts are stored either on site or at a central facility. Purchase contracts for two or more measurement systems of identical design should require configuration control as part of the contract. This function is recommended for Level II programs and required for Level III measurement programs. For Level III programs, maintain all check prints and design calculations as support docu- ments. 4.3.11 Backup Equipment and Spare Parts Backup equipment and spare parts are required to maintain continuous performance of data acquisi- tion systems under severe environmental conditions. A listing of the backup equipment and spare parts needed should be prepared before beginning field operations. Routine maintenance schedules should be established and adhered to; this scheduling should include regular replacement of expendable sup- plies (charts, pens, batteries) and parts subject to excessive wear. Such lists and schedules should be es- tablished in the planning stage and updated as required. This function is equally applicable to all DQA levels. 4.4 DATA PROCESSING PLANNING The Data Processing Planning element identifies the steps to be taken to assure that the data are satisfactorily processed during program operations and that data accuracy is established. The DQA functions for this element include preparing and verifying all computer programs to be used for data processing, planning for validation of the collected data, establishing data formats, and preparing collection, transfer, and checking procedures. This includes the preparation of data logging procedures and data record forms. In completing this element, the DQA planner should provide for the collection and storage of support data after the measurement program is completed. Support data consist of all records and data not included in the final report on the measurement program. Criteria should be established for the control of errors that may be introduced by program personnel during the collection, recording, transferring, processing, and reporting of data. 
The procedures for providing an assurance level for human error are described in appendix A; suggested means for con- trolling human error are discussed in section 2.1.7. All functions of this DQA element are recommended for all DQA levels. 4.4.1 Verification of Computer Programs All computer programs should be verified prior to operational use. A computer program should be tested on the proposed data processing system using a sample representative of the data to be processed, i.e., the sample should be in the same format and equal in complexity to real data. The out- put from a test run can be verified against hand calculations or output from a comparable program. Special attention should be given to computer programs used to "smooth" data, scan for wild points, and flag questionable values. These programs, especially those that can modify the data set in any way, should be carefully tested on a sample incorporating problem situations. DQA planning for this function remains valid until system or computer program changes are made. While this verification function provides assurance that computer programs will not introduce un- known computational errors, errors that may develop as a result of one-time malfunctions in computer 4-13 hardware such as bit drops or encoding errors are not eliminated. The principal effect of the verifica- tion of computer programs on the total allowable error is to reduce processing errors to low, accep- table values or at least to known values. This function is applicable to all DQA levels. A Level I program may not require formal documenta- tion, but a Level II program should require preparation of step-by-step procedures and retention of all pertinent documents. For a Level III program, all calculations and computer verification procedures should be independently reviewed and the documentation retained as support information. 4.4.2 Data Reduction and Validation Planning The Data Reduction and Validation Planning function is used to establish the records, documents, reviews, and checks that must be used during data processing to validate the data. Procedures also are developed to screen outputs of data sources to assure that they can be used in further analysis without introducing errors. The complexity of this planning function depends on characteristics of the data sources, the processing planned, and the DQA level. Frequently, a series of flow charts is developed as part of this function. Data reduction and validation planning should include identification and listing of: a. Operating procedures for all data processing systems. b. Procedures that outline computerized or manual checks and the frequency at which these checks are to be used at different stages in the data processing program. c. The limits of acceptability that are reasonable and adequate for the detection of invalid data. d. The validation steps to be followed during data processing. These should be recorded and provi- sions should be made to transfer results from stage to stage during the data processing. e. Statistical tests of the data. This is a basic planning function applicable at all DQA levels. Level I programs may require the development of only those items which directly relate to data reduction. Level II programs may require the preparation of step-by-step procedures to be followed during actual data reduction and validation. 
Level III programs may, in addition, require the determination of the limitations of procedures, methodology of reduction, and validation procedures and their justification. For Level III programs, all actions should be completely documented for review and evaluation during auditing. 4.4.3 Personnel Training All personnel involved with analysis, reduction, transfer, and validation of data should have sufficient training to assure that they can perform their tasks without introducing errors because of unfamiliarity with techniques or equipment. A valid training program may require that personnel: a. Observe an experienced operator performing the different tasks in the reduction process. b. Study the operations manual for the processing equipment and the data processing plan. c. Perform data processing operations under the direct supervision of an experienced operator. d. Attend advanced and refresher courses to improve their capabilities. 4-14 A means for testing operator training and performance is discussed in section 3.1.4 and appendix A. (Also see Sees. 4.3.4 and 4.7.1.) This function is applicable to DQA levels as stated in section 4.3.4. 4.4.4 Data Collection and Format Planning for this function consists of establishing and monitoring schedules for the collection, process- ing, validation, and archiving of data for each task, and defining the relation of that task to other tasks within the measurement program. This is required to provide for the orderly, systematic flow of data from the acquisition system to the processing system during actual operations. If multiple acquisition systems are used, the greatest ease of handling with the smallest chances of error is achieved if the data management plans are consistent and compatible. Consistent data formats should be specified in processing plans. While formats affect the processing procedure, they should be designed to assure that they do not affect the total allowable error objective. Data collection and format planning should provide for the following: a. Standard logbooks and data recording forms. b. Specifications for instrument recording control function switches such as scale, scale expansion, recording rate, and gain. c. Specifications for the reporting of significant digits. d. Digital data format and encoding forms. e. Sample and record identification and labeling requirements. f. Data handling, transmittal, and accounting procedures. g. Specification of units of measurement, h. Record annotation standards. i. Data archiving standards. This function is applicable to all DQA levels, but intensity may vary. For example, the recording for- mat for a Level I measurement program may be a ship's logbook or notebooks, while for Level II or Level III, standardized field and observation books containing all operational steps and data- recording procedures should be required. 4.4.5 Data Checking The data checking function is used to set up procedures to assure that data are inspected or tested for accuracy at various stages of the data process. One objective is to limit any undetected human error that may be introduced between the time of collection and the reporting of the sample or data. Human errors include failure of the operator or analyst to record pertinent information, making mistakes in reading an instrument, in calculating results, and in transposing data from one record to another. Data handling systems with computers are susceptible to errors in key punching and in handling magnetic tapes and other storage media. 
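Many of these transcription and keying errors can be caught by simple automated screening before the data move to the next processing stage. The following sketch (written in Python purely for illustration; the field names, limits of acceptability, and record layout are hypothetical and not taken from these guidelines) shows two such routine checks: a range check against planned limits of acceptability and a comparison of two independent keyings of the same record.

    # Illustrative only: minimal screening of keyed records.
    # Field names, limits, and record layout are hypothetical.

    ACCEPTANCE_LIMITS = {
        "water_temp_c": (-2.0, 35.0),   # limits of acceptability set during DQA planning
        "salinity_ppt": (0.0, 42.0),
        "depth_m": (0.0, 11000.0),
    }

    def range_check(record: dict) -> list[str]:
        """Flag any field that falls outside its planned limits of acceptability."""
        problems = []
        for name, (low, high) in ACCEPTANCE_LIMITS.items():
            value = record.get(name)
            if value is None or not (low <= value <= high):
                problems.append(f"{name}={value!r} outside limits [{low}, {high}]")
        return problems

    def duplicate_entry_check(first_keying: dict, second_keying: dict) -> list[str]:
        """Compare two independent keyings of the same record (duplicate key punching)."""
        return [f"field {k!r}: {first_keying[k]!r} != {second_keying.get(k)!r}"
                for k in first_keying if first_keying[k] != second_keying.get(k)]

    if __name__ == "__main__":
        entry_a = {"water_temp_c": 12.4, "salinity_ppt": 34.6, "depth_m": 150.0}
        entry_b = {"water_temp_c": 12.4, "salinity_ppt": 43.6, "depth_m": 150.0}  # keying error
        print(range_check(entry_b))                      # salinity outside planned limits
        print(duplicate_entry_check(entry_a, entry_b))   # disagreement between the two keyings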
Human error can be minimized by using routine checking procedures. Data checking procedures may vary widely, depending upon the specific nature of a program, but generally involve only one reviewer who is not directly involved in the data acquisition or processing.

The human errors that can occur during data transfer, reduction, analysis, and validation in the operational stage of the measurement program should be identified. Criteria should be established during DQA planning to provide for the elimination of such errors during the operational phase of the measurement program. The criteria for controlling human errors should be designed to meet the assurance levels established in section 3.1.4. "Pass-fail" procedures can be developed as discussed in appendix A. For example, duplicate key punching can be used to detect and control the large number of human errors that normally occur during initial key punching. Verification of this kind can be done for all DQA levels, but should be required for Levels II and III. The checking procedures developed should be documented for use by personnel involved in data processing. Instruction in use of these procedures should be included in personnel training.

4.4.6 Data Transfer Standards

Data transfer standards are procedures that specify how data are to be transferred from one stage to another without introducing errors that cannot be traced or accounted for. Standards must be followed throughout the operations phase of the measurement program. Data transfer standards are closely related to data format, so the following factors should be considered during planning:

a. Allowable limits of error during telemetry of data. Control of error requires periodic comparisons between transmitted and received information.

b. Procedures for sample handling, transport, and storage.

c. Procedures, such as encoding for entry into computers, required to verify or assure that data are reported correctly.

d. Inclusion of all parts of the data set necessary for interpretation or analysis. For example, log sheets annotating offsets in recordings must accompany the affected data.

This function is applicable to all DQA levels. A Level I measurement program may require only a notebook containing location and date of sampling or measurement, data logging, basic calculation or analysis procedures, and logging of reduced data. A Level II program requires standard field and observation notebooks, step-by-step procedures for data reduction, and a standard format for data reporting. A Level III program also should include complete continuity of documentation throughout the measurement program and document review, auditing, data examination, correction, and error reporting procedures.

4.4.7 Support Data Collection and Storage

Support data consist of calibration and equipment performance records, records of what standards were used, system evaluations, operating logs, records of environmental conditions at the time of sampling or measurement, and other records not included in the final data report. The collection of support data must be planned so the data will be readily available to personnel during the measurement program. In planning for the storage of support data after the measurement program is completed, the DQA planner must consider requirements for future reference, and how and where the data are to be stored. Plans for the collection and storage of support data are developed only once. The procedures themselves are used throughout the measurement program.
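Where support data are indexed by computer, a simple record structure helps keep them retrievable during the program and after archiving. The sketch below is illustrative only; the fields and example values are hypothetical and are not prescribed by these guidelines.

    # Illustrative only: one possible structure for indexing support data records.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SupportDataRecord:
        record_type: str          # e.g., "calibration", "operating log", "environmental conditions"
        instrument_id: str        # serial or model number of the equipment item
        date: str                 # date of the record
        location: str             # site, vessel, or laboratory where the record originated
        custodian: str            # person or office holding the original document
        storage_reference: str    # notebook, file, or archive identifier used for retrieval
        related_data_sets: List[str] = field(default_factory=list)  # data the record qualifies

    # Example: indexing a predeployment calibration sheet (hypothetical values)
    calibration_sheet = SupportDataRecord(
        record_type="calibration",
        instrument_id="CTD-0421",
        date="1979-06-14",
        location="Research vessel, station 12",
        custodian="Program data office",
        storage_reference="Notebook 7, pp. 33-35",
        related_data_sets=["station 12 temperature profiles"],
    )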
Planning the collection and storage of support data helps to preserve or to improve data quality by providing complete documentation on actions taken to qualify the data. Support data also are required for data validation, for auditing, and for preparation of the uncertainty statement.

This function is applicable to all DQA levels; however, additional documents are needed as the DQA level increases. Section 3.2 contains an example of how the need for support data documentation changes with DQA level.

4.5 TOTAL ERROR BUDGET VERIFICATION

Under this DQA element an analysis is to be made to assure that the total allowable error objective of the measurement program will be met. The error sources identified during the Preparation of the Program planning element are thoroughly evaluated and analyzed to assure that none has been omitted. When this has been done, the estimated error values for each error source and for the total system are verified. During preparation of this planning element, system calibration records, records of systematic and random errors of the system, and the system configuration are studied as the basis for developing a statement on the total system error. To help in completing this planning element, it is recommended that appendix A be reviewed.

Although this element is recommended only for Level III programs, it is applicable in some degree to all DQA levels. For Level I programs it is suggested that a review be conducted to assure that no significant error sources have been ignored, that reasonable values have been assigned to all error components, and that the total system error has been estimated correctly. A Level II program should require the comparison and correlation of bench calibration and field testing records. Level III programs also should require intercomparison testing and statistical analyses for the three types of calibrations (bench calibration, field testing, and intercomparisons).

4.5.1 Random Error Sources and Limits of Random Error of Various Components

The random error of an instrument is defined as the individually unpredictable variations between successive measurements of the same quantity under nearly identical conditions. The random error is determined by statistical analysis of the calibration data for the instrument.

All sources contributing to the total random error of the measurement system are to be considered. Such a system may include data collection, telemetering, and recording units. For example, a remote-sensing buoy system for monitoring environmental parameters will include sensors, mooring line and platform attachment, telemetry equipment, and data display units.

The first step in developing this part of the plan is to compile a complete list of the error sources of the system. Each error is to be evaluated to determine whether it is random or systematic. Special care should be taken to assure that all environmental parameters that can influence measurement results have been considered. All random errors associated with the various components of the system and with the environment should be listed. The total random error of the system is then computed. (See appendix A.) If the total random error is larger than that acceptable for meeting the objectives of the measurement program, the sources of the larger contributions to the total random error should be examined to determine whether remedial action is possible.
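The method for combining component errors is given in appendix A and is not reproduced here; a common convention, when the component random errors can be treated as independent, is a root-sum-square combination. The sketch below (Python, with hypothetical component names and values) illustrates that convention and shows how the largest contributors can be ranked when remedial action is being considered.

    # Illustrative only: root-sum-square combination of component random errors,
    # assuming the components are independent. The governing method for a given
    # program is the one set out in appendix A and the DQA plan.
    import math

    component_random_errors = {      # random error of each component, measurement units
        "sensor": 0.10,
        "telemetry": 0.03,
        "recording/digitizing": 0.05,
        "environmental (mooring motion)": 0.08,
    }

    total_random_error = math.sqrt(sum(e**2 for e in component_random_errors.values()))

    allowable_random_error = 0.12    # from the total allowable error objective (hypothetical)

    print(f"Total random error (RSS): {total_random_error:.3f}")
    if total_random_error > allowable_random_error:
        # Examine the largest contributors first when considering remedial action.
        for name, err in sorted(component_random_errors.items(), key=lambda kv: -kv[1]):
            print(f"  contribution of {name}: {err:.3f}")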
If the total random error is within the limits allowable, it may be possible to relax some procedures or to change equipment to realize economies in time, personnel, or funding.

4.5.2 Systematic Error Sources and Limits of Systematic Error of Various Components (Estimated)

Systematic error is the consistent deviation among repeated measurements of a fixed reference level. Estimations of the limits of systematic error are generally based on experience with similar measurements, information available from special studies, and judgment of the investigators.

This part of the DQA plan identifies the uncertainty of the calibration of each of the various components of the measurement system. Using the list of the error sources identified in the preceding section, the systematic errors of all components should be assigned (these would be obtained primarily from calibration records). The total systematic error can then be computed. (See appendix A.) If a total measurement system is available for preoperational testing, the total systematic error can be measured.

4.5.3 Total System Error

At this point, the DQA planner determines the total system error. All sources of error (random and systematic) and their known or estimated magnitudes are listed. The overall error of the measurement can be computed. (See appendix A.) Once computed, the errors of measurement of the various parameters to be determined during field operations are compared against the total error objective. If the total system error cannot be reduced to satisfy the objectives of the measurement program, the objectives should be reviewed. If the total error is less than the total error objective, it may be possible to effect economies in the system or to increase the accuracy or precision of some of the measurements.

4.6 AUDIT (PREOPERATIONS STAGE)

At this point, the planning elements are complete. The planning elements for a Level III program should have been prepared in writing for this audit. Preparation of the planning elements for Level I or Level II programs could be in the form of notes taken by the DQA planner for use as background for his preparation of the Operations Elements described in section 4.7 and beyond. Audit of Level I and II planning would therefore take the form of verbal discussions between the DQA planner and the auditor. If an audit is considered at the time the planning elements are being prepared for a Level I or II program, the DQA planner should consider preparing these elements in writing, just as for a Level III program.

An audit is advisable for Level III programs to assure that the planning aspects of the measurement program (the design of the data acquisition system, the system certification, the data processing planning, and the total error budget verification) are complete and satisfactory. The audit is conducted by an independent reviewer to assure that the measurement program is ready to proceed to operations. The audit should specifically verify that:

a. The DQA plan was prepared and is complete for the planning elements of the measurement program.

b. The system certification plan was completed and intercomparison testing results were incorporated in the calibration requirements.

c. All the planning elements required for operations to begin are complete and documented.

d. Procedures to minimize equipment failures and human errors during operations have been developed. These procedures can be effectively followed during the operational phase of the measurement program.
In terms of the DQA plan for a Level III program, this audit should be a"hold point" beyond which work is not continued until the audit indicates the planning is satisfactory. For programs of other DQA levels, an audit at this point should be considered as a means to identify and correct deficiencies in the system planning that otherwise might not be evident until after field operations began. For Level I or II programs, this audit could be conducted by an experienced person within the measurement program organization. This person should not have been directly involved in drafting the planning ele- ments. 4-18 4.7 FIELD OPERATIONS After the planning elements of the DQA plan have been satisfactorily completed, work should start on the operations elements. Field Operations is the first element of the operational phase of the measure- ment program. This element includes all functions that must be considered to collect data and samples having the accuracy and quality required by the measurement program objectives. The DQA functions related to actual operations are: personnel training, equipment transportation, standardization of operating procedures, site selection, sampling considerations, quality control and system audit, collec- tion, recording, protection and transmission of data and samples, and chain of custody. The functions used to control errors during field operation are predeployment checking and calibrations, measure- ment comparisons, ongoing checking and calibrations, and postdeployment checking and calibrations. 4.7.1 Personnel Training All personnel involved in sample collection, analysis, data reduction, and quality control should be adequately trained for their appointed tasks. The training of personnel for field operations requires the use of a broad range of techniques, because of the diversified backgrounds and capabilities required of the people involved. In addition to proficiency in performing the work, individuals must understand its importance in establishing data of known quality. An on-the-job training program should require that personnel: a. Observe an experienced operator performing the various tasks in the data collection process. b. Study operations manuals and the DQA plan. c. Perform operations under the direct supervision of an experienced operator, and then inde- pendently perform operations with instruments, sample collection, sample transfer, and data reduction. In addition to on-the-job training, formal training programs such as those offered by universities, marine institutions, or professional societies should be considered. Personnel must be familiar with the DQA plan and aware of the various ways of achieving and main- taining quality data. Personnel training should start before operations begin, and should continue dur- ing operations, with periodic reviews and requalification when indicated. Recommendations for evaluating personnel performance are discussed in section 3.1.4 and appendix A. This personnel training function is applicable to DQA levels as stated in Section 4.3.4. 4.7.2 Equipment Transportation The Equipment Transportation function deals with the shipment of equipment from the site of system certification to the site of operations. Improper handling of equipment can result in complete breakdown of the system and produce data that is unusable or, at best, of questionable accuracy. Documents on calibration, maintainability requirements, spare parts lists, and caution notices for transportation protection usually accompany the equipment. 
Packaging instructions should include en- vironmental tolerance limits to prevent exposure of equipment to conditions that could affect its per- formance. Transportation instructions must be complete before the start of field operations so that equipment can be delivered to operations sites without delay. 4-19 4.7.3 Standardization of Operating Procedures Manuals on operating procedures for routine tasks, calibration procedures, transfer and field stan- dards, and preventive maintenance should be prepared in a format readily understood by operators. The procedures should be reviewed by the measurement program management. Manuals should be prepared for use with equipment units and on data collection techniques. For some marine measurement program activities, procedures should be written either by standards groups or by the organization responsible for the measurement program. Written procedures should be provided to all operators. All procedures used in the measurement program should be documented as part of the support data. All documents related to procedures for predeployment checking and calibrations, measurement com- parisons, ongoing checking and calibrations, and postdeployment checking and calibrations should be available at operational sites and on board program vessels. All field and system certification calibra- tions should be fully documented and maintained. Standards accepted by the measurement community serve as a common base on which instruments can be calibrated and their performance assessed. For marine measurement systems, most available stan- dards are applicable only to laboratory operations. The development of field standards, especially for routine measurement programs, should be considered. For field operations, the use of transfer standards should be normal procedure. To prevent or minimize the occurrence of component failure, standard operating procedures for preventive maintenance should be prepared. Identifying components of the system that are subject to wear or aging helps to assure continuous performance of the system. Moreover, a preventive maintenance program can contribute to maintaining overall data quality, es- pecially if routine calibration checks cannot be performed. Maintenance can be performed during non- operational time for noncontinuous monitoring instruments. For continuous monitoring systems, duplicate (backup) units can be used during routine maintenance periods; downtime should be scheduled for this purpose. A routine preventive maintenance program should be in effect throughout the field operations phase of the measurement program. Operating procedures should be standardized for all DQA levels and must be in effect for DQA Levels II and III. All routine work procedures, calibration procedures, transfer and field standards, and preventive maintenance procedures should be documented for reference. Calibration results should be reviewed and, when required, corrective actions taken. 4.7.4 Site Selection Although site selection was considered in the Preparation of Program planning element, it may be necessary to alter a site location for the actual collection of data. This alteration may be made either to assure that data are representative or to adjust for unexpected operational or environmental con- straints. The onsite supervisor should have sufficient knowledge of the measurement program objec- tives to make field changes that do not adversely affect data quality. 
Changes in monitoring sites dur- ing operations must be carefully documented as part of the support data. The site selection function is applicable for all DQA levels. For Level I programs, changes in site loca- tion because of operational constraints may be made at the judgment of the onsite supervisor. A Level II measurement program may require formal documentation of the site change and an evaluation of the effect on the data. A Level III program should require prepared instructions to determine when a 4-20 site change may be made by the onsite supervisor and when prior approval is required from the measurement program manager or officer-in-charge aboard ship. The DQA plan also should specify under what circumstances site changes may be made. 4.7.5 Sampling Considerations Sampling considerations are practical, technical matters that must be evaluated to determine the effec- tiveness of the measurement program. These include consideration of measurement program require- ments, equipment capabilities, operating environments, personnel capabilities, and safety. Data analysis, storage, and retrieval procedures also must be considered. Sampling considerations are used for evaluating the effectiveness of the measurement program and the representativeness of the data rather than for assessing the precision and range of accuracy of the data; they are important in assuring the day-to-day consistency of data quality. This DQA function is primarily related to planning for the measurement program, but may require revision during operations. (An example of a sampling consideration: If the vertical thermal gradient of seawater is to be determined at a specific location, measurement planning should consider the depth intervals at which temperature should be measured, and whether the measurements should be con- tinuous or be taken at discrete time intervals.) 4.7.6 Predeployment Checking and Calibrations Before an instrument is deployed, a final check of system performance should be conducted. This in- cludes a visual check of the system and of the operation of the electronic instruments. These actions, together with postdeployment checks, assure proper day-to-day performance of the unit. Zero and full-span calibration checks and adjustments should be used to verify that the instrument has been properly deployed. Immediately following deployment, a comparative measurement should be made as a final direct check on the functionality of the system. These checks can be made by various methods, such as the deployment of a comparison measurement system alongside the instrument. Predeployment calibra- tions should be recorded to aid in establishing data quality. Pre- and postdeployment checks serve to verify the performance of the instrument during its operation and to determine its measurement drift. This checking and calibration function is applicable to DQA Levels II and III. If used frequently in a specific measurement program, criteria should be established during preparation of the DQA plan to control human error (section 3.1.4 and appendix A). All documents on predeployment checking and calibrations should be maintained as support data. Level III programs should require the periodic review and auditing of these procedures and the related documents. 4.7.7 Measurement Comparisons The measurement comparison function includes intercomparisons, and inter- and intralaboratory comparisons. 
These comparisons are used to study instrument performance, accuracy and precision of instruments, operational procedures, and operator performance.

Intercomparison programs are used to evaluate the performance and variability (bias, precision, and accuracy) of a measurement system. An intercomparison is generally the onsite comparison of different instruments designed to obtain the same measurements. Preferably, a transfer or field standard is used to measure the performance of each instrument. (For a more thorough discussion on intercomparison, see sec. 4.3.6.)

Inter- and intralaboratory comparisons are used (1) to identify laboratories (or analysts) who are biased in their performance, and (2) to estimate measurement method reproducibility among laboratories. Interlaboratory programs involve two or more laboratories using similar or different types of equipment, different personnel, and similar or different techniques to measure the same parameter. Intralaboratory programs, conducted within one's own organization, are used to monitor equipment and personnel performances. Laboratories participating in comparison programs should be provided with physical standards, standard reference materials, and instructions furnished by a coordinating laboratory. The coordinating laboratory also evaluates the results and prepares the final report.

Inter- or intralaboratory comparison programs are recommended for the following:

Joint measurement programs — Whenever two or more laboratories work on a common project and each performs separate and independent measurements, a comparison program should be scheduled to evaluate results.

Independent measurement programs with similar goals — Whenever two or more laboratories are working independently on programs leading toward the same or similar goals, they should arrange to conduct periodic interlaboratory test programs.

The technical objectives of interlaboratory and intralaboratory comparison programs are achieved by:

a. Conducting comparison programs that evaluate measurement performance and provide methods for reducing measurement variances.

b. Carrying out periodic performance intercomparisons and evaluations of various instruments that sense similar parameters.

c. Performing calibrations or experiments on similar marine environmental instruments to establish measures of calibration variation. International, national, and local government facilities and academic, industrial, and private laboratories should cooperate in such programs.

d. Exchanging measurement ideas through workshops and seminars.

e. Promoting the establishment of standard calibration procedures, techniques, and equipment.

f. Providing a focal point for information on the calibration facilities and measurement capabilities of other participating laboratories to assure that oceanographic data collected from instruments of the same class are comparable. The comparison program provides an opportunity for participants to evaluate and improve their calibration techniques.

This DQA function may be reassessed and redesigned during the conduct of the operations. Unsatisfactory results from intercomparisons should lead to the reassessment by program management of equipment, calibration procedures and frequencies, operating procedures, and operator performance.

This comparison function should be used for measurement programs involving two or more laboratories and for Level III programs.
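As one concrete illustration of how a coordinating laboratory might make a first pass over comparison results, the sketch below computes each participating laboratory's bias against a distributed reference sample and its within-laboratory spread. The laboratory names, values, and flagging threshold are hypothetical; the statistical treatment actually required is the one set out in the DQA plan and appendix A.

    # Illustrative only: first-pass summary of an interlaboratory comparison on a
    # reference sample of known value. Names, values, and threshold are hypothetical.
    from statistics import mean, stdev

    reference_value = 34.70        # e.g., salinity of the distributed reference sample, ppt
    flag_threshold = 0.05          # bias beyond which a laboratory is flagged for follow-up

    replicate_results = {          # replicate determinations reported by each laboratory
        "Lab A": [34.71, 34.69, 34.72],
        "Lab B": [34.78, 34.80, 34.77],   # consistently high: possible systematic bias
        "Lab C": [34.66, 34.73, 34.70],
    }

    for lab, values in replicate_results.items():
        bias = mean(values) - reference_value     # estimate of systematic offset
        spread = stdev(values)                    # within-laboratory precision
        flagged = "FLAG" if abs(bias) > flag_threshold else "ok"
        print(f"{lab}: bias={bias:+.3f} ppt, spread={spread:.3f} ppt  [{flagged}]")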
Documents on measurement comparison results and the ac- tions taken to improve performance should be maintained as support data. 4.7.8 Ongoing Checking and Calibrations The quality of data from a deployed system is maintained, in part, by checking and calibrating the system periodically and by routine evaluation of the data for consistency and representativeness. When abnormalities are noted, the onsite supervisor should take corrective action. 4-22 Ongoing checks should be routine on board ships and for all measurement programs at all DQA levels. For example, when an instrument is found to be out of calibration, a Level I program may continue without any action except noting the change in calibration results. Even at Level I, however, an out- of-calibration instrument should be replaced if backup equipment is available. In similar circumstances, a Level II program may suspend measurements until the instrument is recalibrated, and a Level III program may require review of all data obtained since the last calibration, with notifi- cation to measurement program management if the problem is chronic. 4.7.9 Postdeployment Checking and Calibrations Immediately before an instrument is to be retrieved and immediately after it is retrieved, checks of the system should be made to assure that it functioned properly during data acquisition. These checks are identical with those performed for predeployment checking (sec. 4.7.6). This DQA function should be applied to Level II and Level III programs as described in Section 4.7.6. 4.7.10 Quality Control and System Audit Quality control and system audits of field operations are fundamental to assure data quality. For Level II programs audit should be conducted by operations personnel. For Level III programs, additional auditing by an independent reviewer is required. The DQA plan should establish both periodic and aperiodic audit frequencies. To control human errors, a pass-fail criterion should be established in the DQA plan. This criterion should be based on the work tasks of the measurement program and the DQA level, as discussed in section 3.1.4. The frequency of pass-fail checks should be established as discussed in appendix A. Audits must be conducted to assure that the accuracy and precision of the data obtained are within the error bounds established for the measurement program. The audits should: a. Verify that all operations documents and instructions have been distributed to the operators. b. Assure that requirements for controlling instrument and human errors have been met and that written procedures are being followed. c. Assure that all data and samples are being handled in accordance with standard operations procedures. d. Assure that calibration programs are being followed and documented, and that preventive maintenance programs are being followed. This auditing function is applicable to Level II and Level III programs. Level III programs may re- quire 99 percent assurance of operator performance, while Level II programs require 95 percent. A Level III program also requires that all documents be maintained for independent auditing. 4.7.11 Data and Sample Collection, Recording, Protection, and Transmission Data and samples should be collected and handled in accordance with documented procedures. Magnetic tapes, analog traces, and printout sheets containing data should be completely labeled im- mediately after data are collected. 
Labels should show program identification, sample number, loca- tion, depth, collection date and time, weather conditions, sea state, personnel involved, and witnesses. 4-23 Data and samples are to be transferred as specified by the DQA plan. Physical samples should be stored in containers specifically designed to protect the data content of the sample. Label each con- tainer to show all basic information and add special information on items such as the introduction of preservatives, storage instructions, and time sensitivity. Handling of data and samples during transfer should not modify their completeness and integrity. The transfer of all data and samples should be documented; this documentation must remain with the data and samples at all times. These procedures should be used at all times during field operations, par- ticularly for Level II and Level III programs. 4.7.12 Chain of Custody Complete records must be kept of every transfer of data or samples to an individual, to a laboratory, or to a storage facility. Such records permit an investigator to determine who had custody of the material and where it was at any given time. Custody documentation is part of the support data and should be available for review by an independent auditor. Transfer documents should be updated and maintained at a central location such as in the office of the measurement program manager. These records also are extremely important when legal use of the data is anticipated as they may be in- strumental in establishing the validity of the data. These procedures should be used with all Level II and III programs. Chain of custody procedures should assure that: a. Only authorized personnel handle the sample. b. The field sampling techniques adopted for the measurement program were used. c. A record tag is attached to the sample immediately after collection. The tag should include the information listed in paragraph 2 of Section 4.7.11. d. Blank samples both with and without introduced chemicals are collected. e. All record forms are completed. f. The transfer of the samples is documented. g. Transfer procedures provide for proper protection and preservation of samples. For example, if samples are mailed they should be sent by certified mail with return receipt requested. 4.8 ANALYTICAL LABORATORY OPERATIONS Analytical laboratory operations include the planning and execution of procedures for laboratory testing of collected samples. The functions listed under this DQA element are intended to assure test results of the accuracy and quality required by the measurement program objectives. Laboratory operations should be reviewed frequently because lapses in performance can result in large quantities of data being invalidated or deteriorating beyond the limits of the error budget. 4.8.1 Personnel Training Personnel involved in the operation of analytical laboratories should be trained sufficiently to perform their tasks without affecting data quality. Laboratory personnel must understand the operation of the equipment and the procedures for their tasks. Most of the aspects of this training function and its ap- plicability to DQA levels are equivalent to those stated in sections 4.3.4 and 4.7.1. 4-24 4.8.2 Calibration A calibration plan should be developed in the DQA plan and used for all data measurement equip- ment. The calibration plan should include the use of primary or transfer standards. 
The accuracies of standards, whether apparatus or reagents, should exceed the accuracies required in the measurement program. Generally, the accuracies of transfer standards should be an order of magnitude more stringent than the accuracies required for laboratory instruments. For example, if a thermometer to be used in day-to-day laboratory operations has a specified accuracy of ±0.5°C, the transfer standard criterion requires that it be calibrated with a thermometer whose accuracy is at least ±0.05°C. Calibration standards should, in turn, be calibrated against primary standards of unquestionable and higher accuracies. Higher level standards should be directly certified by the National Bureau of Standards or another recognized organization. If standards do not exist, an independent expert should review the procedures, document the data results, and make periodic checks until a standard evolves or is adopted. Regardless of the type of calibration equipment or material, accuracy levels must be consistent with those required of the measurement program. Note: Calibration must not be omitted; it is vital to any measurement program.

A detailed calibration plan should provide for:

a. calibration procedures and record forms;
b. stated calibration frequencies;
c. acceptable sources for standards;
d. a means for identifying specific equipment items, and provisions for routine maintenance of the test equipment;
e. a list of all calibration standards (including nomenclature and assigned identification numbers);
f. specifications of environmental conditions;
g. documentation on qualifications and training of personnel performing calibrations; and
h. intended range of validity.

Written step-by-step procedures for the calibration of measuring and test equipment and the use of calibration standards should be provided by the laboratory to eliminate measurement inaccuracies that might be introduced by differences in techniques, environmental conditions, choice of standards, etc. Calibration procedures should include information on the following:

a. The specific equipment or group of equipment to which the procedure is applicable. ("Like" equipment, or equipment of the same type having compatible calibration points, environmental conditions, and accuracy requirements, may be serviced by the same calibration procedure.)

b. A brief description of the scope, principle, or theory of the calibration method.

c. Calibration specifications, such as number of calibration points, environmental requirements, and accuracy requirements.

d. A list of the calibration standards and accessory equipment needed to perform an effective calibration. Manufacturer's name, model number, and accuracy should be included.

e. A complete, clear and concise, step-by-step written calibration procedure.

f. Specifications for calibration facilities, equipment, temperature and humidity of laboratory space, and physical protection for calibration standards.

g. Specific instructions for obtaining and recording the test data. Data forms should also be furnished.

This function is applicable to all DQA levels. A Level I program may require only the calibration of essential instruments without stringent controls on the system. A Level II program should require a thorough calibration of the system with complete error analysis and periodic checks to verify the accuracy of the system.
For Level III programs, this function may also require independent reviews of control charts, calibration procedures, and all calculations, and the maintenance of calibration documentation as support data.

4.8.3 Establishing Precision and Accuracy of Laboratory System

The precision and accuracy of laboratory methodologies must be established by using standard procedures and instruments, or tests of standard samples. Control charts also may be used for this purpose.

Precision and accuracy in a laboratory system often are established by the routine use of spiked samples, blank samples, and split samples. A normal sample of material to which a known amount of some substance has been added is a spiked sample; the extent of spiking is unknown to the laboratory analyst. Spiked samples are used to check on routine analysis or on the recovery efficiency of a laboratory method. A blank sample consists of a carrying agent normally used to selectively capture a material of interest. Blank samples are used to establish zero baseline or background values that can be used to adjust or correct routine analytical results. A split sample is commonly used for checking operator or laboratory bias by having two related measurements made on the same sample or on equivalent samples under the same conditions.

This DQA function should be considered if the error source estimate shows that data accuracy is sensitive to laboratory errors. Because the precision and accuracy of the measuring instrument are predominant in the allowable error, inter- and intralaboratory comparisons must be considered for establishing precision and accuracy. If standards, procedures, and reference samples are nonexistent, it is especially important to document the procedures and logic of laboratory tests. If the measurement program objectives are relatively insensitive to the measurement technique used, work to establish precision and accuracy may not be required.

4.8.4 Standardization of Testing

The standardization of testing involves the documentation of routine analytical work, transfer and reference standards, and a preventive maintenance program. It is important that testing procedures adopted for analytical laboratory operations result in consistent performance on repetitive tests. Written descriptions of test methods adopted should be prepared and made available for reference. Analytical procedures should include the testing of blanks, certification of reagents, and use of reference and comparison standards. Testing procedure documents should be included in the support data for the measurement program. If test procedures are changed during laboratory operations, the changes should be documented and included with the data records.

Transfer or reference standards, traceable to national or international standards, should be used as a means for achieving the accuracy required by the error objective in laboratory operations. The laboratory should have a preventive maintenance program in effect (sec. 4.7.3).

This DQA function is applicable to Level II and Level III programs. All working procedures and transfer and reference standards should be documented. For Level III programs, these documents should be periodically reviewed by an independent referee.

4.8.5 Interlaboratory and Intralaboratory Comparisons

Comparison programs should be used for evaluating the validity of testing procedures and techniques in use in the laboratories.
Comparisons are recommended for joint and independent measurement programs with similar objectives. Comparisons that show inconsistencies should prompt corrections to procedures in use. For example, replicate samples may be issued to several laboratories to estimate the effectiveness of given procedures. The results can be used to evaluate equipment, procedural bias, or systematic errors. Many of the aspects of inter- and intralaboratory comparisons are equivalent to those used for field operations (sec. 4.7.7). This DQA function should be used for Level III programs and for all measurement programs in which two or more laboratories are involved. The documents on inter- and intralaboratory comparison results and on actions taken to improve the performance of a laboratory should be maintained as sup- port data. 4.8.6 Quality Control and System Audit A quality control and system audit of analytical laboratory operations connected with a marine measurement program should be made periodically. Audit for Level II programs is conducted by the operations personnel; for Level III programs, audit is by an independent reviewer. To control human errors, a pass-fail criterion should be established in the DQA plan (sec. 3.1.4). The frequency of pass- fail checks should be established as discussed in appendix A. In addition to the audit factors listed for field operations in Section 4.7.10, audits for an analytical laboratory should also verify that: a. Quality control check samples are used as specified in the DQA plan. b. Precision and accuracy of the system are monitored by the use of control charts. c. Functional checks are made as required by the DQA plan. Audits of laboratory operations should be made periodically by the laboratory supervisor to evaluate the data quality produced by the analysis system. In auditing an analyst's proficiency, the supervisor should consider the following problems and their suggested solutions: 1. The kinds of samples to use. Solution Use replicates, field samples, unknowns, spiked samples, and reference standards. 4-27 2. How to prepare and introduce samples into the laboratory operation without the analyst's knowledge. Solutions Samples should have the same labels and appearance as field samples or other samples routinely passed through for analysis. Determine whether a periodicity is called for by the DQA plan. If not, check samples should be placed in the system at random intervals. Save an aliquot from one test for analysis by another analyst. This technique can be used to detect performance bias. 3. How often should an analyst's proficiency be checked. Solutions Consider degree of automation. Consider total method precision. Consider analyst's training, attitude, and performance record. Consider DQA level and pass-fail criteria established in the DQA plan. Consider use of control charts. This audit function is applicable to Level II and Level III programs. Analyst proficiency of 99 percent is required for Level III programs, 95 percent for Level II programs. All documents related to this function must be maintained for independent audits during field operations for Level III programs. 4.8.7 Functional Checks Functional checks are routine examinations of equipment units to verify the stability and validity of the sample and the performance of the analytical equipment. These procedures should be designed to permit immediate recognition of changes in system performance as they occur. 
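Functional checks of this kind lend themselves to simple pass-fail tests against a predetermined acceptance range, as in the sketch below. The instrument, reference value, and tolerance shown are hypothetical and are not drawn from these guidelines.

    # Illustrative only: a routine functional check comparing a reference-sample
    # reading against a predetermined acceptance range. Values are hypothetical;
    # actual limits and corrective actions belong in the program's written procedures.

    def functional_check(reading: float, expected: float, tolerance: float) -> bool:
        """Return True if the reading of the reference sample falls within the acceptance range."""
        return abs(reading - expected) <= tolerance

    # Example: secondary standard run on an analyzer during routine operations
    expected_concentration = 5.00    # reference sample value, hypothetical units
    tolerance = 0.10                 # predetermined acceptance range

    observed = 5.17
    if not functional_check(observed, expected_concentration, tolerance):
        print("Functional check failed: record the event and apply the corrective "
              "action specified for this check (recalibrate or switch to backup "
              "equipment, as the DQA level requires).")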
Personnel should be provided with written performance specifications for each functional check together with recommen- dations for corrective actions. Examples of functional checks are the periodic checking of zero and full-scale deflection on an analog recorder, or the running of a secondary standard or reference sample solution during routine use of an atomic absorption spectrometer to determine whether the unit in- dicates readings within a predetermined range. This DQA function is applicable to all DQA levels. When a functional check reveals that an instru- ment is out of calibration, a Level I program may continue without any action except for recording the event. However, backup equipment should be installed as soon as possible. For a Level II program, measurement should be discontinued until the instrument is recalibrated. At Level III, all data ob- tained since the last calibration should be reviewed and the measurement program management should be notified if the problem is chronic. 4-28 4.8.8 Data Transferral Check Of all the errors that can occur during data transfer, those introduced by people are the most frequent. Data transfer checks also are made frequently to assure that equipment operators are recording instru- ment values correctly and to prevent loss of information during laboratory operations, because data lost during these operations often cannot be replaced. Data transfer checks such as redundant keypunching should be completely documented so that a reviewer can determine their adequacy. This function is applicable to Level II and Level III programs. The criteria for the number of transfer checks required should be established during DQA planning so that the desired assurance level is ob- tained. (See appendix A.) 4.8.9 Quality Control Check Samples Quality control check samples for laboratory operations are of known composition. They are used to determine the bias, precision, and accuracy of the measurement system and the performance of the laboratory analyst. Samples should be prepared from or be traceable to calibration standards. If recognized reference standard materials are not available, check samples should be certified as ab- solute standards by two or more independent methods. The magnitude of a check sample and its entry into laboratory operations should be unknown to the analyst. Quality control check samples form a basis for interlaboratory comparisons, for evaluation of new methods, procedures, and instruments, and for long-term intercomparisons. All analytical operations should be examined frequently to assure that results of the operation are compatible with objectives of the measurement program. An independent reviewer should examine check sample analyses to assure their adequacy. 4.9 DATA REDUCTION AND ANALYSIS This DQA element deals with the primary reduction and analysis of data collected during the measure- ment program. Data reduction and analysis may be performed either as field and laboratory opera- tions are going on or upon their completion. The DQA functions for this element are discussed below. 4.9.1 Personnel Training Training of personnel to carry out data reduction and analysis tasks can follow the guidelines set forth in section 4.7.1. The importance and applicablity of training for these tasks is identical to those set forth in sections 4.3.4, 4.7.1, and 4.8.1. 4.9.2 Data Processing The operational processing and reduction of the collected data are executed as indicated in the Data Processing Planning element of the DQA plan (sec. 
4.4). The goal is to assure that the data processing is scientifically and mathematically correct, that all in- fluences on the data set are accounted for, and that the precise manner of formulation of the data processing routine is documented. Quality control procedures described in sections 4.9.3 and 4.9.4 can be used to check, control, and verify this work. 4-29 This processing function is applicable to all DQA levels. A Level I program may not require a formal documentation of work performed, but a Level II program should use prepared step-by-step pro- cedures to control all work steps. For Level III programs, all calculations and verification procedures should be reviewed by an independent reviewer and documentation maintained as support data. 4.9.3 Reduction Checks Data reduction checks are made continuously to assure that the translation of data from the raw to the processed state was accomplished without introducing errors. Periodically, the supervisory technician should check strip charts read by each technician to assure that readings have been transcribed correctly. Mistakes in mathematical computations, another very common source of data reduction error, may require checking by a second person. It is preferable that such checks be made soon after the original calculations are made. This assessment function should be applied to Level II and Level III programs and to all measurement programs involving manual calculations of data. A Level II program requires the checking of some of the calculations. A Level III program requires checking of all calculations and maintenance of all documents as support data. 4.9.4 Standard Program Computer Analysis Checks During the use of a computer system, two types of checks should be made. The first involves a stan- dard check routine that tests the systems and subroutines to verify that they are functioning as specified. These checks are routine after system perturbations or changes caused by events such as power outages. The second type of check involves periodic verification of the validity of the computer program. Details of this function are described in Verification of Computer Programs, section 4.4.1. If the checks are satisfactory, there is no contribution to the total uncertainty of the data. 4.9.5 Quality Control and System Audit Quality control and system audits should be conducted regularly to assure that reduced data are representative of the raw data. (See also sec. 4.7.10.) This function is applicable to Level II and Level III programs. Level II programs require review of the data reduction and analysis documents. Level III programs also require checking by an independent reviewer of sample calculations and of the validity of assumptions made in the calculations. Level III programs also require that all documents be maintained for independent auditing during the course of work. 4.9.6 Support Data Collection and Storage Support data include raw data charts and printouts, calibration records, equipment performance evaluations, operating logs, records of environmental conditions at time of sampling or measurement, measurement comparison records, quality control and system audit records and records of corrective actions taken pursuant to the audit, all training and certification documents, all supporting calculation sheets and check prints, and all similar information. The support data must be readily available to measurement program personnel and independent reviewers. 
Support data should be collected continuously throughout the measurement program and reviewed periodically.

4.10 DATA VALIDATION

The data validation element of the DQA plan is designed to assure that the data collected during a measurement program are of sufficient quality to meet program objectives. The functions, or subunits, of this DQA element are described below.

4.10.1 Review of Support Data

The review of support data is an important step toward ascertaining the validity of data. This step is the first complete review of the entire data cycle; its purpose is to determine the cumulative effects of operational conditions, support equipment, calibrations, measurement comparisons, maintenance, and data reduction and analyses.

The support data review should identify "outliers," i.e., data points with unusually large or small values in a set of observations. The probability of an outlier being a valid member of a data set is small, but outliers should not be eliminated without a comprehensive review and discussion with the measurement program management. Outliers may be identified from the support data. There are a number of sources for these unusual values, such as faulty instrument or system components, inaccurate readings of record or dial, errors in transcribing data, or calculation errors. The occurrence of outliers is the basis for listing the following additional purposes for the review of support data:

a. Identify the need for closer control of the data collection process if unacceptable outliers occur frequently.

b. Eliminate outliers prior to statistical analysis of the data. In most cases, the presence of outliers violates a basic assumption of the analysis, so incorrect conclusions are likely to result if they are not eliminated before analysis. However, outliers should be reported, and their omission from analysis noted.

c. Identify outliers attributable to unusual conditions of measurement, such as abnormal local environmental conditions during the time of sample collection. Ideally, unusual conditions should be recorded on the field data report. Failure to report such circumstances often can be detected only upon the review of support data.

This review function should be implemented for Level II and Level III programs. On detection of outliers, a Level II program may require only that they be omitted from the data set, while a Level III program may require a complete review of the data collecting and reduction process to determine the cause of the outliers.

4.10.2 Statistical Analysis of Data

Statistical analysis of data often permits the reporting of data in a concise and simplified form. Statistical analysis should be done only after data validity has been established. Methods and techniques described in standard textbooks may be used for the statistical analysis. Some of the common techniques are described in appendix A.

The complexity of the statistical techniques that are to be applied to a set of data is generally chosen at the discretion of the investigator. Nevertheless, certain minimum statistical analyses should be considered for all measurement programs so that most of the collected data can be reported in summary form. For Level II and Level III programs, the statistical treatment of data should be compared with alternative analytical methods after their validity has been established by statistical tests.

4.10.3 Data Evaluation

At this point, the data will undergo final evaluation before preparation of the final report.
The extent of this evaluation depends both on the objectives of the measurement program and on the results of the review of support data and the statistical analysis of data immediately preceding the Data Validation element (4.10.1 and 4.10.2). Data evaluation should consider the following:

Data sources (methodology).

Inferences from data on:
the effect of data precision and accuracy on conclusions.
the appropriateness of the analysis procedure.
comparisons with other studies or research similar in nature and considered to be of equal quality.
limitations of data.
completeness of data.
satisfactory completion of objectives.

4.10.4 Preparation of the Uncertainty Statement for All Data

A reported value whose precision and accuracy have not been stated is of limited use. The actual error of a reported value — that is, the magnitude and sign of its deviation from the reference value — is usually unknown. The limits of this error, however, can be inferred from the system certification, total error budget analysis, measurement comparisons, precision and accuracy of the measurement process, and application of various error-controlling functions used to obtain the reported value.

The uncertainty statement should be precise and clear. It should state both the measurement system error and the assurance level used for controlling human errors. The procedures for estimating the measurement system errors are summarized in section 4.5, Total Error Budget Verification, and are discussed in detail in appendix A. This appendix contains information on equipment and human errors and a discussion of the concept of uncertainty statements. A review of appendix A would be valuable before an uncertainty statement is prepared.

Each reported value should be qualified by a statement of the limits of its systematic and random errors together with a statement of the number of degrees of freedom on which it is based. A separate statement on the DQA level should include information on the criteria used for achieving the chosen assurance level for human errors. The uncertainty statement may be very brief in final form, but should provide the user of the reported data with all the information needed to assess its usefulness. Examples of uncertainty statements follow:

1. "Sonic velocities determined to accuracy of ±2.5 meters per second (instrument systematic error 1.0 meter per second, random error 1.5 meters per second). Instrument calibrated with Organization X Standard Procedure Y (calibration obtained from 16 points with standard deviation of 0.7 and random error of 1.5 for 95 percent confidence level and 15 degrees of freedom). Transit time and distance calibrations done before, after, and at frequent intervals during testing using standard traceable to Z. Data corrected to temperature of 25°C and pressure of 101.325 kPa using relationship ABC. Assurance level for operator errors 99 percent based on pass-fail criteria. Thirteen samples of 150 measurements checked and reviewed; none defective. DQA program: Level III."

2. "The reported current data have been obtained from a current meter having the following instrument errors. The calibration of the current meter was prepared using 25 data points between velocities of 0.5 and 6.0 knots. The mean random error was ±0.15 knots (with a 95 percent confidence level and 24 degrees of freedom).
Examples of uncertainty statements follow:

1. "Sonic velocities determined to an accuracy of ±2.5 meters per second (instrument systematic error 1.0 meter per second, random error 1.5 meters per second). Instrument calibrated with Organization X Standard Procedure Y (calibration obtained from 16 points with standard deviation of 0.7 and random error of 1.5 for 95 percent confidence level and 15 degrees of freedom). Transit time and distance calibrations done before, after, and at frequent intervals during testing using a standard traceable to Z. Data corrected to a temperature of 25°C and a pressure of 101.325 kPa using relationship ABC. Assurance level for operator errors 99 percent based on pass-fail criteria. Thirteen samples of 150 measurements checked and reviewed; none defective. DQA program: Level III."

2. "The reported current data have been obtained from a current meter having the following instrument errors. The calibration of the current meter was prepared using 25 data points between velocities of 0.5 and 6.0 knots. The mean random error was ±0.15 knots (with a 95 percent confidence level and 24 degrees of freedom). The current meter was calibrated to account for salinity differences between 32 and 37 ppt, and the calibration curve was linear (15 calibration data points were used) with an RSE of 0.06 knots. The systematic error for the current meter calibration was 0.05 knots, and that for the salinity tests was 0.05 knots. Therefore, assuming the parameters are independent, the maximum total random and systematic errors are ±0.16 and ±0.1 knot, respectively. Of the 1,000 measurements transferred by operators, 80 were inspected and only 1 failed. Therefore, the objective assurance level of 99.0 percent was met. The program was conducted at DQA Level II."

3. "The estimated random errors in the reported temperature values are not more than ±0.5°C in the range of 0° to 30.0°C and then increase to ±1°C at 70°C. The systematic error is negligible. Of the 1,000 measurements taken by operators, 50 samples were inspected; 20 were unsatisfactory (failed). Therefore, the objective assurance level of 75 percent was met. The program was conducted at DQA Level I."

4.11 AUDIT

Audits are made during the operational phase of the measurement program to evaluate data quality and to assure that the DQA plan has been satisfactorily followed. These audits are made independently of, and in addition to, the normal quality control checks and internal audits made by the program operator, analyst, or supervisor during the operational phase of the measurement program. (Routine quality control audits were discussed in sections 4.7.10, 4.8.6, and 4.9.5.) Section 4.1.2 gives several examples of when audits should be performed. Independent audits must be made by personnel not involved in the measurement program. To perform an effective audit, the auditor should have a complete understanding of the measurement program objectives and the DQA plan. Complete audits require a review of the DQA plan and audits of actual operations.

4.11.1 Review of Data Quality Assurance Plan

The DQA plan is to be reviewed for completeness and effectiveness in meeting the measurement program requirements. Review of the DQA plan may be made by program management, or by an individual selected by management to perform the audits. The review is completed in two steps: First, the auditor reviews the DQA plan prior to field operations to assure completeness. However, a complete evaluation can be made only when the plan is used for actual operations. Thus, the final review of the DQA plan (the second step of the audit) takes place at the site of the operations. During this part of the review, the following are assessed and documented: facilities; equipment; systems; record keeping; data reduction; data validation; operations, maintenance, and calibration procedures; and reporting aspects for the total measurement program.

The review should:

a. Identify existing system documentation such as maintenance manuals, organizational structure, and operating procedures. Ascertain that the documents in use are those cited in the DQA plan.

b. Evaluate the adequacy of the procedures as documented. Determine if the procedures provide the detail needed by personnel to perform their tasks.

c. Evaluate the use of and adherence to the documented procedures in day-to-day operations; evaluation is to be based on observation of conditions and a review of applicable records on file. Determine whether the personnel are following the prescribed procedures.
d. Evaluate the availability of personnel for quality control and system audits, and whether these functions are being carried out.

e. Identify practical limitations to executing DQA functions. Determine if there are difficulties in executing the DQA plan because of unforeseen operational conditions.

Upon completion of the onsite study, the independent reviewer should prepare a report recommending changes to the DQA plan. The comments of the auditor should be reviewed by the measurement program management, and the auditor's recommendations for improving the DQA plan should be acted upon. If required, a modified DQA plan should be prepared for use in the operational phase of the program.

4.11.2 Audit of Actual Operations

An operations audit also is made as a qualitative and quantitative inspection and review of the total measurement system. This audit is used to determine whether the DQA plan has been executed as specified and to evaluate the validity of data by independent checking. Audits of operations may be made at any time during the operational phase. The first operations audit is appropriate shortly after monitoring has been initiated. Additional audits may be appropriate for lengthy measurement programs or to determine whether deficiencies uncovered in earlier audits have been corrected. The DQA plan for the measurement program should be the basis for conducting these audits.

Audits should include reviews of the following:

a. Organization and responsibility — Is the quality control organization operational? Are quality control and system audits properly made and documented?

b. Data collection — Are written data collection procedures available and followed? Are personnel completing all record forms and identification labels?

c. Sample collection — Are written sample collection procedures available and followed? Do the personnel use the required containers? Are containers cleaned to prevent contamination?

d. Sample analysis — Are written analysis procedures available and followed?

e. Human errors — Are data checks made and actions taken to control human errors? Is the program of pass-fail checks for operations in use? Does checking show that the desired assurance level is met?

f. Calibration — Are written calibration procedures available and followed? Are the records of past calibrations available; do they indicate that calibration corrections have been applied to the data?

g. Measurement comparisons — Are results from measurement comparison testing reviewed and used?

h. Preventive maintenance — Is the preventive maintenance schedule for the measurement program being followed?

The auditor should qualitatively determine the validity of field, analytical laboratory, data reduction and analysis, and data validation operations. The determinations can be made in several ways. Following are a few examples:

a. Data collection audit — The auditor may use a separate calibrated thermometer or a set of reference standards to check the data collection system (actual field temperature and instrument calibration).

b. Analytical laboratory audits — The auditor may use a portion or aliquot of several routine samples for analysis.

c. Data processing audits — Data reduction checking may involve spot checks on calculations. Data validation may be checked by inserting a dummy set of raw data in the data processing system; this should be followed by a review of these validated data.
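A minimal sketch of the dummy-data check mentioned in item c is given below. The reduction function, dummy values, and tolerance are hypothetical placeholders for whatever the program's own processing system and DQA plan specify.

```python
# Illustrative sketch: push raw values with known expected reduced values
# through the (hypothetical) data reduction routine and compare the results
# within a tolerance, as a spot check during a data processing audit.
def reduce_raw_count(raw_count, scale=0.01, offset=-0.05):
    """Hypothetical reduction: convert a raw instrument count to engineering units."""
    return scale * raw_count + offset

dummy_cases = [
    # (raw count, expected engineering value)
    (0,    -0.05),
    (100,   0.95),
    (2550, 25.45),
]

tolerance = 1e-6
for raw, expected in dummy_cases:
    got = reduce_raw_count(raw)
    status = "pass" if abs(got - expected) <= tolerance else "FAIL"
    print("raw=%5d  expected=%7.2f  got=%7.2f  %s" % (raw, expected, got, status))
```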
Copies of reports on completed audits should be sent to the measurement program management for review, and to operators and supervisors for any necessary actions. Audit reports should list:

a. Purpose of the audit.

b. Personnel involved in the audit.

c. Activities audited, the tests observed, and the documents and data reviewed.

d. Observations of work performance and errors in procedures.

e. Recommendations for corrective action, changes in numbers of personnel, and changes in calibration on future work.

f. Deadline for completion of corrective actions; the date chosen should minimize the effect of the discrepancies on future work.

g. Provisions for verification of completion of corrective actions, either by written reports to the auditor or by an additional visit to the operations site.

At the conclusion of the measurement program, a final audit should be made to verify that the program has met its objectives and was conducted in accordance with the DQA plan, and that all actions were adequately documented.

The role of audits in the overall measurement program is one of verification. While audits do not improve data quality if all work is correctly performed, they do provide assurance that the work prescribed for the measurement program has been conducted properly. Audits conducted by individuals not responsible for day-to-day operations provide program managers with another management control mechanism.

4.12 DATA REPORTING

After the data collection program has been completed and audited, the data report is prepared in accordance with the requirements of the specific measurement program.

4.12.1 Preparation of Report

The final report describes and summarizes raw data, data quality, precision and accuracy of measurement methods, methods of analysis, and quality control information. All raw data should be included in the final report when possible. When the amount of raw data is great, examples of typical raw data should be included in the body of the report; the complete set of raw data may be published as a separately bound appendix to the final report or may be made available on microfiche from a data center.

The final report should include a discussion of the objectives of the measurement program, in terms of the data required, and an uncertainty statement. Methods of data analysis should be described unless well documented in the literature. A statement on any limitations and on the applicability of the results should be included. Information on personnel, equipment, procedures, training, calibration, laboratory activities, audits, the ship's log, hard-copy strip chart records, data sheets and calculations, and on documents used to meet contractual requirements may also be included in the report.

4.12.2 Archiving

Archiving includes the placement of the final report into the storage system of a data center. This may include storage on microfilm or on computer tapes or discs, and cataloging of the data in indices. The data to be archived should be accompanied by the uncertainty statement contained in the final report. The archiving system for the measurement program should be identified before data submittal. In the absence of specific requirements for the submittal of data, Publication B-1, "Guide to Submission of Data," published by the National Oceanographic Data Center (ref. 57, appendix C), should be used.
APPENDIX A: SYSTEM ERROR ANALYSIS

CONTENTS

A.1 Introduction A-2
A.1.1 Provisions for error analysis in DQA plan A-2
A.1.2 Types of data A-3
A.2 Errors A-3
A.2.1 Sources of error A-3
A.2.1.1 Experiment design A-4
A.2.1.2 Equipment characteristics A-4
A.2.1.3 Calibration A-4
A.2.1.4 Effects of disturbing the environment by sampling A-5
A.2.1.5 Environmental influences A-5
A.2.1.6 Data processing A-5
A.2.1.7 Human errors A-6
A.2.1.7.1 Sources of human error A-6
A.2.1.7.2 Control of human errors A-6
A.3 Treatment of errors A-8
A.3.1 Random errors A-9
A.3.2 Systematic errors A-9
A.3.3 Combination of errors A-10
A.4 Preparation of an uncertainty statement A-11
A.4.1 Elements of an uncertainty statement A-11
A.4.2 Examples of uncertainty statements A-12

Table A-1. Assurance levels of human error A-7

A.1 INTRODUCTION

The normal measurement system consists of a number of individual pieces of equipment and a team of operators. For the purposes of this guideline, the total measurement system is defined as all sensors, samplers, support equipment, platforms, standards, methodologies, and procedures used to detect, collect, analyze, transfer, and report data. Operators are the human part of the system. The sum of the errors attributable to the measurement system and those attributable to human fallibility forms the total error for the system and defines the uncertainty of the reported data.

Analysis of the total error must include consideration of all the individual errors associated with the total measurement system that addresses sampling, analysis, data collection, and presentation. A complete error analysis must be made as the basis for developing the uncertainty statement that should accompany the reported data. The uncertainty statement defines the actions taken during the marine measurement program to limit error and reports the error associated with the data.

A.1.1 Provisions for Error Analysis in DQA Plan

The objectives of the measurement program are established as part of the DQA plan. The parameters to be measured are defined and the error objective for each parameter is established (sections 2.3 and 4.1). Estimates of expected error sources and magnitudes are made. During the early stages of planning, error analysis is limited to estimations based on experience or calculation. If the error source estimate does not meet measurement program objectives, the measurement system must be revised or the objectives must be reevaluated.

For DQA Level III programs, a formal verification of the total error budget is recommended. For Level I and Level II programs, such verification is not required. However, it is suggested that the system error determined during development of the System Certification element be reviewed to assure, before operations start, that the selected measurement system satisfies program objectives.

Error analysis during operations is not an analysis in the strictest sense; rather, it is a series of comparisons against the planned acceptable error limits. Error analysis during the operations stage consists of discrete tests (calibrations, measurement comparisons, etc.) to assure that the system is not producing excessive errors. During operations, some formal analyses, such as intercomparisons, should be made. The results should always be compared against the results of the System Certification, Data Processing Planning, and Total Error Budget Verification elements, as appropriate to the selected DQA level, to assure that the error objectives are not exceeded. If the error objectives are exceeded, action must be taken to correct the error sources; if they cannot be corrected, the program objectives must be reevaluated.
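The planning-stage comparison described in section A.1.1 can be sketched briefly as follows. The component names and values, and the quadrature and arithmetic combination rules, are illustrative assumptions only.

```python
# Illustrative sketch: roll up estimated component errors and compare the
# result with the total allowable error objective set for the parameter in
# the DQA plan. All names, values, and combination rules are assumptions.
import math

error_objective = 0.25   # e.g., degrees C, total allowable error for the parameter

random_estimates = {"sensor": 0.10, "recorder": 0.05, "data reduction": 0.02}
systematic_estimates = {"calibration standard": 0.05, "installation": 0.03}

total_random = math.sqrt(sum(v * v for v in random_estimates.values()))
total_systematic = sum(systematic_estimates.values())
total_error = total_random + total_systematic

print("estimated total error: %.3f (objective %.3f)" % (total_error, error_objective))
if total_error > error_objective:
    print("objective exceeded: revise the measurement system or reevaluate objectives")
else:
    print("estimated error budget meets the program objective")
```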
The uncertainty statement prepared during data validation should be based on the results of a review and analysis of all information on system errors.

A.1.2 Types of Data

Data may be divided into two types: single point or multipoint. Definitions follow:

Single point data — basically field measurements of parameters that change with time, e.g., the velocity of an ocean current at station x at 0900 LST, 21 June 1979. By 0902 LST, the velocity components have changed. Thus single point data normally are not used for calibrating an instrument.

Multipoint data — basically repeated measurements of a single sample, or of a standard, under controlled conditions. The repeated measurements will vary in a pattern that reflects the accuracy of the measurement system. For example, a caliper being calibrated against a standard 30 cm bar may register 29.9773 cm, 30.0002 cm, and 29.9227 cm on three successive measurements of the standard. After a sufficient number of measurements have been made, a statistical analysis may show that the one-sigma accuracy of the caliper is 30.0000 ±0.05 cm (a computational sketch of such an analysis follows this section).

For most marine measurement programs, multipoint data occur only during system certification or subsequent calibrations and measurement comparisons. Statistical analysis of the calibration data (multipoint) and evaluation of the calibration process lead to the determination of the error associated with the system. Statistical analysis of single point data cannot result in a statement concerning the error of the data. For example, if the temperature of a body of water is measured for a long period of time, a mean temperature can be reported. However, the mean temperature and standard deviation are only representations of the bulk of the data and do not indicate any consideration of the error associated with the individual data points. The error associated with single point data is established from the multipoint data obtained during calibration.

Mistakes related to sampling rate, collection of inadequate information, or monitoring of inappropriate parameters must be resolved by the personnel of the measurement program.
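The statistical treatment of multipoint calibration data can be sketched as follows. The three readings quoted above are from the caliper example; the additional readings, and the interpretation of the mean offset as an estimated bias and of the standard deviation as the one-sigma random error, are illustrative assumptions for this sketch.

```python
# Illustrative sketch: summarize repeated measurements of a known standard.
import statistics

standard_value = 30.0000  # cm, reference bar
readings = [29.9773, 30.0002, 29.9227,           # readings quoted in the text
            30.0310, 29.9641, 30.0158, 29.9880]  # hypothetical further readings

mean_reading = statistics.mean(readings)
one_sigma = statistics.stdev(readings)             # random scatter about the mean
systematic_offset = mean_reading - standard_value  # estimated bias of the caliper

print("mean reading      = %.4f cm" % mean_reading)
print("one-sigma scatter = %.4f cm" % one_sigma)
print("estimated offset  = %+.4f cm relative to the %.4f cm standard"
      % (systematic_offset, standard_value))
```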
A.2 ERRORS

A.2.1 Sources of Error

Errors that affect the quality of data acquired during the course of a measurement program originate in three principal areas: electromechanical (or chemical) imperfections in the measurement system, errors in the design of processing procedures, and human fallibility. The following sources of error are described in detail below:

Experiment design
Equipment characteristics
Calibration
Effects of disturbing the environment by sampling
Environmental influences
Data processing
Human errors

A.2.1.1 Experiment Design

The design of an experiment can affect the uncertainty of data acquired during a measurement program through its treatment of site selection, sampling rates, and choice of parameters to be measured. For example, attempting to measure the rate of current flow at the ocean-atmosphere interface is risky, because the measurements will be affected by short-period variations in local winds, the passage of the measuring vessel, and the effects of swell set into motion by distant storms. Measuring ocean currents for 10 seconds at a time at 10-minute intervals may easily be nonrepresentative if there are short-period fluctuations that generally last from 30 to 45 seconds. The measurement of the temperature structure of a vertical column of seawater must take into account the normal temperature lapse rate and the possible presence of a temperature inversion. Measuring this structure at 1-meter intervals may be overmeasurement; measurement at 100-meter intervals will not provide information on any fine structure and may well miss large-scale features.

Measurement design must include considerations of representative siting, representative sampling intervals in both time and space, and the choice of parameters that will truly represent the variables to be measured.

A.2.1.2 Equipment Characteristics

Characteristics of measurement equipment that can produce errors in the data are:

Power fluctuations — Fluctuations caused by variations in electrical voltage or frequency, hydraulic pressure, and other mechanical or electrical variables.

Transmission losses — Data may be lost or degraded during transmission from one equipment unit to another. For example, fluctuations in the temperature of a cable carrying an analog voltage signal may cause a change in the signal. Electronic noise may lead to loss or alteration of bits on digital transmission links.

Conversion algorithm — The equations used in changing analog signals to digital format may cause losses in precision or accuracy because of simplifying assumptions used in deriving the equations.

Signal characteristics and conditioning — Information carried by an electronic waveform may be distorted by filtering or other conditioning operations, or may be clipped because of amplifier saturation. Mechanical factors such as response time, time lag, aging, and friction effects may also cause signal distortion.

Readability — The precision of the reading may be limited by scale graduations, by malfunctions of recording devices, or by poor access for visual reading of indicators.

Instrument error — An instrument does not yield the same reading for measurements repeated under identical conditions.

A.2.1.3 Calibration

Calibration is the process of comparing instrumental readings under specific conditions with established values associated with these conditions. Some of the factors involved in calibration are sources of uncertainty in instrumental measurements. Factors to be considered are:

Standards — The uncertainty stated for an instrument cannot be smaller than the uncertainty associated with the reference or transfer standard used.

Techniques — During calibration it is often not possible to subject the instrument to realistic extremes of environmental conditions.

Transfer function — Because calibration at all intermediate values is seldom practical, a transfer function must be developed (see the sketch following this list). Transfer functions often do not account adequately for all departures from the smoothed interpolation between calibration points.

Disturbance of medium — Placing an instrument in a medium for calibration may influence the medium being used as a calibration standard.

Adjustment — Adjusting the instrument manually during periodic calibration is often a source of human error and may be the cause of equipment malfunction or failure. Use of calibration curves obtained during the full-scale preoperational testing and calibration of equipment should be considered. Drift away from calibration standards should be noted in the field notes and with the archived data.
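A minimal sketch of developing a linear transfer function from calibration points by least squares, and of quantifying its scatter as a residual standard error (RSE) in the sense used in appendix B, follows. The calibration points and the straight-line form are illustrative assumptions; real calibrations may require other functional forms.

```python
# Illustrative sketch: fit a straight-line transfer function to calibration
# points and compute the RSE of the fit (treated as a random error component).
import math

# (instrument reading, reference standard value) -- hypothetical points
points = [(0.10, 0.52), (0.25, 1.01), (0.40, 1.48), (0.55, 2.03), (0.70, 2.49)]

n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# RSE: scatter of the calibration data about the fitted line
# (n - 2 degrees of freedom for a two-parameter fit).
residuals = [y - (slope * x + intercept) for x, y in points]
rse = math.sqrt(sum(r * r for r in residuals) / (n - 2))

print("transfer function: standard = %.3f * reading + %.3f" % (slope, intercept))
print("RSE = %.3f" % rse)
```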
A.2.1.4 Effects of Disturbing the Environment by Sampling

The environment is often affected by the act of sampling (the Heisenberg effect). Thus, measurements taken during field operations can introduce uncertainty into the end product because of:

Interference — Measuring a parameter can change the quantity being measured. The platform supporting the measurement instrument or the instrument housing itself also can disturb the medium being measured. For example, the introduction of a current meter into a flow path influences the movement of water.

Sample contamination — Samples can become contaminated during the sampling process or during storage. Aging during storage can introduce further changes in the sample.

A.2.1.5 Environmental Influences

The environment of measurement can introduce error. Examples follow:

Platform dynamics — The motion of the sensor platform can induce error. In situ instruments suspended over the side of a ship during rough weather experience accelerations through the water column. A current meter array on an anchored buoy line experiences relatively high-frequency motions, which can cause errors in velocity measurements, and relatively longer period swings that also are sources of errors.

Geometry — The spatial arrangement of components may influence measurement accuracy. For example, a current meter not oriented directly into the flow or not horizontal will not give true measurements. Another example: a piston core outfitted with temperature probes to measure the vertical flux of heat may not enter the sediments vertically. If the core is to be used to calculate sedimentation rates, the nonvertical angle of entry would lead to errors in interpretation.

A.2.1.6 Data Processing

Mathematical processing of raw data introduces sources of error such as:

Truncation and rounding — The uncertainty limits associated with rounding usually are known. Additional uncertainty can result from recurring truncation during computer processing.

Simplification of formula — The dropping of second or higher order terms in defining equations introduces some uncertainty.

A.2.1.7 Human Errors

This section discusses the impact of human error on marine measurement programs. Methods are described for use in estimating and controlling the occurrence of human errors. Checking for human errors should not be used as a means for qualifying personnel; rather, the checks should be used to evaluate whether the operator is correctly performing the assigned work task under the actual field conditions.

A.2.1.7.1 Sources of human error

Human errors may be introduced by operators at any point in the data collection, transfer, analysis, and reporting process. Typical human errors include reading an instrument incorrectly, transposing digits or recording incorrect values, erroneous keypunching, and calculating incorrectly. Human errors have the following characteristics:

Errors that occur during data collection may be detectable in the final data set.

Magnitudes cannot be defined, although frequency of occurrence may be estimated.

Human errors can be attributed to fatigue or discomfort, particularly under field conditions.

Human errors frequently are the real cause of reported equipment failures.

A.2.1.7.2 Control of human errors

Human error in a marine measurement program can be minimized by reducing the need for operators in the data collection process, by training operators to maximum effectiveness, and by inspections to verify operator performance. Operators are subject to emotional, psychological, and physical difficulties that can affect their performance.
The importance of these factors, especially on board ship, cannot be overemphasized.

Operators can be partially excluded from direct participation in the data collection process by substituting recorders or other equipment that produces hard copy of the data. Inspections also serve to remind operators of their obligations.

A procedure for developing criteria for checking operator performance is described in the following. The reader is reminded that, since the magnitude of human error usually cannot be determined, the criteria can be used only as a method for determining whether operators are performing within acceptable limits. The following steps can be used for developing such a method. Steps 1 through 5 describe the planning activity prepared with the DQA plan, and Steps 6 and 7 describe the reporting of human error.

Step 1: On the basis of measurement program objectives, DQA level, resource allocation, sophistication of the measurement system, personnel experience, and accuracy objective, an assurance level (sec. 3.1.4) for human error should be established in the planning stage.

Step 2: Identify the sources of human error, such as calibration, sampling, manual recording of data, transfer of samples and data, calculations, and data reduction.

Step 3: Establish a pass-fail criterion for each source of human error. For example, the operator may be required to read and record temperatures within a ±0.5°C tolerance limit. Upon inspection or examination, a record should be prepared to indicate performance (pass or fail). The tolerance limit established in this step should be consistent with the maximum instrument error.
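A simple sketch of recording pass-fail inspection results for one source of human error and comparing them with the sampling plan chosen for the target assurance level follows. The required sample size and allowable number of failures would come from Table A-1 and the DQA plan; the values used here are placeholders, not values taken from that table.

```python
# Illustrative sketch: compare operator pass-fail inspection results with the
# sampling plan (sample size and allowable failures) chosen for the target
# assurance level. Plan values below are placeholders only.
def assurance_check(inspected, failed, required_sample, allowed_failures):
    """Report whether a pass-fail inspection satisfies the chosen sampling plan."""
    if inspected < required_sample:
        return "inspect more items: %d of %d required" % (inspected, required_sample)
    if failed > allowed_failures:
        return "plan not met: %d failures exceed the %d allowed" % (failed, allowed_failures)
    return "plan met: %d inspected, %d failed (allowed %d)" % (inspected, failed, allowed_failures)

# Hypothetical check of manually transcribed temperatures against a +/-0.5 deg C limit.
print(assurance_check(inspected=80, failed=1, required_sample=80, allowed_failures=1))
```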
[Table A-1. Assurance levels of human error]

... errors are given in terms of total allowable error.

error, human — Human errors are introduced by program personnel at any point in the data collection, transfer, analysis, and reporting process. This error can be controlled by using inspection procedures and criteria in areas known to be subject to human mistakes.

error, random — Variations between results of repeated measurements of the same quantity that are random in nature and individually not predictable. The causes of random error may or may not be determinate or assignable; however, their distribution can be quantified. Generally, they are assumed to be normally distributed (Gaussian), which can be verified by applying standard statistical tests.

error, residual standard (RSE) — Also known as the standard error of the mean (SEOM), RSE is a measure of the scatter of calibration data about a best-fit curve obtained from the least-squares curve-fitting method. RSE is treated as random error.

error, systematic — A consistent deviation of the results of a measurement process from a reference or known level. The cause may or may not be known, but can be assigned by consideration of experimental data or theoretical considerations.

error, total system — Total system error is the composite error of all components of a system, including random, systematic, and even human errors.

function, DQA — DQA functions are specific activities undertaken during the course of a measurement program which can individually affect the quality of the data obtained. DQA functions include all aspects of the data collection process.

intensity — Intensity is the degree to which DQA functions are applied to a measurement program. In general, the intensity for a specific DQA function increases with increasing DQA level.

level, DQA — DQA level is defined as the intensity of application of DQA functions to a specific measurement program so that the resultant data quality meets program objectives. The determination of DQA level considers objectives, end use of the data, sophistication required in the data collection process, and complexity and resources of the measurement program.

maintainability — The probability that an item which has failed can be restored (i.e., repaired or replaced) within a stated period of time.

outlier — An extreme value whose inclusion in a group of values is questionable.
It can be detected statistically if the probability of its being a valid data point in the group is very small, or if it falls outside the bounds known for the parameter.

precision — A measure of mutual agreement among individual measurements of the same parameter, usually under prescribed similar conditions. Precision may be expressed in terms of the standard deviation or in terms of the variance. Various measures of precision exist depending upon the "prescribed similar conditions" (replicability, repeatability, reproducibility).

quality control check samples — Samples of known composition used to determine the bias, precision, and accuracy of the measurement system and the performance of the analyst using the same laboratory instruments and procedures. These samples should be prepared from, or be traceable to, calibration standards.

random samples — Samples obtained in such a manner that all items or members of the lot, or population, have an equal chance of being selected in the sample.

reliability — The probability that an item will perform a required function under stated conditions for a stated period of time.

repeatability — The precision, usually expressed as a standard deviation, of measurements made at different times of the same sample at the same laboratory. The time interval between measurements should be specified, because same-day repeatability can be expected to be smaller than day-to-day (or week-to-week) repeatability.

replicability — The precision, usually expressed as a standard deviation, measuring the variability among replicates. It can also be expressed as an upper confidence limit for the difference between two replicates.

replicates — Repeated but independent determinations of the same sample, by the same analyst, at essentially the same time and under the same conditions.

representative sample — A sample taken to represent a lot or "population" as accurately and precisely as possible. A representative sample may be a completely random sample or a stratified sample, depending upon the objective of the sampling and the sampled population for a given situation.

reproducibility — The precision, usually expressed as a standard deviation, measuring the variability among measurements of the same sample at different laboratories.

resolution — The limits at which valid differences between two or more quantities can be detected; the limit of an instrument in detecting the variability in quantities of data.

response time — The length of time required for the output of an indicating method to reach a specified fraction of its new value after application of a step input or disturbance.

sample — A subset, or group of objects, selected from a larger set, called the "lot" or "population." Unless otherwise specified, all samples are assumed to be randomly selected.

standard — A physical quantity with attributes known to a well-determined (fundamental) limit of uncertainty. Standards are used to test the performance of and to calibrate sensors, instruments, and systems.

standard error of the mean (SEOM) — (See error, residual standard (RSE).)

standard, primary — A material or device having a known property that is stable and can be accurately measured or derived from established physical or chemical constants. Primary standards can be used repeatedly and are used to develop secondary standards. Primary standards have a distinct legal status.
standard, reference — In the hierarchy of standards, the laboratory's highest ranking standard, against which "working" and "transfer" standards are compared and from which their values are obtained. The values for the laboratory reference standard would be obtained by comparison with the standards at the National Bureau of Standards or with the standards at some other standardizing activity.

standard, transfer — A material or device, with well-established characteristics, that can be shipped from one laboratory to another for interlaboratory comparison activities and still maintain a known accuracy that is traceable to a reference standard. The transfer standard is not inherently inferior in quality to a reference standard, except that the act of shipping may make it so.

standard, working — A material or device, with well-established characteristics, that is used in a laboratory for day-to-day calibration work.

statistical control chart — A graphical chart with statistical control limits and plotted values (usually in chronological order) of some measured parameter for a series of samples. Use of the charts provides a visual display of the pattern of the data, enabling the early detection of time trends and shifts in the data measurement system. Such charts should be plotted in a timely manner, i.e., as soon as the data are available.

system — The entire complex of equipment, the data collection platform, and data processing and computational software, from measurement sensor to final output data device.

uncertainty statement — The relevant descriptors that define the quality of the data. This statement includes quantifications of the various error types, details of how and against what objective standards the various errors were determined, the DQA level, and the assurance level realized for human error.

APPENDIX C: RECOMMENDED LITERATURE

This appendix lists information for users of the DQA Guidelines publication.

SUBJECTS: The numbers listed below each subject heading refer to the items listed in the BIBLIOGRAPHY.

Auditing — 4, 5, 23, 24, 25, 39, 47, 48
Calibration — 4, 5, 6, 16, 19, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 36, 39, 40, 41, 45, 47, 48, 50, 53, 54, 55, 56, 58
Configuration Control — 4, 5, 23, 25, 39, 47, 48
Control Charts/Tables — 4, 5, 23, 25, 26, 27, 29, 31, 35, 39, 43, 45, 47, 48
Data Evaluation — 4, 5, 6, 14, 39, 41, 46, 47, 48, 49
Data Processing and Validation — 4, 5, 39, 46, 47, 48, 54
Data Quality Assurance/Quality Assurance/Quality Control — 2, 4, 5, 6, 7, 11, 12, 15, 19, 23, 25, 32, 37, 38, 39, 40, 42, 45, 46, 47, 48, 49, 50, 54, 55, 56, 59
Data Submission Quality Control — 20, 45, 47, 48, 52, 55, 57
Documentation — 4, 5, 20, 23, 35, 39, 47, 48
Error Analysis and Uncertainty Statement — 3, 4, 5, 8, 9, 13, 19, 25, 27, 30, 31, 33, 39, 45, 47, 53, 54
Glossary of Terms Used in Data Quality Assurance — 1, 44, 47, 54
Interlaboratory/Intralaboratory/Intercomparisons — 13, 16, 18, 19, 25, 41, 47, 49, 50, 56
Personnel Training — 4, 5, 23, 25, 39, 47
Standard Reference Samples/Reference Standards — 10, 16, 17, 18, 23, 25, 26, 28, 29, 47, 48, 55
Statistical Analysis — 3, 8, 23, 25, 27, 29, 31, 33, 47, 48, 49
Sources of Error in Marine Measurements — 40, 54, 56
Vendor's Quality Control — 4, 5, 21, 23, 25, 34, 35, 36, 39, 43, 47, 48, 51, 59

BIBLIOGRAPHY:

1. American National Standards Institute, 1973, "Quality Assurance Terms and Definitions," ANSI N45.2.10-1973, American Society of Mechanical Engineers, United Engineering Center, New York, New York, 4 pp.
2. Baggot, H.E., 1972, "Data Quality Assurance in Shipboard Computer-Controlled Telemetry System," International Telemetering Conference Proceedings, International Foundation for Telemetering, Woodland Hills, California, and Instrument Society of America, Pittsburgh, Pennsylvania, pp. 603-613.

3. Box, G.E.P., and G.M. Jenkins, 1970, Time Series Analysis: Forecasting and Control, Holden-Day, Inc., San Francisco, California, 553 pp.

4. Buchanan, James, 1976, "Development and Trial Field Application of a Quality Assurance Program for Demonstration Projects," EPA-600/2-76-083, Research Triangle Institute for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Industrial Environmental Research Laboratory, Research Triangle Park, North Carolina, 83 pp.

5. Buchanan, James, 1976, "Guidelines for Demonstration Project Quality Assurance Programs," EPA-600/2-76-081, Research Triangle Institute for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Industrial Research Laboratory, Research Triangle Park, North Carolina, 54 pp.

6. Buchanan, James, and F. Smith, 1976, "Guidelines for Development of a Quality Assurance Program," Vol. 16: "Method for the Determination of Nitrogen Dioxide in the Atmosphere (Sodium Arsenite Procedure)," EPA-650/4-74-005-p, Research Triangle Institute for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Monitoring and Support Laboratory, Office of Monitoring and Technical Support, Research Triangle Park, North Carolina.

7. Cameron, J. M., 1976, "Measurement Assurance," Journal of Quality Technology, Vol. 8, No. 1, pp. 53-55.

8. Cameron, J. M., 1977, "Measurement Assurance," NBSIR 77-1240, U.S. National Bureau of Standards (NBS), Institute for Basic Standards, Office of Measurement Services, Washington, D.C., 13 pp.

9. Campion, P. J., J. E. Burns, and A. Williams, 1973, A Code of Practice for the Detailed Statement of Accuracy, England Department of Trade and Industry, National Physical Laboratory, Her Majesty's Stationery Office, London, England, 52 pp.

10. Chumas, S. J., ed., 1975, "Directory of United States Standardization Activities," NBS Special Publication 417, U.S. National Bureau of Standards (NBS), Washington, D.C., 223 pp.

11. Coutinho, J. de S., 1972, "Quality Assurance of Automated Data Processing Systems (ADPS), Part 1: Development of Specification Requirements," Journal of Quality Technology, Vol. 4, No. 2, pp. 93-101.

12. Coutinho, J. de S., 1972, "Quality Assurance of Automated Data Processing Systems (ADPS), Part 2: Development of a Quality Assurance Program (QAP)," Journal of Quality Technology, Vol. 4, No. 3, pp. 145-155.

13. Curley, J. B., 1976, "Quality Assurance and Test Methodology," ASTM Standardization News, Vol. 4, No. 9, pp. 16-18.

14. Dehlinger, P., E. F. Chiburis, and J. J. Dowling, 1972, "Pacific SEAMAP 1961-70 Data Evaluation Summary: Bathymetry, Magnetics and Gravity," NOAA Technical Report NOS 52, U.S. National Oceanic and Atmospheric Administration (NOAA), National Ocean Survey (NOS), Rockville, Maryland, 10 pp.

15. Fairless, Billy, 1977, "Quality Assurance Practices and Procedures," EPA-905/4-77-004, U.S. Environmental Protection Agency, Region 5, Surveillance and Analysis Division, Central Regional Laboratory, Chicago, Illinois, 293 pp.
16. Farland, R. J., 1973, "Standards and Techniques for Quality Oceanographic Instrument Measurements," Means of Acquisition and Communication of Ocean Data, Vol. 2: Surface, Subsurface and Upper-Air Observations, World Meteorological Organization, Publication No. 350, Marine Science Affairs, Report No. 7, World Meteorological Organization, Geneva, Switzerland, pp. 82-95.

17. Goldberg, E. G., 1976, Strategies for Marine Pollution Monitoring, John Wiley and Sons, Inc., New York, New York, 310 pp.

18. Gordan, L. I., G. C. Anderson, and W. D. Nowlin, 1977, "Results of an Intercalibration at Sea of Hydrographic and Chemical Observations and Standards Aboard the USSR Ship PROFESSOR VIESE and the U.S. Research Vessel THOMAS G. THOMPSON During FDRAKE 76," OSU Reference No. 77-7, International Southern Ocean Studies Technical Report, National Science Foundation, Office for the International Decade of Ocean Exploration, Washington, D.C., 23 pp.

19. Green, A.C., and R. Naegele, 1977, "Development of a System for Conducting Interlaboratory Tests for Water Quality and Effluent Measurements," EPA-600/4-77-031, FMC Corp., Advanced Products Division for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Monitoring and Support Laboratory, Cincinnati, Ohio, 130 pp.

20. Guinn, Charles, M. Kennedy, W. Toner, E. Hollander, J. Getzels, and K. Dueker, 1975, "Guidebook on Information/Data Handling, Issue Papers," Technical Supporting Report E, U.S. Department of the Interior, Office of Land Use and Water Planning, and Geological Survey, Resource and Land Investigations Program, Reston, Virginia, 155 pp.

21. Hammerschlag, A., 1975, "Planning for the Development of High Quality Instruments by Scientific Institutes," Proceedings of the Society of Photo-Optical Instrumentation Engineers, Vol. 73, pp. 91-110.

22. Harr, P. I., 1969, "Value of Engineering Techniques: A Way of Managing Valuable Measurement," Making Valuable Measurements, H. L. Mason, ed., NBS Special Publication 318, U.S. National Bureau of Standards (NBS), Washington, D.C., pp. 161-164.

23. Hayes, G.E., 1977, Quality Assurance: Management and Technology, rev. ed., Charger Productions, Inc., Capistrano Beach, California, 452 pp.

24. Hibar, J. R., 1976, "Regional Air Pollution Study: Quality Assurance Audits," EPA-600/4-76-032, Rockwell International, Air Monitoring Center for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Sciences Research Laboratory, Research Triangle Park, North Carolina, 35 pp.

25. Juran, J. M., F. M. Gryna, Jr., and R. S. Bingham, Jr., eds., 1974, Quality Control Handbook, 3d ed., McGraw-Hill Book Co., New York, New York.

26. Kanzelmeyer, J. H., 1977, "Quality Control for Analytical Methods," ASTM Standardization News, Vol. 5, No. 10, pp. 22-28.

27. Ku, H. H., ed., 1969, Precision Measurements and Calibration, Statistical Concepts and Procedures, NBS Special Publication 300, Vol. 1, U.S. National Bureau of Standards (NBS), Washington, D.C., 436 pp.

28. Loder, T. C., and P. M. Gilbert, 1976, "Blank and Salinity Corrections for Automated Nutrient Analysis of Estuaries and Sea Waters," UNH-SG-JR-101, New Hampshire University, Durham, New Hampshire, 29 pp.

29. Mandel, John, 1977, "Statistics and Standard Reference Materials," ASTM Standardization News, Vol. 5, No. 10, pp. 10-15.

30. Mendenhall, William, 1967, Introduction to Probability and Statistics, 2nd ed., Wadsworth Publishing Co., Inc., Belmont, California, 393 pp.

31. Miller, Irwin, and J. E. Freund, 1965, Probability and Statistics for Engineers, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 432 pp.
32. National Research Council, Assembly of Engineering, Marine Board, Review Panel on the National Oceanographic Instrumentation Center, 1975, "An Appraisal of the National Oceanographic Instrumentation Program," National Academy of Sciences, Washington, D.C., 39 pp.

33. Natrella, M. G., 1963, "Experimental Statistics," NBS Handbook 91, U.S. National Bureau of Standards, Washington, D.C., repr. 1966.

34. Ocean Data Systems, Inc., 1976, "Development of Specifications for Surface and Subsurface Oceanic Environmental Data, Final Report," Ocean Data Systems, Inc., for ECON, Inc., Princeton, New Jersey, 113 pp.

35. Pabst, W. R., Jr., 1963, "MIL-STD-105D," Industrial Quality Control, Vol. 20, No. 5, pp. 4-9.

36. Pijanowski, B. S., 1973, "Comparative Evaluation of In Situ Water Quality Sensors," Second Joint Conference on Sensing of Environmental Pollutants, Instrument Society of America, Pittsburgh, Pennsylvania, pp. 95-107.

37. Pojasek, R. B., 1977, "Quality Assurance Program Restores Confidence in Data," Water and Sewage Works, Reference Issue No. 1977, pp. R110-R111.

38. Polanin, B. P., R. New, and M. E. Ringenbach, 1977, "How Good Are Your Marine Data?," Oceans '77, 3rd Annual Combined Conference of the Marine Technology Society and the Institute of Electrical and Electronics Engineers, Council on Ocean Engineering, Los Angeles, Vol. 2, pp. 30D-1, 30D-2.

39. Smith, Franklin, and James Buchanan, 1976, "IERL-RTP Data Quality Manual," EPA-600/2-76-159, Research Triangle Institute for U.S. Environmental Protection Agency (EPA), Office of Research and Development, Washington, D.C., 88 pp.

40. Tajima, G. K., 1969, "Factors Affecting Ocean Data Quality," Preprint No. 69-632, Annual Conference and Exhibit, Selected Preprints, Instrument Society of America, Pittsburgh, Pennsylvania, 7 pp.

41. Taylor, J. K., 1976, "Evaluation of Data Obtained by Reference Methods," Calibration in Air Monitoring, ASTM Special Technical Publication 598, American Society for Testing and Materials, Philadelphia, Pennsylvania, pp. 156-163.

42. U.S. Congress, House of Representatives, Committee on Science and Technology, Subcommittee on the Environment and the Atmosphere, and Subcommittee on Special Studies, Investigations and Oversight, 1976, The Environmental Protection Agency's Research Program with Primary Emphasis on the Community Health and Environmental Surveillance System (CHESS): An Investigative Report, Committee Print, 94th Congress, 2nd Session, 110 pp.

43. U.S. Department of Defense (DOD), 1963, "Military Standard: Sampling Procedures and Tables for Inspection by Attributes," MIL-STD-105D, U.S. Department of Defense, Washington, D.C., 64 pp.

44. U.S. Department of Defense (DOD), Office of the Assistant Secretary of Defense, Installations and Logistics Division, 1969, "Military Standard: Quality Assurance Terms and Definitions," MIL-STD-109B, U.S. Department of Defense, Washington, D.C., 8 pp.

45. U.S. Environmental Protection Agency (EPA), National Environmental Research Center, Analytical Quality Control Laboratory, 1972, Handbook for Analytical Quality Control in Water and Wastewater Laboratories, U.S. Environmental Protection Agency, Cincinnati, Ohio.

46. U.S. Environmental Protection Agency (EPA), Office of Pesticide Programs, 1976, "Status Report and Action Guide," EPA-540/9-77-012, U.S. Environmental Protection Agency, Washington, D.C., 69 pp.
47. U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Monitoring and Support Laboratory, 1976, "Quality Assurance Handbook for Air Pollution Measurement Systems," Vol. 1: "Principles," EPA-660/9-76-005, Environmental Monitoring and Support Laboratory, Research Triangle Park, North Carolina.

48. U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Monitoring and Support Laboratory, 1977, "Quality Assurance Handbook for Air Pollution Measurement Systems," Vol. 2: "Ambient Air Specific Method," EPA-660/4-77-027a, Environmental Monitoring and Support Laboratory, Research Triangle Park, North Carolina.

49. U.S. Environmental Protection Agency (EPA), Office of Research and Development, Environmental Monitoring and Support Laboratory, Monitoring Systems Research and Development Division, Quality Assurance Branch, 1977, "Environmental Radioactivity Laboratory Intercomparison Studies Program, FY 1977," EPA-600/4-77-001, Environmental Monitoring and Support Laboratory, Las Vegas, Nevada, 28 pp.

50. U.S. Environmental Protection Agency (EPA), Office of Water Planning and Standards, Monitoring and Data Support Division, n.d., "Minimal Requirement for a Water Quality Assurance Program," EPA-440/9-75-010, Environmental Monitoring and Support Laboratory, Cincinnati, Ohio.

51. U.S. National Aeronautics and Space Administration (NASA), 1971, "Quality Assurance Provisions for Government Agencies," NHB 5300.4 (2B), Reliability and Quality Assurance Publication Series, Washington, D.C., 68 pp.

52. U.S. National Oceanic and Atmospheric Administration (NOAA), n.d., Ocean Dumping Program, Data Management Plan, Rockville, Maryland.

53. U.S. National Oceanic and Atmospheric Administration (NOAA), National Oceanographic Instrumentation Center (NOIC), 1974, NOIC Report of Calibration, Rockville, Maryland, 9 pp.

54. U.S. National Oceanic and Atmospheric Administration (NOAA), National Space Technology Laboratories, NOAA Data Buoy Office, Data Quality Division, 1976, Data Quality Statement for NOAA Data Buoy Office, Moored Data Buoys, Bay St. Louis, Mississippi.

55. U.S. National Oceanic and Atmospheric Administration (NOAA), Outer Continental Shelf (OCS) Task Team, 1976, The Environmental Quality Monitoring Report, Washington, D.C., 56 pp.

56. U.S. National Oceanic and Atmospheric Administration (NOAA), Pacific Marine Center, 1976, PMC Oceanographic Procedures Manual (with changes), Pacific Marine Center.

57. U.S. National Oceanographic Data Center (NODC), 1969, "Guide to the Submission of Data, General Guidelines," Publication No. B-1, Washington, D.C., 18 pp.

58. U.S. Naval Oceanographic Office, 1975, "Instruction Manual for Obtaining Oceanographic Data," Publication No. 607, 3rd ed., Change 1, repr., Washington, D.C.

59. Weiss, H. W., 1976, "Specifying Quality Assurance for NASA Programs," ASTM Standardization News, Vol. 4, No. 9, pp. 8-15.
Establishing precision and accuracy ot laboratory system 4 Standanzation of testing 5. Interlaboratory and intralaboratory comparisons 6. Quality control and system audit 7. Functional checks 8. Data transfer check 9 Quality control check samples LEVEL I DQA PL/ Data Reduction and Analysis 1. Personnel training 2. Data processing 3. Reduction checks 4. Standard program computer analysis checks 5 Quality control and system audit 6. Support data collection and storage Data Validation 1. Review of support data 2. Statistical analysis of data 3. Data evaluation 4. Preparation of uncertainty statement for all data Audit 1. Review of DQA plan 2. Audit of actual operations Satisfactory'' T Yes I Feedback required Data Reporting 1. Preparation of report 2. Archiving J Legend Indicates elements of DQA plan Indicates functions for a specific element or Preparation of DQA Plan ! 55. U.S. National Oceanic and Atmospheric Administration (NOAA), Outer Continental Shelf (OCS Task Team, 1976, The Environmental Quality Monitoring Report, Washington, D.C., 56 pp 56. U.S. National Oceanic and Atmospheric Administration (NOAA), Pacific Marine Center, 1976, 1 PMC Oceanographic Procedures Manual, (with changes), Pacific Marine Center. 57. U.S. National Oceanographic Data Center (NODC), 1969, "Guide to the Submission of Data, General Guidelines," Publication No. B-l, Washington, D.C., 18 pp. 58. U.S. Naval Oceanographic Office, 1975, "Instruction Manual for Obtaining Oceanographic Data," Publication No. 607, 3rd ed., Change 1, repr., Washington, D.C. 59. Weiss, H. W., 1976, "Specifying Quality Assurance for NASA Programs," ASTM Standardization News, Vol. 4, No. 9, pp. 8-15. $U.S. GOVERNMENT PRINTING OFFICE: 1 98 0- 3 1 6 " 7 4 5 / 6 1 02 C-6 LEVEL I DQA PLAN Planning Elements Preparation of Program Objective of program - What data is required - What is end use of data - Determine level(s) - Total allowable error objective and estimate of error sources Audit requirements Expected operational conditions Resource allocation I Data Acquistion System Design Methodology and system considerations Availability of personnel to operate selected equipment Suitability of system Reliability Logistics Retrievability Measurement performance Availability of instrument and cost (existing inventory items, or off-the-shelf equipment or new development System Certification Existing In-House Items Availability Evaluation Checkout (requisitioning) Recommissioning Off-The-Shelf Equipment To Be Procured Availability Evaluation Source inspection Purchase inspection Performance and reliability tests Procurement quality control New Development Design requirements & specifications Inspection of components Instrument oper- ating procedures Performance test reliability Calibration of all components A. Personnel training and qualification B. Bench calibration and test evaluation C. Intercompansons D Field Testing E. Acceptance testing F. Documentation (manuals and handbooks) G Configuration control H. Backup equipment and spare parts Data Processing Planning 1 Verification of computer programs 2. Data reduction and validation planning 3. Personnel training 4. Data collection and format 5. Data checking 6. Data transfer standards 7. Support data collection and storage Total Error Budget Verification 1. Random error sources and limits of random errors of various components (deterministic) 2. Systematic error sources and limits of systematic errors of various components (estimated) 3. 
Total system error Acceptable limits 9 Feedback if required LjJ Audit Preopct ations Stage H Operations Elements Field Operations Personnel training Equipment transportation Standardization of operating procedures Site selection Sampling considerations Predeployment checking and calibrations Measurement comparisons Ongoing checking and calibrations Postdeployment checking and calibrations Quality control and system audit Data and sample collection, recording, protection and transmission Chain of custody Analytical Laboratory Operations 1. Personnel training 2. Calibration 3. Establishing precision and accuracy of laboratory system 4. Standanzation of testing 5. Interlaboralory and intralaboratory comparisons 6 Quality control and system audit _^_^__^____^^__ 7. Functional checks 8 Data transfer check 9 Quality control check samples Data Reduction and Analysis 1. Personnel training 2. Data processing 3. Reduction checks 4 Standard program computer analysis checks 5. Quality control and system audit 6. Support data collection and storage Data Validation 1. Review of support data 2. Statistical analysis of data 3 Data evaluation _^___^^^^_^_ 4. Preparation of uncerlainty statement for all data Review of DQA plan Audit of actual operations Satisfactory? TTT Data Reporting 1. Preparation of report 2. Archiving Legend * Indicates elements of DQA plan Indicates functions for a specific Figure 3-1. — Function Diagram for Preparation of DQA Plan LEVEL II DQA PLAN Operations Elements Field Operations 1. Personnel training 2. Equipment transportation 3. Standardization of operating procedures 4. Site selection 5. Sampling considerations 6. Predeployment checking and calibrations 7. Measurement comparisons 8. Ongoing checking and calibrations 9. Postdeployment checking and calibrations 10. Quality control and system audit 11. Data and sample collection, recording, protection and transmission 12. Chain of custody Analytical Laboratory Operations Personnel training Calibration ,-[:;■ Establishing precision and accuracy of laboratory system Standarlzation of testing Intertaboratory and intralaboratory comparisons Quality control and system audit Functional checks Data transfer check Quality control check samples Data Reduction and Analysis 1. Personnel training 2. Data processing 3. Reduction checks 4. Standard program computer analysis checks 5. Quality control and system audit 6. Support data collection and storage I Data Validation 1. Review of support data 2. Statistical analysis of data 3. Data evaluation 4. Preparation of uncertainty statement for all data Audit 1. Review of DQA plan 2. Audit of actual operations Satisfactory'' I Y « s I Feedback it required Data Reporting 1. Preparation of report 2. Archiving J Legend Indicates elements of DQA plan Indicates (unctions for a specific element aration of DQA Plan Planning Elements Preparation of Program Obleotive of program • What data is required - What is end use of data - Determine levells) - Total allowable error objective and estimate of error sources Audit requirements peeled operational conditions source allocation Data Acquistion System Design 1. Methodology and system considerations 2. Availability of personnel to operate selected equipment 3. Suitability of system 4. Reliability 5. Logistics 6. Retrievability 7. 
Figure 3-2. — Function diagram for preparation of DQA plan (Level II DQA plan). [Flow chart with the same planning and operations elements as figure 3-1, showing the functions to be applied under a Level II plan.]
Figure 3-3. — Function diagram for preparation of DQA plan (Level III DQA plan). [Flow chart with the same planning and operations elements as figure 3-1, showing the functions to be applied under a Level III plan.]
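All three function diagrams share the same gating structure: the operations elements end in an audit whose "Satisfactory?" decision either releases the results for data reporting and archiving or routes corrective action back to the responsible element. The sketch below illustrates that loop only; the element names follow the operations side of the diagrams, while the audit and corrective-action details are hypothetical placeholders.

    # Element names follow the operations side of the diagrams; the audit
    # and corrective-action details are hypothetical placeholders.
    ELEMENTS = [
        "field operations",
        "analytical laboratory operations",
        "data reduction and analysis",
        "data validation",
    ]

    def audit(results):
        """Return the elements whose audited results are unsatisfactory."""
        return [name for name, satisfactory in results.items() if not satisfactory]

    def run_dqa_cycle(results, max_feedback_cycles=3):
        """Audit, feed back corrective action if required, then re-audit."""
        for _ in range(max_feedback_cycles):
            deficient = audit(results)
            if not deficient:
                return "proceed to data reporting and archiving"
            for name in deficient:
                # Placeholder for actual corrective action on that element.
                results[name] = True
        return "escalate: audit still unsatisfactory"

    # Example: data validation initially fails its audit and is corrected.
    print(run_dqa_cycle({name: name != "data validation" for name in ELEMENTS}))

Bounding the number of feedback cycles is an implementation choice, not something the diagrams specify; in the diagrams an unsatisfactory audit simply returns the issue to the responsible element until it is resolved.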