OPEN SESAME

Proceedings of the Opening Meeting at Boulder, Colorado, September 4-6, 1974

SEVERE ENVIRONMENTAL STORMS AND MESOSCALE EXPERIMENT

D. K. Lilly, Editor

Prepared by the National Oceanic and Atmospheric Administration, Environmental Research Laboratories, Boulder, CO 80302, for the proposed Severe Environmental Storms and Mesoscale Experiment (SESAME) Project

JUNE 1975

Foreword

The SESAME "opening" meeting was held to acquaint the meteorological research community with the existence and scope of a planned research project, the Severe Environmental Storms and Mesoscale Experiment (SESAME), and to solicit the community's advice and participation. About 130 scientists and science administrators attended the meeting. The papers presented, most of which were invited, as well as the principal discussions, were recorded on audio/video tape by the staff of the Publications Services office of the Environmental Research Laboratories, NOAA, assisted by some staff members of the National Center for Atmospheric Research. A typed transcript was then prepared, edited, and transmitted to the speakers for approval and revision.

Because of the deemed desirability of the earliest possible dissemination of these proceedings, several short-cuts have been taken in their preparation. Thus the reader may find a larger number of typographical errors than is usually acceptable in a journal article, and the speakers, and especially those involved in the principal discussions, may find occasional misquotes or incorrect attributions. For all of these the editor takes full responsibility and hopes that the early availability of this interesting material outweighs the errors.

CONTENTS

FOREWORD

INTRODUCTION

NOAA OBSERVATIONAL PROGRAM
SESAME Observational Plans - R. Alberty
ERL Wave Propagation Laboratory Observational Capability - F. F. Hall
NSSL Facilities - K. E. Wilk

OTHER AGENCY INTEREST
NCAR
NCAR Interest in the SESAME Program - D. K. Lilly
UCAR/NCAR Interest in SESAME - F. Bretherton
NSF
NSF Interests in SESAME - F. White
NASA
NASA Severe Storms Program - E. R. Kreins
AEC
Comments on Possible AEC Participation in SESAME - P. Frenzen

THE SOUNDING PROBLEM
Introduction
Vertical Sounding Problems: A Survey - J. C. Fankhauser
Possible Solutions
A Proposed New Approach to Vertical Soundings for SESAME - V. E. Lally
Control Data Corporation's METRAC Positioning System - K. S. Gage and W. H. Jasperson
NEXAIR Upper Air Sounding System - W. E. McGovern
The Role of Satellite Soundings in SESAME - D. S. Johnson
Radiation Sounding Techniques from Aircraft - P. Kuhn

DATA MANAGEMENT
Data Management Problems of Experimental Design - J. Z. Holland
SESAME Data Management Problems - A. Eddy, S. Williams
Comparison of Radiosonde Mesoscale Wind Shears and Thermal Winds - E. P. Avara

SEVERE STORM ENVIRONMENT
The Generation and Triggering of Severe Convective Storms by Large-Scale Motions - E. F. Danielsen
Severe Storm Observations in the Near Environment - S. L. Barnes
The Environment Near the Dryline - J. T. Schaefer

BOUNDARY LAYER
A Survey of Fine-Mesh Modeling Techniques - R. Pielke
Modeling the Influence of the Mesoscale Planetary Boundary Layer - L. Mahrt
Geostationary Satellite Imagery: What We Can Do With It and How It Can Fit Into SESAME - J. F. Purdom
The Development of Boundary-Layer Turbulence Models for Use in Studying the Severe Storm Environment - J. Deardorff
Current Development of a 3-D Mesoscale Model at GFDL - B. B. Ross

MESOSCALE MODELING
Some Comments on Fine Mesh Modeling - D. K. Lilly
Prediction Models Using Isentropic Coordinates - R. Bleck
Regional and Mesoscale Modeling at Drexel - C. Kreitzberg
Some Mesoscale Modeling Activities at Penn State - R. A. Anthes

CLOUD AND SEVERE STORM MODELING
Modeling of Convective Storms - W. R. Cotton
Numerical Accuracy of Cloud Models - J. Klemp
Observations Needed to Test Numerical Models of Thunderstorms - C. Hane
Ongoing Research in Three-Dimensional Cloud Modeling at the University of Wisconsin - R. E. Schlesinger
Cloud and Severe Storm Modeling Studies at the University of Illinois - Y. Ogura

RECENT SEVERE STORM AND TORNADO OBSERVATIONS
A Review: Recent Severe Storm and Tornado Observation - J. McCarthy
A Technique to Supplement Dual Doppler Radar Detection Volume with Aircraft Delivery of Three-Dimensionally Distributed Chaff - J. McCarthy
Doppler Radar Results from the National Hail Research Experiment - D. Atlas
Radar Observations at NSSL - E. Kessler
Direct Microscale Tornado Observations and Future Requirements
A Survey of the April 3-4, 1974 Tornado Outbreak - E. Pearl
A Photogrammetric Study of the Parker, Indiana Tornado - C. R. Church
A Dual-Doppler Radar Method for the Determination of Wind Velocities Within Local Severe Storm Systems - L. J. Miller

DISSENTS AND ADDITIONS
WPL Plans for Observing Electromagnetic Signals - W. L. Taylor
Predicting the Movement of Severe Convective Storms - D. J. Raymond
Mesoscale Model Results for the Tornado Outbreak April 3-4, 1974 - D. Paine

MANAGEMENT PLAN AND UNIVERSITY PARTICIPATION - PANEL DISCUSSION
D. Lilly, E. Kessler, F. Bretherton, W. Hess, W. Melahn, F. White, D. Atlas, A. Dennis

SESAME QUESTIONNAIRE

INTRODUCTION

W. Hess, R. Alberty, D. Lilly

I want to say first, welcome, glad to have you here in sunny Boulder. I want to say a bit about the history of this project before we begin talking about it in detail. SESAME has been known in two or three previous incarnations under the name MESOMEX, and it died for a variety of reasons. One thing that we are trying to do this time to prevent it from dying is to keep it fairly strongly focused, moving in one direction. That direction is severe storms. When you talk about mesoscale meteorology, if you have three people in a room you get six opinions. We had that earlier and we decided we simply couldn't tolerate it, so it is now a more strongly directed program. We hope that most of you will be interested enough so that you will join with us in this version of mesoscale meteorological work.

But let me go back a bit and talk about the things that have led up to this present meeting. More than a year ago, Dave Johnson, the director of the National Environmental Satellite Service, George Cressman, the director of the National Weather Service, and I sat down to try to see if we shouldn't push to get a project of this type put together. Over the last year we have had a series of meetings to lead to the place we are today.
The three of us, Dave, George and myself, constitute a troika to try to determine what we wanted to see in the way of mesoscale meteorological experimentation. And with the troika directing two committees, one chaired by Doug Lilly, who is your MC today, and one chaired by Ron Alberty, who is one of the senior participants from NSSL, the present plan has been developed. This went through two or three formative stages. We had a meeting at Joe Smagorinsky's laboratory (GFDL) six or eight months ago where a group of NOAA people met to argue what the program ought to look like, what kind of theoretical activity ought to be undertaken, and what type of field experiments should be started. That led, first, to the formation of the two committees and, through those two committees, to the development of the draft project development plan, which most of you have seen.

The draft project development plan was used a month or so ago in a briefing for Dr. White. At that briefing Dr. White said, yes, we want to see a project like this go, and at that time he said he would hold SESAME in the 76 budget. As most of you in this room know, there is a certain belt tightening in the federal establishment now, and SESAME is no longer in the 76 budget, but is in the FY 77 budget with a high priority. If all goes well in the budgetary processes for FY 77, we can expect to start with this project. We are now delayed at least one year, but to balance that let me say that Dr. White has personally said very strongly that he wants to see a project like this go. He wants to see it go not as a NOAA project but as an interagency, national project that involves NSF, NCAR, NASA, AEC, and a wide variety of universities. There was a very strong statement from Dr. White to this effect, so don't be too discouraged. We do have a one year delay in getting off the ground, but that may in fact be a good idea, because the project at its present stage is still relatively unclear. The details of what we want to do in a field experiment have really not been thought out yet. I'm sure that many of you will bring new ideas into the picture and that the project will be better for those new ideas.

Ok, enough history. The troika has been watching and catalyzing and trying to see something come out which we feel will be important and useful severe storm and mesoscale work. So I turn into a spectator and Doug Lilly takes over to run the show.

Doug Lilly - Let me try to restate the scientific objectives of SESAME in a slightly different way than they have been stated in the documents before. I'm not sure that everybody will agree with it but we'll try it out.

1. Determine the interactions of convective storms with larger scale environments and with the boundary layer, that is, to determine quantitatively and comparatively the effects of various interactions that we know intuitively or on the basis of semi-quantitative evidence to be present.

2. To develop simulation models describing these interactions and experimental prediction models aimed at determining the location, time and intensities of severe storm arrays from initial states.

3. To develop and test new techniques of observation and analysis suitable for providing data to such models.
Now after this brief statement of some scientific objectives, let me state a sort of political objective of this meeting, which is to try to get a degree of consensus among not everyone, but most people who have scientific interests and activities related to severe storms and mesoscale meteorology, to either join into this program (when it's ready to be joined into) or at worst exercise benign neglect. Actually, even benign neglect tends to cut things off of budgets, but we want to hear out the objections, the statements that we are looking at some of the wrong things or not at some of the right things. Some of these objections may eventually be overruled, but they will have been heard by interested colleagues and subjected to some form of peer review. Speaking as an employee of NCAR and a half-time consultant to NOAA for the SESAME project, I think that NOAA is ready to compromise to some degree on the design of this program and on the management of it, within the budget levels which it can command. The kind of compromises that will be made are in part the result of your suggestions, demands, complaints, and so forth.

The program this week is a fairly full one. We didn't set this up as a workshop meeting with smaller groups producing reports. Perhaps the time will come for that later. Most of the papers will not be, I hope, of the nature of research papers at a national meeting. It is intended that they have more of a review nature, with an indication of how the speakers feel their work may relate to the objectives of the SESAME project. This morning there will be a session on, first, a description of the observational program as it is now seen from NOAA, a large part of which is already in the draft project development plan. Then there will be some statements from representatives of other agencies that have expressed an interest; these may represent scientific programs and plans. The rest of the program up to Friday afternoon is mainly on scientific subjects. On Friday afternoon we plan on having another quasi-political session for the purpose of discussing how this program ought to be managed and what are the best methods for encouraging university participation.

R. Alberty - The time allotted for this subset of the program is rather limited, so by way of introduction I'll only mention that the plan for this portion of the session is to give a rather shallow exposure to NOAA ERL observational capabilities. Some of these presentations are going to be rather old hat to some of you. One important purpose of this meeting, however, is to inform those who have not been involved in the SESAME growing pains that some of us have gone through. After each of our speakers has given some of the highlights of their particular field of expertise, I'll say a few words about how these capabilities fit into the current SESAME observational plans, and at the end of that period we would invite your comments on any of the materials.

There are three speakers on the program today. They are Byron Phillips from the Office of Weather Modification here in Boulder, Freeman Hall from the Wave Propagation Laboratory, also here in Boulder, and Mr. Ken Wilk from the National Severe Storms Laboratory. Byron Phillips will speak on the capabilities of NOAA aircraft, Freeman Hall is going to talk about atmospheric remote probes, and Ken Wilk will speak on the experience and observational capabilities that have been developed at NSSL.

SESAME OBSERVATIONAL PLANS

R. Alberty
Consideration of the observational capabilities discussed by Byron Phillips, Freeman Hall, and Ken Wilk, along with consideration of experience and research indications for station spacing that Dr. Barnes and NSSL have contributed, led to the proposed observational network shown in the figures that follow. The network uses a concept of grid nesting, reflecting an attempt to link the atmospheric motion scales of primary interest. For those who may not have read the SESAME PDP or heard it much discussed, SESAME is concerned with length scales that have been categorized (most of the impetus for this came from the meeting at GFDL). The meso-α scale brackets from 250 to 2500 km, the meso-β scale from 25 to 250 km, and the meso-γ scale lengths are categorized as 2.5 to 25 km. So you see, there is a decade gradation (α to β to γ). One basic hypothesis of SESAME is that severe storm systems are phenomena of the meso-β scale, comprised of individual storms more of the meso-γ scale and thought to be organized or initiated by processes in the meso-α domain. From the viewpoint of predictability most of us hope that this hypothesis has the ring of truth.

Fig. 1 illustrates the location of National Weather Service sounding sites surrounding the proposed meso-β observational network. It's quite likely that we'd request special observations from these sites, at least 06 and 18Z soundings and probably others. National Weather Service personnel and budgets will, if the project goes as currently envisioned, be impacted by the necessity of additional soundings.

[Figure 1. National Weather Service upper air network, January 1, 1974.]

Fig. 2 shows essentially the same NWS stations. This figure also includes the proposed additional SESAME sites, marked by x's enclosed with a circle (20 of them).

[Figure 2. Proposed upper air network for SESAME: NWS upper air network plus special SESAME upper air sites.]

Fig. 3 is an expanded view of the proposed meso-β network (the checkered area in Fig. 2). The proposal is currently for uniform grid spacing with surface stations located at the intersections of all lines. Also shown are a few of the sounding sites and their location relative to the grid. Cities and towns you see are just reference points. The larger stippled area is the 20 degree area covered by the NSSL dual Doppler radars. Ken Wilk mentioned the coverage of Oklahoma City to the northeast and coverage to the southwest. The smaller stippled area, which we propose to cover with WPL 3 cm radars, is a 25 degree intersection area. Carnegie is located at the northwest intersection and Fletcher, Oklahoma at the southeast intersection. The clear square areas inside the dual Doppler areas represent meso-γ grid boxes. You saw a few moments ago (from Freeman Hall) the proposed meso-γ instrumentation and I will show you just a plan view in a moment. The idea currently is that we instrument only two of these at a time, principally because we can't afford to instrument three and we want to look at two different phenomena. Current plans call for two observational periods separated by about 20 months.

[Figure 3. Expanded view of the proposed meso-β network, showing the WPL 3-cm Doppler 25 degree area and the NSSL 10-cm Doppler 20 degree area.]
During the first observational period we propose to have a meso-γ area at each of the southwesterly locations (labeled meso-γ 1) and, during the second observational period, to leave the central one fixed and move the southwesternmost one to the northeast (labeled meso-γ 2). The southwestern area is very rural and the northeast area is highly urbanized (around Oklahoma City).

Fig. 4 illustrates the proposed meso-γ instrumentation which would be in the locations I just mentioned. We do not currently plan to have the pulsed lidar shown, for a variety of reasons. You have heard the laser transverse wind system discussed. As Ken Wilk mentioned, NSSL and WPL are going to test that next spring. An important study concerns the validity of taking a line integral using discrete points to determine surface divergence or convergence.

That covers my remarks and we have about 7 or 8 minutes remaining. I invite questions or comments that any of you have. You can direct them to the other speakers if you like, or you can direct them to me and I will refer to the speakers if I don't know the answers.

[Figure 4. Proposed meso-γ instrumentation (plan view): surface stations, pulsed lidar, FM-CW radar, laser transverse wind transmitter/receiver pairs, and acoustic sounders arranged around the square.]
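[Editor's note: as an illustration of the line-integral question raised above, the following minimal sketch estimates the area-mean divergence of a meso-γ square from winds at discrete perimeter points, using the divergence theorem: mean divergence = (1/A) times the closed line integral of v.n dl. The station layout and wind values are hypothetical, not part of the SESAME plan.]

    import numpy as np

    L = 25.0e3  # side of a meso-gamma square (m)

    # Hypothetical perimeter stations: (position, outward normal, (u, v) wind in m/s)
    stations = [
        ((L / 4, 0.0),     (0.0, -1.0), (3.0, 1.0)),   # south side
        ((3 * L / 4, 0.0), (0.0, -1.0), (4.0, 0.5)),
        ((L, L / 4),       (1.0, 0.0),  (3.5, 1.5)),   # east side
        ((L, 3 * L / 4),   (1.0, 0.0),  (3.0, 2.0)),
        ((3 * L / 4, L),   (0.0, 1.0),  (2.5, 1.0)),   # north side
        ((L / 4, L),       (0.0, 1.0),  (2.0, 1.5)),
        ((0.0, 3 * L / 4), (-1.0, 0.0), (3.0, 1.0)),   # west side
        ((0.0, L / 4),     (-1.0, 0.0), (3.5, 0.5)),
    ]

    dl = L / 2.0    # perimeter length represented by each station (two per side)
    outflow = sum(np.dot(wind, normal) * dl for _, normal, wind in stations)
    divergence = outflow / (L * L)
    print(f"area-mean divergence ~ {divergence:.1e} per second")  # ~2e-05 here

[With only a few points per side, the estimate is sensitive to how much perimeter each point is assumed to represent; that sensitivity is presumably the substance of the validity study Alberty mentions.]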
DISCUSSION

D. Raymond: I was wondering if Freeman Hall is aware of recent work done at New Mexico Tech on production of microwaves by small sparks or arcs from charged droplets. This work was done by Gary Sower, a graduate student of Mark Brooks, and the calculations seem to show that if you get microwave production, the intensity, which is comparable to the black body thermal microwave production, would make the determination of the liquid water content by microwave radiometers somewhat doubtful.

Hall: The microwave determination of liquid water doesn't work well when there are precipitation size droplets in the clouds. It's only effective when the clouds are in their formative stages.

Raymond: Even if you have electrification at this point you presumably would still have this problem.

Hall: That could be. We've only worked so far in non-precipitating clouds of thickness no more than two kilometers. We are getting rather good agreement in temperature, and better yet, of course, with water vapor. There we have excellent agreement.

Doug Paine: On the 2500 km scale, centering the SESAME grid over Oklahoma, it appears that we have about a 1200 km margin to the west of that grid, or possibly less than that. I would suggest, for the more likely storm generation mechanisms in the SESAME grid area, you might want to widen that margin to more like 2000 or 2500 km to the west. In the April 3-4 tornado outbreak, from taking an extensive look at the low-level jet present at 00 GMT over northwest Alabama and associated with tornadoes breaking out over northwest Alabama, the trajectories indicate that the bulk of the kinetic energy of the low-level jet was derived 12 hours earlier over Midland, Texas near 500 mb. This is a 2000-2500 km west to east track of air coming into a specific mesoscale storm area (Midland, Texas to northwest Alabama). I just wanted to key in to appreciating the importance of the 2500 km length scale. You are not doing fair justice with that scale unless you have that margin to the west of the mesoscale network.

Alberty: You are not suggesting, I hope, that we add another 40 or so rawinsondes out to the west.

Paine: I guess what I am suggesting is to increase the current synoptic scale rawinsonde network observations around the specific mesoscale grid. If there was any possibility of increasing the frequency of the synoptic scale soundings, I am arguing for the point of increasing it with a larger margin to the west and south of your mesonet.

Alberty: That's a good point. I think that would be a NWS function and we would certainly encourage it if it could be done. Any other questions or comments?

Carl Kreitzberg: I wonder if Freeman Hall would comment on the accuracy with which he hopes to be able to measure the mean precipitable water in the lowest 3 km with the microwave radiometer.

Hall: The radiometer that we now have, as Martin Decker told you just yesterday, is certainly within 10 percent of values. That's all we have to compare it with. For example, with 21 mm measured with the rawinsonde, we get 19.6 mm. So 10 percent is a good figure right now.

Kreitzberg: No particular height limitations?

Hall: This is a line integral vertically above the radiometer site. One doesn't know the height distribution.

Alberty: There is one other point that I had hoped would generate reaction or comments. This has to do with the surface network. Those grid boxes are 25 km on a side and there has been some suggestion that uniform grid spacing is a very dangerous thing to do and that random spacing is preferable. I wish somebody could prove that, because it would be much easier (in site selection) to have a non-uniform grid. It is a very difficult problem, as Ken Wilk indicated earlier, to put up surface stations on grid points anywhere in the country. You think of Oklahoma as "flat as a pancake", but it really isn't, and a little hill 15 or 20 feet high in the up-stream direction can bias the data very badly.

Eddy: What sort of logic went into the particular network configuration that you showed?

Alberty: We looked at Dr. Barnes' studies concerning the desired spacing for individual thunderstorm observations, in which he found that the optimum spacing is something like 15 km. We extrapolated that and said that we are looking at phenomena approximately twice as large as an individual thunderstorm, so 25 km seemed to get the nod. We discussed it among members of the SESAME committee and decided that that was the best choice we could make at the time. If there is good scientific reason for doing something else we would be glad to hear it.

Dave Atlas: In listening to the presentations this morning one gets the impression that we are designing the program rear end first. That is, we are using instruments that are available, and I do not see yet exactly what those instruments are gainfully doing. Now I know this is not entirely the case. I think that it would be a very good idea to set down some clear objectives and then to direct the instruments at those scientific objectives.

Alberty: There was considerable thought that went into that process in our committee. Several people are here that are on that committee: Jim Purdom, Mike Alaka, Freeman Hall and Wayne McGovern.

Hall: One thing we did was to list our scientific objectives, the 11 critical questions that expanded into about 27 at one time. We made a matrix to show how many measurements we could direct toward each one of those particular goals. We tried to have at least 2 overlapping or independent measurements to answer each of the questions. I can show you that matrix and that we do have that kind of a design.

Lilly: Dave (Atlas) means that not only he, but the world has to be shown. He also is aware that putting the observational program at the beginning of the meeting here today does not necessarily mean that is the order one wants to think about things. Most of the rest of this meeting is aimed at defining the scientific objectives, or discussing them, but I think we take the criticism in the spirit in which it is intended.
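[Editor's note: a small sketch relating the station spacings discussed above to the decade scale classes defined in the preceding talk. A uniform network of spacing d cannot represent wavelengths shorter than 2d (the standard sampling limit); the classification bounds are those quoted in the talk, and the 2d rule is a textbook limit, with practical analyses generally requiring 3d-4d.]

    def meso_class(wavelength_km):
        # Decade classification quoted in Alberty's talk
        if 250.0 <= wavelength_km < 2500.0:
            return "meso-alpha"
        if 25.0 <= wavelength_km < 250.0:
            return "meso-beta"
        if 2.5 <= wavelength_km < 25.0:
            return "meso-gamma"
        return "outside the classification"

    # Barnes' optimum storm spacing, the proposed grid, and the subsynoptic net
    for spacing_km in (15.0, 25.0, 100.0):
        shortest = 2.0 * spacing_km   # shortest nominally resolvable wavelength
        print(f"{spacing_km:5.0f} km spacing: waves >= {shortest:.0f} km, {meso_class(shortest)}")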
ERL Wave Propagation Laboratory Observational Capability

Freeman F. Hall, Jr.

In addition to the Doppler radars, described by K. Wilk and J. Miller, ground based remote sensing in SESAME will include other microwave, acoustic, laser, and pressure sensing devices. Three microwave radiometers will measure temperature profiles and water aloft. An FM-CW radar will monitor mixing layer depth and water vapor fluctuations as well as resolving temporal details of precipitation. Four acoustic echo sounding systems will measure wind profiles and boundary layer heat flux at each corner of the eastern meso-γ square. Low level wind divergence will be monitored by square configurations of laser transverse wind sensors. Four arrays of pressure transducers sensitive to 10^-1 microbars will detect gravity wave presence, amplitude, and velocity.

Since we are going to have a discussion on the Doppler radar later on by Ken Wilk, and J. Miller will also be speaking, I will be concentrating on the other sorts of remote sensing instrumentation that we are planning for SESAME. In the planning document we show a diagram that illustrates what we have in mind for one of these meso-γ squares. These are squares 25 km on a side. There will be 2 or 3 of them within the SESAME experiment area. Since we can't afford the dense instrumentation we would like throughout the array, we are going to concentrate remote sensing and some other in situ measurements within these smaller meso-γ squares. To begin with, the squares will lie within a region of overlapping coverage by the Doppler radars, so that we will have information on the wind fields and the hydrometeors that are occurring overhead.

Figure 1 shows the instrumentation that is planned for the eastern meso-γ square in the first phase of the experiment.

[Figure 1. Meso-γ remote sensing: acoustic echo sounders at the corners of the square, with microwave radiometer, FM-CW radar, and Doppler radar coverage.]

There will be a microwave radiometer in one corner, which will provide both temperature profiles from the ground to perhaps 5 or 6 km and also a measure of water vapor and water droplet concentration overhead. This will probably be a four-channel radiometer working in the oxygen band, the water vapor window and the water absorption band. We are already showing, as seen in Figure 2, that we can measure temperature profiles, even recovering the ground-based temperature inversion, using the 50-60 GHz band in the oxygen absorption region. We don't do so well, of course, on elevated inversions, because of the necessary expansion of the weighting functions at some distance from the ground. The point is that this kind of radiometer will nicely complement any sounding from a satellite, because with it one will obtain more detail in the temperature structure in the upper regions.

[Figure 2. Inferred temperature profile, Boulder, Colo., 0900 4/28/71, using 4 frequencies and 6 angles, with all-month statistics; temperature in K.]
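[Editor's note: a minimal sketch of the kind of linear inversion behind the oxygen-band profile retrieval described above. Ground-based brightness temperatures are, to first order, weighted vertical averages of the temperature profile, Tb_i = integral of W_i(z) T(z) dz. The exponential weighting functions here are hypothetical stand-ins that peak at the ground and broaden with height, which is consistent with the ground-based inversion being recovered well and elevated inversions poorly; real retrievals are regularized with climatological statistics, for which a simple ridge step about a first guess stands in below.]

    import numpy as np

    z = np.linspace(0.0, 5000.0, 51)     # height grid (m)
    dz = z[1] - z[0]

    # Hypothetical weighting functions, one per channel/angle combination
    scale_heights = (300.0, 600.0, 1200.0, 2400.0)
    A = np.array([np.exp(-z / h) / h for h in scale_heights]) * dz

    T_true = 288.0 - 0.0065 * z          # a smooth "true" profile to simulate Tb
    Tb = A @ T_true                      # simulated brightness temperatures

    T_guess = 283.0 - 0.0060 * z         # climatological first guess (hypothetical)
    lam = 1e-6                           # ridge parameter: 4 equations, 51 unknowns
    dT = np.linalg.solve(A.T @ A + lam * np.eye(z.size), A.T @ (Tb - A @ T_guess))
    T_retrieved = T_guess + dT           # first guess plus a smooth correction
                                         # confined to the span of the weighting functions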
In that same corner of the array we intend to place an FM-CW radar. The advantage of the FM-CW, as most of you well realize, is that one can see fluctuations in the water vapor content.

[Figure 3. FM-CW radar time-height record, 1515-1530 MST.]

Figure 3 is a time scan with the Wave Propagation Laboratory FM-CW radar during the NHRE last month. The wavy appearance that you see indicates the top of the mixing layer in a region of strong humidity gradients. The shear instability is causing refractive index fluctuations in that region. This technique should provide a very nice indication of dryline morphology. That's one of the major advantages of FM-CW. Notice that because the FM-CW responds primarily to water vapor fluctuations, you do not see the convective plume structure which is shown very well by acoustic echo sounders. We therefore propose surrounding the meso-γ array with 4 acoustic sounders. We will provide 4 of these, as opposed to the one FM-CW, in part because we think they are going to be somewhat cheaper. The prime reason for locating them there, however, is that we can do real time wind profiling.

[Figure 4. Acoustic sounder antenna configuration installed at Stapleton airport.]

Figure 4 illustrates the configuration of antennas that we recently installed at Stapleton airport for real time wind shear sensing. The sound source propagates tone bursts vertically into the atmosphere, while we listen to them bistatically at antennas B and C, filling in the near-range with the A and A' antennas. From this pulse of sound propagated vertically, we measure the Doppler shifts along a bisector of the angle between the transmitter and receiver. We then process this signal in real time with a dedicated minicomputer hooked to the system and obtain in real time wind speed and direction profiles to heights of 700 m or 1 km when turbulence exists throughout the boundary layer.

[Figure 5. Real-time wind profile display: height (ft) vs. speed.]

[Figure 6. Acoustic sounder record overlain with microbarograph trace; maximum correlation, 0300-0330 LT.]

On Figure 6 we have overlain the acoustic record of the backscatter from the atmosphere with the white trace, indicating the very fine correlation we see with these microbarograph arrays. At different times we have correlated the acoustic record with the direction and the speed of propagation of the gravity waves, and these are summarized in the tables.

In summary, we are proposing for the first time on such a scale to supplement the NSSL surface array and the aircraft measurements with a fairly extensive and overlapping series of remote probes. We think we are going to learn a lot more about the atmosphere because, as Alan Waterman of Stanford very aptly said, "We want to know all about the atmosphere, but we can't fill all the atmosphere with sensors." So we have to develop these remote sensing tools, and SESAME provides us a very fine opportunity to interact with a large scale experiment.

Josh Holland: Is the pressure array absolutely calibrated so that you can get horizontal gradients?

Hall: As they are now used they have a dc leak, so that you don't have absolute calibration, but this can be provided by having a recording microbarograph at that station. Bill Hooke here can answer that question with more accuracy than I can.

W. Hooke: The lowest frequency is a function of the size of the leak. The highest frequency is about 10 Hertz.

Hall: You can get down to about a hundredth of a Hertz, isn't that correct?

Hooke: Oh, yes. A thousandth or ten-thousandth easily.
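[Editor's note: a minimal sketch of how direction and speed of gravity wave propagation can be obtained from a microbarograph array of the kind discussed above. For a plane wave crossing the array with horizontal slowness vector s (magnitude 1/speed, pointing in the propagation direction), the arrival-time lag between two sensors equals the baseline vector dotted with s. Sensor positions and lags below are hypothetical, and this is not necessarily the WPL processing method.]

    import math
    import numpy as np

    # Sensor positions (m); lags of sensors 1 and 2 relative to sensor 0 (s)
    r = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
    tau = np.array([8.0, 14.0])   # hypothetical measured lags

    # Solve the 2x2 system (r_j - r_0) . s = tau_j for the slowness vector s
    s = np.linalg.solve(r[1:] - r[0], tau)

    speed = 1.0 / np.linalg.norm(s)                    # phase speed (m/s)
    direction = math.degrees(math.atan2(s[1], s[0]))   # degrees counterclockwise from +x
    print(f"phase speed ~ {speed:.0f} m/s toward {direction:.0f} degrees from +x")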
NSSL FACILITIES

Kenneth E. Wilk

This is the eleventh year for NSSL on Westheimer Field at Norman. Our facilities have increased substantially over this period, we hope consistently with the technical progress of our research. We have 43 full-time staff. Twenty of these are involved routinely in our observational programs. In the brief time we have this morning, I'd like to give you an overview of some of our facilities and observing capabilities, and comment on some changes that have occurred recently.

We have been in our new building a little over one year (fig. 1). It contains about 18,000 square feet, which is adequate for housing most of our engineering and analysis programs. There are three towers to the north (rear) of the building which support the WSR-57 radar, IFF interrogator, and ground-to-aircraft radio antenna. The master weather station for test and development of our network sensors is immediately in front of the building. Our Doppler radars, to be discussed later in this Conference, are housed separately.

[Figure 1. NSSL facility with radar towers and meteorological observatory.]

In past observational programs, our primary observing system has been the mesonetworks. The equipment (fig. 2), somewhat antiquated but extremely reliable, includes microbarographs, weighing-bucket rain gages, hygrothermographs and the standard Weather Service F420 wind instrumentation system used nationally for a great number of years.

[Figure 2. Surface station.]

Probably the most productive network that we've had over the eleven year period was in 1970. This array of 44 stations extended from the northeast side of the Oklahoma City urban complex southwestward across the very dense Agriculture Research Service rain gage network, outlined by the dashed line in figure 3. The average station spacing was about ten km; however, the spacing varied somewhat because the sites were selected primarily according to quality of exposure.

[Figure 3. 1970 surface mesonetwork: recording surface stations, rawinsonde stations, the instrumented tower, and the boundary of the ARS rain gage network (175 recorders).]

Recently, we have divided the stations into two networks, in a nested concept. We have positioned approximately half of them to supplement the National Weather Service and FAA reporting stations to produce the subsynoptic network, with an average station spacing of about 100 km. The remaining 25 stations are grouped as a "mobile" network on an annual basis. Last year they were located again to take advantage of the dense ARS rain gage network, but also positioned to cover the area of dual-Doppler overlap (fig. 4). Since we have only one WBRT rawinsonde system, most of our upper air observations have been provided under contract by Air Weather Service.

[Figure 4. 1974 subsynoptic network and mesonetwork: NWS, FAA, and military sites; mesonetwork and subsynoptic stations; rawinsonde sites including Fort Sill; NSSL tower facility; NSSL and Cimarron Doppler radars.]

Our plans for next spring are not yet complete, but we do know that we want to work with WPL in moving ahead with testing of remote sensors. We have hardened three of the mesoscale sites (fig. 5) for installation of WPL optical anemometers. The other mesonetwork stations will provide data to evaluate divergence measurements to be provided directly from the laser array. We also hope to solve logistic and technical problems associated with routine field operation of this type of probe.

[Figure 5. Network configuration for the optical anemometer experiment scheduled for Spring 1975: NSSL surface stations and sites for optical anemometers near Tuttle, Dutton, and Chickasha, 37 km east of NSSL.]

Our best boundary layer probe remains the instrumented WKY tower. At 450 m it is probably the highest installation above ground anywhere of a Bendix aerovane. At the present time we have two other levels instrumented, 270 and 90 m. We can equip seven levels, each measuring wind, dry bulb and wet bulb temperature, and vertical velocity (fig. 6).

[Figure 6. WKY-TV tower instrumentation (plan view): aerovane, with dry bulb (not shown) below the wet bulb sensor; main boom oriented to 240 degrees.]
The aspirators, mounted under the support boom, house Yellow Springs thermistors. One of these has a wick for wet bulb data. All data are brought down the tower for storage on analog recorders in the basement of the tower, and some data are transmitted to a remote site located one tower length (500 m) northwest of the tower. Presently, the University of Oklahoma Electrical Engineering Department has an experimental acoustic radar located at this site. In addition to the local recording, a digital telemetering system is used to relay all tower data by telephone to NSSL, where the data are displayed in real time on an alpha-numeric CRT and stored on conventional magnetic tape at intervals of one or 10 seconds, or one minute.

The WSR-57 radar, equipped with six-level contour mapping and digital recording, currently is our best real time probe of severe storms, and is used routinely for most decision making on aircraft positioning, Doppler radar scanning, and vectoring our mobile chase team (fig. 7). We have recently added a system to transmit radar information on voice-grade telephone line, and provide a contoured display (fig. 8) at remote sites, such as the tower and remote Doppler installation.

[Figure 7. NSSL radar room: WSR-57 console and signal processing equipment.]

[Figure 8. NSSL WSR-57 display after remoting over voice-grade telephone line.]

With the SEL 8600 computer fully operational we now have sufficient computer power to handle all digital data acquired with our various observational systems.

NCAR Interest in the SESAME Program

D. K. Lilly

NCAR has observational facilities which are capable of obtaining direct measurements from the surface, balloons and aircraft, and remote sensing from surface and aircraft. The extent to which NCAR is likely to do so depends on what you might call proposal pressure. NCAR facilities will not be given to a program simply because it is a worthy national goal. There has to be a scientific purpose and an identified scientist or group of scientists, either from NCAR staff or from universities, who are going to make use of the facilities. For a major program the identification must be made rather early, perhaps two years in advance; for a less major program, or a hang-on to a large program, maybe months or less than a year in advance. One of the purposes of this kind of meeting is to alert university people to the fact that NCAR is interested in SESAME, but it doesn't yet have a solidly established observational mission in it.
Another factor regarding the observational facilities that might be useful for SESAME is that the National Hail Research Experiment (NHRE) has in some cases developed these or is making strong use of them. NHRE is a major national program supported by the National Science Foundation and has a future program duration which is currently uncertain. The consensus is that there will likely be 3 or 4 further years of observation work. The decisions on this matter are not fully made, and I will ask Dr. Bretherton to say a little more on that point.

Now let me note what some of these facilities are. Dave Atlas is here and he can tell me what I miss. There will be 2 transportable C-band, 5 cm radars with Doppler capability available by December 1975. By transportable, I mean transportable for programs of weeks duration or more, not transportable hourly. These will be capable of the same kind of operation as the NOAA Doppler radars, with ability, range, etc., somewhere between those of the 10 cm radars that are established in the Oklahoma network and those of the portable 3 cm ones here at the Wave Propagation Laboratory. There is a dual frequency 10 cm and 3 cm S-band radar located in the NHRE network at Grover. It's a converted FPS-18, transportable, I understand, with considerable effort. I do not know whether this has been very seriously considered as part of the SESAME program, but it could conceivably be so. A very similar system is the CHILL radar, which is not an NCAR system but is owned and operated jointly by the University of Chicago and the Illinois Water Survey. It is more or less identical in concept to the NHRE radar at Grover, but differs in electronic design. It is being used in NHRE for at least part of the program and is, I believe, also to be used in cooperation with the National Severe Storms Laboratory within the next couple of years. Another radar not owned by NCAR but operated for NHRE is an M33 air control surveillance radar operated and owned by the Desert Research Institute of the University of Nevada.

In aircraft facilities, NCAR has at least 3 aircraft that have potential capabilities in SESAME. One is the Electra, which is a rather large, highly instrumented aircraft, with instrumentation developed from the accumulated experience from the Buffalo and Sabreliner, and incorporates an inertial platform and a gust probe system. It does not contain a good meteorological radar, however. It is being used in GATE, will go to AMTEX next winter, and is intended to be used later by Project STORMFURY. It represents an exception to the rule that NCAR facilities are entirely dedicated to use by NCAR and university scientists, an exception made necessary because its cost for year-round operation exceeds NCAR's available budget.
It is, however, strained to keep up with current programs, in particular the global modeling GARP and a number of in-house and university programs that are piling up more and more computing demands on it. It is hard to envisage the satisfaction of new additional demands until and unless some additional comput- ing power is made available. It is now intended that either a next generation computer (if available from the industry) or possibly a second 7600 would be installed at NCAR within the next couple of years or so. The availabiUty of computing time again depends a lot on demand from scientific programs that pass through a competitive review process. Regarding NCAR's scientific involvement and that of the associated university groups, it now appears to me that there is a readiness for a somewhat dedicated effort in fine-mesh modeling and prediction. We will hear about some of the current programs in the fine-mesh and cloud modeling session tomorrow. Three-dimensional cloud modeling is an area which is perhaps not quite down the main line of SESAME, because the project is oriented more towards the environment of the severe storms and convective clouds than toward the clouds themselves. Yet I think that most scientists working in cloud physics and dynamics and severe storm structure would agree, in principal at least, that an adequate three-dimensional cloud simula- tion is needed to bring the field of cloud physics to a state of scientific maturity. Three- dimensional cloud modeUng is certainly a major effor, but one which is now being conducted or initiated by several university groups and by NCAR. It will be a major competitor for computer time in the next 5 years if more computing facilities are available than now exist. If they aren't, it may still be a competitor, but there won't be enough time to go around. In closing I will only reiterate that within NCAR, within its governing body UCAR (Univer- sity Corportation for Atmosphere Research) and the National Science Foundation, the supporting agency, there are priority decisions still to be made on the position of SESAME vis-a-vis existing and other future planned programs but I think there is a strong interest in supporting it. 30 UCAR/NCAR Interest in SESAHE Francis Bretherton Firstly, let me say that UCAR is extremely interested in the SESAME program. The initiatives in this important area of meteorology are welcome and the level of interest on the part of the university community in the program is encouraging. UCAR and NCAR will largely be using the level of interest of university scientists as a barometer to determine the extent in which we should make commitments to SESAME . Allocation of the NCAR facilities is made on the basis of competative proposals, which are reviewed by advisory panels whose members are university and NCAR scientists. Priority and allocation decisions are on the basis of quality of the proposals, after review by the appropriate panel. The vital importance of modeling as an adjunct to the SESAME field program has been emphasized. The augmentation of NCAR computing facilities is the top priority item for NCAR for the next couple of years, and is also a top priority item in the National Science Foundation's budget for FY 76. Whether funding is appropriated probably depends at this moment more on the state of the national economy than anything else, and I would be foolish to make a prediction on that. 
So far as the other NCAR facilities are concerned, there is an awkward timing situation associated with prior commitments to the National Hail Research Experiment. This is an on going program with the primary objective of testing 31 whether cloud seeding with silver iodide is effective in suppressing hail, and the secondary objective of investigating whether it is cost effective to do so. The original NHRE program was designed, following some prior evidence from the Soviet Union, to test whether an effect of 60 percent hail suppression would be possible. To achieve the necessary number of cases to demonstrate an effect at this level, it was agreed that a 5-year period was needed. In addition there is a substantial program underway to investigate the physical mechanisms so we can understand' why seeding does or doesn't work. Very preliminary indications are that the suppression effect is more like 30 percent than 60 percent and the best judgment of everyone in the area is that it would be foolish to stop, not having proved 6 percent effectiveness, but having preliminary evidence for a 30 percent effect. Verification of a 30 percent effect would require a greater number of cases which means possibly another two field years. The original 5-year plan called for the program to end in calendar 1976, but we now face unavoidable uncertainty in current plans and potential extension to calendar 1977 or '78. Any such extension of NHRE and the attendant commitment of facility support will probably overlap with the initiation of SESAME and its facility needs. This is a dilemma, and we do not know its outcome at this moment. 32 NSF Interests in SESAHE F. White I did not come here this morning with a prepared speech, but rather I came to learn all about SESAME and about the future plans. SESAME is a very ambitious program as most of you who have read the document realize, but severe storms are very important, too. NSF is very interested in severe storm research. In fact, I made a calculation last week, that about 20 percent of the current research dollars that we put out goes for severe storm research. This includes NHRE, the National Hail Research Experiment that Dr. Bretherton mentioned, some of the NCAR research, and the university effort (some 15-20 programs right now in various phases of severe storm research) . We are interested in SESAME and we want to try to work out some arrangement to help support SESAME. Now while I am up here let me make just three bureaucratic coments on SESAME. As Bill Hess and others pointed out, there is only so much money available in the federal government for atmospheric sciences for all agencies. Where does SESAME fit priority-wise and time-wise in all of the major programs that are planned? As Francis Bretherton just mentioned, NHRE, which was scheduled to end in '76, may be extended to '78. STORMFURY Pacific which was supposed to go in '76 and '77, has been slipped one year. A major new program is in the mill for '76 in the field of climate. The First GARP Global Experiment is scheduled for 1978-79. A major monsoon experiment is planned for the same period. Someone has to decide where SESAME fits in the overall 33 priority. Bill Hess mentioned that Dr. Bob White has given SESAME a very high priority for the '77 budget. I think it is going to be very critical in the next 3-6 months that we determine the priority for SESAME, after which all agencies can start the program for carrying out their part of it. 
SESAME will be presented next week to an interagency government group for the first time. This meeting here is the first public airing of SESAME, Much water must be distilled before SESAME becomes a reality. My second point deals with the program of SESAME. From the documents you know that it is a very broad program. There are a few holes in it. To me, the estimated cost may look a little low to get the job done. I would have to second Dave Atlas' comment earlier this morning that I don't really think that we yet understand exactly what the SESAME program is. We understand the availability of instrumentation, but I would really like to see in the next couple of days a lot of discussion of just what the program is and then what do we need to carry it out. I wonder whether we are really ready to start a major SESAME program. I wonder if we shouldn't roll a little bit over the next couple of years, expanding our efforts in the SESAME direction, and then launch a major SESAME later on in the '70's or in the early '80's. I haven't firmed up my thinking on it, but I see some major problems because it is very difficult to build overnight the big research team necessary to carry out SESAME. You can think of building on some of the NSSL programs. You can think of building on to some of the NHRE programs and using the next few years as a build up before you launch a major SESAME. 34 My third point deals with the organizational structure. We will get to this Friday afternoon, but I'll bring it up now. A document has been put out. SESAME is a NOAA program and other agencies are being asked to round out this program. The role of the various agencies is a little vague, and that is not suprising when you consider this is the first public airing of a major program. They have to become better crystallized and I am sure that they will as time goes along. The role of NCAR is still not clearly spelled out, as Doug and Dr. Bretherton mentioned. We've had years of practice of looking at NCAR's budget requests. Their budget request for SESAME will be looked at just as we look at their computer requests or any other major program. Major NCAR budget increases do not come easy, as Dr. Bretherton knows . Now, Dr. Anthes' question is really one of university involvement. I feel that the universities have a lot to offer to SESAME, and this is one place where NSF may come into the picture. I feel that sometime within the next two months, after the university people go home and think about what they hear this week, that they should to let their thoughts be known as to how they see themselves fitting into the SESAME program. Broad research can be done. It can be done both in a core experiment or as non-core but related work, if you understand how I'm using the word core. I would like to suggest, that as your thinking becomes crystallized that you put it down in a letter. Not a proposal. Proposals take lots of time to dress up final. You have to go through university mechanisms. But as a letter to Doug or to Dr. Hess or to me, or send us all copies of it, I believe this has to be done by no later than December 31, the end of this year, because we will be 35 starting the '77 budget cycle and SESAME is scheduled to go into the '77 budget, according to Dr. Hess. We will be starting that by February or March, and I would like to have some feeling as to whether we are talking about millions or thousands for university needs in SESAME. Much of the on-going NSF work in severe storm research will be directed toward SESAME. 
That wont take any more money at all. It will simply be a redirection of certain university efforts into SESAME programs. Others will be new ideas, new programs and we have to begin to get some feel for these needs within the next few months. Now, I don't really know if this is what you expected from me this early in the program, but I've tried to take a very broad look at the problem. NSF is interested in severe storm research. We do believe the university involvment in SESAME will help. I am personally looking forward to an interesting three days and I hope to leave here with a clearer picture of how SESAME is going to proceed and how NSF is going to be involved in it. Thank you. 36 NASA Severe Storm Program Earl R. Kreins The NASA Severe Storm Program consists of two major areas of effort. The first is the Severe Storm Research Program which applies advanced techniques in quantitative monitoring of the atmosphere, continuous observations from geostationary orbit, and highly improved data manipulation and analysis procedures to the problems of storm detection, storm prediction, and storm warning, the second major area is the development of Severe Storm Satellite Systems. The initial thrust in the quantitative measurement of severe storm conditions will be on the current operational sp i n- s ta b i 1 i zed geostationary satellite, SMS/GOES. Modification of the SMS/GOES Visible and Infrared Spin Scan Radiometer (VISSR) is under development to provide for measurements of vertical temperature and moisture profiles of severe storm regions. While this development is proceeding, NASA is also studying a Severe Storm Observation Satellite (SSOS) system to provide both sounding and imaging capability from a three-axis stabilized spacecraft in geostationary orbit. This morning I would like to give you a brief review of what the NASA severe storm program is and show how it can relate to the SESAME program. NASA's program is predominately related to the development of the spacecraft sensors and systems and the application of the spacecraft data to the detection, monitoring, and warning of severe storms. We have the program divided up into two different portions: Severe Storm Research Program 37 NASA SEVERE STORM PROGRAM SEVERE STORM RESEARCH PROGRAM - FIELD PROGRAMS - DEVELOPMENT OF OBSERVING CONCEPTS - DATA PROCESSING AND ANALYSIS SATELLITE SYSTEMS - GEOSTATIONARY SATELLITE SOUNDER (VAS) MISSION ON SMS/GOES - SEVERE STORM OBSERVATION SATELLITE (SSOS) - OPERATIONAL SATELLITE IMPROVEMENTS Figure 1 and Satellite Systems (Figure 1). I'll go into each of these in a little more detail. 38 The Severe Storm Research Program is divided into field programs, development of observing concepts, and data processing and analysis. The satellite systems that we are currently working on include the geostationary satellite sounder, the VAS, which is the VISSOR Atmospheric Sounder mission on the SMS/GOES spacecraft. We are looking into the development of a Severe Storm Observation Satellite system, the SSOS, and also some improvements on the operational satellite systems that are in existence now. I will give a brief description of each of these (Figure 2) . The field programs are divided up into five areas. First, severe storms surveillance. This is predominately the efforts of Dr. Fujita (Univ. of Chicago) and Bill Shenk (Goddard Space Flight Center) involving aircraft observations of severe storms. 
The last observation period they conducted was earlier this year when they had six aircraft flights utilizing six different aircraft together to provide in situ measurements, predominately photographic measurements of severe storms. Second, mesoscale studies at Marshall Space Flight Center, which relate mesoscale features to their synoptic scale environments. They conducted a 24-hour observation period in May of this year, during which observations were taken by radiosondes every three hours at approximately 50 radiosonde stations. Eleven radar stations were also utilized. Spacecraft data was gathered from the various operational systems, the research satellites, and also the military Defense Meteorological Satellite Program. The third area, satellite support, is specific aircraft flights for field observations relating to spacecraft overpasses, or for specific development of spacecraft sensors. 39 The fourth field program that NASA is interested in is project STORMFURY where the NASA Convair 990 would be utilized by NOAA. In addition to the instrumentation provided by NOAA, NASA is discussing the possibility of adding additional instrumentation on their Convair 990 primarily for air-sea interaction measurements, cloud physics parameters, and also for remote atmospheric sounding. Also for STORMFURY NASA is discussing the possiblity of using their RB57F instrumented aircraft primarily for atmospheric sounding and for cloud physics measurements. The instrumentation, especially that of the RB57F, that's developed for STORMFURY may possibly be utilized for project SESAME. Project SESAME is the fifth area under our field programs. We are currently looking to see how NASA's efforts can be related to SESAME itself. NASA SEVERE STORM RESEARCH PROGRAM 1. FIELD PROGRAMS SEVERE STORM SURVEILLANCE MESOSCALE STUDIES SATELLITE SUPPORT PROJECT STORMFURY PROJECT SESAME 2. DEVELOPMENT OF OBSERVING CONCEPTS TORNADO DETECTION BY RADIO TECHNIQUES NEW AIRCRAFT SENSOR DEVELOPMENT BASIC CLOUD PHYSICS DATA PROCESSING AND ANALYSIS - DATA PROCESSING AND DISPLAY SYSTEM - DATA EXTRACTION AND ANALYSIS - SEVERE STORM MODELING Figure 2 40 One area that I have not mentioned which is another possible application of the NASA developments is that of drones. They are currently being developed at the NASA Flight Research Center, Edwards, California. NASA is developing a remotely piloted vehicle, a drone, that's predominately for high altitude atmospheric sampling. It is possible that such a drone could be utilized for in situ measurements for a tornado or severe storm situations. Under development of observing concepts we are partially funding Dr. Taylor at the Wave Propagation Laboratory in some of his efforts in the detection of tornados by their radio emissions. We are also looking into the possibility of aircraft instrumentation utilizing the same type of techniques and, if it seems feasible, in the future to have a spacecraft sensor for the detection of tornados or severe storms by radio emissions. Some of the new aircraft sensor developments that may relate to SESAME are the cloud top scanner that is being developed at Goddard Space Flight Center. It is a three-channel scanning radiometer in the visible wavelengths, water vapor channel and the 11 micrometer window region which will provide information on cloud type, cloud amount and height of the clouds. 
Another instrument that is being developed for aircraft is a cloud physics radiometer, which will provide information on the cloud-top altitude, the cloud phase, the optical thickness, the cloud particle temperature, and the water vapor amount or the particle number density in the cloud-top region. Additional aircraft instrumentation is also being developed or modified to use the microwave region for sounding and rainfall determination.

Basic cloud physics efforts are predominantly related to the Zero-G Cloud Physics Laboratory, which is being developed through Marshall Space Flight Center. This laboratory will be on the space shuttle, or the Space Lab, and is a general-purpose facility for conducting cloud physics research in a near-zero gravity condition.

In the data processing and analysis area, a data processing and display system is being developed. It is called AOIPS (Atmospheric and Oceanographic Information Processing System). It should be fully operational in 1976. AOIPS is a man-computer interactive system for the utilization of satellite data, predominantly geostationary satellite data. Data extraction and analysis is essentially what it says: the utilization of data that have been obtained through the various observational techniques, either spacecraft or aircraft measurements, for the demonstration of applications of these data.

NASA also has an effort in severe storm modeling. We have some efforts that pertain to tropical cyclones that are being done at Goddard Space Flight Center. More appropriate for SESAME is some of the work being done at Langley Research Center on tornado modeling. They have developed a two-dimensional tornado model which utilizes information on the tilt, circulation, and buoyancy of the storm. It shows the motion of the tornado to the right of the mean wind, and it also displays the characteristic hook-shaped area.

Now that's what we call our Severe Storm Research Program. The other portion of our severe storm program is the satellite portion, which I mentioned before consists of the VAS mission on the SMS/GOES, the Severe Storms Observation Satellite, and the operational satellite improvements (Figure 3). I'll discuss this last item first.

A low-cost (less than $100,000) ground system is being developed for the readout and display of the ITOS Very High Resolution Radiometer (VHRR) data in its full resolution and also for the future TIROS-N AVHRR (the Advanced Very High Resolution Radiometer) full-resolution data. This ground station will have the capability of being expanded, at additional cost, to read out and display the data from the geostationary satellites.

NASA SEVERE STORM PROGRAM
SATELLITE SYSTEMS
- GEOSTATIONARY SATELLITE SOUNDER (VAS) MISSION ON SMS/GOES
- SEVERE STORM OBSERVATION SATELLITE (SSOS)
- OPERATIONAL SATELLITE IMPROVEMENTS
Figure 3

The VAS (VISSR Atmospheric Sounder) utilizes the SMS/GOES spacecraft (Figure 4). It would replace the current VISSR (Visible and Infrared Spin Scan Radiometer). The VAS will have three modes of operation. The first one will be the normal imaging mode, which will provide the same type of information as is now obtained from the VISSR, which is visible and infrared (11 micrometer) imagery. It will also have what is called the multispectral mode, which provides a capability for five different channels of information in an imagery mode. You can select any five channels that you want.
You would probably have the visible channel, possibly the water vapor channel, but that is up to you actually, and any of the other channels that are available to you, up to a total of five. The third mode is the sounding mode, which provides the temperature profile from the geostationary altitude. As you probably know, the SMS/GOES is a spin-stabilized spacecraft at geostationary altitude and utilizes the total earth scan as part of the observation. The information I have on Figure 4 pertains to the case where you have a 100 km wide swath across the earth's disk. In the sounding mode, with the spatial scale of the sounding on the order of about 30 km, you could have a sounding across that swath, which is 100 km wide, in about five minutes. You can select the area that you want your imagery over, or you can have the full disk.

Figure 5 shows just one example of the capability of the VAS. This happens to be the case where you want the full disk, in the multispectral image mode, plus the sounding. This may be a little bit misleading. One full-disk multispectral, or one multispectral image, is really five images of the five selectable channels that I mentioned before. So you actually get five images plus the sounding on a swath that is 1150 km wide. You have that type of capability every hour. The band shown on Figure 5 is 1150 km wide, and its position is variable; you can locate it wherever you would like. You can have the band at varied locations and various widths depending upon how you program the spacecraft.

VISSR ATMOSPHERIC SOUNDER (VAS). Sounding scale >30 km; frequency about 5 min; coverage up to full earth disc, with opportunity to select areas; water vapor sounding.
Figure 4

Sounding band (1150 km) position variable each hour. One full-disk multispectral image plus 1150 km sounding band per hour.
Figure 5

Figure 6 relates to the Severe Storms Observation Satellite (the SSOS). The VAS is on a spin-stabilized spacecraft. The SSOS, however, is a three-axis stabilized spacecraft, which allows you to dwell on a particular area instead of having to scan across the total earth's disk as is required with a spinning spacecraft. The scanning modes of the SSOS provide either the full earth disk or selected portions of the disk. Previously, with the VAS, you needed to have a whole swath across the earth because of the spinning spacecraft. With the SSOS, which will be a three-axis stabilized spacecraft, you will be able to actually point your detector toward the area you are interested in, and you get an essentially square format of various dimensions depending upon the area you select. The SSOS will have an offset pointing capability which enables the sectors to be chosen anywhere on the earth. I forgot to mention that the launch of the VAS will probably be in the 1976-77 time frame. The launch of the SSOS will be in the 1979-80 time frame. The measurements that will be obtained with the SSOS will be the temperature and moisture profiles with a spatial resolution at the subsatellite point of 13.5 km, water vapor distribution at 13.5 km, infrared imagery of the 11 micrometer channel at 4.5 km, and the visible and near-IR channel at 0.75 km resolution. Figure 7 shows some of the comparisons between the VAS capability and the SSOS capability. The biggest advantage of the SSOS will be that it has a much faster sounding capability than is possible with the VAS.
For example, the AASIR (Advanced Atmospheric Sounding and Imaging Radiometer), which is the sensor on the SSOS, can provide a full earth disk sounding capability with a 54 km resolution in 20 minutes, while the VAS takes 342 minutes.

SEVERE STORM OBSERVATION SATELLITE (SSOS) SYSTEM OBJECTIVES
- THREE-AXIS STABILIZED, GEOSTATIONARY SPACECRAFT
- SCANNING MODES PROVIDE FULL EARTH DISC AS WELL AS SELECTED PORTIONS OF DISC
- OFFSET POINTING ENABLES SECTORS TO BE CHOSEN ANYWHERE WITHIN FULL DISC
- LAUNCH 1979
MEASUREMENT (SPATIAL RESOLUTION, KM): TEMPERATURE AND MOISTURE PROFILES (13.5); WATER VAPOR DISTRIBUTION (13.5); INFRARED IMAGERY (4.5); VISIBLE AND NEAR IR IMAGERY (0.75)
Figure 6

INFRARED SOUNDER/IMAGER COMPARISON

Sounding       Averaging area required            Measurement time
frame size     (spatial resolution)
               AASIR        VAS                   AASIR       VAS
Full earth     54 km        54 km                 20 min      342 min (preliminary)
2500 km        27 km        27 km                 4 min       142 min
750 km         13.5 km      13.5 km               1.2 min     85 min
250 km         13.5 km      13.5 km               0.4 min     28 min

Imaging spatial resolution: visible, AASIR 750 m / VAS 900 m; IR (11 µm), AASIR 4.5 km / VAS 7 km; water vapor (6.7 µm), AASIR 13.5 km / VAS 13.5 km. VAS scans a swath format, while AASIR scans in a square format.
Figure 7

If you take an area that is 250 km, which would be 250 x 250 km for the SSOS system or a swath that is 250 km wide for the VAS, the AASIR on the SSOS can provide the sounding over that area in 0.4 minutes, whereas the 250 km swath takes 28 minutes with the VAS. I believe that mentions most of the NASA satellite systems that will be of interest to the SESAME program.

DISCUSSION

J. Scoggins: I would like to discuss one part of the mesoscale program that you mentioned, Earl. It's called AVE (Atmospheric Variability Experiment) by Marshall Space Flight Center. There is an AVE-2 program that has three phases. For the first phase the observational portion was carried out on May 11, 1974. Soundings were taken east of approximately 105 W longitude at intervals of three hours for a 24 hour period. The second and third phases, which will be similar, will take place within the next few months, depending upon when we can get everything set up and properly coordinated. These data, I think, would provide a very excellent background for SESAME. There is a lot to be learned from them. We spent the entire summer processing the AVE pilot experiment data, and we have all soundings processed for every contact. That data is now on magnetic tape, and I'm told by Bob Turner, who is the NASA project leader, that he would be willing to give it to anybody desiring to analyze it. It's been truly amazing to us what you can learn from soundings taken at three hour intervals versus the existing data at twelve. I also think they will help answer the questions that Dave Atlas brought up: What kind of data do you need? How often do you need to take observations? What spatial resolution do you need? These data can help answer questions like that, which of course need to be answered before starting a large program like SESAME.

G. Darkow: I have a question that may be a little premature, for other speakers will be discussing indirect soundings from satellites and aircraft. Regarding the specific satellites and systems you spoke of, I am concerned with their capabilities under conditions of cloudiness. Will you be able to produce temperature and water vapor soundings with the indicated resolution in those conditions?
Kreins: Both the VAS and SSOS systems utilize the infrared sounding capability, which does have limitations in conditions of cloud cover. You get the sounding above the cloud, but not through the clouds to the ground surface. They do give you information around the cloud area or around the storm system, with a capability for very frequent soundings. You can wait until the cloud moves out of your way to obtain a sounding in a particular region if practicable, but you are limited to cloud-free conditions.

Darkow: You would have to have at least a 14 km area of clear skies?

Kreins: It's not required to be completely cloud free. There are techniques for eliminating the cloud problem. If you have total cloud cover, then you can only get the sounding above the clouds. You do not actually have to have the clouds completely removed. There are techniques for getting around that.

Comments on Possible AEC Participation in SESAME

Paul Frenzen
Atmospheric Physics Section, Radiological & Environmental Research Division
Argonne National Laboratory, Argonne, Illinois

In the few moments remaining before lunch I'd like to tell you about a book I was reading on the way here. Its title is "The Call Girls", and it was written by Arthur Koestler. The story concerns a group of "experts" summoned to a meeting in a mountain town in order to consult upon a specific problem. The author identifies the roles of participants in such conferences (and hence, perhaps, our own roles here) as those of call girls. Another point he makes is that sea-level scientists who attend meetings in mountain towns (such as Boulder) frequently become affected by the rarefied air, and their ideas tend to become equally so. In the town of this story, the local population makes a business of using their winter ski-resort facilities for summertime, think-tank conferences. While looking through the schedule of meetings for the coming summer, one of the "call girls" discovers that there are soon to be three successive conferences whose titles include the words "environment", "pollution" and "future", in all possible permutations. Now it may well be that I should have waited for one such meeting to be held here; but, on the other hand, there are features of the SESAME program in which AEC atmospheric research groups could develop a joint interest. The AEC's interest is primarily in the last two letters of the SESAME acronym, rather than the first three. Indeed, we have here been reminded that SESAME actually represents the latest reincarnation of what has been a frequently proposed (and never implemented) MESOMEX field program. Perhaps we of the AEC would prefer that SESAME retain some of the more generalized features of these earlier conceptions for a major, mesoscale meteorological experiment.

As mentioned earlier, the AEC may soon become ERDA, the Energy Research and Development Administration*. In anticipation of this change, we have begun to adapt our atmospheric research programs to what we think will be a broadened charge, namely the regional assessment of the comparative impacts upon the atmospheric environment of all the various alternative forms of large-scale energy conversion. The key words here are "regional assessment"; thus, we now must increasingly consider problems of atmospheric transport and dispersion on the mesoscale and larger. To this end, the SESAME program may offer a unique opportunity to carry out extensive field studies with the kind of support we in the AEC could not muster on our own.

* ERDA reorganization bill passed and approved by the President, 14 Oct. 1974.
The atmospheric research of the various groups supported within the National Laboratories by the AEC is coordinated through the Environmental Programs office of the AEC's Division of Biomedical and Environmental Research. In order to more or less officially express the potential AEC interest in SESAME, let me read a portion of a recent letter from Dr. Rudy Engelmann, Deputy Manager of Environmental Programs, to Dr. Wayne McGovern, Executive Secretary of the Interdepartmental Committee for Applied Meteorological Research. In part the letter reads:

"We have read the SESAME PDP document and feel that, within the uncertainties of predicting programs and budgets for 1977 and beyond, it is likely that the AEC might actively participate in the project in areas of precipitation formation, precipitation scavenging processes, studies of lower tropospheric structure, and tracer trajectory investigations using tracers currently being developed in our program. While not explicitly mentioned in the plan, we assume that SESAME will be designed to include a system of contingency studies which will be able to make use of the assembled facilities during periods when severe storms are absent."

When SESAME goes into the field, there will surely be periods when all the equipment of the extensive alpha, beta, and gamma observation networks must stand by, waiting for conditions more favorable for severe storms. During these periods, mesoscale or sub-synoptic, regional-scale experiments of a more general nature could be conducted. Indeed, some of the university groups might also be interested in cooperating in such a program of less-specialized, contingency studies.

AEC groups that conceivably could participate would include the Pacific Northwest Laboratories at Hanford, where much of the AEC's precipitation scavenging and dry deposition work is centered. Similarly, the AEC's principal numerical modeling group at Livermore Laboratories would probably be interested in using SESAME-derived mesoscale data sets to test and improve their procedures for predicting pollutant trajectories in the lower troposphere. My own group at Argonne National Laboratory is just getting underway with a planetary boundary layer observational program and is anticipating future efforts in regional-scale numerical modeling. Even without SESAME, we are preparing to undertake regional atmospheric transport studies of some kind in the next several years. With SESAME, we would be pleased to cooperate as much as our other commitments at the time of that field program will allow.

Lilly: The notion of utilizing the network in quiescent periods, of course, is a natural one. It is more appropriate, I suppose, for remote sensors than for things like radiosondes, which cost a certain amount per ascent.

Frenzen: We have a minisonde: $7 a unit.

Lilly: We'd like to hear more about that later; the subject of the first session this afternoon is the sounding problem and opportunities.

VERTICAL SOUNDING PROBLEMS: A SURVEY

Presented at the SESAME Opening Meeting, September 4, 1974

J. C. Fankhauser

I intend to confine my remarks to the conventional GMD-1 rawinsonde system that has been in operational use for over 25 years in the U.S. and other parts of the world.
After a brief historical review of analyses of mid-latitude subsynoptic systems, based on rawinsonde serial and swarm ascents, I will touch on some of the problems of resolution and accuracy inherent in individual soundings and how these may reflect on the planned SESAME networks.

The British were among the first to systematically implement radar balloon tracking, and Sheppard's paper [A1] represents an early attempt to derive the subsynoptic field of vertical motion from kinematic analyses of rawins through application of the equation of continuity. The Thunderstorm Project of the late '40s [A2] had an amazingly dense rawin network, but as we shall see later, such close sounding spacing is superfluous because sounding errors are greater than the gradients of atmospheric variables between stations. This probably explains why definitive three-dimensional mesoscale analyses of the thunderstorm environment did not evolve from the Thunderstorm Project data. The National Severe Storms Project [A3] supplemented the existing Weather Bureau network with serial ascents from intermediately located Air Force bases to achieve a sounding spacing and frequency that approaches the objectives of SESAME. However, differences in the instrument specifications for military and Weather Bureau sondes extant at that time made data from the two sources extremely difficult to reconcile.

Perhaps the most complete exploitation of serial mesoscale upper-air data in defining the dynamics of small-scale weather systems has been made by Matsumoto and Ninomiya and their co-workers [A4] in Japan. Their numerous publications, based on evolving networks operated in the '60s, concentrate primarily on the dynamics of winter cyclones passing over the Japan Sea. Through imaginative use of time-space relationships, Elliott and Hovind [A5] were able to resolve the kinematics of frontal zones passing through a four-station network on the southern California coast. In 1965, under the auspices of the Air Force Cambridge Research Laboratories, Kreitzberg [A6] established a network in New England that might be considered as a prototype for optimum resolution of mesoscale phenomena by rawinsonde observations. His analyses centered on the mesoscale features of extratropical cyclones. In 1966 we established a similar network at the National Severe Storms Laboratory in Oklahoma [A7] for the express purpose of studying the convective storm environment. The NSSL network was contracted to a smaller scale in 1968 and has been operated as such with only minor variations to the present time. Analyses from the NSSL data base [A8] continue to date, and in my mind these data represent a reasonable embarkation point for SESAME.

I'd like to turn now to the accuracy of rawinsonde measurements and the implications for network design and data analysis. The nearly constant state of flux in instrument design and sensor specification through the years prohibits a detailed assessment of temperature and humidity errors in the short time available today. I have assembled a short bibliography from recent literature and recommend this and other reading to those concerned with the evolution and current capability of the U.S. sonde. With regard to temperature, it is generally agreed [B2, D1] that the absolute error in point temperature measurement is less than ±1.0°C, with a standard deviation of about ±0.3°C.
However, recent modifications in instrument design have been made [C4], based largely on intensive studies at AFCRL [C2, C3], which provide an accuracy in relative humidity that is within 10 to 20 percent of its true value for humidity above 30 percent. The error bias is always toward the low side since the predominant error is temperature-induced and it is positive. Although there has been significant ijnprovement in relative humidity measurement, the accuracy leaves much to be desired, particularly in the low humidity ranges [C6] . The goal of the SESAME rawinsonde network is stated to be the definition of the severe storm environment and its precursor state at the so-called Beta scale encompassing horizontal wavelengths ranging between approximate:).y 25 and 250 km. Of all the at]Tiospheric variables, continuity at this scale is probably most readily established through the careful analysis of the wind field. It is here then that the most critical assessment of sounding capability should be focused. In the GMD-1 system the direction and speed of the winds aloft are calculated from the balloon's drift after release. The horizontal distance between the release point and the balloon is computed as a function of time from recorded azimuth and elevation angles and from calculations of geometric height which is of course a function of the observed pressure, temperature, and relative humidity. The wind at any height, h, is then computed by differentiating the horizontal balloon displacement with respect to time. Thus, the quality of the wind measurement is a function of and dependent on all the measured variables. To complicate matters, the pressure and thermodynamic data are recorded independent of balloon tracking and the only common base between the data sources is time which is recorded with independent devices. 55 It should be emphasized that any wind computation from the GMD system involving elevation angles of less than about 15° is subject to serious doubt because the magnitude of the error is inversely proportional to the square of the sine of the elevation angle [Dl - D3]. Danielsen [Dl] was one of the first to recognize that much of the research potential of rawinsondes was being lost through the conventional data reduction techniques used operationally. Much of what I will present is based on the work of Danielsen and his co-workers [Dl, D4] in developing methods for extracting the highest possible information content from rawinsonde data. The first essential step is the adoption of a numerical method for reducing the raw sounding data. This eliminates the possibility of error associated with the conventional graphical techniques and facilitates the consistent handling of the enormous volume of data produced during mesoscale serial ascents. To determine the accuracy of winds from the GMD-1 system, balloon ascents were coincidently tracked by conventional means and by high resolution tracking radar. The experiment provided a direct comparison of balloon ascent rates, horizontal wind speeds and wind direction computed from the two techniques. From the comparison, spurious errors in the GMD-1 data became apparent and objective filtering techniques were selected to preserve the meaningful macroscale and mesoscale features. Selection of the appropriate filters was based on the energy spectrum obtained from the winds measured by the high-resolution tracking radar. Danielsen believes that vertical wave- lengths on the order of 50 m are resolvable with adequate radar balloon tracking . 
If an arbitrary ratio of 1 km to 1000 km for vertical and horizontal waves, respectively, is extrapolated to the mesoscale, the suggestion is that rawinsondes spaced at approximately 50 km would do an adequate job in defining relevant three-dimensional atmospheric motions. This is approximately the planned SESAME spacing. It should be emphasized, however, that with conventional data recording and reduction techniques the GMD-1 system is capable of resolving vertical wavelengths of only about 150 m, and the associated resolvable horizontal waves are on the order of 150 km.

An adequate treatment of the dynamics of the mesoscale environment requires knowledge of the pressure gradient forces, or, when pressure is chosen as the vertical coordinate, the gradient in geopotential height. It is here that another serious problem arises in treating rawinsonde data at the mesoscale. Evaluation of geopotential height from individual soundings is inherently limited to the accuracy of the surface pressure measurement, which is usually no better than ±1 mb [A8, A9]. The associated height discrepancy amounts to at least ±10 m. In addition to this surface pressure bias, the comparisons between conventional and FPS-16 balloon tracking [D4] indicate that random height errors on the order of 20 to 30 m are associated with the lack of resolution in the baroswitch and pressure calibrations of the GMD system. Thus, unless radar balloon tracking is available, reliable height fields must be derived by indirect means [A9].

Little has been said to now about special logistical and data handling problems associated with mesoscale sounding networks. Care must be taken to separate the transmission frequencies of sondes released at adjacent stations to avoid the possibility of confused signals. In a mid-latitude convective situation the mean wind in the troposphere might average 30 to 40 m sec⁻¹. In the 50 to 60 min period required for the balloon to ascend from the release point to the tropopause, it could undergo a total horizontal displacement of 30 to 40 km, or nearly the distance between sounding stations. Clearly a compensation for the mixed Eulerian-Lagrangian character of the soundings must be made before a consistent synoptic analysis can be pursued. In addition to horizontal displacement, compensation must also be made for departures from scheduled release times and for differing balloon ascent rates.

This presentation has been intended as a broad overview and admittedly has not dealt in depth with the many problems associated with small-scale atmospheric analysis with rawinsondes. Although the picture I've painted is mostly pessimistic, I think that definitive analyses at the intended scales are possible with the proposed SESAME network. However, in view of all the cumbersome problems associated with the current rawinsonde, it seems to me that an appropriate charge or spinoff from SESAME should be the development of a reliable sounding system commensurate with the state of the art. Modifications to the currently available system, which seem long overdue, are the automatic digital recording of transmitted signals, which would greatly facilitate data handling, and the continuous transmission of temperature and humidity data, which would improve the soundings' overall resolution and accuracy. Our panel members will hopefully present some suggestions and alternatives.
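As an editorial check on the height figures quoted above, the hydrostatic relation converts the ±1 mb surface-pressure uncertainty into the stated height uncertainty; with assumed near-surface values T ≈ 288 K and p ≈ 1000 mb,

\[
\delta z \;\approx\; \frac{\delta p}{\rho\,g} \;=\; \frac{R\,T}{g\,p}\,\delta p
\;\approx\; \frac{(287\ \mathrm{J\,kg^{-1}\,K^{-1}})(288\ \mathrm{K})}
{(9.81\ \mathrm{m\,s^{-2}})(10^{5}\ \mathrm{Pa})}\times 100\ \mathrm{Pa}
\;\approx\; 8.4\ \mathrm{m},
\]

consistent with the "at least ±10 m" discrepancy cited for a ±1 mb error.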
Bibliography

A. Rawinsonde Networks and Analyses Aimed at Defining Atmospheric Systems at the Subsynoptic Scale

1. Sheppard, P. A., 1949: Recent research at Imperial College. Quart. J. Roy. Meteor. Soc., 75, 188-192.
2. Byers, H. R., and R. R. Braham, 1949: The Thunderstorm. U.S. Government Printing Office, Washington, D.C., 287 pp.
3. Lee, J. T., 1962: A summary of field operations and data collection by the National Severe Storms Project in Spring 1961. NSSP Report No. 5, Washington, D.C., 47 pp.
4. Matsumoto, S., K. Ninomiya, and T. Akiyama, 1967: Cumulus activities in relation to the convergence field. J. Meteor. Soc. Japan, 45, 292-304.
5. Elliott, R. D., and E. L. Hovind, 1964: On convective bands within Pacific coast storms and their relation to storm structure. J. Appl. Meteor., 3, 143-154.
6. Kreitzberg, C. W., 1968: The mesoscale wind field in an occlusion. J. Appl. Meteor., 7, 53-67.
7. Fankhauser, J. C., 1969: Convective processes resolved by a mesoscale rawinsonde network. J. Appl. Meteor., 9, 778-798.
8. Barnes, S. L., J. H. Henderson, and R. J. Ketchum, 1971: Rawinsonde observation and processing techniques at the National Severe Storms Laboratory. NOAA Tech. Memo. ERL NSSL-53, NSSL, Norman, Okla., 246 pp.
9. Fankhauser, J. C., 1974: The derivation of consistent fields of wind and geopotential height from mesoscale rawinsonde data. J. Appl. Meteor., 13 (in press).

B. Temperature and/or Pressure

1. Badgley, F. I., 1957: Response of radiosonde thermistors. The Review of Scientific Instruments, 28, 1079-1084.
2. Hodge, M. W., and C. Harmantas, 1965: Compatibility of United States radiosondes. Mon. Wea. Rev., 93, 253-266.
3. Lenhard, R. W., 1973: A revised assessment of radiosonde accuracy. Bull. Amer. Meteor. Soc., 54, 691-693.
4. Madden, R., E. J. Zipser, E. F. Danielsen, D. H. Joseph, and R. Gall, 1971: Rawinsonde data obtained during the Line Islands Experiment, Volume I: Data reduction procedures and thermodynamic data. NCAR Tech. Note TN/STR-55, Natl. Center for Atmos. Res., Boulder, Colorado, 71 pp.

C. Relative Humidity

1. Bunker, A. P., 1953: On the determination of moisture gradients from radiosonde records. Bull. Amer. Meteor. Soc., 34, 406-409.
2. Morrissey, J. F., and F. J. Brousaides, 1970: Temperature-induced errors in the ML-476 humidity data. J. Appl. Meteor., 9, 805-808.
3. Brousaides, F. J., and J. F. Morrissey, 1971: Improved humidity measurements with a redesigned radiosonde humidity duct. Bull. Amer. Meteor. Soc., 52, 870-875.
4. Friedman, M., 1972: A new radiosonde case: the problem and solution. Bull. Amer. Meteor. Soc., 9, 884-887.
5. Riehl, H., and A. K. Betts, 1972: Humidity observations with the 1972 U.S. radiosonde instrument. Bull. Amer. Meteor. Soc., 53, 887-888.
6. Quiring, R. F., 1973: Low humidity: Still a problem with the modified NWS radiosonde? Bull. Amer. Meteor. Soc., 54, 551-552.

D. Wind

1. Danielsen, E. F., 1959: The laminar structure of the atmosphere and its relation to the concept of a tropopause. Arch. Meteor. Geophys. Bioklim., A11, 293-332.
2. Kurihara, Y., 1961: Accuracy of winds-aloft data and estimation of error in numerical analysis of atmospheric motions. J. Meteor. Soc. of Japan, 39, 331-345.
3. Duvedal, T., 1962: Upper-level wind computation with due regard to both the refraction of electromagnetic rays and the curvature of the earth. Geophysica, 8, 115-124.
4. Danielsen, E. F., and R. T. Duquet, 1967: A comparison of FPS-16 and GMD-1 measurements and methods for processing wind data. J. Appl. Meteor., 6, 824-836.
Control Data Corporation's METRAC™ Positioning System

K. S. Gage and W. H. Jasperson

Figure 2. Array of METRAC receivers assumed for simulations. X1 and X2 mark locations of simulated balloon launch points 1 and 2.

Figure 3. Simulated u-component wind profile (1005-1) utilizing the METRAC algorithm with a transmitted frequency of 403 MHz and an assumed multiplication factor of 10. Profiles of speed error and height error are also shown (see text).

Figure 4. Simulated v-component wind profile (1009-1) utilizing the METRAC algorithm with a transmitted frequency of 403 MHz and an assumed multiplication factor of 10. Profiles of speed error and height error are also shown (see text).

Figure 5. A comparison of the errors of heights and 10-second winds of simulated METRAC system data as obtained by simulating balloon release at points 1 and 2 as shown in Figure 2.

Figure 6. Minneapolis field system deployment. Each R represents the location of a receiver and X represents the location of the reference transmitter.

Figure 7. The trajectory obtained from the METRAC system data by following a prescribed path on the roof of a suburban hotel. The transmitter was fixed at the end of a long pole. The dashed line is the prescribed path.

Figure 8. Trajectory map for flights MF2, 3, 4, 5, and 7.

traversed the course with a transmitter fixed at the end of a pole. Figure 7 shows the prescribed course on the roof of the building and the trajectory as computed from the METRAC system. There seem to be some slight systematic discrepancies, but all the solutions are within roughly a meter of the prescribed values. Our transmitters were tuned to 403 MHz for these tests. The wavelength at this frequency is less than a meter, and we multiplied our Doppler frequency differences to enhance our resolution by a factor of eight.

A wind profile comparison test was carried out in the Minneapolis area to compare wind profiles obtained from the METRAC system with wind profiles obtained from rawinsonde and theodolite measurements. During April, eight balloons were launched from the top of a 22 story suburban hotel. Each balloon carried both the lightweight METRAC system transmitter and a standard 1680 MHz VIZ radiosonde. The radiosonde package was tracked with a portable WeatherMeasure RD-65 rawinsonde system on loan from the University of Wisconsin Department of Meteorology, Madison, Wisconsin. In addition, a theodolite was used to track the balloon optically when cloud cover and visibility permitted.

This section presents typical METRAC system data and wind profile comparison data obtained in the Minneapolis test. Figure 8 shows the x-y trajectories for the comparison data presented in this section. The locations of the seven receiver stations are also shown. The trajectories labeled MF3, MF4 and MF7 represent six minutes of data, and trajectories labeled MF2 and MF5 represent 28 minutes of data. Both flights MF2 and MF5 extend well outside the receiver station array.

Figures 9, 10 and 11 show the wind profiles (u, west-east component; v, south-north component) computed for trajectories MF3, MF4 and MF7.
Each of these figures presents the comparisons between METRAC-derived wind profiles and rawinsonde- and theodolite-derived wind profiles. The figures labeled (a) show the first six minutes of flight observations plotted once per minute, and the figures labeled (b) show the same flights with observations plotted at 20 second intervals. Since rawinsonde measurements were taken only once per minute, they are not included in the figures labeled (b).

Figure 9. Comparison of METRAC system-derived wind profiles with (a) 60 second theodolite and rawinsonde winds and (b) 20 second theodolite winds. METRAC flight MF3 launched at 0848 CDT on 6 April 1974.

Figure 10. Comparison of METRAC system-derived wind profiles with (a) 60 second theodolite and rawinsonde winds and (b) 20 second theodolite winds. METRAC flight MF4 launched at 0945 CDT on 16 April 1974.

Figure 11. Comparison of METRAC system-derived wind profiles with (a) 60 second theodolite and rawinsonde winds and (b) 20 second theodolite winds. METRAC flight MF7 launched at 1345 CDT on 17 April 1974.

Rawinsonde and theodolite winds are determined from measurements of elevation and azimuth angles and independently computed or inferred values of height. The accuracy to which a rawinsonde system can determine these angles is dependent upon the beam size of the antenna. Great precision generally requires the use of large antennas and complicated pedestal machinery. The RD-65 rawinsonde system has a minimum resolvable element of 0.1°, and experience shows that the RMS error may be as large as several tenths of a degree. Optical theodolite tracking with an experienced observer is substantially more accurate, with an RMS error of only a few hundredths of a degree. For comparison, the RMS error often associated with the GMD-1 rawinsonde system is 0.05° (Danielsen and Duquet, 1967).

Uncertainties in the determination of the height of the balloon also affect the accuracy of the horizontal position computed from the azimuth and elevation angles. This error is particularly significant at low elevation angles. For the Minneapolis test, thermodynamic heights computed from the radiosonde data were used for both the rawinsonde- and theodolite-determined winds. Errors in timing the angular measurements also look exactly like height errors in the computation and are critical when the angular position of the balloon is changing rapidly. This problem is most serious in the early parts of the flight and with strong winds. The errors described above can easily account for errors in the wind speeds of 1-3 m sec⁻¹ for 60 sec unaveraged rawinsonde winds. The errors will be largest in the radial direction due to uncertainties in balloon height and will be dependent on wind speed as described above.
These facts may explain the largest discrepancy between METRAC system-derived winds and the rawinsonde winds, which occurs in the u component of Figure 9. Because radiosonde heights were also used in determining the theodolite winds, the normal single-theodolite pibal tracking assumption of a known constant ascent rate was unnecessary. This assumption is particularly bad near the earth's surface in an urban environment. Double theodolite techniques must be employed to compute accurate pibal winds such as those presented by Ackerman (1974). However, by using radiosonde heights, 60-sec theodolite wind errors should be smaller than 1 m sec⁻¹. Figures 9, 10 and 11 show excellent agreement between METRAC and theodolite winds, even over 20-sec intervals, except for isolated values which were read or recorded incorrectly.

METRAC system wind accuracy is limited in a very different way from the accuracy of rawinsonde or theodolite winds. The differential Doppler numbers locate the balloon-borne transmitter between two hyperbolic shells found by rotating hyperbolas about an axis joining a pair of receivers (foci). The three-dimensional position can then be solved as the common volume formed by the intersection of three such regions. Clearly, the best resolution in position is obtained when this volume is smallest, and this occurs when the hyperbolic shells intersect one another orthogonally. Position errors can be large when the intersecting shells are nearly tangent. The minimum spacing between two shells for the Minneapolis test was 10 cm. When the transmitter is within the receiver array, the geometry of the intersections is favorable. Simulation studies show that expected accuracy for this case is within a few centimeters per second for 10 sec winds.

Actual errors in computed position and wind speed can occur if the Doppler cycles are not counted accurately. This happens when the signal-to-noise ratio becomes too small. In practice, this problem was uncommon and occurred only when the balloon transmitter was well outside the receiver array or located almost directly above a receiver. Utilizing more than the minimum number of four receivers circumvents this problem. Errors in computed position can also arise if the relative locations of the receivers are not accurately known. This error is most serious for long flights where the transmitter moves across the entire array. However, this problem can be overcome by accurate station surveying.

Figures 9, 10 and 11 show only the first six minutes of data, or up to nearly two kilometers in height. This is the region of most direct interest in air pollution study. However, the METRAC system can track the transmitter to very high altitudes and distances well outside of the baseline, as was shown by trajectories MF2 and MF5 in Figure 8. Figures 12 and 13 show the u and v wind components for these two wind profiles compared with the associated rawinsonde wind profiles. The agreement between the METRAC system measurements of winds and the rawinsonde winds is again excellent. In fact, one can observe the apparent increasing-amplitude oscillations in the rawinsonde winds in the top third of the wind profiles in both figures. These oscillations are due to increasing errors in positioning the balloon due to the fixed angular resolution of the rawinsonde system as well as the larger errors associated with low elevation angles. Both errors are characteristic of single-dish tracking systems whenever an independent measure of range is not available.
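To make the hyperbolic geometry concrete, here is a minimal editorial sketch (not from the paper; the receiver coordinates, noise level, and function names are invented): the integrated differential Doppler counts give range differences to receiver pairs, each defining a hyperbolic shell, and the transmitter position is recovered where the shells intersect, here by Gauss-Newton least squares.

```python
import numpy as np

# Hypothetical receiver layout (meters); a real system would supply the
# range differences from the integrated differential Doppler counts.
RECEIVERS = np.array([[0.0, 0.0, 0.0], [4000.0, 0.0, 10.0],
                      [0.0, 4000.0, -5.0], [4000.0, 4000.0, 20.0],
                      [2000.0, 2000.0, 0.0]])

def range_differences(pos, receivers):
    """Range to each receiver minus range to receiver 0: the measured
    quantity, one hyperbolic shell per receiver pair."""
    r = np.linalg.norm(receivers - pos, axis=1)
    return r[1:] - r[0]

def locate(measured_dd, receivers, guess, iterations=10):
    """Gauss-Newton solution for the transmitter position from the
    measured range differences (the intersection of the shells)."""
    pos = np.asarray(guess, dtype=float)
    for _ in range(iterations):
        resid = range_differences(pos, receivers) - measured_dd
        # Each range gradient is the unit vector from receiver to transmitter.
        u = (pos - receivers) / np.linalg.norm(receivers - pos, axis=1)[:, None]
        J = u[1:] - u[0]                      # Jacobian of the differences
        pos -= np.linalg.lstsq(J, resid, rcond=None)[0]
    return pos

truth = np.array([1500.0, 2500.0, 900.0])
dd = range_differences(truth, RECEIVERS) + np.random.normal(0.0, 0.05, 4)
est = locate(dd, RECEIVERS, guess=[2000.0, 2000.0, 500.0])
print("position error (m):", np.linalg.norm(est - truth))
```

With the transmitter inside the array the shells cut one another at favorable angles and the solution is tight; moving the simulated transmitter far outside the array makes the shells nearly tangent and the recovered position visibly noisier, which is the geometric effect the text describes.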
Errors in position computed from the METRAC system also increase when the transmitter is outside the receiver array. This effect, however, does not generally become significant until one is several times the baseline of the receiver array outside of the array, and it is probably undetectable in 60-sec or even 20-sec wind profiles.

Figure 12. Comparison of 60 second METRAC system-derived wind profiles with 60 second rawinsonde wind profiles. METRAC system test flight MF2 launched at 0035 CDT on 2 April 1974.

Figure 13. Comparison of 60 second METRAC system-derived wind profiles with 60 second rawinsonde wind profiles. METRAC system test flight MF5 launched at 1417 CDT on 16 April 1974.

For example, Figure 14 shows the comparison between 20-sec METRAC system-derived winds and 20-sec theodolite winds between 800 and 1150 sec into flight MF5 (see also Figure 13). The comparison is excellent except for one bad theodolite reading at 960 sec into the flight. The largest discrepancy is 1.2 m sec⁻¹, which, indeed, speaks well for the theodolite observer. The balloon is between 2 and 4 km outside of the receiver baseline and between 5 and 8 km from the launch point and theodolite during this period of data.

Figure 14. Comparison of 20 second METRAC system and theodolite measured winds for the 800-1150 second section of MF5.

Besides accuracy, one of the key features of the METRAC system alluded to above is the capability of small-scale resolution. Figure 15 illustrates this feature of high resolution with an example from flight MF7. The strong shear present in the u component of the wind is shown to be concentrated in an extremely thin layer of approximately 50 m depth.

Figure 15. Comparison of 60 second METRAC system measured winds with 15 second METRAC system measured winds from MF7.

Figure 16 illustrates the optimum resolution of the METRAC system when position is computed for each one-second sample. This 60-sec segment of data, with "wind" speeds plotted every second, shows the circular rotation of the METRAC transmitter suspended below the balloon. The detail of the motion appears to be at least as good as the detail of the balloon-induced oscillations which have been measured with the FPS-16 Radar/Jimsphere system and discussed by DeMandel and Krivo (1972). This kind of resolution encourages further evaluation of the use of the METRAC system to obtain measurements of atmospheric turbulence, or of even looking more closely at the response dynamics of the balloon as it ascends through the atmosphere.

Figure 16. Balloon and transmitter package oscillations derived from 1 second samples of METRAC system data from MF4.

Another important aspect of the METRAC system is that the solution is based purely on geometry and the physical laws of electromagnetic wave propagation.
In other words, determination of the three balloon coordinates x, y, and z is made without any assumption of balloon ascent rate or hydrostatic equilibrium. This feature, combined with the high degree of resolution, makes possible the measurement of the ascent rate of the balloon. As an example, Figure 17 shows the variation in the vertical velocity of the balloon computed as ten-second averages plotted every 30 sec for METRAC flight MF1. The deviations of the computed balloon vertical velocity from the mean are very reasonable for real atmospheric vertical velocities.

Figure 17. Balloon ascent rate as measured by the METRAC system for flight MF1.

Conclusions

The METRAC system is a new balloon tracking system capable of providing vertical wind soundings of great accuracy and high resolution. It has an immediate application in research on micro- and mesoscale atmospheric wind fields. It can also be used to track horizontally floating balloons to study atmospheric transport and diffusion. As an operational tool, it can be used to obtain accurate high-resolution wind soundings in support of meteorological field programs such as Project SESAME. With an additional modest development effort, temperature and humidity sensors may be added to the METRAC system package, and two or more balloons may be tracked simultaneously. Further development will also be required to add real-time computation and display of position information.

Acknowledgments

This paper is based on the Final Report to the Environmental Protection Agency under Contract 68-02-0760 on the feasibility of the application of the METRAC system to the RAPS program. The field test program was entirely supported by Control Data Corporation. Dr. M. Ulstad, originator of the METRAC system concept, provided valuable counsel to the authors, and Mr. R. Rust provided engineering support for the RAPS prototype system. The cooperation of the University of Wisconsin's Department of Meteorology is gratefully acknowledged for permitting us to use their WeatherMeasure RD-65 portable rawinsonde system.

References

Ackerman, B., 1974: Wind fields over the St. Louis metropolitan area. J. Air Poll. Cont. Assoc., 24, 232-236.

Danielsen, E. F., and R. T. Duquet, 1967: A comparison of FPS-16 and GMD-1 measurements and methods for processing wind data. J. Appl. Meteor., 6, 824-836.

DeMandel, R. F., and S. J. Krivo, 1972: Measurement of small-scale turbulent motions between the surface and 5 kilometers with the FPS-16 Radar/Jimsphere system. Preprints Int. Conf. on Aerospace and Aeronautical Meteor., Washington, D.C., 93-96.

Gage, K. S., and W. H. Jasperson, 1974: Prototype METRAC™ balloon-tracking system yields accurate, high-resolution winds in Minneapolis field test. Bull. Amer. Met. Soc., 55, 1107-1114.

Johnson, R. W., K. S. Gage, W. H. Jasperson, R. K. Kirchner, and R. C. Rust, 1974: The feasibility of the METRAC™ system for regional air pollution study. Final Report under Contract 68-02-0760 with the Environmental Protection Agency, Research Triangle Park, N.C.

Lally, V., 1974: A proposed new approach to vertical soundings for SESAME. SESAME Opening Meeting, Boulder, Colorado.

DISCUSSION

Q: (J. Scoggins) What price tag can we put on this system?
A: Everybody asks that question, and I wouldn't dare give an answer because there are too many options for the system to make a simple estimate meaningful. Also, the cost will vary greatly according to how many units are produced. I can say that the expendable cost, for quantities of a thousand or so, should be under $20.

Q: (Dr. Anthes) Does anybody aspire to get wind velocities with an accuracy of better than a meter per second? What would we do with it? Would it be representative of a 20 km area? Would it cost a lot less to get just one or one-half meter per second accuracy?

A: (Dr. Gage) Well, these are questions which I think should be answered very soon. I think that there is justification for doing a trade-off analysis based on our experience and the data that we have to try to answer the last question. Representativeness is, we feel, an important issue which should be explored soon in a mesoscale variability study.

Q: (Anthes) From a modeling point of view there's not a single model anywhere in the world that could use data that's better than a meter per second accuracy.

(Frenzen) That's no criticism! One of the problems is that the modelers are trying to model something that has never been observed. I had to say that sometime in these three days and now I can go home.

(Barrett) Did I understand you to say that you get higher accuracy by computing velocity by differencing position rather than by directly measuring Doppler velocities? That's very surprising, that when you introduce a differentiation you increase the accuracy. I would like to see the mathematics of that.

(Gage) Well, of course, the problem is that position is necessary for both solutions. Computing a velocity solution using finite-difference approximations to the true differentials introduces an error in the computed velocity which is dependent upon the finite-difference time interval and the geometric position within the hyperbolic coordinate system defined by the receiver array. A new position is then determined by integrating these velocities. This position, necessary for the next velocity computation, will be slightly in error, and this error will accumulate as the flight continues. Computing position directly yields a solution independent of time interval or geometric position. Errors in velocity are only as large as the uncertainty in positions (which is very small), and these errors are independent of time step. In practice, errors in horizontal wind speeds will be reasonably small in the direct velocity solution. A more serious problem can arise in the heights to which these winds are assigned. (See the illustrative sketch at the end of this discussion.)

Unknown: I would like to go back to the question on the sferics. If the Doppler system made measurements in the vicinity of thunderstorms, have either Ken Gage or Vince Lally made measurements in this environment, and what are the effects of thunderstorms on their system?

Gage: I do not believe that Vince Lally has made any measurements, so I will answer that question. We have not taken any data in thunderstorm environments. However, the frequency at which we operate, 403 MHz, is well outside the frequency of maximum electromagnetic energy in the spectrum of sferics activity. Systems based on Loran and Omega operate in the 10-100 kHz band, where the electromagnetic energy of sferics activity is concentrated. We do not anticipate any problems from sferics at the frequencies anticipated for METRAC system operation.
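The point Gage makes to Barrett can be demonstrated numerically. The following editorial sketch (invented numbers, not from the proceedings) compares winds formed by differencing directly solved positions, whose error stays bounded by the position noise, against a scheme that integrates noisy finite-difference velocities to obtain each next position, whose error accumulates like a random walk:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 10.0, 360                # 10 s samples over one hour of flight
true_wind = 8.0                  # m/s, constant for simplicity
sigma_pos = 0.10                 # assumed 10 cm position uncertainty

true_x = true_wind * dt * np.arange(n)

# Method 1: solve position directly each step, then difference for wind.
x_direct = true_x + rng.normal(0.0, sigma_pos, n)
wind_direct = np.diff(x_direct) / dt     # error bounded by ~sigma_pos/dt

# Method 2: form a noisy velocity each step and integrate it to obtain
# the position used at the next step; the position error accumulates.
x_integrated = np.empty(n)
x_integrated[0] = 0.0
for k in range(1, n):
    noisy_velocity = true_wind + rng.normal(0.0, sigma_pos / dt)
    x_integrated[k] = x_integrated[k - 1] + noisy_velocity * dt

print("direct:     final position error %.2f m" % abs(x_direct[-1] - true_x[-1]))
print("integrated: final position error %.2f m" % abs(x_integrated[-1] - true_x[-1]))
print("direct wind rms error %.3f m/s" % np.std(wind_direct - true_wind))
```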
NEXAIR Upper Air Sounding System

Wayne E. McGovern
Environmental Monitoring and Prediction, NOAA, Rockville, Md. 20852

ABSTRACT

During the past few years, NWS has made considerable progress developing a new upper air sounding system called NEXAIR that will satisfy projected requirements of the NWS for macroscale, mesoscale and microscale upper air data. Using navigational aids and a non-rotating antenna system under minicomputer control, the 403 MHz frangible sondes will telemeter pressure, temperature and humidity data to the ground. NEXAIR promises to be a safe, cost-effective, low-maintenance system capable of being operated in a wide variety of extreme environments.

I would like to preface this talk with a small disclaimer. In the arrangements for this meeting, I indicated to Doug Lilly that NOAA would try to have a representative here to discuss the NEXAIR programs. However, because of previous commitments, neither Dick Hallgren, Deputy Director of the National Weather Service (NWS), nor Dick Waters, Chief, Equipment Design and Development Branch, the individuals most familiar with this system, could make this meeting. In their absence I will try to fill in for them and give you a brief description of the NEXAIR system.

What is NEXAIR? (Slide 1) NEXAIR is a possible replacement for the present GMD/Weather Bureau radiotheodolite upper air sounding system. Several years ago new technological advances made it apparent that a new look at the upper air sounding problem was due. It was envisioned that an accurate, reliable, inexpensive, automated real-time data acquisition system could be developed which could obtain soundings from both ocean-going vessels and land stations. The system selected was a NAVAID or navigational-aid wind-finding system, in contrast to the two Doppler systems just discussed, for the Doppler systems did not appear to be suitable for ocean soundings.

WHAT'S NEXAIR
- NWS replacement for GMD/WBRT upper air sounding system
- Completely automatic real-time data acquisition
- NAVAID windfinding
- Frangible sonde
Slide 1

In actuality there were two contending NAVAID systems: an Omega system and a Loran-C system. The Omega system has eight stations and would give worldwide coverage. The Loran system, operating at a higher frequency (100 kHz) and thereby limited in range, is to be located along the east and west coasts of the U.S. The Coast Guard has indicated that the Loran system will be installed in the late '70s, but use of the coastal transmitters for midwest soundings would result in unacceptable wind measurements (errors of the order of 2-3 knots). Therefore, for SESAME, we will primarily discuss the Omega system, even though the Loran system has certain advantages (more accurate positioning and wind measurements, at least along the coast).

Schematically, as indicated in Slide 2, the Omega system consists of 10 kHz Omega signals and a 403 MHz sonde transmitter. The Omega signals, to establish the position of the sonde and winds, and the sensor signals, to telemeter the temperature, pressure and humidity data to the receiving station, are frequency modulated onto the 403 MHz carrier. The NAVAID receiving station consists of a fixed antenna, in contrast to the present GMD system, which has a complex tracking system.

This is a very quick overview of the basic NAVAID system; next we turn to the sensor subsystems (Slide 3). In the NEXAIR design analysis, seven basic types of temperature sensors and four classes of pressure sensors were examined.
Based upon this analysis, a wafer thermistor and a full-range hypsometer, with accuracies of 0.5°C and 1 mb respectively to 6 km, were selected. A full discussion of the properties of these sensors is given in reference 1.

Slide 2. NEXAIR detailed functional block diagram: delay/payout parachute; temperature, pressure, and humidity sensors; signal conditioner; commutator; met data digitizer; telemetry receiver; general-purpose data processor; local I/O; NAVAID transmitters; ground subsystem.

NEXAIR CAPABILITIES (Slide 3)

Parameter     Sensor                 Accuracy (RMSE)                 Remarks
Temperature   Wafer thermistor       0.5°C to 6 km; 1.0°C to 24 km   Response time 1 sec to 6 km, 3 sec to 24 km
Pressure      Full-range hypsometer  1 mb to 6 km; 0.5 mb to 24 km   Fluid: Freon 11
Humidity      Carbon element         3% RH                           Same as present operational sensor
Winds         LORAN                  0.5 knot (1 min averaging)      No coverage in midwest continental U.S.A.
              OMEGA                  1-3 knot (1 min averaging)      Wideband linear processor to read through sferics
Vertical resolution: met data every 10 sec; wind every 1 min
Telemetry: crystal (XTAL) controlled; closely spaced deployment on non-interference basis

After an intensive search for a better humidity sensor, a decision was made to continue to use the present radiosonde carbon-element hygristor. A number of other types of sensors appeared promising, but none are at the stage of development where costs and performance data are available. Systems under development are: the psychrometer, the dew point hygrometer, the barium fluoride element, and the coulometric hygrometer. These developments will be closely watched over the next few years for breakthroughs which could impact the NEXAIR design.

During an ascent, temperature and pressure data are averaged over 10 sec intervals (approximately every 50 meters with a 300 meter/min balloon rise rate) and winds once per minute. These design requirements are based upon operational needs; however, in a research program the resolution could be increased by half to an order of magnitude. The use of a crystal-controlled, narrow-band telemetry system from the sonde to the receiving station is important in a small-scale research program like SESAME, for it allows several sondes to be flown simultaneously without mutual interference.

At this time the meteorological and wind subsystems have been tested separately. In the next few months the combined system is to be tested. In addition, in July and August 1974, tests were carried out aimed at minimizing the effects of lightning on the information signal telemetered back to the ground. The results of these tests are due shortly (see Slide 4).

PRESENT STATUS
- Developmental model of hardware.
- Developmental software in two packages - Met and Winds.
- Specific analysis of OMEGA windfinding accuracy in sferics; comparison standard FPS-16 radar, Wallops Island, Virginia; data collected 7/15/74 thru 8/2/74.
- Met data chamber evaluation at T&ED.
- Pre-production units of safe battery - lithium chemistry accepted; pilot production of 3000 units underway; delivery 10/1/74.
Slide 4

The projected plans for NEXAIR are outlined in Slide 5 and include a one-year, two-station test, with a unified software package, at Tampa, Florida and Sterling, Virginia. A rocket-sled collision test between a sonde and an aircraft windshield, in order to evaluate battery and sonde frangibility (December 1974), will hopefully lead to an initial operational procurement in FY 1977.

PLANS
- One-year operational experiment (OMEGA) beginning March 1975 (LORAN available 9/75); 2 systems operating - Tampa, Florida and Sterling, Virginia.
- Procure sondes for operational experiment - pilot production.
- Design to accommodate windfinding by either of the NAVAIDs - LORAN or OMEGA.
- Unified software package (Met and Winds) being prepared for experiment.
- Rocket sled tests - sonde collision with aircraft windshield; sonde frangibility assessment.
- Operational buy for field implementation in FY 77.
Slide 5

Let me finish this introduction to NEXAIR with a summary of an analysis of upper air sounding systems performed by the Mitre Corporation in 1969, entitled, strangely enough, Report of Trade-Off Analysis on SESAME
Pre-production units of safe battery - lithium chemistry accepted. Pilot production of 3000 units underway. Delivery 10/1/74.
Slide 4

PLANS
One-year operational experiment (OMEGA) beginning March 1975 (LORAN available 9/75) - 2 systems operating - Tampa, Florida; Sterling, Virginia.
Procure sondes for operational experiment - pilot production.
Design to accommodate windfinding by either of the NAVAIDs - LORAN or OMEGA.
Unified software package (Met & Winds) being prepared for experiment.
Rocket sled tests - sonde collision with aircraft windshield. Sonde frangibility assessment.
Operational buy for field implementation in FY 77.
Slide 5

The acronym SESAME in this report means Systems Engineering Study for Atmospheric Measurements and Equipment, not Severe Storms and Mesoscale Experiment. I quote from the Mitre report:

"In developing and evaluating alternative upper-air sounding systems which meet Weather Bureau data requirements and which can be implemented in the early 1970s, it becomes apparent that the primary variable among the alternatives is the method used to obtain wind data. Trade-off analyses show that two techniques for wind-finding which are significantly better in a cost-effectiveness sense than is the current radiotheodolite system deserve careful consideration. One is a technique using navigational aids like LORAN-C or OMEGA. The other is a technique proposed by Control Data Corporation which uses the DOPPLER effect. Cost-effectiveness ratios of these two techniques are close when most probable costs are used, but the NAVAID approach has a significant edge when estimates of the highest expected cost are used. Additional factors not used in the formal trade-off model tend also to favor the NAVAID approach. Among these is the need for four or more separate sites for receivers at each observing station for the DOPPLER technique, with attendant site rental or acquisition problems, and the need for local communication facilities between each separate receiver site and the central data processor. Another factor is that LORAN-C and OMEGA techniques are in a more advanced development stage than is the DOPPLER technique. A third factor is that shipboard operation will be simple with the NAVAID approach. The DOPPLER approach has the virtue that it would be wholly owned by the Weather Bureau, and, also, it shows some promise for the MICRO scale of application. Based on these factors, the LORAN-C and OMEGA systems are recommended over the DOPPLER system, although the margin on the cost-benefit scale is rather narrow. The choice of either technique will produce improvements over the present system, especially in terms of lower total system cost."

Questions

K. Gage (CDC): How much will the NEXAIR sonde cost?

This depends in part upon the quantity ordered; last year, for 15,000 sondes, approximately $47 each; however, for 70,000 sondes the unit cost drops to $33. The total system unit price, including the radiosonde, battery, balloon, parachute, regulator, handling cost and miscellaneous items, was $58 and $43 for 15,000 and 70,000 units respectively. In comparison, the present 1680 MHz sonde costs $21 (in quantities of 60,000) and the total system cost is $32, a difference of approximately $11 per unit. However, due to inflation, the costs for 1975 are expected to be approximately 30 percent higher.

J.
Golden (NSSL): Maybe I am jumping the gun, and you'll have to tell me if I am, but it seems to me the discussions thus far have dealt primarily with new types of tracking systems for locating the three-dimensional position of the balloon and thereby improving the winds. But we are still back to square one; if we want to determine what I think is one of the most important of the many parameters that we are interested in for severe storm dynamics and energetics, namely, the equivalent potential temperature (θe) or its equivalent, the wet bulb potential temperature (θw), we have still got to come up with better sensors for getting the temperature and humidity. So far we haven't heard any discussion of how this might be done and I am interested in any ideas on it.

V. Lally (NCAR): Well, I'd like to make a couple of comments and then try to comment on that question, too. Mitre's study has me really worried because they say it is simpler to use a Loran or an Omega on a ship than the Doppler system. It is impossible to use a Doppler on a ship. But there is a more serious question concerning the use of the Omega system right now in the middle of the country. We can't use the North Dakota signal at night if we are within 600 miles of it, and the Norway signal is in shambles as it comes over Greenland. We don't have a decent geometry for the heartland of America with the Omega system. I wonder if somebody has looked at how we can put in something to supplement North Dakota being too close and Norway being just no longer readable over the ice cap.

With respect to the sensor, I just want to comment that the sensor we're using was developed in 1946. It is not bad, but if you want to pay an extra 50 cents or an extra dollar you can get a better temperature sensor. People have not been willing to pay any money for anything. We have a system that was developed in World War II.

W. McGovern (NOAA): I believe the temperature sensor and the pressure sensor are new, but the humidity sensor is the same in the NEXAIR system as in the present radiosonde system.

V. Lally: If we just calibrate the sensors, we meet everybody's requirement, even with existing instruments.

D. Lilly (NCAR-ERL): Well, I think it has been a good discussion, but what I want to know is if there is a chance for a fair fight on the NEXAIR design, or has the Weather Service made up its mind it is going to go for the NAVAID system and nothing else.

W. McGovern: That is why I had hoped that Dick Hallgren would have been able to attend.

Acknowledgement: The author wishes to thank Mr. Dick Waters for his assistance in preparing the original talk and for reviewing the subsequent paper.

References:

1. Lovkay, J. and Waters, R. H.: NEXAIR, Development of a New Upper-Air Sounding System, Equipment Development Laboratory, NWS/NOAA/Dept. of Comm. Presented at the WMO's Conference on Means of Acquisition and Communication of Ocean Data, Tokyo, 2-7 October 1972, p. 41.

2. Willis, J.; Kaetz, E.; Paquette, C.; and Golden, J.: Report of Trade-Off Analysis on Sesame System Candidates. Mitre Corp., Report No. 7013, 20 February 1969, p. 329.

THE ROLE OF SATELLITE SOUNDINGS IN SESAME

David S. Johnson

I am a very poor substitute for people like Dave Wark and Bill Smith, who are very busy working with satellite soundings. I suppose with the short time available it is probably hopeless to try to give a one-week course in satellite soundings anyway.
So what I would like to do is to try to stress some of the features of the output data possible from satellite sounding instruments that are either being flown experimentally now or are under construction for experimental and operational use in the latter part of this decade. So I will ask you to take it on faith how the soundings are obtained. I will simply state in elementary terms that we sense the upwelling radiation emitted by either carbon dioxide in the atmosphere, if we are dealing with infrared sounding equipment, or oxygen, if we are talking about microwaves. For purposes of SESAME, we reject the microwave method because it doesn't work over land anyway. So we are limited to the IR case. It is a very complicated and sophisticated process requiring extremely high precision of radiation measurement, probably never accomplished even in laboratories before we got into the space business. It involves very complex physics about which we have much yet to learn. You, I am sure, have heard many widely varying reports on accuracy. I am not going to go into the debates, or what is behind it all. Again I think you almost have to accept it on faith for the time being. No one in the Satellite Service is going to force you to use satellite soundings anyway, so you will be the judge eventually.

But what I think one has to look at in this day and age are certain features that derive from the satellite observations. To some extent this applies to any kind of remote sensing from satellites. But I think it is particularly important in the sounding area, because of the requirements of SESAME, to recognize the potential for very high resolution of soundings in the horizontal (but not in the vertical) and very high time resolution.

Now first, if I could have the first slide, I would like to just show you something that will be indicative of resolution. It is not resolution directly, because we are not talking about a radiosonde. What you see here are a set of so-called weighting functions of the atmosphere as determined by a sounding instrument (actually three instruments). The solid lines refer to the ITPR, which is an infrared instrument. Pete Kuhn is going to be talking about an adaptation of that to be flown on an aircraft. This senses in the 15 micrometer carbon dioxide band. The dashed lines refer to the Selective Chopper Radiometer, which is a special instrument developed in the United Kingdom by John Houghton and his colleagues which emphasizes the stratosphere. The microwave sounder developed at MIT relates to the dash-dot curves. The basic principle might be simply explained: this shows where the radiation is emitted by the carbon dioxide in the case of IR (or by the oxygen band in the case of the microwave) as a function of standard pressure height for the particular spectral intervals of those instruments. Changing the spectral intervals means sliding the curves up and down. You cannot do a great deal about making them more peaked. There are very definite limits to what can be done. So these curves tell where in the atmosphere the radiation measured by the instrument is coming from. The object is temperature as a function of height or water vapor as a function of height. We must determine what vertical composition of the atmosphere would produce the radiation measured in each of those channels. So it is an inverse problem, and that is where most of the fun comes. Now, just for simplicity, simply look at the solid curves as a rough approximation of the resolution.
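The shape of those curves can be illustrated with a toy calculation (this uses an idealized transmittance, not the actual ITPR channel physics): take a channel whose transmittance to space falls off with pressure, differentiate it with respect to the log of pressure, and the peak of the resulting weighting function marks the layer contributing most of the measured radiance.

    import math

    # Toy weighting function. Assume a channel transmittance of the form
    # tau(p) = exp(-(p / p_c)**b), where p_c (mb) sets how deep into the
    # atmosphere the channel "sees" and b its sharpness. The weighting
    # function dtau/d(ln p) then peaks at p = p_c. Purely illustrative.

    def weighting_function(p, p_c, b=2.0):
        tau = math.exp(-((p / p_c) ** b))
        return b * ((p / p_c) ** b) * tau    # equals -p * d(tau)/dp

    levels = [30, 50, 100, 200, 300, 500, 700, 850, 1000]   # mb
    for p_c in (100.0, 400.0, 800.0):        # three hypothetical channels
        profile = [round(weighting_function(p, p_c), 3) for p in levels]
        print("channel peaking near", int(p_c), "mb:", profile)

Changing the spectral interval amounts to changing p_c, which is the "sliding the curves up and down" described above; the breadth of each peak is what limits the vertical resolution.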
[Figure 1. Nimbus 5 weighting functions, dτ/d ln p, for the ITPR, SCR-2 and microwave sounder: vertical resolution as measured in IR radiation emitted by carbon dioxide.]

Now in the next slide (since you people will, I am sure, insist that it look like a radiosonde) here is the result of one particular inversion done by Bill Smith in the presence of clouds. He has an automatic processing model which tells where the clouds are. Of course there can be errors. We have made comparisons with simultaneous ascents at Yucca and Jackass Flats, and we get some idea that there is a correlation.

[Figure 2. Nimbus 5 ITPR retrievals (1848 GMT) versus radiosondes at Yucca (1902-2036 GMT) and Jackass Flats (1839-2001 GMT), southern Nevada (37N, 116W), September 3, 1973, with tabulated moisture content, cloud observations and surface (900 mb) skin and air temperatures.]

We have had a lot of statistics on errors and I shudder to quote them, but I am sure you will want to know something about it. I can say this: we are in the process of reducing the errors. The problem we have is what to use for the actual temperature as the basis for calculating our error. If somebody could tell us that, we would sure like to know it. We have found that if we use operational radiosonde data received at NMC, and if we prescribe that two radiosondes be taken within a couple of hours of each other and within 60 miles of each other, then the rms difference between the pairs of radiosondes, divided by the square root of two, comes out to two degrees. So, despite all of our calibration data and instrument specifications, when the temperature retrievals come out the other end to go into the computer, we don't know how far off we are. We are, nevertheless, asked to compare with the radiosondes.

Secondly, there is a sampling problem. We are making a volumetric mean measurement (in contrast to the radiosonde's point measurement). We do not have any really good information on the differences resulting from sampling differences. Bill Bonner at NMC has been trying to shed some light on this with respect to our satellite data. I think he has said recently that he has a sort of semi-scientific, but mostly intuitive, feeling that somewhere around one-and-a-half degrees can be ascribed to sampling differences. So we are dealing with two problems here: the errors of the radiosonde mixed up with the sampling problem. We cannot separate those out, so we have difficulty in making any kind of accuracy statements.
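The square-root-of-two step deserves a line of explanation: if each sonde carries an independent error of size σ, the difference of a pair has an rms of σ√2, so σ is estimated by dividing the rms pair difference by √2. A minimal illustration with invented numbers (not the actual NMC statistics):

    import math

    # If T1 = truth + e1 and T2 = truth + e2, with independent errors of
    # common standard deviation sigma, then Var(T1 - T2) = 2 * sigma**2,
    # so sigma = rms(pair differences) / sqrt(2).

    pair_diffs = [2.8, -3.1, 1.9, 3.4, -2.5, 3.0, -2.6, 3.1]  # deg C, invented

    rms_diff = math.sqrt(sum(d * d for d in pair_diffs) / len(pair_diffs))
    sigma_single = rms_diff / math.sqrt(2.0)
    print(round(rms_diff, 2), round(sigma_single, 2))   # about 2.83 and 2.0
    # An rms pair difference near 2.8 degrees thus implies the roughly
    # 2 degree single-sonde figure quoted above.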
Now, in the next slide, we will look at the horizontal resolution. Again this is for the ITPR. Measurements are obtained approximately every 15 miles within these two areas. These areas are over water, just off the Australian east coast, which is to the upper right. It is at midnight, and the "surface air temperature" is measured by using two different channels in the ITPR. One is the water vapor window and the other is the most transparent of the carbon dioxide bands shown on the first diagram. I am not showing this to discuss the data you see, but simply to give you some idea of the resolution.

[Figure 3. Horizontal resolution: full-resolution Nimbus 5 ITPR derived surface skin minus surface air temperature difference off N.E. Australia; spots about 15 n.mi. apart.]

The next slide is a sample from the AMTEX exercise. It shows something about the coherency of the data; an attempt to remove some of the effects of making a direct comparison between a radiosonde and the radiance data that has been inverted to look like a radiosonde. Here we are looking at the difference in the geostrophic zonal wind calculated from the ITPR data in the lower left cross-section versus the radiosonde calculation at the lower right. There is a fairly good number of radiosonde observations, as I recall, about 8 to 10 observations between 20 and 50 degrees north. There is a great deal of similarity in the two patterns. Bill Smith feels that the greater details in the ITPR data are real and are simply a result of the higher density of observations.

[Figure 4. Comparison of Nimbus 5 versus radiosonde geostrophic zonal wind, AMTEX area, February 23, 1974: "a better way of comparing radiosonde and satellite data."]

Turning to the question of clouds, the next slide is again from the AMTEX exercise and shows moisture along the same vertical cross-section. Radiosondes are indicated by circles in the center graph. The solid line is the total water vapor measured by the ITPR instrument in that cross-section. I am not showing it for the water vapor itself, but because water vapor measurements by the infrared technique are much more susceptible to the presence of clouds than is the temperature. The lower diagram shows that there was a fair amount of clouds. The amounts are calculated from Bill Smith's model for determining the effect of clouds, which he has to use in deriving either water vapor or temperature soundings.

[Figure 5. Effect of clouds: AMTEX water vapor cross-section at 60 n.mi. resolution, ITPR versus radiosonde, with THIR thermal image and calculated clouds.]

The numbers on the clouds show the calculated effective cloud cover at the indicated millibar height. "Effective cloud cover" is the percent coverage by a completely opaque cloud, or a somewhat larger percent coverage of a less than completely opaque cloud (e.g. cirrus), which would have the same attenuation effect on the radiance from below. The model allows for two separate layers of clouds. So there was considerable cloudiness in this case, and yet there was no break in the sounding due to water vapor. This is shown just to emphasize that you do not have to have a clear column to get the sounding. When you do have clear columns, you are going to get higher accuracy, of course, than you do with the partially cloudy scene. The break point is kind of hard to pin down, but it is somewhere in the range of five- to seven-tenths cloud cover. In this particular case, the calculations are made at approximately 60 mile spacing, each using four measurements with the 15-mile resolution of the instrument. Thus one can do better this way than by working with the full resolution.

I have attempted to show that although there is relatively low vertical resolution, we have very high horizontal resolution, even with instruments that are flying right today.
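A toy single-layer version of the effective-cloud-amount bookkeeping (my simplification for illustration; Smith's model, as noted, allows two cloud layers) treats the measured radiance as a blend of the clear-column and cloud-top radiances, and uses two adjacent fields of view to solve back for the clear-column value:

    # Single-layer "effective cloud amount" sketch. N is the fraction of
    # the field of view covered by an equivalent opaque cloud; a measured
    # radiance is a blend of clear-column and cloud-top radiance.

    def measured_radiance(r_clear, r_cloud, n_eff):
        return (1.0 - n_eff) * r_clear + n_eff * r_cloud

    def clear_column_from_pair(r1, r2, n_ratio):
        """Adjacent fields of view share the same clear and cloud
        radiances but differ in cloud amounts N1 and N2. Knowing only
        the ratio n_ratio = N1/N2 gives the clear-column radiance:
        r_clear = (r1 - n_ratio * r2) / (1 - n_ratio)."""
        return (r1 - n_ratio * r2) / (1.0 - n_ratio)

    # Round-trip check with invented radiances (arbitrary units):
    r_clear, r_cloud = 95.0, 40.0
    r1 = measured_radiance(r_clear, r_cloud, 0.3)     # 78.5
    r2 = measured_radiance(r_clear, r_cloud, 0.6)     # 62.0
    print(clear_column_from_pair(r1, r2, 0.3 / 0.6))  # recovers 95.0

This also makes plain why averaging several adjacent 15-mile spots helps: more samples give a better-conditioned solution for the clear-column radiance.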
Now what about the time resolution? Earl Kreins has talked about two approaches to this using geostationary satellites. The VAS that he mentioned is under construction. As far as I know no one has yet made a decision as to which GOES spacecraft it would go on. The problem here is that we never know whether we are talking about technology or money. Anyway, the plan is that when the sounder development is completed, and I would guess that it might be somewhere in the 1978-79 time frame, the VAS sounder may be launched on one of the Geostationary Operational Environmental Satellites. We would then have the opportunity of acquiring the soundings, as Earl showed, approximately once an hour, with spatial resolution similar to what we have shown here. So we could get the time resolution that way.

Even if that does not go, the TIROS-N satellite is expected to be launched about 1978. This is the prototype of our third generation operational polar orbiting satellite. It will have what will be at that time the most sophisticated sounding equipment yet available. We will have 4 or 6 observations per day because there will be two or three satellites flying in orbits with equator crossings planned now at 7:30, 11:30 and about 3:30 AM and PM local time. Each satellite would provide complete sounding sets with 15-mile resolution. Even though soundings from geostationary satellites may not be available by 1978, it is almost a certainty now that we will have a fairly high frequency of coverage from the polar orbiters, with considerable improvement in the quality of the soundings in terms of precision and horizontal resolution.

My personal feeling is, and you might have sensed it from some of my hidden sarcasm, that people should start looking at the physics of indirect sounding and stop trying to make these data look like radiosondes. I sometimes wish that the radiosondes would just disappear for a few years so people would have to think in terms of radiation physics. I feel we lose too much in trying to convert the radiation data to something that looks like temperature, and then go and differentiate it or integrate it in a model. This just contributes to error, and loses some of the basic properties of these remote sounding instruments. For example, on a satellite we have very accurate calibration. We can calibrate every five minutes, day and night, all the time, and it is part of an integral data set. It is all digital. We have control. We know what our instrument is doing and we are using the same instrument — one single instrument — for worldwide coverage. This provides much greater accuracy overall, as well as accuracy of the differences between adjacent measurements in either space or time. That to me is a very valuable control which would be very difficult to have with radiosondes, particularly where human beings must be involved. I am not against human beings, you understand, but they do tend to make errors. So I would make a special plea that in considering the experimental program in SESAME, somehow you bring into it people who are knowledgeable and interested in applying radiation physics to the question of measurements at the mesoscale. I think it is important to see how the various kinds of observations would fit together into a complete data set that gives the most information. Thank you.

DISCUSSION
J. Golden: Looking to the future, I was wondering if you would care to give us some indication of what you think the maximum horizontal resolution and the best accuracy in terms of vertical temperature soundings would be in the lowest 20,000 feet of the atmosphere.

Johnson: Well, the question of horizontal resolution I think is not technologically limited. It is money limited. So there I guess I would turn it around and say, "What can you justify?" I have a personal feeling. I think we have a lot of work to do with the 15-mile resolution which we have now. It is already there experimentally. It has already been approved for the operational system. It is really just a matter of gathering energy. One can use bigger and bigger optics on bigger and bigger satellites with better and better stabilization. Would you agree, Bill and Earl, that we are a long way from any technological limitation? As for accuracy, what am I going to say? If you give me a standard that we all can agree to, that also tells us what the atmospheric variability is on the scale of the satellite sounding, so that I can know what we are doing even now, maybe then I could answer that question. Again, I think we have got to take a look at the radiation physics approach.

E. Avara: I am not opposed to people who understand radiation physics studying the problem. I am all for that. Radiosondes have been maligned and I have to defend them. I think I can give you some estimates on the accuracy. I am working now with some data taken at White Sands. We were simultaneously tracking radiosondes up to 18 km. I find an error in the computed height of the radiosonde versus the observed height of 10 m. That indicates an error on the order of less than 0.2 degree over 18 km. That is equivalent to saying that when you run two radiosondes so far apart, the standard error is 0.2 degrees. I would say the standard differences are of the order of 2 degrees. The accuracy of these radiosondes, differentially, is of the order of tenths of a degree. You may have an absolute error. If you want to compare and get a standard, make a radiosonde flight, track it with a good precision radar, and you could determine the accuracy of the thermodynamic element. If there is a systematic error, then the deviation has to systematically increase or decrease. You can determine it very effectively.

Johnson: The 2 degree figure that I quoted is a mix of the atmospheric variability plus any errors that are in the operational system. I would like to emphasize that. I, too, have worked on experimental radiosonde programs, as some of you know; and with care and attention to what one is doing, including the transmission of the data, etc., you can do quite well. What I was quoting was the data that comes in to NMC and goes into their analysis model, which is what I have to live with in the operational world.

Smagorinsky: I tend to agree with Dave Johnson, although I must say that there is a lot to be learned yet in order to understand these differences between radiosondes and IR temperature inversions. Obviously, the aliasing errors of these systems are different from each other. They each have their advantages and disadvantages. It may very well be, at least with the large scale, that the maximum information content from satellite IR inversion is not in the absolute temperature at a particular level but in the horizontal temperature gradient across a swath.
What I would like to know is, with specific relation to the scales of interest to SESAME as opposed to the large scale, is it possible to increase the resolution so that you get gradients across swaths which are appropriate to the scale of interest there?

Johnson: Well, Joe, we are at 15 miles now with the ITPR, and we will have that on TIROS-N. So what do you mean?

Smagorinsky: You can determine coverage of gradients essentially across the swath on the order of 15 miles.

Johnson: Right. Now that is with the same accuracy that we had with the original SIRS A and B, which were spaced at 200 or 300 miles. That is the improvement that has occurred just in the last 5 to 10 years.

Smagorinsky: What you do is use other means to get the absolute values.

Johnson: Right. The gradients, I am sure, I am convinced, are very good indeed. It is the absolute values where you start getting into trouble.

Smagorinsky: Instead of trying to reduce it to the thing that is weakest, you actually don't try to use that information at all.

Johnson: Now, of course, if we average, say, four of these scan spots over 30 x 30 miles, we can do much better indeed. Not only in terms of the improvement of the signal-to-noise ratio, but improvement in our deriving the clear column radiances by correcting for the presence of clouds. That improves, too, with a larger number of samples. If you really set out and say, "I want to get the gradients as best as I can get them, wherever I can get them; I'll accept intervening clouds," then you would go about it in an entirely different way than has been done so far, and I think that one could get a great deal more information indeed.

Lilly: I would think that one should pose a serious study of trade-offs between horizontal and vertical resolution. I believe that this is being done in the satellite laboratory, but there may be more to work with than is widely realized, because the atmosphere has a coherent structure, as has been shown in analyses by Danielsen and others for many years, and now by numerical analysis using standard radiosonde data. These analyses frequently show hyperbaroclinic zones of, say, 60 miles across. The fact that there is an rms average of 2 degrees across them shouldn't be surprising, because lots of times there are 10 degrees between them. If you fly through these with airplanes you can see exactly the same thing. There is no question that they are there and can be analysed with effectively a finer horizontal resolution than the distance between radiosondes. Using such analyses one could then simulate on a computer the capabilities of an IR sounder and determine its ability to reproduce the analyses which we know can be obtained by use of sondes.

Radiation Sounding Techniques from Aircraft

Peter Kuhn

The work that I would describe is proposed research for SESAME. It flows almost directly from what Dave Johnson has suggested here, and I only wish that I could say that the vertical resolution could be improved over what is desired or is now obtained from the satellites. I think the tremendous advantage here, as a possible solution to the remote sensing problems, that is, for profiles of temperature and humidity, lies in a swiftly acquired increase in the data base. The horizontal resolution, time resolution and perhaps the scene resolution can be greatly enhanced over that of costly sounding sites.

This started in this fashion. Dr. Wark, Mr.
Yates and myself (Dave Johnson included) discussed the possibility of doing more temperature and humidity soundings, on a much denser network, in STORMFURY and in SESAME. Out of the meeting at Dr. Smagorinsky's laboratory in the east and APCL, our own laboratory here in Boulder, it was proposed to employ a sounder. We have been engaged in a series of such operations for several years, not on the scale, of course, that NESS is, but we have followed everything they have done. In fact we have copied it and attempted to adapt it more to our own work, which was measuring stratospheric water vapor for several programs. In any event, rather than go ahead and procure a new instrument, which would be extremely costly, it was proposed by Dr. Johnson that we use the ITPR and combine the two services, if you wish, ERL and NESS, and bring scientists from both groups together for not only STORMFURY, but also SESAME. It originally started with STORMFURY and this was done.

The basic idea is to change the filters upon the recommendation of an appropriate scientific meeting and sound from 300 mb (30,000 ft.) on down to the surface, preferably employing the NASA 990. At the time it was 711. Now, of course, it is 712 out of Ames. It was also mentioned that this proposal of putting the ITPR on an aircraft and flying it at 300 mb would also have to be adaptable to the NOAA aircraft, the P3Ds. This also has been discussed, and I spoke with Bill Smith about this recently in GATE and I will probably talk to him again in the next few days during the GATE operation as to what effects the P3D, with a much lower frequency and higher amplitude to its vibration, would have on his, or the NESS, instrument when compared with the very high frequency and very low amplitude of the NASA 990.

All right, so much then for the ideas. I guess it would be better to say that I am a colleague of Dr. Wark in this experiment. Guidelines have been set up. If we go ahead, a small amount of funding has also been set up to procure filters out of the ERL Office of the Director, and a program is underway to test the ITPR as early as we can next year at Medford, Oregon; Oakland, California; and Carson Sink, Nevada. Much of this depends on the requirements of NESS for ITPR. It is now flying in the GATE operation in a slightly different configuration and for a different purpose than soundings. But it will come back to Ames and be flown, I understand, in October for about two weeks. Then, of course, there is a period of time when the aircraft is down at Alameda and Air Research in southern California for modifications. The tests are planned early in the year and, sort of going at the back end of this first, I had made a statement to the Director of ERL in a memo that we could achieve a difference from the radiosonde temperature of ±0.3 of a degree, starting from 300 mb on down to the surface, having the true air temperature at 300 mb and making three soundings with the 990 in the space of four hours.

[Figure 1. Weighting functions dτ/d ln p versus pressure (mb) for channels at 678, 692, 699, 706, 714, 750 and 899 cm⁻¹.]

So with that I would like to go to the first slide (Fig. 1), and I would like to stay away, as Dave did, from the mathematics of radiative transfer, but I do have to bring up one point.
The radiation measured in the atmosphere at any level is a function of the emitted power and the transmitted power, and the key to this is that the weighting function, the change in transmission in a particular spectral interval with a change in depth of atmosphere, if you want, is the key factor in changing the ITPR, as it is now configured for and in satellite operations, to an operation from an aircraft at 300 mb. Obviously, instead of starting at 0.1 mb, or in space, we are starting at 300 and going on down.

Next slide, please (Fig. 2). Our weighting functions would look somewhat different from those of the satellite ITPR, and before deciding on appropriate weighting functions we should look at a typical depiction of the carbon dioxide band, the center at 672 inverse centimeters, the region of strongest absorption.

[Figure 2. Carbon dioxide absorption versus wavenumber, centered near 672 cm⁻¹.]

By changing our filters to weaker and weaker regions of absorption, of course, we change the depth in the atmosphere into which we "look", or the level in the atmosphere, as was stated, from which the radiation emanates. Here are the weighting functions that you already saw for a satellite. This is my poor FORTRAN programming. I wasn't able to properly label the frequencies that I use in the carbon dioxide band and these, of course, will be checked by Wark. They probably have been recomputed by Dr. Wark. The point here is that, in the region from about 700 to 760 inverse centimeters, plus one spectral interval at the ground, and using the change in the transmissivity with the change in the log of the pressure as the abscissa and the pressure as the ordinate, we can, from 300 mb on down to 1000 mb, pick out certain regions in the atmosphere from which we expect the greatest vertical detail. This is only for five filters in this case. I have talked to NESS people on increasing this to as many as 14 by using an additional radiometer, non-space qualified, aboard the 990 in addition to the ITPR.

Next slide, please (Fig. 3). This slide is an example of ITPR and also this is the new 990. This is Dakar about five weeks ago, and it will look like it again on Friday when we start all over again. The ITPR is in a position occupying four windows on the starboard side of the 990 as it is flown.

[Figure 3. The ITPR installed aboard the NASA 990.]

Bill Smith has been aboard, and many NESS people along with myself. Our equipment is on the other side, as we will see shortly, and has been flown to 40,000 ft. and on down to 500 ft. The indications I have received from the group aboard the 990 that is maintaining it, and of course it is under Bill Smith as PI, are that it performed excellently. It scans from the zenith on down to the nadir and has the seven channels. I won't go into the details on the channels. That has been published in the literature many times. Also, for the procedures that Dr. Smith and NESS people use in inverting the transfer equation, I would only cite, perhaps, the work of Wark, Leinisch and Yamamoto, Bill Smith and Fleming, and Drs. Hanel and Conrad from NASA (Goddard), and there are several others. There are many excellent presentations on how the inversion of the radiative transfer equation is conducted to retrieve the temperatures that are needed.
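To give a flavor of how such an iterative inversion proceeds, here is a bare-bones relaxation sketch in the general spirit of the Smith and Chahine schemes just cited (the weighting functions, the linear toy Planck function and the temperature profile are all invented; this is not the operational code):

    # Bare-bones relaxation retrieval on a coarse grid. The Planck
    # function is taken linear in T (a Rayleigh-Jeans-like toy), so the
    # classic relaxation update reduces to a multiplicative nudge at the
    # level where each channel's weighting function peaks.

    LEVELS = 5                       # coarse vertical grid, top to bottom
    # Each row: one channel's normalized weighting function (sums to 1).
    W = [
        [0.60, 0.25, 0.10, 0.04, 0.01],
        [0.20, 0.45, 0.20, 0.10, 0.05],
        [0.05, 0.20, 0.45, 0.20, 0.10],
        [0.02, 0.08, 0.20, 0.45, 0.25],
        [0.01, 0.04, 0.10, 0.25, 0.60],
    ]

    def radiances(temps):
        return [sum(w * t for w, t in zip(row, temps)) for row in W]

    def retrieve(measured, first_guess, iterations=4):
        temps = list(first_guess)
        peaks = [row.index(max(row)) for row in W]   # peak level per channel
        for _ in range(iterations):
            calc = radiances(temps)
            for i, j in enumerate(peaks):
                temps[j] *= measured[i] / calc[i]    # relaxation step
        return temps

    truth = [220.0, 235.0, 255.0, 272.0, 288.0]      # invented profile, K
    measured = radiances(truth)
    guess = [250.0] * LEVELS                         # isothermal first guess
    print([round(t, 1) for t in retrieve(measured, guess)])
    # Converges toward the invented "truth" profile after a few passes.

Each pass compares computed with measured radiances and nudges the temperature at the level where each channel's weighting function peaks; with a realistic Planck function the update goes through B and its inverse rather than a simple ratio.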
The ITPR actually has seven complete detector systems and filters where this uses one detector and 14 independent filters. There are some problems introduced in using this system, but Wark has indicated that it would be well worthwhile gcing ahead with the modifications on it. Figure 5 115 The next slide (Fig, 5) carries me into what I would term proposed 990 profile tracks. One of the advantages of the 990 is its speed. It is a Mach ,93 aircraft. It can fly a pattern. We have done it in GATE as tight as this easily at 450 knots. This is the original PDP SESAME box, 250 x 250 with the 20 sounding stations that are proposed. If this track takes 40 minutes for the 990 across Oklahoma, which it would in this case, 4 to 4 5 minutes, we propose ten individual soundings. It is possible to get at least four times that density of sounding data. I considered at the time this was prepared that we would try onboard data reduction. The 990 has three HP2100s each one of which is 32K and it has two disks and our computing people. Matt Lojko here in ERL and some with NESS, haves indicated that a limited amount of real time data could be accomplished during the flight with the onboard computer. There will be a detailed profile with infrared air temperature and proper humidity, the elements haven't been chosen yet, at the beginning. There is another profile made halfway through the mission or two hours out and a final profile is made at the end. So we have three profiles still allowing four hours to complete the mission. The number of soundings made are independent of the speed. I suggested 71 and then increased it to 142 and found we could do 284. These are the sounding stations that were outlined this morning and that are a part of SESAME. This is the augmentation and is a typical pattern that the 99 can very readily fly. We talked to the flight people at Ames, navigators and pilots, and that is no problem for four hours to do that. The next slide (Fig, 6) indicates an extended SESAME sounding area, I had thought that there might be some interest in extending the areas of soundings and again we 116 Distance : 300 km Speed; 425 kts Time ; 4 hours Profiles; 71 (142) Soundings: 3 Figure 6 are talking of horizontal resolution and time resolution to augment the existing upper air stations that will be used in SESAME, in no way trying to say that they should be eliminated. In fact Wark has suggested that we fly over several of these to obtain excellent relative accuracy across the array. But then I had heard that there might be some interest in extending this area so we have made a new box 864 X 560 km. and checked the time to fly this with a fantastic number of inferred soundings. By the way there would still be three aircraft soundings, one at the beginning, one in the middle and one at the end. We can do literally hundreds of profiles in this area in five hours. This was an alternate proposal. 117 Now, what do we do synthetically? We have been asked, not concerning the satellite, that has been clearly demonstrated, but what has the aircraft done in profiling. Bill Smith flew the unit in 1970 during one of the 990 expeditions and had varied success. In fact he told me that he had some accuracies in connection with radiosonde stations in Alaska and then up the coast of the United States that were ^0.5 to ^1.0 degree different from an ocean station sounding, depending upon what rules one might choose in assessing the relative accuracy of the two systems. I am certainly not qualified to do that. 
However, that was in real time from an aircraft. This is a synthetic Cape Kennedy sounding. We are trying to do work for them on profiling from the surface up to 10,000 ft. for shuttle approach control, etc. We had quite a bit of data at KSC and we made a first guess from a calculated radiation profile. The iterative retrieval procedure is the standard technique that the satellite people have done for 15 years, perhaps. We then tried to take the radiation in the seven channels that we employ and retrieve temperatures at individual levels throughout in as short a time as possible and see if we could duplicate the actual sounding. First guess was herefFig. 7). The little x's that are almost wiped out are the actual profile. This represents four iterations by a scheme that is a copy of that of Dr. Smith and Dr. Chahine. We have found that their two systems, the iterative or direct approach in our programs works best. Our rms errors here were of the order of ^0.5 degree, not the ^0.3 degree that I had originally hoped for and had expected. This is synthetic. It is not real. The real tests will be in about February of 1975. Another topic for the 9 90 in SESAME. I was told that I might have to sell the use, not necessarily of the 990, but of 118 300 400 500 - 600 700 800 900 1000 230 254 266 Temperature (°K) 276 !90 Figure 7 an aircraft for SESAME. We have talked this over with Dr. Kessler and have now procured here in ERL a new Texas Instrioments thermal imager with two calibrating channels. It is a two channel device and it also has two calibrating sources. It will be mounted very shortly just behind the nose wheel in the electronics compartment of NASA-712, the new 990. It is also being put on the C141A, the Starlifter, 119 Figure 8 because the astronomers at night would like to see what is going on down below them as for clouds. At least Dr. Mark has expressed an interest in getting that on to the 141. It is light weight, about 50 pounds. Alright, the imager goes back to an old program that Dr. Harry Wexler and Dr. Tepper suggested. That is the hot spot theory for tornados. It was dropped many years ago, but has received some revitalization recently. The next slide (Fig. 8) will indicate the type of results that we can expect from the infrared imagery. These are strips over in the BESEX array during the Bering Sea Experiment. Part of the testing was in the Imperial Valley 120 on February 8, 1973. These are strips across the Imperial Valley. The temperature scale is at the top. We use false color enhancement. We are about at the stage of getting that hopefully right here in ERL or in APCL. The imager will be put on the 990 to add not only the sounding capability, if this is used in SESAME or is suggested that it be used, but also the hot imaging spot idea. Here of course (I will only take one little box) irrigated areas completely water filled indicate about 24 degrees centigrade. The hot spots are the bright yellow colors... the color coded at the top. This has been used extensively by NASA in programs at Goddard (BESEX and it will be in BESSMEX and AIDJEX) . Brrinj Sea feh. 70. 1973 Piess'jri! altitude ■ 966 ft R.irtar altitude 50? tl Figure 9 The next slide (Fig. 9) is an example of what was done. I am sorry this happens to be ice and there are hardly hot spots. The temperatures range from -2 to -22 ''C. The thinnest ice is 121 indicated over at this end ... the red area. This is a mosaic about 80 miles long by about 3 miles wide. 
The resolution of the imager on the 990 which would, incidentally, augment our surface temperature data for the sounder is three-quarters of a milliradian by one milliradian. It is apparently just ahead of the military classification. These results were from 1000 ft. over the AID area in the Arctic Sea in 19 7 3 and we are ready to go again this February and March (19 75) . Notice that the last slide (Fig. 10) is an example of what can be done with imagary on clouds, false color thermal energy of stratus clouds at Cold Bay, Alaska. Again NASA-711, the 990 was the platform. The problem here was that at night it is very difficult to tell the difference between stratus and ice in the visible, let's say if you can see it at all in the moonlight. The imager which produces originally in shades of gray can be false color enhanced. This technique, in a very low cost system enables one to clearly pick out the difference between sea ice and stratus. In summary, the plea that I would make is that the 99 or other suitable aircraft such as our own (incidentally the P3 can take the imager also) be employed in sub synoptic programs such as SESAME. (The fittings for it will be designed by Ames and then it can be put right on to our own ships.) We would have two things, the sounder, ITPR, actually three, a filter wheel radiometer that we have, and then the last item that was discussed the thermal imager. It is my feeling that all three or at least the two principal sounding systems would be vital to required horizontal and time resolution for SESAME. 122 Figure 10 123 Data Hanagement J. Z. Holland I am going to reminisce for a few minutes. Fred White can reminisce with me. We had a thunderstorm project in the MO's and we are really talking about the first coordinated national effort since then to tackle the problem of the structure and dynamics of severe thunderstorms and thunderstorm systems. In the '50's there was established a national severe storms project which later became the National Severe Storms Laboratory, This didn't seem to be really tackling this job at least not to the satisfaction of some people. We had an Interdepartmental Committee for Atmospheric Sciences formed during that time. Alan Waterman was the first chairman and sometime around 1959, he set up a panel on mesometeorology under Don Swingle which I was honored to be a member of. There are a number of people here who came to the hearings of that panel in which we developed a great proposal for a national program in mesometeorology, a facility for mesometeorology, and it was going to cost something like 14 million dollars. Suffice it to say, that did not bear fruit. Later the Interdepartmental Committee for Applied Meteorological Research was formed under Clayton Jensen and a panel was set up under General Joe George which investigated the requirements for mesometeorological research programs. It was around that time that the conflict between the air pollution people and the severe storms people came out. There were also people who were interested in the mesometeorology of land and sea breezes, mountain and valley breezes, and forest fires. It was very hard to focus on a program. Recommendations were made for a national program 125 and I think that the Federal Committee which this interdepartmental committee reported to agreed there should be a program, but nobody could decide what it should be. Another panel was set up under this committee. I think that Mike Alaka was the chairman of that. 
It was a working group that again developed a very well thought out proposal for a national program in mesometeorological research. This thing is also gathering dust on the shelves. So, through the '60's, we had Gordon Little and Don Pack working up a thing, and then towards the late '60's everything had to end with an "EX". I mean there was BOMEX and the bandwagon was created. There was going to be TROMEX, which we now call GATE, and there was going to be GLOMEX, which we now call FGGE, and so the mesometeorological program became called MESOMEX. It isn't being called MESOMEX anymore for some reason, although shortly afterwards we did have METROMEX and FAPS and RAPS. Still these didn't seem to add up to the mesometeorology program people felt we needed.

Now we have SESAME, which is a magic incantation that is supposed to open doors. Some people say it really isn't descriptive enough. It should really be called Thunderstorm Project II. Or maybe it really is the National Severe Storms Project and that should be its name. I don't know. Maybe it is an idea whose time has come, and I hope so, although it looks as though it hasn't come quite yet. It is going to have to wait for a few other very voracious money-eating projects to subside before it can be properly funded. In fact this may give time for some of the preliminary programs or subprograms that are really needed in order for this one to be defined well enough to rally the support, as a coherent program, which it should get. I think at this meeting the plan will get loosened up rather than tightened up. It may take some more effort by a lot of people to get the thing into a really credible scientific research program.

The project development plan, taking it as a strawman, as an indication of what looks now to be a feasible order of magnitude for a project like this, is very interesting. One of the things it shows is that we now have something that we never had previously at the time of all these other proposals, and this is a highly developed indirect sensing or remote sensing capability ready to be focused on this problem. I used to say, and I think other people also, that for the problem of one kilometer or two kilometers resolution over an area of a couple hundred kilometers dimensions and several kilometers in height, with something of the order of a hundred meters resolution in the vertical, one cannot instrument the atmosphere with in situ sensors. You want time resolution of a few minutes. This is just not a problem for in situ sensing. It has got to be done with remote sensing. We see that we have apparently the capability, but these are unproven systems — these dual Doppler radars and the acoustic sounders and the FMCW sounders and lidar sounders and laser divergence measurers and this type of thing. These have not been put together and the data processed and taken through a scientific analysis procedure where we can say quantitatively what we can learn about the dynamics of mesoscale systems from them. They are promising. Pieces of the job have been done. A feasibility has been established.
So I feel that, rather than try to lay on a comprehensive scientific program utilizing these as the backbone observationally, one of the principles I have learned in my experience with these large field programs is that you have to have known systems to produce the really necessary data, and that you can use these as a test bed for new systems, which are then proven in the first experiment and are then available to be used as proven systems the next time around. So I think there is a stage of putting these remote sensing systems in the field together with in situ sensing systems, not necessarily conventional, but at least those whose processing and whose data are a well-known thing with which we have experience, so that we can relate the new kinds of data with the familiar kinds of data.

There is another thing suggested by this morning's discussion that needs to be done before the complete field program can be undertaken. This is, with all their deficiencies, to find out what the models require, because certainly as a minimum the field program must satisfy the observational requirements of at least some of the leading model testers and developers. Even though this may not be enough to define the entire contents of the project, it certainly has to be included.

Well, I have brought a few viewgraphs, and I have shown them here in Boulder before, but to quite a different audience. Slide 1 is sort of a generic project flow chart. A project of this magnitude, a large field project, invariably has subprojects that it gets broken up into. The project results are essentially put together as a synthesis of results from a number of relatively distinct scientific investigations, each one of which has kind of a coherent group of workers who communicate with each other in the same language and have their own methods and equipment. Starting with the project objectives, the first step is that these are usually subdivided into objectives which are laid on these subproject groups, and only then can the observational requirements be defined. The observational requirements then lead to the development of observational subsystems to satisfy these requirements, and at that point essentially the design of the experiment is determined. The requirements are not only requirements in terms of what the observational subsystems have to do, but exactly where they are going to be, how frequently they are going to observe and record, and exactly what form and quality of data has to be eventually delivered to these people in order for them to carry out their analyses. In fact it may be the same people, and in old research projects it was the same people, who developed the subsystems to meet these requirements, went into the field, and at that point they are together and you might call the aggregate of these subsystems a system. They then processed the data, analysed the data, and then got together for the synthesis of the results. In the modern large project of the kind we are looking at now, what happens is that you cannot really give each scientific subgroup the luxury of an independent observational subsystem. They are going to state their requirements in terms of temperature, humidity and wind profiles. Now we are talking about interactive projects (Slide 2) in which the scientific subprojects really draw on a bunch of different subsystems.
These could be an aircraft system, a certain remote sensing system, an upper air sounding system, a surface observation system, and the subprojects might be concerned with different aspects of the data, but each one of them is going to make demands on a number of the different subsystems. When these subsystems are operated as a total system in the field — and in fact some of the system aspects of these are that they must be; if there is going to be turning on and off of this system depending on the weather, they have to be responsive to certain commands and operate in a coordinated way. There can be economies of people, perhaps by colocating some of the sub-subsystems and having parts of various subsystems operated by the same personnel in the field. So it certainly would be efficient to have an actual system concept for the field operations. In general the processing systems are uniquely related to the observational subsystems.

[Slides 1 and 2. Generic project flow charts, non-interactive and interactive: project objectives → subprojects ("experiment design": data requirements) → observational subsystems ("system engineering") → observation system ("field operations": raw data and samples) → processing systems ("data management": validated data) → archive center → subproject analyses ("data analysis") → project results.]

[Slide 3. Data management (a responsibility of the project manager). Staff function (narrow definition), carried out by the project data manager: planning, monitoring and coordinating those aspects of data collection and processing required to assure that the data are delivered to the user service center (i.e., the archive center) on the schedule, in the media format and file structure, and with the quality, quantity and resolution, and associated documentation, to enable the service center to satisfy the requirements of the primary users. Performing function (broad definition), carried out by a designated "data management organization": all activities of inventorying, quality assurance, transferring, copying, reduction, processing, editing, validation, filtering, reformatting, storage, retrieval, documentation, indexing, cataloguing and distribution of data, from the time of first recording to the time of delivery to users.]

[Slide 4. Definitions. Collection: manual or automatic recording; taking of physical, biological or chemical samples; inventorying and packaging for transfer from the collection platform; transfer of data or samples from platform to processing laboratory. Processing: sample analysis (chemical, biological, physical); transcription, decommutation, digitization, key punching, etc.; conversion of units, calibration, reduction; editing; smoothing, averaging, filtering; formatting; quality control, validation, verification.]
Each particular data acquisition subsystem in general has its own unique processing system. I am talking about, generally speaking, computer processing of data that has been recorded on tape or something like that. You always have, besides magnetic tapes coming out of the field, all kinds of manuscript documentation, and sometimes you get actual data sheets and forms and other things like punched paper tape, punch cards, photographs and other film forms of data. Eventually these have to get into the subproject groups again for analysis, but now the requirements have to be made up by data from all the different subsystems being recombined in the packages that are required by the subprojects. This is where what you might call a data bank or archive center comes in. The archive turns out to be a convenient organization to do this job. Some projects set up their own data bank arrangements, and so this is not meant to exclude that. I come from the Environmental Data Service, and we have a National Climatic Center, National Oceanographic Data Center and National Geophysical and Space Data Center, this last one being right here in this building. We have been finding that if we use their servicing capabilities, get the data into them quickly and get things set up so that the data users get their data from these established data centers, they get better service than if we try to do it in an ad hoc, informal way.

Let me say just a few things about this data management phase that I have shown there, to try to establish a common language, and then I am going to let Scott Williams elaborate on the data management aspects of SESAME. We use the words data management in two ways (Slide 3). We find that in these major projects there are people who get called data managers. There is usually a data manager for the project, and in GATE we have a data manager on each ship. We have a field data manager operating a data center in Dakar. The data manager doesn't do everything in the data management area, but he is a staff person generally who plans and monitors and coordinates. In the broad sense we use the words data management really to cover all these activities involved in getting the data from the form in which it is collected in the field to the form in which it is required by the users.

Now the words collection and processing are in here, and people use these words in different ways. I am just going to tell you what I mean by the collection and processing (Slide 4) which are the things that the data manager oversees. The processing, in my definition, includes everything from the raw data in the form it is received from the field to the point where it is ready to be distributed without restriction, but with suitable documentation and qualifications, to the users.

The functions of the data bank or archive center (Slide 5) are to maintain efficient files and to be able to retrieve anything quickly and at the cost of copying the data. They can do some reformatting and computing work on data, but this all increases the cost and the time. What we have tried to do is to package the data in a way we feel will be convenient for the user.
Users simply retrieve these packages without change. This is the cheapest and quickest way to get data out of these centers. Their job is to disseminate the data on request, to publish information about what they have and how to get it, and, of course, to assure the long-term integrity of the data, which is, I think, what most people associate with the word archive. What I am trying to tell you is that the functions of the archives are not limited to putting data in vaults so that nothing will happen to it and keeping it there for many centuries. In fact, they are a very useful device. They are a data bank. They are a useful component of a data management system for a project, and we are exercising them in both IFYGL and GATE, putting on them the burden of giving fast turnaround, getting data both to project participants and to any other users, and getting the information out as to what they have.

PROBLEMS OF EXPERIMENTAL DESIGN

Amos Eddy
Professor of Meteorology and Environmental Design
The University of Oklahoma

It seems to me, from reading the material that was mailed out to us and from listening to what's been said today, that there is dire need for some experimental design. This has been talked about by the last two or three speakers; it's been talked about, but no one has said exactly what it is. I would have a series of questions to pose if, for instance, I were a funding agency thinking of giving money to a group to run an experiment like this. The first thing that I would ask is: what are you trying to find, what is the problem? I'll define that a little more in a minute. And then: how will you know when you have found it? Are you merely going to absorb all the time, energy, talent and money that's available and then quit, or are you going to stop when you have proved something? We have seen a lot of really high-class hardware described today, but will it be deployed in an optimal fashion? If it were to be located in a slightly different configuration, would the experimenters get more information relative to their problem? Another question: what form will the results take? I guess, because the modelers are prominent here, the result will be an objective analysis, on an x,y,z,t mesh, of various types of parameters, and possibly values of confidence limits at those x,y,z,t points. How about the climatological structure of those parameters; how well is it already known? So, to be a little more specific: if I were going to consider funding this, or even working with the group that would be working on SESAME, I would like to know about the null hypotheses with respect to the signal. What sort of signal scale are you looking for? Can you describe it at all, or are you just going to go out and find what's there with the instrumentation that you have? One excellent signal generator would be a numerical model. A numerical model has as part of its makeup the size and shape and even variations on the form of the signal.
It seems to me that it would not be possible to design an experiment to look for information about a particular signal on a particular scale if you had no idea what that scale was, how often it occurred, or where it occurred. Another question would be: what are your noise hypotheses? What is the structure of the noise? What is the noise associated with the instruments, with the varieties of instruments, and with the small-scale components which are below the scale you are investigating? Then there is the sampling noise, which is connected with the x,y,z,t mesh on which the results are going to be presented. What kind of objective analysis will be done? Will it be an objective analysis which in fact will give confidence limits at those points? There are objective analyses like that. Will the objective analysis be constrained so that when you analyse the temperature fields and the associated wind fields there is some sort of consistency insisted upon between the two fields? Consistencies could be put in the form of dynamic constraints. For example, do you not want to disobey somebody's private theory on the relationship between thermal fields and wind fields? Would one want to look, using multivariate analysis of variance such as Pat Avara just described, to discover from the interaction terms the relationships between the various fields? When one has found, from very careful examination of the data, the significant interaction terms between the various parameter fields, these could be put into the objective analysis scheme as statistically controlled dynamic constraints. You can always make a balance between those and what you think are the real dynamic constraints according to some dynamic theory. At any rate, one could run most of an experiment like this in a computer before ever getting into the field. You set up a network; there is one proposed. You have instruments; put them where you feel they should go. Make the observations. Run the signal through in the computer. Sample those points in space and time, add noise that you think is appropriate, feed it into your objective analysis scheme, and then find out if you can actually make an analysis of what you think is going to be passing across the observing network. If you can't make an objective analysis, maybe you ought to think about the experiment a little more. After you have made the objective analyses, move the network just a little bit and see if the confidence limits on the new analyses improve. Now, there is a logical attack on sensor placement that presupposes some knowledge of how the signal structure can be expressed, for example, in covariance terms. Given this, one can discover logical places to put the sensors. Given the signal covariance function and the noise structure, these two things together can give a very good idea of where to sample in order to get the optimum resolution of the parameter fields and of combinations of parameter fields. If value is expressed in terms of reduced standard deviation, or money, or whatever other measuring device you want to use for the value system, then this can be used to influence the deployment. Some type of Bayesian approach should be used to combine the value of your results, the climatology or the empirical evidence you already have, and the numerical models and what they say is going to happen.
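The covariance argument can be written compactly; the following is the standard optimal interpolation formulation, added here for reference (the notation is not Eddy's). The analysis at a point $x_0$ is a weighted sum of the station values,

$$\hat{s}(x_0)=\sum_i w_i\,y_i,\qquad (\mathbf{C}_{ss}+\mathbf{C}_{nn})\,\mathbf{w}=\mathbf{c}_0,$$

where $\mathbf{C}_{ss}$ is the signal covariance between station pairs, $\mathbf{C}_{nn}$ the noise covariance, and $\mathbf{c}_0$ the vector of signal covariances between the stations and the analysis point. The expected analysis error variance,

$$\sigma_a^2(x_0)=\sigma_s^2-\mathbf{c}_0^{\mathsf{T}}\,(\mathbf{C}_{ss}+\mathbf{C}_{nn})^{-1}\,\mathbf{c}_0,$$

depends only on the station locations and the assumed covariances, not on any observed values, which is why candidate networks can be compared before anyone goes into the field.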
Such a combination can lead to a deployment of instruments which will tell you what you can expect for a certain sample size, and how much effort you have to put in, in number of instruments, in number of samples, or in dollars, in order to increase your confidence by a little bit. So one might envisage a process whereby you say: well, it's going to cost us so many more million dollars to get only a little bit more definition of this phenomenon, and maybe we'll stop before that. Now I'll tell you about some of the experiments we've run. On the planetary scale, we've simulated stratospheric sudden warmings. Marvin Kays at White Sands did this. He built an empirical signal generator, ran simulated sudden warmings through the met-rocket network, and decided how to sample optimally using rocketsondes. He used these simulated observation sets to obtain objective analyses and evaluated the results. On the synoptic scale, we flew aircraft around in a computer to sample divergence and convergence around arctic highs. We took a flight pattern suggested by the GATE group, obtained simulated observations and then an objective analysis. Next we juggled that flight pattern around using a nonlinear programming technique and found a better flight pattern, better from the point of view that the objective analysis we got looked more like the signal we had sampled. We've done experiments with satellite simulations. I was going to show some of those, because we have results to illustrate what kind of detail you can pick up at various levels by deducing wind fields from satellite radiance data. This is all simulated using the computer. We have obtained some of the Illinois State Water Survey rainfall data. In particular we have analyzed thunderstorms associated with a squall line that went through the network. We found the signal structure functions, we evaluated the noise, and we did objective analyses using one-minute rainfall data from all 50 stations in the Goosecreek network. Then we withheld half of them, and did it over again. This is not the way I propose doing an experimental design, but it does make one of the main points. Then we withheld 3/4 of them and obtained the analysis series again. If the object in doing the field experiment were to calculate the volume of rain that fell on a particular watershed, then these three analyses make a pretty good demonstration that you could get by with 22 out of 50 stations and still obtain a good estimate. If you try to use only 12 out of 50 stations, the errors of estimate become unacceptably large. The objective analyses were obtained by taking the data for 26 one-minute intervals, deducing the structure of the signal and the structure of the noise, and putting that into the optimal interpolation objective analysis scheme. [A movie was shown comparing time series of the objective analyses obtained from each of the three sampling networks described above.] Now, this isn't the ideal way of designing a network, by overobserving and then throwing out some of your data, but I just wanted to show that if you make use of such an objective analysis scheme, you can design a network which will not have too many stations, and waste money that way, nor too few stations, and not get you enough data to define the signals that you're looking for.
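A minimal sketch of the computer rehearsal Eddy describes, under stated assumptions: a Gaussian signal covariance on a 100 x 100 domain, 50 randomly placed stations, and station-withholding trials patterned on the 50/22/12 Goosecreek example. The covariance form, noise level and station placement are all illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def cov(a, b, L=30.0):
    # Gaussian signal covariance (an assumed form; Eddy would deduce one
    # from data or take it from a numerical-model signal generator)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-(d / L) ** 2)

# analysis grid and a candidate observing network
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)], float)
stations = rng.uniform(0, 100, size=(50, 2))

# one "truth" realization drawn jointly at grid points and stations
pts = np.vstack([grid, stations])
C = cov(pts, pts) + 1e-6 * np.eye(len(pts))
z = np.linalg.cholesky(C) @ rng.standard_normal(len(pts))
truth_grid, truth_stn = z[: len(grid)], z[len(grid):]

def rms_analysis_error(idx, noise_var=0.1):
    """Observe a station subset with noise, analyze back onto the grid by
    optimal interpolation, and return the RMS analysis error."""
    s = stations[idx]
    obs = truth_stn[idx] + np.sqrt(noise_var) * rng.standard_normal(len(idx))
    Css = cov(s, s) + noise_var * np.eye(len(idx))   # signal + noise at stations
    Cgs = cov(grid, s)                               # grid-to-station covariance
    analysis = Cgs @ np.linalg.solve(Css, obs)
    return np.sqrt(np.mean((analysis - truth_grid) ** 2))

full = np.arange(50)
for n in (50, 22, 12):      # mimic the 50 / 22 / 12 station trials
    idx = rng.choice(full, n, replace=False)
    print(n, "stations -> RMS analysis error:", round(rms_analysis_error(idx), 3))
```

Moving or thinning the stations and rerunning shows directly how the analysis error responds, which is exactly the decision Eddy wants made before the field phase.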
I would like to finish with this idea: if you have a notion of the climatology, of the scale size you're looking for, and of the noise associated with the instrumentation, you can adopt a logical approach to looking for the structure of the phenomena that you're trying to investigate.

SESAME DATA MANAGEMENT PROBLEMS

S. Williams

I have a few figures which I made up by way of helping us in CEDDA think about project SESAME. We thought that they might also be useful to other people. If any of you are interested, there are a few copies over on the table that you can pick up after the meeting. They do, of course, have a strong data management and processing bias, but don't let that turn you against them too much (fig. 1). The first group of figures is simply extracted from the project development plan, and we have here a statement of the purpose and scientific objectives of the project (fig. 2). To meet those objectives, we have these principal investigations named, with no names yet of who is going to do the investigating. We see the first group is primarily descriptive and the second is primarily modeling, to see if we can really do descriptions numerically (fig. 3).

[Figure 1. PROJECT SESAME. Purpose: to gain a fuller understanding of the structure and evolution of convective storms as a function of their environment. Specific objectives: a. to study mesoscale triggering and organizing processes; b. to study convective storm evolution and environmental feedback; c. to develop numerical models of mesoscale phenomena.]

And to carry out these investigations, we have these component analyses that will be necessary. Most of them will contribute to several of the major investigations, and most of them, also, would be recognized by an old weather forecaster as the conventional type of analysis, except applied on the mesoscale rather than the macroscale. A few new ones are added, such as the arc cloud analysis, gravity wave analysis, and the low-level jet, which again are familiar to a lot of forecasters (fig. 4). These data acquisition systems will furnish the data to be analyzed. The next figure (fig. 5) lists the various organizations plus the data observing and acquisition systems which they will operate. Now, to make, or maybe I should say allow, all of these organizations and systems to operate together satisfactorily, we have to have some sort of an administrative structure. And again, that's indicated very briefly in the project development plan (fig. 6). We have some positions listed, again with no names. On the bottom of the figure are a couple of positions which I have added that are normally part of an exercise like this but, in this case, appear to be included with the observing systems.

[Figure 2. PROJECT SESAME principal investigations (investigation / principal scientist, names not yet assigned): mesoscale triggering and organizing processes; storm development and environmental feedback; storm structure; mesoscale mathematical model (fine-mesh model); cloud model; hybrid model.]

So whether or not they'll be needed as part of the project director's office remains to be seen. This figure (fig. 7) describes the data management function and is very similar to one shown by Dr. Holland (fig. 8). Here's a diagram of the structure of a large environmental experiment, and the fact that the instrumentation frequently comes first doesn't really make a great deal of difference. It still has to make sense looked at from the top down, or else there's no point in carrying out the exercise.
The second figure in this group (fig. 9) is also very much like one Dr. Holland showed. One addition is down here in this area, where I show the exchange of intercomparison data. In an exercise like this, where we have a number of observing systems taking data which have to be used together, it is essential that we have numerous intercomparisons. The intercomparison data must be analyzed carefully in order to be able to use all the data as a coherent set. Because if we can't do that, we merely have a lot of little experiments running at once; we don't have one big experiment.

[Figure 3. PROJECT SESAME component analyses: arc cloud; boundary-layer heating; cirrus distribution; gravity wave; low-level jet; macro-scale and meso-β synoptic analyses at several levels (wind, temperature, moisture, fronts, squall lines, dry lines); meso-β divergence (mass, moisture); meso-γ mass divergence; moisture advection; nephanalyses; rainfall; turbulent flux (heat, moisture, momentum); storm-scale analyses (3-dimensional wind, water droplet size and distribution, total water substance, water vapor); vertical wind shear. Also need macro-scale forecasts.]

[Figure 4. Observation and data acquisition sub-systems: aircraft (meteorology, navigation, radar, radiation, ITPR); dual doppler radar wind; dual frequency doppler radar wind; FM-CW radar (turbulent structure); macro-scale synoptic rawinsonde system; macro-scale synoptic surface system; meso-β windsonde system; meso-γ surface system; meso-γ special observation systems (laser transverse wind, acoustic wind, acoustic sounding); microwave radiometer (integrated water); tethered balloon boundary layer system (?); tower boundary layer system; weather satellite (synchronous, polar orbiting); weather-surveillance radar (cloud observation, precipitation measurement).]

[Figure 5. PROJECT SESAME participating organizations (organization: function or system). NSSL: basic surface and upper air observations, doppler radar, weather radar. WPL: FM-CW radar, acoustic sounders, microbarographs, laser transverse wind detectors, doppler radar. RFF: C-130, P-3D (2, with gust probes). NASA: CV-990. U. of Chicago: dual wavelength doppler radar. Ill. State Water Survey. NWS: operational WSR-57 radar, operational rawinsondes, operational surface observations. NESS: meteorological satellite observations. NESS-APCL: airborne ITPR on NASA CV-990. EDS (CEDDA): data management. EDS (NCC): archiving.]

[Figure 6. PROJECT SESAME principal managerial personnel (position / name, names not yet assigned): Project Director; Field Operations Manager; Data Manager; Analysis and Research Manager; (Instrumentation Engineer); (Data Processing Manager).]

The archive function is also a very important one. In order for the archives to be of use to people outside the project, as well as to participants, it's necessary for the archivists to do a little advertising. They do that by publishing catalogues and atlases to tell people what is available in the archive (fig. 10). This rather cluttered-looking diagram emphasizes the first portions, the preparation portions, of the experiment. Chiefly it shows that a lot of planning is required to get ready to go into the field. We have to have an experiment design, and we have to have an analysis plan so that we know what the data requirements are, and this in turn gives us at least some guidance as to the instrumentation and the calibration requirements. It's also necessary to train people to use the equipment in the field.
If we're going to be operating a lot more sets than we have used in laboratory exercises, and if we're going to operate them around the clock and for days at a time, we need more than laboratory personnel to operate them. We have to train some ordinary-type observers to do this, and it is worth a fair amount of training beforehand.

[Figure 7. PROJECT SESAME data management: a staff function of the Project Director's office; a planning, coordinating, and monitoring function with the responsibility of assuring that scientific data requirements are met as to type, quantity, quality, and timeliness. The Data Manager participates in all planning, from experiment design to scientific analysis, to advise on data management aspects of proposed actions and analyses. He monitors all phases of the operation with special attention to data collection, preservation, retrievability, quality, and timeliness. Depending on the organization of the experiment, the Data Manager may also be responsible for procuring and overseeing data processing services.]

[Figure 8. Structure of environmental experiments: experiment design; detailed plans for analysis and instrumentation; data field phase; data processing; validation; archives and analysis; catalogs, atlases, publication.]

[Figure 9. Scientific and operational interactions: scientific tasks and observational requirements; observing systems, field observations and processing systems, with exchange of intercomparison data, evaluation and in-house analysis; indexing, cataloging, reformatting and distribution of analysis data and analyses; publication of catalogs and atlases.]

[Figures 10 and 11. Experiment progression: experiment design, data operations plan, site selection, calibration; deployment of instruments for test, then complete deployment; field phases I and II, each followed by processing, evaluation, analysis and archiving, with calibration, intercomparison, evaluation and critique feeding each succeeding phase.]

The next figure (fig. 11) emphasizes the observing and processing portions of the experiment. Some of the blocks on the figure deserve special mention. We must have calibration and intercomparison in order to be able to use all of these data as a coherent set. The documentation of what we do at every step is very important, so that people may know what the data are which they are trying to use. One step which is not normally of much importance, but which I suspect will be very important in Project SESAME, is to select the data that will actually be processed. It appears that it will be easily possible, and maybe necessary, to collect much more data than could ever be processed in any reasonable time period and for a reasonable amount of money. Therefore, we have to set up criteria not only for the collection of the data but also for the selection of those data to receive complete processing. Now, if this first observation period (top of the figure) were the field trial, then I would not expect the analysis and archiving functions to be carried out. If it were the second field phase, then, of course, everything below this line (below the "analyze" block) drops out. In order to get a handle on the magnitude of the data management and processing problems, I have tried to estimate the quantities of data that we might get from the experiment (fig. 12).
These are the major assumptions I used in making my estimates, and you can see this still leaves a lot of leeway in the numbers I can come up with. As far as that goes, I will not guarantee to duplicate my numbers if I should go through the computations again (fig. 13). Now, here is the first group, on the number of rawinsondes. I come up with 9,000 flights, almost 7,000 hours of rawinsonde data. My estimates of the number of magnetic tapes and the processing costs are based on what it costs CEDDA to process the Loran C sondes from the IFYGL experiment after all of the development is taken care of. We have a largely automated process to handle the IFYGL data, with some manual inspection and a little bit of manual input being necessary; the manual input was primarily start and stop times. (I can't get the engineers to give us a proper indication of launch, although it seems to be quite simple to me.) There were a few problems with the data that it didn't seem to be worthwhile to try to program, so we also used manual input to tell the programs what to do with those. I end up with a cost of $100K to process these flights. I would expect that using the doppler windsondes there would be more data and the processing cost probably would be higher, although again I expect that it could be much more fully automated than could the Loran data. However, that remains to be seen.

[Figure 12. SESAME I: major assumptions used in estimating data amounts. 20 storm days; 10 non-storm days for comparison; 15 soundings per rawinsonde station per day (30 days); 45 minutes average flight time; 10 observations per hour for surface stations; 8 hours of WSR-57 weather surveillance per storm day per set; 4 hours of dual doppler radar data per storm day per set; 24 hours of acoustic wind data per storm day per set; 24 hours of laser transverse wind data per day per set; 4 hours of data per aircraft per storm day.]

[Figure 13. Estimated data quantities, rawinsonde and surface systems (observing system / number or amount of observations / recording media / processing cost / data elements to be archived / archiving media). Rawinsonde: 9,000 flights (6,750 hours); 100 magnetic tapes, 18,000 strip charts, 60,000 misc. paper; $100K to process (plus computer time); archived: time, height, latitude, longitude, temperature, RH, wind direction and speed, u, v (5-sec. averages, 10-mb averages) on about 40 tape reels and 120 rolls of microfilm plots. Surface stations: 7.5 x 10^5 observations (7.5 x 10^4 hours); 135 magnetic tapes, 26,100 strip charts (2/station/day), 2,000 misc. paper (logs, calibration data); $100K to process; archived: time, temperature, wind direction and speed, u, v, rainfall rate, total rainfall on tape reels and microfilm rolls.]

[Figure 14. Estimated data quantities, radar systems. Dual doppler radar, 10 cm (2 sets, 1 pair): 160 hours; 2,100 magnetic tapes, 30,000 photographs; $250K to process; archived: time, height, latitude, longitude, wind direction and speed, u, v on magnetic tape. Dual doppler radar, 3 cm (2 sets): 160 hours; 2,100 magnetic tapes, 30,000 photographs; $250K; same elements as for 10 cm. Dual doppler radar, 10 cm and 3 cm: 160 hours; 2,100 magnetic tapes, 30,000 photographs; $250K; same elements as for 10 cm. WSR-57 weather surveillance (4 sets): 640 hours; 80 magnetic tapes, 115,200 photographs (18 150-ft rolls of film); $32K; archived: time, height, latitude, longitude, intensity, surface rain intensity, total rainfall on tape reels and 10 rolls of microfilm plots.]

[Figure 15. Estimated data quantities, special systems and first aircraft. FM-CW radar: 160 hours; 20 magnetic tapes, 100 ft of strip charts; $10K; archived: echo analysis on 20 tape reels and strip charts. Acoustic sounder: 2,880 hours; 720 magnetic tapes; $100K; archived: time, height, wind direction and speed, u, v (2 levels, 12 times per hour) on magnetic tape and 2 rolls of microfilm. Laser transverse wind (8 sets): 5,760 hours; 720 magnetic tapes; $100K; archived: time, transverse wind, divergence (12 times per hour) on magnetic tape and 1 roll of microfilm plots. C-130B aircraft (1): 80 hours; meteorology, cloud physics, radar.]
[Figure 16. Estimated data quantities, remaining aircraft and satellite (quantities not yet estimated). P-3D aircraft (2): 160 hours; meteorology, cloud physics, gust probe, radar. CV-990 aircraft (1): 80 hours; meteorology, cloud physics, ITPR, radar. Geostationary satellite.]

Then, for final products going into the archives, we have about forty reels of tape and 120 rolls of microfilm. Incidentally, something that should be looked at very carefully is the amount of auxiliary data accompanying these tapes. For the type of operation we conducted in IFYGL, and again for GATE, that would amount to 18,000 strip charts, that's two per sounding, and another 60,000 pieces of paper of various kinds to tell what we have on the strip charts and on the tapes. That is almost an irreducible minimum; you have to have a lot of auxiliary data of this type. If you can find some way to put it on tape, fine, but we haven't found a way to automate all of that yet. For the surface stations, I assumed an observation about every 6 minutes, although I haven't been perfectly consistent with that throughout these estimates. I don't know what sort of auxiliary information is required with the observation data. I have largely taken NSSL's estimate of what the data processing costs are, although I may not have made a proper breakdown of their numbers. The processing costs are for one SESAME field exercise, not necessarily by calendar year and certainly not for the whole experiment (fig. 14). Now I'm getting into an area with which I am less familiar. On the dual doppler you come up with a lot of tapes in a hurry, so I'm told. If you run those for four hours per storm day, and there are 20 storm days, why, for one dual doppler set you come up with 160 hours of data, which is about 2,400 magnetic tapes. Now that's a lot of data. To process that for $250K is going to require a substantial reduction in the amount of manpower that is currently required. The process must be much more automated than it is now. Whether that's practical or not, I don't know; I may be looking at it rather naively. I assume that we have about the same amount of data and work involved in the three different types of doppler sets. On the weather surveillance radar, the processing costs here agree roughly with what it costs CEDDA to process the weather surveillance type radar data, which we're using primarily for precipitation measurements (fig. 15). Now I'm getting into stuff I know even less about, and you see that down at the bottom of the figure I just give up and list that there's probably a lot of data to be processed (fig. 16). Well, the same thing: we have some more aircraft and some satellites. All I know about the amount of data from the satellites is that the number of tapes runs into four-digit numbers in a hurry. Again, we're talking about a lot of data. There's an awful lot of processing to be done, and we should think very carefully about how we are going to use this stuff before we go out and collect it. O.K., that's all I have.
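The quoted totals follow directly from the Figure 12 assumptions, and the tape counts can be reproduced the same way. A rough sketch follows; the station count and the minutes-per-tape figure are assumptions back-solved to match the numbers quoted in the talk, not values stated anywhere in it.

```python
# Back-of-envelope rerun of the Figure 12/13 arithmetic. Only the per-day
# assumptions come from Figure 12; the station count and tape capacity
# are illustrative assumptions.
STORM_DAYS, NONSTORM_DAYS = 20, 10
SOUNDINGS_PER_STATION_PER_DAY = 15
FLIGHT_MINUTES = 45

stations = 20  # assumed rawinsonde network size
flights = stations * SOUNDINGS_PER_STATION_PER_DAY * (STORM_DAYS + NONSTORM_DAYS)
hours = flights * FLIGHT_MINUTES / 60
print(f"{flights} flights, {hours:.0f} hours of rawinsonde data")
# -> 9000 flights, 6750 hours ("9,000 flights, almost 7,000 hours")

# Dual doppler: 4 h per storm day per set, two radars in a set.
doppler_hours = 4 * STORM_DAYS * 2
minutes_per_tape = 4  # assumed: one tape holds only ~4 minutes of radar data
print(f"{doppler_hours} h of doppler data -> "
      f"{doppler_hours * 60 // minutes_per_tape} tapes")
# -> 160 h -> 2400 tapes, the figure quoted in the talk
```

Seen this way, the tape counts are driven almost entirely by the recording rate, which is why the selection criteria for what actually gets processed matter so much.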
Lilly - This is the first time I've seen this material, and I suddenly feel like I'm riding on the back of a tiger! I am concerned about taking care of the scientist who has a hypothesis to be tested, who looks at the early data, decides that the instrumentation or the reduction is inadequate, and that he needs further work done in those areas. How are his needs dealt with?

Williams - The place for that to occur is after the field trial. I didn't mention the field trial, but I'm glad to see that it is planned. It serves several important functions. One of them is that it should prove that the equipment actually operates under field conditions, with operational-type operators, rather than laboratory-type operators, running it. Two, it should prove that all systems can operate at the same time without interfering with each other; that's a very important point that is not always true if you just take equipment into the field and try to use it, as we have found out from sad experience. Another, it should give some sort of a test as to whether or not the equipment will really meet requirements, and provide the data needed for analysis.

Holland - Well, let me comment too, then. This is one of the reasons why I feel that the time can be used well if the major field phase slips a little, because I think that there is a need for this kind of iteration before you go into the big thing. In fact, some of these numbers can be whittled down. I think it's quite important to establish exactly what your data requirements are, because if you don't, you're going to collect a whole big pile of data anyway, and you may not be doing it very effectively. So this is maybe sobering.

Kessler - A couple of points I would like to make, or suggest. I tend to disagree mildly with the emphasis on hardening the definition of requirements. I think that we also need to bear in mind, while we try to do that, and we must be successful to an important degree in that, that the technology is always ongoing in this area, and that what seems like a good idea this year is going to be changed by new developments in both instrumentation and theory next year. Therefore, we can't completely freeze our recording concepts or our data acquisition concepts. That's just one point. The second point is that I think it's very important not to separate very far the data observing and acquiring mechanism from the scientists who are going to use the data; this is the question that was raised a moment ago by Lilly. I think the scientists should get the first look at it, not after the field trial, but during the field trial, the next day. At NSSL we found it necessary to run our radar data through the computer immediately so we can get a quick look,
When you speak of the 2400 reels of doppler tape, this would occupy all of the meteorologists in the whole world for the next century, to analyze that data. Holland - We have 7,000 reels of SMS tapes coming from GATE. Kessler - This reminds me of the difference between what one says and what one does. Now you spoke a few mo- ments ago about a caution that we must take with respect to material in vaults which would be available on an indefinite basis but when we speak of 7,000 reels, this sounds to me as though there is going to be a lot of stuff in those vaults in spite of our saying that you want to avoid this condition We have to actually avoid the condition. Not just to say that we're going to avoid it. One avoids this again through the quick look process by identifing certain cases worthy of analysis and you concentrate on those and you throw out the others. There's going to be another observational year and God will make more weather tomorrow and so on. At NSSL this year, I think we have something like 165 tapes and we don't plan to process it all. We process some of it. It starts through varying degrees of quick looks but if we spent 152 anything like $250,000 to process these data, there would be no program, no refinement of observational procedures and no funds left for anything but archival of data which would never be utilized. So to summarize I think that by bringing the scientists right into the experiment that one can take care of a number of these problems and I think we make a grevious error if we think that we can sit back here in 1974 and harden an experiment to be conducted in 1979. In the spring of 1979 there have got to be some changes and we'd better be darn well prepared to make them even at that late day. S. Williams - I'd like to make a couple of comments. One is that I can't disagree with you very strongly. As a data manager and processer, it would be very much to my advantage, and I think it would be to yours, if we could harden this and say right now precisely what we're going to do in 1978. As a practical matter, I know it can't be done. However, there are a lot of plans that must be made, and made yery carefully, in order to coordinate an exercise of this magnitude, and it's not too early now to start making those plans. The fact that we will change them next year is of no consequence at the moment. We must get started and we can't start yesterday; I think we must get started now. I realize that we are going to make changes but we should make those changes rather carefully. Holland - Let me comment too. There's really two points that I want to make. One is that while we might differ as to how hard requirements should be made, I don't think that we will differ on the proposition that require- ments must be established, must be defined, subject to change and revision. But there are lead time requirements, and some of them will be frozen long before the experiment, some of them can be changed up until a short time before the 153 experiment, some of them cannot because systems have to be designed, tested and they have to be operational when they get into this complex program. You must not jeopardize the success of the experiment by fiddling around with design parameters which are going to impact operational success at the last minute. That's point one. 
Point two is that it means you must think about the experiment after next. As you say, this will not be the last experiment, and the ideas that you get for how it should be done after the cut-off date at which the design of this experiment has to be frozen have to be incorporated into the design of the follow-on experiment. I think that the idea of having a field test in the plan, a first field phase in the plan and a second field phase in the plan is very intelligent from that viewpoint, but I think that the time interval allowed between the first and second field phases is going to be too short to evaluate the operations, decide the results of the first phase, decide how to make changes for the second phase, implement those changes, and be ready for another full-scale operation in 21 months or whatever is available. I personally feel that that is going to put far too much pressure on all of the people involved, the data processors, the data analysts, the field equipment people and so on, to make any significant modifications and do it differently. If you're going to plan it that quickly, it is essentially going to be the same field phase repeated over again. You can see this if it's nine months or if it's a year; maybe you think that if it's two years you can change. I think it will take three years if you have to change it.

Ogura - I'm not going to disagree with what's been said, but on certain points this needs input from the scientists.

Holland - The way this is handled in GATE, I think, is very good. The scientists are there and they're making decisions on what is to be done every day, but from the standpoint of data management we already know what sensors are on the aircraft, what media they are recording on, and in what format they are recording. And we have the software ready to process the data; the total amount of data is even constrained by the flying hours that are available. So the parameters required for the data management plan, and for readiness to process the data promptly, have been set long in advance. The exact things that are done in the field are decided by scientists at the time. We have a whole hierarchy of scientists: we have a mission selection team and a mission scientist on the ground and an airborne mission scientist, and there is something like this in each of the systems. The ships have chief scientists, and there is a "Special Analysis Group" at Dakar. So I think the idea of scientists participating in taking the observations, in processing the observations, in inspecting the first products of the program is important; I couldn't disagree with this at all. I think that this must be built into it. I don't feel that it conflicts with anything that we have said about the data management planning. I think that we still have to have a plan; we have to have some boundaries put on the requirements for the data and some specifications as to just exactly what observations we will get, in what format, and in what quantities, so that we can prepare to process these things, know what the magnitude of this job is going to be, and be assured that it is within our means to do it.

Ogura - The program must provide resources to be sure that the measurements made can be analyzed.
Holland - Again, this is outside of the system of data management, but I would like to endorse your plea that all of the project proposals and plans should include a complete job, including completing the analysis with all of the necessary processing and computer work involved, so that the agencies which are supporting this can understand in advance what the total task is going to be and ensure that there are plenty of resources in the budget, in time, in money, and in people, for adequate analysis.

COMPARISON OF RADIOSONDE MESOSCALE WIND SHEARS AND THERMAL WINDS

Elton P. Avara
Atmospheric Sciences Laboratory
U.S. Army Electronics Command
White Sands Missile Range, New Mexico

ABSTRACT

An experiment was performed at White Sands Missile Range (WSMR), New Mexico during the period August 1973 to March 1974 to test the hypothesis that mesoscale thermal winds can be used to estimate real wind shear profiles. Nearly three hundred radiosondes were released in groups of three to six. The data obtained were subjected to a detailed statistical analysis. The hypothesis was found to be true except near the tropopause. A discussion of the analysis performed upon the data and of the results is presented in the text.

The Atmospheric Sciences Laboratory of the U.S. Army has been working in mesoscale analysis for many years in projects to support combat operations. Contrary to popular opinion, the Army does not necessarily waste money. Many of our projects are designed to save money or resources for the Army. The objective of one project in particular, SATFAL, was to replace radiosondes with satellites as the standard system to obtain winds above 10 km for nuclear fallout calculations. Fallout winds are merely average winds over pressure layers or altitude layers. Army requirements specify a minimum frequency of observation in time and space. Satellite data are potentially the most economical source of such observations. The usual method for obtaining winds from satellite data depends upon using clouds as tracers. This method, however, is not always applicable above 10 km because of the lack of suitable tracer clouds. Therefore, another technique, using thermal winds to estimate wind shears, was selected for investigation. Most meteorologists tend to believe that mesoscale thermal winds are of little value in estimating real wind shears. Considering the mesoscale variability of winds and temperatures, there are few reasons to assume the thermal wind equation is valid on the mesoscale. In order to test the hypothesis that the thermal winds can be used to adequately estimate the real wind shears, an experiment was conducted at White Sands Missile Range (WSMR), New Mexico during the period August 1973 to March 1974. Fifty-three data groups were obtained. Each group consisted of at least three and no more than six sets of radiosonde profile data taken at approximately the same time. Each radiosonde obtained profiles of temperature, pressure and wind velocity as functions of altitude. A set of pressures called the MRN mandatory levels was chosen as a convenient basis to reduce all radiosonde data for further analysis and comparison. Linear interpolation was performed upon the temperature (T), wind velocity components (U and V) and log pressure (P) to obtain the data values at the desired pressure levels.
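For reference (the paper does not write it out), the relation being tested is the thermal wind equation, which in its standard pressure-coordinate form reads

$$\frac{\partial \vec{V}_g}{\partial \ln p} = -\frac{R}{f}\,\hat{k}\times\nabla_p T,$$

so the geostrophic shear across a pressure layer is determined by the layer-mean horizontal temperature gradient. The question the experiment poses is whether this large-scale balance relation still holds when the gradient is evaluated over station separations of only about 30 to 100 miles.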
The six radiosonde sites used during the experiment are depicted in Figure 1. The north-south distance between White Sands and Stallion is approximately 100 miles, while the east-west distance between Holloman and Stallion is approximately 30 miles. Since the wind is predominantly from the west, the WSMR radiosonde network is usually sampling a strong crosswind. The average temperature gradient is approximately north-south.

[Figure 1. Map of the six WSMR radiosonde sites, including Stallion, Jallen, Holloman, White Sands and Apache.]

[Figure 2. Flow chart of the analysis: input U, V, T; compute S_u, S_v and average temperature T; two-way layout MANOVA; reject data outside 3-sigma limits; two-way layout ANOVA and two-way layout MANOVA; compute wind shear and thermal wind; multivariate regression, multiple regression and simple linear regression; output: analysis results.]

Figure 2 shows a flowchart of the analysis performed upon the radiosonde data. Profiles of shears in U and V and average temperatures were computed for each radiosonde. Due to a high noise level in these data, a three-point binomial filter was applied to each profile. The results were profiles of shears S_u and S_v and average temperatures T. A two-way layout multivariate analysis of variance (MANOVA) was used to edit these data. Each three-vector was composed of values of S_u, S_v and T taken over the same pressure layer with the same radiosonde. The two fixed effects were station or location bias and time or synoptic influence. Station bias was expected due to the locations of the radiosonde sites relative to the mountains. Each could experience different mountain wave effects, and mountain waves frequently occur during the winter and spring at WSMR. Time influences were expected because of the variation in synoptic weather patterns. These changes, however, would be common to the wind shears and temperatures obtained from all radiosondes released at the same time (same group). The residuals remaining after subtracting these two effects from the observed S_u, S_v and T data were calculated, and the variances of these residuals were computed. An individual data value was discarded if the corresponding residual fell outside the three-standard-deviation tolerance limits. The remaining data were accepted for further analysis. Thermal winds were computed between each pair of radiosonde sites from the T data, and the wind shear component parallel to each thermal wind was computed from the S_u and S_v data. Each of these thermal wind and wind shear combinations formed a two-vector. These data vectors were subjected to a MANOVA, and each component was subjected to a univariate analysis of variance (ANOVA).
The base site was changed after each regression analysis until each site had been selected. The multivariate regression analysis consisted of 1) regressing the wind shear vector onto the corresponding thermal wind vector, and 2) testing the residual covariance matrix and each element in the regression coefficient matrix for statistical significance. Following the same procedure set forth in the MANOVA- ANOVA analysis, multiple regression was performed and the results compared with the results obtained from the multi- variate regression. The multiple regression analysis consisted of i; regressing each component of the v;ind shear individually onto the vector of thermal winds, and 2) testing the residual variance and each element of the regression coefficient vector for statistical significance. In general, the results from the different regression analyses were compatible but 161 difficult to interpret on a physical basis. The overall conclusion was that thermal winds do contain a significant amount of information concerning the actual wind shears. The direct relationship between the wind shears and thermal winds is easier to analyze through simple linear regression than the previously mentioned regressions. Simple linear regression, one wind shear vector component regressed onto one thermal wind vector component, was per- formed upon the data and the results are shown in Figures 3 and 4. The table in Figure 3 gives the average ratio of residual wind shear variance to total wind shear variance as a function of radiosonde base and pressure layer. Similarly, the table in Figure 4 gives the average residual 2, wind shear variance in (mps/Km) as a function of radiosonde base and pressure layer. The residual variance in question is the variance of a wind shear vector component which cannot be explained by simple linear regression onto a thermal wind vector component. The tropopause at WSMR is usually between 150-200 mb. Referring to Figures 3 and 4, this region is relatively poor for estimating wind shears from thermal winds. Possibly the ageostrophic component of the wind is relatively largp resulting in a weak relationship between actual wind shears and thermal winds. Another possibility is that radiosonde data errors may be greater in that region than elsewhere leading to a poor comparison betv;een the two computed data sets. Both above and below this tropopause region the linear relation- ship between the thermal winds and wind shears improves and becomes significant. The hypothesis that mesoscale thermal winds can be used to estimate real wind shears appears to be acceptable except near the tropopause. It should be noted that Figures 3 and 4 do not contain data from Apache radiosonde site. During the analysis outlined in this paper data from Apache was found to be incompatible with data from the other sites. The reason for this incom- patibility has not yet been determined. As a result of this discovery, all Apache data was edited out and the full analysis was repeated on the data from the other sites. 162 PRESSURE LAYER (mb) 100-125 125-150 150-175 175-200 200-250 PRESSURE LAYER (mb) 100-125 125-150 150-175 175-200 200-250 RADIOSONDE SITE HOL WSD JAL STL SMR 0.1840 0.1367 0.1865 0.1126 0.1179 0.3650 0.2652 0.2926 0.2578 0.2465 0.7551 0.7672 0.5460 0.5249 0.7561 0.8412 0.7777 0.7217 0.6877 0.8412 0.4920 0.3624 0.4574 0.3295 0.5143 -MJy Fi^re 5. 
[Figure 3. Average ratio of residual wind shear variance to total wind shear variance, by pressure layer and radiosonde base site.

Pressure layer (mb)   HOL     WSD     JAL     STL     SMR
100-125              0.1840  0.1367  0.1865  0.1126  0.1179
125-150              0.3650  0.2652  0.2926  0.2578  0.2465
150-175              0.7551  0.7672  0.5460  0.5249  0.7561
175-200              0.8412  0.7777  0.7217  0.6877  0.8412
200-250              0.4920  0.3624  0.4574  0.3295  0.5143]

[Figure 4. Average residual wind shear variance in (mps/km)^2, by pressure layer and radiosonde base site.

Pressure layer (mb)   HOL     WSD     JAL     STL     SMR
100-125              1.887   1.781   2.304   1.586   1.535
125-150              3.511   2.785   2.715   2.994   2.596
150-175              5.976   6.121   4.359   4.332   6.699
175-200              6.182   5.410   5.441   4.962   6.111
200-250              1.564   1.409   2.156   1.572   1.982]

THE GENERATION AND TRIGGERING OF SEVERE CONVECTIVE STORMS BY LARGE-SCALE MOTIONS

by Edwin F. Danielsen

Doug Lilly asked me to present some concepts I have been developing which relate the formation of organized, severe convective storms to large-scale processes. My interest in this work began about ten years ago. Since then I have studied, separately, components of the concept, and now I shall attempt to combine the components into a coherent picture. However, I shall not restrict my presentation just to large-scale influences. I must include some microphysical processes which I think are important also. The first slide, Fig. 1, was constructed from the Storm Data published monthly by NOAA. It indicates the number of reported tornados and tornado funnels aloft per successive six-hour periods during April and May 1963 and April 1973. Notice the quasi-uniform distribution during May and the distinct maxima in both Aprils. Each maximum is related to the rapid development of a large-scale cyclone over the southwestern United States. I predicted and studied these cyclones in 1963 while directing Project Springfield, an aircraft experiment designed to test the concept of tropopause folding. Since tropopause folding is produced during large-scale cyclogenesis, a prediction of the latter is essential to a prediction of the former. It was then, in 1963, that I first observed that, as each major cyclone developed over the southwestern states and the tropopause folded, a major dust storm formed over the arid high plains and, subsequently, severe thunderstorms, hail storms and/or tornados broke out in the midwestern states.

[Figure 1. The frequency of reported tornados (in black) and tornado funnels aloft per 6-hr interval, for April 1963, May 1963 and April 1973.]
Many cyclones were analysed in space and time, and a systemmatic pattern of development was determined. Then in 1969, while I was at NCAR, I was asked to predict hail for the northeastern Colorado Hail Experiment. With this request, I turned from large-scale to small-scale phenomena. First, with Phil Haagenson, a steady-state cumulus cloud model was developed. Then, with Rainer Bleck and Don Morris^, a time-dependent model was developed which predicted hail formation. The time-dependent model includes a numerical solution of the stochastic collection equation. With this model we made sensitivity studies to determine the effects of cloud microphysics on the life history of cumulonimbus clouds. 157 1000 ff, WINNEMUCCA 42 ELY WINSLOW ALBUQUERQUE lOOOfi 42 TOTAL /3 ACTIVITY Figure 2 Vertical cross-section with aircraft flight data while crossing folded tropopause, 20 April 1963. 168 The results were very interesting because the most sensitive parameter in the model is the initial cloud droplet distribution. With the coalescence kernel "realistic" in that it is based on both theoretical and laboratory data, the model shows that growth by coalescence is very slow when the droplet radius is less than 30 ym. Since the cloud droplets form at about radii of 5 ym and grow by condensation to only 10-15 ym, they tend to be too small to trigger rapid coalescent growth. The problem is acute when updraft velocities are strong, because then the droplets are swept up to their freezing levels before they become large enough to grow rapidly by coalescence. As small ice particles they spread out in the anvil and evaporate. In this case, the cumulus functions primarily as a water vapor pump which transfers near surface m.oisture to the upper troposphere. It does not produce much precipitation. On the other hand, if there is heavy rain or hail falling from a cumulonimbus, the hydrometeor growth must be by coalescence; therefore, some mechanism for triggering coalescence is necessary. One possible mechanism depends on the presence of giant nuclei, i.e., particles larger than 30 ym in radius which, upon being wetted, form drops large enough to grow rapidly by coalescence. Dale Gillette^, of the NCAR Aerosol Project, has made measurements near the surface in major dust storms which indicate that soil particles of this size become airborne. It remains to be shown whether they remain suspended as the dust advects downstream toward the convective storms. Members of the Aerosol Project have made both surface and aircraft measurements in northern Texas dust storms during April of 1972 and 1973. The emphasis in these experiments was to determine: the mechanism responsible for generating the dust; the size distribution as a function of height 169 in the well-mixed layer; and the total mass of suspended material. Much is now known about particles with radii less than 10 ym, but we lacked reliable measuring capabilities for the larger particles. Although the experiments were not designed to study severe weather phenomena, the dust storms of 1972 and 1973 were both followed by organized outbreaks of destructive hail storms and tornados. In particular, the storm on the 19th of April 1973 produced 41 tornados and 29 funnels aloft in eighteen hours. We are analysing this storm in considerable detail both by objective computer methods and by hand. I will use some of these analyses to illustrate the scale interactions in severe convective storm outbreaks. 
In the time remaining this morning, I will quickly outline the major concepts and then discuss them in the context of the April 1973 storm. First of all, the large-scale cyclogenesis is influenced by strong surface heating of the arid soil in the southwestern United States. In the spring this heating produces adiabatic lapse rates which extend from the surface to 20-25,000 ft. The reduction in stability increases the available potential energy in the baroclinic troposphere, implying that an infinitesimal perturbation would amplify rapidly in this modified air. However, there is no evidence that the cyclogenesis develops from infinitesimal perturbations. Instead, the cyclones develop when finite troughs with strong upper level jets propagate into the destabilized air. To date, I have successfully predicted cyclogenesis when strong jets with cold adjection approach the west coast of Canada and the United States. The cold advection appears to be a necessary condition for the subsequent southward and downward transport of momentum which accompanies the large- scale cyclogenesis. 170 The second concept to be stressed is that the downward transport of momentum is directly responsible for the major dust storm formation and indirectly responsible for the generation of convective instability. During its descent, the jet decelerates in speed but wind speeds from 60-80 kts do reach the adiabatic layer and then rapidly mix to the ground as strong gusts. These strong wind gusts sandblast the arid soil, disaggregating soil particles, creating small soil aerosols which then mix vertically through the adiabatic volume into the stable transition layer above the adiabatic volume. When I use the term "adiabatic volume" I mean only that the isentropic surfaces are vertical within the volume. There can be, and usually are, horizontal gradients of entropy in the volume. The momentum mixing into the adiabatic volume is predominantly from the west. A southerly component develops in response to the pressure gradient as in an Eckman boundary layer. Thus, after mixing, the mean velocities in the adiabatic volume are uniformly from the west-southwest . Therefore, the entire volume advects from the arid plains toward the mid- west. But the large-scale cyclogenesis has also intensified southerly and southeasterly flow from the Gulf of Mexico into the midwest. This flow is moist and the potential temperatures near the surface are lower (cooler air) than those in the adiabatic volume. It follows that the adiabatic volume must wedge into the moist air flowing over the cooler moist air near the earth's surface and under the warmer air aloft. The resulting vertical temperature distribution is a moist adiabatic layer from the surface to the base of a stable temperature inversion, which represents the transition zone between the moist air and the adiabatic volume; then a dry adiabatic layer which is topped by another 17.1 transition layer back to the moist air. Although dynamically stable due to the stable transition layers, this stratification is potentially unstable (convectively unstable) because of the moisture stratification from moist-to-dry at the base of the adiabatic volume. The severe convective storms form in and feed on this convectively unstable air. The third concept is complicated by several factors whose relative importance is difficult to unravel. 
The third concept is complicated by several factors whose relative importance is difficult to unravel. It concerns the scale of low level convergence which releases the convective instability, the magnitude of the cumulus updrafts, the vorticity of the air in which the updraft develops, and the rate of growth of the hydrometeors. When the low level convergence produced by the large-scale cyclone is broad, the ascending motion plus random perturbations tend to generate randomly distributed thunderstorms over a broad region. Hail storms are numerous but tornados are rare. Conversely, when the low level convergence is concentrated along a line, tornadic storms break out along the line in groups. Current evidence suggests that the tornados form in air with large absolute vorticity. The large vorticity is due primarily to the mixing of stratospheric air from the folded tropopause into the adiabatic volume. There is also evidence that the tornados form downstream of dust storms which develop behind the main cold front, while random thunderstorms with hail form downstream of dust storms which develop in advance of the main cold front. We know that hail storms require, and that tornadic storms probably require, rapid growth of the hydrometeors. This, in turn, requires large updraft speeds, large water-vapor mixing ratios, and some mechanism to trigger coalescence. The large-scale motion and moisture fields plus strong surface heating supply the potential instability for the updraft speeds and sufficient moisture. I am speculating that the large dust particles can trigger coalescence.

Now I will quickly show you a series of slides which will illustrate these concepts. First, Fig. 3 is a sequence of isentropic charts, each separated by 12 hrs, starting at 1200 GMT, 17 April and ending at 0000 GMT, 20 April 1973. They illustrate the development of a major cyclone on the 323 K isentropic surface. Contoured at an equivalent height interval of 60 meters is the Montgomery function

    \Psi_M = c_p T_v + g z

where c_p is the specific heat at constant pressure, T_v is the virtual temperature, g is the acceleration of gravity, and z is the height of the virtual potential temperature surface. The gradient of \Psi and the latitude determine the geostrophic wind, just as the gradient of gz does at constant pressure. Here I want you to note how the pattern of \Psi progressively changes as the jet propagates southward and eastward. The trough tilts southeastward, amplifies and then forms a closed low. Simultaneously, a ridge amplifies to the east of the developing low and the jet re-develops to the northeast of the ridge. Fig. 4 shows this development relative to the jet and to North America. In this figure, only the \Psi = 3.182 x 10^9 erg g^-1 line is illustrated at successive 12 hr intervals. Over the western states you can see that the flow is supergeostrophic; i.e., the air in the jet is crossing the \Psi lines toward higher values. The air is decelerating and the \Psi field is adjusting to the wind field. Later, after the cyclone has developed, the wind field east of the cyclone adjusts to the \Psi field. The 323°K field was chosen to show the development on a surface which intersected the jet core and was above the surface mixing layer.

Figure 3. Montgomery stream function patterns for the 323°K isentropic surface at 12 GMT 17th, 00 GMT 18th, 12 GMT 19th, and 00 GMT 20th, April 1973. Maps extend over the North American continent.

Figure 4. Successive positions of the jet and of the \Psi = 3.182 x 10^9 erg g^-1 line.
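To make the \Psi-wind relationship just stated explicit: on an isentropic surface the geostrophic wind follows from \Psi_M exactly as it follows from the geopotential on a constant-pressure surface (a standard relation, added here for reference and not part of the talk):

    \mathbf{v}_g = \frac{1}{f}\,\hat{\mathbf{k}} \times \nabla_\theta \Psi_M

so the 60 m contour interval of Fig. 3 can be read much like height contours on a constant-pressure chart.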
Now I will show you a technicolor version of the 305 K surface, which is affected directly by the surface heating. The slide, Fig. 5, corresponds in time to the lower right panel of Fig. 3, i.e., to 0000 GMT, 20 April 1973. First, the black lines (continuous lines in Fig. 5) show the \Psi field contoured at 30 meters, and with these lines are the observed winds. Note the strong winds wrapped around the south of the vortex. Also, over Texas the dashed line marks the intersection of the \theta = 305 K surface with the ground. Over central and southwestern Texas \theta > 305 K. Over the Texas panhandle and southeastern New Mexico, winds from 50-80 kts are entering this heated region where the isentropic surfaces turn vertically to the ground. Here the high winds are mixed right down to the ground and a major dust storm is formed. The blue overlay (dotted regions in Fig. 5) shows the cloud-free regions as determined from a satellite photograph at the same time. Clouds, noted by the white areas, are seen to extend northward from the Gulf of Mexico, around the vortex center into Colorado and then along the Continental Divide into Canada. Cloud-free skies are found under the jet from the Gulf of Alaska to Kansas except for a long, narrow cloud across New Mexico, the Texas panhandle, western Oklahoma, Missouri and Iowa. If this cloud were absent, the pattern would be similar to the "comma" pattern observed frequently by satellites and deduced by our studies of vertical motions from isentropic trajectories. This cloud is essentially a mud cloud; that is, it contains both dust and water mixed upward from the surface. Thunderstorms and rain showers wetted the soil the previous day, but the strong gusts of dry air dehydrated the soil and then sandblasted it.

Figure 5. 305°K isentropic surface, 0000 GMT, 20 April 1973. Continuous lines are Montgomery stream function contoured at an equivalent of 30 m. Dashed lines are isobars, contoured at 50 mb. White areas are clouds observed by satellite and triangles are tornados reported within ±1.5 hours.

The Amarillo sounding, Fig. 6, shows this cloud layer above the 650 mb level. Note that the lapse rate is dry adiabatic below the cloud and oscillates about the moist adiabat up to 450 mb in the cloud. Thus the mixing really extends to above 450 mb, and the winds support this view. The first red overlay (black triangles of Fig. 5) shows the locations of tornados reported within ±1 1/2 hrs of the map time. The group extending from eastern South Dakota to western Missouri is downstream from the dust and in the extension of the mud cloud. Radar tops with these storms were 27-35,000 ft. They moved to the north and northeast. The second overlay (dashed lines in Fig. 5) shows the isobars, labelled in millibars, on the 305 K surface. Notice that warm advection predominates throughout the cloudy regions. Notice also the steep slope of the isentropic surface south of the vortex. Over the Texas panhandle the surface plunges from the 450 mb level to the ground in a few hundred kilometers. A careful look at Fig. 5 will show that the tornados are mostly in the cold baroclinic zone or close to the boundary on this \theta surface. However, despite the fact that the analyses were made before the tornados were located and plotted, neither the analyses nor the tornado positions are known with sufficient accuracy to determine their positions relative to the baroclinic zone. A tornado is reported also in northeastern Arkansas.
Severe thunderstorms were predicted in this area, but none was predicted in the northern area where the line of tornados developed. A vertical cross-section through the tornado line is shown in the next slide, Fig. 7. It extends from Tucson, Arizona, station 274, through Denver, Colorado, 469, and Omaha, Nebraska, 553, to Huntington, West Virginia, 425.

Figure 6. Sounding for Amarillo, Texas (station 72365), 0000 GMT, 20 April 1973.

Figure 7. Vertical cross-section from Tucson, Arizona (274) to Huntington, West Virginia (425) at 0000 GMT, 20 April 1973. Continuous lines are \theta isentropes labelled in K; dashed lines are isotachs labelled in kts.

I want to point out that the cold front has a negative slope from the surface east of Omaha to about the 700 mb level. The stability is approximately zero in the baroclinic zone, while the moist air below the front is more stable. In the red overlay, dashed lines in Fig. 7, one sees the effects of the momentum mixing and accelerations which have increased the speeds in the moist air. The isotachs extend almost to the ground between Omaha, 553, and Springfield, Illinois, 532. In the plane of the cross-section, the anticyclonic shear in advance of the front exceeds f, the Coriolis parameter. However, the curvature is positive, so the absolute vorticity is approximately zero. To the west of the front, the shear on the isentropic surfaces is cyclonic, so the absolute vorticities are large. This large cyclonic vorticity is due to the downward advection and mixing of stratospheric air into the hyperbaroclinic zone. If large-scale vorticity, rather than small-scale inhomogeneities in the vorticity, is responsible for the tornado circulations, then the tornados should be forming behind the cold front, in the hyperbaroclinic zone. I suspect, but cannot prove, that this is in fact the region of their development.
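The shear-versus-curvature bookkeeping in that argument is easiest to see in natural coordinates, where the absolute vorticity is (a standard decomposition, stated here for clarity; it is not in the original talk):

    \eta = f + \frac{V}{R_s} - \frac{\partial V}{\partial n}

with V the wind speed, R_s the radius of curvature of the streamline, and n the coordinate to the left of the flow. Anticyclonic shear (\partial V/\partial n > f) can thus be offset by cyclonic curvature (V/R_s > 0) to leave \eta near zero in advance of the front.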
I think my time is up. There are many other things I wanted to show you. Perhaps I can include one more slide. You might be interested in knowing how the large-scale NMC prediction verified for this case. The 72 hr predicted 500 mb field and the observed field are shown in this last slide, Fig. 8. Now this represents a remarkable prediction, from essentially zonal flow to a major cyclone in 72 hrs. But you may notice that the trough tilts from northeast to southwest in the prediction and from northwest to southeast in the observed field. This southeast tilt is, of course, consistent with the southward propagation of momentum in the jet to the west of the trough. An error of this type is systematic for these rapidly developing cyclones, so one can make a subjective adjustment to the NMC predictions. Incidentally, the 36 hr prediction was very good, perhaps because the asymmetry was then in the initial conditions.

Figure 8. 72 hr prediction by NMC of 500 mb contours and the observed contours at 00 GMT, 20 April.

I hope the material I have presented is sufficient to challenge your imagination concerning the large-scale control of some of the organized outbreaks of severe convective storms. I am of the opinion that the instability and the triggering mechanisms are generated by the large-scale cyclones and the unique characteristics of the North American continent. I also think that the cloud microphysical processes can modulate the degree of severity of the storms. This coming April we of the Aerosol Project at NCAR will conduct Project DUSTORM, an aircraft and field experiment designed to obtain the measurements necessary to test these concepts. We will try to trace the dust particles from their generation region downstream to the rear of the severe storm line. With specially designed in situ particle sensors and bulk samplers we will determine whether large particles reach the convective storms. We will also test the small particles captured in the samplers for their ice nucleating capability, and study the stratospheric aerosols for their drop forming capability. The latter can modulate the degree of supersaturation in the clouds. Finally, we have models available to use the information in numerical simulations.

DISCUSSION

Das: I want to say, in view of your interest in supersaturations in clouds, that we have computed supersaturations of more than 100% in clouds with large updraft speeds.

Danielsen: Let me comment on the effect stratospheric particles could have on the supersaturation in these clouds. We know that the stratosphere contains submicron particles which are produced in the lower stratosphere. As the large-scale cyclone develops, stratospheric air is advected down into the mid-troposphere. East of the cyclone the stratospheric air is entrained and mixed by the convective storms. If the updraft speeds are large in the convective cloud and coalescent growth is moving the drop distribution to larger radii, then the supersaturation increases and the release of latent heat is delayed. Now as the new condensation nuclei from the stratosphere are entrained into the cloud, one would expect a sudden formation of new cloud droplets and a release of latent heat. This heat release, or heat pulse, would increase the vortex circulation and affect the cloud growth and dynamics. Certainly definite feedbacks are possible.

Pearson: There was a very strong dust storm on April 3rd of this year. I think it was visible in satellite photos; very striking. We find case after case, with the cyclogenesis further east, in which we are not aware that dust plays any role at all. For your research work, if you are going to work with storm data from 1973, you are in great trouble. That is the worst storm data we have ever had. The other thing I ask of you: when you look at these tornados, what kind of tornados were they? We have had some that were 10 ft wide with path lengths of half a block, and some that are several hundred miles long. As I recall this case, the storms in the area (South Dakota to Missouri) where we did not predict them were all of the mini-tornado variety.

Danielsen: They may have been mini-tornados, but they were maxi-destructive. I think the destruction is the most important thing. Certainly the cloud tops in that region (27 to 35,000 ft) were lower than those reported in Arkansas (49 to 52,000 ft). Also, I do not mean to leave the impression that dust plays a necessary role in the formation of severe storms. I am just saying that dust may trigger coalescence, and the coalescent growth may be critical. It is the sufficient condition that I am stressing. There is no evidence that dust played any role in the Arkansas tornados.

REFERENCES

1. Danielsen, E. F., 1964: "Report on Project Springfield," Defense Atomic Support Agency Contract DA-49-146-XZ-079, DASA 1517, Washington, D.C., 97 pp.

2. Danielsen, E. F., R. Bleck and D. A. Morris, 1972: "Hail Growth by Stochastic Collection in a Cumulus Model," J. Atmos. Sci., 29, 135-155.
3. Gillette, D. A., 1974: "Production of Fine Dust by Wind Erosion of Soil: Effect of Wind and Soil Texture," Proceedings, Atmosphere-Surface Exchange of Particulate and Gaseous Pollutants - 1974 Symposium (AEC-Battelle).

Severe Storm Observations in the Near Environment

Stanley L. Barnes

Soundings obtained over an 8-hour period on 29-30 April 1970 within 20 nm of each other are treated as time-series observations at a point. Although storms crossed the sounding network, only data outside radar echoes are presented. Four storm severity predictors calculated from thermodynamic profiles show consistent trends toward increasing storm severity. However, considerable scatter about the trend lines suggests the ambient atmosphere, although not convective, was perturbed. This variability cannot be attributed to developments on the large (synoptic) scale; it is more likely due to small scale perturbations caused indirectly by the ongoing convection. Nonetheless, the representativeness of singular observations is questionable under such conditions. The slowly developing large-scale environment did not reveal clearly defined triggering mechanisms for these storms. The only consistently observed factor is a belt of low level moisture convergence in western Oklahoma. Most likely, the exact causes are hidden in the unobserved mesoscale structure. The data do reveal modulating and feedback mechanisms at work in the storms' near environment.

Ed Danielsen has told you a little about large scale environmental influences on severe storms. I would like to address observations of the near environment of severe storms. I had planned also to review what we had talked about in the PDP in terms of what it is that we are trying to describe: internal storm structures, triggering mechanisms, etc. Doug Lilly did this for me, so I can save a little time by cutting out that part of my talk.

First, let me make a prediction of what we will know by 1977. I think we will know pretty much the internal storm structure insofar as it is describable. Dual Doppler radars are rapidly showing us the makeup of the storm, the evolution of the storm flows and so on. By 1977 we will have described the internal flow patterns of severe storms; large convective storms, I will categorize. We probably will not yet understand all of these flows; that will be partly the work of modelers, and hopefully the data from SESAME will be beneficial to that task. We know much already about how the atmosphere prepares itself for convective activity. Ed Danielsen has given you one example. There are many others: how the conditional instability comes about, how the jet streams develop and transfer momentum vertically, and so on. What we don't know a lot about are the triggering mechanisms, the modulating mechanisms, and the feedback of the storm to the larger scales. The first slide (Fig. 1) shows you something that we found in the April 29-30, 1970, storm case. We obtained some 59 radiosonde observations within a 20 nm radius of each other and covering an 8-hour period during the evening hours on April 29 and 30. The storm activity had been ongoing for several hours. This slide shows the echo activity at 1745. Here is Norman (center). These are 20 nm range marks. Just for reference, let me point out where our network was in '70. It started about up here (30 nm north of NSSL), went out to about here (30 nm west), and could be encompassed by a 40 nm diameter circle just about like this.
You get some impression of the type of convection that is going on. Two rather intense hailstorms up here (NW of NSSL). This cell marked A also produced small hail. What I want to point out are some of the questions that need to be answered. Why are these storm cells lined up in three separate lines? Why a big hole up in here? We do not know much about this mesostructure. We do not know what the near environment is doing to distribute the storms in this fashion. There are a lot of intuitive guesses, but we have never observed it. We have never measured it. What I will be showing you are the temporal changes in the environmental atmosphere. What we will demonstrate is the way the atmosphere changed in its thermodynamic structure and its wind structure (momentum structure) throughout this eight hour period. These storms (at 1745) were not particularly severe. They produced some hail, no tornadoes and not much heavy wind.

Figure 1. Radar reflectivity pattern showing mesoscale variations at beginning of 8-hour period of storm observations in Oklahoma on 29 April 1970.

The next slide (Fig. 2) shows you what the storms looked like toward the end of the period. You saw this slide yesterday when Mr. Wilk gave his presentation. This is a rather large tornadic supercell type storm (west of NSSL). Here is another to-be-tornadic storm right behind it (southwest). These storms up here (northeast) were large rain storms producing hail also. Convection ceased in the southeastern region. Again we still see lines of convection through here; we do not know yet what causes these lines.

Figure 2. Radar patterns near end of 8-hour period with tornadic supercell storm 40 nm west of radar site (NSSL).

In the next slide (Fig. 3) I will give you some idea of the overall flow pattern that these storms were embedded in. This is the early morning 500 mb map. It shows a rather elongated trough in the southwest U.S. Wind velocities on the order of 75 kt were approaching the Oklahoma area. Our network is shown by the dot here. This (white arrow) shows the 850 mb moisture maxima up through Texas. The 850 mb thermal ridge is the dotted line, and what we could find of an 850 mb jet is indicated by the black arrow. Twelve hours later, in the next slide (Fig. 4), we see essentially the same long wave trough. This was not a case of intense cyclogenesis. In fact we couldn't even detect very much in the way of short waves. The jet had elongated itself, taking on a configuration of three prongs in through here. There was convective activity in Texas and on up into Oklahoma and Kansas. An 850 mb jet had developed during the day.

Figure 3. 500-millibar chart at 0600 CST 29 April 1970. Figure 4. 500-millibar chart at 1800 CST 29 April 1970, near beginning of storm observation period. (Legend: open arrow, 850 mb moisture; solid arrow, 850 mb wind maximum; open circles, 850 mb thermal ridge.)

The data that I am going to show you (Fig. 5) begin about 1800 CST and go on until past midnight. What I will show you are the trends of four common severe storm predictors, or aids in forecasting: lifted index, surface gusts, height of zero wet bulb, and hail size. These parameters were calculated by computer from selected soundings, about 35-40 soundings out of the set of 59. These soundings were selected because they were definitely not in the convective activity. However, some were influenced by the convection.
All of these parameters show the same trend toward greater severity. The lifted index decreased along the computed regression line. Surface gusts remained about the same at roughly 30 kt; there is a slight increase, and this end of the line is about 33 m/s. For the height of the wet bulb zero, according to Miller's criteria this value (3800 m) is near the top of the severe storm limit, but as time went on it descended more toward severe values. The hail size predictor increased from near zero to about 2 inches on the trend line. Actually, 4 inch hail was observed on that night. The trends are quite obvious. The atmosphere was becoming more conducive to increased storm severity during the evening. One of the points I want to make concerns the scatter about these trend lines. These observations are essentially point observations, I'd say, taken within a 40 km radius of each other. If we treat them in the large scale context, that is essentially a point. We see all sorts of scatter about these lines. I cannot explain this scatter; I think it is natural scatter as opposed to instrument scatter. We treated these soundings carefully to remove errors in evaluation. I believe this is natural atmospheric variability. How are we going to explain it? Does it need explaining? Do the storms respond to such variation in their environment?

Figure 5. Trends of four storm severity parameters calculated from environmental soundings released from the 40-km radius NSSL network on 29-30 April 1970 (panels: height of zero wet bulb, m MSL; surface gusts, m s-1; lifted index, °C). All indicate increasing severity and considerable unresolved variation about the trends.

The next slide (Fig. 6) shows what it was about the thermodynamic structure that caused these parameter changes. These are the temperature trends throughout the evening. We don't see much of a real trend except slightly warmer conditions toward the end of the period here at low levels and slightly cooler at high levels, although these four points up here are in question because I think they were influenced by the one storm's lifting of an inversion.

Figure 6. Temperature trends (850, 700, and 500 mb) indicate slight warming in low levels and cooling at 500 mb, although four clustered points around -15°C are believed to be caused by storm-induced lift of a stably-stratified layer near 500 mb.

The major difference shows up in the next slide (Fig. 7). It was in moisture content. At 850 mb the dew points increased by about 10°C. At 700 and 500 mb there was marked drying. You will notice the scatter of these points about the trend lines. This is a little disconcerting, I guess, in terms of the SESAME project. When do you take a sounding and treat it as representative of the atmosphere in a 40 km radius around the planned sounding sites? If you had taken this sounding and this sounding and looked at the 850 mb dew point trend, you would get something different from the actual trend. This is going to be a problem that we will have to be aware of.

Figure 7. Dew point trends at selected levels (850, 700, 500 mb) show marked increase in moisture at low levels and drying aloft.
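As an aside, the kind of computer calculation used for the Fig. 5 predictors can be sketched compactly. The sketch below computes a lifted index in the crudest possible way (dry-adiabatic ascent to the LCL, then a fixed pseudoadiabatic lapse rate); it is not the NSSL code, and all values in the example sounding are hypothetical.

    # Crude lifted-index sketch: LI = T_env(500 mb) - T_parcel(500 mb).
    # Simplifications (not the NSSL method): dry-adiabatic ascent to the LCL,
    # then a fixed 6 K/km pseudoadiabatic lapse rate; heights from a
    # constant-scale-height atmosphere.

    import math

    G_DRY = 9.8e-3     # dry-adiabatic lapse rate, K per m
    G_MOIST = 6.0e-3   # crude pseudoadiabatic lapse rate, K per m
    H = 8000.0         # scale height, m (rough)

    def height_of(p_mb, p_sfc_mb):
        """Approximate height above the surface of pressure level p_mb."""
        return H * math.log(p_sfc_mb / p_mb)

    def lifted_index(p_sfc, t_sfc, td_sfc, t500_env):
        """Temperatures in deg C, pressures in mb."""
        z_lcl = 125.0 * (t_sfc - td_sfc)   # Espy's LCL approximation, m
        z500 = height_of(500.0, p_sfc)
        t_lcl = t_sfc - G_DRY * z_lcl
        t_parcel_500 = t_lcl - G_MOIST * max(z500 - z_lcl, 0.0)
        return t500_env - t_parcel_500

    # Hypothetical evening sounding: warm, moist surface air under cold air aloft.
    li = lifted_index(p_sfc=970.0, t_sfc=28.0, td_sfc=20.0, t500_env=-14.0)
    print(f"lifted index ~ {li:.1f} C")  # negative means unstable, as in Fig. 5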
The next slide (Fig. 8) shows how the wind direction changed. There are no significant trends, just scatter about the mean. Slight backing is suggested here at 700 and at 500 mb, but as I mentioned there are no short wave systems moving through. No cold air advection is indicated. The wind speeds (next slide, Fig. 9) show modest changes in the lower layers. As I mentioned, a low level jet did develop, and it is reflected in the trends at 850 and 700 mb.

Figure 8. Wind direction trends show slight backing at 700 mb; insignificant changes at 850 and 500 mb. Figure 9. Wind speed trends reflect development of low level jet. At 500 mb there is no significant change.

The next slide (Fig. 10) shows the change in vertical shear structure. The solid line is a hodograph of a sounding taken early in the period, during the time of the storm A that you saw in the first radar photograph, and the dotted line is a hodograph near storm F, the tornadic storm that we saw in the second slide. Vector A is the mean motion vector of storm A. If we imagine vectors emanating from point A to the solid line, we see that relative winds first veered and then backed, so that the flow at about 3 km was actually coming in from the northwest relative to the storm. Then above that, from 3-6 km, there was an extremely large shear from south to north. Quite typically, the storms that developed about this time were elongated in the north-south direction, suggesting that these winds eroded the precipitation that was injected into this layer and carried it northward. On the other hand, the later storm (storm F) is in a more typical severe storm wind environment. The dotted hodograph is similar to the ones Marwitz has published: continuous veering of the wind with height in relation to the storm and a strong magnitude of wind at low levels, then very little shear in middle levels (3-6 km). This is just one example of how the shear structure of the atmosphere changed during the evening. It confirms what we know, or think we know, about storm structure in relation to the vertical shear.

Figure 10. Vertical (vector) wind shear at beginning and end of observation period (storm A, NOB #15, 1730 CST; storm F, CHK #30, 0041 CST). Small hailstorm "A" experienced unfavorable shear between 3 and 6 km. Tornadic storm "F" had a wind shear environment believed favorable to supercell type structure.

Passing on to the triggering mechanisms: we do not know much about them on this date. Fig. 11 is a 2100 CST regional surface weather map of aviation data which has been computer analyzed, with streamlines shown. The moisture convergence field is superimposed (dashed). Echoes about 2100 CST are shaded. All we see of significance is a band of moisture convergence out here in the western part of Oklahoma. We cannot find any other triggering mechanism. The storms did seem to develop during the day out in this region and then move rather rapidly eastward. The first tornadic storm that we observed is in its formative stages right out here (southeast of Childress, Texas).
We really do not know much about the triggering mechanisms even on a day when we have collected a lot of data.

Figure 11. Synoptic-scale moisture convergence (dashed) maximizes in western Oklahoma and northwest Texas in an otherwise innocuous surface flow field. Shaded areas are radar echoes less than (light) and greater than (dark) 39 dBZ.

One word, briefly, about modulating mechanisms. Ward and a few others have suggested that rotating updrafts in a storm should certainly influence the environmental flow around these storms. One of the unanswered questions is: how does a storm updraft begin its rotation? What induces the updraft to rotate? We all know about the convergence term in the vorticity equation. I would simply like to remind you of the tilting term (Fig. 12). Consider boundary layer shear in the first kilometer above the surface of the earth, where we normally have increasing winds with height in a severe storm atmosphere. This shear represents a vorticity source around a horizontal axis. The \partial w/\partial s term in the vorticity equation (w is maximum here and is minimum outside the storm) acts to turn this horizontal vorticity into the vertical.

Figure 12. Schematic of storm vorticity source from vertical shear in the Ekman layer. Vorticity (about the vertical axis) generated from this source may reach magnitudes comparable to those realized from vertical stretching (convergence) in the updraft.
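For reference, the tilting term Barnes points to can be written out in its standard form (added for clarity; \partial w/\partial s in the talk stands for the horizontal gradient of updraft speed):

    \left(\frac{\partial \zeta}{\partial t}\right)_{\text{tilting}} = \frac{\partial w}{\partial y}\frac{\partial u}{\partial z} - \frac{\partial w}{\partial x}\frac{\partial v}{\partial z}

so a horizontal gradient of w (large in the updraft, small outside) acting on the Ekman-layer shear vorticity rotates that horizontal vorticity into the vertical, exactly as Fig. 12 sketches.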
Figure 13. Temperature and dew point profiles for eight NSSL soundings plotted in relation to tornadic storm F at 0030 CST. Dotted line is the balloon time-height profile, plotted along a calculated hydrostatic height scale (at right). Wind vectors are plotted with north referenced at the top of the diagram, each vector extended in the direction from which the wind blows. Scale is at bottom right.

This is just a reminder of one of the environment's modulating effects that we need to look at in severe storms. I will bypass the last two slides. They indicate another near-storm modulation or feedback that illustrates what we might intuitively know. Air at low levels in the atmosphere is accelerated toward the storm, and this causes a low level subsidence, particularly when that air is trapped under a temperature inversion. These soundings (Figs. 13 and 14) also show lifting of a high level inversion near 500 mb. The consequent vertical stretching implied decreases the lapse rate in the 750-500 mb layer. The updraft of this on-going, quasi-steady severe storm (storm F) continuously meets an environment that is very favorable to energetic release of potential energy. The storm preconditions the environment as it goes along. This may have something to do with why some of these storms were so long lived.

Figure 14. Comparison of updraft influenced WHT sounding (dotted) and environment sounding (CHK) near storm F. Air sampled by the WHT balloon suggests low-level subsidence and high level (6 km) lifting which stretches and destabilizes the air column just ahead of the moist updraft.

DISCUSSION

Unident.: Are all these quantities calculated from soundings, or are some observed? For example, hail size, you mentioned...

Barnes: No, no, no, these are calculated. These are predictors. These are some of the parameters that NSSFC and the Air Force use as predictors of storm severity. They usually compute these from the 0600 CST soundings, the 1200 GMT soundings in the morning, and they are all based solely on the thermodynamic structure of the atmosphere. There is nothing else involved... just the vertical thermodynamic structure.

Unident.: When did these storms start... the actual convection?

Barnes: The convection started as moderate showers early in the day, before noon, and just kept getting more and more severe as time went on.

Unident.: So all these data are really contaminated somewhat by small scale activity.

Barnes: I think not. We were careful to look at the ascent rate of the balloons, and I am confident they were not in convective clouds. They all had rather uniform ascent rates. So these data reflect the perturbed ambient atmosphere. It is a convectively "quiet" atmosphere, but it has been perturbed for some time.

Kreitzberg: I would just like to question the variability of the raobs in that network. I think you are looking at a very highly perturbed situation, and hopefully in studying the precursor situation... the observations, including your studies before the convection breaks out at all... you have much more representative sounding data. I think this is a more difficult case, where the tornadoes break out many hours after the convection begins.

Barnes: Yes, you may have a point. What I showed you about those soundings was not the complete impression. My purpose was to show how dangerous it is to pin your hopes on an index (of which there are countless numbers) for severe storm prediction. All of these forecast parameters are based on temperatures and dewpoints at given levels in the atmosphere. What we suspect in this case is that cloud layers or patches are being advected through, or are responding to some wave-like perturbations. We just happen to be picking some temperatures and dewpoints in cloud layers and some in the dry air surrounding the cloud. This is what I think created most of the variability. If you were to average out the vertical structure details, the profiles would likely show gradual change during the evening.

Kaplan: I would like to say something related to what you were talking about on trigger mechanisms (with very close relationships to what Ed was saying). We ran 14 case studies with a mesoscale mesh of 50 miles on severe tornadic outbreaks this summer. In every outbreak, we get a low level jet forming and intense moisture convergence in the Ekman layer, both predicted by the model in the vicinity of where it happens in the real world. It is in this case of a strong Jacobian of u and v in the divergence equation that we see the super-geostrophic flow that Ed Danielsen is talking about. This flow advects down into an area where there is diffluence in the height field and in which the vorticity goes very rapidly from cyclonic to anticyclonic, forcing an integrated divergence in the pressure tendency equation which helps bring the kinetic energy down to the Ekman layer. This is the type of process that Ed is telling you about, and we found the P.E. model constantly replicating what is going on in the tornadic mesoscale outbreak. This, of course, aids in moisture convergence, forcing the atmosphere to a state of convective instability and presenting a profile which is moist; and diabatic heating is, of course, favorable in this type of environment. It is this process that is constantly being replicated. I think this is the key to getting energy down to the mesoscale.
Barnes: In terms of family tornado outbreaks, that is probably true. Although there were at least three tornadoes the evening of the 29th-30th, I don't think I would call this a family outbreak of the type you are talking about, simply because there was no large scale development going on.

Kaplan: It is hard to say, but with regard to your presenting the view that the large scale there was stagnant: it is true that your trough position is stagnant, but we see in terms of your jet structure that there is still a very large scale role being played in terms of...

Barnes: I am not discounting that. I don't think I used the word stagnant. There were 75 kt winds up there at 500 mb, and that's plenty. There may have been large vertical momentum transport in the drier air to the west of our network region, but if so, it was not revealed on the scale of the available surface observations.

THE ENVIRONMENT NEAR THE DRYLINE

Joseph T. Schaefer

ABSTRACT

A dryline is a narrow zone, other than a classical polar front, across which a sharp horizontal surface moisture gradient occurs. The dryline is of significance since thunderstorms often develop near it. Dryline environment characteristics are described, and dryline motion is explained.

We have heard Ed Danielsen discuss possible convective triggering mechanisms in the synoptic scale flow field and Stan Barnes discuss near storm soundings and their implications. I will present a brief overview of what is known about the dryline. Over the Texas, Oklahoma and Kansas area, the dryline is often a precursor of thunderstorm activity. East of the Mississippi River it loses importance, but for the proposed SESAME area it is a significant synoptic phenomenon. In April, May and June, the primary tornado season in Texas and Oklahoma, there is a dryline present about 40-45 percent of the time. On approximately 70 percent of the days when a dryline is present, new echo development occurs on or near the dryline. J. Owen Rhea (J. of Appl. Meteor., 1966, Vol. 5, No. 1, pp. 58-63) compared analyzed surface charts with echo development. Fig. 1 comes from his paper and shows the weighted frequency of new radar echo area development relative to the surface position of the dryline. Entries are weighted so that if thunderstorms developed randomly, development frequency would be uniform. Obviously it is not! The dryline is shown to be a highly preferred development zone.

Fig. 1. Frequency of new radar echo development relative to dryline location (abscissa: n mi east of dryline), from Rhea (J. of Appl. Meteor., 1966).

Approximately 1/3 of the storms developed within 10 n mi of the dryline, and 1/2 of the storms developed from -10 to +50 n mi of the dryline. Rhea further found that 3/4 of the initial cells that developed into squall lines developed within 10 n mi of the dryline. Those which initially developed further away from the dryline remained randomly scattered, basically air mass thunder showers. He also found that in over 2/3 of the cases when storms developed along the dryline, a short wave was present at 500 mb. Fig. 2, a cross section running from Childress, Texas to Pauls Valley, Oklahoma (roughly west to east), shows the typical environment near a dryline. Stations have arbitrarily been placed on a straight line.
This accounts for some of the small scale "noise" present. The solid lines are isentropes and dashed lines are mixing ratio isohumes. The shaded area denotes the presence of a low-level inversion or stable region. West of the dryline a basically neutral atmosphere exists. The lapse rate is super-adiabatic from the surface to about 1000 m and only warms by 2°K in the next 3 km. On the synoptic scale it is not uncommon to see the super-adiabatic layer stretching to above 700 mb west of the dryline. East of the dryline the moisture is capped by a low-level inversion. The profile is basically a step function in that the moisture and the isentropes are vertical below the inversion break. Above the inversion, the air is relatively dry and nearly horizontally homogeneous. This agrees with Ed Danielsen's finding that the air west of the dryline is the same as that above the moist air. Aircraft traverses of the dryline (McGuire, NOAA Technical Memorandum ERL NSSL-7) reveal that while there is a sharp horizontal moisture gradient, the virtual temperature is nearly constant.

Fig. 2. Typical cross-section normal to a dryline (stations CDS, LTS, COR, SPS, CHK, PVY, RIN; heights to 4000 m).

Fig. 3 shows the temperature, virtual temperature and mixing ratio measured on both sides of the dryline. Note that all the points are within 2 n mi. The moisture gradient is very sharp, with a low-level mixing ratio drop of 6.2 g/kg in less than 2 miles. There is also a marked temperature discontinuity across the dryline. However, the mean virtual temperature discontinuity is only -0.1°C. Because of the dryline's intimate relationship to thunderstorm development, a knowledge of its motion is necessary for accurate forecasting. Approximately paralleling the terrain contours, the dryline is most distinct over west Texas. As it moves eastward, it starts to become diffuse and can seldom be seen as a distinct entity east of the 96th meridian. For practical purposes, the dryline can be identified with the 9 g/kg mixing ratio isohume, which is consistently near the center of the strongest moisture gradient. The basic propagation of the dryline is of a diurnal nature (Fig. 4), with eastward (downslope) motion during the day and westward motion at night. Three-hourly dryline positions (Fig. 5) show the rapid movement that the dryline can exhibit. Generally the fastest movement is in the early morning, when the surface temperature is rising most rapidly. In most instances this occurs between 0600 and 0900 local time. As surface heating decreases, dryline speed wanes. However, in this case a cloud deck in extreme southeastern New Mexico retarded heating in the early morning and the dryline remained stationary until heating began. It is also apparent that different parts of the dryline move at significantly different speeds.

Fig. 3. Temperature change (ΔT) and virtual temperature change (ΔT*) observed on traverses through the dryline, from McGuire (NOAA Technical Memorandum ERL NSSL-7); dry-side mixing ratios of 3.3-5.9 g/kg contrast with moist-side values of 6.9-10.3 g/kg.

Fig. 4. Twelve-hourly dryline positions (00 GMT 15 May; 12 GMT 15 May; 00 GMT 16 May; 12 GMT 16 May).
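Schaefer's practical definition (the 9 g/kg isohume near the strongest gradient) is simple enough to sketch in code. The following is a minimal illustration, not an operational analysis routine, and the station values in the example are hypothetical.

    # Locate the dryline as the position where surface mixing ratio first
    # rises eastward through 9 g/kg (Schaefer's practical definition).
    # Station positions and mixing ratios below are hypothetical.

    DRYLINE_W = 9.0  # g/kg

    def dryline_position(x_km, w_gkg, threshold=DRYLINE_W):
        """x_km: west-to-east station positions; w_gkg: surface mixing ratios.
        Returns the interpolated x of the first eastward crossing of the
        threshold, or None if no crossing exists."""
        for (x0, w0), (x1, w1) in zip(zip(x_km, w_gkg), zip(x_km[1:], w_gkg[1:])):
            if w0 < threshold <= w1:  # dry air west, moist air east
                frac = (threshold - w0) / (w1 - w0)
                return x0 + frac * (x1 - x0)
        return None

    # Hypothetical afternoon surface data, Childress toward Pauls Valley:
    x = [0, 80, 160, 240, 320, 400]        # km east of first station
    w = [3.5, 4.2, 5.0, 8.1, 10.4, 11.0]   # g/kg

    print(f"dryline near x = {dryline_position(x, w):.0f} km")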
To investigate the possibility that local circulations advect the dryline, relationships between dryline displacement and the wind field were examined. Reported winds converted to kilometers per three hours are analyzed, and the three-hourly positions of the dryline are superimposed. When the dryline propagates eastward rapidly, there is very little correlation between wind speed and dryline displacement. Often the dryline is nearly parallel to the local wind, but even where it is normal to the neighboring winds, the dryline velocity is more than double the surface winds (Fig. 6). While friction does slow the surface wind speed, the rapid motion of the dryline cannot be accounted for by the mean wind across its 4,000 foot vertical extent. Another interesting feature is momentum mixing well to the west of the dryline, where extremely fast supergradient winds exist. This is the high momentum air aloft being brought down to the surface which Ed Danielsen mentioned in his paper. However, the wind confluence region does not necessarily coincide with the dryline. In central Texas, the confluence region is well removed from the moisture discontinuity. Convergence fields were computed both from the u and v wind components and from planimetric triangles, and no striking relationship between the position of the moisture gradients and maximum diffluence-confluence areas was found. Although the dryline is in a basic confluence area, maximum values do not occur on or necessarily near the dryline.

Fig. 5. Three hourly dryline positions.

Fig. 6. Daytime dryline displacement and wind field, 22 May (18 GMT wind field; 15 GMT and 18 GMT dryline positions).

Fig. 7 shows the dryline displacement and wind field over a three-hour period at night. Momentum mixing has ceased in the dry air, where a nocturnal inversion has formed. The moist air has been capped by an inversion all day and there is very little change in the wind. A slight swing due to the diurnal shift of the geostrophic wind is present, but it is rather insignificant. In general, dryline movement corresponds closely to nocturnal advection. Thus, during the day the dryline moves almost independently of the wind field, while at night it is basically advected. One can propose a mechanism for motion based solely on diurnal heating and forced mixing, with steady dissipation and reformation of the dryline eastward during the day and advection back during the night. An explanation of the dryline motion which accounts for the excessive daytime movement and embodies the dryline's observed step moisture profile resides in the mixing of moist surface air with dry air aloft. Mixing causes dryline motion in the following manner. After dawn, boundary layer mixing starts as surface temperatures rise in response to insolation. West of the dryline, any nocturnal radiational inversion is rapidly replaced by an adiabatic lapse rate. Since moist layer depth increases from the dryline eastward, the heat input required to erase the capping inversion increases in that direction, with a corresponding delay of inversion breakdown. Parallel to the dryline, the surface elevation is approximately constant (as is the inversion height), so that the amount of heating required to break the inversion is also nearly uniform. When the needed amount of heat has been absorbed, the low-level moist air mixes with the dry air aloft. The surface dew-point temperature drops rapidly and the dryline "leaps" eastward to a position where no appreciable mixing between air masses has occurred.
In the evening, the dry air cools rapidly, a nocturnal inversion forms west of the dryline, and vertical mixing of momentum is inhibited, leading to decreased, backing low-level winds in the dry air. The less rapid temperature decline in the moist air provides for nearly steady flow east of the dryline. A net easterly wind component develops and carries the dryline westward. As cooling continues, the dry air becomes more dense than the moist air, so that it can no longer be easily displaced by the moist air. From this time until insolation again induces eastward motion, the dryline remains nearly stationary.

To verify the hypothesis that vertical turbulent mixing is the primary cause of dryline motion, a numerical model of the dryline environment was developed. The model is slab-symmetric with a constant geostrophic wind parameterized on the upper boundary (about 4 km above the surface). Motions in the model are driven by an externally controlled surface heating function applied at ground level. When this model is applied to actual data, a very close simulation of observed dryline (9 g/kg isohume) motions is produced. Fig. 8 shows the temporal variation of the surface mixing ratio along an east-west line from approximately Hobbs, New Mexico to Fort Worth, Texas. Dashed lines are the observed mixing ratio isopleths and solid lines are those predicted by the model. The simulated 9 g/kg line follows the observed one extremely well. Over 75% of the total dryline displacement is accounted for by the modeled mechanisms. Discrepancies between simulation and observations are caused by the restrictive requirements of two-dimensional modeling. The constant westerly geostrophic wind component across the domain and the lack of upstream moisture sources and sinks combine to force the modeled results away from reality. Recognizing these weaknesses, the model demonstrates that mixing indeed does cause daytime dryline motion.

Fig. 7. Nocturnal dryline displacement and wind field (00 GMT and 03 GMT dryline positions).

Fig. 8. Model simulation versus observed surface moisture variation (W-E coordinate in units of 100 km; 1200 through 0000 GMT).
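The "leaping" mechanism also lends itself to a toy calculation. The sketch below is mine, not Schaefer's slab-symmetric model, and every number in it is hypothetical: if the moist layer deepens eastward, accumulated surface heating by hour t erases the capping inversion only west of some point, and the dryline sits at that point.

    # Toy version of mixing-driven daytime dryline advance (not Schaefer's
    # model). Assumption: the heat needed to erase the capping inversion
    # grows with the moist-layer depth, which increases eastward; the
    # dryline "leaps" to where accumulated heating just fails.

    import math

    def moist_depth(x_km):
        """Hypothetical moist-layer depth (m), increasing eastward."""
        return 200.0 + 4.0 * x_km

    def heat_needed(x_km):
        """Heating (arbitrary units) required to break the inversion at x."""
        return 0.02 * moist_depth(x_km)

    def heat_supplied(t_hr):
        """Accumulated daytime surface heating, peaking in the afternoon."""
        return 25.0 * (1.0 - math.cos(math.pi * min(t_hr, 12.0) / 12.0))

    def dryline_x(t_hr, x_max=600.0, dx=5.0):
        """Easternmost x (km) where the inversion is erased by time t."""
        x, pos = 0.0, 0.0
        while x <= x_max and heat_supplied(t_hr) >= heat_needed(x):
            pos, x = x, x + dx
        return pos

    for t in (6, 9, 12):  # hours after dawn
        print(f"t = {t:2d} h: dryline near x = {dryline_x(t):.0f} km east")

With these made-up numbers the dryline advances eastward all day, fastest while heating is accumulating most rapidly, which is the qualitative behavior Schaefer describes.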
DISCUSSION

Lilly: How can Rhea get a significant number of storms west of the dryline? Where does the moisture come from?

Schaefer: Rhea defined the dryline as the first organized veer of the winds. The actual location of the moisture discontinuity was not considered. Fig. 6 demonstrates that this can misplace the dryline by as much as 100 miles.

Lilly: It seems to me that your and Ed Danielsen's descriptions of the significance of the dryline in convective development represent quite a challenge to modelers in incorporating effects of both the synoptic scale and boundary layers of a widely varying nature, the boundary layer on the dry side being a very deep adiabatic layer and on the moist side a more nearly conventional, relatively shallow planetary boundary layer. In the period of active synoptic development, the dry boundary layer is transporting momentum downward rapidly from the middle troposphere to near the ground. Presumably this has a lot to do with the convergence and subsequent storm development near the dryline. It is an interactive problem with several facets, involving parameterization of a rather sophisticated nature, that I think modelers have to consider carefully.

Schaefer: One last comment on what Doug (Lilly) just said. I computed surface geostrophic wind speed from these data and then compared it to actual winds, and quite frequently winds 500 km west of the dryline were twice geostrophic. This has all sorts of nasty implications.

Ogura: Referring to your last slide, I think that the movement of the dryline gives an adequate prediction. However, I noticed that the isolines of mixing ratio seem to diverge.

Schaefer: The reason for that is that I used upstream differencing, which has the effect of making things become diffuse as time progresses.

Kessler: Isn't there another reason? Namely, that the sun is working on a deeper boundary layer and there is more moisture to mix, so that the late afternoon position represents a diffusion of a greater amount of moisture.

Schaefer: Right! As the dryline progresses eastward it does generally lose its strength, and that is one reason why it is unimportant east of the Mississippi. In fact you can very seldom follow the dryline east of Fort Worth.

A SURVEY OF FINE-MESH MODELING TECHNIQUES

Presented at SESAME Opening by Roger Pielke

Abstract

The talk presents a brief survey of fine-mesh modeling techniques with particular emphasis on meso-β scale models. The topics discussed include: 1) equations, 2) dimensionality, 3) grid, 4) initialization, 5) finite difference scheme, 6) lateral boundary conditions, 7) top boundary conditions, 8) surface boundary conditions, 9) boundary layer, 10) cumulus parameterization, and 11) representation of the results. Specific examples are given illustrating the importance of a three-dimensional representation and of the accurate parameterization of boundary layer processes. The talk concludes with a recommendation of the techniques required in meso-β scale models if they are going to adequately aid in satisfying the SESAME objectives. These include the development of a three-dimensional, hydrostatic model on a fixed grid, with horizontal resolution greater than the meso-β scale observing network, and with sufficient vertical resolution to accurately resolve the boundary layer as well as the free atmosphere. The equations should be integrated using an efficient, mass and energy conservative finite-difference scheme in which the lateral boundary conditions are determined from a larger-scale model or from actual observations. The effects of cumulus activity on the meso-β scale may need to be parameterized using meso-γ scale cumulus-field models.

My talk deals with fine mesh modeling techniques, a survey of which is really difficult to do in just 15 minutes. Of course, there is a question of exactly what you mean by fine mesh models. Global circulation modelers, for example, might think of synoptic models as fine mesh models, whereas a cloud modeler might think of cloud physics models as fine mesh models. Today I am going to try to give a very brief survey of meso-β type models; that is, models that are in the intermediate scale for the SESAME project. In Figure 1, I list some of the areas in which modeling techniques are required; these are the areas that I am most familiar with.

Figure 1. Fine-mesh modeling techniques: a survey
I. Equations
II. Dimensionality
III. Grid
IV. Initialization
V. Finite Difference Scheme
VI. Coordinate System
VII. Lateral Boundary Conditions
VIII. Top Boundary Conditions
IX. Surface Boundary Conditions
X. Boundary Layer
XI. Cumulus Parameterization
XII. Representation of the Results
Equations: The first topic area deals with the equations. What kind of equations have models in the past used to describe the physical processes that are occurring? There are basically two groups: the vorticity form and the primitive equation form. The vorticity form has been used only in two-dimensional models, since there are severe computational and mathematical problems in the solution techniques for a three-dimensional vorticity model. In the primitive equation form there are the hydrostatic approach; the anelastic technique, in which acoustic waves are eliminated from your solution but nonhydrostatic effects are retained; and the totally compressible form of the primitive equations. By the way, later in this talk I will list what I think are the best techniques for meso-β scale models as applied to the SESAME program.

Dimensionality: Models are either two-dimensional or three-dimensional. I don't think that anyone has used a two-dimensional model because they wanted to. It is usually because they have been constrained by computer limitations or economic constraints.

Grid: The third topic is the grid. Most meso-β scale models have used a fixed grid, although some people such as Schlesinger have used a step-wise changing grid, and Deardorff has used a grid which is translated downstream with the mean velocity. But most models have used fixed grids.
In this system the vertical axis is not necessarily parallel to the gravity vector. Lateral and Top Boundary Conditions: In the past modelers have typically used either a closed boundary condition or cyclic lateral boundary conditions. Recently there has been some work by such people as Hubner and Reisner and, I believe, Kreitzburg and Perkey, to incorporate boundary conditions from a larger scale model in order to force the meso-g scale model. The top boundary condition is particularly impor- tant when you are talking about fairly shallow models. This has generally been a rigid top , while a few modelers have used a flat surface top where the free surface can respond to the surface heating below. Surface Boundary Conditions : Most meso-B models to this point have used a prescribed surface temperature which is arbitrarily specified. There have been some variations to this. Jeff Hill, for example, has introduced a model which has a prescribed surface temperature but he puts random perturbations on the surface temperature in order to para- meterize the irregular heating due to different terrain features along the ground. Also there is the effect of aerodynamic roughness of the ground. Most meso' 3 models, I am sorry to say, use a constant drag 222 coefficient. One really should use something more general than that, W particularly one that is a function of stability. There has been some recent work on this. In Deardorff's model, for example, he uses a drag coefficient which is a function of stability. The Boundary Layer ; The boundary layer, as we have seen in the last two talks, is of crucial importance to a meso-3 scale model. Many such models in the past have ignored almost totally the influence of the boundary layer. There are three methods to handle turbulent mixing in the planetary boundary layer of a meso-B scale model. You can have locally derived exchange coef fecients, which are functions of local quantities such as shear , bouyancy gradient or turbulent energy; or exchange coef fecients which are a function of distance above the ground; or time-dependent equations for the vertical fluxes. The last technique requires considerable computer storage and computer time to implement while the first approach needs sufficient grid resolution so that the equations are reasonably local. The second approach appears to be particularly well suited for models which can afford only coarse vertical resolution. One important area of boundary layer parameterization is the depth of the boundary layer, since, as we have seen in the last several talks, this boundary layer apparently grows during the day and can tap upper level winds, humidity and heat. In fact, since I think I am going to have time, I am going to show some results from a two-dimensional version of my mesoscale model for two separate cases. In both cases I have an island, surrounded by water, which is heated during the day for a period of about 12 hours. In the one case there is no geostrophic wind shear. In the second case, there is a geostrophic wind shear of 3 m/s km. In the version of the mesoscale model presented for this 223 z. 2.0 km - 1.5 km- 1 km- 500 M- H0UR4 lb) THE DEPTH OF THE PLANETARY BOUNDARY LAYER dZ; i.n\w'+ i.ii;'- j..((''fz,l u + »■ + ^ ^ ^—!- + 9W, + T.i V; e„ aZ •20 1.5 1.0 .5 0. (a) EAST WEST VELOCITY COMPONENT 3.0 3.5 (f L = -'••'im/sec — ? = 0. 
One important area of boundary layer parameterization is the depth of the boundary layer, since, as we have seen in the last several talks, this boundary layer apparently grows during the day and can tap upper level winds, humidity, and heat. In fact, since I think I am going to have time, I am going to show some results from a two-dimensional version of my mesoscale model for two separate cases. In both cases I have an island, surrounded by water, which is heated during the day for a period of about 12 hours. In the one case there is no geostrophic wind shear; in the second case there is a geostrophic wind shear of 3 m/s per km. In the version of the mesoscale model presented in this talk, I have a variable top of the planetary boundary layer, which has been developed by Deardorff, where the time change is a function of the surface heat and momentum flux, of the overlying thermal stability, and of any larger scale vertical motions.

[Figure 2. Hour 4 of the no-shear case (Ug = 2.5 m/s, dUg/dz = 0): (a) east-west velocity component; (b) depth of the planetary boundary layer.]

In Figure 2 are examples of solutions for the no-geostrophic-wind-shear case, where the wind is blowing from east to west at 2.5 m/s. At the top of Figure 2 is the depth of the planetary boundary layer, plotted as height above ground against distance along the ground in terms of grid point number. In the bottom panel are contoured values of the horizontal, or east-west, velocity component. One sees that the convergence zone is slightly inland from the eastern coast and lies immediately along the west coast. The depth of the planetary boundary layer has deepened slightly over land in response to the heating over the island.

[Figure 3. Hour 8 of the no-shear case (Ug = 2.5 m/s, dUg/dz = 0): (a) east-west velocity component; (b) depth of the planetary boundary layer.]

If we look at the solution after eight hours in Figure 3, we can see a very marked change in the depth of the planetary boundary layer due to the surface heat flux over land and due to the mesoscale convergence of the sea breeze. The convergence zone along the windward coast has moved inland somewhat, while the convergence zone along the lee coast has remained more or less stationary. In this case the vertical turbulent mixing of upper level winds has a relatively small effect on the vertical transport of momentum, since above the original, or initial, top of the planetary boundary layer the wind directions and speeds are constant vertically and horizontally.

[Figure 4. Hour 4 of the sheared case (Ug = 2.5 m/s, dUg/dz = 3 m/s per km): (a) east-west velocity component; (b) depth of the planetary boundary layer.]

But now we look at the case with the geostrophic vertical wind shear of 3 m/s per km. After four hours, as seen in Figure 4, the solutions are almost identical with those in Figure 2. The planetary boundary layer has not deepened sufficiently to tap the larger winds aloft.

[Figure 5. Hour 8 of the sheared case (Ug = 2.5 m/s, dUg/dz = 3 m/s per km): (a) east-west velocity component; (b) depth of the planetary boundary layer.]

If we look at it after eight hours, in Figure 5, however, we see that the convergence zone along the windward coast (the eastern coast) is substantially further inland than it was in the earlier case. We also see that there has been quite a bit of perturbation of the east-west velocities in the upper levels of the model. The convergence along the west coast is immediately along the coast and, in fact, there is some vertical motion slightly offshore. The reason for this, and I believe this is directly applicable to the dryline situation, is that the planetary boundary layer grows, and as it grows, it can tap the strong upper level winds, transporting them downward. In this case the winds increase with height such that they become more easterly with height. The effect is to accelerate the eastern convergence zone further inland and to encourage vertical motion even slightly offshore along the west coast.
Q. How do you define the boundary layer? Is it the top of the Ekman layer?

A. Yes. It starts initially as an inversion in the initial input, or, if the atmosphere is neutrally stratified in the surface layer at the initial time, the depth of the planetary boundary layer is calculated as a function of the surface stress divided by the Coriolis parameter. That gives me the first value, which is assumed uniform over the field. Then the evolution of the depth of the planetary boundary layer is evaluated using Deardorff's proposed tendency equation, where the time change of the planetary boundary layer depth is directly dependent on the surface heat flux and inversely proportional to the overlying stability.
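A schematic version of that depth calculation might look as follows. This is a caricature, an encroachment/entrainment form with illustrative constants, and not Deardorff's actual tendency equation, but it shows the two ingredients just described: an initial neutral depth scaled by u*/f, and growth driven by the surface heat flux and limited by the overlying stability.

```python
def initial_pbl_depth(u_star, f, c=0.3):
    """Neutral initial boundary-layer depth, h = c * u*/f (c illustrative)."""
    return c * u_star / f

def pbl_growth_rate(heat_flux, dtheta, w_large_scale=0.0, a=0.2):
    """Schematic dh/dt (m/s): proportional to the surface kinematic heat
    flux (K m/s), inversely proportional to the capping stability jump
    dtheta (K), plus any larger-scale vertical motion."""
    return a * heat_flux / max(dtheta, 0.1) + w_large_scale

h = initial_pbl_depth(u_star=0.3, f=1.0e-4)   # roughly 900 m to start
dt = 600.0                                     # 10-minute steps
for _ in range(72):                            # 12 hours of daytime heating
    h += dt * pbl_growth_rate(heat_flux=0.15, dtheta=1.0)
print(f"boundary-layer depth after 12 h: {h:.0f} m")
```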
Cumulus Parameterization: The next major technique that must be considered, and probably the one that is most difficult, is cumulus parameterization. Some people ignore it; I have ignored it up to now in my model. There have been basically two major thrusts to attack the problem. The first is the method of Arakawa and Schubert, in which they have a cumulus ensemble where the cumulus clouds are assumed to be in quasi-equilibrium with the environment. Their scheme is also based on the 1/R entrainment rate for cloud mixing with its environment, which is of questionable validity for medium and large cumulus clouds. The second approach is that used by Kreitzberg and Perkey, where they use a one-dimensional Lagrangian cloud model in order to release conditional instability realized in a larger scale model. A new approach, which hasn't been tried and which I believe was first suggested by Bill Cotton, is that perhaps you are going to have to develop three-dimensional meso-γ scale models in order to develop statistics for use in the meso-β scale model. That is, cumulus cloud fields would be predicted by the meso-γ scale model as a function of a wide variety of dynamic and thermodynamic input information predicted by the meso-β model. Then the changes in the environment (i.e., cumulus-induced subsidence) predicted by the meso-γ model would be fed back into the meso-β scale model.

Representation of the Results: The last technique on my list is how you represent the results from these meso-β models. To present an example, I am going to show a short movie, or rather short movie loops, which are model results from the three-dimensional EML mesoscale model which I developed while I was in Florida. You are going to see three short loops. The first loop is going to be the horizontal motion field at 50 m, predicted by the model for a particular day in May 1968. The same depiction is repeated over and over again, where the time is in the upper right. The synoptic horizontal velocity is 3 m/s from the southwest. We are looking at the 50 m winds, where you can see the winds back and accelerate onshore along the east coast of Florida, and accelerate inland along the west coast of Florida. You can see in this representation that it does illustrate the dynamic changes of the horizontal wind relative to a geographic feature. I might take this moment to emphasize one thing that has not been mentioned thus far, and that is the possibility that severe storms over the SESAME area might be initiated, or at least substantially influenced, by the location of terrain irregularities. Apparently they found this to some extent in the Miles City area of Montana in the proposed Bureau of Reclamation cloud-seeding program, and it is certainly of paramount importance over south Florida.

The next movie loop is for the same model calculation, where we are looking at the vertical motion field at 1.2 km. This shows how the convergence zones form and how they move during a simulated day. In the case of south Florida, they agree extremely well with the locations and movement of the thunderstorms over the region.

The last movie sequence will be the Lagrangian trajectories of particles that start at a grid point and are advected by the winds at 50 m as predicted by the model. This shows that all types of material in the lower boundary layer are converged by the sea breeze into these convergence zones. This includes such things as clouds, condensation nuclei perhaps, ice nuclei perhaps, moisture, and heat. I am quite sure vigorous convergence in the lower boundary layer also occurs in mesoscale systems which move into or develop in the SESAME area, and movie presentations of convergence as presented here would be quite useful.

I want to conclude with what I think are some of the requirements for a meso-β scale model for the SESAME program. I am sure there will probably be some disagreements with this.

Figure 6. Fine-mesh modeling techniques applicable to meso-β scale features:
I. Equations: Primitive equations, hydrostatic, discrete levels
II. Dimensionality: Three-dimensional
III. Grid: A. Rigid, stretched. B. The observational grid is 15 km, so some fraction of this, say 4 km, for horizontal grid spacing. C. Vertical resolution higher near the surface with, perhaps, a log-linear stretch transformation in z. D. 40 x 40 x 40
IV. Initialization: Balanced state from a larger scale model or from observations
V. Finite difference scheme: Efficient, quadratic conservative
VI. Coordinate system: x, y, σ, t
VII. Lateral boundary conditions: Time-changing boundary conditions
VIII. Boundary layer: Locally derived eddy coefficients which are a function of shear and stability
IX. Cumulus parameterization: Statistical representation of sub-grid scale cumulus
X. Representation of results: Computer-generated movies, three-dimensional perspective displays, two-dimensional displays

As listed in Figure 6, I think the equations for the meso-β model should be the primitive equations. They should definitely be hydrostatic. They should have discrete levels; that is, they should not be a set of layer-system equations. The model should be three-dimensional. The model grid should be rigid, that is, fixed in space and time. It probably should take advantage of stretched coordinates such as those utilized by Lee, in which you extend your grid size as you get further away from your area of interest, but you do it rigorously, that is, in a formal mathematical coordinate transformation, and include the extra new terms in your equations. The observational grid proposed is 15 km. I suggest something on the order of 4 km for horizontal grid spacing would be satisfactory in a meso-β model. The vertical resolution necessarily must be higher in the surface layer than in the rest of the boundary layer, perhaps using a log-linear stretch transformation as used by Orlanski, Ross, and Polinsky. I propose a grid of 40 x 40 x 40 points, which is based on the NCAR 7600 capabilities and is the size utilized by Jim Deardorff. Initialization should be either from a dynamic balance from a larger scale model which includes the planetary boundary layer, or from observations obtained from the SESAME program.
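To make the vertical-resolution recommendation above concrete, here is a minimal sketch of a log-linear level generator. The blending weight and lowest level are illustrative assumptions; this is not the Orlanski, Ross, and Polinsky transformation itself, only the generic idea of levels packed logarithmically near the surface and relaxing toward uniform spacing aloft.

```python
import numpy as np

def log_linear_levels(n=40, z_top=10.0e3, z_bottom=10.0, alpha=0.5):
    """n model levels between z_bottom and z_top, uniform in
    s(z) = alpha*ln(z) + (1 - alpha)*z/z_top, and inverted by bisection
    since s(z) is monotonic in z."""
    def s(z):
        return alpha * np.log(z) + (1.0 - alpha) * z / z_top

    targets = np.linspace(s(z_bottom), s(z_top), n)
    lo = np.full(n, 0.5 * z_bottom)
    hi = np.full(n, z_top)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        below = s(mid) < targets
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

z = log_linear_levels()
print(np.diff(z)[:3].round(1))    # a few metres apart near the surface
print(np.diff(z)[-3:].round(1))   # roughly a kilometre apart near the top
```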
The finite difference scheme should be efficient; that is, people should look into the use of implicit or semi-implicit time operators. The scheme should be quadratic conservative, so that it conserves energy as well as mass. The coordinate system should be an x, y, σ, t system, in which a terrain-following coordinate is utilized, because the SESAME observational area is not flat. The lateral boundary conditions must be driven either from a larger-scale model or from observations, and should permit time changes which result from changes in the synoptic environment. The boundary layer should be described using either locally dependent exchange coefficients if the resolution is sufficient, or profile functions for the exchange coefficients if the resolution is coarse. To perform the cumulus parameterization in a quantitatively accurate fashion, it will probably be necessary to develop three-dimensional meso-γ models which simulate cloud fields, from which statistics can be developed for use in the meso-β models. The representation of the results should include computer-generated movies as well as three-dimensional perspective displays. With that I will close.

Modeling the Influence of the Mesoscale Planetary Boundary Layer

Larry Mahrt

The influence of accelerations on production of vertical motion in the boundary layer is briefly considered. A model which allows for growth of the mixed layer is outlined.

I am going to briefly discuss the boundary layer's influence on mesoscale circulations. This is a particularly difficult problem because mesoscale circulations are characterized by small time and space scales, which means that accelerations are important. First, I will briefly review the classical boundary layer influence problem in which accelerations are not included, and then we will look at some attempts to include accelerations.

Greenspan and Howard (1963) demonstrated that as long as the viscous (or turbulent) Ekman number is small, which is generally the case in the atmosphere, the influence of the boundary layer on the vorticity field through frictionally driven convergence is much more efficient than direct turbulent transport of vorticity. In the atmospheric context, Charney and Eliassen (1964) showed that frictionally driven boundary layer convergence could be parameterized in terms of the geostrophic vorticity. In particular, the vertical motion at the top of the boundary layer is equal to some function of the bulk eddy viscosity (or perhaps the drag coefficient, if a layer-integrated approach is employed) times the geostrophic vorticity. This is often referred to as Ekman pumping.

The influence of Ekman pumping on the overlying free flow is most simply illustrated in Ekman spindown theory (Greenspan and Howard, 1963, and Holton, 1965). This theory does not incorporate latent heat release or moisture fluxes and assumes quasi-geostrophic flow. With positive geostrophic relative vorticity, frictionally induced rising motion causes vortex tube contraction aloft, resulting in decreased geostrophic vorticity. This in turn decreases the boundary layer pumping, and so forth; thus an exponential spindown. The time scale for this exponential spindown can be derived in a context perhaps relevant to SESAME by employing the mass conservation and vorticity equations in isentropic coordinates. Here the spindown scale is proportional to the pressure thickness of the free layer and inversely proportional to the pumping coefficient and the Coriolis parameter. This is generally thought to be on the order of a few days, perhaps depending on the thickness of the effective free atmosphere.
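A rough magnitude check of that time scale, with the pumping coefficient written as the usual Ekman depth scale sqrt(K/2f), is sketched below; all of the numbers are illustrative.

```python
import math

f = 1.0e-4     # Coriolis parameter (1/s)
K = 10.0       # bulk eddy viscosity (m^2/s), an illustrative value
H = 8.0e3      # depth of the effective free atmosphere (m)

# Ekman pumping: w_top = C * zeta_g, with C a depth scale.
C = math.sqrt(K / (2.0 * f))
# The free-layer vorticity then decays as d(zeta)/dt = -(C*f/H)*zeta, so
tau = H / (C * f)
print(f"pumping coefficient C ~ {C:.0f} m, spindown time ~ {tau / 86400.0:.1f} days")
```

With these numbers the e-folding time comes out near four days, consistent with the "few days" quoted above.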
Let's briefly examine some attempts to modify this pumping law for the influence of accelerations, which is of course necessary in the mesoscale case. In the case of time accelerations, an additional term results which is proportional to the geostrophic vorticity tendency and the Rossby number. This function is very weakly dependent on the bulk eddy viscosity (Young, 1973) or surface drag coefficient (Mahrt, 1974a). The vertical motion at the top of the boundary layer is equal to the classical Ekman pumping contribution plus a correction term due to the isallobaric wind convergence, weakly modified by frictional effects. For very special geostrophic tendencies, this theory can be extended to any size of Rossby number for the layer-integrated case. Thus, as the Rossby number becomes large, the convergence is largely controlled by accelerations, and classical Ekman pumping becomes to some extent unimportant.

In the case of advective accelerations, it is more difficult to modify the boundary layer pumping formulation. The two theories that are available (Mak, 1972, and Mahrt, 1974b) seem to indicate that the maximum Ekman pumping is moved upstream from the geostrophic vorticity maximum, but the theories seem to disagree on whether the Ekman pumping will be intensified or weakened by advective accelerations. This disagreement centers on the fact that neither theory can adequately account for the proper growth of the height of the boundary layer. One theory specifies it to be a constant, and the other theory allows the boundary layer to grow unconstrained by the stable stratification of the free atmosphere.

As an alternative to patching up the classical theory, let me suggest another approach which might be used. This approach might also be considered as an alternative to the approaches suggested by Roger Pielke for modeling the planetary boundary layer. One can vertically integrate the equations of motion, the mass continuity equation, and the thermodynamic and turbulent energy equations throughout the boundary layer, resulting in a simplified mathematical system, at least for the well-mixed case. This approach has recently been receiving increasing attention (e.g., Lilly, 1968, Lavoie, 1972, and Tennekes, 1973). The turbulent fluxes out of the surface layer can be parameterized by conventional surface drag and heat flux relationships. In the well-mixed layer, velocity and potential temperature are assumed to vary slowly with height. This layer is capped by an inversion layer where parameters vary rapidly with height. Above the inversion layer we have an idealized free flow where turbulent transports are assumed to be zero.

The major advantage of the vertical integration is that you replace the entire planetary boundary layer by two layers, so you substantially reduce the number of grid points needed. Secondly, transports do not have to be related to the often small local gradients. Gradients in the boundary layer may become particularly small under conditions of strong winds or under conditions of thermodynamic instability, such as with strong surface heating or in thunderstorm downdrafts. In such cases, local turbulent transports may be more correctly related to gradients on a larger scale than depicted by two grid points.
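A minimal slab version of the vertically integrated system, for the heat budget alone, is sketched below. The entrainment closure and constants are illustrative assumptions in the spirit of the Lilly/Tennekes line of models just cited, not the formulation of this talk.

```python
def step_slab_layer(theta_m, h, heat_flux, dtheta, dt,
                    beta=0.2, gamma=3.0e-3):
    """Advance a well-mixed slab boundary layer by one time step.

    theta_m   : mixed-layer potential temperature (K)
    h         : mixed-layer depth (m)
    heat_flux : surface kinematic heat flux (K m/s)
    dtheta    : jump across the capping inversion (K)
    beta      : entrainment ratio (inversion heat flux = -beta * surface flux)
    gamma     : potential-temperature lapse rate of the free flow (K/m)
    """
    w_e = beta * heat_flux / max(dtheta, 0.05)        # entrainment rate (m/s)
    d_theta_m = (1.0 + beta) * heat_flux / h * dt     # slab warming
    d_h = w_e * dt                                    # slab deepening
    # the inversion jump grows as stable air is incorporated and shrinks
    # as the slab warms toward the free flow
    d_jump = gamma * w_e * dt - d_theta_m
    return theta_m + d_theta_m, h + d_h, max(dtheta + d_jump, 0.05)

theta_m, h, jump = 300.0, 500.0, 1.0
for _ in range(720):                                  # 6 hours at 30 s steps
    theta_m, h, jump = step_slab_layer(theta_m, h, 0.15, jump, 30.0)
print(f"after 6 h: theta_m = {theta_m:.1f} K, h = {h:.0f} m, jump = {jump:.2f} K")
```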
Let me quickly point out some of the disadvantages by listing some of the assumptions. It is necessary to assume that the turbulent heat flux varies linearly with height in the mixed layer. Observations indicate that this is generally a reasonable assumption (e.g., Lenschow, 1970). The time change of turbulent energy is omitted from the turbulent energy equation. This neglect is justified by arguing that the dissipation time scale for turbulent energy is "relatively small." We further assume that it is reasonable to allow the depth of the inversion layer to go to zero. Then, via Leibniz's rule, only terms which involve vertical gradients or time derivatives survive in the limit of vanishing inversion depth. We also assume that the flow is hydrostatic, barotropic, and adiabatic, with the exception of turbulent transport of heat.

Let me make a couple of quick suggestions regarding the observational setup. It appears that for various mesoscale values of the Rossby number, acceleration effects may dominate the low level convergence field. In this case, good vertical resolution is not necessarily needed unless one expects stable, strongly baroclinic flow. Resolution is most needed in the vicinity of the inversion or transition layer, where variables are changing most rapidly with height. It is not clear that an inversion will always exist, for example in an updraft or in a downdraft region. In the case of strong vertical motions, a legitimate question is what and where is the boundary layer? These fundamental complications cannot be intelligently addressed until after preliminary observations of the boundary layer in the immediate thunderstorm vicinity.

[Schematic: a well-mixed layer capped by an inversion or transition layer at z = h, with an idealized free flow above in which the turbulent transports are zero.]

[Figure 5e. ATS-3, 23 July '73, 2027 GMT. Figure 5f. ATS-3, 23 July '73, 2203 GMT.]

Reprinted from Preprint Volume, Fifth Conference on Weather Forecasting and Analysis, March 4-7, 1974, St. Louis, Mo. Published by Amer. Meteor. Soc., Boston, Mass.

THE EFFECT OF EARLY MORNING CLOUD COVER ON AFTERNOON THUNDERSTORM DEVELOPMENT

James F. W. Purdom and James J. Gurka
Applications Group, National Environmental Satellite Service, NOAA

1. INTRODUCTION

In an atmosphere with a potential for thunderstorm activity, the location of early morning cloud cover plays an important role in determining where afternoon convection will occur. Using the early morning satellite picture, areas can be identified which should be the first to experience thunderstorms due to heating. The early morning satellite picture may also be used to determine how a squall line will evolve as it moves over a given area.

2. EARLY CLOUDS AND THUNDERSTORMS DUE TO HEATING

When the major factor controlling afternoon thunderstorm formation is solar heating, early morning cloud cover plays a dominant role in controlling where afternoon thunderstorms will first form. During the discussion to follow, refer to figures 1a, b, and c. In the early cloud-free areas, the sun's energy will freely heat the ground and air. The early morning cloud-covered areas, however, are kept several degrees cooler due to the clouds' higher albedo as well as the evaporation of water droplets as the cloud cover dissipates. The situation which develops due to this differential heating is analogous to the land-sea breeze effect. The air in the early morning cloudy region, being more dense, sinks and spreads out, lifting the warmer and more unstable air at its perimeter. Thus, the first thunderstorms to form are along the early morning cloud cover boundary.
In addition, the subsidence and slower heating rate in early morning cloudy areas help keep that region free from convection for most of the day.

[Figure 1a. ATS-3, 10 July '73, 1332 GMT. Arrows point to a few of the perimeters on which afternoon thunderstorms due to heating will form. Pay particular attention to Alabama, the Oklahoma-Texas panhandle and their common E-W border, as well as Arkansas.]

[Figure 1b. 10 July '73, 1500 GMT surface temperature analysis. Notice how the early morning cloud-covered areas are several degrees cooler than the early morning clear areas.]

[Figure 1c. ATS-3, 10 July '73, 1902 GMT. Notice the distribution of the afternoon thunderstorms with respect to the perimeters of the early cloudy areas in figure 1a. Especially evident are the thunderstorms in Alabama, Arkansas, Missouri, Oklahoma and Texas which formed on the perimeters of the early morning cloud cover.]

3. EARLY CLOUDS AND SQUALL LINE DEVELOPMENT

Among the major factors controlling squall line development is the stability of the air through which the developing line advances. As was shown in the previous section, one effect of early morning cloud cover is to keep the air beneath it stable for most of the day, relative to the air which surrounds it. As one might expect, the effect of early morning cloudiness is to inhibit development of a squall line which moves over it. In some of the more drastic cases, the early morning cloudy area can cause the total collapse of a portion of the developing squall line, as is shown in figures 2a-e.

[Figure 2a. ATS-3, 24 July '73, 1445 GMT. An arc-shaped line of clouds which loops from north-central Kansas across northwest Missouri into south-central Iowa (from A-B-C) is the leading edge of a developing squall line. The cloud mass in southern Kansas, at D, is composed of low stable clouds. As thunderstorms form along the developing squall line, it moves over the area which was covered by early morning clouds. The result is shown in figure 2c.]

[Figure 2b. 24 July '73, 1500 GMT surface temperature analysis. Notice how the cloudy area in southern Kansas has kept that area several degrees cooler than the clear regions which surround it. In this case the air was kept so stable by the early morning clouds that no thunderstorms could form as the developing squall line moved through it; however, vigorous thunderstorms developed both east and west of it. The cooler air in Nebraska, northern Kansas and Missouri is due to the rain-cooled air behind the developing 'bubble' squall line.]

[Figure 2c. ATS-3, 24 July '73, 2146 GMT. Strong thunderstorms have formed both to the east and west of the early morning cloudy area as the squall line developed. The air beneath the early morning clouds was kept too stable to support thunderstorm activity, thus splitting the developing squall line in half.]

We will now leave the ATS-3 satellite era and go to high resolution satellite imagery of the type which has just become available with the new SMS-GOES satellite. Let me point out a few important things about the GOES system when SESAME comes into operation. First, two SMS-GOES satellites are planned to be in operation at that time. One will be located at 130 W and the other at 70 W (this has now been changed to 135 W and 75 W). The GOES satellites have the capability to take pictures at rapid intervals and still cover a large area. You could view the entire USA about once every 5 minutes.
Thus by obtaining rapid interval pictures (under special circumstances) one could observe development on the cumulus scale in both space and time. This should enable us to see many of the important mesoscale mechanisms responsible for triggering severe weather. Two satellites viewing the same area from different locations allow you to get heights of clouds by stereo techniques. With the infrared data that these satellites provide, stereo height determinations, and ground-truth aircraft, we should be able to get an idea of cloud height at many places in the SESAME area where there are no aircraft. Stereo techniques could also be used to make movies that allow you to observe cloud development in three dimensions over the entire SESAME area. The important severe storm triggering mechanisms that are not well understood should become more understandable (and predictable) with this technique.

Let's take a look at an infrared movie loop. The IR is in the 11 micron channel; it's a thermal channel. Thunderstorms are brighter white here.

QUESTION: What were the positions of those SMS-GOES satellites?

PURDOM: 70 W and 130 W. (Now 75 W and 135 W.) The length of this movie is 17½ hours. The reason I wanted to put this in was to show what happens when cirrus moves over a line of active convection. Here you can see a band of cirrus generated by a cluster. The cirrus is moving to the east and crossing an active line of convection. You can see thunderstorms to the north and south of where the cirrus band crosses the convective line. However, a minimum of convective activity exists where the cirrus crosses the line, so evidently in this instance the cirrus does not seed the towering cumulus and cause it to become thunderstorms. This is occurring out over water. I would not expect the lack of heating beneath the cirrus to be the cause of the lack of thunderstorms beneath it, but rather upper level convergence in the cirrus patch.

D. JOHNSON: The resolution is about 5 miles. It is the same resolution as ATS-3.

PURDOM: Right. Now if we can take a look at the ½ mile resolution visible, I think you are just going to be amazed. This is a ½ mile resolution movie. Notice several things which you can see develop on the mesoscale: the peninsula effect; a sea breeze; Mobile Bay and its effect on the early morning cloudiness; individual thunderstorm development; early morning cloud cover giving way to afternoon clear skies with thunderstorms forming along its perimeter; here a thunderstorm develops and rains, leaving a cool stable area on whose outer edge you get another thunderstorm forming. And you can see many more things. So you can see that satellite imagery of the type we've been looking at is very important in understanding what's happening on the mesoscale. It should be one of the most valuable observational, and forecast, tools available for SESAME.
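As a back-of-envelope illustration of the stereo height idea mentioned above: a cloud at height h projects to different apparent ground positions in the two satellite views, and the displacement between the two projections fixes h. The flat-earth, east-west-only geometry below is a toy model, not an operational retrieval.

```python
import math

def stereo_cloud_height(parallax_km, elev1_deg, elev2_deg):
    """Toy stereo height from two geostationary views.

    parallax_km : east-west offset (km) between the cloud's apparent
                  positions in the two co-registered images
    elev1, elev2: elevation angles of the two satellites from the cloud

    Each view shifts the cloud's image by h/tan(elevation); with both
    satellites on the same side of the cloud the usable parallax is the
    difference of the two shifts.
    """
    cot1 = 1.0 / math.tan(math.radians(elev1_deg))
    cot2 = 1.0 / math.tan(math.radians(elev2_deg))
    return parallax_km / abs(cot1 - cot2)

# A 5 km apparent shift seen from satellites at 35 and 55 degrees elevation:
print(f"cloud top near {stereo_cloud_height(5.0, 35.0, 55.0):.0f} km")
```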
The Development of Boundary-Layer Turbulence Models for Use in Studying the Severe Storm Environment

Jim Deardorff

I will try to describe briefly what a gamma-scale model (I think Roger Pielke called it that), especially a turbulence model, can or cannot do to give you more information about the boundary layer. Out of necessity, a model that tries to get down to the scale of the turbulence elements themselves has to be on a sufficiently small scale that it cannot really treat severe storms. The grid interval has to be a small fraction of the boundary layer depth in order to resolve the turbulence energy of the boundary layer turbulence. Besides that, it has to be three-dimensional, because in a two-dimensional model the turbulence energy cascades the wrong way, towards longer scales instead of shorter. You can not be more incorrect than that. So when you consider these things together, it means your gamma-scale model is not going to be able, with present computers, to stretch out to a scale where we can do anything with severe storms and still look at the boundary layer scale turbulence. The connection with SESAME will be a little bit tenuous here; we will be looking at the fair weather environment of severe storms more than anything else.

I would like to first discuss three different types of turbulence closures in the boundary layer scale model. Just to briefly refresh yourself, what I am mostly talking about in the equations for the three velocity components, temperature, and humidity (see Fig. 1) are the subgrid Reynolds stresses (six of them), the subgrid heat fluxes, and the subgrid moisture fluxes (the prime-bar terms).

[Figure 1. The filtered equations for the three velocity components, potential temperature, and humidity, showing the subgrid Reynolds stress, heat flux, and moisture flux terms.]

What do we do about them? I think this is one of the most interesting questions left in trying to make these models realistic. The first method, which I and some others have used, dates back to Smagorinsky's type of subgrid scale eddy coefficient, where you use non-linear eddy coefficients with downgradient transfers for the fluxes on the subgrid scale. This eddy coefficient is related to the square of the grid interval and the local velocity deformation at each grid point, as in Fig. 2. You find that the eddy coefficient for heat has to be made a little bit bigger than the eddy coefficient for momentum, because a scalar cascades its variance in three dimensions quite a lot faster than does momentum (due to the pressure gradients). This is something that has also come out of Kraichnan's theory. So that is one way of doing it.

[Figure 2. Smagorinsky-type subgrid eddy coefficient: K_M = (0.2 Δ)² |Def|, where Δ is the grid length, with the coefficient for heat a factor of about 3 larger.]
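In code, that first closure is only a few lines. The deformation normalization and the factor of 3 between the heat and momentum coefficients follow the figure; treat both as illustrative rather than as the exact constants of Deardorff's model.

```python
import numpy as np

def smagorinsky_coefficients(grad_u, dx, cs=0.2, heat_factor=3.0):
    """Deformation-based subgrid eddy coefficients.

    grad_u : 3x3 array of velocity gradients, grad_u[i, j] = du_i/dx_j (1/s)
    dx     : grid interval (m)

    K_M = (cs * dx)^2 * |Def|, with |Def| a norm of the deformation
    tensor D_ij = du_i/dx_j + du_j/dx_i, and K_H = heat_factor * K_M.
    """
    deform = grad_u + grad_u.T
    def_mag = np.sqrt(0.5 * np.sum(deform**2))
    k_m = (cs * dx)**2 * def_mag
    return k_m, heat_factor * k_m

# Pure vertical shear of 5 m/s per km on a 100 m grid:
g = np.zeros((3, 3))
g[0, 2] = 5.0e-3                     # du/dz
k_m, k_h = smagorinsky_coefficients(g, dx=100.0)
print(f"K_M = {k_m:.1f} m^2/s, K_H = {k_h:.1f} m^2/s")
```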
Fig. 3 shows a second way of doing it: carrying along equations for each of these subgrid-scale fluxes at each point. There are fifteen equations here, for the six Reynolds stresses, three heat fluxes, three moisture fluxes, and some variances and covariances. When you program this on a computer, it is quite a job getting the bugs out, as well as filling in the values of the constants that enter into the terms that have had to be assumed for dissipation and diffusion effects. Nevertheless, this has been pursued even on the subgrid scale. I did this a couple of years ago and will just briefly show some results on that.

[Figure 3. The second-moment equations for the subgrid Reynolds stresses, heat fluxes, moisture fluxes, and the associated variances and covariances.]

Then a third way is to use a hybrid method in between the first two. Both this method and the second one were suggested by Doug Lilly in a paper in 1967, and it has taken about seven years now just to test this out on a tentative basis. In this hybrid method you just keep the equation for the subgrid scale turbulence energy, with advection, heat flux sources, Reynolds stress sources, diffusion, and dissipation, as in Fig. 4. You use an eddy coefficient assumption, as with Smagorinsky's eddy coefficient, but you relate that eddy coefficient to a constant times the turbulence energy to the 1/2 power times the length scale. A recent modification I have been trying is not to have the length scale necessarily be the same as the grid scale (although that is what it should be, I think, when the atmosphere is locally neutral or unstable), but to allow that turbulence length scale to be smaller in stable regions, where the turbulence may not even be well enough mixed to extend over one vertical grid interval. In this method there are a couple of constants again to calibrate. I might mention that if you just keep the third and fifth terms on the right of the ∂E/∂t equation, this approach boils down to Smagorinsky's approach.

[Figure 4. The subgrid turbulence energy equation and the associated eddy coefficient, K_M = c_m ℓ E^(1/2), with ℓ = Δ in the neutral and unstable cases and ℓ = min(Δ, 0.5 E^(1/2)/N) in the stable case.]

Just to give you a better clue as to the type of problem I have applied this to most, it is the mixed layer type of turbulence above heated ground.

[Figure 5. Potential temperature profiles at 1200, day 33 of a Wangara simulation: subgrid eddy coefficient based on the subgrid turbulence energy equation; subgrid fluxes from the second-moment equations; and (schematic) the result when the eddy coefficient is not stability dependent.]

The second-moment subgrid equation model typically gives a nice well-mixed layer, as in Fig. 5, with a little jump in stratification, like Larry Mahrt showed, at its top. The stable layer above is a remnant of whatever initial conditions you had there. The last method I just reviewed was used in a repeat calculation; with the same mean initial conditions, it gave about the same results. If, however, you use an eddy coefficient on the subgrid scale which has no direct stability dependence, what various people have found (like Sommeria, and Wilhelmson and Ogura) is this: the mean temperature structure will be smeared out too much and you will lose the main definition of where the top of the boundary layer is (as shown by the dashed curve of Fig. 5). In fact, Wilhelmson and Ogura had to apply the eddy coefficient in the heat equation in a special way to avoid this. I think this shows that we do need something better than that, e.g., Smagorinsky's original formulation, when applied to problems involving stratification.
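A sketch of the hybrid closure's coefficient, with the stable-case length scale cut, is below; the constants are taken loosely from Fig. 4 and should be read as illustrative.

```python
import math

def tke_eddy_coefficient(e, dx, n2, cm=0.12):
    """Eddy coefficient from the subgrid turbulence energy: K_M = c_m * l * sqrt(E).

    e  : subgrid turbulence kinetic energy (m^2/s^2)
    n2 : local Brunt-Vaisala frequency squared (1/s^2)

    In neutral and unstable air the length scale l is the grid interval;
    in stable air it is reduced toward 0.5*sqrt(E)/N when that is smaller,
    mimicking turbulence too weak to mix across a full grid interval.
    """
    l = dx
    if n2 > 0.0:
        l = min(dx, 0.5 * math.sqrt(e) / math.sqrt(n2))
    return cm * l * math.sqrt(e)

print(tke_eddy_coefficient(e=0.5, dx=100.0, n2=0.0))     # neutral: ~8.5 m^2/s
print(tke_eddy_coefficient(e=0.5, dx=100.0, n2=1.0e-4))  # stable: ~3.0 m^2/s
```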
Fig. 6 shows the sort of eddy pictures one gets in this type of model on a fine scale, where it is, say, 5 km each way between cyclic lateral boundary conditions in the horizontal. Here we are looking down on the turbulence in the middle of the mixed layer, at the vertical-velocity patterns, with solid lines indicating upward motion and dotted lines downward. With good thorough mixing like this above heated ground around noon or afternoon, the vertical velocity patterns you get out of a model like this will show rather random orientations, as will the patterns of the other velocity components. That result was from using Smagorinsky's eddy coefficient in a model where the upper lid of the model only extended to the top of the mixed layer, so we didn't have this problem about the stable stratification getting smoothed out at the top of the mixed layer.

[Figure 6. Contours in the x-y plane at z = 0.45 z_i, at time 16.301, using Smagorinsky eddy coefficients. Contour intervals: u 0.25, v 0.25, w 2.00, T 0.15, p 3.00, K 0.0040.]

Now, using that horrible mess of second moment equations, the eddies halfway up in the mixed layer (again 5 km on a side) had the pattern of vertical velocities shown in Fig. 7. The main difference from Fig. 6 is really that the zero line in this pattern is not drawn in as it was before. If it were, it would connect a lot of the more disjointed areas together. Still the eddies have a random orientation.

[Figure 7. Contours in the x-y plane at hour 12:00, using the second-moment subgrid flux equations.]

Now the other way of doing it, using the hybrid method, gives rather similar patterns, as in Fig. 8. I think that it is an accident at the time of this figure that the positive vertical-velocity eddies happened to be lined up along the left-hand edge. A little bit later the other eddies near the middle happened to connect together and line up. There always seems to be a tendency for the updrafts to want to link together where they can. I suppose if the model could extend way out in scale you might get some more interesting mesoscale structure to the eddies.

[Figure 8. Contours in the x-y plane at z = 0.50 km, hour 12.02, using the subgrid eddy coefficient based on the turbulence kinetic energy equation.]

So the conclusion from this is that you are not going to get much difference in eddy structure whichever of the three methods you use. Therefore I now feel it is preferable not to use the 15 subgrid-scale second moment equations, simply because that took 2 or 2 1/2 times more computer time, although there were some interesting sidelights with that method.

Fig. 9 shows a side view of what you get with this (second moment) kind of model. Looking at potential temperature, you can see that the delineation of the mixed layer is a by-product of the result. You may have trouble defining it precisely at each point, but it shows up pretty well, and warm plumes can be seen to feed the vertical velocities, which in turn push up on the top of the mixed layer, creating the internal waves also shown.

[Figure 9. Contours in an x-z plane at hour 12:00, using the second-moment subgrid flux equations.]

A very similar picture (Fig. 10) came out of the hybrid method, which ran two times faster on the computer. The main updraft on the far left at this particular time is seen to have a warm plume feeding into its base. This and other features also look about the same.

[Figure 10. Contours in an x-z plane at hour 12.02, using eddy coefficients based on the turbulence kinetic energy equation.]

As for the statistics, Fig. 11 gives a quick look at the vertical velocity variance in the mixed layer when scaled by the convective velocity scale w*, involving the surface heat flux, the height of the mixed layer, and g/θ: w* = [(g/θ)(w'θ')_s z_i]^(1/3). Using Smagorinsky-type eddy coefficients, the variance is shown by the dashed line. (That model only extended up to the upper limit of this line.) Then, using the complicated second-moment equations, the profile is shown by the dash-double-dot curve. The hybrid method, with only the subgrid-scale turbulence energy equation to get the eddy coefficients, gave the solid profile, obtained just a few months ago.

[Figure 11. Profiles of vertical velocity variance scaled by w*² for the three closures, with aircraft data and a surface-layer parameterization for comparison.]

The results probably lie within the scatter of the different realizations from which I obtained these curves. The scattered aircraft data also shown here (the triangles are Lenschow's points) don't permit you to decide which is the better result. Wyngaard's surface layer, tower-derived parameterization formula (dotted curve) gives you vertical-velocity variance about the same as in the model, except that it would predict it to keep on increasing too fast with height if you extended the formulation too far up in the boundary layer.

Finally, Fig. 12 gives an estimate of what the subgrid turbulence is using these three different methods. The ratio of the vertical component of the subgrid scale turbulence to the total vertical turbulence was surprisingly similar in all three cases. The main difference is seen to occur in the stable layer. So again, that suggests to me that the subgrid-scale eddy coefficients based on turbulence energy are probably good enough to parameterize the subgrid scale fluxes in this type of boundary layer model.

[Figure 12. The ratio of subgrid-scale to total vertical velocity variance, <w'²>_subgrid / <w'²>_total, for the Smagorinsky-type, second-moment, and turbulence-energy closures.]

The models are being used for supplementing measurements, especially aircraft observations, and for helping to parameterize larger scale models, especially NCAR's global circulation model, and also for deriving velocity fields in which you can put pollutants or particulates and cause them to move around and diffuse.

CURRENT DEVELOPMENT OF A 3-D MESOSCALE MODEL AT GFDL

Bruce B. Ross
Geophysical Fluid Dynamics Laboratory/NOAA, Princeton, New Jersey 08540

Abstract. A description is given of the 3-dimensional mesoscale model currently under development at GFDL. The present model, which is being designed to study urban heat island effects and squall line evolution, uses the deep anelastic primitive equations on a variable Cartesian grid net with the model domain extending to the tropopause. Important areas which are currently under study include: a) possible subgrid-scale parameterizations of turbulence and cloud microphysics, b) the use of "open" boundary conditions, c) methods for communicating changes in large-scale weather systems to the mesoscale solution, and d) the possible use of partially-implicit numerical schemes in the full primitive equations as an alternative to the anelastic equations.
Current computer resources at GFDL permit a mesoscale model with a grid net of approximately 100 x 100 x 30 points, with computational times estimated to be of the order of 10-20 seconds per time step.

This morning I would like to describe some of the recent work which we have been carrying on at GFDL in the Mesoscale Modeling Group headed by Isidore Orlanski. Our group is currently developing a three-dimensional numerical model which we hope will be capable of modeling a number of different mesoscale phenomena. The interest of our group has originally been concerned primarily with the effects of the urban environment on the atmosphere. In this regard we have developed a two-dimensional model for studying a planar-symmetric urban heat island. The influence of the urban environment on the boundary layer and on precipitation patterns may be classified in the meso-β category, as distinguished by a 25-250 km horizontal scale. Squall lines exhibit similar horizontal scales; therefore we would like to develop a three-dimensional model that is sufficiently flexible to permit us to simulate both severe storm and heat island mesoscale phenomena. Today I would like to discuss the current status of this model and also to discuss some areas of the model which are currently being developed by our group and which I believe are critical to the success of the mesoscale model.

I would like to begin by describing some of the basic features of our three-dimensional model as it now exists. The model uses a Cartesian coordinate system with primitive equations defined on a staggered grid mesh. The grid mesh spacing is stretched in the vertical through the use of a mapping factor in the equations. In the future we hope to incorporate variable grid spacing in the horizontal as well. The model is not primarily a boundary layer model. Rather, it is designed to treat deep convection processes and therefore will extend up to the tropopause (and beyond the tropopause if necessary). We use the deep anelastic equations in the program to permit us to properly resolve deep convection processes. The basic drawback of the deep anelastic equations is the need to solve a three-dimensional Poisson equation, a task which can become quite time-consuming for problems involving large arrays. Fast Fourier transform methods are quite efficient for solving such elliptic problems. However, in the current developmental model, we have chosen instead to employ slower iterative ADI ("alternating direction implicit") methods in order to give ourselves additional flexibility in our choice of model boundary conditions, as well as in the different elliptic diagnostic equations involved in other possible equation systems. In this regard, we are currently investigating the possibility of using implicit methods selectively in the full primitive equations as a means of eliminating the stability restriction of the sound wave speed on the model time step. This would serve as an alternate method to the procedure of filtering sound waves from the solution through the use of the anelastic approximation, and could be used in the model with little increase in computational time.

The primary challenge facing the numerical modeler in developing a successful mesoscale model involves finding a means for treating the interaction between those phenomena which are resolved by the numerical model and the larger- and smaller-scale processes which cannot be explicitly represented within the model.
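Backing up for a moment to the choice between direct and iterative pressure solvers described above: here is a minimal sketch of why FFT methods are so efficient for the anelastic pressure problem. On a periodic grid the Laplacian is diagonal in spectral space, so the solve is one division per wavenumber; the catch, and the flexibility argument for iterative ADI methods, is that real model boundary conditions are rarely this simple. The example assumes a triply periodic, constant-density cube.

```python
import numpy as np

def solve_poisson_periodic(rhs, dx):
    """Solve del^2(p) = rhs on a triply periodic cube via FFT."""
    n = rhs.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid dividing by zero for the mean mode
    p_hat = -np.fft.fftn(rhs) / k2
    p_hat[0, 0, 0] = 0.0              # the mean of p is arbitrary; set it to zero
    return np.real(np.fft.ifftn(p_hat))

# One solve on a 32^3 grid: a single FFT pair, no iteration.
rhs = np.random.randn(32, 32, 32)
rhs -= rhs.mean()                     # solvability: rhs must have zero mean
p = solve_poisson_periodic(rhs, dx=100.0)
print(p.shape, float(p.mean()))
```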
At the small-scale end of the spectrum, cloud microphysics and turbulent eddies with scales less than the order of the grid size must be parameterized. Also, for models in the meso-β range, it will be necessary to parameterize moist convection processes. It will be a difficult task to find methods for representing these important subgrid-scale processes which produce realistic behavior but which are not excessively time-consuming.

On the other end of the spectrum, we face the problem of introducing information about larger-scale processes (such as meso-α and synoptic scale events) into the model. This information may be obtained from observations, from a numerical model with a larger domain (the "nesting" problem), or from an idealized set of conditions representing the influence of larger-scale processes. The problem of introducing this information into the mesoscale model brings to mind two areas of model design which will require further development if the interaction of the numerical simulation with larger scales is to be properly treated. First of all, "open" boundary conditions need to be developed which not only permit information about large-scale conditions outside of the model domain to be introduced into the model at the boundaries, but which also permit gravity waves and advected quantities within the model domain to pass out of the model through the boundaries without distorting the interior solution excessively. In addition, it is also possible to incorporate this large-scale information into the model at certain times during the computation by constraining the larger-scale modes in the mesoscale model to conform to the input data about the meso-α and synoptic scale conditions. This procedure, which may be viewed as a form of "four-dimensional analysis," has the advantage in certain solutions of allowing the model to be driven by changing "mean" conditions which are applied globally to the equations of motion in the numerical solution, rather than by requiring that temporal changes in the large-scale conditions be forced on the model only at the model boundaries. However, methods must be developed which permit the large-scale modes of the solution to be forced intermittently (e.g., at times when observational data are available) without exciting non-physical modes in the solution.

Before concluding my talk, I would like to suggest the limits on model resolution and computational speed which I believe we will face in developing our mesoscale model with current state-of-the-art computer capabilities. We at GFDL are currently involved in the final stages of acceptance of a fifth-generation computer made by the Texas Instruments Co. This machine, known as the ASC-4X computer, has a million words of central memory and is currently capable of computation speeds of up to five times the IBM 360/195. In terms of the current version of our 3-D mesoscale model, this machine will permit us to use an array size of approximately 100 x 100 points in the horizontal and 30 grid points in the vertical. The central processor time for the computation of one time step with such an array size would be on the order of 10-20 seconds (depending upon the method of solution of the Poisson equation).
We may make two general observations from these estimates concerning the forseeable limits of 3-D mesoscale modeling capabilities: First, that we can hope to resolve no more than two orders of magnitude in the horizontal, and secondly, that large and complex mesoscale models will have computing times which are roughly of the same order as the time duration of the phenomena which they are modeling. In conclusion, it is clear that mesoscale modeling is in an early stage of development . We will need to gain a lot of experience in the next few years. In this regard, we will need to obtain a good data set with which to verify the validity of the mesoscale models which we are now developing and, more importantly, which will help us to understand the physical mesoscale processes which we are seeking to simulate. So this is where the SESAME program comes in, and we certainly welcome it. 2r38 D. Raymond - This question is really directed toward anyone who spoke. Most of what we have been doing this morning is showing how complicated mesoscale modeling is but I wonder if it's possible to make one simplification. Back in the 1950's, Stern and Malkus wrote some papers on heat islands and found that the surface friction or stress effects were considerably less important than the heat transport effects of eddies and thermals and plasma, and I wonder if this is really true in general? If it is, on the mesoscale in this particular case we could throw out about 2/3 of Jim Deardorff's equations and still be ok. Pielke - Well, I'll try to answer you as best I can. In my calculations over south Florida it was true that if you just looked at the convergence generated by surface roughness the values were much less than that due to the heat flux. But if you didn't have the proper representation of the surface roughness, you didn't obtain the correct heat flux either because they're the magnitude of the heat flux is correlated with surface roughness over areas where there is yery rough terrain, I suspect that this becomes even more important, so I can't agree with your generalization. Deardorff - I don't think there would be too much satisfaction with throwing out all of the momentum fluxes in your model because that's what is needed to control the wind speeds in the boundary layer. You wouldn't want the flow to be running away or oscillating in an inertial oscillation; it really must remain under control of frictional forces. Ross - That's certainly true in the large scale, does it really hold on the mesoscale? 269 Deardorff - The motions still obey the same basic equations and are subject to frictional forces. N. Phillips - I address this question to Jim Deardorff. Do your computations with the three different varieties of subgrid scale parameteri zations predict that during the destruction of the nocturnal version in the morning, at some fixed height, like 200 meters, temperature at first cools before it starts warming up? Deardorff - Yes, that occurs because the model predicts a heat flux which has a negative value at the top of the mixed layer and when you go a bit higher than that then the heat flux goes from negative to zero, produces net cooling. Hoffert - It occurs to me that the speakers were dis- cussing basically two kinds of boundary layer models. On the one hand an integral method was proposed that uses essentially zero vertical resolution legislated profiles time dependent depth, and on the other hand a three dimen- sional model which would parameterize turbulence on the subgrid scale. 
The first is probably appropriate to a large scale model or general circulation model the second would give us a lot of information on the microscale. I would like to propose another possibility which would really be a time dependent, one dimensional model where turbulence is essentially modeled rather than parameterized on a subgrid scale. This is actually treating turbulence, treating the planetary boundary layer locally over each grid point as a one dimensional column and I think this would have several advantages. It might be make it possible to treat some of the current dynamic issues associated with possible cloud formation within the boundary layer itself and to allow one 270 to say something about the vertical structure of the P.B.L. including the effects of heat flux without going into the full blown 3-D type of model. Alaka - You have said that we need a good data set for initialization of the model or for its verification. What's really meant by this, what is a good data set for this kind of model ? Ross - Well, I will say that, at this point, I can't answer that question. I think we have to have more experi- ence with the models and be able to ascertain what physical quantities are necessary for adequately representing the phenomena that we are studying. For example, what detail do we have to parameterize cumulus effects and radiation in order to develop an accurate meso-3 scale model. Also we must set limits on the accuracy of the data which we will need to input into the models. Smagorinsky - Certainly you would want to have the basic dependent variables to establish initial conditions for model calculations such as that described by Dr. Ross, on scales appropriate and resolvable for those models. You would also want the same dependent variables for verifica- tion, essentially the horizontal wind components, the temp- erature, the water vapor content. You would also want data on parameters that do not enter as initial conditions but which are verifiable from the results such as precipitation and other variables which are consequences of the evolution of the model. You're trying to see whether models of this sort can sustain development, sustain instability phenomena of the type that resemble what happens in the real geophysical medi a . 271 Lilly - I'd like to ask the same question in a little different way. You can start looking at the observational systems that have been talked about and ask what they're really capable of doing. As in the large scale case, they're not capable of producing a complete set of all variables. You might have, for example, quite a good set of winds from doppler and other sources but \/ery incomplete temperature and moisture data. So you have the equivalent to the large scale data assimilation problem, in taking an incomplete set of data and trying to make the model compatible with it, in the presence of various kinds of noises that may lead to instabilities or at least large spurious waves bouncing around. I think that this is a problem that can't be ignored very much longer in both mesoscale and cloud model- ing. Gal-Chen - This comment is directed to Dr. Ross. You said you were interested in a meso-6 scale. Nevertheless, your system of equations seem to be appropriate for a meso-y scale. It would seem to me that the urban heat island effect is really a meso-y problem. 
Ross - In the vertical I think that you can say that basically it is a meso-y phenomenon in terms of the boundary layer detail but in the horizontal, it involves scales of the order of 30 km which are more representative of the me so-3 scales. Smagorinsky - As Ross said, you're using a large com- puting machine, one can hope to resolve no more than two decades. I think that if you want to deal with the meso-y, you have to do a two-step calculation for the meso-y. 272 The mesa-3 would be some sort of limited area of fine mesh type of calculation. You might want to idealize the larger scale but essentially you're going to have to step down. You can't deal with all scales simultaneously. Cotton - Do you use an anelastic system? Ross - Yes, we want to start with an anelastic system and then determine whether it is necessary for the particu- lar problem under study. Kreitzberg - When you put in all the cloud physics, will you still be able to use that size of domain? Ross - That is a problem which we will have to face later Fox - I just wanted to comment on this question of how do you use data. At the pre-SESAME meeting that was held at Princeton, I talked about EPA's St. Louis project, in which we had a set of 25 stations located within an approximately 50 mile radius around the city of St. Louis. We were continuously facing this problem of how to use the data and, of course, one obvious thing to do is to try and make a fairly good determination of the boundary conditions. You would be fairly certain of what the boundary conditions are in terms of velocities and temperatures and then try to use the combination of objective analysis and modeling. There hasn't been too much talk here about objective analysis on the mesoscale, I think this is a problem that has to be faced. 273 Pielke - Well, in regards to objective analysis I suspect that there will be more said about it this afternoon because there are several talks on mesoscale modeling. Yes? E. Barrett - I wonder whether the scale of the model is adequately specified, a, 3, and y because it seems to me that the vertical resolution in each case is highly depen- dent on the horizontal resolution and it depends a lot on the phenomena that you are going to describe. The vertical resolution required for the boundary layer is certainly different than the vertical resolution required on a full grown cumulonimbus. Pielke - The way I would define the difference between a meso-3 and a meso-y model would be that in a meso-y model you're required to use the nonhydrostati c formulation of the primitive equations whereas in the meso-beta scale model you can use the hydrostatic assumption and the meso-alpha model might be one in which you can use the quasi -geostrophic assumption. The amount of vertical resolution is not used in making these definitions. In my meso-3 model, for in- stance, I'm just looking at a shallow atmospheric phenomena, namely the sea breeze by some definitions one might want to call that a meso-y model but I wouldn't since I use the hydrostatic form of the primitive equations. Barrett - I also have a few questions for Dr. Deardorff I am rather curious as to why in your model you don't pro- duce something like Benard cells. One would think with the rather uniform horizontal boundary conditions you would see them in the model. Do you do something to keep them out? 274 Deardorff - Benard cells require surface tension to be observed in the laboratory. 
Barrett - I also have a few questions for Dr. Deardorff. I am rather curious as to why in your model you don't produce something like Benard cells. One would think with the rather uniform horizontal boundary conditions you would see them in the model. Do you do something to keep them out?

Deardorff - Benard cells require surface tension to be observed in the laboratory. Besides that, the aircraft observations and most of the radar and acoustic soundings, too, don't show that much regularity, I believe, unless you have clouds at the tops of these larger mesoscale cells. I think that their scales are tied in with cloud radiation cooling.

Frenzen - I will begin by addressing myself to the last question and answer: you are getting, not Benard cells, but organized convection, in the organized line-up of the w's; at least there is that tendency. But now I would like to talk about the mesoscale data sets. It's kind of an essay. One of the criticisms of global modeling is that there is no single data set against which all global models can be tested, or maybe I should use the word validated. One of the goals of GARP is apparently to produce such a global-scale data set. Ergo, one of the goals of a national mesoscale experiment should be to produce a mesoscale data set. In SESAME, with the emphasis on severe storms, you're producing a set of prejudiced mesoscale data sets. In Scott Williams' summary the other day, although the figures are only provisional, he showed 20 storm days and 10 non-storm days. In an observational program that sets out to obtain a mesoscale data set, the ratio would have to be considerably different. So perhaps some part of any mesoscale network should operate continuously. The AEC and my programs are interested in these undisturbed situations where we can do continuous experiments in mesoscale dispersion.

Pielke - I would agree wholeheartedly with that. In fact, I think that the meso-β scale models should be tested first on days which are undisturbed, because, if the model doesn't do well on undisturbed days, it's very unlikely that it's going to do well on disturbed days. The entire set of observations should be obtained on undisturbed days as well as disturbed days.

Smagorinsky - A very important point that came out of the Princeton meeting in November was the importance of establishing, observationally as well as theoretically, knowledge of the background spectrum, that is, that part of the spectrum arising from quiet periods. The occasional disturbances which occur are, of course, quite important phenomenologically, but without the background spectrum it is nonsense to deal with the extreme phenomena first. You really must make provisions in the observation experiment to obtain this background data.

Lilly - Point taken, but there is a real question of how many times you can afford to throw up balloons, day in and day out. The more indirect sensors that you can run more or less independently all day and night, for a week or more, the better.

SOME COMMENTS ON FINE MESH MODELING

D. K. Lilly

The primary purpose of any fine mesh modeling is to extrapolate the development of small synoptic-scale or mesoscale events, either in a real prediction mode or for the purpose of quantitatively simulating a typical situation. Since little mesoscale data exists routinely, if any at all, the emphasis is more on predicting new mesoscale phenomena than on extrapolating existing patterns, that is, on predicting small-scale events that didn't originally exist or were unobservable from available initial data. This seems on the face of it a very ambitious goal indeed. It's not without hope of success in some circumstances, though, because certainly some mesoscale phenomena, especially frontogenesis, are inherent in a larger scale precursory state.
If the potential for frontogenesis is there, it will develop out of a numerical simulation if the simulation is done right. This has been very nicely shown in some isentropic models. Another place where small-scale events occur or develop out of no pre-existing disturbance, to all intents and purposes, is where a certain large-scale environment is imposed on existing topography. A mesoscale response is generated from that topography if the large-scale environment is suitable. In this category we have mountain waves, orographic convection, sea breezes, etc. When these factors are not present or are not controlling, it would seem that mesoscale models can only produce meaningful results if mesoscale initial and verifying data are available, and that problem is a portion of what Project SESAME is aimed to remedy.

Other modeling problems besides that basic one include the matching of a limited-area fine-mesh model to a global or hemispheric prediction model and the question of the optimal tradeoff between large area and high resolution for a given prediction effort in computer time. I think that this last question hasn't really been investigated very methodically. Suppose you want to make a prediction valid for 24 hours. Are you better off with 3,000 km on a side and, say, 100 points each way, or 6,000 km on a side and the same number of points but further apart? The problem of getting the most resolution and accuracy per dollar or computing unit is always with us and often is, and probably should be, given more importance than the introduction of new physics. If the numerical accuracy is so poor that nobody can believe the results with minimal physics in it, they're not going to believe it if you put in a lot of physics. Parameterization of boundary layer and convective transports is widely recognized as one of the principal physical problems in both large-scale and mesoscale modeling.

The problem of how best to use available data for initializing is one that tends to get put off till last, everywhere except in the operating weather service. Then when an organization like NCAR tries to do real-time prediction experiments, they spend three years in either duplicating or hoping to make some kind of an improvement on the data reduction and initializing methods used in the weather services. It is hard to take over and utilize the actual codes, as simple as they may be in concept, because the weather service does not develop them in a form suitable for transfer to anybody else's computer. Sometimes they don't even know what's in the codes themselves.

Some current trends in fine-mesh modeling were identified in a workshop held at NCAR this summer. The problem of optimizing resolution was attacked in several ways. Some groups are experimenting with semi-implicit time differencing to speed up the calculations. At NCAR there is an emphasis on improving vertical resolution, either by directly adding in more fixed height levels or by going to isentropic coordinates, which seem to provide high vertical resolution where it is needed most. Not much attention was given to the use of functional space, e.g. Fourier modes, in the horizontal, although that has been shown to be preferable in the case of simple physical systems with simple boundaries, e.g. turbulence in a box.
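(Lilly's 3,000 km versus 6,000 km question reduces to simple arithmetic once a time-differencing scheme is assumed. The sketch below is an editorial illustration with invented numbers, assuming an explicit scheme whose step is limited by a 300 m/s signal speed; it is not a calculation from the talk.)

```python
# Back-of-envelope cost comparison for the question posed above: a fixed
# 100 x 100 points on a 3000 km versus a 6000 km domain, with explicit
# time stepping limited by the fastest wave (CFL). Numbers illustrative.

def run_cost(domain_km, npts, hours=24.0, c_max=300.0):
    """Relative computing cost of a forecast of given length.

    c_max : fastest signal speed the scheme must resolve (m/s).
    """
    dx = domain_km * 1000.0 / npts        # grid spacing (m)
    dt = dx / c_max                        # CFL-limited time step (s)
    steps = hours * 3600.0 / dt
    return npts**2 * steps                 # grid points x time steps

small = run_cost(3000.0, 100)   # 30 km spacing
large = run_cost(6000.0, 100)   # 60 km spacing
print(f"cost ratio (3000 km / 6000 km domain): {small / large:.1f}")
# -> 2.0: at a fixed point count, the smaller domain costs twice as much
#    through its shorter time step alone, so the real tradeoff is
#    resolution against boundary contamination, not point count.
```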
The problem of matching the lateral boundary conditions of a limited-area fine-mesh model with those of a larger scale model received lots of attention. It is doubtful, however, that the optimal solution (or even one of guaranteed stability) has yet been found. The development and testing of new methods of convective and boundary layer parameterization appears to be proceeding at a good pace, especially under the stimulation of GARP and GATE. In my opinion, the slab-type model discussed by Mahrt is very attractive. It has been difficult to get modelers to take that approach very seriously, however, because of its computational inconvenience. The slab will have varying depths, cutting across pressure or isentropic levels indiscriminately and therefore compounding the problem of boundary layer topography. In addition, the basic structure of the equations is likely to differ markedly between the stable and unstable cases. This is, however, the nature of the real world, and modelers will eventually have to face it.

Now I wish to describe briefly the fine-mesh modeling activities that were presented in the workshop held at NCAR this summer. A more complete description will be found in the proceedings of the colloquium on Sub-Synoptic Extra-tropical Weather Systems being issued shortly by NCAR.

Fine-mesh models developed by the Institute of Meteorology, University of Stockholm (MISU) and the Swedish Meteorological and Hydrological Institute (SMHI) were described by A. Sundstrom and S. Bodin. These models are generally similar in scale to the U.S. Weather Service limited-area fine-mesh (LFM) model, but differ from it and from each other in various ways. The most interesting aspect of the description was the discussion of boundary conditions around a limited-area model, which makes use of the important theoretical work by Oliger and Sundstrom.

Y. Okamura discussed a six-level fine-mesh limited-area forecast model being developed at the Japan Meteorological Agency for operational use. The model uses a 150 km grid spacing with 100 mb vertical resolution in the lower troposphere but only two levels in the upper troposphere. Thus the concentration is on lower-tropospheric sub-synoptic scale systems, especially fronts. The lateral boundary conditions are determined from results of a larger scale model, with some smoothing applied near the boundary. Time differencing is by the Euler backward scheme, which is known to damp fast-moving waves. Results were shown for two cases, with an indication that the model was underpredicting low centers and overpredicting highs.
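(The damping property of the Euler backward, or Matsuno, scheme just mentioned can be checked on the oscillation equation dφ/dt = iωφ, the usual single-wave test problem. The few lines below are an editorial illustration, not material from the meeting.)

```python
# Euler-backward (Matsuno) step applied to d(phi)/dt = i*omega*phi.
# Per step: predictor phi* = (1 + i*w)*phi, corrector
# phi_new = phi + i*w*phi*, giving amplification A = 1 + i*w - w**2,
# where w = omega*dt.

def matsuno_amp(w):
    """|A| per step for omega*dt = w; |A| < 1 means the wave is damped."""
    return abs(1.0 + 1j * w - w**2)

for w in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"omega*dt = {w:.1f}: |A| = {matsuno_amp(w):.4f}")
# 0.1 -> 0.9950, 0.3 -> 0.9582, 0.5 -> 0.9014, 0.7 -> 0.8661, 0.9 -> 0.9198
# For a fixed time step, faster waves (larger omega) are damped more per
# step, with the strongest damping near omega*dt ~ 0.7: the selective
# damping of fast-moving waves cited above.
```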
A fine-mesh version of the British Meteorological Office 10-level primitive equation model was described by D. E. Jones. The mesh spacing is 100 km and the integration is carried out over a 64 x 48 mesh centered just west of Ireland. The lateral boundary conditions are taken from a larger scale model, with large artificial diffusion inside the boundary. Integration of the model equations is accelerated by use of a semi-implicit scheme.

J. A. Gerrity described several aspects of the regional model development program at NMC. The LFM was shown to have improved precipitation predictions, especially in the summer. Somewhat oddly, it seems to be hard to show objective improvements in other parameters, although forecasters generally regard the LFM predictions as superior. A planetary boundary layer (PBL) model predicts low-level structural details, using the LFM for the upper forcing conditions. This model is being applied experimentally at NMC. Initial verification tests show, in Gerrity's words, "much room for improvement". Gerrity also discussed the use of 4th-order space differencing, semi-implicit time differencing, and a hurricane model under development.

A more sophisticated boundary layer model (BLM), also using an operational primitive equation model for forcing, is being developed by the Techniques Development Laboratory of NMC and was described by W. Schaffer. Tests conducted using a one-dimensional version of the model showed that it duplicated the Australian Wangara diurnal cycle in the boundary layer with reasonable fidelity. The model is intended for application over the non-mountainous part of the U.S.

M. L. Kaplan and D. A. Paine showed some results from their regional-scale modeling program (see Paine's talk elsewhere in these proceedings). The cases shown included several severe storm events. They used a 42 km mesh with about 2500 mesh points at 10 levels. An extensive and critical discussion accompanied and followed the presentation and focused on questions of the adequacy of the modeling assumptions, the initial data, and the verification techniques. On their face, the model results seem impressive, but there is a clear need for more tests to be conducted in cases of less extreme development.

C. Kreitzberg and D. Perkey described a two-step nested-mesh model designed to predict and simulate regional and mesoscale weather, especially in circumstances involving convective cloud development. As this work is also being presented in this meeting, I will not give any further details.

The global circulation model developed at NCAR has also been applied on the regional scale, and this application was described by D. Williamson and D. Baumhefner. Since the model is identical to the global version except for its grid scale and area covered, it is particularly suitable for evaluation of the problems of grid nesting and variable resolution. Encouraging results were reported in these areas. Other test calculations suggested that vertical resolution is of greater importance than horizontal resolution in improving prediction accuracy.

Also from NCAR, M. Shapiro and R. Bleck described some of their work in analysis and prediction using isentropic coordinates. Again, I will not discuss this work further because Dr. Bleck will be doing so himself.

Das - (somewhat unintelligible; he proposed that the predictions of several models be tested and compared against a single data set.)

Lilly - Sounds good. We haven't got the global modelers to do that yet and they've been at it for 10 years or more. This is something that's often very hard to arrange because first one has to agree on a way to handle the data before putting it into the model. Nevertheless, it's a valid point and there should be more efforts made to do this.

Deardorff - Mel Shapiro has plans for doing this in the next year, I believe.

Lilly - Yes, within the somewhat independent and mildly competing efforts at the same institution it seems almost inexcusable not to do something like this, and at NCAR I think it will be done.

N. Phillips - There is a difference, namely that all of the global models are attempting to predict the same thing, whereas with the mesoscale models one is more likely to find the case that a model is developed to do well with a particular type of phenomenon, and therefore it is more difficult to compare all the models.

PREDICTION MODELS USING ISENTROPIC COORDINATES

R. Bleck

The idea of using isentropic (or "theta") coordinates in numerical modeling is not exactly new.
It was discussed, for example, by Charney and Phillips in 1953. They in turn referred to an internal MIT report by Dr. Fred Schuman, who wrote his Ph.D. thesis on the subject. There were good reasons not to pursue numerical weather prediction in theta coordinates at that time, because it was not immediately clear how much trouble there would be with superadiabatic lapse rates and with coordinate surfaces intersecting the ground. I think the ice was broken in the mid-1960's when Eliassen and Raustein in Norway conducted a channel experiment using isentropic coordinates. They started from baroclinically unstable zonal flow and developed good-looking frontogenesis patterns and surface cyclones.

The advantages and disadvantages of isentropic coordinates have been discussed widely, and I will go through them briefly. I might start with the advantages. The overwhelming advantage in our opinion is that they, as Doug Lilly already stressed, minimize truncation errors. In terms of computational accuracy per grid point, theta coordinates probably beat everything else. Wherever there are fronts or baroclinic zones, we tend to find high static stability, and precisely in those areas we get the highest concentration of grid points. I have one figure to illustrate (Fig. 1). It shows a vertical cross section through the atmosphere along the west coast of the United States from San Diego to somewhere in Canada. Solid lines are lines of constant potential temperature; the dashed lines are lines of constant normal velocities. I marked one area on the left and I marked the same area again in theta space on the right.

Fig. 1 Typical frontal zone depicted by cross sections using pressure (left) and potential temperature (right) as vertical coordinates, for stations SAN, LAX, PGU, OAK, MFR.

You notice that the shallow, tilted area in (x,p) space expands into a cube in (x,θ) space. The point is that if you want to include frontal phenomena in conventional models, you need an awful lot of grid points both vertically and horizontally because the baroclinic zone is tilted. As you cut through the zone on a constant pressure surface, you find a large gradient of temperature. In the vertical you obviously need a similar resolution. In isentropic coordinates we avoid that problem; the baroclinic zones take up more space in the model at the expense of less interesting volumes of air. This not only pays off in the time integration of the dynamic equations but also in the objective analysis of meteorological fields, as has been pointed out numerous times by Danielsen and Shapiro and probably others. The horizontal truncation errors are minimized because you don't find fronts on individual isentropic surfaces. The information about the existence of the front is contained only in the thickness field between theta surfaces, and that thickness field, itself, shows a large-scale organization comparable to the scale of our radiosonde network. Therefore, we can do the analysis of the initial state better in this coordinate system than in pressure or height coordinates. Another advantage is that we don't have to worry about a vertical velocity θ̇ in the forecast model unless we include diabatic processes, and if we do include those, the equation for the velocity θ̇ is the thermodynamic equation. There is no dynamic equation to predict or diagnose a vertical velocity to worry about.
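(For readers unfamiliar with the system Bleck is describing, the hydrostatic primitive equations in isentropic coordinates take the following standard form; this is an editorial addition in common notation, with σ the pseudo-density and M the Montgomery streamfunction.)

$$
\frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla_\theta)\mathbf{v} + \dot\theta\,\frac{\partial\mathbf{v}}{\partial\theta} + f\,\mathbf{k}\times\mathbf{v} = -\nabla_\theta M, \qquad M = c_pT + gz,
$$
$$
\frac{\partial M}{\partial\theta} = c_p\!\left(\frac{p}{p_0}\right)^{R/c_p}, \qquad
\frac{\partial\sigma}{\partial t} + \nabla_\theta\cdot(\sigma\mathbf{v}) + \frac{\partial(\sigma\dot\theta)}{\partial\theta} = 0, \qquad \sigma = -\frac{1}{g}\frac{\partial p}{\partial\theta},
$$

where θ̇ is supplied directly by the thermodynamic equation (zero for adiabatic flow), which is Bleck's point: no separate dynamical equation for the vertical velocity is needed.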
The disadvantages have been known and emphasized a lot. To transform from z space to theta space, we must assume that theta increases with height, so wherever this is not the case, we cannot carry out the transformation. Particularly in the boundary layer we must, if we want to retain superadiabatic gradients close to the ground, switch to a different coordinate system. Dennis Deaven in his Ph.D. thesis at Penn State has worked on this problem, but so far only in two dimensions. In our three-dimensional models we have pretty much ignored the problem and have tried to eliminate regions of superadiabatic lapse rate. We don't have a boundary layer at this time either, so this problem is still coming up in three-dimensional modeling. Another unpleasant characteristic of theta coordinates is that the locus of the ground within the grid box changes in time. When a front is forming at the ground, the effect is that the ground gets steeper and steeper in the x-y-theta box. There's also the disadvantage that, since the ground has an average north-south tilt in x-y-theta space, a wedge-shaped part of the grid box is always underground, and therefore you're wasting storage space in the computer.

Fig. 2 ATS-III satellite photograph for 2100 GMT, 3 April 1974.

The model presently in use is based on the primitive equations. In the past we used the NMC grid points, but we've cut the grid distance in half recently. We're using a fairly large number of vertical levels because, as you've noticed in the cross section, it pays in isentropic coordinates to have high vertical resolution. You don't need that much horizontal resolution, so the emphasis in our so-called fine-mesh modeling has been on vertical accuracy and not so much on horizontal. I want to show now where we stand presently. Figure 2 shows the cloud cover in a cyclone, which was the one associated with the tornado outbreak in April this year. Figure 3 shows a 9-hour surface pressure forecast verifying at the same time as Figure 2. Vertical motion at 700 mb is shown in the window, diagnosed by means of 6-hour isentropic trajectories. I'm not claiming that we scratch even the meso-α domain; this is, I think, plain old-fashioned synoptic-scale forecasting, and so far what I'm saying has not very much relevance to SESAME.

Fig. 3 9-hour surface pressure forecast, verifying at the time shown in Fig. 2. Isolines of vertical velocity at 700 mb, based on 6-hr trajectories, are drawn in the window at intervals of 20 mb/(6 hr) (upward motion dashed; downward motion solid). Stippling shows where the upward velocity exceeds 60 mb/(6 hr).

We have also constructed cross sections from our results. Figure 4 shows again the April 1974 case. We have transformed our data back into p-space so one can see the fronts. The initial conditions at 0000 GMT, 3 April are shown on the left and the 24-hour prediction on the right. You notice how the model tends to generate fronts. The maps in the upper half show the temperature at 500 mb, and we've drawn cross sections along the dashed lines cutting through the front. The cross sections again show the isentropes (solid) and the normal wind velocity (broken), and you notice how the frontogenesis proceeds.

Fig. 4 Upper portion: 24-hr forecast of 500 mb temperature from the isentropic model. Lower portion: vertical cross sections along the lines marked in the upper portion. Courtesy of M. Shapiro, NCAR.

This is actually a very strong case of frontogenesis, and Dr.
Danielsen this morning elaborated on the connection between things going on in the upper troposphere and events that affect people. I would like to point out that the isentropes in the cross sections are drawn every 3 degrees, whereas the coordinate surfaces in the model are every 5 degrees apart. This means that we needed in excess of 20 vertical levels, but we were able to capture the fronts and the shear, and probably the clear air turbulence as well, in a manner which is still not too costly compared to other models. I would like to mention that if we had to rely on satellite data instead of radiosonde data, we probably would not capture the frontal gradient in the initial data; but, as Doug just pointed out, the frontogenesis would probably proceed in the model just as well, though not necessarily in the right place.

Anthes - I think this is a good example of one of the difficulties that can arise from some sort of systematized comparison between different models. If somebody said let's try to compare this model with a different model, Rainer would obviously like to choose a case where there's strong frontogenesis and wintertime thermal stability, whereas somebody in sigma or z coordinates would like the case where the dry adiabatic layer goes up to 200 mb, and so you'll have real problems in selecting cases.

J. M. Wallace - Rainer, could you perhaps eliminate that first disadvantage of theta coordinates in the following way: instead of considering the ground as the bottom boundary, consider the top of the mixed layer as the bottom boundary of the model, and perhaps have something simpler below it.

Bleck - That's what I want to do. Larry Mahrt's proposal of using an integrated boundary layer is attractive because, even in the case of flat terrain, you have the problem of the lower boundary intersecting the coordinate surfaces, so you have to face the problem of a time-dependent lower boundary from the very beginning. Adding mountains or adding the boundary layer doesn't make it any worse. Actually, when we put mountains in, the model took them very well. I am confident that when we have developed an integrated boundary layer with a variable height, this model is set up to take care of that, too.

J. Golden - Just a general point for the modelers who are anticipating SESAME in Oklahoma. Contrary to popular opinion, in fact my own when I first went there, Oklahoma is not as flat as a pancake. In fact it has both a sloping terrain, which slopes upward towards the west, and also mountains, both to the south and southwest of Norman. There are mountains between 1500 and 2000 feet above ground level, and also mountains in the southeast portion of the state, so in that regard some sort of a sigma coordinate system might have advantages. My colleague Joe Schaefer has demonstrated that the terrain itself can make substantial contributions to surface convergence and divergence fields, and you must take the slope of the terrain into account in these models.

N. Phillips - Do the isentropic coordinates have any advantage in the nonhydrostatic equations?

Lilly - I would say no. They're harder.

REGIONAL AND MESOSCALE MODELING AT DREXEL

C. Kreitzberg

We've been working for several years on mesoscale modeling at Drexel and have a joint project with Doug Lilly's group. Don Perkey and Mark Lutz work for Drexel at NCAR, and we are continuously in touch with all of NCAR but mostly with Mel Shapiro and Rainer Bleck.
Our model is on the meso-β scale, only one scale above the GFDL model, so it treats phenomena that have characteristic lengths on the order of 50 kilometers, such as convective rain bands. Our primary goal is quantitative precipitation forecasting in extratropical cyclones. To be able to run from real data, the model is nested within a larger scale model. To be able to predict convectively active disturbances, a one-dimensional cumulus model that calculates the effect of the cumulus clouds on the mesoscale is incorporated as a subroutine within the basic model. As far as severe weather is concerned, we can study the precursors, the early part of the development of the convection, and perhaps gain some idea of the intensity that it will reach, but there is no provision for handling the evolution of the giant thunderstorm. I'd like to tell you a little bit about the equations, about the results we have achieved to date, what our primary needs are over the next couple of years, and our plans to meet some of those needs. The implications that our work has as far as SESAME is concerned will then be summarized.

Most of the work that is presented today deals with a grid size of 20 km in the horizontal. For SESAME we would be prepared, of course, to go down to a 5 km grid size. But it's rather shocking if you look at the computer time required. You realize how long it took the weather service to get up the courage just to go down to half the conventional grid size because of the computer costs. It will be very, very expensive to go down from 20 km to 5 km, so all our development work will be done on the larger scale grids.

All our preliminary work had told us that these rain bands are very strongly controlled by the larger scale, so it was essential to start out with a North American scale forecast. We felt that we could do the objective analysis over North America and start our forecast on that scale, but then so many boundary problems arose on both oceans and in the Gulf of Mexico that it was necessary to start one scale up above that. We then went to the northern hemisphere model. That scale is not our business and we didn't want to get into it. Fortunately, NCAR has the model that Doug talked about earlier, namely, the real-data version of the GCM that Williamson and Baumhefner run. They ran the northern hemisphere forecast for a particular case and we used their data to initialize our equations and our model over the United States (see Fig. 1). The fine-mesh forecast was run over the U.S. using a 1.25 degree latitude-longitude grid. The 1.25° grid forecast was used to initialize our mesoscale forecast. In the particular storm dealt with here, the activity was down in the south central U.S. The idea is to obtain at least our first guess of the initialization on the mesoscale from a larger scale model. As the mesoscale forecast is being made, time-dependent lateral boundary conditions from the larger scale model are provided to the mesoscale model. This is strictly parasitic meshing; nothing feeds back to the larger scale.
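(A minimal editorial caricature of this one-way, "parasitic" coupling: a fine-grid 1-D advection model whose inflow boundary is read from a previously stored coarse forecast, interpolated in time, with nothing fed back. All names and numbers are invented for illustration.)

```python
import numpy as np

# One-way nesting in one dimension: the coarse run is done first and its
# boundary values saved hourly; the fine grid interpolates them in time.
coarse_times = np.arange(0.0, 13.0) * 3600.0              # hourly output
coarse_bdry = np.sin(2 * np.pi * coarse_times / 43200.0)  # stored values

c, dx, dt, nx = 15.0, 20e3, 120.0, 151                    # fine-grid setup
u = np.zeros(nx)

t = 0.0
for _ in range(360):                                      # 12 h of steps
    u[0] = np.interp(t, coarse_times, coarse_bdry)        # coarse -> fine
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])               # upstream advection
    t += dt
print(f"interior mean after 12 h: {u.mean():.3f}")
```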
You have made the LFM forecast and are ready to start the mesoscale forecast. As soon as the 12Z data are received they are used to update the large scale analysis and initial fields used to make the forecast for tomorrow on the meso- scale, using the lateral boundary conditions from the large scale forecast made the evening before. That way you don't have to wait too long to start the mesoscale run. Another point that is kind of interesting; in the particular case we started out at 12Z on the fine-mesh but the convection didn't break out until 18Z. Therefore we didn't start the mesoscale forecast until 18Z. You can start the mesoscale model at any time during the large scale forecast. The equations that we use in the model are the primi- tive equations in height coordinates. Soon we will incor- porate terrain-following height coordinates. The convective 291 effect computed in the cumulus subroutines will go into the primitive equations that run the mesoscale model. Addi- tional terms for convective effects have been added to the tendency equations; not just for temperature and moisture but also for cloud water. The finite differences used are leap-frog in time and fourth order centered-i n-space except for the rain water. We find upstream differences are needed for rain water Every 20 minutes the sounding from each column in the hydrostatic model is run through the one-dimensional quasi- time-dependent cumulus model to find out what changes that particular cumulus cloud will produce on the larger scale. Those changes are fed into the larger scale forecast over the next forty minutes. We call this cumulus routine every 20 minutes so we have an overlap in the convective adjust- ment. The cumulus equations are the conventional equations for the calculation of the updraft following the Lagrangian formulation. However, a substantial number of steps are performed in addition to the Lagrangian calculation of conditions within the updraft of that cumulus cloud. First it is necessary to locate the base of the cloud. With some of the parameterization schemes in the tropics, the cloud base is always close to the surface. In extratropical cyclones, the convective cloud base can be above a warm front so it can be as high as 3 kilometers. A stability analysis is performed at every level in the sounding to find out where the most unstable parcel is and we try to build a cloud from that first. In this sort of model it is necessary to specify the initial updraft radius and this is one of the major weak- nesses in the model. We have not yet figured out a good way to tell ahead of time what radius to use. We can tell afterwards what radius should have been used so hopefully 292 sel ,ct.on criteria can be incorporated into the code. After selecting the cloud base radius, the updraft is calculated and the updraft radius is a function of height depending upon the acceleration of the updraft. After the cloud is built, the next problem is to com- pute the subsidence that is going to be produced in the environment. In order to do so, the vertical distribution of the cloud mass must be specified. To date we don't know any better way than to specify the final mass distribution as that which existed during the updraft. At the present time we do not consider any quasi-steady outflow or building of a large anvil. Again, it's wery simple to put that into the code if someway were known to specify what that anvil shape should be. 
Once the vertical mass distribution is specified the next problem is to determine the percent cloud cover or the area that's going to feed each cloud. A fairly satisfactory way of specifying percent cloud area is based on the follow- ing sort of reasoning. The larger the area that feeds each cloud, the less will be the subsidence in the environment. The less subsidence in the environment, the less will be the warming around the cloud. When the cloud first builds, it is warmer than the environment as the parcels accelerate upward; it will not stop building until that environment subsides enough to warm up to the temperature in the updraft What we do is calculate the radius that's going to feed each cloud such that the subsidence will be just enough to warm the environment until it's just as warm as the cloud on the average over the depth of the cloud. This is like the classical slice method but it is vertically integrated. It means the same pressure change will exist between cloud top and cloud bottom in the environment as within the cloud. Having calculate the subsidence, microphysics equations are used to calculate the precipitation out of the cloud. 293 Of course, that precipitation collects much of the cloud water and then evaporation below cloud base is calculated. Another factor that is quite important, particularly for SESAME clouds, is the evaporation in a downdraft if a tilted cloud system develops. This above cloud base evaporation of precipitation has been put into the model in a crude fashion that works fine. The problem is to specify how much precip- itation is going to fall outside the cloud; it is rather arbitrary at the present time. After the precipitation falls out collecting the cloud water and evaporating and the environment has subsided, then the cloud is dissipated through lateral mixing. The complete cycle of the cumulus cloud and the re- sponse of it's environment in that box has now been simulated The model then tries to build additional clouds at that same time step. After all the clouds that can be built are built, a final hydrostatic adjustment is made and the final results are put back into the hydrostatic model via the extra terms in the primitive equations. The boundary layer formulation for this model is being worked on at the present time. Several different versions have been run in two-dimensions and, we'll soon have a reasonable boundary layer in the three-dimensional model. The sort of boundary layer formulation that is being devel- oped would have about 6 to 10 extra levels below 2 kilom- eters in order to resolve the boundary layer structure. We'll probably at least try out a time-dependent height of the boundary layer like Pielke talked about getting from Deardorff. While that will be tried, we do want to have for a standard, a constant resolution boundary layer so as to handle a variety of boundary layers that are likely to be encountered using real data. The results obtained to date will be summarized briefly. The cumulus parameterization scheme is working yery smoothly. Very consistent, continu- 294 ous answers are produced and the cost of the cumulus calcu- lation is about 15 percent of the total. All our work has shown extreme sensitivity of the convection and of the development of the mesoscale to the larger scale forcing. Particularly the low level moisture convergence is the key mechanism driving the convection. Simulations in two dimen- sions show how the precipitation band forms for a given width of the humidity band. 
(The humidity decreases later- ally as a sinusoidal function.) We have found how the width of the precipitation band depends upon the width of the humidity band and upon the vertical motion of the larger scale forcing. Furthermore, tests show that with the narrow cumulus cloud base radii more intense mesoscale circulations develop than with broad clouds, but these \/ery intense circulations are at low levels because the heating is confined to low 1 evel s . A substantial change is produced by varying the domain of the integration. Calculations within our domain are affected by an influence speed that travels at about 65 knots inward from the boundaries. These results indicate that it will be necessary to use quite a large domain to make a mesoscale forecast and they also indicate, as Jim Purdom showed in the satellite movies, that relatively small factors coming in from the lateral boundary can have a large impact on what goes on interior to the grid. Both an extratropical case and a BOMEX case have been run using the three dimensional model. The results are very reasonable. This work, up until a few months ago, has been written up in detail (Kreitzberg, ejt aj_. , 1974)* and I think most of you are on the distribution list. * Mesoscal e Model ing and Forecasting Research , Carl W. Kreitzberg, Donald J. Perkey and John E. Pinkerton, Final Report, Contract No. F19628-69-C-0092 , AFCRL-TR-74-0253 , March, 1974, 318 pp. 295 In the future we need to put in a surface energy bal- ance calculation and a successful nocturnal boundary layer. I think our main trouble with the boundary layer will arise at night. The initialization scheme, as Doug pointed out, is a major problem. The initialization has to be part and parcel of the forecast model . You have to not just use objective analysis to get your initialization, you have to get it in dynamic balance with the particular forecast model that is being used. The real need is to run these models on a lot of cases. We expect in the next year to run it on the AMTEX data, on Fankhauser's NSSL case in 1969 and on the April 3, 1974 tornado storm to compare with Shapiro's model. We may run it on Hurricane Agnes and we're particularly interested in running it on the data from NASA that was discussed briefly yesterday; namely, the AVE data. Having three hourly rawin- sonde observations will permit us to develop a dynamic initialization scheme much better than we could do with just the 12 hourly observations. As far as the implications to SESAME are concerned, I think that the primary implication is the very strong re- quirement to tie in directly to the synoptic scale without any gaps; I think that is the main contribution that SESAME can make that no smaller project can get a accomplish. It will take some experimental design studies to tell how to space out the rawinsonde observations to avoid getting a big gap between the synoptic scale and the mesoscale. Kreitzberg - On the question of verification, one of the most important points, of course, is that you always try to verify what it is you're trying to forecast. We're trying to forecast quantitative precipitation and there's no reason why we can't verify that. We've looked at the verifi- cation data and we're within a factor of two. But when 296 you're developing a model like this you concentrate on one thing, at a time. At the present we need to achieve better agreement on the yery largest scales. We have to get a good boundary layer into the model to slow down the flow at low levels. 
So the verification looks promising at the moment and we see no reason why we'll have undo difficulty. As far as the question of the predictability of the mesoscale, one of the tasks in SESAME, we'll be able to predict quite well some cases and we'll get wiped out on other cases. The problem is to figure out that percentage, what the win-loss ratio is. It is going to take a lot of computer time to run through a big enough sample. But within a couple of years we ought to have that sample. Unidentified questioner - Carl, I was going to ask you how confident are you that a one-dimensional cloud model can represent squall line convection? Kreitzberg - I tried to point that out. I am quite confident that we'll be okay until we get into the tilted updraft. We can still treat that case as far as evaporation is concerned above cloud base, but we can not handle the kinetic energy generation that is a result of the cumulus cloud, the sort of problem that Moncrieff and Green talk about (QJRMS, 1972, pp 336-352). We do have to verify to make sure that the moisture and heat budgets in the param- eterization model do compare with what you get in nature and we hope to do a lot with the GATE data. Alaka - Several speakers have mentioned the problem of initializing boundary layer models and I would like to mention that in TDL we are trying to initialize our boundary layer model by using a variational approach. We feel that this approach uses the advantage of being able to incorporate the dynamics of the models into the process of the initializa 297 tion. Of course, this would make the model very happy and hopefully suppress some of the convulsions. Our experiments so far, with the one-dimensional version of the model, have been very promising but whether this promise continues into the three-dimensional model remains to be seen. Kreitzberg - As I see it the problem of initialization is going to be tough over irregular terrain. That's what bothers me; I'm hoping that Rick will solve that. Das - I'm yery uncomfortable about the one dimensional cloud parameterization. What I'm worried about is that this is so much work, while it can probably be done in a much simpler way. The complex model has not been compared with actual data. One must start from much simpler kinds of parameterization and see how they behave before going to the complex; at least to find out what a \/ery complicated sort of parameterization must achieve. Lilly - Only convective adjustment or one of the CISK- type methods are much simpler, isn't that right? Kreitzberg - There are a couple of varieties of the Kuo-Kri shnamurti scheme and various forms of convective adjustment. I think that in order to get the flexibility needed in our convective adjustment scheme, for it to be generally applicable over a reasonable range of conditions, you need a fair amount of complexity. I certainly expect that when everything is working perfectly we'll be able to throw away maybe half or two-thirds of the complexity. Until we get a good benchmark, I hate to start throwing things out. Bill Cotton - I guess I should express my concern about the use of the entrainment models for parameterizing convec- 298 tion. Maybe the reason is because I find it being used quite extensively not only by Carl. Kreitzberg - We are prepared to make such tests and many internal tests have been made to determine model sensi- tivity. For example, Figure 2 shows some results from a simply two dimensional experimental symmetric in x about the central column. 
The prediction starts from rest except for a superimposed large scale ascent that will release poten- tial instability in the more moist central region. Figure 2 shows the x-variation of surface precipitation and 5 km level vertical motion at 360 min and 720 min for two hori- zontal mesh sizes, 10 km and 20 km. Cotton - The Arakawa -Schubert model has entrainment as a foundation. In my own external testing with the entrain- ment model, as well as Jack Warner's, one of the disturbing features is that we found that they're yery poor predictors of the average properties of the cloud as observed or the averaged properties of the field of clouds. What you want is a predictor of the average properties of a field of clouds. The best prediction seems to be for locally instan- taneous samplings in a cloud near it's top, while still in it's actively rising stage. Still there's a discrepancy since the models tend to overpredict liquid water contents. This is a poor vehicle that you're using for parameteriza- tion, and as I said it is disturbing that this is where we are in terms of the verification of the base model. Kreitzberg - I think that that is the opposite opinion to that expressed a moment ago by Dr. Das. I'd like to say that I'm in the middle ground between the simple and the super sophisticated. I think that the model has enough flexibility to give good results if we knew what the average distribution is. If you can get it from a very fancy model, I think this general framework is adequate to parameterize the more important processes. It will take a fair amount of time to do it, but if you can tell me any profile of mass or any profile of liquid water or any profile of evaporation, I can twist the dials and get that out. 299 DX:20 KM CASE MESaSCflLE VERT VEL RNO SFC PRCP SaLID=W!I.' x> ^' ;£ ■K h r--'-r^ i'-'-w"' •IVV'- 14. The 30 x 30 meshed grid for the i 15. The 25-hour fo by the explicit model. a 3 to 1 meshed system. At present we have obtained a 6-hour stable forecast with a strong wind conditions blowing through the domain. The last area of research that I want to say something about today is the use of the semi-implicit modeling techniques. Here briefly is a comparison of an explicit forecast with the semi- implicit technique, which runs four times faster. This experiment consists of the propagation of relatively long Haurwitz wave; the forecast geopotential field at 25 hours is shown in Figure 15. The error fields after 24 hours from the two models are shown in Figure 16 and 17. The maximum perturbation error in the semi-implicit model is about 8 meters, which is quite a skillful forecast considering the perturbation amplitude. The maximum error for the explicit model is not quite as high, and the error pattern does not have exactly the same distribution; however, the models are comparable in accuracy. Another more quantitative 5m 16. The 25-hour Eo In height (m) from Che semi-iTnpllcit Eo 17. The 25-hour fo height (ml from comparison of errors (shown in my last slide, Figure 18) compares the mean square errors as a function of time for the two models. The solid curve is the semi- implicit model with a time step of 1800 seconds, the dashed line is the explicit model which requires a time step of 300 seconds. Thus there is a six to one shortening of time step. However, the semi-implicit method requires extra steps, so the net increase in efficiency is not quite that high. Both RMS errors are oscillating with time and both of the envelopes of the errors are quite comparable. 
Thus we are going ahead to investigate the use of this technique in the mesoscale model.

Fig. 18 Variation with time of RMS errors from semi-implicit (solid line) and explicit (dashed line) forecasts.

DISCUSSION

Phillips - I suggest you approach the semi-implicit method on the mesoscale with considerable caution, because its stability when Δt is greater than the CFL criterion is achieved at the price of considerable corruption of the gravity wave phase speed. In principle, at least, you have to expect a deterioration of the forecast accuracy.

R. Anthes - That's right; the adjustment during the first few hours is due to gravity modes.

R. Rogers - After Dr. Kreitzberg's paper I was wondering how his results compared with the available observations, and then Dr. Anthes told us, if I understood him correctly, that verification is impossible anyway. Well, now I just hear that your actual resolvable scales were 100 kilometers or greater; can't you do something toward comparing the results with the available data?

Anthes - For the perturbation in that one experiment, we find that there were no radiosonde stations within the wavelength of that perturbation, and so it would have to be verified by aircraft, which is, of course, feasible. But those upper air perturbations cannot be resolved with the standard radiosonde network. We have been trying to verify the mesoscale surface perturbations. I didn't show any of that, but the surface observations are close enough to resolve that scale. To be honest, the traditional verification measures look very bad when you compare model grid point values with observed wind speeds. The errors are in the 5 m/sec range when verified against surface wind conditions. Maybe during the evening hours, as the stability of the boundary layer increases, the errors will be 20 m/sec. We do not include the boundary layer in the model very effectively yet. The modeling of the PBL is of high priority, and a student is working on incorporating a Deardorff type of scheme into the model at the present time.
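(Phillips' caution can be quantified on the oscillation equation: the implicit, trapezoidal treatment that stabilizes the gravity waves is neutral in amplitude but retards their phase. The lines below are an editorial illustration.)

```python
import numpy as np

# Trapezoidal (Crank-Nicolson) treatment of a gravity-wave oscillation
# d(phi)/dt = i*omega*phi gives A = (1 + i*w/2) / (1 - i*w/2) per step,
# w = omega*dt. |A| = 1 for any step, but the numerical phase advance is
# 2*atan(w/2) instead of w, so fast waves are slowed.

for w in (0.5, 1.0, 3.0, 6.0):
    numerical = 2 * np.arctan(w / 2)
    print(f"omega*dt = {w:4.1f}: phase speed ratio "
          f"{numerical / w:.2f} of the true value")
# 0.5 -> 0.98, 1.0 -> 0.93, 3.0 -> 0.66, 6.0 -> 0.42: stability beyond
# the CFL limit is bought by retarding exactly the waves the long time
# step fails to resolve.
```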
MODELING OF CONVECTIVE STORMS

W. R. Cotton

I'm starting out this session with a brief, semi-review, I guess you might say, discussion of problems in and approaches to modeling of severe convective storms, at least as I see it. The major problem in modeling severe convective storms is that they cannot be distinctly defined as either convective scale or mesoscale or synoptic scale. Instead, through the momentum and thermodynamic fields, the convective scale is often in direct communication with the meso- and synoptic-scale motion fields. It is this lack of scale separation that makes severe storm convective modeling what I consider the greatest challenge in meteorology. When constructing severe storm convective models we must recognize that, as with nearly all atmospheric convective processes, such storms are fully three dimensional. In addition, because such convective systems are not scale-separated from larger scale meteorological systems, the problem of defining proper initial conditions and boundary conditions for the convective scale systems is by no means trivial. Certainly, a major although probably not very satisfying part of severe storm modeling will be the development of internally consistent, though not necessarily balanced, initial wind and thermal fields.

If we turn to regional or meso-α or meso-β scale models for initialization, such models must contain a proper statistical depiction or parameterization of the effects of convection preceding the time of convective-scale model initialization. In addition, such models must be extremely skillful in defining and maintaining the fine vertical structure of the atmosphere, such as the existence of shallow moist or dry tongues of air, shallow stable layers, and vertical shear in the horizontal wind through small depths. Such features can have a considerable influence on the depth and intensity of convection of a given horizontal scale. Unfortunately, fixed-vertical-level large scale models with a limited number of information levels, say perhaps between 5 and 15, are notorious for destroying or smearing such fine vertical structure of the atmosphere.

The problem of optimum resolution is by no means restricted to mesoscale or synoptic scale analyses and/or models. In fact, a fundamental question in severe convective storm modeling is: what are the scales of the energy-containing eddies in such storms, and as a result, what is the optimum spatial resolution for models of such storms? If one wishes to model the dynamics of a tornado cyclone, for example, one is likely to choose a horizontal averaging scale on the order of, say, 1 to 5 km, in order to explicitly simulate the storm-scale dynamics and the environment of the convective system. Even with such a coarse explicit resolution, the data requirements will exceed several million words of data. Then, when choosing such coarse resolution, an implicit assumption is that the scale of the energy-containing eddies increases with the scale of the convective storm. Is this a realistic assumption? There is evidence that it is not. We are, for example, assuming that the cumulus and cumulus congestus scales are not major contributors to the storm dynamics, i.e., the dynamics of the tornado cyclone or a major squall line system. Yet there are those who believe that the cumulus congestus towers in the flank of the parent tornado cyclone are the main producers of severe weather activity. Thus, regardless of whether we use conventional numerical prediction techniques or spectral techniques, if we choose to model the storm dynamics using coarse spatial resolution, say from about 0.5 km to 5 km, and employ simple first-order closure schemes, we are ignoring convective scales which may be major contributors to the severe storm dynamics.
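(The storage claim is easy to reproduce with round numbers. The grid below is an editorial invention, not Cotton's configuration.)

```python
# Rough count behind the "several million words" figure: an explicit
# squall-line-scale domain at tornado-cyclone resolution.

nx = ny = 200        # 400 km x 400 km at 2 km spacing
nz = 40              # 40 levels through the troposphere
fields = 8           # u, v, w, theta, p, q_v, q_c, q_r

words = nx * ny * nz * fields
print(f"{words/1e6:.0f} million words per time level")   # ~13 million
# Two or three time levels (leapfrog) plus scratch arrays push this into
# the tens of millions of words cited later in the talk, far beyond a
# mid-1970s central memory, hence the emphasis on data transfer rates.
```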
A distinguishing feature of moist convection is that kinetic energy is gener- ated on the small to medium scale eddies thereby often transferring energy to the larger, explicitly resolved scales of motion rather than simply acting as a medium in the energy cascade process. Even the more sophis- ticated stochastic transport model developed recently at NCAR by Sommeria (197^) ignores the role of condensation, freezing and precipitation pro- cesses as a generation mechanism in cloud turbulent energy and presumes that the unresolved moist convection scale eddies are primarily a transfer vehicle in an energy cascade process. Even the most sophisticated second order turbulent transport models, such as those recently summarized by Deardorff this morning, have been aimed specifically at modeling the dry atmosphere boundary layer. What I would like to do is go over what I believe are specific and unique requirements for turbulence models of moist convection. Certainly, the most unique feature is that the thermal, moisture, and the momentum fluctuation fields are highly coupled. If we look at the Reynold's stress equations for a moist convective system. 321 we find, except for a few terms, they differ little from the similar form of the Reynold's stress equations presented by Deardorff this morning. Some distinguishing features which are probably not too significant are in the triple correlation terms where we include density variations in order to simulate the turbulence in a deep convective layer, allowing for basic state variations in density. What I want to concentrate on here are the distinguishing features as far as the generation of Reynold's stress in a moist convective atmosphere and specifically look at these two terms illus- trated in Figure 1. The buoyancy terms for example, involve correlations between velocity fluctuations and in this case virtual temperature fluc- tuations and between velocity fluctuations and total water content fluctua- tions. I might add that I neglected to introduce a term for the correlation between velocity fluctuations and the mixing ratio of total dust in order to simulate Ed Danielsen's muddy clouds! As far as the buoyancy term is concerned, what is unique is that there exists in a moist convective field, sources and sinks of temperature variance. The sources and sinks of tempera- REYNOLD'S STRESS EQUATIONS at = - u 3(U, "U'') U, "U.' — u 4 k j k "i i 3x, 3x, U "U " 3U i j ^ 3x, 3 3x. (U, "U."U ") U."U, "U •' , 3p k J i _ 1 k j _1 o P 3x. 13 (1) 3 ^V'^"^ 3 (U^"P")-^ 1 . / 3U, " 3U." + l_p" _JL_ + _^ ^o V ^^ ^\ + e vo 6,, u,"e " ^ 0, ^ U/'9 " 13 k V + k3 i V - g ^13 V^t" . \3 \'\" -2 (U, "U.") 3U." 3U, " 'o 3xj^ 'o 3x, 3x. . j J 3?2 ture variance are related to condensation, evaporation processes and phase changes such as the latent heat of fusion within a cloud and at the cloud boundaries. A unique feature here is that we include a term for the genera- tion of Reynold's stress by the effects of water loading. When one develops an explicit numerical cloud model one typically includes a water loading term. Furthermore, when we look at the generation of turbulent energy or Reynold's stress, we find a correlation between velocity fluctuations and the fluctuation in total water content. In fact, in many convective systems, the only place one experiences turbulence is right in the neigh- borhood of the precipitation field itself. Of course, the action of pre- cipitation is two-fold. One way is through the direct drag force of the water as explained earlier. 
In addition, below the cloud or in the cloud environment, or at the edge of the cloud the precipitation water evaporates resulting in a coupling of these two terms. I bel ive this is a unique fea- ture of cumulus convection and something we haven't even proposed as a serious problem. How do we handle such interactions? Such sources of tur- bulent energy become most acute when we begin to relax our resolution con- straints. Instead of explicitly resolving very fine scale features in the cumulus clouds such as 10' s of meters, maybe 100 meters, or 200 meters or so, but relax our constraints to several 100 meters, to the kilometer scale or greater, these sources of Reynold's stress or turbulent energy become , I'm sure, very significant. Now the question is how do we solve then for temperature variance? Let's just go on and look at the generation of some of the variance equations for a moist convection system and the transport equations just to illustrate what one is getting into if one wishes to formulate a turbulence model specifically designed for cumulus convection. For example, we can generate a temperature variance equation shown in Figure 2. Again, it has very little unique features compared to the temperature variance equation that Deardorff showed earlier this morn- ing. There is, however, a basic state variation of density which I believe is insignificant. We do end up with a diabatic source term. A term then 523 POTENTIAL TEMPERATURE VARIANCE EQUATION 1_ 9^ - 36"^ ^ U "9" 36 at " ' j 3x " ^ J 3x (U-Mg777) e"^U." 3p 29 9"ds" 3 i J o . . O -r- (2) 3^9^ , /39" Y which is proportional to the correlation between temperature fluctuat ions and rate of entropy change. This is the role of condensat i<^n , evaporation processes, or freezing and melting of water, for example, in generating temperature variance. I believe that this is a very important source of temperature variance for convective systems. My own personal experience in analyzing data and just flying through cumulus clouds suggests, in fact, that it is. The boundaries of the cloud and internally wherever there are dry air intrusions we have very strong fluctuations in our temperature fields and likewise feedback into the momentum field. Well, to go on into the system we have to develop a transport equation for temperature, shown in Figure 3. The transport equations for temperature differs not too significantly from that illustrated by Deardorff this morning. Again, I just want to point out a distinguishing feature, that is a major one, namely, the correla- tion between velocity fluctuations and the rate of entropy change fluctua- tions. This represents the role of diabatic processes on the eddy trans- port of heat in a convective system. The buoyancy term shown in Figure 3 involves correlations between potential temperature fluctuations and vir- tual potential temperature fluctuations which we can expand as the variance of potential temperature and a correlation between potential temperature and water vapor fluctuations. The coupling becomes worse as we go further 32^ into the system. Also shown in Figure 3 are correlations in the buoyancy term between temperature fluctuations and total water content fluctuations. Well, to close this system, that is to make reasonable models, say of the diabatic processes, one would suspect that one would have to generate equations for the variances and transport rates of water vapor. 
Well, to go on into the system, we have to develop a transport equation for temperature, shown in Figure 3. The transport equation for temperature differs not too significantly from that illustrated by Deardorff this morning. Again, I just want to point out a distinguishing feature, and a major one, namely the correlation between velocity fluctuations and the rate-of-entropy-change fluctuations. This represents the role of diabatic processes in the eddy transport of heat in a convective system. The buoyancy term shown in Figure 3 involves correlations between potential temperature fluctuations and virtual potential temperature fluctuations, which we can expand as the variance of potential temperature and a correlation between potential temperature and water vapor fluctuations. The coupling becomes worse as we go further into the system. Also shown in Figure 3 are correlations in the buoyancy term between temperature fluctuations and total water content fluctuations.

POTENTIAL TEMPERATURE TRANSPORT EQUATION

$$
\frac{\partial\,\overline{\theta''U_k''}}{\partial t} =
-\bar{U}_j\frac{\partial\,\overline{\theta''U_k''}}{\partial x_j}
-\overline{U_k''U_j''}\,\frac{\partial \bar{\theta}}{\partial x_j}
-\overline{\theta''U_j''}\,\frac{\partial \bar{U}_k}{\partial x_j}
-\frac{1}{\rho_o}\frac{\partial}{\partial x_j}\!\left(\rho_o\,\overline{U_j''U_k''\theta''}\right)
$$
$$
+\,\delta_{k3}\,g\!\left(\frac{\overline{\theta''\theta_v''}}{\theta_{vo}}
-\overline{\theta''q_T''}\right)
+\frac{\theta_o}{c_p}\,\overline{U_k''\frac{ds''}{dt}}
+\gamma_o\frac{\partial^2\,\overline{U_k''\theta''}}{\partial x_j^2}
-2\gamma_o\,\overline{\frac{\partial U_k''}{\partial x_j}\frac{\partial\theta''}{\partial x_j}}
\qquad (3)
$$

Well, to close this system, that is, to make reasonable models, say, of the diabatic processes, one would suspect that one would have to generate equations for the variances and transport rates of water vapor. Likewise, we find that in order to close this term we need an equation for the correlation between potential temperature fluctuations and water content fluctuations. As shown in Figure 4, we can generate an equation for the water vapor variance, which is not terribly unique as far as convection is concerned, except that we have correlations between water vapor mixing ratio fluctuations and the source and sink terms in the water vapor fluctuations. These are the source and sink terms due to condensation and evaporation, for example. We thus have to model these terms in order to close our system.

WATER VAPOR VARIANCE EQUATION

$$
\frac{\partial\,\overline{q_v''^2}}{\partial t} =
-\bar{U}_j\frac{\partial\,\overline{q_v''^2}}{\partial x_j}
-2\,\overline{U_j''q_v''}\,\frac{\partial \bar{q}_v}{\partial x_j}
-\frac{1}{\rho_o}\frac{\partial}{\partial x_j}\!\left(\rho_o\,\overline{U_j''q_v''^2}\right)
+2\,\overline{q_v''\,S_n(q_v'')}
+\gamma_o\frac{\partial^2\,\overline{q_v''^2}}{\partial x_j^2}
-2\gamma_o\,\overline{\left(\frac{\partial q_v''}{\partial x_j}\right)^{\!2}}
\qquad (4)
$$

As shown in Figure 6, we can generate equations for the eddy transport of water vapor, which again are not distinctly different from those of the boundary layer or dry convection problems, except that, again, there are diabatic source terms! Here again, correlations between velocity fluctuations and source and sink terms illustrate the coupling in a moist convective system between the momentum, thermal and moisture fields. Buoyancy also affects the transport of water vapor, through correlations between water vapor and potential temperature fluctuations, variances of water vapor, and correlations between water vapor and total water content. In order to close the system we have to generate an equation representing the correlations between water vapor and potential temperature fluctuations, shown in Figure 5.

[Figures 5 and 6: equation (5), the correlation between potential temperature and water vapor fluctuations, and equation (6), the water vapor turbulent transport equation.]

If we now expand our system into a cloud containing cloud liquid water Q_c and rainwater Q_H, a set of equations for the cloud water variance, the eddy transport of cloud water, the rainwater variance and the transport of rainwater, as well as the cross-correlations among these variables, can be formed, as illustrated in Figures 7-15. In order to complete the precipitation transport equation, a model for the correlation between terminal velocity fluctuations and rainwater content fluctuations must also be formulated.

[Figures 7 and 8: equation (7), the cloud water variance equation, and equation (8), the cloud liquid water content (LWC) transport equation.]
[Figures 9-15: equation (9), the correlation between potential temperature and cloud LWC fluctuations; (10), the correlation between water vapor and cloud LWC fluctuations; (11), the precipitation (rainwater) variance; (12), the correlation between U_k'' and Q_H''; (13), the correlation between θ'' and Q_H''; (14), the correlation between q_v'' and Q_H''; and (15), the correlation between Q_c'' and Q_H''.]

Perhaps it's obvious at this point why people haven't developed turbulence models specifically for modeling cumulus convection. It is an overwhelming problem. What I have generated here is a system of 31 scalar equations to handle a system containing only cloud water and rain water, with no decomposition into a spectrum of cloud droplets. We have a trade-off here, a choice really, as to how we handle the problem of modeling severe storm convection. We can grid up the system, or put it in spectral space, and resolve features down to, say, tens of meters, or maybe 100 meters or so, in order to explicitly resolve the cumulus and cumulus congestus scales, but we can do so only at the expense of resolving the entire thunderstorm system if it is of the scale of a squall line. We're talking about millions of words of data, actually on the order of tens of millions of words of data, that we have to transfer and move in and out of central memory at very high rates. The developmental time scales involved in following this approach are, I believe, on the order of five to ten years. I don't think we're close to a computer that can handle high-resolution three-dimensional simulations of the convective scale and the storm scale of a severe storm simultaneously. The alternative, I claim, is perhaps following an approach such as outlined above, or maybe a compromise approach such as Deardorff illustrated. The developmental time for the theory is probably about the same, say five to ten years. There's a lot of work involved. Every one of those 31 equations has a number of closure assumptions that must be formulated.
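To put rough numbers on the storage problem just described, here is a minimal back-of-envelope sketch; the domain size, depth and variable counts are illustrative assumptions of mine, not figures from the talk:

    # Storage needed to grid a squall-line-scale domain at various
    # resolutions; one machine word per field value per grid point.
    def words_required(lx_km, ly_km, lz_km, dx_m, nvars):
        nx = int(lx_km * 1000 / dx_m)
        ny = int(ly_km * 1000 / dx_m)
        nz = int(lz_km * 1000 / dx_m)
        return nx * ny * nz * nvars

    # Assumed 100 km x 100 km x 15 km domain; 8 prognostic fields for a
    # simple bulk-water model, ~40 if second moments are also carried.
    for dx_m, nvars in [(1000, 8), (500, 8), (200, 8), (200, 40)]:
        n = words_required(100, 100, 15, dx_m, nvars)
        print(f"dx = {dx_m:4d} m, {nvars:2d} fields: {n / 1e6:7.1f} million words")

Even the coarsest cloud-resolving choice lands in the millions of words, and anything near 200-m resolution reaches tens to hundreds of millions, which is the transfer burden Cotton cites.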
Some are very nasty problems that have been giving boundary-layer people trouble for years. Some of these terms are so important that we can't just throw them out, or at least we don't feel confident in just throwing them out. That's how I view severe storm modeling.

In regard to modeling the microphysical processes, I think there is very little hope in the near future of performing detailed simulations of the storm dynamics and simultaneously simulating the detailed structure of the cloud microphysics; that is, of being able to explicitly model the microphysics, say with 71 categories of cloud droplets and ice particles and so forth, on the dynamic structure simultaneously. This is particularly true if we have to generate moment equations for every one of those 71 categories. But I think that one of the major avenues of research in microphysics in the context of severe storms modeling should be to develop very simplified parameterizations. That doesn't mean at the cost of the physics (we shouldn't just toss the physics aside), but it should be in the sense of very simplified parameterizations of the microphysical processes. As far as fundamental explorations in microphysics in the severe storms context are concerned, I think they should be limited to those processes which are unique to a severe storm environment. We can direct the broader questions about the microphysics to less severe environments; it's difficult enough there. I think the main avenue of research that would really have an impact on a program like SESAME in microphysical modeling will be developing rather difficult but elegant parameterizations of microphysical processes. Well, I'll open the floor to questions before we go on to our next speakers.

GOLDEN: Well, you've had some rather startling equations up there with a large number of variables, and I think that the time is right for some additional discussion on what role aircraft measurements and other types of indirect probing measurements should play, and specifically what parameters you think need to be improved in terms of measurement capability to supply the necessary inputs for your model.

COTTON: Well, if you're thinking in terms of developing, or externally testing, say, higher-moment equation approaches to modeling severe storm convection, I don't think that that is the place for it. I think that we should develop and test these models on much simpler meteorological systems. First of all, one should start with non-precipitating convective systems and then go on to precipitating systems that are at least of such an intensity that we can make in-situ measurements, or that we can penetrate rather comfortably. I hope we can make an evolution of this sort with much more refined modeling of the turbulence for moist convective systems, and evaluate just the general properties predicted by a model on the meso-gamma scale, rather than having to go into evaluating triple correlation products and things such as that in a severe storm environment.
OOYAMA: I have no comment on the horrendous equations. What we have done, as a more heuristic approach to the parameterization, is to make a model: instead of taking the former approach of truncating terms, we make a binary model, with the deep-cloud properties treated as distinct from the environmental properties outside the cloud. If we take that approach, the two sets of properties are so different that simple treatments of the processes are not obvious, and so we need a more physical model; but as Carl Kreitzberg told us, the model can be made more complex if we need it. And here I think we face one thing in the cumulus parameterization, the so-called second-generation approach: the one-dimensional model so far does not have vertical velocity as an explicitly calculated variable. Usually mass flux is calculated, but not vertical velocity. Yet when we come to the cloud physics, most cloud physics calculations need the vertical velocity inside the cloud. Now, when you say simplify the computation of cloud physics, do you think you can devise computations without the vertical motion expressed explicitly, or should the cloud model include calculation of vertical motions? The difference is just a large amount of computer time.

COTTON: Right. Well, one of the first things that I do when I teach a cloud physics course, when I speak of microphysical processes and precipitation processes, is write in big letters across the board: T I M E. I try to ingrain in students that microphysical processes are highly time dependent, and unless some diagnostic approach could be developed in which you can deduce the time dependence of the motion field, and hence the time dependence of the system, I would ask how one would parameterize the processes. Convective systems are time dependent! We do have systems which are quasi-steady state, admittedly, in the case of severe supercell thunderstorms. But a fundamental feature of convective systems and microphysical processes is that they are in fact time dependent, which means you need, most of all, the time variability of the vertical motion field.

LILLY: I wouldn't want to say that you're putting us on, but I think you're giving rather extreme examples there. You can make a few simplifications; for example, if the air is saturated, q' and T' have to be proportional to each other, from the Clausius-Clapeyron equation, and if you use conservative variables like total water or equivalent potential temperature, I think the equations would come out somewhat simpler, wouldn't they?

COTTON: I would hope that one could make a simpler set of equations that would have certain specific applications. I am overstating the point here that the convective system is a highly coupled system, coupled among the momentum, thermal and moisture fields; and whether we're parameterizing sub-convective-scale eddies or the convective scale itself, that coupling, I believe, is extremely important, and we've got to admit that and fight our way through it somehow. Perhaps simplifications can be made, but I think we have to somehow drive our way through. We must admit that moist convection has very strong internal sources of temperature variance, coupled with the momentum field, and that moist convective systems are not similar to a simple dry thermal in many respects. I am, however, over-selling this point.
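Lilly's saturation point can be made explicit. Along the saturation line, the Clausius-Clapeyron relation slaves the saturation mixing ratio fluctuation to the temperature fluctuation; this is the standard linearization, written in my notation rather than anything shown at the meeting:

$$
q_s' \;\approx\; \frac{\partial q_s}{\partial T}\,T' \;=\; \frac{L\,q_s}{R_v T^2}\,T' ,
$$

so in saturated air one of the two scalar fluctuations carries no independent information, and conserved variables such as total water or equivalent potential temperature remove the condensation source from their own variance budgets.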
KREITZBERG: I think you have demonstrated convincingly enough for me that we're not ready for modeling severe storm evolution. I think that's a good reason why SESAME should not be directed toward severe storm evolution; as Doug and the proposal point out several times, that's not what SESAME is after. What it's after is the precursor environment, and I think that's why SESAME will be successful and why I think we are ready for SESAME now: because it's not that difficult a problem.

COTTON: Perhaps, but I would argue that even if you're modeling scales explicitly on the order of 50 km, you still have to somehow incorporate the coupling that I'm talking about among the thermal, momentum and moisture fields. You know that in a severe storm environment the role of convection in transporting momentum is important. There are so many illustrations of that, you could spend an afternoon just documenting it. I'm claiming, first of all, that there is not an equilibrium coupling between the convective scale and the larger scale. It's a very stochastic process that we're dealing with in moist convection. Here I mean stochastic with respect to a larger-scale model having 50 km, 100 km, or even 1 km resolution.

RAYMOND: When you write down these higher-order equations, you have this little bar over them which indicates an average, which implies some scale. One of the problems in severe storms is that we don't know what the scale of the transports is, and presumably if you solved those equations, you might be able to find out.

COTTON: No, you have to put that into them, I'm afraid. No, they don't define the scales; I wish they did. You have to go to either higher-moment equations or develop some other approach to closing the system with the scaling. So the point is, if we're going to understand severe storms, we have to find out what the dominant scales are. I can see two approaches to this. First of all, I don't know if we're going to convince anyone to fly airplanes through these Oklahoma thunderstorms; I certainly wouldn't volunteer myself. But the South Dakota program of flying a T-28 through the NHRE storms has yielded some information in which we see updrafts and downdrafts on a scale of 1 to 5 km, and from the correlations that appear with the liquid water contents, at least, it looks like maybe the important transports are on this scale. So maybe, number one, SESAME can learn something from NHRE. Number two, someone mentioned a remotely piloted vehicle program. If you'd fly some drones through these storms, and if they survived long enough, you might get some valuable information on the transports.

NUMERICAL ACCURACY OF CLOUD MODELS

J. Klemp

Earlier in the afternoon Doug Lilly made a plea for modelers to seriously consider accurate numerical representations in their simulations, and I would like to very briefly add some emphasis to that plea. As numerical models become more and more complicated, it is extremely important that one have confidence in the numerical schemes being used before one can attach any particular physical significance to the results they produce. To emphasize this point, let us consider some solutions from a two-dimensional simulation of the inviscid compressible equations of motion. In particular, I would like to show the effect of altering the spatial resolution of the advection terms through the use of pseudospectral, fourth-order, and second-order finite difference representations. These comparisons have been done many times for simple systems, but I feel cloud modelers have not considered these effects as seriously as they perhaps should.
The simulations I will discuss begin initially with a hemispherical bubble having a 2.5 km radius and a maximum potential temperature excess of 1 degree. In this example, as the simulation proceeds, the bubble rises due to buoyancy and is also advected along with a mean wind of 15 m/sec. A 500-m mesh spacing is used in the vertical, while the horizontal grid interval is 625 m. No vertical velocity is allowed at the upper and lower boundaries, while the lateral boundaries are periodic.

Figure 1 shows the vertical velocity field after 23 minutes using pseudospectral differencing of the horizontal advection terms. During this time the bubble has advected to the right at 15 m/sec, passed through the periodic boundary, and returned to the center of the box. For comparison, the same solution with no mean wind is included in Figure 2, and reveals that there has been no significant distortion caused by the horizontal advection.

Figures 3 and 4 show the vertical velocity fields for the same situation as that in Figure 1, except that fourth-order (Figure 3) and second-order (Figure 4) finite difference representations were used for the horizontal advection terms. Significant phase errors are beginning to appear in the fourth-order solution and are much worse in the second-order solution. The solutions, which should be perfectly symmetrical, are starting to become skewed. The updraft region is tilting in the upwind direction, and the downdraft on the upwind side is becoming stronger relative to that on the downwind side. This distortion is much more pronounced in the second-order solution than in the fourth-order one. In fact, in the second-order solution there is a significant secondary updraft region which appears to be a secondary convective cell forming, but is in reality just numerical phase error. This is only a simple numerical example, but the system is similar enough to the kinds we are trying to model to indicate that we must carefully evaluate the consequences of these errors and seek to minimize them.

Fig. 1 Vertical velocity field using pseudospectral differencing of horizontal advection terms. The convective cell is advecting to the right at 15 m/sec and the solution corresponds to 23 min after the initial perturbation. Solid contours represent upward flow, while dashed lines denote downward motion. Velocities are scaled by a factor of 10.

Fig. 2 Vertical velocity field using pseudospectral differencing of horizontal advection terms. Same solution as in Figure 1 except no mean wind is present.

Fig. 3 Vertical velocity field using fourth-order differencing of horizontal advection terms. Same situation as in Figure 1 except for differencing of advection terms. Velocities are scaled by a factor of 100.

Fig. 4 Vertical velocity field using second-order differencing of horizontal advection terms. Same situation as Figure 1 except for differencing of advection terms. Velocities are scaled by a factor of 100.
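Klemp's phase-error comparison can be reproduced in one dimension. For centered differencing of the advection term, each Fourier mode moves at a fraction k_eff/k of the true speed; a minimal sketch of that arithmetic (my own illustration, not Klemp's code, which used a full two-dimensional compressible model):

    import numpy as np

    # Effective (modified) wavenumber of centered differences for d/dx
    # applied to exp(ikx) on a grid of spacing dx; the scheme advects
    # each mode at c * k_eff / k instead of the true speed c.
    def k_eff_2nd(k, dx):
        return np.sin(k * dx) / dx

    def k_eff_4th(k, dx):
        return (8.0 * np.sin(k * dx) - np.sin(2.0 * k * dx)) / (6.0 * dx)

    dx = 625.0  # the horizontal grid interval quoted in the talk (m)
    for pts_per_wave in (4, 8, 16):
        k = 2.0 * np.pi / (pts_per_wave * dx)
        print(f"{pts_per_wave:2d} pts/wave: "
              f"2nd order {k_eff_2nd(k, dx) / k:.3f}, "
              f"4th order {k_eff_4th(k, dx) / k:.3f}, spectral 1.000")

A 4-grid-interval wave propagates at only about 64% of the true speed under second-order differencing and about 85% under fourth-order, while the pseudospectral derivative is exact for every resolved wavenumber; the laggard short waves are what pile up on the upwind side of the bubble and masquerade as a secondary cell.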
Observations Needed to Test Numerical Models of Thunderstorms

Carl Hane

I would like to present some areas where I feel observations are needed on the individual thunderstorm scale, based upon my experience with numerical modeling of the squall line in two dimensions. In all areas of research in the atmospheric sciences it is generally true that observation and analysis should combine with conceptual or numerical modeling to increase our knowledge of physical phenomena. Ideally, the process should be an iterative one in which observations are taken and analyzed, and models are formulated based upon the observations. More observations may then be taken for verification of the model where the models point to poorly observed areas. The models may then be improved based upon these observations. With this sort of iterative process in mind, and within the context of this meeting, I think it would be appropriate to point to some areas where observations appear to be needed on the scale of the individual thunderstorm, based upon the results of two-dimensional thunderstorm numerical modeling. This modeling has been done in a sheared environment under moderately unstable conditions. Many of the features of which I will speak have been observed previously in nature. I do not wish to suggest that modeling is responsible for their discovery, but rather that observation and modeling have mutually confirmed the existence of certain features of thunderstorm structure. Many of the observations, however, have been isolated from observations of the other variable fields which might provide a more complete picture.

Now I would simply like to list certain features, obtained from numerical modeling results, which appear to be areas where observations are rather badly needed. First, simply the intensity of certain maximum quantities within the storm. Updraft speeds on the order of 20 to 30 meters per second have been obtained in these two-dimensional models. I simply ask, is this realistic? Positive temperature anomalies, that is, departures of the temperature from the initial (pre-storm) temperature profile or from the environmental temperature, on the order of 10 to 12 degrees centigrade have been reached in these updrafts. Is this realistic? Maximum downdraft velocities in the rain area in the strongly sheared cases reach 10 to 14 meters per second.
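As a rough plausibility check on those magnitudes (my own parcel-theory arithmetic, not Hane's), an undilute parcel riding a constant temperature excess over a given depth is limited to w = sqrt(2 CAPE):

    import math

    g = 9.81        # m s^-2
    T_env = 280.0   # assumed mean environmental temperature (K)

    def w_max(delta_T, depth_m):
        """Undilute parcel-theory updraft ceiling for a constant
        buoyancy excess delta_T (K) acting over a depth (m)."""
        cape = g * (delta_T / T_env) * depth_m
        return math.sqrt(2.0 * cape)

    print(f"{w_max(10.0, 8000.0):.0f} m/s")  # 10 K over 8 km: ~75 m/s
    print(f"{w_max(4.0, 6000.0):.0f} m/s")   # 4 K mean over 6 km: ~41 m/s

On that basis, 20 to 30 m/sec model updrafts sit well below the undilute ceiling implied by a 10 to 12 degree excess, so the two model numbers are at least mutually consistent; whether nature agrees is exactly the observational question.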
Another area, the cold dome in the downdraft region resulting from the evaporation of falling rain, has been well observed, and this is reproduced in model calculations. However, the model calculations also point to an upward extension of this cold dome, especially on the upshear side of the storm and along the edges of the storm. This relatively cool region is presumably due to evaporation at the cloud edges. This seemingly should be a primary region for future observations. In another area, there are in the model results broad regions of warming due to the adiabatic descent of the air around the storm. These departures can be on the order of several degrees centigrade. To my knowledge these have not been well documented within the context of other observations. In association with this, the downward motion itself can become quite intense in the model calculations. This is downward motion in dry air of which I am speaking now, not the very strong downdraft near the ground. Downward motions on the order of 5 meters per second or so result in the anvil region downshear from the storm in the upper levels. Another region of downward motion in middle levels on the upshear side also results in many of these calculations. These areas of downward motion are concentrated very near the storm in the model results, that is, within one cloud diameter in most cases. This, I would think, would be a very important area for investigation, i.e., to determine the horizontal extent of the downward motion outside the clouds.

Another point concerns the shape of the updraft in many of the model results. The updraft is tilted in an upshear sense through the lower half of the troposphere, and this has some very important implications. This sort of tilt results in a downdraft below the updraft because of the weight of the rain itself and because of the negative buoyancy produced by the incorporation of dry air in middle levels and the subsequent evaporation of the falling rain. Another point concerns the time evolution of the individual cells in the severe storm case. The two-dimensional model results, I think, are pretty unrealistic in this regard. The two-dimensionality forces a periodic or aperiodic oscillation in, say, the maximum vertical velocity, which is quite often not observed in natural situations. I think it would be important to gear the observations so that the time changes in the intensity of the updraft can be studied in relation to the development of new cells in the direction of propagation of the storm.

In summary, I think the problem of looking at the individual storm should be an important part of SESAME. I know that the opposite opinion has been expressed. However, bearing in mind that the individual thunderstorm is the producer of the mesoscale system, how can one understand an interaction between an individual storm and the larger scale if one doesn't understand the structure of the thunderstorm itself?

UNKNOWN: I would like to make a comment also. You're talking mostly about the internal structure or the environment right near the storm. One of the problems we face as modelers is what kind of lateral boundary conditions we should use, and this is again a problem of scales. What kind of moisture supply should be expected entering our domain compared to what's leaving in the form of precipitation? Observations might perhaps be coordinated with the setting of our domain and of our lateral boundary conditions. I fear that sometimes our lateral boundary conditions have a great deal to say about what the internal structure will end up being.

NEWTON: You began by asking whether 30 m/sec and that sort of thing is realistic, and whether the temperature excesses that you get in the model, 10 to 12 degrees, were realistic. I think the NSSL people can answer you on that. The last things I saw, and I don't know how preliminary they were, were some traverses of thunderstorm tops made by Peter Sinclair for NSSL, and these did show, as I recall, 60 m/sec vertical velocity and 10 to 12 degree temperature excesses. The interesting thing to me was that these large temperature excesses and the large vertical velocities did not coincide with each other; they were displaced a considerable distance. I don't know how that would come out in the model. I think Jim Fankhauser could comment on the warm air around the cloud. I am not sure whether the warm air outside of model clouds and real clouds originates from the same thing, and I've a feeling that maybe the boundary conditions have something to do with shoving the air down in the model. But I think there were several-degree temperature excesses outside the cloud, Jim, in some of the flights that you made?

FANKHAUSER: (somewhat unintelligible, but affirmative.)
HANE: Well, I think my point concerning the updraft speeds and the intense warming is that these observations quite often have not been taken within the context of the many other observations which are needed for a complete, integrated picture.

BARRETT: Can I point out that until the recent development of the carbon dioxide radiometer thermometer we really haven't had any way of resolving such things? No material-type thermometer is feasible to use inside a wet cloud.

COTTON: Well, at least I think they can come within 10 degrees.

BARRETT: I would say that we have had errors of 4 or 5 degrees.

KESSLER: About 65 ascents, out of 5,000 or so that we've made at NSSL in a program that has been documented by Stan Barnes, ran into the updrafts and show temperature excesses that agree reasonably well with the models. In my opinion, a characteristic of the models, when considered in light of such observations as are available, is that they are "reasonable". The problem comes when one looks at the detail. There are many models around which produce thunderstorms or showers on computer paper which are superficially similar to the natural events, even in the degree of their complexity. To bring in an analogy outside my field, I think that the Rossby waves reasonably reflect the behavior of long waves, and yet they are extremely deficient when you get down to details. I think our computer models are really a terribly long way from the observations, and they are not well matched to the observations. There is an extremely painful job ahead of determining what can be observed to produce a definite verification of such models as we have, and of constructing models which incorporate observations rather than non-observable detail. These kinds of problems are not being properly addressed by our community.

COTTON: Thank you, Carl. Incidentally, your question about the representativeness of downdrafts of 10 to 15 m/sec reminds me of when we had a cloud penetration in Australia and we had three very surprised observers plastered to the ceiling of an aircraft in a downdraft in excess of 18 m/sec, in a cloud that was less than 3 km deep and had a radius of less than 400 meters. So a 10 to 15 m/sec downdraft does not surprise me at all.

ONGOING RESEARCH IN THREE-DIMENSIONAL CLOUD MODELING AT THE UNIVERSITY OF WISCONSIN

R. E. Schlesinger

I'll be talking about some preliminary three-dimensional numerical cloud modeling that I've been working on at the University of Wisconsin, focusing upon one of three trial experiments. I'd also like to give a brief indication of hoped-for model elaborations over the next couple of years or so. Ultimately, I'm hoping to simulate large-diameter thunderstorms, with the model domain having horizontal dimensions of about 100 km and a vertical depth encompassing the full troposphere, say 15 km. The model is to be anelastic, with open lateral boundaries and rigid free-slip top and bottom boundaries, using the balance equation (a Poisson-like elliptic equation), solved with Fourier transforms, to recover the pressure. An individual storm will be simulated rather than a squall line, unlike the two-dimensional numerical model developed for my Ph.D. and written up in the July and October 1973 issues of JAS. I hope to include both solid and liquid precipitation later.
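Since the pressure is to be recovered from a Poisson-like balance equation by Fourier transforms, here is a minimal one-dimensional sketch of that solution method with periodic boundaries (an illustration of the general FFT-Poisson idea, not Schlesinger's actual three-dimensional formulation with open lateral boundaries):

    import numpy as np

    # Solve d2p/dx2 = f on a periodic interval: divide each Fourier
    # mode by -k^2; the k = 0 mode is the arbitrary additive constant.
    def poisson_fft(f, dx):
        k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
        f_hat = np.fft.fft(f)
        p_hat = np.zeros_like(f_hat)
        p_hat[1:] = -f_hat[1:] / k[1:] ** 2
        return np.fft.ifft(p_hat).real

    # Verify against a known solution: p = sin(x) implies f = -sin(x).
    n = 128
    x = np.arange(n) * (2.0 * np.pi / n)
    p = poisson_fft(-np.sin(x), 2.0 * np.pi / n)
    print(np.max(np.abs(p - np.sin(x))))  # ~1e-15, spectrally exact

The attraction of the method is that the elliptic solve costs only a pair of transforms per evaluation, which is part of what makes an anelastic model practical at these grid sizes.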
The preliminary experiments were first regarded as short test runs to make sure the model was working, and used a very small trial grid of 11 x 11 x 8 interior boxes (or 13 x 13 x 10, counting the exterior boxes). The dimensions of this midget trial grid were only 35.2 km in the x and y directions and 5.6 km in the vertical, only enough for simulating early cloud growth up to maybe the beginning of cumulus congestus. So, partly for reasons of convenience and partly because there wouldn't be any hope of trying to simulate a real case of cumulonimbus development with this trial model, I suppressed turbulence and precipitation, both of which will be included later. The three preliminary experiments used different vertical wind profiles, with no attempt made to simulate any one actual observed case. One case, P1 (P for preliminary), had no ambient wind; another case, P2, had speed shear but no directional shear; the third case, P3, had a similar speed increase with height and also veering. Figure 1 shows hodographs for the two sheared cases; the dashed circles are isotachs for 10 m/sec and 20 m/sec, and the small numerals near the hodograph points represent altitudes in kilometers. The speeds shown here are relative to the earth. The model grid itself is assumed to move with the vertically averaged ambient wind, so that the disturbance, initially a buoyant impulse at the center of the domain, should tend to stay relatively near the center with time. In case of very strong propagation relative to the averaged ambient wind, there could still be some tendency to drift off center, though just how much of a problem that might be awaits much longer experiments with the full-sized version of the model.

Fig. 1 Hodographs for undisturbed ambient wind, relative to the earth, in the sheared preliminary experiments P2 and P3. Numbers near small circles or crosses denote heights (km) above the lower boundary, while concentric dashed circles correspond to the wind speeds (m sec^-1) marked near them. In one other experiment, case P1 (not shown), a basic state of rest was assumed.

In this talk, I'll be concentrating on P3, the case with veering and shearing. The net shear vector is mostly from west to east, although it also has a slight north-south component.
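The grid-motion rule is easy to make concrete. With an assumed discrete hodograph (the numbers below are invented for illustration, not read off Figure 1), the grid translation is the vertical average of the ambient wind, and subtracting it gives the hodograph seen by the moving grid:

    import numpy as np

    # Assumed (u, v) ambient winds (m/s) at equally spaced heights,
    # strengthening and veering with height, P3-fashion.
    wind = np.array([[ 2.0, -4.0],
                     [ 8.0, -2.0],
                     [12.0,  2.0],
                     [16.0,  6.0],
                     [18.0, 10.0]])

    grid_velocity = wind.mean(axis=0)  # translation applied to the grid
    print("grid moves at", grid_velocity, "m/s")
    print("grid-relative winds:\n", wind - grid_velocity)

Any storm propagation relative to this mean wind then shows up directly as drift of the cloud core away from the domain center, which is the diagnostic Schlesinger mentions.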
That may not necess- arily be true after precipitation is put into the model and much longer runs are done. Figure 3 shows central x-z cross sections, west to east, through the cloud core for^-'case P3 at the end of the run. Each tick: mark represents ^x (3.2 km) horizontally and ^z (0.7 km) vertically. (I may want to use somewhat smaller ^x and J^ , assumed equal to each other, when the model is more finalized, but since I am anticipating to be looking at large-diameter rather than small- diameter clouds, one would think that at least the main macro- scale elements of such a cloud would be resolvable with the pre- sent grid cell size. It involves an anticipated trade-off between computer storage capacity and how much physical room I want to leave the cloud. I want to have one to two cloud diameters on either side; for a large-diameter cloud-, that means about 100 km on a side of the domain.) This figure shows u, v and w (m s ), potential temperature deviation ( K) , pressure deviation (mb) and liquid water content (g m ). There is asymmetry, in that the cloud outline leans downshear even though the core of the updraft is nearly erect. Variables tend to have larger horizon- tal gradients on the upshear or west side than on the east side. 352 U ' =17.60 KM T=12.51 MIN CASE P3 V f = 17.60 KM T=12.51 MIN CASE P3 u i = i:'.50 KM ' = 12.51 m:n ::SE =5 I t" I n -J I i_ r = 17.60 KM T = 12.51 MIN CASE P3 LWC =1~.6C KM ^=12.51 MIN CASE P3 Fig. 3 Central x-z sections of horizontal velocity compoments _^ u (U, contour interval 2 m sec ) and v (V, contour _^ interval 2 m sec ) relative to the earth, vertical _^ velocity w (W, contour interval 0.5 m sec ), potential temperature deviation (POT, contour interval 0.5K), pressure deviation (P, contour interval 0.25 mb), and liquid water content (LWC, contour, interval 0.25 gm m ) for case P3 at 12.51 min. Horizontal ticks are ^ - 3,2 km apart and vertical ticks ^z = 0,7 km apart, with overall dimensions of 35.2 by 5,5 km. Proportions are vertically stretched by a ratio of 128:21. Note again the very warm-core updraft, excessively so bcc^'Use tur- bulence is suppressed. Also note the evaporative cooling along the cloud base, and a meso-low underneath a meso-high. The meso- low is displaced downshear of the meso-high, as was also true with speed shear alone (case P2) although with a little bit less asymmetry. In the case with no shear (PI), all central x-z sections were symmetric about the center except that the u field was anti- symmetric. 35: U '^IT.eO KM T=12.51 MIN CASE P3 -1 T 1 1 1 1 1 1 r Fig. 4 Same as Fig. 3, but central y-z sections for case P3 at 12.51 min. Horizontal ticks are /^^ = 3.2 km apart, and vertical ticks ^z = 0.7 km apart, with overall dimensions of 35.2 by 5.6 km. Proportions are verti- cally stretched by a ratio of 128:21. Figure 4 shows the same type of pattern, but for central y-z sections, south to north. As can be seen, the net shear vector was mainly west-east rather than north-south, so that while there is some asymmetry as one might expect, it's a lot less pronounced than in the last figure. There is a relative minimum of u at just about all levels near the center, indicating that there's a ten- dency for the updraft to transport eastward momentum downward (or, if one wants to say it another way, to transport small eastward momentum upward), and one sees again the warm-core updraft and the evaporative cooling along the cloud base. 
Again I should mention that there is no precipitation allowed in this preliminary experiment, so features such as strong cold outflow at the ground or very large downward vertical velocities do not appear. The updraft maximum is 7.5 m/sec; the strongest downdraft appears on the north side of the cloud and is about 2.4 m/sec.

Now we come to horizontal cross sections (Figure 5), looking down rather than toward the north or east. These sections take in an upper level (4.55 km), a middle level (2.45 km) and a lower level (0.35 km); recall that the top height is 5.6 km. These are plots of the horizontal part of the wind in two different senses. One sense is the departure from the ambient wind, or just the perturbation part of the wind. The other is the wind relative to the cloud; so far, the cloud has been moving with the vertically averaged ambient wind, since the cloud core has stayed at the center point of the model grid. One would have to run a much longer experiment to be able to tell whether there is significant propagation in any one case. There is convergence at the lower level, and the perturbation outflow at the upper level is strongest on the upshear side, also true in case P2 with only speed shear. In case P2, one has symmetry about the central west-east line and asymmetry otherwise. Case P3 is fully asymmetrical. In case P1, without shear, all the fields in the horizontal views had no preferred quadrant; all four quadrants were the same. It is apparent in case P3 that features such as downdraft strength show preferred quadrants. Perhaps most interesting in this highly preliminary experiment is the appearance of a vortex doublet, anticyclonic to the north and cyclonic to the south, with vertical vorticity similar in magnitude to the ambient horizontal vorticity of the shear, about 3 x 10^-3 s^-1. In this experiment I haven't assumed any cyclonic or anticyclonic large-scale vorticity, so the doublet appears to be due to the tilting of the initially horizontal vorticity into the vertical by the differential vertical motions within the cloud. In nature, that may not be the only thing contributing to such vortices; there could also be concentration, by mechanisms beyond the scope of my talk, of pre-existing cyclonic vorticity in the warm sector of a cyclone toward smaller horizontal scales and greater intensities, perhaps tornado cyclones, even if not outright tornadoes.
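The tilting mechanism invoked here can be written down directly. In the vertical vorticity equation, the tilting term converts the horizontal vorticity of the ambient shear into vertical vorticity wherever the updraft varies horizontally (standard notation, my own illustration rather than anything shown at the meeting):

$$
\left.\frac{\partial \zeta}{\partial t}\right|_{\mathrm{tilt}}
= \xi\,\frac{\partial w}{\partial x} + \eta\,\frac{\partial w}{\partial y},
\qquad
\xi = \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z},
\quad
\eta = \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x}.
$$

For westerly shear, du/dz > 0 gives η > 0, and since dw/dy changes sign across the updraft core, the term manufactures cyclonic vorticity on the southern flank and anticyclonic vorticity on the northern flank, exactly the doublet seen in the experiment, with a magnitude naturally comparable to the ambient shear vorticity of about 3 x 10^-3 s^-1.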
Another thing in connection with vortices is that single Doppler, and more convincingly dual Doppler, radar observations have confirmed regions of strong vertical vorticity within severe storms. For instance, Donaldson has emphasized the "plan shear indicator" view of such features, especially in Massachusetts storms. Such a feature could be of potential predictive value for warning of damaging winds reaching the surface, as it appears from some of Donaldson's observations that there is a lag of anywhere from 15 minutes to an hour or so between the appearance of these concentrated vorticity regions 1 to 2 km above ground (or maybe higher) and the arrival of damaging winds at the surface.

Fig. 5 Selected x-y sections for case P3 at 12.51 min. Plotted variables are anomalous flow, i.e., deviation from the undisturbed ambient wind (UV-PRM, vector scale 10 m sec^-1 per tick separation), horizontal wind component relative to cloud motion (UV-REL, vector scale 10 m sec^-1 per tick separation) and liquid water content (LWC, contour interval 0.25 gm m^-3). Ticks are Δx = Δy = 3.2 km apart in both directions, with overall dimensions of 35.2 by 35.2 km, and fields of a particular variable at heights of 4.55, 2.45 and 0.35 km are arranged in columns from top to bottom. Scalings are equal in both directions.

Looking at the cloud-relative flow, one sees the vorticity quite noticeably, although only the anticyclonic region actually appears as a closed circulation. Around the northwest side of the cloud there's the suggestion of obstacle-like diversion of the air, but it's not quite that simple a situation all around the cloud; on the southeast side, the relative flow shows considerable inflow. By the way, the wind vectors in Figure 5 are scaled to 10 m/sec per tick separation, in case you were wondering what the intensities of the wind might be.

The last diagram, Figure 6, shows more horizontal sections (low level, middle level and upper level) for vertical velocity, potential temperature deviation and pressure deviation, in the same units as in the x-z and y-z cross sections. One sees again the warm-core updraft at the middle and upper levels. The lowest level shown is underneath the cloud base, with some dry-adiabatic cooling of air rising toward the cloud. The meso-low and meso-high show up again at lower and upper levels, respectively. The middle level shows something like the perturbation pressure pattern that one would expect if one had a cylindrical obstacle in place of the cloud, with relative flow from the west. However, the relative flow in this experiment is nearly southerly, suggesting that one should be careful about trying to extend the cylindrical obstacle analogy. Even in this relatively simple cloud modeling experiment, thermal buoyancy and liquid water, two elements which do not appear in a wind tunnel experiment, are strongly in evidence. It would have been intriguing to be able to run this experiment much longer, but with the disturbance already getting near the upper boundary this wouldn't have been realistically feasible. But one tantalizing question is: might the horizontal pressure gradient force within the cloud ultimately steer the storm to the right of the winds? Because the cloud up until now has been moving relative to the earth at just about the vertically averaged ambient wind direction, about 240 degrees, the horizontal pressure gradient force in Figure 6 is directed 30 to 40 degrees to the right, even though the vertical perturbation pressure gradient force is downward almost everywhere and therefore not favorable to continuous propagation (by regeneration on a particular flank).

Fig. 6 Selected x-y sections for case P3 at 12.51 min, continued. Plotted variables are vertical velocity (W, contour interval 0.5 m sec^-1), potential temperature deviation (POT, contour interval 0.5 K) and pressure deviation (P, contour interval 0.25 mb).

At this point, I want to indicate briefly where I'm hoping to go in the next couple of years. I hope to use at least a 33 x 33 x 22 grid, even if I don't make Δx and Δy any smaller. If I want to improve the horizontal resolution to just 2 km, I will need more like 50 points in each horizontal direction. During 1975, I'll temporarily be sticking with a highly simplified precipitation parameterization such as I used in my two-dimensional model, adopting Takeda's partitioning of liquid water into cloud and rain water. During 1976, I'm hoping to include hail as well as rain, and possibly cloud ice in addition, using parameterizations such as those in the Wisner-Orville-Myers hail cloud model (JAS, September 1972), transcribed into three dimensions. One thing that didn't really apply in these very early experiments, but which will be very important later, would be some way of verifying the model, in trying to get results as realistic as possible within the framework of such parameterizations. This might entail doing sets of experiments in which I would input some real environmental data for the initial ambient variables, using for instance data from NSSL or perhaps NHRE, and also varying disposable parameters such as the intensity of turbulence (which I'm hoping to represent by methods similar to those Deardorff has used in recent years). I may just use shear, rather than both shear and buoyancy, in formulating the variable eddy coefficient, since I wouldn't want to take on too much elaboration at a time. Now, one last thing that might be good to bring up as a possible challenge to SESAME is the question of what would be a well-motivated initial perturbation to set the storm off in the first place. Would it be, for instance, a thermal buoyancy impulse, a humidity impulse, a vertical velocity disturbance in the free atmosphere, or maybe random heating at the ground? That suggests almost going down to a "meso-delta" scale, if one wants to call it that, to try to see what an initial impulse a lot smaller than something like the dry line or jet stream might be like if, in fact, it does help set off individual severe storms. So that may be a small-scale gap to be filled, in which SESAME could provide valuable help to numerical modelers toward deciding what would be a well-motivated initial perturbation from which all subsequent model results would evolve.

Cloud and Severe Storm Modeling Studies at the University of Illinois

Yoshi Ogura

This has been a long day, listening to the presentation of 16 or 17 interesting papers and discussions, and I am sure everyone wants to get back to their hotels. So I'll make my presentation very brief. For the past few years we have been working at the University of Illinois on numerical modeling of clouds. We have recently geared our effort toward investigating interactions between cumulus clouds and the environmental atmosphere. The following table shows research topics which we have been pursuing in areas that may be related to the projected SESAME. I have also listed in the table papers by my colleagues and myself published during the past two years, and papers in preparation.

1. Pre-storm analysis and meso-scale prediction
   Analysis: Lewis, J. M., Y. Ogura and L. Gidel, Mon. Wea. Rev., 1974.
   Prediction: Kondo, H.

2. Cloud modeling
   1-D: Ogura, Y., and T. Takahashi, JAS, 1973.
   2-D: Soong, S.-T., and Y. Ogura, JAS, 1973. Soong, S.-T., JAS, 1974. Wilhelmson, R., and Y. Ogura, JAS, 1973.
   3-D: Wilhelmson, R., JAS, 1974.
   Numerical aspects: Wilhelmson, R. (in preparation).

3. Nested grids in nonhydrostatic systems
   Walsh, J. (in preparation).

4. Cumulus ensembles and large-scale circulations
   Ogura, Y., and H.-R. Cho, JAS, 1973. Ogura, Y., and H.-R. Cho, JAS, 1974. Cho, H.-R., and Y. Ogura, JAS, 1974.
2 - D: Soong, S.-T. and Y. Ogura, JAS, 1973. Soong, S.-T., JAS, 1974. Wilhelmson, R. and Y. Ogura, JAS, 1973. 3-D: Wilhelmson, R. , JAS, 1974. Numerical Aspects: Wilhelmson, R. (in preparation). 3. Nested grids in nonhydrostatic systems Walsh, J. (in preparation) 360 4. Cumulus ensembles and large-scale circulations Ogura, Y. and H.-R. Cho, JAS, 1973. Ogura, Y. and H.-R. Cho, JAS, 1974. Cho, H.-R. and Y. Ogura, JAS, 1974. I do not intend to describe each research topic in detail. I will describe, however, the motivations for these research topics and emphasize a few points which I think should be considered in planning SESAME. Also, I should mention that the above classification of our research is made as a matter of convenience in this presentation. As you will see in a few minutes, these research areas are closely interrelated. Pre-storm analysis I have just used the word "interaction" between the cumulus activity and the large-scale field rather loosely. To be exact, we should differentiate cause and effect in this interaction problem. In other words, the questions we want to help answer are (1) what are the favorable environmental conditions for the generation of severe storms and (2) what are the feed-back effects of cumulus clouds on the large-scale fields. In helping to answer the first question, one objective of our research is to investigate the favorable conditions for the generation of severe storms and to identify the physical processes responsible for creating such favorable con- ditions. In doing this, we also intend to determine what space and time scales are involved in the generation of severe storms. We believe that this information is needed for predicting occurences of severe storm by a fine-mesh limited area primitive equation model and that this information is essential in designing the SESAME observational network. 351 We have searched the National Severe Storm Laboratory (NSSL) archive to find cases which satisfy the following conditions: (1) sufficient upper air observations were available prior to convective activity, (2) a disturbance originated in the network as opposed to a case where it propagated into the NSSL network, (3) generation occured in the absence of surface heating. We were fortunate to find such a case in the NSSL archive where a series of upper air soundings were initiated 2 '^ 3 hours prior to the explosive development of storm activity near Chickasha, Oklahoma. A detailed case study was made by Lewis, Gidel and myself to find out what was going on in the atmosphere before the storm started developing. In particular, we were interested in pinpointing why the storm developed near Chickasha but not in the other areas within the NSSL network. The result of our analysis indicates that the meso-scale convergence area with the horizontal scale of about 50 km was present in the lower atmosphere near Chickasha at least two hours prior to the convective development. The upward air motion (approximately 10 cm sec in magnitude) associated with this meso-scale convergence resulted in the saturation of a portion of the conditionally unstable atmosphere. Explosive growth then occured. In sharp contrast to this, soundings at Dibble, approximately 40 km away from Chickasha, indicate that the lower atmosphere became drier due to downward motion in the two hours prior to convective development. From our analysis and also from previous analyses by other authors, it seems to me that the development of deep convection in middle -4 -1 . 
latitudes is associated with horizontal mass convergence of 10 sec m magni- tude in the lower atmosphere. As I mentioned before, we have developed models for an isolated cloud. One of my collegues. Dr. Soong, is now applying his axisymmetric cloud model to environmental conditions observed in the case study 1 have just described. The intention here is to see whether his model generates a cloud which possesses 362 observed cloud properties such as the cloud top height at the right geographical locations (near Chickasha) and generates no cloud at locations (near Dibble) where no development of clouds was observed in the real world. The indication of his preliminary result is encouraging: when applied to the sounding at 2000 CST at Chickasha, his model generated a cloud which reached the maximum top height of approximately 12 km in 40 minutes. This compares favorably with radar observations (Fig. 9 of Lewis, Ogura and Gidel's paper). It is not my intention to draw too many conclusions from only one case study. As indicated in the draft plan for SESAME, there are several different triggering mechanisms. Different space-time scales and different physical processes may be involved in different triggering mechanisms. I agree with the statement in the SESAME document that the investigation of the triggering mechanisms should be one of the main objectives of SESAME and we are continuing our research along the line I have just described. The point I would like to make here regarding the observational strategy of SESAME is that the SESAME observing system should be activated for intensive observations before, not after, the cloud activity has developed. I think we should take the gamble associated with the time choice for these intensive observations in order to investigate possible triggering mechanisms . Meso-scale prediction The next question we have addressed is to what extent can we predict the changes in environmental conditions in the space and time scales I have just de- scribed. Currently, Mr. H. Kondo, a visiting scientist from the Japan Meteorological Agency (JMA) , is applying a slightly modified version of the fine-mesh limited area PE model developed at JMA to the case to which we have made a detailed case 353 study. This model has finer vertical resolution in the lower half of the atmos- phere. This is desirable because we feel that the important physical processes for the generation of meso- or intermediate scale disturbances take place mostly in the lower portion of the atmosphere. An energy- and mass-conserving finite difference scheme is being used. Unfortunately no results have yet been obtained. Therefore, I will skip the detailed description of the model. I would like to say, however, that 1 feel strongly that the detailed analysis of meso-scale disturbances and an attempt to predict these distrubances by a fine-mesh limited area model should be made in parallel. This is because we are currently using data from routine air sounding stations as the initial data for our model, while the grid-size in our model is much smaller than the average separation between neighboring stations. Therefore, we should be very careful in interpreting any meso-scale disturbances occuring in the model, to determine whether they are model-generated noise or whether they represent disturbances in the real world. A dense network within the region of routine sounding stations used as initial data would be helpful to determine whether meso-scale disturbances in our model also occured in the real world. 
Cumulus ensembles and large-scale circulations

Now let us turn to another aspect of the interaction between cumulus ensembles and large-scale circulations, that is, the feedback effect of cumulus ensembles on the large-scale fields. Dr. Han-Ru Cho and I have been doing some work in this area, developing a method whereby the spectral distribution of vertical mass flux associated with cumulus clouds can be determined from observed large-scale meteorological variables. This method combines the observed large-scale heat and moisture budgets with a simple model of the physical processes occurring within the cloud ensemble, based on the cumulus parameterization formulations proposed by Ooyama (1971) and Arakawa and Schubert (1974). We have applied the method to the averaged easterly wave disturbances in the Western Pacific, based on Reed and Recker's (1971) data set, to determine the cumulus cloud population in various weather conditions and to investigate the relationship between the cloud activity and the low-level large-scale mass convergence. Because this work is scheduled to be published in the November issue of JAS, I will skip the detailed description of our results. However, let me add that Dr. John Lewis is now applying this method to a severe storm situation observed in the midwest, using data from the NSSL network. The preliminary result he obtained is encouraging. The computed cloud mass flux compares favorably with radar observations, and the amount of precipitation computed from the model appears to agree fairly well with observations.

Dr. Soong and I are now taking a different approach. For a given environment, which is taken to be uniform in the horizontal plane, we run our axisymmetric cloud model with a prescribed initial impulse placed near the ground and let the model cloud undergo its life cycle: developing stage, mature stage and decaying stage. At the time when the cloud dies, the horizontally averaged static energy and mixing ratio of water vapor are computed. The differences of these quantities from the initial environment give the contribution which an individual cloud has made to redistributing the static energy and moisture, averaged over the entire life cycle, as a function of altitude. By multiplying these values by an appropriate factor, we can see the extent to which this type of cloud satisfies the conditions indicated by the large-scale heat and moisture budgets. We then repeat the cloud computation for the same environmental conditions but vary the width of the initial impulse. In this way we are hoping to determine the cumulus population and its contribution to the vertical redistribution of heat and moisture in the atmosphere (an example of a preliminary result was shown at the meeting).

Dynamics of severe storms

There are many questions we have about the structures of severe storms and the physical processes responsible for maintaining them. Some of these are:

(a) Some or most thunderstorms do not reach destructive proportions, apparently because special conditions are not met. What are these conditions?

(b) What is the role of the vertical wind shear and the veering of the environmental wind with height in the formation and maintenance of long-lasting storms?

(c) What are the mechanisms for generating cloud rotation?

(d) What are the mechanisms for cloud regeneration?

(e) What determines the movement of severe storms? I do not think we understand very well the physical processes which determine the speed and direction of movement of severe storms.

(f) What are good parameterizations for the vertical transfers of momentum and vorticity associated with cumulus clouds? This problem has not been discussed extensively.

The SESAME document puts much emphasis on the triggering mechanism for the generation of severe storms. I feel, however, that equal emphasis should be placed on investigations of the evolution and internal structure of severe storms, including their movement. Another aspect I feel is missing in the document is the need to investigate the microphysical processes of precipitation and the water budget in severe storms.
(f) What are good parameterizations for vertical transfers of momentum and vorticity associated with cumulus clouds? This problem has not been discussed extensively.

The SESAME document puts much emphasis on the triggering mechanism for generation of severe storms. I feel, however, that equal emphasis should be placed on investigations of the evolution and internal structure of severe storms, including their movement. Another aspect I feel is missing in the document is the need to investigate the microphysical processes of precipitation and the water budget in severe storms. Living in the Midwest, I have become concerned about the amount of precipitation. As Ed Danielsen pointed out today, the coalescence process is very slow until drop sizes reach about 30 μm. So, unless there are sufficient numbers of large nuclei, the coalescence process may not be very effective. Then, even if a large amount of water vapor is condensed in the cloud, liquid drops will not fall to the ground. They will simply evaporate again in the atmosphere. I am interested in knowing more about the efficiency of a cloud in producing precipitation. This efficiency may be measured as the ratio of the total amount of precipitation to the total amount of water vapor condensed in the cloud. Another measure may be the ratio of the total amount of precipitation to the convergence of water vapor in the subcloud layer over a certain area. According to the numerical simulation of warm rain development by Soong (1974), the precipitation efficiency in the first definition is approximately 45% for a maritime cumulus cloud, whereas it is only a few percent for a continental cumulus cloud. As to the precipitation efficiency in the second definition, we have found recently that it is almost 400% for deep clouds associated with tropical disturbances, whereas it ranges from zero to over 100% for midwest storms, depending upon the wind shear (Marwitz, 1972; Foote and Fankhauser, 1973). The large efficiency in tropical disturbances is explained by the fact that the cloud base height is low and the horizontal moisture convergence in the subcloud layer constitutes only a small fraction of the total moisture convergence into a cloudy area.

In any case, one of my colleagues, Dr. Robert Wilhelmson, has developed a three-dimensional cloud model to help answer some of the questions I have just raised. Some behaviors of an actual thunderstorm cell in a sheared environment appear to be simulated by his model, such as the air flow around the cloud core. Because his paper has been published in JAS, there is no need for me to describe his results here. However, I would like to mention one thing. Bob's three-dimensional model is a rather big one: the number of grid points is 64 x 33 x 25 = 53K, and each grid point carries 8 variables (3 components of velocity, 2 thermodynamic variables, and 3 variables for water substance). He was able successfully to debug and run this three-dimensional code on the UCLA IBM 360/91, but from the University of Illinois. Remote access was accomplished through the ARPA communication network, which has nodes at UCLA and the University of Illinois. Coupled with the three-dimensional coding was an effort to help in the development of graphical capabilities to display the data on any one of several display or hardcopy devices at the University of Illinois network node. Such graphic display was extremely helpful in looking at the large amount of data typically associated with three-dimensional simulations.
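Two of the numbers in this talk invite a quick arithmetic check: the precipitation-efficiency ratios defined above, and the storage implied by Bob's grid. The following is a minimal sketch in Python; the masses and the 32-bit word size assumed for the storage estimate are illustrative assumptions, not details given in the talk.

# Precipitation efficiency, first definition: precipitation / condensed vapor.
def efficiency_condensate(precip_mass, condensed_mass):
    return precip_mass / condensed_mass

# Second definition: precipitation / subcloud-layer vapor convergence.
# This one can exceed 100% when much of the moisture enters the cloud
# above cloud base, as in the deep tropical disturbances mentioned above.
def efficiency_subcloud(precip_mass, subcloud_vapor_convergence):
    return precip_mass / subcloud_vapor_convergence

# Storage for the 3-D grid: 64 x 33 x 25 points, 8 variables each.
points = 64 * 33 * 25        # 52,800 grid points ("53K")
words = 8 * points           # 422,400 words for one time level
bytes_if_32bit = 4 * words   # roughly 1.7 MB if each variable fills a 32-bit word
print(points, words, bytes_if_32bit)

A time-stepping scheme would of course hold two or three such time levels at once, which is what makes remote graphics for browsing the output so valuable.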
From this experience, I am convinced that good remote access to the computing facilities at NCAR would permit many talented university people to participate actively in the data analysis and modeling studies in SESAME without being present at NCAR. I may add that three-dimensional simulation of clouds is not the only approach to answering the questions I have raised. There is a lot of room to investigate these problems theoretically. In fact, two of our students who are sitting somewhere in this conference room are interested in doing some theoretical work in the areas of severe storms or cumulus cloud parameterization.

Nested grids in nonhydrostatic systems

Finally, let me talk for a few minutes on the use of nested grids in nonhydrostatic systems. The numerical simulation of a mesoscale circulation often requires fine grid-point resolution. Yet the total grid domain must be considerably larger than the immediate region of interest because the circulations are not isolated from their surroundings. An enlarged grid domain is also necessary in order to minimize the contaminating effects of artificial lateral boundary conditions. Since a uniform fine grid requires excessive computer time and storage, it is often necessary to restrict the fine grid-point resolution to a limited area of the grid domain. Needless to say, this restriction is particularly severe for three-dimensional simulation problems. Variable resolution can be achieved either by coordinate stretching or by overlapping fine and coarse grids. The latter approach has the advantage that the time step may be increased in the coarse-resolution region, thereby permitting an even greater saving of computer time. The overlapping grid approach has been used successfully in primitive equation models by several authors. When smaller-scale problems are studied, however, the hydrostatic assumption eventually becomes invalid.

At our laboratory one of my colleagues, Dr. John Walsh, has been investigating the feasibility of nested grids in a nonhydrostatic model. The two-way interaction scheme of Phillips and Shukla (1973) is being employed so that each grid obtains information (feedback) from the other grid. Two differencing schemes are being examined: the upstream and the two-step Lax-Wendroff. He has been examining the ability of his schemes to handle a two-dimensional gravity wave and a two-dimensional sea breeze. In both problems, a dry atmosphere is considered and the vertical grid resolution is uniform. His test results appear to be satisfactory. Dr. Walsh is now planning to extend his investigation to handle moist convection and to allow the fine-grid domain to be moved with a propagating cloud within the coarse-grid domain. When the techniques become well established, we plan to apply this model to a three-dimensional cloud simulation. I think my time is up and I had better stop here. Thank you.

A Review: Recent Severe Storm and Tornado Observation

John McCarthy
University of Oklahoma and National Severe Storms Laboratory
Norman, Oklahoma

1974 has been a very good year for severe storm and tornado observation. Many of us were out there looking on April 3rd, when 125 tornadoes occurred in a relatively few hours. Others of us were watching a significant but much smaller outbreak in Oklahoma on June 8th. Meanwhile, an unusually large number of hail days occurred over the National Hail Research Experiment (NHRE) area during the spring and summer.
As we move toward more complete analyses, other days will certainly show significant observational results. During these discussions of the plans and hopes of SESAME, we felt it to be very important that we describe to you some of the very exciting observational results which have been obtained very recently. Our observational tools have become much more sophisticated during the last several years, and we are now emerging into a period of research where our expectations are very high indeed. It is with a great deal of anticipation that we take these tools and more into a broad-spectrum attack on mesoscale storms. We are very fortunate to have for our program this morning a number of very interesting examples of recent research. We do not intend to present an exhaustive survey of research and research tools used in severe storm and tornado research, but rather to give you several examples of what is going on. As most of you certainly know, many tools are now being applied in this area, but certain tools and approaches stand out, and it is to these that we address ourselves this morning. Rather than my presenting a survey of current research (our speakers will do this), I would like to spend a few minutes talking about some of these tools and what the great expectations, and I dare say limitations, are when they are applied to severe storm and tornado research. I have been working with some of these tools for a few years, and I think it would be very useful if we just spent a few minutes describing them.

The first and probably most important tool to SESAME is the pulsed Doppler radar. Doppler radar allows us to determine the radial velocity, as well as the intensity, of scattering precipitation targets. Use of two radars looking at the same target from nearly orthogonal directions allows us to map the kinematic properties of the storm, usually in the horizontal plane, depending on the configuration of the radars with respect to the echo. Most typically, in severe storm research the radars are configured to give us a quasi-horizontal wind field within the storm, and by applying continuity considerations, we are able to calculate the vertical velocity as well. Consequently, we are able to obtain maps of the three-dimensional field of motion. Some of the most exciting results are coming from this type of very recent research, and we will hear current results from three areas: the National Hail Research Experiment (NHRE), the National Severe Storms Laboratory (NSSL), and the Wave Propagation Laboratory (WPL).

It is rather important to understand the advantages and disadvantages of working with a remote probe such as the dual Doppler radar system. The most obvious advantage of using radar is the ability to look at a large volume of space in a relatively short time. In a matter of several minutes the entire volume of a large severe thunderstorm can be sampled by radar. However, the resolution of this sampling is rather crude when compared to the resolution now obtainable by aircraft sampling. For example, after an objective analysis of dual Doppler radar data, normal resolution of the velocity data is on the order of one observational point per cubic kilometer. Clearly this type of resolution allows us to see scales of motion in the mesoscale, that is, a 5 to 10 km circulation scale. As a consequence of this type of resolution, we are obtaining excellent details of mesoscale cyclonic circulations (i.e.
tornado cyclones), but likewise we must recognize that we still cannot adequately observe the smaller scale, the microscale circulation of the actual tornado. In most observations, we would consider it very fortunate indeed if we could get one or two Doppler gates to see the velocities of a tornado. We do have some evidence that we have seen small portions of the tornado, but we will have to accept the fact that, for the most part, looking at the tornado scale is somewhat beyond the capabilities of Doppler programs at this time. Looking at the larger end of the Doppler resolution scale, we realize that 3-cm and 10-cm wavelength radars normally can see only those regions of space containing precipitation-sized hydrometeors. We have recently conducted experiments which utilize radar-reflective chaff to fill large precipitation-free volumes of space in the near environment of storms, allowing the Doppler system to map the instantaneous velocity field outside the natural target, but within the mesoscale circulation. This use of chaff is in contrast to the traditional "tracer" use of chaff in previous years. So in summary, dual Doppler radars are a very exciting development and are particularly suited for studying the mesoscale severe environmental storm. Their value decreases somewhat when we attempt to look at the much smaller scale circulations embedded within the mesoscale.

I would like to contrast radar probing with aircraft probing. In recent years we have seen the development of numerous instrument systems which, when mounted on aircraft, can give us very detailed information on the microstructure of cloud systems. For example, by using various optical sensors, we are now able to measure quite accurately the microphysical structure of clouds in both the cloud droplet and the precipitation drop-size ranges. Three-dimensional wind measuring systems have also been developed which allow us to map very fine structure of the wind field in the vicinity of and inside the storm. One of the great problems with aircraft is the very limited sampling volume that is obtainable. No matter how sophisticated our equipment is, a single airplane can only sample a very limited portion of space in the vicinity of or inside a storm; consequently we have great difficulty in piecing together the entire storm structure from aircraft data alone. On the other side of the fence, radar can look at large volumes, but not with the type of resolution and detail that we can obtain with aircraft. For example, for a storm whose spatial coverage is 20 x 20 x 20 km, we may obtain 8000 objectively analyzed data points throughout the 8 x 10^3 km^3 volume sampled by the radars. During a single horizontal penetration of the storm (in about the same time period as the radar sample), the volume sampled is only about 10^-9 km^3! However, data along the path can be as fine as 5 m resolution! Clearly there are some dramatic differences between these tools! Additionally, thermodynamic information is not available with radar. Obviously, with reference to SESAME, we need to develop experiments which utilize aircraft, radar, and chaff simultaneously, looking at similar volumes, so that we can obtain limited but detailed microphysical and thermodynamic information from the aircraft and overall field-of-motion and reflectivity information from the chaff-supplemented radar targets. Thus by working with both systems very closely we can certainly make great strides in understanding many of the details of these storms.
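As a rough illustration of the dual Doppler geometry just described, here is a minimal sketch, in Python, of how two radial velocities seen from two sites combine into a quasi-horizontal wind at one analysis point. The site coordinates and velocities are invented for illustration; a real analysis adds objective interpolation to a common grid.

import math

def dual_doppler_wind(x, y, site1, site2, vr1, vr2):
    # Viewing directions from each radar toward the analysis point.
    az1 = math.atan2(y - site1[1], x - site1[0])
    az2 = math.atan2(y - site2[1], x - site2[0])
    # Each radar measures the projection of (u, v) on its viewing direction:
    #   vr1 = u*cos(az1) + v*sin(az1)
    #   vr2 = u*cos(az2) + v*sin(az2)
    # Solve the 2 x 2 linear system (Cramer's rule); it is well conditioned
    # only when the two viewing directions are far from parallel.
    a, b = math.cos(az1), math.sin(az1)
    c, d = math.cos(az2), math.sin(az2)
    det = a * d - b * c
    u = (vr1 * d - vr2 * b) / det
    v = (a * vr2 - c * vr1) / det
    return u, v

# Hypothetical example: one radar at the origin, a second 42 km to the
# northwest, and a target 20 km north of the first site.
u, v = dual_doppler_wind(0.0, 20.0, (0.0, 0.0), (-30.0, 30.0), 15.0, -5.0)
print(round(u, 1), round(v, 1))

One would then integrate the horizontal divergence of the resulting (u, v) field upward through mass continuity to estimate the vertical velocity, as mentioned above.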
Another very exciting approach to severe storm observation has been the tornado chase program at the National Severe Storms Laboratory. We will hear details of this program later this morning. The primary objective of this program is to locate and document photographically the life cycle of the tornado. The vital point here is to establish how the tornado itself fits into the entire history of the parent severe storm. A second objective of this experiment is to allow the computation of the velocity in the tornado wall and adjoining collar circulation. It is important to realize that the photographic observations are of a scale that is too small to be observed by aircraft or radar tools. For the primary purpose of SESAME, it would seem that the greatest importance of the tornado chase is precise documentation of the tornado event, so that we can determine how the tornado fits into the overall mesoscale system.

The great jumbo outbreak of tornadoes on April 3, 1974 has been and is being studied by many scientists. A film showing one of these tornadoes to be composed of a number of small vortices circling the parent circulation has been studied in depth at Purdue University. We will hear some very interesting results of this analysis, which are a real-world confirmation of earlier laboratory and theoretical work anticipating this observation. A second form of documentation of tornado events has been the damage survey, pioneered by Dr. Fujita at the University of Chicago. We will hear a discussion of the documentation of tornado events by extensive damage surveys of the jumbo April 3rd tornado outbreak. It seems to me that this type of documentation is absolutely vital to the understanding of how tornadoes fit into the larger scale, particularly in situations where tornado family outbreaks occur.

In conclusion, we now have available a number of exciting tools and approaches to the study of severe storms and tornadoes, which have been tested just this very year. I think that we will find that their use is most appropriate to SESAME; I think we should now hear details of these various tool applications.

A TECHNIQUE TO SUPPLEMENT DUAL DOPPLER RADAR DETECTION VOLUME WITH AIRCRAFT DELIVERY OF THREE-DIMENSIONALLY DISTRIBUTED CHAFF

John McCarthy
University of Oklahoma and National Severe Storms Laboratory
Norman, Oklahoma

1. Introduction

We have seen that the NSSL Dual Doppler Radar System is indeed impressive, and outstanding data sets are being obtained. However, 10-cm radars, and 3-cm radars to a slightly lesser extent, are restricted to "seeing" precipitation targets only, and thereby they are unable to detect significant portions of the motion field that lie within the overall circulation but do not include hydrometeors. We are conducting experiments at NSSL which utilize radar-reflective chaff to supplement the natural echo, in an effort to expand the Doppler view. The technique is useful not only to add to the vicinity of natural storm echoes, but can be used to create artificial echoes in a large volume of clear air, so that otherwise unobservable atmospheric motions can be detected by Doppler radar. Our most recent efforts have involved the tornado cyclone.

2. Experimental Design

Typically a tornado funnel varies in size from 30 to 300 m in diameter while embedded in a larger mesoscale cyclonic circulation ranging from 5 to 10 km in diameter.
This parent circulation is usually identified as a tornado cyclone, and is typically situated near the right rear flank of the thunderstorm. While the presence of such a mesoscale circulation by no means assures that a tornado will form, there does appear to be a high correlation between them. The tornado and its parent circulation have traditionally been difficult to detect with most meteorological sensors, but with the development of dual Doppler radars, such as the system at NSSL, there has been significant progress in better elucidating the tornado cyclone velocity structure. With dual Doppler radars we obtain data on storm intensity, measured in the conventional manner, as well as radial velocity components of the target from two separate antennas placed some distance apart. These data can be objectively analyzed to yield the quasi-horizontal velocity field. Unfortunately, even with the NSSL radars, a significant portion of the mesoscale circulation is not detected. This is due to the fact that much of the circulation is precipitation free, and it is precipitation that scatters the radiation back to the radar antennas, resulting in radar echoes. For this reason, we have designed an experiment that utilizes a special radar-reflective material called "chaff", a metallized plastic fiber, which, when distributed in clear air, will return "clear air" echo signals to the radar. By distributing chaff through much of the volume of clear air adjacent to the natural echo of the mesoscale circulation, we can then use dual Doppler radars to see the entire circulation.

3. Research Tools

A number of excellent meteorological tools have been used in this research. The very fine NSSL 10-cm dual Doppler radar system, one radar located at Norman and the second at Cimarron Field, 42 km northwest of Norman, is of course the primary data acquisition tool. A chaff dispersal system capable of distributing 10-cm chaff in 5 discrete bundles through a 2600 m vertical column below the airplane flight level was a vital component of this research. The idea for the vertical delivery of chaff is originally attributed to Mr. P. H. Hildebrand of the University of Chicago. A more advanced unit was developed and manufactured for this project by Flight Systems, Inc., of Burns Flat, Oklahoma; Fig. 1 shows the complete chaff system.

Fig. 1. Photograph of complete chaff system, including both 3- and 5-packet modules, a firing cap, and a hand-held launching gun.

The University of Wyoming meteorological research Queen Air airplane, operated under subcontract to the University of Oklahoma, provided a highly sophisticated measurement platform, as well as a means of delivering the chaff to the appropriate environmental inflow regions of a tornado cyclone. Parameters sensed and digitally recorded on the airplane included dry bulb and dew point temperatures, airspeed, magnetic heading, static pressure, Aitken condensation particles, airplane position, navigation Doppler radar ground track, wind drift angle, and airplane ground speed. An onboard mini-computer produced cockpit-displayed real-time values of dynamic heating-corrected temperature, dew point, potential and equivalent potential temperatures, specific humidity, and horizontal wind speed and direction. The real-time display of these computed parameters enhanced the ability of the flight scientist to make real-time identifications and to trace properties of the immediate thunderstorm environment.
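As a rough sketch of the kind of thermodynamic reduction such an onboard computer performs, the following Python fragment converts temperature, dew point, and static pressure into potential and equivalent potential temperature. The constants and the saturation vapor pressure fit are common textbook approximations, not the actual flight software.

import math

def theta_and_theta_e(T_c, Td_c, p_hpa):
    # Potential temperature from Poisson's equation (R/cp for dry air).
    T = T_c + 273.15
    theta = T * (1000.0 / p_hpa) ** 0.286
    # Vapor pressure from dew point (a Magnus-type fit), then the
    # mixing ratio in kg/kg.
    e = 6.112 * math.exp(17.67 * Td_c / (Td_c + 243.5))
    r = 0.622 * e / (p_hpa - e)
    # A simple approximate equivalent potential temperature:
    # theta_e ~ theta * exp(L*r / (cp*T)).
    theta_e = theta * math.exp(2.5e6 * r / (1004.0 * T))
    return theta, theta_e, r

# Hypothetical inflow air at flight level: 24 C, dew point 18 C, 850 hPa.
theta, theta_e, r = theta_and_theta_e(24.0, 18.0, 850.0)
print(round(theta, 1), round(theta_e, 1), round(1000.0 * r, 1))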
4. Tornado Cyclone on June 8, 1974

On June 8, 1974, a severe thunderstorm containing a mesoscale circulation produced a tornado at Harrah, Oklahoma (this storm was one of a number of similar storms occurring during a tornado outbreak in central Oklahoma on that day). Chaff operations were conducted before and during this particular tornadic storm. While the storm was in its developmental stage, chaff operations were begun at 1539 CST and lasted 14 minutes, resulting in the distribution of a chaff curtain extending from 3.35 km (flight level) to 0.75 km msl, with 11 chaff modules spaced horizontally along a 55 km length. The curtain was placed approximately 20 km ahead of the inflow border of the storm.

Fig. 2 is a 1559 CST photograph of the intensity-contoured(1) Cimarron Doppler radar scope shown at 0 deg elevation angle, with range circles every 20 km. Position A marks the location of the suspected mesoscale circulation (hook), while B indicates the positions of the chaff. A combination of vertical cyclonic shear and strong cyclonic inflow can be seen in the figure. At this point, the chaff appears to be rapidly entraining into the mesoscale circulation. Fig. 3 is a display of the mean radial velocity of the Norman Doppler at 1602 CST. The presentation is velocity-contoured radial velocity, with positive velocity (away from antenna) appearing strobed(2), while negative velocity (toward the antenna) is unstrobed.

(1) Equivalent radar reflectivity factor.

(2) Strobed display refers to a highlight technique that alternately conceals the illumination of adjacent data gates. Consequently a strobed display appears as light - dark - light bands of concentric rings about scope center. In this case, such a display refers to positive radial velocity, or targets moving away from the antenna.

Fig. 2. Intensity-contoured display of Cimarron Doppler radar, taken at 1559 CST at 0 deg elevation angle. A marks the location of the center of the suspected mesoscale circulation, while B indicates the positions of the chaff (NSSL-NOAA photo).

Fig. 3. Display of the mean radial velocity of the Norman Doppler taken at 1602 CST at 0 deg elevation angle. The display shows velocity-contoured radial velocity, with positive velocity (away from antenna) appearing strobed, while negative velocity (toward the antenna) is unstrobed. First-level display intensity represents 0-13 m sec^-1, the next brightest 14-21 m sec^-1, while the brightest is 22-26 m sec^-1. A marks the area of negative radial velocity, a signature of the suspected mesoscale cyclonic circulation. B marks chaff streamers (NSSL-NOAA photo).

The brighter
While dual Doppler data were being collected within both the chaff and 10-cm precipitation return, the airplane collected dense horizontal wind and thermodynamic data in the south, southwest and west flanks of the immediate environment of the storm. The airplane wind data will be included in an objective analysis of the dual Doppler data, to give a auasi-horizontal kinematic description of the mesoscale circu- lation. More specifically, we hope to provide constant altitude maps of the complete tornado cyclone velocity flow field, at vertical inter- vals of 1 km, between the surface and 3 km above the surface. The air- plane thermodynamic data will be used to further the understanding of the immediate environment of a tornadic storm. Additional studies will concentrate on analyses of the synoptic and subsynoptic dynamics on June 8, 1974. 5. Conclusion We have obtained what appears to be a unique data set on the tornado cyclone. It is our expectation, when the data are fully analyzed, to be able to advance the understanding of the structure of the tornado cyclone, and to see more clearly its interaction with the larger scales of motion. In addition, we feel that this technique is applicable to the study of a variety of mesoscale phenomena, including echo- free cumulus cloud motion studies, and mesoscale divergence studies. Acknowledgements The broad scientific support of the personnel of the National Severe Storms Laboratory is gratefully acknowledged. Dr. Donald L, Veal of the University of Wyoming is acknowledged for his great skill 383 in handling the airplane. This research was supported under Grants 04-4-022-3 from NSSL-NOAA, and GA-41844 from the National Science Foundation. References Sirmans, D. , R.J. Doviak, D. Burgess, and L. Lemon, 1974: Real-time Doppler isotach and reflectivity signature of a tornado cyclone. Bull. Amer. Meteor. Soc, 55, (to appear in Sept., 1974 issue). 384 R. Pielke: Is there any evidence of where those storms were being generated? Was there a geographically favored region or c^n intersection of outflow lines? McCarthy: There is no topographical effect that I know about. Juct west of Norman it's not completely flat but for all practicable purposes, its flat! Dr. Sasaki is planning a gravity wave trigger investigation right now, and I have some students working on it. We don't know the answer to it, but the triggering mechanism was localized and very intense. ;^e ' ve looked at the sub-synoptic moisture convergence and the area which kicks it off in a very small area. It does persist for several hours prior to the first severe storm but conceptually I'm not yet sure what's going on here. J. Shaeffer: I think we should mention that we had one come through about 5:00 a.m. that day so it's going for 12 hours. McCarthy: Yes, in the early morning hours Oklahoma City re- ceived about 5 inches of rain. At the time these severe storms were breaking out, the earlier storm was near Fort Smith. You. can see it on the sub-synoptic analysis. In the area of the tornado tlie land was very wet. I miqht add that Don Veal, who is sitting in the back, v.-hile piloting our two flights of that afternoon, estimates that he saw 30 funnel clouds so it was an extremely heavy situation, It's the heaviest field he has ever flown in; hi? h;^ir is white nov/! S. Hanna: This question of triggering brings up a possible interest to the AEC in this project. 
We're currently trying to analyze the environmental impact of a power park that is being planned (it's on the drawing boards) by Gulf States Utilities at Riverbend, about 50 miles northwest of Baton Rouge, Louisiana. This is planned to put 100,000 megawatts through cooling towers into the atmosphere. This is an approximate equivalent of the Surtsey volcano, and it's our job to try to analyze what impact it is going to have on the atmosphere.

R. Schwiesow: I just wanted to indicate my pleasure in seeing the importance of clear air motions recognized, and to let the assembled group know that we're somewhat behind you at WPL with an alternative technique, using an optical Doppler system to try to probe this without chaff.

J. Purdom: Did you see any relationship between the early morning rain boundary and the region of later severe storm development? As you know, our film loops do show some interesting relationships.

McCarthy: No, I do not have such a relationship at this time. I think that there are some very fascinating things that happened on June 8, and they're going to be examined by a number of people. NSSL is going to be working on it in detail, as well as a number of people at OU, and I'm sure other people. I don't have a relationship, but it's worth looking for. I'd really like to know what's kicking off something so localized and so intense; it's really puzzling.

J. Golden: Just a quick comment. The tornado intercept group was out that day and we observed at least 3 or 4 separate tornadoes and some additional funnels besides. One of the interesting things that we observed was this very point that you brought up, the multiple nature of storm formation in a very localized area. I might emphasize that there was a very pronounced dry air boundary that day, and when we got back in the evening and crossed through the final squall line that was noted on your last radar picture, there was literally a wall of dust behind it, and surface visibility was down to 1 or 2 miles.

McCarthy: Like on the earlier storms, we did have moisture feeding on both sides; the dry air was holding back for some reason.

DOPPLER RADAR RESULTS FROM THE NATIONAL HAIL RESEARCH EXPERIMENT

D. Atlas

I had originally planned to speak about something else, but my absence yesterday is a reflection of the fact that I got so very excited about some very recent Doppler radar studies which Peter Eccles and Ron Rinehart have been doing in NHRE with the data of 7 August 1974, just two fortnights ago, that I discarded my original remarks and switched the subject. I would simply like to reiterate and emphasize the point that John (?) made, that I don't see how SESAME can forget about the storm scale or the "mini-mesoscale" observations. After all, the final objective of SESAME must be to predict the severe storm or detect the severe storm, whichever you like. I do not see how we can do this unless we understand the mechanisms of the individual severe storm in the mesoscale setting. And it would be a tremendous waste if we did not study the individual severe storm in the larger setting, because we will be investing tremendous efforts, funds, and equipment in studying the larger scale. The combined effort would clearly be more efficient and economical than separate ones. Now, with those preliminary remarks, I'm going to switch immediately to our observations. You must forgive us because they are very preliminary.
I'm going to be talking about some observations with the beautiful false-color display developed by the NCAR Field Observing Facility under Bob Serafin. It was done by Grant Gray. It's a phenomenal instrument, as you will see for yourself. I'm not going to sell it, although it's a very cheap device and you can reproduce it at will. Let's turn the lights out and you'll see what I'm talking about. While the pictures are absolutely beautiful and aesthetic, they are no gimmick. Even though we're talking about a single Doppler radar, not a dual-Doppler radar, with care in interpretation the displays provide indications of the general flow, the convergence, vorticity, inflow and outflow regions, and, surprisingly, up- and downdrafts, the regions of vertical transfer of momentum. What I'm implying here is that despite the fact that we have some ambiguities, we do not need dual-Doppler radar all of the time; and the availability of single Doppler radar will provide the capability of extending our Doppler observations out beyond the ranges which a dual-Doppler system can cover.

May we have the first picture of the 7 August 1974 storm, at an elevation of 0.4 deg. We have sixteen colors here; the black is either zero reflectivity or zero velocity, and the browns, rusts, oranges, and yellows up to reds represent receding velocities, each color corresponding to an interval of 3 m/sec; red is 24 m/sec; brown is 3 m/sec away. On the other hand, greens through blues, lavenders, and almost white are approaching velocities. You see that this is a fairly intense storm which was right over the NHRE protected area; it was moving toward the east at about 3 m/sec. Now the environmental winds in front of the storm were from the SSE up to about 10,000 feet and switched sharply to the west above, but only at velocities of 20-25 knots up to about 40,000 feet. Now you notice that on this 1.1 deg elevation picture the colors switch from greens and blues along the leading edge of the system to browns, yellows, and oranges on the western side. Clearly there is easterly momentum and inflow at the lower levels along the entire front of the storm; remember, the greens and blues represent inflow. At places like this, where we go sharply from blues to yellows, we clearly have convergence; in fact, I would suggest that there is some convergence along the entire front of the storm. Similarly, where we have color changes in both range and azimuth, we clearly have vorticity. Thus we have cyclonic vorticity here and here (pointing). Now watch as we go up: some of these greens and blues (easterly flow) will disappear. We're going up in steps of 0.7 degrees. Also remember that where we have greens and blues we are carrying easterly momentum up toward higher altitudes. This can only be accomplished by an updraft. Thus where you see cores of green and blue aloft, surrounded by orange and yellow, those cells must represent the updrafts. Note also the large overhanging region of yellow-orange (westerlies) which extends far out ahead (eastward) of the lower-level precipitation. This is a classical radar "overhang", and we see that the precipitation elements in it probably originate in the (green-blue) updraft cells to the west. As we scan up to higher elevations we also see how the (green-blue) updraft cores move westward with height, representing the kind of tilted updraft first visualized by Browning.
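Before going on, here is a minimal sketch, in Python, of the color quantization just described; the color names and exact bin edges are guesses from the talk (sixteen colors, 3 m/sec per interval, warm hues receding and cool hues approaching), not a specification of the NCAR hardware.

def velocity_color(v_radial):
    # Sixteen bins, 3 m/sec wide, spanning roughly -24..+24 m/sec.
    # Positive = receding (warm colors), negative = approaching (cool).
    warm = ["brown", "rust", "dark orange", "orange",
            "light orange", "yellow", "orange-red", "red"]
    cool = ["light green", "green", "blue-green", "light blue",
            "blue", "lavender", "pale lavender", "near white"]
    if v_radial == 0:
        return "black"          # zero velocity (or zero reflectivity)
    idx = min(int(abs(v_radial) // 3), 7)
    return warm[idx] if v_radial > 0 else cool[idx]

for v in (-24, -10, 0, 3, 24):
    print(v, velocity_color(v))

With a mapping like this, a sharp transition from cool to warm colors across a boundary shows up directly as the convergence or vorticity signature described above.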
And at the very highest altitudes, we note the upper-level divergence, with easterly velocities on the west side and westerly velocities on the east. Now let's go back and superimpose reflectivity maps on the velocity display, first at 0.4 deg elevation. Here is a reflectivity maximum with a 55 dBZ inner contour and a peak of about 65 dBZ. Notice that it is on the western edge of the inflow region. As we proceed upward in elevation we note that the two primary reflectivity maxima remain close to the green-blue cells (which we have previously identified as updrafts) but are shifted westward, so that the maxima lie on the western boundaries of the updrafts, as if the precipitation is being unloaded to the west from the tilted draft.

Tornado detection with the color Doppler display [summary based on Weather Radar Conference paper and Bulletin (AMS) abstract]: Perhaps one of the greatest potential uses of the real-time color display is for tornado detection. While the system as used on the Grover radar has a maximum unambiguous velocity of only plus or minus 25 m/s, by adding a simple Doppler variance processor to the system, the extreme values of variance associated with tornadoes should be an excellent indicator of the presence of a tornado. It is extremely unlikely that anyone will ever be able to predict with any great accuracy the location where a tornado will occur. Any warning given to the public will have to be made through use of a reliable and credible detection system. The real-time color display of the Doppler variance of a tornado holds greater promise of filling this detection need than any other single instrument now available or on the horizon.

This figure is representative of the pictures shown in the movie presentation. For a more detailed version of this work, please refer to the June 1975 issue of the Bulletin of the American Meteorological Society, "Real-Time Color Doppler Radar Display," Gray et al.

To summarize our observations made with the Doppler color display, several things stand out. First of all, much more can be made of single-Doppler radar observations than has usually been done in the past. While some of these need validating with other information, such as dual-Doppler radar measurements, we can make reasonable inferences about areas of convergence, divergence, vorticity, turbulence, inflow, outflow, updrafts, downdrafts, and gust fronts, and even make complete wind profiles with height. Experience using the system should increase our confidence in making such interpretations. From our analysis of the 7 August 1974 line storm, we were able to trace inflow and updraft regions completely up through the storms, from their entrance at low levels to their exit at high levels. This kind of information in real time is extremely valuable for cloud seeding activities such as those performed by NHRE. The ability of the system to separate weak, moving targets from strong but stationary targets makes it possible to detect wind fields from the return signal from insects and to detect weak gust fronts. Other uses of the color display include detection of tornadoes and hurricane velocity mapping. Certainly the potential usefulness of this display system has just begun to be exploited. The value of the information available from the color display is such that it is likely that in the future all radars will have quantitative color displays.

Radar Observations at NSSL

E. Kessler

I will present recent NSSL observations with single and dual Doppler radar.
We had a single 10-cm Doppler radar placed in operation at NSSL in 1970, and one of the first encouraging things that we confirmed was the reflectivity structure observed to have been quasi-steady in studies made as long ago as those by Browning and Ludlam, for example, who enunciated the supercell, a quasi-steady storm. These reflectivity structures were associated with corresponding stationarity in the velocity structure. This gives emphasis to one of the points that Dave Atlas made a few minutes ago, that you don't always need a dual Doppler system to deduce details of the two-dimensional or even three-dimensional flow in a storm if it's quasi-steady, because as the storm moves along, one can have the equivalent of several Doppler radars by considering the views made at different angles with one Doppler.

In 1973 we started operation of a second 10-cm Doppler radar. Our two radars have equivalent performance. Each has a 30-ft antenna which projects a 0.8° beam at S-band. The long wavelength is associated with a correspondingly large capability to measure high velocities at long ranges; this capability is directly proportional to the wavelength. Also, at 10 cm we're able to visualize the distribution of precipitation or reflectivity in intense storms without the encumbrance of attenuation that bothers a shorter wavelength. So for severe storm investigations we're proud of this dual Doppler tool.

Well, I want to illustrate this morning some reduced data prepared by Peter Ray, which show two-dimensional velocities in a tornadic storm that occurred on April 20 of this year in Oklahoma. Now I'll explain these pictures to you. These are produced by computer, of course. They're reduced data from NSSL's two Doppler radars. The horizontal scales are here in km, and these are constant-altitude projections at the indicated heights. These data have been assembled from multiple scans by the two radars at fixed elevation angles. In other words, a scan at 0 degrees, one, two, three, and so on. A lot of interpolation and extrapolation was required before the data were combined to produce these two-dimensional winds at constant altitude. The data were gathered over a period of about five minutes, and the patterns were treated as though they were stationary during that time. The length of the arrow refers to wind speed. The echo boundary is shown and the echoes are labeled with the logarithm of Z, corresponding to Z of 10, 100, 1,000, 10,000, and ten to the fifth, and the spatial distribution of the contours is rather complicated. Now this is a tornadic cell, and the path of the tornado is illustrated, although at this preliminary stage of investigation we don't have the exact location of the tornado at the time of the data. Notice that the wind velocities are shown as very weak near the center of the principal cyclonic circulations. However, in that region individual spectra are wide, and in some cases the spectra are chaotic in a way that suggests the tornado vortex itself. At any rate, the computer has drawn us streamlines based on these winds, and some of them have been erased in order to make this presentation readable. So we have the direction of the wind and their speeds indicated by the length of the arrows, and an objective analysis of the wind direction. Now notice that streamlines of the
Now notice that streamlines of the 393 1 KM ' \ \ / I I I I I I i_ 14 b 3 Kn ^:4;.:^v.u;:t>^;.:AX''; r 8m/s I I'll r I I I I I q/ 3 6 3 12 IS IB 21 24 27 30 33 36 39 42 — 20.0 n/s I I3I I I ^Jh li^l I I J 1,1 I J^ I 1,41 I I I I I IJubj K ^~ 1 ' % Q_ I I I I y r^ i^'^^'i I I I I I I I I I I I I I I I I I Q" 12 15 18 21 24 27 30 33 36 33 42 5 Kn — 20.0 fl/S I I I gi I I t ^ 1 I I I I I ,LJ I I I I I I i I I 3 6 3 12 IS 18 21 24 27 30 33 36 33 42 Kn — 20.0 n/s Dj 1 [FT^oFJ^^eT-n-r3^^^-T-TT^^^T^ 1 \ 1 \ \ f \ \ 'T I I I I I 30.0m/s I I I I I I I I I I I I I I I I I I I 12 15 18 21 24 27 30 33 36 33 42 Fields of reflectivity factor Z and velocity at different altitudes based on dual Doppler radar observations of a tomadic storm near Norman, Oklahoma on April 20, 1974. Horizontal distances are marked in km and isopleth labels represent 10 log Z. 394 main vortex at 1 km spiral inward and that, of course, we can call convergence, objectively defined convergence at the height of 1 km. Now as we go up in altitude in the same region, the convergence persists. And it appears qualita- tively to be more intense at 3 km than at 1 km, and it seems significant to know how the divergence varies with height in a meso-cyclone associated with a tornado. Of course, we're setting up to calculate the kinematic parameters of diver- gence and vorticity from these fields. Now we go on to greater altitudes and notice now that at five kilometers the streamline is spiraling outward and I think that it's inter- esting to learn that the level of non-divergence is between 4 and 5 km in this one storm and notice now the divergence increases. So that at 8 km it looks as though a bomb ex- ploded in the middle. Well, this was the greatest altitude at which it was possible to reasonably construct the wind field. Now the challenge, there are many other features of these pictures. From the point of view of severe storm morphology it is important that we not only have these pictures, but we have a succession of them and we have other associated data. We have the temperature fields, soundings, we have aircraft flights here and there and in a few moments John McCarthy will show you how this kind of data can be extended into the clear air around the storm, into the near environment of the storm which is presumably also disturbed by the circulation. What we can do here is to relate the kinematics and the thermodynamics and we can relate the development of the reflectivity distribution to the devel- oped wind distribution. We know that the updrafts are associated with cloud formation and the conversion of cloud to precipitation and then advection of the formed cloud and 395 precipitation with the wind. Here we have a marvelous, I think, visualization of these processes although the reflec- tivity, of course, shown by these contours is not one to one related to precipitation and its interpretation is itself a rather complicated problem. It can be assisted by rain gauges at the ground. Then can we show how the development of vorticity corresponds with the development of conver- gence? And the restraints represented by this kind of information, should help us to extend our knowledge to properties of the atmosphere not directly indicated in our data. We can better extend our knowledge into unknown areas . Well, I'm about ready to pass it on to John McCarthy. On a related subject, I would like to emphasize that inter- pretation and application of Doppler data is rather compli- cated. 
There are some special attributes of the data which require a short course, and in order to make our Doppler data available in an effective way to the university community, NSSL and the Atmospheric Sciences Section of the National Science Foundation are cosponsoring a workshop at NSSL on Oct. 28-29. We can accommodate about 35 representatives of the university community, who we hope will learn enough about the data to take some of the raw tapes (we have about 200 of them, covering several storms, some tornadic, some non-tornadic) and start to acquaint themselves with Doppler radar and contribute to an understanding of what these data mean to meteorology. Thank you.

Questions:

Fankhauser: Ed, has the motion of the reflectivity pattern been removed from the winds?

Kessler: Oh, thank you for mentioning that. The horizontal winds shown at each height have had the average Doppler-determined horizontal wind at that height subtracted. In other words, the average of the winds illustrated at each height is zero. In addition, the location of the reflectivity pattern and associated wind pattern at each height has been adjusted to remove pattern advection while the radars were scanning.

Foote: Then the circulations as you have drawn them don't necessarily represent trajectories of parcels.

Kessler: That's correct, absolutely. They do, however, represent the divergence and the vorticity.

McCarthy: That raises a pretty good question of how best to do that when you start to objectively analyze it. I have some questions...

Kessler: Very important, and very complicated. You don't just walk into these data and start spewing out conclusions; there are just a few important conclusions that one can get by just looking at them. You need a succession of snapshots to construct trajectories, and, of course, the total components of the wind must be treated.

J. Miller: If I understand correctly, you have taken single Doppler radar at two different times and put those together?

Kessler: No, this is dual-Doppler data. When I started my talk I was supporting the comment that Dave Atlas made, that in certain situations single Doppler data are adequate. Under some circumstances, one Doppler radar tells interesting things about the two-dimensional field; however, this is dual-Doppler data.

Miller: All right, if it is dual-Doppler data, what kind of vertical air motion estimates do you have on this?

Kessler: I don't have those with me. The vertical motion comes by integrating the convergence represented in the fields. I don't have a slide with me to show.

DIRECT MICROSCALE TORNADO OBSERVATIONS AND FUTURE REQUIREMENTS

J. H. Golden

ABSTRACT

Methods of obtaining flow and thermal measurements on two similar vortices, waterspouts and tornadoes, are described. Quantitative results on the vortices are presented and compared. Both apparently exhibit a repetitive life cycle.

I intend to avoid the pitfall of making this a scientific paper. I hope to pique your interest and to stimulate you to think about new ideas, possible new approaches to measurement on the microscale and, in particular, measurements of the parameters of interest on the tornado scale. So far this morning we have been talking about perhaps the lowest order of scales that will be studied in SESAME, namely the scale of the tornado cyclone, on the order of a few kilometers.
What I shall discuss with you very briefly is some work that I've been doing over the last five years on both waterspouts in the Florida Keys and, since Spring 1972, tornadoes in Oklahoma. It turns out, from a detailed study of evidence gathered over the last few years, that these two vortices are really quite similar. In fact, I believe (and would be willing to discuss this or debate it with anyone) that tornadoes and waterspouts are qualitatively the same but differ in certain definable quantitative aspects. Some of these quantitative aspects we could elaborate at length at some later time (see NSSL Tech. Memo. #70), but the most important quantitative differences between these two vortices are, first, the scale and intensity of the two vortices; second, the structure of the two environments; and, third, the degree of organization and intensity of the two parent convective cloud systems spawning the two vortices.

Now with regard to waterspouts, and I'm not going to spend much time on this, there will be two companion papers coming out in the September 1974 issue of JAM which describe the essential details of my work on waterspouts. However, one important aspect of waterspouts which has an important relationship to our work on tornadoes is the concept of a recognizable, repetitive life cycle. A concentrated field program was conducted in 1969. During that one season, some 400 waterspouts were observed and documented in the Lower Florida Keys, of which 95 were studied in some detail from aircraft. Photogrammetric techniques were used to follow air motion tracers provided by the waterspouts themselves, and various types of artificial tracers were dispensed and tracked, but I won't go into details here.

The first stage of the waterspout life cycle is the dark spot stage, during which there may or may not be a visible condensation funnel, and in which the first feature that we noticed on the sea surface was a circular light disk of flat, non-disturbed sea surface surrounded by a much darker patch (Fig. 1). We found that very large numbers of these dark spots occur, and about half of them never have a visible funnel above. So the obvious question arose: do these dark spots in fact represent the downward termination of a complete vortex column? This question was answered by successful interception of a traveling dark spot by a marine smoke flare. I'll just refer to the inferred flow pattern in Fig. 1. There is, of course, a convergent, spiraling-inward flow pattern in the surface boundary layer above the sea surface, which then begins to rise in a helical fashion. As the inflowing air reaches the outer diameter of the inner light disk, it rises. This represents a convergent cylinder of smoke which we observed to form and evolve as the dark spot continued to move away from the flare. There is very strong evidence in both the smoke streaklines and from other tracer experiments that there is weak subsidence in the core of the vortex column in both stage #1 and beyond.

Fig. 1. Composite structural-flow model for the dark spot stage of the waterspout life cycle (sometimes a funnel appendage appears below cloud base; the vortex circulation itself is invisible). Characteristic range of scales is: H (cloud base) 550 to 670 m MSL, D (maximum funnel diameter) = 3 to 150 m, d = 3 to 45 m, and a = 15 to 760 m. See NOAA Tech. Memo. ERL NSSL-70 for illustrated examples.
The second stage is the spiral pattern stage, during which a pattern of alternating dark and light bands forms around the dark spot on the sea surface (Fig. 2). The funnel becomes longer now, and wider, with a dark spot at the epicenter of the spiral pattern. Various explanations have been proposed for these dark bands on the sea surface. I won't go into detail, but the evidence from both the smoke flare data and a careful study of the aircraft films is that these dark bands are formed by a local generation of capillary waves on the sea surface and interference with the waves produced by outflow from a neighboring shower.

The third stage then occurs as the first real indication of a complete intensifying vortex on the sea surface. During stage #3, a concentrated ring of spray develops in a very rapid, pulsating manner around the periphery of the dark spot (Fig. 3). The funnel is now continuing to lengthen and expand. The spiral pattern now begins to shrink or contract around this incipient spray vortex, and forward motion of the waterspout begins, usually at large angles away from the neighboring rain shower.

The mature stage of the waterspout is when the maximum degree of organization and maximum overall intensity occur (see Fig. 4). At that time there is an annulus of rising motion which reaches a maximum just exterior to the visible funnel, and there is rising motion in the funnel walls with weak subsidence in the hollow core. The vortex is asymmetric at times, and in fact we do have evidence in some large waterspouts (about 10% of the waterspouts each year reach tornadic size and intensity) of the "suction vortex" concept, in which there are one or more secondary intense eddies that rotate around the primary spray vortex of the waterspout. This I'll refer you to when we get to the results of the Union City tornado. We've deduced some of the waterspout's flow characteristics by tracking spray as it rotates around the spray vortex, and we find that these are very concentrated vortices. We have found in an anticyclonic waterspout (about 10% of the total seasonal waterspout population are anticyclonic, and a few of these exceed tornadic intensity) a maximum tangential windspeed of 88 m/sec in the spray vortex at a radius of just 10 meters out from the center.

Fig. 2. Composited schematic model for the spiral pattern (stage 2) in the waterspout life cycle. Vertical scale is contracted; H (cloud-base height) varies from 550 to 670 m MSL, and d_s varies from 150 to 920 m. Bold arrows in the spiral indicate that the major band evolves around the dark spot during this stage. See NOAA Tech. Memo. ERL NSSL-70 for illustrated examples.

Fig. 3. Composited schematic model for the spray ring (stage 3) of the waterspout life cycle. The vertical scale is contracted; H varies from 550 to 670 m MSL, and d_s from 45 to 260 m. See NOAA Tech. Memo. ERL NSSL-70 for illustrated examples.

Fig. 4. Composited schematic model of a mature waterspout (stage 4 in the life cycle). For scaling reference, the maximum funnel diameters in this stage, just below the "collar cloud," range from 3 to 140 m. See NOAA Tech. Memo. ERL NSSL-70 for illustrated examples.
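The tangential speeds quoted above come from frame-by-frame photogrammetric tracking of spray and debris on film. A minimal sketch of that arithmetic follows in Python; the frame interval, image-to-ground scaling, and tracked positions are all invented for illustration.

import math

def tangential_speed(p1, p2, center, frame_dt):
    # p1, p2: tracked target positions (meters, after image-to-ground
    # scaling) in two successive frames; center: vortex center estimate.
    r1 = math.hypot(p1[0] - center[0], p1[1] - center[1])
    r2 = math.hypot(p2[0] - center[0], p2[1] - center[1])
    a1 = math.atan2(p1[1] - center[1], p1[0] - center[0])
    a2 = math.atan2(p2[1] - center[1], p2[0] - center[0])
    # Angular displacement, unwrapped into (-pi, pi].
    dtheta = (a2 - a1 + math.pi) % (2.0 * math.pi) - math.pi
    r = 0.5 * (r1 + r2)
    return r * dtheta / frame_dt, r

# Hypothetical case: spray tracked 10 m from the center, moving about
# 55 degrees between frames assumed to be 0.167 s apart.
v_t, r = tangential_speed((10.0, 0.0), (5.76, 8.18), (0.0, 0.0), 0.167)
print(round(v_t, 1), "m/s at radius", round(r, 1), "m")

The hard parts in practice are the ones the sketch assumes away: locating the vortex center, establishing the image scale, and, as noted below for the Union City tornado, the fact that radial symmetry cannot always be assumed.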
Finally, stage #5 is the decay stage, in which the funnel becomes contorted, serpentine in some cases, and perturbations appear on the funnel surface. I believe that the decay of the waterspout is primarily due to its interception by a density surge line from the neighboring rain shower (Fig. 5). The density surge line intercepts the waterspout; it disrupts the radial-vertical circulation and it cuts off the inflow. In some cases we found a rotor cloud over the dome of rain-cooled air.

Fig. 5. Composite model of decay (stage 5) in the waterspout life cycle. Note that in some cases the density-surge line may be closely followed by the heavy rainshower. Also, the funnel cloud often undergoes rapid changes in shape and may become greatly contorted late in stage 5. For scaling purposes, maximum funnel diameters range from 3 to 105 m. See NOAA Tech. Memo. ERL NSSL-70 for illustrated examples.

I'm not going to show the Union City tornado film at this time. For those interested, we will run it after my talk. During the last three years in Oklahoma, we at NSSL have undertaken a project called the Tornado Intercept Project. During these three spring storm seasons, we and teams of University of Oklahoma meteorology students have observed, tracked on the ground, and photographically documented with the aid of two 16-mm NASA movie cameras at least 18 tornadoes. The largest of these was observed last year by our chase team as the tornado approached, passed devastatingly through, and finally decayed southeast of Union City, Oklahoma (26 miles west-northwest of our Laboratory). This is a very important tornadic thunderstorm because it was extensively observed an hour before the tornado formed and documented by our Tornado Intercept crew on the ground, two teams of University of Oklahoma students, one of the NSSL Doppler radars, and the meso-network of surface stations (see Weatherwise, April 1974, for the O.U. students' paper). We are completing a multifaceted study of this tornado and its parent storm, and our Intercept group has just completed the photogrammetric analysis of some of the NASA film, which will be shown this afternoon after the panel discussion. It runs about 15 minutes. What I'm trying to show you here is the anatomy and complete life cycle of a very large, devastating tornado. We believe that it's on the high end, maybe the upper 5 to 10%, of tornado sizes and intensities ever documented in Oklahoma. The damage from surveys early on the morning after the tornado, both from the air and on the ground, would correspond to an F5 on the Fujita scale. We tracked on the film dark aggregates of debris circulating around the base of the tornado. We also tracked cloud tags rotating around the upper periphery of the tornado, and we also will be tracking the perturbations that were moving up the funnel wall. Debris and peripheral cloud tags were tracked on the film both when the tornado was largest and approaching Union City and when it was much smaller southeast of Union City. The results on this tornado are quite important in that we think, for the first time, it represents a case in which a large, damaging tornado was observed from several different directions throughout its life cycle by a team of trained meteorologists and students. The Union City tornado evolved in a manner very similar to the derived waterspout life cycle, apart from scale differences.
We find that in the debris cloud the maximum horizontal windspeeds occur at elevations of about 90 to 100 meters above ground level, and these maximum values appear to be on the order of 80 meters per second. The isotachs appear to be elongated quasi-horizontally but tilt slightly upward as you get farther out, and there's an elongated isotach maximum in this area of 60 m/sec. (Note: figures were in preparation during and after the SESAME Meeting; these will appear in an upcoming, comprehensive NSSL Tech. Memo. on the Union City storm, early in 1975.) It appears that the debris is picked up off the ground. Unfortunately, we couldn't see the lowest 60 to 75 meters of the tornado because of intervening trees and railroad tracks. We infer that debris is picked up off the ground, obtains vertical motions on the order of 30 m/sec, and accelerates upward and outward (relative to the camera's principal line through the tornado center).

Another important feature, and this is the real "fly in the ointment": we could not assume radial symmetry for the debris circulation in this tornado, as Hoecker (1960) did for the Dallas tornado. We have an advantage over Hoecker in that during this portion of its lifetime, when the tornado was west of Union City, we know from the damage surveys that all of the debris we're tracking is in the form of dust and small vegetation. Therefore, what we are tracking on the film is aggregates of debris (scale 10 meters), and this tracking was repeated several times. There is no question that the debris cloud is strongly asymmetric. We also found that cloud tags around the upper periphery of the tornado have rotational wind speeds up to 45 m/sec. There are also indications, in a feeder cloud band which rotates around the tornado periphery and merges with the northeasterly portion of the circulation, that there is very intense upward motion here, of the order of 20 m/sec. The interesting thing is that the tornado was tilted toward the northeast; this tilt is consistent with the Doppler data. The cloud tag velocities are also consistent with the Doppler velocities in the lower portion of the storm cloud's tornado-cyclone circulation. An interesting observation is that the upper peripheral cloud tags were gradually sinking at about 5-10 m/sec and evaporating as they rotated around the southeast portion of the tornado circulation.

Now to conclude by suggesting some ideas that we have for future tornado observations. I don't mean to imply that these measurements are all that we should strive to obtain; I'm sure that many of you have new ideas that we could fruitfully apply. We will continue collaborative work on waterspouts using direct aircraft penetrations, now with my colleague from CSU, Dr. Peter Sinclair. Over the last three years, I have collaborated with a group from Purdue University that has been using a trailing-wire probe package towed through waterspouts, from which we believe some useful temperature data (Fig. 4) and a small amount of pressure data have been obtained. We now plan to attempt direct penetrations of waterspouts, starting with the dark spot and funnel cloud combinations, first with a highly-stressed CSU AT-6 aircraft this Fall. The Wave Propagation Lab, working in the tornado area, is developing a CO2 Doppler lidar which we hope by next summer can be used to take mobile measurements of the velocity profiles through the spray vortices of waterspouts.

Finally, here are some other ideas that I'll throw out. It seems to me, with the great ambiguities in such basic tornado parameters as wind speed and pressure, that the first thing we ought to try is the simplest possible approach; namely, to determine once and for all, for a suitable number of cases, the maximum wind speeds and the minimum surface pressures near the ground in tornadoes. Beyond that, perhaps we can try to probe tornadoes with remotely-piloted instrumented vehicles (and there are some currently available). We plan to use a more fully instrumented van, a mobile unit, to take continuous surface measurements of pressure, temperature, humidity, etc. in and around tornadoes. As we've already emphasized, tornadoes and waterspouts are very complicated vortices, both with regard to their wind fields and their thermodynamic structure.

DISCUSSION

Paine: I would like to make a comment on vapor vortices (intermediate between dust devils and waterspouts) as observed over the Great Lakes or the Finger Lakes region, where there is an excellent visual tracer; you could almost call it Arctic sea smoke. Zero-degree-Fahrenheit air moves across these warm lakes that are still unfrozen. Flying along the length of a 50-mile lake, you only see one or two full-scale vapor vortices joining up to the cumulus snow showers and snow squalls overhead, but from aircraft you see as many as 50 to 100 immature vortices that start up and lift vapor but never join up to the cloud. This suggests that, at least in some situations, this seemingly very efficient vortical supply of sensible and latent heat to the cumulus cloud base is a very prevalent phenomenon, much more prevalent than we normally see in the literature.

Golden: I think we're finding out, as Dave Atlas suggested, that conventional data for intense vortices and other types of severe storm observations are woefully inadequate. I'm afraid we're going to have to take matters into our own hands to a certain extent. I think we've got to invest time and money, to some extent, in educating the public so that we get better observers and storm spotters; I found that to be true in Oklahoma, certainly. We need to get out in the world and look at these things, and I'll put one pitch in here for SESAME: we've talked a lot about remote probing, and I'm all for it. I think, though, that our experience in Oklahoma has been that going out with a suitably equipped vehicle and having this interchange of ideas and information with the radar personnel, and especially the Doppler radar, has been a primary reason for our success in intercepting tornadoes on the ground. The key to it is not a paved road network, and not even an extensive road network, but having a selective exchange of real-time remote-sensor and visual information. We use this information to pick the storm, to "home in" on the storm that has the greatest potential for becoming tornadic. Now it hasn't all been roses; we've had some misses. But to give you an example, in Spring 1972 I think we had 13 missions; of those 13 we tracked 12 severe storms and we observed at least 3 or 4 tornadoes. So one doesn't try to go after every storm, but I hope that in the SESAME program as well there will be teams of trained observers with cameras and other sensors out in the field, documenting the visual characteristics of these storms, even if they're not tornadic.
We found, for one example, both in the Keys cloud lines producing waterspouts and in Oklahoma clouds producing tornadoes (and even in some that don't produce tornadoes), that there are some very interesting circulation patterns. Some of these are rotational, closed circulations, and some of them don't produce tornadoes; so the waterspout problem is very complex and the tornado problem is very complex. There are some tornadoes that have multiple "suction vortices"; there are some large tornadoes, such as Union City, in which we have been able to find no evidence, either in the damage or in the complete photographic records, of multiple suction vortices. The Union City tornado appears to be one large suction vortex.

A Survey of the April 3-4, 1974 Tornado Outbreak
E. Pearl

The April 3rd tornado outbreak is an ideal case study of tornadoes from all aspects. Research has been and will continue to be performed upon voluminous amounts of data. Data have been obtained from satellites, radar, aerial and ground surveys, and interviews. Most of the research emanating from the Satellite and Mesometeorology Research Project will be confined to specific cases. In general, the April 3rd outbreak demonstrated a variety of characteristics, both expected and unexpected. Many of the tornadoes occurred in series. Each series had a particular trait, such as a right-, left-, or no-turn at the end of each tornado track in the series. In addition, evidence was found to substantiate the rotation of two rotating thunderstorms around each other. Also, tornadoes moved through the rugged terrain of the Appalachian Mountains with little deviation or change in intensity. Research must continue in these areas, and evidence is definitely needed to determine a unique signature of these tornado-producing thunderstorms.

We surveyed about 93 of the tornadoes of April 3 by air and ground and also made personal interviews. We've studied that case utilizing radar, satellite and ground surveys. We also studied the Muncie tornado, of which you will see a movie later. It's a slightly different film than the one you saw earlier, since it contains every tenth frame, time-lapsed. And we also checked with Wally Hubbard, who lived at home with his mother, and we even checked Mother Hubbard's cupboards.

Figure 1 shows all the tornadoes we surveyed and also some that we didn't survey but have verification on. There are 127 tornadoes now that we have verified for April 3 and 4. You'll notice that many of these tornadoes occur in series, although it might not be too obvious. For instance, southern Indiana had a series of about six tornadoes over a period of about 2 hours. This is a very interesting phenomenon to study in the future, and we're doing a study on this right now.

Figure 1: Superoutbreak tornadoes of April 3-4, 1974 (preliminary edition, September 1974).

Also, many of these tornadoes have left turns; you may notice here that the whole series has a left turn to it. Other groups of tornadoes had right turns. These are aerial surveys done by Ted Fujita and myself, John McCarthy from the University of Oklahoma, Les Lemon and Joe Golden from NSSL, and Ebbeson, Struzziery, and Shanahan from Stone and Webster Engineering Corp. Most of these that are in series are separate touchdowns, but they may be from the same tornado cyclone or the same thunderstorm. Figure 1 also includes the cities, the terrain, and the river pattern over the area of the tornadoes. You'll find that the tornadoes were not located or centered around large cities.
They're scattered throughout the area, so there isn't too much bias as far as population centers are concerned. We did a very thorough job of flying over almost every square inch of territory and surveyed over 93 tornadoes; we didn't survey all 127. We found that, if we were to survey all 127, many of these were F0 or very weak; only branches or small trees were down, making it very difficult to see that from the air. There has been some talk in the past about tornadoes following rivers or river valleys, but we found very little correlation with that, and you'll see this later on a finer scale. You see that they go right over into the Appalachian mountain area. We have some in the Blue Ridge area up to about 4,000 feet in elevation. The tornadoes are scattered throughout the entire area, with very interesting tornadoes here in West Virginia and Virginia and in the Carolinas. We found that they would go up over ridges and down through valleys with no change in direction. They would continue in a more or less straight-line mode, and that was quite interesting; in other words, they did not follow terrain.

The tornadoes in southern Indiana, which I personally surveyed, numbers 40, 41, 42, 43, and, into Ohio, 44 and 45, are all from one tornado-producing cell. What's very interesting about this is that there were many hook echoes that did not contain tornadoes. We surveyed the areas where some of these hook echoes passed by; we interviewed a few of the people and found that there was a roaring sound in some of the areas but no tornado touchdown. Also, some of these tornadoes did not occur near the hook, or even 2 or 3 miles from the hook, but rather inside the main thunderstorm cell. This was rather interesting, a little bit different from what we had expected. We expected that maybe they would not be in the center of the hook but only 2 or 3 miles from it; we found them 5 or 10 miles away. Next slide, please.

We did a very detailed analysis concerning the time of occurrence, talking to people, interviewing, reading newspapers and doing our own personal survey, because many of these, although they may look like they're in series, are from separate tornadoes. We also studied two thunderstorms rotating around one another, both producing tornadoes, so that one series of tornadoes would cross the tracks of another series of tornadoes from the two tornado cyclones rotating about each other. Although some of the tornadoes in the Blue Ridge area may look like they come close to river valleys, they do not follow the valley area or the terrain; rather, they freely cross right over hills and valleys. The small numbers along the tracks are the Fujita or F-scale values representing the intensity of the tornadoes along each track. As we followed the tracks we determined the damage; we have photographic evidence for every position indicated by the F-scale numbers. We've also done ground surveys on many of these where we had doubts. Although not necessarily the strongest tornado, the most talked-about tornado was the Xenia tornado. Those in southern Indiana, in my estimation, were much stronger than, or at least as strong as, the Xenia tornado. The difference was that they didn't pass through cities, which I think is a large factor in the F-scale verification. If we have tornadoes in Oklahoma or Kansas, they may pass through farmland and maybe only run into one or two farmhouses and, of course, the F-scale rating may not be too high.
Therefore, there may be tornadoes in these areas of F4 and F5 classification, but we haven't been able to verify this.

The movie is of the Parker City tornado, which you saw earlier. (Ed. note: Figs. 2 and 3 show frames taken from the film.) Here the tornado is moving over Monroe High School, where we see well-defined multiple vortices. You notice the cars passing by. This storm was very well documented, not only from the movie standpoint but also by the use of still pictures, and we have quite a clear series of pictures from various angles. As for multiple suction-vortex tornadoes, there's no doubt now that they exist, I think. They come in various forms. People in Kentucky during this same day reported seeing what they called fingers reaching down from the sky. Other people said that they saw multiple vortices on the ground; they didn't see multiple vortices in the air like you see here. This part allows you to look at one part of the storm and analyze each particular element as we go along. In the beginning it just looks like one great big mass, but there are, if you look very carefully, separate vortices. We're analyzing the motion of each element of the storm from movies much like this. You can see one funnel passing around the rear of the storm, with two others on the forward side. What's very interesting in this part of the film is the velocity in this region of the tornado. It appears almost as an outflow circulation at a higher altitude; a very rapid movement in that upper part.

We're analyzing April 3 from the movies and also from still pictures that we've obtained from various government affiliations and also private survey companies. We are also utilizing our satellite pictures and radar, and combining all of these into, hopefully, a very unique study to be presented in a series of reports from our project.

Darkow - Six months or so ago, Ted was quite enthusiastic about the possibility of large metropolitan areas tending to break up, or at least prevent the formation of, weaker vortices. On the basis of what he saw on April 3 and 4, does he still feel that roughness is an important factor in occurrences and tracks?

Pearl - I guess I'm supposed to be a spokesman for Ted. We've changed our policy to some extent. The terrain was obviously very ineffective, and Xenia was not enough to destroy that tornado, nor were some of the other city areas that these storms passed through. Some of the theory was based on the heat-island effect, and I'm a little bit negative in that respect; in that regard I'm not speaking for Ted.

Darkow - Unfortunately, some of those comments are just now reaching the general public, however. Some of the people in large metropolitan areas are resting too easily because of them.

A. Dennis - I'd like to say that there was quite a strong tornado in the Black Hills which occurred in 1966, and I was amazed at the fact that it passed over a ridge and dropped into the valley, causing damage right down to the bottom of the valley, and then went over the next ridge and so on. And of course your results verify this on a much larger scale.

Pearl - The funnel actually may intensify as it drops down into the valley area and then weaken as it passes back over the ridge.

A Photogrammetric Study of the Parker, Indiana Tornado
Christopher R. Church
Purdue University

Abstract

During the summer of 1974 a photogrammetric study was made of the Parker tornado, using the 16mm motion picture film taken by Wally Hubbard of WISH, TV 8, Indianapolis.
This movie provided the first conclusive documentary evidence of multiple (suction) vortices occurring within a parent vortex system. At one instant as many as four suction vortices could be detected, showing behavior similar to the laboratory vortices studied by Ward at NSSL. A physical description of the life cycle of the suction vortices was made, wherein vortices were observed to a) generate and accelerate on the left flank, b) achieve maximum intensity at the rear, c) weaken and decelerate on the right flank, and d) dissipate on the leading edge (with respect to the parent vortex). Velocity vectors identified in the multiple vortex system are 1) the translational velocity of the parent vortex, 2) the tangential velocity of the suction vortex with respect to the center of the parent vortex, 3) the tangential velocity of the parent vortex, and 4) the tangential velocity of the suction vortex with respect to its own center. Estimates of 50 m/s for the maximum tangential velocity of a suction vortex about the parent-vortex center and 25 m/s for the translational velocity of the parent vortex gave maximum wind speeds relative to the ground of at least 75 m/s.

Presentation

The study that was done this summer by four of us at Purdue University is based on some unique and spectacular movie footage that was taken by Wally Hubbard, an Indianapolis television news correspondent. From very close range he took about 80 feet of footage of a multiple-vortex-structured tornado which occurred on April 3 in the northeastern part of Indiana near the town of Parker. Figure 1 is a map showing the location of the tornado track. The tornado travelled towards 30 degrees east of north at a translational speed of about 55 knots. The black section is the actual part of the track that was photographed by Mr. Hubbard, who was standing at Point A, which was estimated to be about 1.2 miles from the tornado.

Figure 1: Schematic of tornado track in Randolph County, Indiana, on April 3, 1974 (scale in statute miles).

Point B is the location of the Davis-Purdue Agriculture Center, which is equipped with meteorological instruments. The tornado passed within a mile of this station. Figure 2 shows the barograph trace for April 3 at that station. The perturbation associated with the passage of the tornado occupies a very small time interval of the complete pressure record. A pressure anomaly of about 5 mb was observed. With regard to the other measured parameters, no particularly striking features were observed on the records of wind, temperature or relative humidity.

Figure 2: Barograph trace at the Davis-Purdue Agriculture Center, near Farmland, Indiana, for 2-4 April 1974, with the tornado passage marked.

The movie is about 80 feet long, in four continuous segments. The first scene was taken looking towards the southwest and shows the strong cyclonic rotation and upward motion on the funnel walls. In this segment there is no visual evidence of a multiple vortex structure. The second segment shows that the funnel has broken up into a series of vortices.
With the given lighting conditions it is possible not only to see the suction vortices coming round on the near side, but to look right through the funnel cloud and see others forming and intensifying on the far side. Four suction vortices can be identified here. At the time of crossing Highway 32, the system was somewhat reduced in intensity. In the third segment just two large suction vortices can be identified, associated with what appear to be clouds of dust and debris. The fourth segment does not show suction vortices, but gives some general indication of rotation in the cloud.

Figure 3 summarizes the positions of the individual suction vortices at different times during the second segment of the film. Vortices form, accelerate and intensify on the left or far side of the track. Individual funnel diameters were estimated as 70-100 feet on the forming side, increasing to 200 ft at the rear of the parent vortex. Suction vortices coming round on the near side were observed to weaken and dissipate as they moved to the front of the parent vortex. Figure 4 identifies the velocity vectors which are significant. Two vectors are associated with the motion of the parent vortex, and two with the motion of the suction vortex. The former are the translational speed of the parent vortex and its tangential speed with respect to its center; the latter are the tangential velocity of the suction vortex with respect to the center of the parent vortex and with respect to its own center.

Figure 3: Positions of individual suction vortices at different times during the second segment of the movie.

Figure 4: Vector depiction and hypothetical streamline flow for a given suction vortex in a multiple vortex system.

The observations showed that these vectors combined to give maximum effect in the right rear quadrant of the parent vortex. The maximum tangential velocity of a suction vortex about the center of the parent vortex was estimated as 50 m/s. Combining this with the translational speed of 25 m/s gave maximum wind speeds relative to the ground of 75 m/s. Since it was not possible to determine the tangential velocity of the suction vortex about its own center, this was a low estimate.

The unique feature of this movie is that it provides the first documentary evidence of the vortex splitting phenomenon, which can be inferred to be a characteristic feature of the more destructive tornadoes. On the basis of such an empirical examination it is possible to account for some of the damage patterns produced by tornadoes. Observations such as these, and subsequent analysis of them, show considerable promise for an increased understanding of tornado and storm dynamics. It is hoped that during the SESAME program there will be some concerted efforts directed at detecting and documenting tornado events in the field. Such documentation, together with the data from a closely spaced network of sensors, could result in an increased understanding of several aspects of tornadic storms, such as the dynamic structure of the tornado vortex, the dynamic structure of the tornado mesocyclone, and tornado triggering mechanisms, and could provide new means of predicting the occurrence of tornadoes and their tracks with greater precision than is presently possible.
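(Ed. note: The 75 m/s estimate above is the straight superposition of the two measured motions in the right rear quadrant, where they align. In editorial shorthand, with V_tr the translational speed of the parent vortex and v_p the tangential speed of a suction vortex about the parent-vortex center,

    V_{\max} \ge v_p + V_{tr} = 50 + 25 = 75\ \mathrm{m\,s^{-1}},

a lower bound, since the unmeasured spin of each suction vortex about its own center would add further.)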
A DUAL-DOPPLER RADAR METHOD FOR THE DETERMINATION OF WIND VELOCITIES WITHIN LOCAL SEVERE STORM SYSTEMS

By L. J. Miller
NOAA/ERL/Wave Propagation Laboratory
Boulder, Colorado 80302

ABSTRACT

A method of coordinated scanning which makes use of two Doppler radars, and a procedure of data reduction, are described. The two radar estimates of the radial velocity of the precipitation particles are combined with the equation of mass continuity and an independent estimate of the mean particle fall velocity. This results in a complete solution for the three components of vector air motion. Examples of air motion fields which illustrate the effectiveness of this new technique in revealing the detailed structure of the winds are shown. These data were abstracted from our studies of thunderstorms during the 1973 summer field program of the National Hail Research Experiment.

J. Miller

I want to quickly go through a technique that we've developed at the Wave Propagation Lab whereby we use dual Doppler radars in a slightly different fashion than is conventional. It is our so-called co-plane scanning; if I could have the first slide. What we do is modify the elevation scan such that the beams are constrained to a plane passing through the two radar positions. What this does for us is acquire the dual Doppler data in a cylindrical coordinate system in which we can systematically analyze the data. I'll present data on two storms that occurred in July 1973 during the NHRE field experiments. Also, I'll show one very new storm that occurred in 1973 which we've just now gotten to in the analysis. I think this latter storm is more pertinent to this meeting than the others; however, I only have a little bit of information on it.

In the co-plane method the first benefit is one that's available with all Doppler radars: it's the only means available to map the detailed air motion inside storms. In our particular method we believe that we're mapping fairly unambiguously the three-dimensional air motion inside the storms. In our 1974 NHRE program we had three Doppler radars, so we'll confirm or reject this notion, hopefully confirm it. The second benefit is that, compared to a conventional scan, the co-plane is a more systematic and natural means of acquiring and reducing dual Doppler measurements. One thing that is very important is that we don't have to assume time stationarity over a period of more than about 10-20 seconds, which is the scan time for a single plane. We can take a short-duration scan of a single plane and then repeat this several minutes later and determine from the changes in the pattern whether or not we do indeed have time stationarity; we don't have to assume it first. Another offshoot is that the interpolation schemes to reorganize the data in the plane are all two-dimensional, rather than three-dimensional as you would require in the conventional elevation scan. Finally, it has the potential of circumventing the need for more than two Doppler radars when the storms don't violate the assumptions necessary for the co-plane scan.

Some of the limitations have also been alluded to previously by John McCarthy: in the clear air we're unable to trace the air motion, but if chaff is injected into these areas then we can trace the air motion there also. I might add that, at 3 cm, you need something on the order of about 1,000 times as many needles as you do at 10 cm in order to get the same detectable target. Another limitation is that the radar easily provides more information than can readily be assimilated.
This forces you into more sophisticated and rapid data acquisition schemes. Presently, off-line large-computer data reduction and display of the three-dimensional air motions is required. With Serafin's new on-line color display of the Doppler velocities, we can hopefully use these same kinds of techniques in the future to display the three-dimensional air motion.

Now the assumptions necessary for the dual Doppler data reduction are: that the precipitation particles trace the mean air motion; that the reflectivity-weighted mean of the Doppler spectrum represents the sum of the radial components of mean air motion and mean particle fall velocity; that the particle fall speeds can be estimated from measured radar reflectivity with sufficient accuracy (this is a terminal velocity-radar reflectivity relationship such as the Joss-Waldvogel or others); that linear interpolations of measured values to common grid points are valid, since the measurements are not obtained at common grid points; that the mass continuity equation for an incompressible atmosphere is applicable at the scale sizes and heights observed by the radars; and that the field quantities are stationary for the scan time of a series of planes. This last assumption is evaluated by taking co-plane scans at two different times and determining whether or not the patterns have changed significantly. Stationarity is required in the integration of the mass continuity equation, and this is typically over perhaps 2 to 3 minutes at most. One further assumption, zero air motion at the 0° co-plane angle, is not that important; but it is if you're working at large distances, where your zero-elevation scan is above the horizon, so you do have problems with the lower boundary condition in the mass continuity equation. So we do assume that the earth is flat for the ranges that are covered, which is typically under 50 km. If I could have the next slide.

(Ed. note: Slides 1-6 showed PPI reflectivity scans at 140734, 142219, 143630, and 150000 (contours 20-50 dBZ, distances in kilometers), the area of coverage for co-plane scanning with the variances of the velocity estimates relative to the two radars, and ground-level and 4500-m AGL wind analyses.)

Now in the error analysis, with dual Doppler, you are constrained to a region where you get reasonable estimates. This area, with the two radars located at half the separation distance from the origin of the coordinate system, is a region on both sides of the baseline where you get estimates that will typically run, for the horizontal velocities, around 0.2 to perhaps 0.5 m sec⁻¹. For the vertical air motion estimates the uncertainty is perhaps 0.5-1.0 m sec⁻¹. Could I have the next slide, please.

These are 10-cm PPI scans from the NCAR radar located at Grover. This is the storm of 31 July. We were scanning this new growth here that was on the right flank of this northern cell, which had reached about 45 dBZ and was slowly advecting toward the south at about 3 m/sec. There was rapid propagation of this new cell in the downshear direction. The upper level winds were predominantly out of the northwest. We scanned this cell for nearly an hour. The region that is marked as an unbounded weak echo region is the location of what I believe to be the main updraft feeding the back cell as well as this front cell. There is no indication of any significant updraft being regenerated on the leading edge of the new growth. These trajectories are horizontal trajectories of chaff released by the University of Wyoming aircraft, and they all indicated updrafts. Could I have the next slide, please.

(Ed. note: Slides 7 and 8 showed the 4500-m AGL analysis and the co-plane particle velocities in m/sec.)

This is a ground-level composite of the 3-cm radar reflectivity and regions of low-level convergence, with a peak value of about 10⁻³ sec⁻¹. The region of divergence in here had a peak value of about 8 × 10⁻³ sec⁻¹, I believe. The streamlines drawn here are relative to the ground, since most of the echo motion is due to propagation only; if there is any advection, it's perhaps 3 m/sec. Now what you do see is curvature into the updraft on the backside, with the mean subcloud environmental air flow mostly from the west. Next slide, please.

This is at 4500 meters AGL, about 3 km above cloud base, with this region of updraft where we have peak values of 9 m/sec and 8 m/sec. In this region updrafts of 2 m/sec exist, which may be an attempt to generate a new updraft on the right flank of this cell. Downdrafts exist through this region, of about -5 m/sec down to nearly zero. From the previous slides, remember, the weak echo region in the other cell was up here, so this updraft region probably extends on out and we're only seeing a portion of it. Oh, these are 3-cm reflectivity contours, with 35 dBZ located in here and 35 dBZ out here: two high reflectivity cores with the updraft predominantly in between them. Next slide, please.

Now this is the two-dimensional horizontal air motion at this same level, 4500 m AGL. The mean cloud-layer environmental winds were out of the northwest. If we subtract the ambient wind at this level (if I could have the next slide), we see a cyclonic and anticyclonic vortex pair located more or less downshear from this one high reflectivity region. The confluence axis is oriented fairly close to paralleling the mean environmental flow through the cloud depth. Could I have the next slide, please.

This slide is one that appeared in the last Bulletin of the AMS. It is a co-plane slice, from right to left extending from 4500 meters AGL to 6000 meters AGL. This edge here will be at the same level as the previous horizontal slice. It's at an eight-degree tilt, which indicates that this vortex is tilted slightly off the horizontal, at about eight degrees, in my opinion. Could I have the next slide, please.

These are two vertical slices at about eight minutes after the previous data. At this time the cell had grown slightly larger. The axis is oriented left to right along the mean cloud-layer wind. This indentation here is, in my opinion, where the updraft air is entering the storm and is being carried up and then out the anvil. This slice is the vertical slice oriented along the cloud-layer winds; in a second I'll show another slide that is oriented in the direction normal to the cloud-layer winds. As you can see, the main updraft, or at least the edge of the main updraft, is located here with a tilt of perhaps 10 to 15 degrees in the downshear direction. There is a closed circulation out ahead in this case. I think at this point there are enough precipitation particles ahead of the main updraft, and beneath the anvil or slightly into the cloud area, that we're mapping out a closed circulation in the leading edge.
The region that is marked as an unbounded weak echo region is the location of what I believe to be the main updraft feeding the back cell as well as this front cell. There is no indication of any significant updraft being regenerated on the leading edge of the new growth. These trajectories are horizontal trajectories of chaff released by the University of Wyoming aircraft and they all indicated updrafts. Could I have the next slide, please. This is a ground level composite of the 3 cm radar reflectivity and regions of low level convergence with a the peak value of 10"^ sec"^. The region of divergence in here had a peak value of about 8 x 10"^ sec'^, I believe. The streamlines drawn here are relative to the ground since most of the echo motion is due to propagation only. If there is any advection, it's perhaps 3 m/sec. Now what you do see is curvature into the updraft on the backside with the mean subcloud environmental air flow mostly from the west. Next slide, please. This is at 4500 meters AGL. This is about 3 km above cloud base with this region of updraft where we have peak values of 9 m/sec and 8 m/sec. In this region updrafts of 2 m/sec exist which may be an attempt to gener- ate a new updraft on the right flank of this cell. Down- drafts exist through this, region of about -5 down to nearly zero. From the previous slides remember the weak echo region in the other cell was up here. So this updraft region probably extends on out and we're only seeing a portion of it. Oh, these are 3 cm reflectivity contours with 35 DBZ located in here and 35 DBZ out here, two high 435 reflectivity cores with the updraft predominantly in between them. Next slide please. Now this is the two dimensional horizontal air motion at this same level, 4500 m AGL. The mean cloud layer environmental winds were out of the north- west. If we subtract the ambient wind at this level (if I could have the next slide) see a cyclonic and anti -cycl onic vortex pair located more or less downshear from this one high reflectivity region. The confluence axis is oriented fairly close to paralleling the mean environmental flow through the cloud depth. Could I have the next slide, please. This slide is one that appeared in the last bulle- tin of the AMS. It is a co-plane slice, from the right to left extending from 4500 meters AGL to 6000 meters AGL. This edge here will be at the same level as the previous horizontal slice. It's at an eight degree tilt which indi- cates that this vortex is tilted slightly off the horizontal at about eight degrees, in my opinion. Could I have the next slide, please. These are two vertical slices at about eight minutes after the previous data. At this time the cell had grown slightly larger. The axis is oriented left to right along the mean cloud layer wind. This indentation here is, in my opinion, where the updraft air is entering the storm and is being carried up and then out the anvil. This slice is the one that's a vertical slice oriented along the cloud layer winds. In a second I'll show another slide that is oriented in the direction normal to the cloud layer winds. As you can see, the main updraft or at least the edge of the main updraft is located here with a tilt of perhaps 10 to 15 degrees in the downshear direction. There is a closed circulation out ahead in this case. I think at this point there are enough precipitation particles ahead of the main updraft and beneath the anvil or slightly into the cloud area where we're mapping out a closed circulation in 436 •«.->Ct« 1!1--!C. 
.-.■ !■ • ! : I?::' sj::- liV:- 5:::- «::■ 15::- !;::- 25::- n:t- t::E- «::• .l I IT i:»lllEjfl<«l«iii»ti iNLl I 1 :H\»ifio" * * 1 »>-< I I "I > "^"S^ > S ,» :' 1 *J \l • « I A J i 1 ^ 1 1 n I ) I I Pil I < 1 I I *. * « ' 1 1 II I I 1 T -i 1 1 I » « S I I I * *v' I w 1 1 > 1 ) i») I i thr I I V * * > 1 I I } 1 1 j » j J/i .iiifi \ I I i i m1 ' I «»>jJ-« 1X^4 iiii > I I # I I <( < fl I n I T i>\i-T » til n ^ ^ ;i > J Ml g I II I ]yri'i i j ■ JM^W 1 f I* f I t V V V \ ^ I t 1 » I I /*5//> iNJtii 4 4 < I ^ 1 H II M >o i*1^> Z. Kcu';;-5f»' HSk Slide 15 X-'CMS'.-'Q" ;"-'jc.: mX- **-i-«-f I'l 1 '" "' '"i ' \\\- J-O 1 1 7-» TSi \ It 1 1 * llJ-r kills 1;;: '^^i^ i ^^■i-i'iirlv I J ■ I ■ »l I 1 J 1 I 1 T -I ! ! I )]>»»< t < I 1 I I 1 I S > > 4 '^T' ' s ■* ■♦■•^ till ) T'O 1 \ ' f*Ssj '** I I I i''l-j I I itt It 1 I I H. J V <«ijtT-»ilill\lllhl« ! I l.-l - ^ I ! » 1 1 IM I 1/ 1 -I J V • * > \ t 1 l\l t H I I VI t 4 I \\\ I\J I^^-^J I\J ^ I J J 1 I ■« .♦ 4 .♦ .• '.'. .- .'. .-. .'; :'. .*. > >A y^^^ : J J V "- Jj ~ » 5 ^ « fj • V ^ g ^ J J» Slide 16 438 the leading edge. This has not been obscured in the more severe or super-cell storms where you have a very strong updraft. There is apparently no closed circulation locally for these storms, with the middle level air intruding from the backside and being carried down, mixed, and carried out the backside of the storm. If I could have the next slide. This is the one taken in the other direction. It's fairly narrow across so there's only a limited amount of data. It's not too difficult to extrapolate the streamlines down to the surface and have the air entering the backside from this direction and moving toward the bottom here. A vortex is being generated here; this leg over here is missing data. I'm reasonably confident that this vortex does close and it's horizontal. The northeast is on the right and it is a back feeder, predominantly. Could I have the next slide, please. The other data which I'm going to present is from the 9th of July which is a storm that has been extensively worked up by other NHRE participants. We were scanning a portion of the larger cell, this region here. These two small cells out ahead later developed into the hail stage and these are the ones that will appear in the literature in the cloud physics conference, I guess. Could I have the next slide. This is a series of horizontal slices at 0, 1,000, and 2,000 meters AGL of the flow relative to the storm after subtraction of the echo motion. The highest reflectivity regions are located here and here; updraft in this region here; downdraft is coincident with the high reflectivity region, and updraft is over in this area. This updraft is, I believe, forced by the outflowing cold air from these two high reflectivity regions and really isn't the main updraft. Could I have the next slide. In the previous slide there were two lines along the inflow direc- tion and normal to it. This is one of those cuts along the 439 inflow direction. It is shown here with a schematic model of the severe storm. You do notice that this is only at 3 km deep storm whereas the height of the severe storm is about 12 km. The intent is only to show that the same kinds of circulations probably exist in many of these cells. This kind of circulation may be much more universal than has been anticipated. If I could have the next slide. This cut is normal to the previous one with a rain induced downdraft in this region here with a closed vortex. Could I have the next one please - skip this go on, the next one. 
Now this one is a storm that occurred on 28 July 1973, which has a more complete data set than any that we have had so far. We've only recently begun the analysis, and we believe that this storm is much closer to the so-called super-cell. This is a cut that's about 30 degrees off from the inflow direction. We aren't mapping the low-level inflow here, but we certainly have the reflectivity overhang, with the updraft air entering and the middle-level air intruding into the backside in the downdraft. This is probably a portion of the gust front out ahead of the low-level updraft; there is a gust front on the backside also. And if I could have the next slide, to boggle the modelers' minds: clearly the flow is three-dimensional and complex. I don't believe that your present models are able to cope with something like this. As for the modelers' questions raised yesterday about scales of motion, I believe that we can answer them now. The kind of resolution that you can get, depending on the range, is perhaps 300 meters. It's a simple matter to use a narrower beam and resolve the motions at even smaller scales. The technology is there to handle all of the data. Thank you.

Lilly - As a comment on this fascinating work, I would recommend that anybody who can grab hold of Jay or Earl Gossard or Shelby Frisch, sometime after the program this afternoon or any other time, go down and look at their etchings. They will show you a whole wall full of different cuts in different directions of this last storm, a whole wall full, from one 2-minute scan. Presumably another wall full can be obtained for the next 2 minutes. It's impressive and a bit overwhelming.
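(Ed. note: The heart of the dual-Doppler reduction Miller describes is that each radar measures only the component of particle motion along its own beam; two such components, plus the mass continuity equation and a reflectivity-based fall-speed estimate, recover the full vector wind, the vertical component coming from the vertical integration of continuity, which is why the lower boundary condition mentioned above matters. The sketch below illustrates only the horizontal part of that solution at a single grid point; the function and numbers are hypothetical and do not reproduce the WPL processing chain.)

    import math

    def horizontal_wind(vr1, az1_deg, vr2, az2_deg):
        """Solve for the horizontal wind (u, v) from two radial velocities.

        vr1, vr2: radial velocities (m/s) from radars 1 and 2 at a common
        grid point; az1_deg, az2_deg: beam azimuths (degrees from north).
        Low elevation angles are assumed, so the particle fall speed
        contributes negligibly here; at higher angles it must first be
        removed using a fall speed estimated from reflectivity.

        Each measurement satisfies vr = u*sin(az) + v*cos(az), giving a
        2x2 linear system in (u, v)."""
        a1, a2 = math.radians(az1_deg), math.radians(az2_deg)
        det = math.sin(a1) * math.cos(a2) - math.sin(a2) * math.cos(a1)
        if abs(det) < 1e-3:
            raise ValueError("beams nearly parallel: solution unreliable")
        u = (vr1 * math.cos(a2) - vr2 * math.cos(a1)) / det
        v = (math.sin(a1) * vr2 - math.sin(a2) * vr1) / det
        return u, v

    # Hypothetical example with the beams crossing at a right angle:
    u, v = horizontal_wind(12.0, 45.0, -4.0, 135.0)
    print(f"u = {u:.1f} m/s, v = {v:.1f} m/s")   # u = 5.7, v = 11.3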
The specific method of obtaining 1" by 1" "boxes" covering 60" in azimuth and 30° in elevation has been developed in the laboratory. Tests indicate impulse signals can be partitioned into the 1800 directional elements with an accuracy of ±0.5** and at impulse rates up to about two million per second. Data processing and recording will limit the number of impulses that can actually be utilized from the yery high rate capability of this technique. During the next few months we will complete a working prototype instrument for an azimuth-elevation display. This observing technique will be tested in the field during the summer of 1975, and will be used in subsequent years as a tool in our continuing study of lightning discharge characteristics. Two observing sites separated by 75 km or so will be used to determine the space-time history of individual lightning discharges. We are planning to participate in the SESAME program. Tentative site locations for a two station network are shown in the accompanying figure. With our expected observational coverage, together with the areas covered by WPL 3-cm Doppler radar and NSSL 10-cm Doppler radar. Data analysis will make use of Doppler radar as well as weather radar observations to show space-time relationshops between lightning discharges and the location of internal storm wind fields, turbulence and regions of precipitation. The detailed measurements of electromagnetic signal characteristics and their time-space relationships will go far in helping us to answer some of the questions converning what mechanisms are active in the electrification processes, and what role electricity or the discharge •process itself plays in the formation of precipitation and in the development of severe storm manifestations. 444 SESAME Mesoscale yS-array mwm WPL 3-cm Doppler NSSL lO-cm Doppler 100 km Range for EM. Detection Areas for 3-Dinnensional Mapping of E.M. Si; j\ Sources Tentative Site Locntion Observational Coverage of V\/PL E.M. System for Observing Space -Tinne History of Lightning Discharge Processes Slide 1 445 Predicting the Movement of Severe Convective Storms David J. Raymond New.Mexico Institute of Mining and Technology Socorro , New Mexico ABSTRACT A model for predicting the movement of severe convective storms is briefly described. Such storms are hypothesized to be internal gravity waves, forced by convection produced in the waves' own regions of low level convergence. Some results of the model are illustrated. 446 I heard of a fellow who was working on a very complex numerical model of , solar dynamics. The model was predicting observed solar behavior fairly well. However, his comment was^,the sion knows what the sun is doing, the computer knows what the sun is doing, but I still don't know what the sun is doing I Perhaps the reason was that he did not have any simple physical conception of what was going on, and therefore lacked the means to interpret his computer model. I 'm a little worried that we may be getting into the same situation in some of our mesoscale models. If we look at the case of synoptic meteorology, we see a different approach. The first dynamical models of synoptic and planetary scale disturbances were the linear analyses of Rossby, Charney, Eady and so on. We well know that these models are oversimplified, and I'm sure they knew it at the time. 
Predicting the Movement of Severe Convective Storms
David J. Raymond
New Mexico Institute of Mining and Technology
Socorro, New Mexico

ABSTRACT

A model for predicting the movement of severe convective storms is briefly described. Such storms are hypothesized to be internal gravity waves, forced by convection produced in the waves' own regions of low level convergence. Some results of the model are illustrated.

I heard of a fellow who was working on a very complex numerical model of solar dynamics. The model was predicting observed solar behavior fairly well. However, his comment was, "The sun knows what the sun is doing, the computer knows what the sun is doing, but I still don't know what the sun is doing!" Perhaps the reason was that he did not have any simple physical conception of what was going on, and therefore lacked the means to interpret his computer model. I'm a little worried that we may be getting into the same situation in some of our mesoscale models.

If we look at the case of synoptic meteorology, we see a different approach. The first dynamical models of synoptic and planetary scale disturbances were the linear analyses of Rossby, Charney, Eady and so on. We well know that these models are oversimplified, and I'm sure they knew it at the time. However, they provide the conceptual framework whereby we speak of baroclinic instabilities, Rossby waves, and so on, occurring in our more complex numerical models of planetary and synoptic scale disturbances.

What can be done on the mesoscale? Rossby and his colleagues looked at the natural or normal modes of the atmosphere on the synoptic and planetary scales. What are the normal modes of the atmosphere on the scale appropriate to the severe storm? The answer is internal gravity waves. We therefore conceivably might be able to construct a fairly simple theory of severe storms based on gravity waves. My reason for being up here is to describe such a theory.

I've assumed that a large storm is composed of many individual convective elements. I take as evidence for this the results of the South Dakota group, who have made many penetrations through severe storms with their instrumented T-28. Internal structure on the scale of 2 to 5 km is very evident in many of their storm traverses, in storms typically 20 to 50 km in diameter. If you treat the internal convective elements in the storm statistically and perform a simplified linear analysis a la Rayleigh, what you get are equations for the storm scale which look like convectively forced internal gravity waves. The gravity wave is a very effective creator of convergence and divergence at low levels, and as a consequence can organize the spatial distribution of convective elements. This organization means that the internal convective elements can share their energy with the larger scale wave, much as excited atoms in a laser share their energy with a light wave. What results is basically an eigenvalue problem, the eigenvalue being the complex propagation velocity of individual sinusoidal gravity waves. The imaginary part of this quantity is the growth rate, and the real part is the translational velocity, or phase speed, of the particular mode. As in Rayleigh convection, you get more than one unstable mode. However, in this theory there exists a unique mode with maximal instability. This has to do with the inclusion of the ambient wind profile in the calculation. The real atmospheric wind is used for each individual case.

There's one more point that I will just mention. A localized severe storm obviously cannot be described as a sinusoidal disturbance in space, and so I've invoked the theory of wave packets. You can construct any type of localized disturbance as an appropriate superposition of sinusoidal waves; the motion of this wave packet is given by the group velocity of the central Fourier mode of the packet. That is all I will say at present about the theory.

Fig. 1 illustrates the theory's prediction for the famous Geary, Oklahoma storm of 4 May 1961. The solid line is the environmental wind hodograph, with the heights in kilometers AGL. The dashed line represents the locus of predicted propagation velocities, the variations in velocity being associated with the variation in spatial orientation of the central mode of the wave packet. The diamond-shaped symbol denotes the velocity of the wave packet with maximal instability. Note how close the actual storm velocity, denoted by the circular symbol, falls to the dashed line.

Fig. 1: Model predictions for the Geary, Oklahoma storm of 4 May 1961. The solid line is the environmental wind hodograph, and the locus of possible predicted storm velocities is given by the dashed line.
The diamond denotes the predicted velocity of the mode with maximal instability, while the circle indicates the observed velocity of the storm.

Fig. 2 shows the actual and predicted velocities for a split pair of storms. While the right-mover (R) was a severe tornadic storm, the left-mover (L) was rather weak. Again, the observed velocities fall close to the dashed line, and the relative strength of the two storms is reflected in the respective instabilities of the associated Fourier modes: the instability of the R mode is near maximal, while that of the L mode is much less.

Fig. 2: Results as in Fig. 1 for a split pair of storms (NSSL, 19 April 1972).

For a squall line the theory predicts that the line orientation be tangent to the dashed curve at the point of maximal instability. Fig. 3 illustrates a squall line case in which the line orientation was observed to be 70°-250°, quite close to that predicted. The circular symbol here represents the velocity of a mesocyclone which was embedded in the squall line and moved along it like a bead on a wire. Note that its velocity was also predicted properly.

Fig. 3: As in Fig. 1 for a squall line (NSSL, 2 June 1971), with the predicted line orientation indicated.

Fig. 4 shows that the theory is not always successful in predicting storm velocities. It gives the results for a discretely propagating multicell storm that occurred near Alhambra, Alberta. A considerable discrepancy exists between the predicted and observed velocities. This is perhaps not surprising, as the propagative mechanism for a multicell storm is thought to be substantially different from that of the continuously propagating supercell; the previous three examples were of the latter variety.

Fig. 4: As in Fig. 1 for a multicell storm (Alhambra, 12 July 1969).

I'd like to conclude by saying that though the details are computationally complex, the theory is conceptually simple. In particular, the amount of cloud physics in it is negligible; the important variable is the environmental wind profile. One may therefore conclude that propagation velocities are sensitive to small changes in the wind hodograph, but not to cloud physics. As a final plea, I would urge that we continue to develop such simplified theories, valid for particular aspects of storm behavior. Though these theories may not be good in every detail, they can give us a better understanding of what's actually going on. In the long run, this understanding may serve us better than the most sophisticated numerical simulation.

ACKNOWLEDGEMENT

This work was supported by National Science Foundation grant No. GA-36630X.
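(Ed. note: For readers unfamiliar with the wave-packet language, the standard relations connecting the quantities Raymond names are, for a mode proportional to exp[ik(x - ct)] with complex eigenvalue c = c_r + i c_i:

    A \propto e^{k c_i t}\ (\text{growth rate } k c_i), \qquad c_{\mathrm{phase}} = c_r, \qquad c_g = \frac{d(k c_r)}{dk}\Big|_{k_0},

so the predicted storm motion is the group velocity c_g evaluated at the central wavenumber k_0 of the packet. These are textbook identities supplied editorially, not equations taken from the paper.)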
MESOSCALE MODEL RESULTS FOR THE TORNADO OUTBREAK APRIL 3-4, 1974
D. Paine

Abstract

Shortly after the 3-4 April 1974 tornado outbreak, a mesoscale primitive equation (PE) model under development for operational use at the Air Force Global Weather Central was initialized for an 18-hr numerical forecast covering the period of maximum severe weather. The numerical model has ten vertical levels in a constant pressure coordinate system. The horizontal grid mesh is 25 nautical miles within a matrix of 49 x 49 points. The system of differential equations includes conservation laws for gaseous and liquid water, thermal energy, and u- and v-momentum, as well as a low-level geopotential height-tendency equation. The momentum equations are hydrostatic Navier-Stokes equations with a diagnostic integration of the mass continuity equation for the vertical velocity. The time integration is performed using the Euler-backward explicit second-order technique with a time step of 160 seconds. The initial data are 100 n. mi. resolution geopotential heights and relative humidities taken from the operational analysis data base at the Air Force Global Weather Central. The analysis data are bilinearly interpolated to the 25 n. mi. PE model grid points. The initial wind fields are calculated geostrophically from the height field, while temperatures and mixing ratios are calculated hydrostatically from the height field and relative humidities, respectively. In this ten-minute talk, we shall stress the forecast vertical flux of momentum and the changing equivalent potential temperature topography in relation to observed variables.

COMPUTED DIABATIC TRAJECTORIES IN RELATION TO OBSERVED VARIABLES FOR THE APRIL 3-4, 1974 TORNADO CASE

Figures 1a and 1b below show the observed deformation of the low-level θe fields at 900 mb between the PE model's initialization time and the subsequent 12-hr forecast. We have already discussed the relevant forecast fields in some detail.* In particular, we noted that the PE forecast through 1600 GMT 3 April simply advected the θe maximum initially over Mississippi northeastward toward central Kentucky and Tennessee. Thereafter, the strong downward flux of momentum forecast by the PE model established a growing θe minimum west of the Mississippi River and a subsequent increase of the θe maximum to the east.

Figures 1a/1b. Observed 900-mb θe (°K) fields at 1200 GMT 2 April and 0000 GMT 4 April 1974.

*Kaplan, M. and D. Paine, 1974: The numerical simulation of the mesoscale features associated with the tornado outbreak of 3 April 1974. Preprint, 6th Conf. on Aerospace/Aeronautical Meteorology, El Paso, Texas.

Figures 2a and 2b show the zone of surface streamline confluence (negative asymptote) to the south of the large-scale cyclone at 1700 GMT 3 April moving northeastward into southern Illinois at speeds of 50 to 60 kt by 0100 GMT 4 April. Mesotroughs were radiated in the same direction at a similar speed, as observed within the surface pressure pattern analyzed over the region of strong moisture convergence which accompanied these events. By 0300 GMT 4 April (Fig. 2c), the dryline at the forward edge of the changeover from meridional to predominantly zonal flow had plunged southeastward into western Tennessee and Kentucky. This 60-kt movement is in a direction which is at right angles to the prevailing flow behind, and 180° opposite to the observed flow ahead of, the line.

Figures 2a-2c. Observed surface streamline patterns at 1700 GMT 3 April (top), 0100 GMT (middle) and 0300 GMT 4 April 1974 (bottom). Arrows depict wind direction. The dot-dash lines in the middle panel refer to 0100 GMT mesotrough locations vs. those of the previous hour (bold dashed lines). The 31, 33 and 36°N latitude segments were used to locate the perspective of the next figure.

Figure 3. PE-predicted θe topography at 0000 GMT 4 April 1974, shown along a vertical cross-section extending from Montgomery (MGM) Ala., Jackson (JAN) Miss., Shreveport (SHV) La., and Fort Worth (FTW) Texas. Additional locations include Victoria (VCT) and Midland (MAF) Texas, and Burrwood (BVE) and Lake Charles (LCH) La. Two isotach sheets of 50 and 60 kt between JAN and MGM delineate the low-level jet to the north of the stippled cross-sectional slice. Diabatic trajectories (solid arrows) are explained in the text.
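(Ed. note: The Euler-backward scheme named in the abstract, often called the Matsuno scheme, is a two-stage predictor-corrector whose damping of the highest frequencies helps control noise in primitive equation models. A minimal sketch of the stepping logic follows; the toy tendency and values are hypothetical and illustrate only the scheme, not the AFGWC model.)

    def euler_backward_step(state, tendency, dt):
        """One Euler-backward (Matsuno) time step: a forward-Euler
        predictor, then a corrector using the tendency re-evaluated at
        the predicted state."""
        guess = state + dt * tendency(state)   # predictor
        return state + dt * tendency(guess)    # corrector

    # Toy example: a single oscillation dx/dt = i*omega*x, stepped with
    # the 160-second time step quoted above (omega hypothetical).
    omega = 2.0e-4   # rad/s
    dt = 160.0       # s
    x = 1.0 + 0.0j
    for _ in range(10):
        x = euler_backward_step(x, lambda s: 1j * omega * s, dt)
    print(f"|x| after 10 steps: {abs(x):.4f}")   # < 1: the scheme damps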
Figure 3 is a composite view of the predicted diabatic transport upon θe surfaces which culminates in the severe, long-tracking tornado families originating over northwestern Alabama at 0000 GMT 4 April. This diagram provides an explanation for the observed changes documented in Figures 1 and 2, while serving as an important check on the validity of the PE mesoscale forecast. We are looking from the mouth of the Mississippi River (BVE) into a rectangular volume of the atmosphere which extends vertically to 550 mb, between 29 and 34°N latitude and 85 and 102°W longitude. Surface locations are identified in the caption.

Air parcels on the anticyclonic side of the synoptic-scale jet, originating near 500 mb over MAF in west Texas at 1200 GMT, maintain a constant θe value of 312°K as they descend at -10 cm s⁻¹ and turn anticyclonically to arrive near 850 mb over VCT 12 hr later. Radiosonde ascents beginning at 2315 GMT 3 April confirm the existence of an anomalous northwest flow over Houston and eastward flow into Del Rio, Texas at 850 mb, the plane of reference which is delineated by the uniform horizontal shading in Figure 3. The ageostrophic descent of mass establishes a +10 mb pressure anomaly, relative to the synoptic-scale field, stretching from FTW to just west of VCT at 0000 GMT 4 April. The air parcels forming this ridge arrive near the surface with relative humidities of less than 50 percent as they decelerate from 94 to 4 kt during this 12-hr time frame. Because equivalent potential vorticity is conserved along these trajectories, the decrease of convective stability to 4°K (400 mb)⁻¹ at 0000 GMT is offset by the increasing anticyclonic turning of this air flow.

Table 1 shows the consequent change of energy terms along the trajectory designated by the letter "A" in Figure 3. It is interesting to note that the loss of potential and kinetic energy for the parcel is offset by increases of both enthalpy (CpT) and latent energy, the latter amounting to about 13% of the increase realized by compressional warming.

Table 1. Initial and endpoint data used to compute diabatic trajectories upon θe topography. Energies are in units of 10⁷ erg g⁻¹; changes are in parentheses.

Initial point for "A" (1200 GMT, 3 April), 500 mb over MAF:
  Height: 5.59 × 10⁵ cm
  Temperature T: -18.7°C; θ = 311°K; θe = 312°K
  Mixing ratio w: 0.55 g kg⁻¹
  Velocity V: 94 kt (47 m s⁻¹)
  Geopotential gz: 54.8;  Enthalpy CpT: 254.0;  Latent energy Lw: 1.4;  Kinetic energy V²/2: 1.1
  TOTAL ENERGY: 311

Endpoint, trajectory "A" (0000 GMT, 4 April), 850 mb over VCT:
  Height: 1.41 × 10⁵ cm
  Temperature T: +17.7°C; θ(850 mb) = 305°K; θe = 312°K
  Mixing ratio w: 2.2 g kg⁻¹
  Velocity V: 4 kt (2 m s⁻¹)
  Geopotential gz: 14 (-41);  Enthalpy CpT: 291 (+37);  Latent energy Lw: 6 (+5);  Kinetic energy V²/2: ~0 (-1)
  TOTAL ENERGY: 311

Endpoint, trajectory "C" (0000 GMT, 4 April), 222 km NNE of JAN, Miss., 700 mb:
  Height: 3.04 × 10⁵ cm
  Temperature T: +5.0°C; θ = 308.5°K; θe = 312°K
  Mixing ratio w: 1.1 g kg⁻¹
  Velocity V: 56 kt (29 m s⁻¹)
  Geopotential gz: 30 (-25);  Enthalpy CpT: 278 (+24);  Latent energy Lw: 3 (+1.6);  Kinetic energy V²/2: 0.4 (-0.7)
  TOTAL ENERGY: 311

The downrush of air occurring to the east of SHV along trajectory "B" on the 312°K θe surface is responsible for the buckling of the higher θe topography seen in the plane of the stippled cross-section drawn across SHV-JAN-MGM. This vertical plane was chosen in order to check the validity of the 12-hr forecast against radiosonde observations of θe, wind speed and direction.
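(Ed. note: Taking the tabulated energies as specific energies, Table 1 is a closed budget of the quantity

    E = gz + c_p T + L w + V^2/2,

conserved along each trajectory: for "A" the changes sum to (-41 + 37 + 5 - 1) × 10⁷ erg g⁻¹ ≈ 0, so the loss of geopotential and kinetic energy is balanced by the gains in enthalpy and latent energy. Grouping the four terms into a single conserved E is an editorial gloss on the table, not notation from the paper.)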
Film experiments involving computer graphics of the forecast θe sheets will eventually be used to capture the rapid temporal and spatial changes predicted by the model: i.e., a small "hill" of higher θe west of the Appalachians up until 1600 GMT 3 April is effectively plowed into the high "equivalent mountain" forecast and observed to the east of JAN by 0000 GMT. The air parcels on the 312°K θe-surface which accomplish this task undergo no important change in their convective stability and, hence, their constant inertial stability or spin relative to θe results in the straight path directly from MAF to over SHV 12 hr later.

Trajectory "C" originates on the cyclonic side of the macroscale jet over MAF, descends to become part of a distinct isothermal layer situated at 650 mb over SHV and 800 mb over LCH, then ascends as it begins a rapid cyclonic turn and accelerates to create a strong low-level jet. Although the parcel decelerates to ~40 kt in its plunge, it still supplies a significant portion of this jet's kinetic energy. The jet core was measured over Nashville, Tennessee with speeds of 75 kt from 230° at 650 mb. A distinct cirrus streak over the core was visible in satellite pictures taken over the storm region at 0017 GMT 4 April.

The final energy input into the tornadic storm region comes in the form of moist air with dewpoints of 74°F (w = 19 g kg⁻¹) ascending along trajectory "D" from near BVE. The saturated stream of air arrives in the vicinity of the low-level jet core with an equivalent potential temperature of 350°K; this corresponds to an ambient temperature of +10.5°C and a mixing ratio of 11 g kg⁻¹ near 700 mb. Surface weather reports reflect a mesotrough and strong confluent zone beneath this stream, with thunderstorms occurring along two-thirds of the border between the states of Alabama and Mississippi at 0000 GMT.

Because of the quasi-hydrostatic assumption currently built into the PE physics, the model is unable to forecast the severe cyclonic turning necessitated by a decrease of convective stability forecast to be 2°K (350 mb)⁻¹ within the air column at the endpoints of trajectories "C" and "D". The higher frequency gravity wave modes will eventually be forecast in this area by incorporating the complete time-tendency of density and vertical motion resolved within a non-hydrostatic window.

In summation, θe diabatic transport into mesoscale cyclones appears to be a miniature version in space-time of the adiabatic transport studied by such investigators as Danielsen and Reiter utilizing isentropic surfaces. There exists for mesoscale dynamics measured upon θe topography:
1) an anticyclonic turning of decelerating air that represents a transport of mid- and upper-tropospheric momentum which terminates within a surface mesohigh;
2) an uncurving, rapid downward transport of momentum originating at the macroscale jet core whose energy is effectively converted into predictable mesoscale wave phenomena within the troposphere;
3) an accelerating, cyclonic turning of an airstream which originates on the cyclonic side of the macroscale jet and becomes a low-level jet supplying kinetic energy eventually dissipated by microscale vortices;
4) an ascending branch of rich diabatic potential which fuels the convective storms.

PANEL DISCUSSION

D. Lilly: Some people have asked me what the title of this session means, whether we mean by this university participation in this discussion or in Project SESAME or which. The answer is both, of course.
But in particular what that refers to is the possible modes which might be most effective for bringing in the widest and best participation of university researchers. I really do not have too much to say myself. In the SESAME Project Development Plan there is a very simple, brief, and fairly predictable management structure which one might envisage for a project like this. We will start with that and perhaps discuss it a bit. That then leads into the university support subject fairly directly, because there is sort of a blank in this structure in that area. The project leadership is in ERL, a part of NOAA, which will act as lead agency, but we believe that if it is to proceed successfully it will require coordination with other agencies. Thus there has to be some sort of a body, probably consisting of the Director of ERL and similar management levels at other agencies, who occasionally meet to smooth out problems that have arisen in interagency relations. Then there is the project director, and working on the side with him a scientific advisory panel. There are various modes and models for that sort of thing. The one that I think of as most directly analogous is the NHRE Advisory Panel, and Dave Atlas can say something of how effective that has been. One can think of others, such as the U.S. GARP Committee. That is a somewhat different animal but fulfills some of the same functions. Within the project itself there are a small number of subdivisions with managers. That is about as far as I want to go on the project management, and I will now ask for comments from the panel.

F. White: I am going to be the devil's advocate. I sat down last night and read through the document again to try to relate it to what we have been discussing here for the last three days. There was very little correlation between the document and what we have actually been talking about. I wrote down what you had written in the document as the purpose of the SESAME project. You had four things: A, B, C, and D. D was really just a list of questions. But you wrote down for A, "Effects of mesoscale meteorological processes in triggering and organizing severe storms." For B, "Convective storm evolution and environmental feedback." C, "Numerical modeling of mesoscale phenomena." Now somehow it is my reaction that the objectives of SESAME are not sharp enough yet to really have a project. I would simply throw that out for what it is worth. Don't you really have it in mind to try to determine the physical mechanisms of severe storms, their prediction, and their possible modification? Is that what you are really talking about in SESAME?

D. Lilly: That is pretty close to a paraphrase.

F. White: According to the details of what you have written, you do not really talk about the physical mechanisms in detail. For the last three days we have been hearing sort of an AMS meeting: a discussion of the university research effort in severe storms and of the new equipment that has been developed, and I have been impressed with what we have been hearing. Somehow I personally feel that there has to be a sharper objective spelled out for the project. When the problem is better defined, then I think we can get to the organizational chart very easily.

W. Hess: Well, you, Fred, and many of the people in the room here have not had the advantage of having sat through another half-a-dozen sessions on this same kind of problem which led up to the statement of objectives as you listed them there.
I am not quite sure that your statement is completely fair. I agree with the fact that there has to be a lot more thought about specifically what SESAME is. As a matter of fact, that is one of the reasons for the existence of this meeting. But the statement that you made, that the objectives should be an understanding of the physical processes involved in severe weather and the prediction of severe weather and possible modification, is, I think, rather broader than the picture we have tried to spell out. The picture we tried to spell out was specifically narrowed because we did not think we could hack the whole problem. I think we specifically left out modification because we did not figure there was anything useful we could do until we were a lot farther along than we are now. That is a subject that can be discussed, of course. In the matter of prediction, we thought that that could come as sort of a second phase. When the troika of George Cressman, Dave Johnson and I were working on this earlier, we agreed that we wanted this to be a research project. That was in fact George Cressman's position even more strongly than mine: that it should be research and not operational or quasi-operational. If Allen Pearson of NSSFC wanted to take any of the output and ship it to Kansas City and use it for any of his business in prediction, fine, but we did not think that at the beginning you ought to torque the program to try to make it look operational, or to look like it was going to produce predictions, at the state of the subject we thought we were at now. Ok, so I am not disagreeing with your statement of the objectives, but I am stating the thought process we went through in narrowing the objectives from what you stated to about the place where we are now. Now, I have excluded modification, and I have said that prediction tends to flow naturally out of this kind of thing, but in a formal sense we wanted to hold off and not say anything about prediction until somewhat later. Then if you look at the physical processes, I think you get down pretty much to what we stated there. Within that we then emphasized certain features.

Let me go through the thought process before you interrupt me on that. The three pieces of it that were identified were initiation or triggering, evolution, and feedback. In thinking about these three things, I think we came to the conclusion that it simply was impossible, within one reasonably sized project, to try to do a good job on all three of those at the same time. So we again restricted our attention from the larger job of looking at the entire field of physical processes to emphasize most strongly those attached to the initiation process. If you are working on the feedback process you probably have to have a different scale network, among other things. I am not trying to say we have thought through the problem in the best way. I am saying we did try to think it through, and some of the things you have suggested along the line have been discarded. But once again, I am saying one of the purposes we are here for is to re-examine those questions and see if we did the wrong job.

D. Atlas: Well, I would simply say at this time that I agree with you, Fred, that the objectives need to be clarified and focussed a little better than they are now. I can see that there is already great disagreement as to what the objectives are or should be. I presume that that can be done in the next few months. We can focus on them.
I didn't think that was the purpose of this panel discussion. There are a number of important organizational and management ideas to discuss, and particularly philosophy, I think. Perhaps we can postpone the arguments on the detailed objectives and see if we can agree, to some extent at least, on a basic philosophy of organization and management.

A. Dennis: I would like to second Dave's remarks and emphasize that my presence here is mostly concerned with the organizational structure, and particularly with the role that university people might play in Project SESAME. Our previous experience with large projects of this kind is best epitomized by the National Hail Research Experiment. Some of the university groups participating in NHRE have had very good experiences and others have had less good experiences. In trying to set up the pros and cons of a university decision to participate in a project like SESAME, we have to keep in mind that the primary function of a university is to teach. A secondary and closely related function is to expand knowledge. It is not the function of the university to make money or build up a big staff or pursue materialistic goals of that kind, although none of us are completely pure. Looking, then, at the total problem, you can raise a question as to whether or not universities should participate in a project like SESAME at all. This is true from the point of view of those who want to run a tight, efficient operation and get results without trying to coordinate the efforts of independently-minded prima donnas, or whatever the masculine of prima donna is. But also, from the point of view of the university, there are real pros and cons.

I can see some good reasons why a university group would want to be involved in SESAME. To begin with, the topic is interesting to the point of being fascinating. There would be, if this morning's presentations are any guide, access to excellent data sources which are beyond the reach of individual university investigators, and there would be stimulating interchanges with the other scientists participating in the project. It might represent a source of funding to the university, although the reputation of NOAA as an organization which does most of its work in-house is well known in the academic community, and some real efforts would have to be expended to convince them that SESAME was something new in that regard. Now on the negative side we have some strong considerations. To begin with, in this type of project there is typically little control exerted by the university participants, and the first sketch of an organization chart leads one to think that it would be so again. Then the funding to which I referred has a bad tendency to be erratic. The reason I say it is erratic is based largely on our experience with NHRE. When there is a cut in the NHRE budget, the university groups seem to suffer the most. People in Canada sometimes say that when the U.S. economy catches a cold, they catch pneumonia. I think it is a fair analogy here that if a university investigator were taking part in SESAME and the SESAME budget caught cold, he would have double pneumonia. The reason is that there is a natural tendency to preserve what is regarded as the core of a project or an organization. I suspect that when the crunch came, most university participants, unless they had somehow infiltrated the very heart of the organization, would find themselves regarded as expendable and their efforts put off for another year or so.
We have been wrestling with this subject in connection with NHRE. The only answer I can see to it would be to place the project under the control of an executive committee in which the lead agency would have a strong voice, but not the total voice. I would now like to turn it back to other members of the panel who would care to comment on the questions: How could university participants be guaranteed a meaningful voice in the policy decisions? How could they be given an assurance of continuity of funding, without which meaningful research on long range, difficult problems is almost out of the question? And perhaps such minor points as the publication arrangements: for example, would a scientist wishing to publish something under Project SESAME have to clear his papers and reports through a controlling committee or through the project director?

F. White: I can take a crack at this, though I do not intend to dominate the discussion, and I am speaking as an individual, not for an agency or anything. If the universities want to have a role in the policy-making part of a project, they have to do it in the beginning. They have to help set up the project. They have to help set the science, the objectives of the project. They have to help lay down the scientific experiment so that the role of the universities, the role of scientists, is established in the policy issues of what the project is going to be all about. It has to be done in the beginning. You can refer back to GATE as a good example of that. I think the Academy and the scientific community did a very good job going up through the GATE Scientific Advisory Committee setting the broad picture of GATE. Then after the broad objectives were set, it was turned over to a lead agency, NOAA, to carry out the project. The scientists participated in GATE and they are going to have access to all of the data, and there is no requirement, that I am aware of at least, that they have to clear their publications with any senior international or national body.

To get down to SESAME: I believe, if I understand Bill Hess, that the scientific community, at least some of the colleagues here in the Boulder area, has had some say up till now in drawing up this plan. I would say that the thing has to be looked at very critically by a broader scientific community: the people that are sitting in this room. Then as to the funding of the project, I have put myself on record as to how I see the funding. NOAA, being the lead agency, is, I believe, going to have to be responsible for the core experiment. That is the funding of the equipment and the carrying out of the field program, and if some university wants to fly an airplane or is asked to fly an airplane for SESAME, then I believe that is part of the core experiment and would have to be borne as part of the NOAA cost. There are many agencies in the federal government that support research, related research. It is not just NSF; I just happen to be the only non-NOAA person from the federal government sitting up here. We would have in mind, speaking for NSF, that if a university wanted to analyze the data or wanted to carry out some related experiment with SESAME, we would be willing to take a proposal under consideration. We are not going to fund every single proposal that we get for SESAME. There is no sense in my telling you that we will. But we will fund a few of the highest quality proposals that we get. I can use NHRE as an example.
When NHRE started, there was an attempt made to let NCAR fund all of the university research, whether it was core program or program related to NHRE. Due to the budget crunch, as Dr. Dennis pointed out, some of the university related efforts, and some that were tied directly to the core, have had to be reduced. At the same time, other parts of NSF, besides the NHRE part of it, have supported university research. I refer to Battan's work. I refer to Srivastava's University of Chicago work in NHRE with non-NHRE money. So I would see NOAA providing the core funding for the basic experiment, which to me still has to be refined a little bit in the next 6-8 months. I am personally not completely pleased with this document, and that is the reason for my playing the devil's advocate in these remarks: I am not sure that we have spelled out quite what you want for a SESAME program. Once a program is spelled out and sold up through the government, then I think other funding agencies can come in: NASA for certain satellite work, DOD for certain other work, AEC (Paul Frenzen mentioned the AEC interest in SESAME), and NSF for some of the research.

Arnett brought up the question of publication. This is a touchy subject in any scientific experiment. Scientists like to go out and collect their data and take it home and put it in a little ball and analyze it and be the only one to publish it. NASA faces up to this problem on their big satellite experiments by giving their scientific investigators 6-12 months, and then they say that if you have not finished your analysis by then, the information is wide open to anyone that wants to collect it. The GATE people, the American investigators, I do not think are even being given 6-12 months. The data is going into a central bank, and each investigator knows what he wants to do and is asking for that data first. But if some other smart man comes along and asks for the same data and beats him to publication, I suppose that is possible. In the case of SESAME, I guess I had better shut up and say I do not really know the best way to handle it.

W. Hess: Well, let me have a crack. I do not see any reason why the project would exercise any control over what is published inside a project like SESAME. In certain areas where we do large projects we get into things that are politically sensitive. If you have something that is politically sensitive, you do exercise control. I won't go through horror stories, but we have had two or three of them where an individual, thinking he is speaking as an individual, says something that comes back to haunt the agency. There is not anybody inside NOAA who in publication speaks as an individual; he speaks as a member of an agency. So in certain circumstances we have to watch for political sensitivity. I do not see any of that here. So I do not see that there would be any attempt to have control through the project.

D. Atlas: NHRE has been taken in vain several times as an example, I think by implication a bad example. Perhaps I am no one to speak, because we have just gotten rather poor marks in a review by an external review committee, which said that NHRE's management stinks and criticized us for our poor relationships with universities. To the point that Bill and Fred just raised with respect to centralized control of publication, I think the only reason you must have some centralization is that the most vital part of a major multi-faceted program such as this is the synthesis and integration.
We would not be here today if we could face the problems of SESAME individually. So synthesis and integration are the key objectives of a program of this kind, and to obtain this kind of synthesis you must have some imaginative, inspired, and perhaps even sometimes dictatorial control. Now we have been toying with this. We have not achieved it successfully.

W. Hess: Would you please distinguish between control over data and control over publication.

D. Atlas: Ok. Now we have been toying with this because we recognize that if you want an individual, a university investigator (or any other agency investigator), to participate in a dedicated manner, that man rightfully wants and deserves lead authorship or up-front authorship. The question is how to achieve this when the individuals are mutually dependent upon each other's data and must rely on one another: one man has Doppler data, one man has aircraft data, another guy has mesoscale data, you name it. Another guy has a model. They want to integrate this. Now I find this very difficult to face, because the real guts and the real creative activities will come from the synthesis and the multi-authored papers. Well, we are toying with the idea in NHRE of doing something like this. We have individual contributions. We let the individual contributors write what they can as individuals, and then assign a team to a particular subject area, where the group of individual contributors gets together and tries to synthesize. When you publish that, then you simply have to recognize the lead author by consensus or, if they are all co-equal contributors, you must simply pick names from a hat. It is a very touchy question, and I do not know if we want to carry it much further.

W. Hess: Let me pick one point up. I tried to make a point of distinguishing data control from publication control. My point was to the effect that I do not see any reason for exercising publication control over a project like SESAME. Data control, yes. There will be certain kinds of data that automatically go into the public domain. The Doppler data, the ground station data, the aircraft data which are taken by a facility (a group) are available to everybody as soon as they are in shape where somebody can work on them. Is there anything improper about that?

E. Kessler: I agree with you. There are a number of problems that have been touched on here. They are indeed a concern at NSSL as well as at other places. I substantially agree that the place to exercise control is at the level of the data. When you engage a university on a grant to address a particular problem, you are engaging them because you are asking them to conduct an original kind of investigation. You are asking for their expertise, and then you do not act as though you know more than they do about the problem, because you engaged them precisely because they know more than you do about it. That comes to some other questions that I have explored here in some notes about the role of universities, the role of government, and the role of the private sector in the contract and grant picture. Before I get to that, let me stick with this data question for just a minute more. At NSSL we have asked the scientists who are authoring, or who we expect to author, scientific papers to address the question themselves of how to control the utilization of that data. Now within NSSL I do have something to say, or should have, about who is working on what and who is writing what within that organization.
But when we go outside of NSSL, I think that the only way that you can guarantee the integrity of a process where the universities have proper control over their own destinies is to have the right people in the management structure, people who understand the roles that different groups should play. Let me just discuss this role business a minute, because I think this is as good a place as any to bring it up. I do think it is something that our society has not adequately considered. We have a certain strength here in the United States through a diversity of institutional forms which provide opportunity for creative work to personalities of different kinds. We should encourage this healthy diversity, and each institutional form has a special responsibility to maintain the capabilities peculiar to it, in my view. Now there is going to be overlap. I am not going to try to cover every conceivable permutation on this. But if I break down what seem to me the salient roles that the different units can play, I would say that the universities do creative work, which is usually on an individual basis. Private industry should be responsive to requirements which are expressed through the funds which are made available. The private sector is asked to tackle a particular job with a fairly well-defined end result and receives a contract to do that, and they are required to perform. The government, it seems to me, has an important responsibility in coordination, in the provision of facilities which should not burden the university groups, and in the organization of the key methods. I think that there is room for all of these sectors in SESAME, the private sector as well as the universities which have occupied our discussion primarily so far.

There is something I have mentioned from time to time before, and I would like to mention it again. The universities get itchy when the unit of government tends to hold the money close in-house. I would like to say that the units of government which are employing the university graduates are under pressures of their own, partly created by the universities. We have seen continuing dispersal, with the organization of new departments, each one of which thinks it should have funding. We see students continue to come through universities, all of whom want jobs; as soon as they have graduated they want one of those jobs, either in government or in the private sector, which in meteorology is supported by the government. The universities also want to continue their funding so that they can generate more students and keep the professors active. There may not be enough money for all of these. I suppose it will not be the SESAME project manager's role to sort out the national issues that are involved here, and how many students you create, and how many jobs you create a demand for by fostering additional graduates. But it is an issue, and I wish that the universities would sometimes keep these various pressures in mind. The function of SESAME management is, within the framework of a well-defined program, to go where the answers can best be obtained, whether it be to the university for exploratory studies, to private industry for a response to a well-defined problem, or to the in-house team when that is indicated.

D. Lilly: Is there anything more that needs to be said on this matter of publications? I appreciate it being brought in, but I hope that this is something we can now dispose of, recognizing that there are differences of opinion.
F. White: Well, if I understood what Dave said, he was saying that NHRE was trying to have some control of publications, while Bill Hess was saying that he did not think control of publication for SESAME was proper. I think, from the National Science Foundation's point of view, we would not want to give a grant to some university man and have him have to clear his paper with, for example, Bill Hess. I think that SESAME is going to be such a big, overall, many-faceted program, from instruments to physical understanding and everything else, that you are not really thinking of controlling publications.

D. Lilly: For whatever it is worth, I vote with you and Bill.

D. Atlas: First of all, there are certain core experiments, which I would define as those which must be done. If there are core experiments which must be done, then this implies to me that there must be some focussing and some control. It doesn't have to be pounding on the head. But there are publications which have come out of NHRE which are clearly related to activities other than NHRE, and two or three investigators haven't even gotten together. Now that is not accomplishing the core objective of the program. So where you need control is on the core experiments. It can be loose control or inspired great leadership. Or, if necessary, you sometimes have to pound some heads together. On the other hand, there are ancillary experiments which are highly desirable. These are the kind of experiments which are most suitable for the university investigators to undertake, so they can use their imagination. They can roll free and have elbow room and exercise full serendipity. I would not, as a university man, take a grant under which I had to constrain myself to do just what somebody else tells me. We are hiring brain power under a program such as this, as Ed suggests, and it would be insulting to me as a university investigator to have day-to-day, or even week-to-week or month-to-month, control over my efforts.

F. Bretherton: I will look at SESAME from the same point of view as many others, that of how to get the most effective university participation in the national program. This is in recognition of the fact that universities have real talents to bring to bear, and it would be imprudent not to tap them. The question is how we most easily evolve the structure to do that. I think we can take the recent history of various other projects as models to help give us some guidance on this. I share the apprehensions expressed by others about having the funding control for university participation reside in the same institution that also has the primary funding control and responsibility for carrying out the central program. UCAR's experience with the National Hail Research Experiment has shown that in times of tight budgets there are very strong pressures which are difficult to resist. These pressures tend to squeeze out university participation in order to preserve the central program. The procedure followed in GATE, namely that the universities' primary funding resource is the National Science Foundation, preserves the independence of support to the universities and is, in my view, a desirable kind of structure for projects which combine a major institutional involvement with added university participation.
Over the next few months we need to get a clearer picture of what consensus exists within the atmospheric sciences community on the priority and effort SESAME should have in the face of other large projects, such as the First GARP Global Experiment and STORMFURY. These are also well along in planning, and we have to recognize that there is unlikely to be enough funding to be able to do all of them. It seems possible that UCAR or the Academy should perhaps try to help determine what the constituency is for SESAME.

How does an individual scientist get deeply enough involved in a project of this sort that he is really a part of the core effort? The key to the organization scheme being discussed for SESAME is the Project Scientist Advisory Panel. But the important way for a scientist to impact the program and be involved is to make creative proposals about the design of the experiment and the data analysis. The extent of an individual scientist's influence is almost in direct proportion to his input at the planning level at that stage. My experience in at least two projects of this sort is that in fact this kind of input is not only sorely needed, but it rarely goes unheard if it makes good scientific sense. Those who make such input are, de facto, in positions of considerable influence in the program. The difference between influence and control is important; there has to be a clear line of responsibility for carrying on the program, and the Project Director has the overall responsibility to make things go. The individual university scientist's responsibility is to make sure, firstly, that his or her suggestions are expressed and, secondly, that they are backed up by the necessary in-depth studies. What is really needed is a lot of hard work. A scientist whose input is substantiated by good homework is in a very strong position further downstream in the project.

But now we come to the question of publication rights. The key danger that has shown up in other programs of this sort is how to give due credit to the people who took the data and contributed creatively to the design of the experiment when it is usually yet another person who has the bright idea for the analysis. There is no easy answer to this very sensitive issue. I don't personally believe that a central clearing house is the best answer. It seems to me that the most desirable approach is to have someone do some very hard jawboning to try to make sure that credit is given liberally rather than miserly.

In summary, I believe that the key questions are: What role does this project have in the overall national scheme of things? How can we most effectively learn what the consensus is about this role, from the interested scientists, within the next few months? And then, as we get further down the line, what are the specific university proposals that would involve NCAR facilities and support? In the meanwhile, the one thing that can and must be done is the theoretical analysis, the numerical modeling. I think the community has to recognize that, regardless of when the field program gets off the ground, there is literally no excuse for not going along at top speed with the associated numerical modeling and theoretical analysis of the problems associated with severe storms.

S. Williams: I think the best precedent here probably is Project BOMEX. This project again had NOAA as lead agency, and as such NOAA gathered most of the data. Other organizations made significant contributions to data gathering.
Of course these data were automatically in the public domain as soon as they were ready for use. Now there were certain analyses which we believed had to be done as part of the core experiment. Most of these we did in-house. However, there were also other institutions and organizations who took part in the project and gathered data and made analyses and stored data away, or whatever happened to it. Now all these other participants in the project agreed, after some arbitrary time period, which in most cases was liberal enough (and beyond that there were extensions), to turn these data over to the central archive for the project and to make the results of their research known to the scientific public. Beyond this there was no control whatever exerted over publications, or even over use of the data for that matter, from the project director's office and, as far as I know, none from the Scientific Advisory Panel. But once the data are in the public domain, what anyone does with them or what they say in the papers they write is their own business. The only occasions on which we might have exerted any control were when people called us and asked for additional data, or asked why a piece of data from a particular day and time is so fouled up. There is no control over what they say.

P. Frenzen: I was just going to say that it seems to me that the cart is way before the horse, and the cart is full of red herrings. Publication policies are red herrings. It isn't a real problem, as BOMEX illustrates. But the cart before the horse is that you are talking about the administration of a project whose objectives either are presupposed in a way that does not admit further discussion or are not really settled. I don't think they are going to shake out in the next six months.

W. Melahn: One of the earlier panelists asked whether the attitude and the philosophy of management was clear. It seems to me that it might be worth going back to look at the basic attitude toward a project and its definition and see if there is not some adjustment that makes sense. Large projects like this tend to cost a lot of money, and since these costs have to be justified to funding agencies, the tendency is to try to look far down the road, to specify goals very precisely, and then to define the minimum resources that are required to accomplish the essential goals. It seems to me that in a situation such as we are facing, that may not be the best thing to do. We might well want to recognize that we are going to have ideas coming in a steady stream from here on until the project is finished. We are going to have scientists who look to different agencies for funding and who come with somewhat different objectives. The concept of the project might be modified to recognize that a major part of its aim is to build a flexible structure and to build flexibility into the operation out at the site and into the data gathering and data handling. That is to say, to recognize that there will come a time when certain experiments are going to have to take place at the same place and at the same time. Therefore you have to avoid conflicts from interference. Aside from that, you would like to bring together those things that will be assisted, that are symbiotic, that will go better by joining with others.
So in laying out plans for the equipment, for collecting the data, etc., I personally believe it would be worth spending quite a lot of money to insure flexibility, to insure an opportunity to make adjustments and to change objectives, and to set up some mechanism, rather than keeping people out, to find ways to encourage applicants to join with somebody else so that the two experiments can be enhanced by being done together rather than separately. In the case of data, I think Dr. Holland has already mentioned the importance of getting inter-calibration of various sensors so you know what is going on, and of planning ahead to take the data in an effective and organized way. If anything was clear in the last few days, it is that the ability to collect data and fill magnetic tapes is greatly increased, and so any efficiency and orderliness in that, which isn't accomplished at the expense of losing flexibility, is to be desired. So perhaps the concept of building a project which has ultimate flexibility, rather than limited flexibility focussed on specific goals which have to be defined years in advance, would be the appropriate thing here.

D. Lilly: I certainly like the sound of those words.

C. Kreitzberg: I would like to suggest that the natural way for the flexibility to be built into the system would be by recognizing that the mesoscale problem is not something that is going to be solved by one project. I think we ought to take as an organizational example the GARP case, where you have a series of field experiments that are carried out, and as you go along you get smarter and more efficient. So as far as defining precisely what the goal is, I think we should all recognize that we are only going to accomplish one part of the goal the first time we go in the field. We could argue about what scale we should go on the first time we go in the field. But if we do space it out, I think we will be in good shape.

A couple of other points. Arnett mentioned the individualist approach of the academician, and that is certainly fine and will be preserved, I am sure. At the same time, our society is getting so complex that I don't think it should be embarrassing to an academician to be part of a team effort, and the same goes for the students. I think it is an excellent education to learn to work with teams. This problem of multiscale interaction is one that is certainly going to require a lot of team effort over the years. To get to Fred White's comment about funding, and also Arnett's, I think it is of very primary concern to the university community to have stable funding. The only way it can be provided, in my opinion, is with the sort of structure we have with GATE, in which NSF is supplying a lot of the funds to the university community. I think it is important to recognize that the GATE funding to the universities began before they went in the field. So I would like to think that a lot of this hard work and homework that Francis is talking about, that has to be done before the experiment, is going to have to be funded by NSF. With respect to some other aspects of university cooperation, I think an excellent mechanism for that, again in the preliminary stages, is the utilization of the extremely good computer facility at NCAR for running flexible programs. Some of their software is really outstanding. But we are going to reach the point where we are going to saturate NCAR's computer completely.
That point is going to come when we want to run the same model on a lot of cases in order to get some basis of information. At that point we are going to have to go out to the AEC and to NSF, and maybe even to Princeton, and ask for some computer time at those places to run a large number of cases.

P. Squires: I have a carryover from a session called complaints and additions, so this will be rather a break in the trend of the present discussion. We have had a very interesting series of talks, though I have felt a lack of participation of the audience in the forming of the ideas about what SESAME is. In particular I would like to refer back to the underlying philosophy expressed in the draft project development plan, to one small section which refers to my particular interests, where it is said that while we don't know whether or not the aerosol has a significant effect on precipitation, the consensus of opinion is in the negative. Of course the consensus is a pretty flexible thing. It depends on exactly how you ask the questions and who you talk to. You could definitely ask the question: will a storm that hits the tropopause produce some rain? We do know that in the smaller clouds, which are more accessible to observation and computation, the aerosol can make all the difference between precipitation and no precipitation. It seems inherently unlikely that the same physical causes do not influence the amount and the timing of precipitation development in even larger storms. We know several ways in which microphysical events in clouds can have an effect on the energetics of convection: on the loading of the updrafts, on the latent heat of freezing, and, outside the cloud, on the formation of an anvil, whose properties will depend on the microphysical events in the cloud and which itself influences the radiative balance of the region. For all of these reasons I would feel that, on a scale appropriate to the general plan of SESAME, which might not include a vast amount of cloud probing, it would be reasonable to include both aerosol and cloud physics considerations as a significant element in SESAME.

There is another reason, which has been touched on already, for the same conclusion. It is true that the justification of SESAME in the opening sentences of the draft plan is in terms of forecasting. But of course we know very well that forecasting has improved quite a lot for severe storm events in recent years, particularly as a result of the utilization of remote sensing. Nevertheless, some people don't need or don't use forecasts. It is difficult to move a town away from the path of a tornado, even if you know where it may be going. So a little bit of modification would be worth a great deal of excellent forecasting, even though it is a difficult thing to achieve. Now in terms of modification, the only way we know how to go about this, or even try to go about it, is by modifying the aerosol. For this reason in addition I would think that it is important within the core project of SESAME to include aerosol physics and the interaction of the environmental aerosol with the microphysics of the cloud.

E. Barrett: I would like to add support to what Pat Squires has said and indicate that the lab with which I am associated is giving thought to some of these problems, particularly the modification problem, and when we saw the draft of the PDP about a year ago we did make suggestions for including a fuller account of the physics. I must say that as yet we have not come up with too many concrete proposals. We intend to do so.
E. Bollay: I would like to second the emphasis by Pat Squires. It is really unfortunate that more consideration was not given to the cloud physics, because, after all, it is the cloud physics that has to go hand in hand with the dynamics if you are ever going to understand this.

A. Dennis: Going back to talk a bit about the overall philosophy and management structure of the project: I brought up publication matters as just one of many things that would have to be ironed out, and I didn't mean to open up such a diversion. But I was trying to point out that there are basic philosophical questions here. There was talk from one of the participants or one of the panelists about building a constituency for SESAME; in other words, how do we develop a basis of support which, when transmitted to groups like the National Academy of Sciences, would be strong enough to prevail and bring the project actually into being? I am afraid that as a skeptic I am not committed to it myself sufficiently to help lay that basis. So what I would like to hear, perhaps from the other panelists or perhaps from the audience, is a discussion of the merit of a large scale project like this, as opposed to the more individual small scale efforts. Perhaps I should emphasize small scale rather than individual, since I think we have reached the point where one person can hardly make a dent in the problems anymore. But when you consider the fraction of our total research effort in the atmospheric sciences which is going to be tied up in STORMFURY and GATE and NHRE and now SESAME, if it goes, I wonder if we are really getting the optimum mix here. Or would it be better to hold the number of what I would call superprojects down to, say, not more than two simultaneously, and hedge our bets, so to speak, by spreading some of the funds out to smaller scale projects within individual agencies or to individuals?

W. Hess: Now, but again I emphasize, it means a large fraction of your time for a considerable period of time. Now, given the feeling that these are essential, you have to be very orderly about this. You have to write contracts. You have to establish deadlines. You have to write milestones. If a person is supposed to have a piece of equipment working on an airplane for a certain flight program, he has to accept the responsibility for doing that. That is a contractual responsibility. That may or may not fit a university-role operation. I would like to think that it would in certain cases, and we would like to have you aboard. It may end up that most of you have to go the other mode and work on the related experiments, most of which I would think, and properly so, would be funded through NSF in its orderly process for handling these kinds of things. But the question I want to ask of you, essentially, is how many of you see this as an awkward arrangement and do not think it is the right way to run a project?

W. Hooke: This is slightly off the subject of Bill's question, but since there did not seem to be any other ready responses, I thought I would say one or two things. There has been a lot of criticism of the lack of audience participation. I am afraid that I find that in my case there has been too much. I am referring particularly to this questionnaire that we have all been given, because I find that I cannot answer some of the questions on it. Now with that in mind, I thought maybe the panel could help me out in filling out this questionnaire. Two questions, specifically. Do you think the project will be carried out?
If you are interested in SESAME, what do you think should be the next step? Answer more than one, if desired. Then there is a list including improving the PDP, staying cool until new money is available, and others.

D. Lilly: Are you asking why I put those things in?

W. Hooke: No, I am asking you to answer those questions.

D. Atlas: I would like to continue on Bill's course. With respect to whether or not universities should participate in the core experiment, I think that perhaps the NHRE experience might be helpful. If the lead agency responsible for the program can indeed give reasonable assurance, virtually a guarantee, of continuing funding, then I think it is feasible for universities to participate in the core experiment and dedicate a great part of their time to it, enough to satisfy the program director and the advisory panel. However, NHRE's experience is that funds are erratic, and at those times we then have to eliminate those parts of the project which are the appendages, if you like, which are not easily controlled, which are not susceptible to direction, and these tend to be, by and large, the independent university investigators. Arnett referred to them as prima donnas. I was one such until recently. I do think that the most appropriate way for university people to participate in a program of this sort is not within the core experiment. There they lose a kind of flexibility that is natural to universities, that is required by students, for example. They should take advantage of, and they can exploit, a program of this kind, because there is so much data available that cannot be analyzed that they can profit very greatly. It may turn out that the appendages, the related ancillary programs which we are now taking out of the core experiment, made in the light of Wes Melahn's suggestion of building in flexibility, may convolute themselves and become the core experiment as the program evolves. So I think it is undesirable for universities to try to participate as part of the core experiment.

There is one other comment, with respect to Arnett's question as to the importance of this program, whether or not the country can afford to do this. I have very mixed feelings about it. I agree with Yoshi Ogura that the time does indeed seem right for a major program of this kind. The modeling is there. The tools are there. The need is there. I doubt that there is any meteorological subject that is more pertinent to the national interests, other than air pollution, and I think there is enough activity going on in that in other agencies that we can contribute to. So I feel that if we had to order the big projects, this would certainly be among the top. But I must say that I have the same kind of queasy feelings that you have about the concentration of funding in these really big projects to the detriment of the smaller efforts that must continue at the universities.
There is no point in trying to impute blame to anyone for environmental or ambient situations which just cannot be helped. This may be one such. Certainly the program manager has to protect the integrity of his total program as he sees it. But I think that the university people who consider entering into these programs should be aware of all these ramifications and understand that the time might come when they would be judged expendable by the people responsible for the overall project. There is no point, three or four years down the road, in kicking and screaming and saying, "It is not fair", because that might be the way it would turn out. But if a person is aware of the risks when he enters in, presumably he should be adult enough to accept the developments withut trying to blame somebody for them later on. D. Lilly ; i would like to get back to Fred a bit on this matter of funding continuity. You ask for a program with more meat in it, more sharp focussing in some respects. This must come in part from the interaction of people wwo are not now committed to this program, and hardly anybody is. They may be willing to gamble on it to a certain extent 487 but they are very reluctant to put in a lot of work and see it actually going but then die or be cut in half after they have really committed a large part of their careers to it. Is there some possibility in the future that the time scale on funding decisions can be expanded a bit. F. White ; I will try to answer that. I will also say something else first. As I do not agree with everthing that Dave Atlas has to say. In this case I agree more with Bill Hess. If SESAME is going to go, I think it is going to take the involvement of the universities in a very, very active fashion. We are not talking now about an NHRE 5-year or a 7-year period, we are talking about two 3-months periods of intensive observatin. GATE was one 3-months intensive observation. It was a 5-year planning process up to a 3-months intensive observation. There were many university scientists who devoted 6-12 months thinking and planning and working bureaucratic-wise in the development f that project. These university scientists were funded to do this. I mean it did not come out of their own pockets or the universities did not pick up the bill. Some of it the universities did. Other cases, they did not. I would like to side with Bill Hess. I think that to put the scientific meat on the bones, and to put it in a real physical sense so that a lot of manpower is not wasted doing second-rate things, that we ought to think seriously of how to go about getting some real good planning. I am not critisizing what has gone on. It has been very good, I am thinking of hat has to go for the next six months. Now to Doug's question about funding. That is a tough question. Any federal agency can only make commitments 488 depending on the availability of funds from Congress. 60-70 percent of our grants are 2-year grants. We make some grants up to five years. Less than 5 percent of our grants are 5-year grants. We have the authority to make 5-year grants. We are making more and more 1-year grants simply because of dollars. I think you have to play the game both ways. If the project is going and it is successful and if it is really paying off, I feel there will be money left to carry it out. Whether there will be money to start with to fund two or three years, I am not sure. 
But if a project really gets going and there are good people working on it and they are doing good work, I believe there will be money to finish the project. That is about all I can say, Doug.

E. Kessler: May I make a brief comment or two? For one, the comment was made earlier that the universities should not work on the core experiment. I think we perhaps are not clear at this point what the core experiment is, and I submit that we, therefore, really do not know whether the universities should work on it or not. Now if the ancillary work is the first work that is going to be dropped when funds get tight, maybe the universities should work on the core experiment, because that would be one of the safer areas. At least that is one reason for them to be involved there, in the safe area. Now with respect to defining what the core experiment is, there has been one effort at definition in the present PDP. But to improve the definition, if that is indeed needed, I would like to address the management plan that has been sketched on the blackboard and just suggest a couple of things that were probably implicit in Doug's mind anyway as he wrote it. In my very limited experience in this area, I find that the scientific advisory panels are all composed of big shots, busy people. They hardly have time to really comprehend the details, and the panels tend to be a facade as far as giving advice on those very important details is concerned. Now I think there is a place for an Advisory Panel, but it is not quite as implied by that diagram. It is more a panel to give an overall surveillance report up to higher levels of government, where the meteorological competence is not so great but the administrative competence and the power to act on such matters as funding are great. What is needed is to open up the advisory system to a number of committees which consist of the very people who are participating in the project. They are the ones who have the greatest stake in making it go, and they are the ones who understand it best. At NSSL, for example, we have now set up advisory committees consisting of the working scientists, all good, independent, strong scientists who know more than the Director about the particular part of the work that they are addressing. Then you give them a charge. You do not just leave your door open so they can come in, but you give them a charge and a responsibility to work in that area and to propose revisions to the program and new areas.

W. Hess: Ed led into my next comment. The main reason we called you together was to get you to react to the thing we have put in front of you, which is not quite a strawman, but it is not very much past a strawman.
The thing I am worried about right now is what kind of feedback we are going to get from you after having exposed to you the kind of things we have been thinking about for some period of time. Fred said, early in the meeting, write us letters. OK, that is one way of getting this back to us. But right now it is very unclear to me how we go ahead and take the next step. Bill Hooke said let Doug tell him how to take the next step. I guess we would like you to tell us how to take the next step, because right now what we are trying to do is to involve you in this planning thing and involve you in re-scoping the objectives, if that is something that ought to be done and those are still flexible; in re-scoping the field program and how you meet the objectives; in thinking the thing out with us. There is no intent to present you with a cold deck. The reason you are here is to interact with us.

Chang: I have a personal feeling that Congress is interested in the SESAME project because of tornado damage, which is shocking people. We cannot have poor people dead from tornadoes. Therefore, that is the salient object. I feel we should put more emphasis on it than now appears. This is the point that I have. You say you are working towards it. I think then Congress would be even happier than we are.

J. Golden: Well, I may be cutting my own throat, but so be it. You know I am deeply interested in tornadoes, and I certainly hope for increased funding support and increased university participation in our Tornado Intercept Project at NSSL. Other appropriate tornado programs should continue. One example that we were very pleased to have is the participation of a University of Wyoming student last spring. He got quite an eyeful! We were very glad to have him. We just do not have the manpower at NSSL to fully conduct a project like this. On the other hand, I think that if SESAME gets too widespread, if we have too many things we are trying to do, we are liable not to do any of them very well. In this regard, while I am deeply interested in tornadoes, I think that tornado work should be an on-going project at NSSL and elsewhere. But as you bring in the tornado-scale measurement question, and especially modification... We have already had many problems in this modification area in Oklahoma. I think the fundamental consideration here is that we have yet so much to learn about the basic structure of severe, let alone tornadic, storms that if we tamper with them, we won't really be able to distinguish the natural from the modified. Hail-storm modification is one area, of course, in which NHRE is deeply involved and should continue to be involved. But to bring the tornado modification problem in here is politically expedient, although I think scientifically it would really muddy the water.

Raymond: It seems there are a lot of people with a lot of different interests. Whenever you go out to make some measurements in the field, one problem you always run up against is the large-scale flow field. For instance, in the mesoscale flow field, what is the air really doing? It seems to me the major contribution can be made as sort of a core project; that is, the 121 ground stations and the 25 radiosondes. With this as a basic data set, a lot of other people could come in, know that this data set is going to be there, and make their own particular observations.
D. Atlas: As President of the American Meteorological Society, Dave Johnson just received a letter last week from Guy Stever asking what is the role of the scientific societies in influencing and shaping public policy towards science. The Society has to generate a response to this letter, and Dave will be attending a meeting of all the Society Presidents, with Stever, about October 9 or 10. So it seems to me that the appropriate question is how we develop the constituency to support not only projects such as SESAME, but other major programs in the atmospheric sciences, or simply the kind of continued support that individual universities need. So if any of you have a desire to express your views in this regard, please let me or Dave Johnson have your ideas within two weeks.

D. Lilly: I declare the meeting adjourned.

SESAME Panel Discussion

SQUIRES: We have had a very interesting series of talks, though I have felt a lack of participation of the audience in the forming of the ideas about what SESAME is. In particular, I would like to refer back to the section of the Draft Development Plan where it is said that while we don't know whether or not the aerosol has a significant effect on precipitation, the consensus is in the negative. Of course, a consensus is a pretty flexible thing. It depends on exactly how you ask the questions and who you talk to. Obviously, very large convective storms will produce a lot of precipitation, irrespective of the nature of the aerosol. However, we do know that in the smaller clouds which are more accessible to observation and computation, the aerosol can make all the difference between precipitation and no precipitation. It seems inherently unlikely that the same physical causes do not influence the amount and the timing of precipitation development in even larger storms. We know several ways in which microphysical events in clouds can have an effect on the energetics of convection, e.g., through changing the loading of the updrafts or the timing of the release of the latent heat of freezing. Outside the cloud, the formation of an anvil, whose optical properties will depend on the microphysical events in the cloud, can modify the radiative balance of the whole region. For all of these reasons I would feel that, on a scale appropriate to the general plan of SESAME, it would be reasonable to include both aerosol and cloud physics as a significant element. This does not mean that a great deal of detailed cloud probing should be undertaken. There is another matter which has been touched on already in the discussion. It is true that the justification of SESAME in the opening sentences of the draft plan is in terms of forecasting. Of course we know very well that forecasting has improved quite a lot for severe storm events in recent years, particularly as a result of the utilization of remote sensing. Nevertheless, some people don't need or don't use forecasts. It is difficult to move a town away from the path of a tornado, even if you know where it may be going. So a little bit of modification would be worth a great deal of excellent forecasting, even though it is a difficult thing to achieve. Now the only way we know how to go about modifying the weather is by modifying the aerosol. For this reason in addition, I would think that it is important within the core project of SESAME to include aerosol physics and the interaction of the environmental aerosol with the microphysics of the clouds.
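[Editor's note: Squires' point about updraft loading can be made quantitative with the standard buoyancy expression of cloud dynamics; the relation below is a textbook formula added here only for orientation, and was not part of the talk.]

$$ B \;=\; g\left(\frac{\theta'}{\overline{\theta}} \;+\; 0.61\,q_v' \;-\; q_c \;-\; q_r\right) $$

Here $\theta'$ is the potential temperature perturbation, $q_v'$ the water vapor mixing ratio perturbation, and $q_c$ and $q_r$ the cloud water and rain-water mixing ratios. Because the condensate terms enter with a minus sign, any microphysical change that alters how much condensed water an updraft must carry aloft changes its buoyancy directly, which is one route by which the aerosol could influence storm energetics.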
SESAME Questionnaire

Late in the program a questionnaire was distributed to those in attendance, with a request to return it by the end of the meeting. About 50 were so returned. The questionnaire follows, together with the count of answers in each category. Some of the more interesting of the individual comments made at the end of the questionnaire are also included.

QUESTIONNAIRE

1. Affiliation: 16 NOAA; 5 Other government agency; 21 University; 8 Other.

2. How well have you read the SESAME Project Development Plan? 38 Thoroughly; 12 Skimmed it; 2 Not at all.

3. Do you think the project should be carried out? 46 Yes; __ No.

4. Do you think the project will be carried out? 34 Yes; __ No.

5. If SESAME does proceed within a couple of years, what do you think are the chances that you will be significantly involved in it? 29 Good; 28 Fair; 3 Poor.

6. Would you like to be so involved? 36 Yes; 11 Maybe; __ No.

7. Do you regard your attendance this week as worthwhile? 45 Yes; __ Maybe; __ No.

8. If you are interested in SESAME, what do you think should be the next step? Answer more than one if desired. 32 Improve the PDP; 26 Appoint a Scientific Advisory Committee; 15 Put in individual proposals; 8 Try to stir up more public and professional interest; 3 Stay cool until some new money is available; 10 Other.

9. What did you think of the meeting?

   2 Too long                  37 Okay   4 Needed more time
   12 Too many talks           28 Okay   __ Too many long talks
   28 Inadequately focussed    19 Okay   12 Too constrained
   2 Too many attendees        37 Okay   __ Too few (really?)
   4 Most talks dull           18 Okay   22 Most talks interesting

Other comments:

10. Anything else you'd like to get off your chest?

Other comments:

"I believe that we do not want to ignore measurement of mesoscale atmospheric scales in quiet conditions."

"I think the evolutionary problem of cyclogenesis - dry line - thunderstorms - tornadoes would be an obvious choice to settle upon to give SESAME a scientific focus."

"SESAME's first experiment could be slipped one to two years without harm."

"Much planning and preliminary work needs to be done with adequate funding before any full-blown experiment."

"Funding: Why cannot NOAA, NASA, AEC, ... decide to commit a definite number of $ to outfit the basic experiment out of their base funding? This will assure the experiment can begin without the whims of Congress. Then go to Congress for the basic experiment; universities could then seek other funds to support complementary experiments."

"It seems clear that if SESAME is to fly, it will have to have a few clearly defined problems which have a reasonable chance of being solved. I would like to see a beginning of individual proposals of research that relate to the overall objectives of SESAME. Hopefully some initial 'pre-SESAME' funding could be made available from existing sources. These preliminary projects would have the objective of preparing for SESAME, and might include such items as: (1) Definition of mesoscale variability in the undisturbed environment. (2) Work on dynamic initialization schemes, in particular the use of the geosynchronous satellite data, which in my opinion has the only real chance of providing operational beta- and alpha-mesoscale data. In particular, can we force the mesoscale winds to adjust to mesoscale perturbations in the mass field? It works with hurricanes, but will it work in weaker forced systems? [See the editor's note on adjustment scales at the end of these comments.] (3) Predictability studies.
(4) Accuracy of minimal schemes and models under more or less undisturbed conditions. How many layers are necessary? How best to handle the PBL? A tremendous number of sensitivity tests will be needed with mesoscale models before they are 'believed' by the scientific community. (5) Idealized computer simulations, or simulations using available data sets, to help in the optimum design of the observations network."

"The next step in SESAME should be to provide a context within which individual proposals may be given, i.e., decide on a basic experimental design, and confirm or discard the network designs suggested in the PDP."

"Before any major planning or scheduling decisions are made we need a smaller, more focused meeting to discuss the PDP plan and the various alternatives available to us."

"Good job. Hate to spend so much time in a dark room under beautiful weather conditions - how about a session or two in the park?"

"(a) This project must not be too much of an extension of NHRE - the need is for generalizations, not case studies. As Medawar said, 'Since Newton discovered the laws of gravity, we need not study the fall of every apple.' 'Laws' for mesoscale behavior (not storm behavior) are required. (b) The goal (or one major goal) should be the collection of representative mesoscale data sets, for testing present mesoscale models, developing new mesoscale models, parameterizing convection (weak and moderate, as well as severe and strong), and parameterizing the energy source and sink properties of the surface boundary layer. (c) At least one specific question for measurement and study in the field can be defined now: What is the nature of the evolution in time and space of the height of the PBL? How is it affected by the ongoing evolution of the larger-scale circulation? Obviously, this parameter (h_PBL) is basic (although some so-called models presuppose a fixed value and use it as input). Equally obviously, field measurements of h(x,y,t) will have to be made (a) over very simple terrain, (b) over land, not water, since this is where people live and therefore where mesoscale numerical forecasts can best be applied to human needs, and (c) in more than one season."

"I think it is a mistake to videotape, record, and publish the proceedings of this meeting. The presentations are all rough and unreviewed. The presence of the cameras inhibited free discussion. It will take days or months of someone's time to go through the tapes. This should have been a free and open preliminary meeting, without the 'threat' of a report coming out. A more appropriate means of disseminating the results of the meeting would be a brief meeting summary (8 to 10 pages) written by someone like Doug Lilly. I enjoyed the meeting. Many of the presentations were scientifically stimulating. The political maneuvering was also interesting."
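[Editor's note on the adjustment question raised in the comments above: whether the winds adjust to the mass field or vice versa is governed, in classical geostrophic adjustment theory, by the scale of the perturbation relative to the Rossby radius of deformation. The numbers below are typical midlatitude values, inserted only for orientation; they are not drawn from the questionnaire.]

$$ \lambda_R \;=\; \frac{NH}{f} \;\approx\; \frac{(10^{-2}\ \mathrm{s^{-1}})(10^{4}\ \mathrm{m})}{10^{-4}\ \mathrm{s^{-1}}} \;=\; 10^{6}\ \mathrm{m} \;=\; 1000\ \mathrm{km} $$

Perturbations on scales much larger than $\lambda_R$ leave the mass field nearly intact while the wind adjusts to it; perturbations much smaller than $\lambda_R$ behave the other way, with the mass field adjusting to the wind. Since most mesoscale disturbances fall well below 1000 km, adjustment theory suggests that forcing the winds to follow a mesoscale mass perturbation is the harder direction, which is presumably why the commenter flags it as uncertain for weakly forced systems.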