UNIVERSITY OF CALIFORNIA, SAN DIEGO
ATMOSPHERIC OPTICS GROUP TECHNICAL NOTE NO. 260
Version 1.1, Jul 2003

Calibrated Fisheye Imaging Systems for Determination of Cloud Top Radiances from a UAV

For publication in Proceedings of SPIE Vol. 5151, Earth Observing Systems VIII, SPIE, Bellingham, WA

J. E. Shields, R. W. Johnson, M. E. Karr, A. R. Burden, J. G. Baker

The material contained in this note is to be considered proprietary in nature and is not authorized for distribution without the prior consent of the Marine Physical Laboratory.

SCRIPPS INSTITUTION OF OCEANOGRAPHY, MARINE PHYSICAL LAB, San Diego, CA 92152-6400

Calibrated Fisheye Imaging Systems for Determination of Cloud Top Radiances from a UAV

Janet E. Shields, Richard W. Johnson, Monette E. Karr, Art R. Burden, and Justin G. Baker

Marine Physical Lab, Scripps Institution of Oceanography, University of California San Diego, 9500 Gilman Dr., La Jolla CA 92093-0701

ABSTRACT

In order to measure cloud top radiances from Unmanned Aerial Vehicles (UAVs) or other light aircraft, two small calibrated fisheye imaging systems have recently been developed. One of these systems uses a visible-wavelength CCD and is optically filtered to measure cloud top and ground radiances near 645 nm. The other uses an InGaAs detector and is optically filtered to measure radiances near 1610 nm.
These sensors are specifically designed for use with DOE's Atmospheric Radiation Measurement (ARM) Program UAV Project, and it is anticipated that they will be used for comparison with a variety of satellite-borne radiance measurements. Radiometric calibration of solid-state imagers is never trivial, as the effects of exposure time, system non-linearities, temperature, gain and other system characteristics must be adequately measured and characterized. Much experience has been gained with the ground-based Day/Night Whole Sky Imagers and the Daylight Visible/NIR Whole Sky Imagers developed and used by the group for many years. New techniques for the radiometric calibration of the two new airborne systems are being developed based on this experience and the characteristics of the sensors involved. In addition, new techniques for a more accurate angular calibration have been developed.

Key words: Cloud, radiance, calibration, UAV, NIR, airborne, fisheye, imager, radiation, whole sky imager

1. OVERVIEW

Two calibrated fisheye imaging sensors have recently been developed for use with UAVs and other light aircraft. The Atmospheric Optics Group at Marine Physical Lab (MPL) at Scripps Institution of Oceanography has been active in the development of atmospheric sensors and the interpretation of their data for many years. The airborne imaging systems were based in large part on the Whole Sky Imagers (WSI) developed by this group (Johnson et al. 1989 and Shields et al. 1993 and 1998). The WSI systems are ground-based sensors used in UV research, global climate research, test site support, and other applications. They are designed to provide high quality digital imagery of sky conditions and, when combined with appropriate algorithms, provide automatic assessment of cloud amount and location within the scene, absolute radiance distribution, and related atmospheric parameters.
The Atmospheric Radiation Measurement (ARM) Program is a DOE program for measurement and evaluation of radiation and other parameters related to climate change. The ARM Unmanned Aerospace Vehicle (ARM UAV) Program makes use of UAVs and other small aircraft for studying evolving cloud fields and their effect on the solar and thermal radiation balance in the atmosphere. Recent operations were focused on improved understanding of solar and thermal fluxes at the top of the Single Column Model (SCM) column centered on the Southern Great Plains (SGP) ARM site (Tooman 2002). For this project, a variety of instruments were deployed, including Broadband Hemispheric Radiometers; a Cloud, Aerosol, and Precipitation Spectrometer; Cloud Detection Lidar; Cloud Integrating Nephelometer; Compact Millimeter Wave Radar; and other instruments. In addition, two sensors designated the Diffuse Field Camera (DFC) were developed and deployed. This DFC system consists of a pair of calibrated fisheye imagers mounted under the UAV pod, for nadir viewing of the complete lower hemisphere. This paper discusses the instruments that were developed by MPL for this DFC application. Within the context of the ARM UAV Program, they are called the Diffuse Field Camera (DFC); however, we have used the somewhat more generic name of Calibrated Fisheye Imaging Sensors in parts of this paper. The DFC camera system consists of two separate sensors: a system optimized to obtain calibrated radiance fields in the red portion of the visible spectrum near 645 nm, and a system optimized to obtain calibrated radiance fields in the NIR spectrum near 1610 nm. Each of the camera systems uses a fisheye lens to image the full hemisphere, along with a custom spectral filter to provide the desired spectral band. The visible system uses a commercial imager with a silicon CCD sensor, and the NIR system uses a commercial imager with an InGaAs sensor.

* ishields@ucsd.edu; phone (858) 534-1769; fax (858) 822-0665
The two camera systems are shown in Figures 1 and 2. Figure 3 illustrates one of the vehicles used to deploy the instrument suite for the ARM UAV program, and Figure 4 shows an early version of the UAV pod used with the program.

Figure 1: Visible Imager used for DFC System
Figure 2: NIR Imager used for DFC System
Figure 3: DHC-6 Twin-Otter used by ARM UAV Program
Figure 4: Early Version of UAV Pod

This paper will provide a brief overview of other related imaging systems from which the airborne calibrated fisheye sensors evolved, then discuss the system design and calibration of the airborne sensors, and provide sample imagery.

2. DEVELOPMENT OF FISHEYE IMAGING SYSTEMS AT MPL

The original concept for the development of calibrated fisheye imaging systems at MPL evolved out of the group's Atmospheric Optics program, a measurement and modeling program using multiple sensors for monitoring sky radiance, atmospheric scattering coefficient profiles, and other parameters related to vision through the atmosphere (Johnson et al., 1980). Beginning in the early 1980's, the group developed a series of ground-based calibrated fisheye imaging systems known as Whole Sky Imagers (WSI), which used digital imagery and were fully automated. The first automated WSI was conceived as combining the features of the all-sky cameras used in earlier programs with the scanning radiometer systems that provided quantitative measurements of sky radiance distribution. The first WSI systems used digital cameras (sometimes CCD, sometimes Charge Injection Device or CID imagers), with fisheye lenses, optical filter changers, relay optics to provide the proper image size and location, equatorial sun occultors to provide shading for the fisheye lens, and early versions of personal computers for automated control. Figure 5 shows some of this evolution.
The film-based all-sky camera in use in a 1963 deployment is shown in Figure 5a, and the automated Day-only WSI developed in the mid-1980's based on CID technology is shown in Figure 5b. With the use of very low noise 16-bit CCD cameras and an occultor modified to handle both sun and moon, these systems were further developed into the Day/Night WSI shown in Figure 5c. Some typical images from the Day/Night WSI are shown in Figure 6.

Figure 5: Some of the WSI Systems developed at MPL that contributed to the development of the Airborne Fisheye Imagers: a) the All-Sky Camera used in 1963; b) the Day-only WSI used in the 1980's; c) the Day/Night WSI used in the 1990's and currently in use.

Figure 6: Sample imagery from the Day/Night WSI for sunlight, moonlight, and starlight conditions.

To place the development of the airborne systems into the context of related developments at MPL, Figure 7 shows other systems that evolved at MPL for other applications. Figure 7a shows the Day/Night WSI designed for remote sites and real-time processing. This system was designed for applications in military test support. Figure 7b shows the Daylight Visible/NIR WSI designed to replace the old Day WSI, and to enhance the system with calibrated radiance in up to 7 spectral filters (Shields et al., 2003). Figure 7c shows imagery from a new mockup for a full scene imager with optical zoom for surveillance and tactical applications. This full scene imager uses a unique type of optical zoom developed at MPL, for which a patent is pending. The general concept is to image the full scene with one or more fisheye lenses or other wide angle lenses. The image from the fisheye is then split with a beam splitter, and one of the image planes is inspected with a microscope objective and second camera, providing a very high resolution image of selected regions of interest.
The choice of the region for high resolution view can be made by a motion detector or by a user with a touch screen.

Figure 7: Additional Imaging Systems in the WSI family at MPL: a) Day/Night WSI for real-time processing; b) Daylight Visible/NIR WSI; c) a full scene imager with patent-pending zoom.

The airborne systems are the lightest weight and least complex of the systems designed to date, and also extend the furthest into the NIR region. We were very pleased to have this opportunity to develop a system of this general size regime and application, as well as to extend the capabilities into new wavelength regimes.

3. AIRBORNE FISHEYE SYSTEM DESIGN

The requirements for the airborne fisheye system are considerably simpler in many ways than the requirements for such systems as the Day/Night WSI. Only one spectral filter was required per system, so no automated filter changer was needed. This change removes many of the more rigorous constraints on the optical design, because there is no need to stretch the back image plane to accommodate a mechanical assembly with moving filter wheels. The system looks down, so no solar occultor is required. The system does not need to acquire both day and night flux levels, so the system dynamic range requirements were considerably relaxed. On the other hand, weight and size were very important priorities. The most significant impact of these concerns was that the sponsor desired a system compatible with a PC104 format computer card (so it could be controlled from the same system running other on-board instruments). This requirement significantly narrowed the choice of available commercial imagers. Another requirement was that the system be radiometrically calibrated and provide a full distribution of radiance measurements over the lower hemisphere. This requirement placed constraints on image quality, dynamic range, sensor control, and sensor stability.
A prototype DFC had been deployed on a previous ARM UAV mission, based on MPL's design recommendations. The experience with this prototype was also very helpful in designing the new system. The new system uses a fisheye lens modified to hold a spectral filter, and a camera with a compatible imaging chip. The 2/3” Format Video Lens developed by Coastal Optical Systems, Inc. provides an image size compatible with most 2/3” format CCD chips, and its back focal length was sufficient for inserting a reasonably thin spectral filter between the lens and the image plane. The lens was custom fit with a filter adaptor to hold the spectral filter, although the filter adaptor was later modified at MPL for compatibility with the camera. The standard lens design was modified by Coastal Optical Systems Inc, in order to avoid cemented doublets that had a high potential for cracking due to thermal and/or mechanical stress during flight. A glass dome, with air blown between the dome and lens, was also added to this package. A custom filter was used in order to optimize the match with the desired filter specifications, while maintaining adequately thin physical dimensions for compatibility with the lens. The resulting spectral transmittance curve for the custom filter is shown in Figure 8, in comparison with the ideal spectral transmittance curve for this project. In selecting a visible imager, we wanted fixed or controlled gain, an appropriate array and pixel size, and of course compatibility with the PC104 computer format. The sponsors also preferred that we use an electronic shutter, for higher reliability. Detailed analysis of the anticipated flux levels and desired noise levels, in conjunction with evaluation of camera sensitivity, noise, dynamic range, shutter exposure range, and other factors, led to the selection of the DVC 1312M camera. This sensor was selected primarily due to its data output format options, and in part because it has a cooled CCD chip. 
The noise, dynamic range, and sensitivity considerations were reasonable for this system. Perhaps the biggest uncertainty with this type of camera is that it is difficult to obtain a dark image due to the lack of a mechanical shutter, so stability of the dark image becomes more important. For this reason, we would have preferred to obtain a CCD with temperature stabilization. This sensor is cooled but not temperature stabilized; however, obtaining a temperature-stabilized sensor was not practical within the given time and cost constraints. The visible imager has a full 1300 x 1030 array, with a pixel size to yield an anticipated optical image diameter of approximately 1015 pixels. This results in a spatial resolution of approximately 0.2 degrees per pixel in nadir angle. The sensor has 12-bit digital readout, with adequate flexibility in the shutter exposure options to obtain data that are nicely on-scale. The system readout noise specs are slightly higher than desired, with anticipated noise levels of about 1% of the signal for bright clouds, and 4% of the signal for land surfaces. Sensitivity computations for the full system (lens, filter, camera) showed that the images should be on-scale for reasonably short exposure times, so that there should not be significant amounts of smearing in most scenes. Camera control software was available in an interactive program for acquisition of the calibration data on a standard PC, and a library of camera routines was available for development of the acquisition software for airborne use. The airborne acquisition software, as well as the hardware for mounting the system in the UAV, were developed by our sponsors at Sandia National Laboratories. Similar considerations drove the design of the NIR system. The same type of fisheye lens was used; however, optical coatings optimized for the desired NIR wavelengths were used. As with the visible system, the filter mount was modified, and a custom interference spectral filter was used.
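As an aside, the angular-resolution figures quoted for the two imagers follow from simple geometry. The sketch below is illustrative only: it assumes an approximately equidistant fisheye mapping (nadir angle proportional to radial pixel distance) and a 90-degree half field of view, with the image diameters quoted in this paper.

```python
# Sketch of the angular-resolution estimate, assuming an equidistant fisheye
# mapping (nadir angle proportional to radial pixel distance) and a 90-degree
# half field of view. Image diameters are the values quoted in the text.

def degrees_per_pixel(image_diameter_px: float, half_fov_deg: float = 90.0) -> float:
    """Nadir-angle change per pixel of radial distance in the image."""
    return half_fov_deg / (image_diameter_px / 2.0)

vis_res = degrees_per_pixel(1015.0)  # visible imager: roughly 0.2 deg/pixel
nir_res = degrees_per_pixel(227.0)   # NIR imager: roughly 0.8 deg/pixel
print(f"visible: {vis_res:.2f} deg/pixel, NIR: {nir_res:.2f} deg/pixel")
```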
The spectral transmittance for the custom IR filter is shown in Figure 9, in comparison with the ideal spectral transmittance.

Figure 8: Visible custom filter spectral transmittance (solid line) and ideal spectral transmittance (dashed line)
Figure 9: NIR custom filter spectral transmittance (solid line) and ideal spectral transmittance (dashed line)

The NIR selection criteria were similar to those for the visible sensor. Design evaluations included consideration of the relative noise, sensitivity, pixel size, and other significant characteristics with respect to the anticipated imaging environment. For this application we chose Indigo's Alpha NIR camera, primarily due to the pixel size, which gave us the best pixel resolution for this lens, and the fact that the sensor chip was temperature stabilized. This system also has two selectable gains, which we felt might be important during deployments. The Alpha NIR has a 320 x 256 pixel format, and the pixel size was such that the diameter of the optical image would occupy approximately 227 pixels. While this provides somewhat limited angular resolution of about 0.8 degrees per pixel, it was the best that could be obtained within the cost constraints (while retaining a full hemisphere field of view). Temporal noise was anticipated to be reasonably low, with anticipated noise levels of less than 1% of the signal for bright clouds, and approximately 1.5% for land surfaces. Sensitivity of the full system (lens, filter, and sensor) was such that reasonably short exposure times could be used in flight.
4. CALIBRATION OF THE SYSTEM RELATIVE RESPONSE

In order to provide absolute radiometry as a function of angle with respect to the nadir, several calibrations are required, including the relative response or linearity, the absolute response, and the angular calibration. Sensor relative response, as characterized by system linearity, can be measured in a variety of ways. We normally measure it in two ways: first with a fixed exposure setting and variable light levels from the optical bench (using FEL lamps traceable to NIST), and second with a fixed input flux level but variable exposures. We have found in the past that systems that are considered nominally linear may result in significant errors if the linearity is not characterized and corrected appropriately (Shields et al., 2003). The visible DVC camera linearities were reasonably systematic and very repeatable. The curves at first appeared to be quite non-linear, because we were not properly correcting for dark current. The vendor-provided software essentially set the electronic bias such that the dark current was truncated. When the bias was changed to a higher value to yield a "live zero" for dark current, we found the dark-corrected signal response was close to linear over most of the range at fixed exposures. The response as a function of changing exposure was somewhat non-linear at short exposures. This made it more difficult to de-couple the non-linearity effects of changing exposure from the non-linearity effects of changing flux level on the chip. Rather than characterizing the non-linearity as a function of both exposure and flux changes, as is normally done (Shields, 2003), only selected exposures were used, and the non-linearity as a function of flux changes at these exposures was characterized. Linearity results for one of the selected exposures, 62.02 msec, are shown in Figures 10 and 11.

Figure 10: Visible camera measured linearity data taken at a fixed exposure of 62.02 msec, offset 2.2, and speed 2.5
Figure 11: Resulting non-linearity relative to a signal of 1000

As may be seen in Figures 10 and 11, the resulting linearity at the selected exposure is quite good, with non-linearities of about 3% or less. The character of the non-linearity was best fit with an eighth-degree polynomial, so that the non-linearities could be corrected. The linearity results for the other selected exposure, 130.49 msec, were similar. When corrected, the residual uncertainty due to non-linearity effects is less than 1%. The linearity curves were also acquired at two temperatures differing by approximately 40 degrees C. With this temperature change, we found no significant change in the linearity.

The NIR imager showed somewhat more non-linearity than the visible imager, particularly as a function of exposure setting. As with the visible system, we decided this could be handled most accurately by characterizing the non-linearity correction at the few exposures that were actually used, rather than providing a general characterization for all combinations of exposure and flux level. In addition, we found with this particular sensor that the dark image was abnormal at certain exposures, and there were discontinuities in the response curve at these exposures. We were told by the manufacturer that it was best to avoid these discontinuity points, and so we were careful to select exposures that were not associated with discontinuities. Only one exposure was actually used in the flight series. The linearity results for the NIR sensor are shown in Figures 12 and 13.
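The polynomial linearity correction described above can be sketched as follows. The response curve here is synthetic (invented for illustration, with a few percent of compression near full scale); the actual calibration fit measured bench data, using an eighth-degree polynomial for the visible camera.

```python
import numpy as np

# Illustration of a polynomial linearity correction on a synthetic response
# curve. The curve and its degree of non-linearity are invented placeholders;
# the real calibration used measured optical-bench data.
expected = np.linspace(100.0, 4000.0, 40)        # lamp-derived expected signal
observed = expected * (1.0 - 8e-6 * expected)    # ~3% compression at 4000 counts

# Fit a polynomial mapping observed counts back to linearized counts.
# Degree 3 suffices for this synthetic curve.
coeffs = np.polyfit(observed, expected, deg=3)
corrected = np.polyval(coeffs, observed)

residual = np.max(np.abs(corrected - expected) / expected)
print(f"max residual non-linearity after correction: {residual:.3%}")
```

Applying the fitted polynomial to dark-corrected field data then removes most of the non-linearity, in the spirit of the sub-1% residuals quoted above.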
When these non-linearities are best fit and field data are corrected accordingly, we anticipate a residual error of about 1% or less over most of the span, with errors of up to about 5% for signals below 200 counts (on a 4000-count range). This is not perfect, but is a large improvement over the 50 - 100% errors that would have occurred below a signal of 200 counts if the linearity effects were not carefully measured and applied to the field data. This is perhaps a reasonable place to comment that even though the non-linearities shown in Figure 13 are quite significant, this system would normally be considered linear in common parlance. We have found before that cameras that are advertised to have 1/2% or 1% linearity may induce very significant errors if not rigorously and radiometrically characterized for linearity. When we worked with two separate manufacturers of imaging systems to try to understand the differences between our manner of characterizing the linearity and theirs, we found that both manufacturers used similar methods for determining linearity. The method that both used was reasonable for determining if the system was linear in an operational sense, but not for radiometric purposes. We were able to duplicate their data, and we were able to duplicate their results with our data, for both cameras. Out of four cameras that were characterized as linear to within one percent for operational purposes, rigorous radiometric tests showed that one of them was linear for radiometric purposes, one was close (Fig. 11), and two others (including that shown in Fig. 13) required larger linearity corrections.

Figure 12: NIR camera measured linearity data taken at a fixed exposure of 15.9 msec, low gain
Figure 13: Resulting non-linearity relative to a signal of 1000

In summary, the linearity was characterized at the actual exposure settings used during flights for both the visible and NIR sensors. The resulting non-linearity curves were characterized by a polynomial best-fit equation for application to dark-corrected field data.

5. CALIBRATION OF THE ABSOLUTE RADIOMETRIC RESPONSE OF THE SYSTEM

In order to provide the absolute radiance distribution for the scene, absolute calibration must also be performed. For the DFC systems, several steps are required, as discussed below. First, the effective lamp irradiance for the system passband was determined. Then a region of interest (ROI) near the center of the field of view was calibrated against standard lamps traceable to NIST, to determine the absolute calibration. A uniformity calibration provided the relative response as a function of pixel position. A crosstalk calibration determined the change in response of the central region of interest as a function of the lighting in the rest of the field of view. And, for the visible system only, the response of the system to changes in operating temperature was calibrated.

To determine the effective lamp irradiance, the spectral filter transmittances and the chip sensitivity were characterized and integrated with the lamp spectral irradiance to generate the effective lamp irradiance from the equation

    E_eff = [ ∫ E_λ S_λ T_λ1 … T_λn dλ ] / [ ∫ S_λ T_λ1 … T_λn dλ ]        Eq. 1

where
    E_λ = lamp spectral irradiance
    S_λ = sensor spectral sensitivity
    T_λ1 … T_λn = filter spectral transmittance for filters 1 through n
    λ = wavelength

This effective lamp irradiance is used in processing the remaining calibrations.
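Numerically, Eq. 1 is a sensitivity- and filter-weighted average of the lamp spectral irradiance over the system passband. The sketch below evaluates it on a uniform wavelength grid; all three spectral curves are invented placeholders standing in for the measured lamp, sensor, and filter data.

```python
import numpy as np

# Numerical sketch of Eq. 1. All spectral curves below are invented
# placeholders, not measured lamp, sensor, or filter data.
wl = np.linspace(600.0, 700.0, 1001)                    # wavelength grid, nm
E_lam = 1.0e-2 * (wl / 650.0) ** 2                      # lamp spectral irradiance
S_lam = np.exp(-0.5 * ((wl - 650.0) / 40.0) ** 2)       # sensor spectral sensitivity
T_lam = np.where(np.abs(wl - 645.0) < 15.0, 0.8, 1e-4)  # single filter transmittance

weight = S_lam * T_lam           # product S_lambda * T_lambda1 ... T_lambdan
# On a uniform grid the ratio of integrals reduces to a weighted mean.
E_eff = np.sum(E_lam * weight) / np.sum(weight)
print(f"effective lamp irradiance: {E_eff:.4e}")
```

As expected, the result lands close to the lamp irradiance at the filter's center wavelength, since the weighting confines the average to the passband.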
For the absolute calibration of the central ROI, redundant measurements are made at a series of lamp positions on a 3-meter calibration bar. The lamp is directed at a lambertian plaque in order to provide a known radiance to the sensor. The lamp position was varied such that eleven measurements over a 1-log range could be acquired. These signals should result in redundant calibration constant determinations. One measure of the uncertainty in the calibrations is the percentage STD obtained between the eleven resulting computed calibration constants. These results are shown with and without the linearity correction in Table 1. For the visible system, we achieved better than half a percent self-consistency. The NIR results were consistent to within 3% after the linearity correction. Some of this uncertainty in the NIR is due to residual linearity uncertainties, but we believe that some is also due to small amounts of stray light within the calibration room at the NIR wavelengths.

Table 1: Self-Consistency of Absolute Calibration Measurements

                         Consistency as given by STD (%)
  Calibration            No lin corr    With lin corr
  Visible at 62 msec     0.56%          0.15%
  Visible at 130 msec    0.56%          0.47%
  NIR at 16 msec         19.9%          2.8%

The uniformity calibration was determined with a matte white hemisphere lit by a bulb behind a baffle, similar to an integrating sphere. Measurements are acquired as this system is slowly rotated, and the resulting images are processed to use only the half of the field of view away from the source. These multiple images are combined to yield a uniformity image that includes the effects of both the pixel-to-pixel flat field and the lens rolloff. Both the visible and the NIR system yielded somewhat interesting results. The visible system uniformity image is shown in Figure 14. The rolloff, or change as a function of nadir angle, is shown for lines through the vertical and horizontal in Figure 15.
They differ from each other due to vertical smearing, an artifact of the storage and transfer mechanism in CCD systems without mechanical shutters. We minimized it to some extent by using a more closed aperture and longer exposures than originally planned. We hope to further mitigate this effect in future deployments by using shorter readout times, but it will be necessary to evaluate the noise vs. smearing tradeoffs as a function of readout speed. The NIR uniformity image appears to be approximately symmetric radially, although analysis of the NIR uniformity is not yet complete at this time.

Figure 14: Uniformity Image for Visible Camera
Figure 15: Rolloff in sensitivity along vertical and horizontal lines through the nadir in the Uniformity image

The crosstalk portion of the absolute calibration is designed to account for the fact that there may be some crosstalk between pixels, and/or a slight increase of the signal due to the wings of the point spread function (PSF) of the optics. Although we have verified that the wing of the PSF is less than the peak by many orders of magnitude, there is a measurable difference between the signal when only the central ROI is lit and when the full field is lit. This change was measured and applied to the calibration. The visible camera also had to be calibrated for thermal effects, because the CCD is thermally cooled but not stabilized. To measure these effects, a small thermal chamber was used, and the chip's thermo-electric cooler was also controlled externally with a switch added by MPL.
It was determined that although there was a small change in the signal, this was only due to the change in dark current. The sensitivity of the dark-corrected signal showed no change over a 20 degree C temperature range. We feel that at the present time, the greatest source of uncertainty in the visible system calibration may be the thermal effects, because it is difficult to measure the dark current at the low temperatures encountered in flight.

6. THE ANGULAR CALIBRATION OF THE SYSTEM

The angular calibration is designed to determine a mapping between nadir and azimuth angle in object space, and pixel position in image space. Some years ago, an angular calibration was set up for the Day/Night WSI systems, in which the camera is mounted on a rotary table, with the entrance optics centered over the center of rotation, and the center of rotation oriented under a plumb bob hanging from the ceiling. Wall markers had been set up using a theodolite, so that angles were known with respect to the plumb bob to fractions of a degree. The same technique was used for the DFC system, and the system was rotated so that reasonably accurate calibrations of both nadir and azimuth angle could be determined. The calibration data acquired in this room were processed to yield angular mappings as shown in Figures 16 and 17. As part of this procedure, a special setup was devised to determine which pixel corresponded to the nadir point when the camera was mounted in its deck mounting. The deck mounting was designed to allow realignment of the camera within the mounting, but to hold the camera rigidly once it was set. The deck mounting was defined as the reference frame for the DFC cameras, and pitch, roll and similar corrections are made with respect to this deck. A new test was devised to determine the nadir pixel with respect to this reference plane, and additional tests were devised to determine the accuracy of the technique.
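The angular-mapping product described above can be sketched as follows: fit the rotary-table calibration points with a polynomial relating radial pixel distance to nadir angle, then evaluate that fit at every pixel's radial distance to build nadir- and azimuth-angle maps. The calibration points, nadir-pixel location, and near-linear mapping below are all hypothetical stand-ins, not measured values.

```python
import numpy as np

# Sketch of turning angular-calibration points into per-pixel angle maps.
# Calibration points and the nadir-pixel location are hypothetical values.
radius_cal = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])  # pixels from nadir
nadir_cal = np.array([0.0, 17.7, 35.5, 53.2, 70.9, 88.7])        # degrees (assumed)
fit = np.polyfit(radius_cal, nadir_cal, deg=3)   # radius -> nadir angle

ny, nx = 64, 64                                  # toy image dimensions
cy, cx = 31.5, 31.5                              # assumed nadir pixel
y, x = np.indices((ny, nx))
r = np.hypot(x - cx, y - cy)                     # radial distance per pixel
nadir_map = np.polyval(fit, r)                   # nadir-angle map, degrees
azimuth_map = np.degrees(np.arctan2(y - cy, x - cx)) % 360.0
print(nadir_map.shape, azimuth_map.shape)
```

The resulting pair of arrays plays the role of the nadir- and azimuth-angle maps applied to field data, as in Figure 17.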
Discussion of these techniques is beyond the scope of this paper, but preliminary results indicate accuracies to within approximately one to two pixels.

Figure 16: Zenith-to-Pixel Mapping derived from angular calibration images
Figure 17: Resulting azimuth angle and nadir angle maps for use with field data

7. SAMPLE IMAGERY FROM THE VISIBLE AND NIR SYSTEMS

Both of the calibrated fisheye imaging systems appear to have survived the rather harsh deployment conditions and acquired good images. Measured exterior housing temperatures ranged from -30°C to +25°C during flights; however, inspection of the systems on return indicated no cracking of lenses or other obvious problems, and the lenses and filters did not appear to move during flight. The focus of the systems had been optimized prior to deployment using an MPL technique which measures and then optimizes the point spread function of a point source. We found that images acquired during the deployment appeared to remain well in focus. System sensitivity appears to have been appropriate. The visible system images were acquired near 130 msec and 62 msec integration time, and are nicely on-scale. Figure 18 shows an example taken on 22 November with no clouds, and Figure 19 shows an example from 24 November with scattered clouds. In each case, the dynamic range appears to be appropriate, with signals ranging from approximately 500 to 3000. There is no obvious smearing in these cases, nor in cases with the clouds at higher altitude.

Figure 18: Visible DFC Image, 22 Nov 02, 62 msec

Noise levels appear to be reasonable, and about at expected levels. For example, the standard deviations in reasonably uniform portions of the cloud scene with signals near 2000 to 2500 are about 2 - 5% (net variance, i.e. cloud field variation plus noise). Patches of field at lower signals near 500 to 700 have standard deviations of 3 - 4% net variance.
These images have been averaged on a 2 x 2 basis, but since this variance includes both the scene variance and the various noise sources, we believe the achieved noise was reasonably low. During calibration at somewhat warmer temperatures, standard deviations of 2% were measured at both of these signal levels.

Like the visible system, the NIR camera appears to have performed well during flight. There were no indications of mechanical problems, and the post-deployment focus remained good. A sample image acquired at the same time as the visible image in Figure 19 is shown in Figure 20. (The dark circle outside the image is where the data have been masked; the NIR image is somewhat smaller, and appears as the gray circular image somewhat inside the masked area.)

Figure 19: Visible DFC Image, 24 Nov 02, 62 msec

Figure 20: NIR DFC Image, 24 Nov 02, 16 msec

The data in Figure 20 appear to be reasonable, although their range is somewhat limited, with the brightest clouds having signal levels of around 2000. A more detailed study of the data set would be required to determine whether the exposure setting could be further optimized. Signal variance levels are reasonable, with about 2% over fields and 4% over clouds. The actual noise level measured in the calibration room (temporal plus spatial) was about 1% at similar flux levels. The field data have not yet been fully calibrated and analyzed, and are thus not presented here.

In summary, the DFC camera system performance appears to be quite good for this application. The calibration data were systematic and repeatable, and the calibration analysis indicates that the calibration data are self-consistent and reasonable. The measured field data look reasonable. The overall performance indicates that, in spite of the cost and size constraints, these are viable systems for use on UAVs and other platforms, and we anticipate that they will provide high quality calibrated absolute radiance distributions over the full lower hemisphere.
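The net-variance figures quoted in this section are simple patch statistics: the standard deviation over a nominally uniform region, expressed as a percentage of the mean signal, computed after 2 x 2 averaging. A minimal sketch of that computation follows; the patch values are simulated (a hypothetical uniform region near signal 2200 with ~3% noise), not flight data:

```python
import numpy as np

def average_2x2(img):
    """2 x 2 box averaging, as applied to the flight images."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    v = img[:h, :w]
    return 0.25 * (v[0::2, 0::2] + v[0::2, 1::2] + v[1::2, 0::2] + v[1::2, 1::2])

def percent_std(img, rows, cols):
    """Percent standard deviation (std/mean) over a nominally uniform patch,
    i.e. the 'net variance' of scene variation plus sensor noise."""
    patch = img[rows[0]:rows[1], cols[0]:cols[1]]
    return 100.0 * patch.std() / patch.mean()

# Simulated uniform patch: mean signal 2200, 3% uncorrelated noise.
rng = np.random.default_rng(0)
sim = 2200.0 + rng.normal(0.0, 66.0, size=(100, 100))
avg = average_2x2(sim)
```

For uncorrelated noise, the 2 x 2 averaging halves the standard deviation, so a 3% raw patch drops to roughly 1.5% after averaging; real cloud scenes add correlated scene variance on top of this, which is why the flight numbers exceed the calibration-room noise floor.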
8. ACKNOWLEDGEMENTS

We would like to express our appreciation to Sandia National Laboratories and the DOE's Atmospheric Radiation Measurement Program, administered by Battelle's Pacific Northwest National Laboratory, for their support of this work. We would especially like to thank Dr. Tim Tooman, Ken Black, John Smith, and Jason Reinhardt of Sandia National Laboratories. In addition, we would like to thank the professional staff of DVC Company, Indigo Systems Corp., and Coastal Optical Systems Inc. for their assistance with our questions regarding the performance of their products.

9. REFERENCES

1. R. W. Johnson, W. S. Hering, and J. E. Shields, Automated Visibility and Cloud Cover Measurements with a Solid State Imaging System, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, SIO 89-7, GL-TR-89-0061, NTIS No. ADA216906, 1989.

2. J. E. Shields, R. W. Johnson, and T. L. Koehler, Automated Whole Sky Imaging Systems for Cloud Field Assessment, Fourth Symposium on Global Change Studies, American Meteorological Society, 1993.

3. J. E. Shields, R. W. Johnson, M. E. Karr, and J. L. Wertz, Automated Day/Night Whole Sky Imagers for Field Assessment of Cloud Cover Distributions and Radiance Distributions, Tenth Symposium on Meteorological Observations and Instrumentation, American Meteorological Society, 1998.

4. T. Tooman, ARM-UAV Science and Experiment Plan, Fall 2002 Flight Series, Sandia National Laboratories, Livermore, California, 2002.

5. R. W. Johnson, W. S. Hering, J. I. Gordon, B. W. Fitch, and J. E. Shields, Preliminary Analysis and Modeling Based Upon Project OPAQUE Profile and Surface Data, University of California, San Diego, Scripps Institution of Oceanography, Visibility Laboratory, SIO Ref. 80-5, AFGL-TR-0285, NTIS No. ADB-052-1721, 1980.

6. J. E. Shields, R. W. Johnson, M. E. Karr, A. R. Burden, and J. G. Baker, Daylight Visible/NIR Whole Sky Imagers for Cloud and Radiance Monitoring in Support of UV Research Programs, International Symposium on Optical Science and Technology, SPIE the International Society for Optical Engineering, 2003.