ESSA Technical Report ERL 166-ITS 107

A UNITED STATES DEPARTMENT OF COMMERCE PUBLICATION

U.S. DEPARTMENT OF COMMERCE
Environmental Science Services Administration
Research Laboratories

Evaluation of the Loran Tests at Anniston, Alabama, and Panama City, Florida

BERNARD WIEDER
JAMES S. WASHBURN

BOULDER, COLO.
JUNE 1970

ESSA RESEARCH LABORATORIES

The mission of the Research Laboratories is to study the oceans, inland waters, the lower and upper atmosphere, the space environment, and the earth, in search of the understanding needed to provide more useful services in improving man's prospects for survival as influenced by the physical environment. Laboratories contributing to these studies are:

Earth Sciences Laboratories: Geomagnetism, seismology, geodesy, and related earth sciences; earthquake processes, internal structure and accurate figure of the Earth, and distribution of the Earth's mass.

Atlantic Oceanographic and Meteorological Laboratories: Oceanography, with emphasis on the geology and geophysics of ocean basins, oceanic processes, sea-air interactions, hurricane research, and weather modification (Miami, Florida).

Pacific Oceanographic Laboratories: Oceanography; geology and geophysics of the Pacific Basin and margins; oceanic processes and dynamics; tsunami generation, propagation, modification, detection, and monitoring (Seattle, Washington).

Atmospheric Physics and Chemistry Laboratory: Cloud physics and precipitation; chemical composition and nucleating substances in the lower atmosphere; and laboratory and field experiments toward developing feasible methods of weather modification.

Air Resources Laboratories: Diffusion, transport, and dissipation of atmospheric contaminants; development of methods for prediction and control of atmospheric pollution (Silver Spring, Maryland).
Geophysical Fluid Dynamics Laboratory: Dynamics and physics of geophysical fluid systems; development of a theoretical basis, through mathematical modeling and computer simulation, for the behavior and properties of the atmosphere and the oceans (Princeton, New Jersey).

National Severe Storms Laboratory: Tornadoes, squall lines, thunderstorms, and other severe local convective phenomena toward achieving improved methods of forecasting, detecting, and providing advance warnings (Norman, Oklahoma).

Space Disturbances Laboratory: Nature, behavior, and mechanisms of space disturbances; development and use of techniques for continuous monitoring and early detection and reporting of important disturbances.

Aeronomy Laboratory: Theoretical, laboratory, rocket, and satellite studies of the physical and chemical processes controlling the ionosphere and exosphere of the earth and other planets.

Wave Propagation Laboratory: Development of new methods for remote sensing of the geophysical environment; special emphasis on propagation of sound waves, and electromagnetic waves at millimeter, infrared, and optical frequencies.

Institute for Telecommunication Sciences: Central federal agency for research and services in propagation of radio waves, radio properties of the earth and its atmosphere, nature of radio noise and interference, information transmission and antennas, and methods for the more effective use of the radio spectrum for telecommunications.

Research Flight Facility: Outfits and operates aircraft specially instrumented for research; and meets needs of ESSA and other groups for environmental measurements for aircraft (Miami, Florida).

ENVIRONMENTAL SCIENCE SERVICES ADMINISTRATION
BOULDER, COLORADO 80302

U.S. DEPARTMENT OF COMMERCE
Maurice H. Stans, Secretary

ENVIRONMENTAL SCIENCE SERVICES ADMINISTRATION
Robert M. White, Administrator

RESEARCH LABORATORIES
Wilmot N.
Hess, Director

ESSA TECHNICAL REPORT ERL 166-ITS 107

Evaluation of the Loran Tests at Anniston, Alabama, and Panama City, Florida

BERNARD WIEDER
JAMES S. WASHBURN

INSTITUTE FOR TELECOMMUNICATION SCIENCES
BOULDER, COLORADO
June 1970

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402. Price 55 cents.

TABLE OF CONTENTS

ABSTRACT
1. INTRODUCTION
2. BACKGROUND
3. AIRPORT TEST
4. EQUIPMENT PERFORMANCE
5. SYSTEMATIC DISCREPANCIES
   5.1 Long-Range Effects
   5.2 Statistical Considerations
   5.3 Local Effects
6. DISCUSSION AND RECOMMENDATIONS
7. ACKNOWLEDGEMENTS
8. REFERENCES
FIGURES

EVALUATION OF THE LORAN TESTS AT ANNISTON, ALABAMA, AND PANAMA CITY, FLORIDA

Bernard Wieder and James S. Washburn

The results from an evaluation of the differential loran technique conducted in the southeastern United States are discussed. The discussion centers on equipment performance and data error analysis. The results from the evaluation show relatively large systematic discrepancies from predicted loran coordinates. The analysis attempts to isolate the errors attributable to long- and short-range propagation effects from systematic errors in the manpack receivers and the loran chain. Recommendations are made for possible future tests to more clearly determine long- and short-range loran propagation effects for a given area.

Key Words: Loran-C, differential loran-C, ground wave propagation, manpack receivers, irregular terrain, inhomogeneous terrain

1. INTRODUCTION

The Naval Applied Science Laboratory (NASL) has been developing a "differential loran" technique for use in combat areas. Two (or more) loran navigation receivers are used, which are first compared with each other at a single site. One receiver remains at that site; the other is taken on the mission.
By using the first (fixed) receiver as a calibration instrument and comparing the moving receiver's readings with it, the errors caused by offsets and systematic temporal variations in the loran grid can be determined from the fixed receiver, and the readings of the moving receiver can be appropriately adjusted to better determine position. To implement the technique, NASL tested two manpack loran-C navigators (Electronics, 1968). Mr. David Pessin directed the differential loran program for NASL; Mr. Fred Pappalardi was in charge of the field tests. ESSA personnel participated in the tests and in the evaluation of the data. This report presents some of the results of the ESSA evaluation.

2. BACKGROUND

Two test series were run, the first during May 1968 and the second during May 1969. Each series involved two test areas, one in the vicinity of Anniston, Alabama, and the other near Panama City, Florida. The Anniston area is characterized by heavily wooded rolling hills at a mean altitude of 600 to 700 ft, with occasional ridges rising to about 2000 ft. These ridges made it possible to investigate the effects on the differential loran measurements of loran grid anomalies due to irregular terrain. Table 1 lists the names of the sites used, all of which were located as near as possible to Coast and Geodetic Survey benchmarks for precise position determination. Two (Horn and Able) were on top of ridges near fire watch towers. Also given in table 1 are the latitudes and longitudes of the benchmarks and the predicted time-difference readings that were expected for the loran triad, consisting of the master station at Cape Fear, North Carolina, and the slave stations at Jupiter Inlet, Florida, and Dana, Indiana. The predicted values are based on routine loran calculations with secondary phase corrections (Johler, Kellar, and Walters, 1956) based on sea water conductivity. The Panama City area is lightly wooded, relatively flat, and near the seashore.
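The fixed-receiver correction described in the introduction can be sketched in a few lines. The following is a minimal illustration only; the site values, offset, and drift below are invented and are not readings from the report.

```python
# A minimal sketch of the differential-loran correction: a fixed receiver at a
# surveyed benchmark observes the same grid offset and temporal drift as the
# mission receiver, so subtracting the fixed receiver's error from the mission
# reading removes the common part. All numbers here are hypothetical.

FIXED_SITE_PREDICTED_TDA = 14281.63   # predicted TDA at the fixed site (usec)

def differential_correction(mission_tda, fixed_observed_tda):
    """Remove the grid error common to both receivers from a mission reading."""
    common_error = fixed_observed_tda - FIXED_SITE_PREDICTED_TDA
    return mission_tda - common_error

# Both receivers see the same grid offset plus drift (0.35 usec, invented).
grid_error = 0.35
fixed_observed = FIXED_SITE_PREDICTED_TDA + grid_error
mission_true = 14292.69               # TDA the mission point should read (invented)
mission_observed = mission_true + grid_error

corrected = differential_correction(mission_observed, fixed_observed)
print(f"residual after correction: {abs(corrected - mission_true):.6f} usec")
```

The correction removes only the error component common to both receivers; local terrain anomalies at the mission point, discussed later in this report, remain in the corrected reading.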
The names and locations of the Florida sites appear in table 2. To test the manpack receivers, TDA and TDB readings in microseconds were taken at all the sites. TDA is here defined as the time difference between the master station and slave A (Jupiter Inlet), TDB as the time difference between the master station and slave B (Dana). Miles are statute miles.

Table 1. Alabama test sites.

Site       N. Latitude     W. Longitude    TDA (Master-Jupiter)   TDB (Master-Dana)
                                           (μsec)                 (μsec)
Delta      33°24'03.59"    85°41'34.69"    14236.66               68590.79
Taylor     33°33'10.15"    85°39'07.60"    14292.69               68558.35
TT-1       33°43'08.60"    85°54'17.12"    14317.93               68410.40
Airport    33°35'27.12"    85°51'23.23"    14281.63               68470.03
Horn       33°17'52.01"    86°04'28.79"    14160.56               68483.28
Able       33°33'36.10"    85°41'54.65"    14289.66               68538.64
Bynum      33°37'03.71"    85°59'08.01"    14275.80               68413.95
Piedmont   33°56'00.79"    85°37'01.14"    14421.94               68446.60
Mead       33°42'36.74"    85°57'51.34"    14308.21               68391.44

Table 2. Florida test sites.

Site       N. Latitude     W. Longitude    TDA (Master-Jupiter)   TDB (Master-Dana)
                                           (μsec)                 (μsec)
Burnt      30°19'53.42"    85°45'01.54"    13075.76               69299.94
Southport  30°16'47.39"    85°38'45.95"    13055.73               69342.19
West       30°14'50.24"    85°52'50.44"    13043.17               69273.27
Goose      29°57'50.84"    85°26'43.13"    12929.66               69457.55
Park       30°08'13.04"    85°44'23.66"    13000.13               69336.52

3. AIRPORT TEST

The basic technique for taking readings was to observe repeated readouts of each TDA and TDB for 15 s, note the smallest and the largest readings during that period, and take the average of the two as the recorded reading. This technique could easily be used by unskilled operators. Statisticians refer to it as the "midrange" method of estimating the true reading. Taking the average of all the readings is another technique for estimating the true reading; however, finding averages of repeated readings proved too tedious to be useful.
The effectiveness of any approach for obtaining a good estimate of the true value depends on the distribution of the data points and other statistical considerations. Crow and Siddiqui (1967) discuss several methods for obtaining good estimates for data points with various statistical distributions. One of the tests made at the airport site provides a direct comparison of the "midrange" versus the "average" technique. Figure 1 shows the layout of the test. Markers were set at 200-ft increments along the airport taxiway, and two points were marked along an 890-ft line perpendicular to the taxiway that intersected the Coast and Geodetic Survey benchmark near the edge of the airport. The distances are indicated in figure 1, where points 1 through 8 are the locations where the readings were taken. Two sets of readings were taken at each point, except point 8, where only one set was taken. One set of readings was taken while the operator was walking up the taxiway, and the second on the return trip. The results are shown in figures 2 and 3. The dashed lines connect averaged receiver readings, while solid lines connect the midrange values. Also shown (displaced) are the gradients, in microseconds per foot, predicted for the airport site markers from Pierce et al. (1948). As these figures show, the midrange approach does not yield as good a result as the average. The latter provides better repeatability, and the gradients derived from the average values come much closer to the gradients predicted for the system at the location of the measurements. Finding the average would represent an important step forward in improving the performance of the manpack receivers.
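The midrange-versus-average comparison can be illustrated with a small simulation. The Gaussian readout-noise model and the scatter value below are assumptions chosen for illustration, not the receivers' measured behavior.

```python
import random
import statistics

# Compare the "midrange" estimator (mean of min and max) with the ordinary
# average over a 15-reading observation period, under an assumed Gaussian
# readout-noise model. TRUE_TD and SIGMA are invented for illustration.
random.seed(1)

TRUE_TD = 14281.63      # usec, arbitrary reference value
SIGMA = 0.05            # usec, assumed readout scatter

def midrange(xs):
    return (min(xs) + max(xs)) / 2.0

mid_errs, avg_errs = [], []
for _ in range(2000):
    readings = [random.gauss(TRUE_TD, SIGMA) for _ in range(15)]  # ~15 s of readouts
    mid_errs.append(midrange(readings) - TRUE_TD)
    avg_errs.append(statistics.fmean(readings) - TRUE_TD)

rms = lambda es: (sum(e * e for e in es) / len(es)) ** 0.5
print(f"midrange RMS error: {rms(mid_errs):.4f} usec")
print(f"average  RMS error: {rms(avg_errs):.4f} usec")
```

For normally distributed readings the average has a markedly smaller RMS error than the midrange, consistent with the airport-test result; for a uniform distribution the midrange would instead be the efficient choice, which is why the distribution of the data matters (Crow and Siddiqui, 1967).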
While it would be a tedious chore to obtain averages from the equipment in its present configuration, it would be relatively easy and inexpensive to incorporate an extra decade (or two) in the counter that drives the time-difference display, and thus automatically find the average of 10 (or 100) readings. This test also demonstrates that the receiver can resolve incremental distances. Under conditions similar to those during the test, and based on the average readings, the manpack receiver can easily resolve differential distances to better than 250 ft.

4. EQUIPMENT PERFORMANCE

The performance of the manpack receivers during both series of tests was best evaluated by examining three principal features of the data obtained from them: (1) the systematic differences between the observed and predicted readings; (2) the discrepancies between readings of the two receivers; and (3) the erratic behavior of the data from one receiver or the other on specific days. Feature (1) is discussed in detail in later sections. In this later discussion, for both series of tests, it is assumed that the adverse contributions to the data from features (2) and (3) have been removed. Thus, in the later data analysis, which attempts to attribute the systematic discrepancies to either long-range or local effects, we assume we are working with the best possible data from the manpack receivers. To eliminate the adverse contributions to the data from features (2) and (3), the following step was taken. The data from both series of tests were analyzed for obvious defects. The data from the second series of tests appeared to be much less variable than those of the first. Also, the discrepancy between the readings of the two receivers (units 2 and 5) for the second series appeared to be far less than in the first, in which units 1 and 2 were used.
The results of this cursory data analysis indicated that all data from the second series of tests should be used in the analysis of feature (1). However, the data from the first series needed culling. To display the data and their characteristics from the first series of tests, the mean value of the midrange readings for TDA and TDB for a given day on each receiver at each site was derived. This is plotted on the vertical scales in figures 4 through 14. On the abscissa, a line two standard deviations long, centered about zero, is drawn through the corresponding mean value. Solid lines identify unit 1; dashed lines, unit 2. The numbers beside the line show the date in May 1968 when the data were obtained. The numbers in parentheses beside the line show the sample size for the particular receiver on that date. From this display, unsatisfactory receiver performance is easily observed. One indicator of unsatisfactory performance is when the readings on one day differ substantially from the readings taken at the same site by the same receiver on another day; indicative also is when the standard deviations are excessively large. Thus, data from days of readings at a particular site are rejected if they have anomalously large standard deviations, or if their mean does not visibly cluster with the means for other days of readings at that site. These rejected data are included in figures 4 through 14 and are annotated by the letter "R", but they are excluded from all subsequent analysis. The Alabama tests of the first series produced a sufficient number of readings at each site that we believe the rejection of the poor data leads to a better estimate of the overall mean value. However, in the Florida tests of the first series, there were fewer readings. Further, the spread in the day-to-day mean values was much greater in the Florida than in the Alabama tests.
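The day-screening rule just described can be sketched programmatically. The thresholds and readings below are hypothetical; the report applied the rule by visual inspection of figures 4 through 14 rather than by fixed numeric cutoffs.

```python
import statistics

# Sketch of the screening rule: reject a day's readings at a site if its
# standard deviation is anomalously large, or if its mean does not cluster
# with the other days' means. sd_factor and mean_tol are invented thresholds.

def screen_days(days, sd_factor=3.0, mean_tol=0.3):
    """days: dict mapping day-of-month -> list of TD readings (usec).
    Returns the subset of days that pass both screening tests."""
    stats = {d: (statistics.fmean(r), statistics.stdev(r)) for d, r in days.items()}
    median_sd = statistics.median(sd for _, sd in stats.values())
    median_mean = statistics.median(m for m, _ in stats.values())
    kept = {}
    for day, (m, sd) in stats.items():
        if sd > sd_factor * median_sd:       # excessively large dispersion
            continue
        if abs(m - median_mean) > mean_tol:  # mean fails to cluster
            continue
        kept[day] = days[day]
    return kept

# Hypothetical readings at one site: day 27 scatters badly and is screened out.
days = {
    25: [14281.60, 14281.65, 14281.62, 14281.63],
    26: [14281.61, 14281.64, 14281.66, 14281.60],
    27: [14282.40, 14281.10, 14282.90, 14280.70],
}
print(sorted(screen_days(days)))   # day 27's readings are rejected
```

As the report notes, such culling risks biasing the result when few days of readings are available, which is the concern raised for the first-series Florida data.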
We thought it desirable to select only the best data, but by rejecting apparently poor data, we may have seriously biased the outcome of the analysis of the Florida data. A one-way analysis of variance was applied to the data behind figures 4 through 14 to test whether the site-to-site differences in means were indeed systematic. For all cases, i.e., Florida TDA and TDB, and Alabama TDA and TDB, these differences were found to be systematic. Thus, the equipment performed in a satisfactory manner for the most part. For the first series, where the two receivers showed large discrepancies and where the receivers behaved erratically, the data were eliminated from the analysis. For the second series, equipment performance was acceptable.

5. SYSTEMATIC DISCREPANCIES

Ideally, the differential loran technique should resolve the errors between the fixed and the mission receivers that are caused by temporal variations and offsets. That is, with temporal variations eliminated, the mission receiver's observed loran coordinates should locate a geographic point whose predicted loran coordinates are identical with those observed. In fact, however, the discrepancy between the observed and the predicted readings varied from site to site in both series of tests. These discrepancies, which will be analyzed in the following discussion, can be attributed either to long-range effects, i.e., the effects of perturbations integrated over the total paths between the transmitters and receiver, or to localized perturbations. The two will be treated separately, but, as we shall see, the data are too scanty to determine whether the observed systematic discrepancies are caused by local or long-range effects.

5.1 Long-Range Effects

The time difference between the master and the slave stations is essentially a measured difference in phase, calibrated in microseconds, plus a constant. It can be described, for the two slave stations, by the equations

    TDA = K₁D_sA − K₂D_m + K₄                                    (1)

and

    TDB = K₃D_sB − K₂D_m + K₅,                                   (2)

where D_sA, D_sB, and D_m are the distances from the receiver to slave A, slave B, and the master station, respectively, and the K's are propagation constants. The K's are in error to the extent that the actual phase velocities over the inhomogeneous propagation paths differ from those assumed in the predicted values. The discrepancies ΔTDA and ΔTDB between the observed and predicted time differences can then be written

    ΔTDA = ε₁D_sA − ε₂D_m + ε₄                                   (3)
and

    ΔTDB = ε₃D_sB − ε₂D_m + ε₅,                                  (4)
where the ε's account for the errors in the K's for the reasons given above. Two equations are required, one for ΔTDA and one for ΔTDB, since the propagation paths from the respective slave stations are different. Thus, the discrepancy in K₁ and K₃ for the two time-difference readings may well be different. The two time-difference equations, however, do have K₂ in common.

Using the measured values of ΔTDA and ΔTDB at the sites used in the experiment, our objective is to determine the values of the ε's (and their standard deviations) that give the best fit to the data. Whatever residuals remain after the best fit is obtained will depend to some extent on the statistical processing of the data to determine the ε's and their confidence intervals, but the residuals can be assumed to be due principally to the phase perturbations resulting from local terrain effects and to other effects that may enter in a nonlinear way. Also, since in some instances the data recording sites were offset from the benchmarks, offset errors will also affect the residuals. At the Taylor site, for example, the offset was particularly large. No attempt was made to correct for the offsets, since no information on offset distance or direction was available.
5.2 Statistical Considerations

To obtain the least squares estimates of the ε's, we used the Gauss-Markoff least squares theorem (David and Neyman, 1938) to minimize the quantity

    S = Σ_{i=1}^{N} [ w_Ai (ΔTDA_i − ε₁D_sAi + ε₂D_mi − ε₄)² + w_Bi (ΔTDB_i − ε₃D_sBi + ε₂D_mi − ε₅)² ]        (5)
with respect to the ε's. The quantities ΔTDA_i and ΔTDB_i are the differences between the observed and predicted TDA's and TDB's at the i-th site (i = 1, ..., N); w_Ai and w_Bi are the weights to be assigned to ΔTDA_i and ΔTDB_i, respectively; D_sAi, D_sBi, and D_mi are the distances from the i-th site to the slave A, slave B, and master transmitters, respectively; the ε's are the quantities, to be determined, that give the best linear fit to the data; and N is the number of sites at which measurements were taken.
The standard deviations of the ε's can be estimated through an estimate of the variances of the ε's. The latter estimate is given by

    (estimate of variance of ε_j) = (S_min / (2N − 5)) Σ_{i=1}^{N} (δ_Ai²/w_Ai + δ_Bi²/w_Bi),   j = 1, ..., 5        (6)

where S_min is the minimum value of S, obtained by substituting the estimates of the ε's determined by the least squares minimization procedure into (5), and δ_Ai and δ_Bi are the coefficients of ΔTDA_i and ΔTDB_i, respectively, in the expression for ε_j in terms of the ΔTDA_i and ΔTDB_i. The equations for the minimization conditions and (6) are easily expressed in matrix form for easier computation. Implicit in (5) is the assumption that the ΔTDA_i and ΔTDB_i are mutually independent. This may well not be the case; if not, the solution is much more complicated but not necessarily much better, for we would have to approximate covariances (or correlations) for the ΔTDA_i and ΔTDB_i.
Another question in treating the data statistically is how best to assign the weights w_Ai and w_Bi. If all data contributing to the determination of the mean ΔTDA_i and ΔTDB_i were independently distributed, an appropriate weight for each ΔTDA_i and ΔTDB_i would be the inverse of the variance of the mean, which is given by

    w_Ai = n_i / S_Ai²    and    w_Bi = m_i / S_Bi²,

where S_Ai² and S_Bi² are the sample variances calculated from the n_i and m_i observations at the i-th site. On the other hand, if the overriding errors can be ascribed to random systematic site errors, it would be more appropriate to weight all the ΔTDA_i and ΔTDB_i equally, i.e., w_Ai = w_Bi = 1. Although the latter probably comes closer to the actual experimental situation than the former, both weighting techniques were applied, with similar results. For both cases, data points that appeared to reflect obvious equipment difficulties, as described in section 4, were not included in the analysis.
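The minimization of (5) reduces to a small linear system, as noted above. The following sketch solves it through the normal equations; the site geometry, weights, and "true" ε values are synthetic, chosen only to show that the five-parameter fit recovers the generating values.

```python
# Sketch of the least squares minimization of (5) via the normal equations.
# Unknowns x = (e1, e2, e3, e4, e5); each site contributes one TDA row and
# one TDB row. All distances and epsilon values below are invented.

def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def fit_epsilons(sites, weights=None):
    """sites: (DsA, DsB, Dm, dTDA, dTDB) per site; returns [e1, ..., e5]."""
    rows, rhs, w = [], [], []
    for i, (dsa, dsb, dm, dta, dtb) in enumerate(sites):
        wa, wb = (1.0, 1.0) if weights is None else weights[i]
        rows.append([dsa, -dm, 0.0, 1.0, 0.0]); rhs.append(dta); w.append(wa)  # eq (3)
        rows.append([0.0, -dm, dsb, 0.0, 1.0]); rhs.append(dtb); w.append(wb)  # eq (4)
    # normal equations  (A^T W A) x = A^T W b
    ata = [[sum(wi * r[a] * r[b] for wi, r in zip(w, rows)) for b in range(5)]
           for a in range(5)]
    atb = [sum(wi * r[a] * y for wi, r, y in zip(w, rows, rhs)) for a in range(5)]
    return solve(ata, atb)

# Synthetic check: build discrepancies from known epsilons, then recover them.
TRUE_EPS = [0.004, 0.003, 0.005, 0.1, -0.2]   # e1-e3 in usec/mile, e4-e5 in usec
geometry = [(300, 550, 210), (320, 540, 235), (280, 600, 190),
            (350, 520, 250), (310, 580, 200)]  # DsA, DsB, Dm in statute miles
e1, e2, e3, e4, e5 = TRUE_EPS
sites = [(dsa, dsb, dm,
          e1 * dsa - e2 * dm + e4,
          e3 * dsb - e2 * dm + e5) for dsa, dsb, dm in geometry]
est = fit_epsilons(sites)
print([round(e, 6) for e in est])   # recovers TRUE_EPS
```

With noise-free synthetic data the fit is exact; with real site data the residuals left after the fit carry the local terrain and offset effects discussed in section 5.1.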
Table 3 shows, for each of the 11 sites used in the first series of tests, the comparison of the observed and calculated ΔTDA's and ΔTDB's obtained from the least squares fit calculated for both types of weighting factors discussed above. The points predicted by this technique are shown in figures 15 through 18 by crosses for weights w_i = n_i/S_i² and by open squares where the weights are all equal to unity. The analogous comparisons for the 13 sites from the second series of tests are given in table 4 and in figures 19 through 22. Table 5 gives the ε's and the estimates of their standard deviations (σ) for the first series of tests; table 6 gives the values derived from the second series. The values derived for ε₁, ε₂, and ε₃ should be compared with 0.0039, which is the value we would expect if the errors in the predicted values were due solely to a secondary phase correction for
Figure 13. Data dispersion at Goose site, 1968. (Predicted TDA 12929.52 μsec; predicted TDB 69457.55 μsec. Abscissa: deviation from mean, μsec. Solid lines unit 1, dashed lines unit 2; R marks rejected data; (N) is the sample size; numbers indicate day of month.)

Figure 14. Data dispersion at Park site, 1968. (Predicted TDA 13000.13 μsec; predicted TDB 69336.52 μsec.)

Figure 15. Least squares estimates of observed discrepancies, 1968 Alabama TDA. (Abscissa: D_s − D_m, statute miles. O observed discrepancy; X weights w_i = n_i/S_i²; □ weights w_i = 1.)

Figure 16. Least squares estimates of observed discrepancies, 1968 Alabama TDB.

Figure 17. Least squares estimates of observed discrepancies, 1968 Florida TDA.

Figure 18. Least squares estimates of observed discrepancies, 1968 Florida TDB.

Figure 19. Least squares estimates of observed discrepancies, 1969 Alabama TDA.

Figure 20. Least squares estimates of observed discrepancies, 1969 Alabama TDB.

Figure 21. Least squares estimates of observed discrepancies, 1969 Florida TDA.