3.3

Impact Assessment of a Doppler Wind Lidar for NPOESS/OSSE



Stephen J. Lord, Michiko Masutani, John C. Woollen, John C. Derber

NOAA/NWS/NCEP/EMC, Camp Springs, MD



G. David Emmitt, Sidney A. Wood, Steven Greco

Simpson Weather Associates, Charlottesville, VA

Robert Atlas, Joseph Terry

NASA/GSFC, Greenbelt, MD

Thomas J. Kleespies

NOAA/NESDIS, Camp Springs, MD



http://www.emc.noaa.gov/research/osse



1. INTRODUCTION



The future National Polar-orbiting Operational Environmental Satellite System (NPOESS) is scheduled to fly during the 2007-2010 period. Over the next 10 years, a considerable amount of effort must take place to define, develop, and build the suite of instruments that will comprise the NPOESS. The forecast impact of current instruments can be assessed by Observing System Experiments (OSEs), in which already existing observations are denied or added relative to a standard data base. The impact of future instruments, however, must be assessed with experiments using simulated observations. These experiments are known as Observing System Simulation Experiments (OSSEs; Atlas 1997).

This project is a collaboration among the National Centers for Environmental Prediction (NCEP), the NASA Data Assimilation Office (DAO), Simpson Weather Associates (SWA), and the National Environmental Satellite, Data and Information Service (NESDIS). Through this collaboration, the data assimilation and modeling communities can be involved in instrument design and can provide information about the expected impact of new instruments. Furthermore, through the OSSEs, operational data assimilation systems will be ready to handle new data in time for the launch of new satellites. This process involves preparing for future data volumes in operations and developing the data base, the data processing (including formatting), and a quality control system. All of this development will accelerate the operational use of data from future instruments (Lord et al. 1997).

For each OSSE, a long integration of an atmospheric general circulation model (GCM) is required to provide a "true atmosphere" for the experiment. This is called the "nature run" (NR). The nature run needs to be sufficiently representative of the actual atmosphere but different from the model used for the data assimilation. The observational data for existing and future instruments are simulated from the NR, and impact tests are performed for both real and simulated data. The nature run used in this project, as well as the data assimilation system and forecast model used in these experiments, is described in Masutani et al. (2002).

Among the various candidate instruments, Doppler wind lidar (DWL) wind data are produced as line-of-sight (LOS) winds by SWA using their Lidar Simulation Model (LSM). Bracketing sensitivity experiments are being performed for various technology-neutral DWL concepts to bound the potential impact (Lord et al. 2001a). Scanning and various data sampling strategies are being tested with these experiments.



3. SIMULATION OF OBSERVED DATA

Details of the procedures used to simulate observational data are described in Masutani et al. (1999b) and Lord et al. (2001a, 2001b); these papers are available at the OSSE web site. The initial simulation uses the real observational data distribution available in February 1993. ACARS and satellite-derived winds are simulated with their February 1993 distribution. TOVS level 1B radiance (T1B) data are simulated by NOAA/NESDIS; details are described in Lord et al. (2001a) and Kleespies and Crosby (2001). NASA/DAO is taking the lead in the simulation of realistic conventional observations, including cloud motion vectors (CMV; Velden et al. 1997) and ACARS data. More realistic CMVs, based on the NR cloud fields and on a more recent CMV distribution, are being simulated in collaboration with SWA.
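As a minimal illustration of this simulation step, the sketch below draws a "perfect" observation from the NR by interpolating the gridded truth to the observation location, with an optional random observation error. It is a sketch only, with hypothetical array names; the operational simulators described above are far more elaborate and, for radiances, involve a full forward radiative transfer calculation.

    import numpy as np

    def simulate_obs(nr_field, lats, lons, obs_lat, obs_lon,
                     err_std=0.0, rng=np.random.default_rng(0)):
        """Bilinearly interpolate a 2-D nature-run field (lat x lon) to an
        observation location; coordinates are assumed monotonically increasing."""
        i = np.searchsorted(lats, obs_lat) - 1
        j = np.searchsorted(lons, obs_lon) - 1
        wy = (obs_lat - lats[i]) / (lats[i + 1] - lats[i])
        wx = (obs_lon - lons[j]) / (lons[j + 1] - lons[j])
        truth = ((1 - wy) * (1 - wx) * nr_field[i, j]
                 + (1 - wy) * wx * nr_field[i, j + 1]
                 + wy * (1 - wx) * nr_field[i + 1, j]
                 + wy * wx * nr_field[i + 1, j + 1])
        # err_std = 0 reproduces the error-free case used in the initial DWL tests.
        return truth + rng.normal(0.0, err_std)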

In this paper the impact of DWL is assessed in the context of existing instruments. However, it is important that the assessment also be done with the more advanced instruments expected to be flying when a DWL would actually be launched. Higher-density CMVs and more advanced sounders need to be included in the impact assessment. The Atmospheric Infrared Sounder (AIRS; Goldberg et al. 2001) is scheduled to be included in the NPOESS Preparatory Project (NPP) instrument suite. As one of the advanced sounders, AIRS is planned to be included in the calibration and impact assessment. An outline of the simulation of AIRS data is given in Masutani et al. (2001). The AIRS simulation package was originally developed by Evan Fishbein of JPL. The simulation (i.e., the forward calculation) is based on radiative transfer code developed by Larrabee Strow (UMBC). AIRS will be simulated by NOAA/NESDIS using the simulation package from JPL. The addition of a scatterometer (e.g., ASCAT) is also being considered, and a simulation method is being investigated.

3.1 Simulation of DWL data

The simulation of DWL data includes efforts with DWL performance models, atmospheric circulation models, and atmospheric optical models (Atlas and Emmitt, 1995; Emmitt, 1995a; Emmitt and Wood, 1996; Wood et al., 1993; Wood et al., 1995; Wood et al., 2001). The steps between a notional concept for a DWL and the blueprints for instrument construction include a considerable amount of performance modeling and, for space-based systems, an intensive series of OSSEs. During and subsequent to the Laser Atmospheric Wind Sounder (LAWS) study (Baker et al., 1995), a method for assessing the potential impact of a new DWL observing system was established. The instrument parameters are provided by the engineering community. Scanning and sampling requirements are provided by the science community and define various instrument scenarios. These scenarios are tested initially by examining the sensitivity of the analyses to each of them. A candidate DWL concept is then chosen for a full OSSE, and the resulting impact study is evaluated by a technology-neutral group.
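Because a DWL measures only the component of the wind along the laser beam, each simulated observation is an LOS wind rather than a (u, v) pair. The sketch below shows that projection under one common sign convention; it illustrates the geometry only and is not the LSM's actual code.

    import numpy as np

    def los_wind(u, v, w, azimuth_deg, nadir_deg):
        """Project wind (u east, v north, w up; m/s) onto the line of sight of a
        down-looking lidar.  azimuth: beam direction clockwise from north;
        nadir: beam angle from the local vertical.  Positive = away from sensor."""
        az = np.radians(azimuth_deg)
        nd = np.radians(nadir_deg)
        horizontal = u * np.sin(az) + v * np.cos(az)  # wind along the beam azimuth
        return horizontal * np.sin(nd) - w * np.cos(nd)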

The bracketing OSSEs are being performed for various DWL concepts to bound the potential impact. Later OSSEs will be performed for more specific instruments (Emmitt, 1999). The following "technology-neutral" observation coverage and measurement error characterizations will be explored.



EXP 1 (Best): An ultimate DWL that provides full tropospheric LOS soundings, clouds permitting.

EXP 2 (PBL+cloud): An instrument that provides wind observations only from clouds and the PBL.

EXP 3 (Upper): An instrument that provides mid- and upper-tropospheric winds, but only down to the levels of significant cloud coverage.

EXP 4 (Non-scan): A non-scanning instrument that provides full tropospheric LOS soundings, clouds permitting, along a single line that parallels the ground track.

TRV: 200 km x 200 km x T, where T is the thickness of the TRV:
0.25 km if z < 2 km; 1 km if z > 2 km; 0.25 km for cloud returns.

Swath width: 2000 km, except for EXP 4 (non-scanning).
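The TRV specification above reduces to a simple rule for the volume thickness; a sketch (the function name is ours, the values are those listed above):

    def trv_thickness_km(z_km, cloud_return=False):
        """Thickness T of the 200 km x 200 km x T resolution volume (TRV)."""
        if cloud_return:
            return 0.25   # cloud returns always use the thin layer
        return 0.25 if z_km < 2.0 else 1.0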



No measurement error is assigned for the initial test. Strategies for systematic errors are discussed by Emmitt (2000). One measurement is an average of many shots. Data products based upon clustered and distributed shots are generated for each experiment. The clustered data product is based upon averaging the observations associated with shots clustered within an area that is very small compared to the base area of the TRV. The distributed data product is based upon averaging the observations of shots distributed throughout the TRV as would result from continuous conical scanning.
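The difference between the two products lies only in which shots enter the average. A sketch, assuming each shot has already been converted to an LOS wind and using a nominal 10 km cluster patch (an assumption of this illustration; the LSM's actual processing is more involved):

    import numpy as np

    def trv_average(shot_los, shot_xy, trv_size_km=200.0, clustered=True,
                    cluster_km=10.0):
        """Average per-shot LOS winds into one TRV observation.
        shot_los: (n,) LOS winds; shot_xy: (n, 2) shot positions in km within
        the TRV.  Clustered: use only shots in a small patch at the TRV centre;
        distributed: use every shot (continuous conical scanning)."""
        if clustered:
            centre, half = trv_size_km / 2.0, cluster_km / 2.0
            inside = np.all(np.abs(shot_xy - centre) <= half, axis=1)
            shots = shot_los[inside]
        else:
            shots = shot_los
        return shots.mean() if shots.size else np.nan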

Distributed shots for the non-scan experiment (EXP 4) are not realistic. However, they are used to test penetration through cloud. In the real atmosphere, cloud has porosity, which is not described in the NR archive; porosity lets some DWL shots pass through the cloud. This is not possible for the NR cloud, since clouds are uniform within a grid box in the NR. Distributed measurements collect many shots within the TRV, so there is a greater chance that some penetrate through the atmosphere. This does not exactly model the porosity of the cloud, but it is used to check the penetration due to porosity.
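The advantage of distributed shots for penetration can be illustrated with a toy Monte Carlo calculation. This is illustrative only: each shot's encounter with a layer of fractional cloud cover is treated as an independent draw, which stands in for the porosity effect that the uniform NR grid-box cloud cannot represent.

    import numpy as np

    def penetration_fraction(cloud_fraction, n_shots, n_trials=10000,
                             rng=np.random.default_rng(0)):
        """Fraction of TRVs in which at least one of n_shots passes through a
        single cloud layer of the given fractional coverage."""
        blocked = rng.random((n_trials, n_shots)) < cloud_fraction
        return np.mean(~blocked.all(axis=1))

    # With 80% cloud cover, a single clustered mega-shot penetrates ~20% of the
    # time, while 20 distributed shots penetrate at least once ~99% of the time.
    print(penetration_fraction(0.8, 1), penetration_fraction(0.8, 20))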

EXP 2 and EXP 3 are simulated to test various wavelengths and measurement methods. They have been tested but are not presented in this paper. More recent developments are discussed by Emmitt (2001).



4. INITIAL RESULTS FOR IMPACT ASSESSMENT FOR DWL



Prior to testing future instruments, data impact tests of existing instruments were performed to calibrate the OSSEs (Masutani et al. 2002). The results showed reasonable agreement between the simulated and real data impacts, but the interpretation needs to be conducted with great caution.

Table 1 lists the experiments used for calibration and for the DWL impact assessment. Table 2 lists the time- and area-averaged root mean square error (RMSE) between the control experiments and the NR. The period used for the averaging is February 14 to March 6. The values are zonally averaged and then averaged over the latitude bands indicated in the table for the southern hemisphere (SH), the tropics (TROP), and the northern hemisphere (NH). Impacts on zonal wind (U) at 200 hPa, 500 hPa, and 850 hPa are presented. Changes in RMSE from the control experiments are presented as impacts in Table 3.
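A sketch of how these statistics can be computed (the cos-latitude area weighting and the array shapes are assumptions of this illustration; the actual evaluation is done on the assimilation grid):

    import numpy as np

    def band_rmse(analysis, nature, lats, band):
        """Time- and zonally averaged RMSE of a (time, lat, lon) field against
        the nature run within a latitude band such as (20, 80) for the NH."""
        sel = (lats >= band[0]) & (lats <= band[1])
        err2 = ((analysis - nature) ** 2)[:, sel, :].mean(axis=(0, 2))
        wgt = np.cos(np.radians(lats[sel]))   # assumed area weighting
        return np.sqrt(np.average(err2, weights=wgt))

    # Impact as in Table 3: positive means EXP is closer to the nature run.
    # impact = band_rmse(ctl, nr, lats, (20, 80)) - band_rmse(exp, nr, lats, (20, 80))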

The first three experiments in Table 3 are denial tests for existing instruments. These experiments are also used for calibration, and the impacts are compared with those in real analyses. For the calibration, the analyses were compared with the 1B experiment; for the simulated data, the analyses can now be compared with the truth. They show that R-Wind has the largest impact in the NH and T1B in the SH. This result agrees with the impact test on forecast skill scores (Masutani et al. 2001). The impact of R-Temp on the wind fields is small, and it is not clear whether it is positive.



4.1 Results



Among the many candidate instruments for the OSSE, DWL winds are simulated by SWA. Following the strategy of bracketing sensitivity experiments (Lord et al. 2001a, 2001b; Masutani et al. 2001), scanning versus non-scanning configurations, various wavelengths, and various numbers of LOS per measurement are being tested. Sensitivity to the weight given to the data in the assimilation is also being tested.

For the first few days, more than 20 cases were tested with various combinations, and selected cases were completed for the whole OSSE period (00Z February 13 to 00Z March 7, 1993). The experiments discussed in this paper are listed in Table 1. The distributed data for the non-scanning scenario are not realistic. However, they are used to test the effect of penetration. Because of the averaging over each 200 km square area, more DWL shots penetrate to lower levels for distributed shots. The amount of penetration is still an unknown quantity and needs to be investigated.

The impacts are measured as the change in RMSE with respect to the NR. Table 3 shows zonally and time averaged values for three latitude bands as differences from the RMSE between the control experiment (CTL) and the NR. A positive value indicates that the experiment has a positive impact relative to CTL.

Rows 4-7 in Table 3 show the advantage of a scanning instrument (rows 4 and 5) over a non-scanning instrument (rows 6 and 7). The differences are largest in the upper troposphere and are reduced in the lower troposphere. Distributed shots are better than clustered shots in most cases. Comparison of the distributed experiments (rows 4 and 6) with the clustered experiments (rows 5 and 7) shows that the penetration achieved by distributed data is important in the lower troposphere.

Representativeness errors of 1 m/s and 7 m/s were tested for the first week, and the results are presented in Lord et al. (2001c). The impact with a representativeness error of 7 m/s is about 10-20% less than that with 1 m/s, but the geographical distribution of the impact does not change.

With T1B in CTL, DWL data improved the wind fields globally at all levels in all experiments (rows 4-7 in Table 3), with the major improvements over the tropics. Marseille et al. (2001) found the major impact in the SH because their CTL did not include T1B. If T1B is included, the major improvement in the SH has already been achieved by T1B, and the major improvements due to DWL move to the tropics. Without T1B in CTL (i.e., using NTV as CTL), more improvement is achieved in the SH even by the worst case of DWL (Dex4cr7, row 9) than by T1B (row 8). Although T1B and Dex4cr7 show impacts of similar magnitude in the SH and minimal impact in the NH, row 1 of Table 4 shows that there are significant differences between experiments 1B and Dex4cr7. Therefore, T1B and Dex4cr7 together yield a further improvement (row 10 in Table 3). In the NH, neither T1B nor the worst case of DWL (Dex4cr7) produces a significant impact; a significant impact is achieved by the best case of DWL (Dex1dr1, row 11).

Dex4cr7 was also run with R-Wind denied (NTVNWIN as CTL), and its impact is compared with that of R-Wind (rows 12 and 14, Table 3). The results show that the worst case of DWL (Dex4cr7) does not produce as much impact over the NH as R-Wind. However, the impact of the best case of DWL (Dex1dr1) in the NH is twice that of R-Wind (row 15, Table 3). Adding T1B alone can even cause a negative impact over the NH (row 13, Table 3). The distance between NTV and Dex4cr7NWIN in Table 4 indicates that the impact of R-Wind and the impact of DWL are quite different.



4.2 Comments on the Results

The DWL is evaluated here with the 1993 data distribution. However, DWL winds also need to be evaluated with the current data distribution and with the anticipated future data distribution corresponding to the time when DWL data would actually be used.

In this paper no measurement error is included in the DWL data. Systematic errors are discussed by Emmitt (2000), and other large-scale correlated errors need to be designed and added to the assessment. Various sampling strategies, such as the separation between forward and backward scans and adaptive observation, also need to be tested.
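One simple way to realize such a large-scale correlated error is a Gaussian draw with an exponential covariance along the satellite track. The sketch below is only one possibility; the correlation length and error magnitude are free parameters, not values from this study.

    import numpy as np

    def correlated_track_error(track_km, std, corr_len_km,
                               rng=np.random.default_rng(0)):
        """Gaussian error with exp(-|d|/L) correlation at the given along-track
        positions (km), so that nearby shots share large-scale error structure."""
        d = np.abs(track_km[:, None] - track_km[None, :])
        cov = std ** 2 * np.exp(-d / corr_len_km)
        chol = np.linalg.cholesky(cov + 1e-10 * np.eye(len(track_km)))
        return chol @ rng.standard_normal(len(track_km))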

In this paper only results for U are presented. The impact on the meridional wind (V) is similar to that on U. The impact on the temperature fields is more sensitive and complicated. The impact on temperature from radiance data and R-Temp involves many procedures that alter the results, such as bias correction. The impact on temperature from DWL winds is even more complicated. It is interesting to note that, while distributed data usually give a larger positive impact on the wind fields than clustered data, the temperature fields are sometimes better with the clustered data. Analysis and simulation procedures need to be evaluated and developed further for more reliable results.



5. FUTURE PLANS



The calibration will be continued to gain further confidence in the OSSE system. Various techniques for adding systematic errors will be tested. The simulation procedure for T1B requires further evaluation, including the formulation of observational errors.

In addition to a DWL and AIRS, the Cross Track Infrared Sounder (CrIS), Conically-scanning Microwave Imager/Sounder (CMIS), and the Advanced Technology Microwave Sounder (ATMS) have been proposed as candidate instruments to be tested by OSSEs. We are proceeding to develop appropriate forward models for these instruments.

In order to make reliable recommendations, the techniques for creating simulated observations need to be refined. The addition of large-scale spatially correlated errors and systematic errors to the simulated data may alter the results.

OSSEs also need to be conducted with upgraded techniques for data handling and upgraded data assimilation systems. Since the data volumes from future instruments will increase drastically, effective super-observations that reduce the sizes of the data sets need to be studied (Purser et al. 2001). Including an adaptive correction for data bias in the assimilation will also be tested (Purser and Derber, 2001).
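For reference, the conventional box-average super-observation to which Purser et al. (2001) propose an alternative can be sketched as follows; this is only the baseline idea, not their compression method.

    import numpy as np

    def superob(lat, lon, val, box_deg=2.0):
        """Thin dense observations by averaging all values that fall in each
        box_deg x box_deg latitude-longitude box."""
        key = (np.floor(lat / box_deg).astype(int) * 100000
               + np.floor(lon / box_deg).astype(int))
        boxes = {}
        for k, la, lo, v in zip(key, lat, lon, val):
            boxes.setdefault(k, []).append((la, lo, v))
        merged = np.array([np.mean(grp, axis=0) for grp in boxes.values()])
        return merged[:, 0], merged[:, 1], merged[:, 2]  # mean lat, lon, value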

Future instruments need to be tested with the 2001 and future data distributions, since the 1993 data distribution is outdated. Alternative NRs for the same period and for summer have also been generated by NASA/DAO and can be used to investigate additional atmospheric regimes. NRs that test the northern summer response are important, especially for studying the impact on tropical storm prediction.

The evaluation metrics will be expanded to include diagnostics of the strength and position of cyclones and jets and a study of extreme events, as well as standard forecast skill scores. Cost-benefit analyses and flight planning will also be studied.



ACKNOWLEDGMENT



We received much assistance from the Data Services Section and Dr. Anthony Hollingsworth of ECMWF in supplying the nature run. Throughout this project, NOAA/NWS/NCEP, NASA/DAO, and NOAA/NESDIS staff provided much technical assistance and advice. In particular, we would like to thank W. Yang, R. Treadon, Y. Zhu, W.-S. Wu, M. Iredell, D. Keyser, W. Collins, and R. Kistler of NCEP, G. Brin, S. Bloom and N. Wolfson of DAO, and V. Kapoor, P. Li and W. Wolf of NESDIS. We would like to thank Evan Fishbein of JPL for providing simulation code for AIRS data. Drs. E. Kalnay, W. Baker, J. Yoe and R. Daley provided expert advice. We appreciate the constructive comments from members of the OSSE Review Panel. This project is sponsored by the Integrated Program Office (IPO) for NPOESS and by the NOAA Office of Atmospheric Research (OAR) and the NOAA National Environmental Satellite, Data and Information Service (NESDIS). We thank Drs. Stephen Mango, Alexander MacDonald, John Gaynor, Jim Ellickson and John Pereira for their support and assistance in this project.





REFERENCES



Atlas, R. and G.D. Emmitt, 1995: Simulation studies of the impact of space-based wind profiles on global climate studies. Proc. AMS Sixth Symp. on Global Change Studies, Dallas, TX, January 1995.

Atlas, R., 1997: Atmospheric observation and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan, 75, 111-130.

Baker, W.E., G.D. Emmitt, F. Robertson, R.M. Atlas, J.E. Molinari, D.A. Bowdle, J. Paegle, R.M. Hardesty, R.T. Menzies, T.N. Krishnamurti, R.A. Brown, M.J. Post, J.R. Anderson, A.C. Lorenc and J. McElroy, 1995: Lidar-measured winds from space: An essential component for weather and climate prediction. Bull. Amer. Meteor. Soc., 76, 869-888.

Becker, B. D., H. Roquet, and A. Stoffelen, 1996: A simulated future atmospheric observation database including ATOVS, ASCAT, and DWL. Bull. Amer. Meteor. Soc., 77, 2279-2294.

Derber, J. C. and W.-S. Wu, 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287 - 2299.

Emmitt, G.D., 1995a: OSSE's in support of a small-satellite mission. Paper presented at the NOAA Working Group on Space-based Lidar Winds, Clearwater, FL, January 31-February 2.

Emmitt, G.D. and S.A. Wood 1996: Lidar Mapping of Cloud Tops and Cloud Top Winds, 1996. PL-TR-96-2129, F19628-93-C-0196, 1996.

Emmitt, G. D., 2000: Systematic errors in simulated Doppler wind lidar observations. http://www.emc.ncep.noaa.gov/research/osse/swa/sys_errors.htm

Emmitt, G. D., 2001: Global wind observational requirements and the hybrid observing system approach. AMS preprint volume for the Fifth Symposium on Integrated Observing Systems, 14-19 January 2001, Albuquerque, NM, 176-178.

Goldberg, M. D., L. McMillin, W. Wolf, L. Zhou, Y. Qu, and M. Divakarla, 2001: Operational radiance products from AIRS. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, 15-18 October 2001, Madison, Wisconsin, 555-558.

Kleespies, T. J. and D. Crosby, 2001: Correlated noise modeling for satellite radiance simulation. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, October 2001, Madison, Wisconsin, 604-605.

Lord, S. J., E. Kalnay, R. Daley, G. D. Emmitt, and R. Atlas, 1997: Using OSSEs in the design of the future generation of integrated observing systems. Preprint volume, 1st Symposium on Integrated Observation Systems, Long Beach, CA, 2-7 February 1997.

Lord, S. J., M. Masutani, J. C. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2001a: Observing System Simulation Experiments for NPOESS. AMS preprint volume for the Fifth Symposium on Integrated Observing Systems, 14-19 January 2001, Albuquerque, NM, 168-173.

Lord, S. J., M. Masutani, J. C. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2001b: Observing System Simulation Experiments for NPOESS. AMS preprint volume for the 14th Conference on Numerical Weather Prediction, 30 July-2 August 2001, Fort Lauderdale, Florida, 167-171.

Lord, S. J., M. Masutani, J. C. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2001c: Observing System Simulation Experiments for NPOESS. Presentation at the 14th Conference on Numerical Weather Prediction, 30 July-2 August 2001, Fort Lauderdale, Florida. Available at http://www.emc.ncep.noaa.gov/research/osse.

Marseille, G. J., A. Stoffelen, F. Bouttier, C. Cardinali, S. de Haan and D. Vasiljevic, 2001: Impact assessment of a Doppler wind lidar in space on atmospheric analysis and numerical weather prediction. KNMI, Contract No. 13018/98/NL/GD.

Masutani, M., J. C. Woollen, J. C. Derber, S. J. Lord, J. Terry, R. Atlas, S. A. Wood, S. Greco, G. D. Emmitt, and T. J. Kleespies, 1999a: Observing System Simulation Experiments for NPOESS. AMS preprint volume for the 13th Conference on Numerical Weather Prediction, September 1999, Denver, Colorado, 1-6.

Masutani, M., K. Campana, S. Lord, and S.-K. Yang, 1999b: Note on the cloud cover of the ECMWF nature run used for the OSSE/NPOESS project. NCEP Office Note 427.

Masutani, M., J. C. Woollen, J. Terry, S. J. Lord, T. J. Kleespies, G. D. Emmitt, S. A. Wood, S. Greco, J. C. Derber, R. Atlas, and M. Goldberg, 2001: Calibration and initial results from the OSSEs for NPOESS. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, October 2001, Madison, Wisconsin, 696-699.

Masutani, M., J. C. Woollen, S. J. Lord, J. Terry, and J. C. Derber, 2002: Calibration and error sensitivity tests for NPOESS/OSSE. AMS preprint volume for the Sixth Symposium on Integrated Observing Systems, January 2002, Orlando, Florida.

McMillin, L. M., L. Crone and T. J. Kleespies, 1995: Atmospheric transmittance of an absorbing gas. 5: Improvements to the OPTRAN approach. Appl. Opt. 34 (36) 1995.

McNally, A. P., J. C. Derber, W.-S. Wu and B. B. Katz, 2000: The use of TOVS level-1b radiances in the NCEP SSI analysis system. Quart. J. Roy. Meteor. Soc., 126, 689-724.

Parrish, D. F. and J. C. Derber, 1992: The National Meteorological Center's spectral statistical interpolation analysis system. Mon. Wea. Rev., 120, 1747 - 1763.

Purser, R. J. and J. C. Derber, 2001: Unified treatment of measurement bias and correlation in variational analysis with consideration of the preconditioning problem. AMS preprint volume for the 14th Conference on Numerical Weather Prediction, July 2001, Fort Lauderdale, Florida, 467-470.

Purser, R. J. and D. F. Parrish, 2000: A Bayesian technique for estimating continuously varying statistical parameters of a variational assimilation. NCEP Office Note 429. (Also submitted to Meteor. Appl. Phys.)

Purser, R. J., D. F. Parrish, and M. Masutani, 2001: Meteorological observational data compression; an alternative to conventional "super-obbing". NCEP Office Note 430.

Velden, C. S., C. M. Hayden, S. J. Nieman, W. P. Menzel, S. Wanzong, and J. S. Goerss, 1997: Upper-tropospheric winds derived from geostationary satellite water vapor observations. Bull. Amer. Meteor. Soc., 78, 173-195.

Velden, C. S., T. L. Olander and S. Wanzong, 1998: The impact of multispectral GOES-8 wind information on Atlantic tropical cyclone track forecasts in 1995. Part 1: Dataset methodology, description and case analysis. Mon. Wea. Rev., 126, 1202-1218.

Wood, S. A., G. D. Emmitt, M. Morris, L. Wood, and D. Bai, 1993: Space-based Doppler lidar sampling strategies -- algorithm development and simulated observation experiments. Final Rept., NASA Contract NAS8-38559, Marshall Space Flight Center, 266 pp.

Wood, S.A., G.D. Emmitt, D. Bai, L.S. Wood, and S. Greco, 1995: A coherent lidar simulation model for simulating space-based and aircraft-based lidar winds. Paper presented at the Optical Society of America's Coherent Laser Radar Topical Meeting, Keystone, CO, July 23-27.

Wood, S. A., G. D. Emmitt, and S. Greco, 2001: The challenges of assessing the future impact of space-based Doppler wind lidars while using today's global and regional atmospheric models. AMS preprint volume for the Fifth Symposium on Integrated Observing Systems, 14-19 January 2001, Albuquerque, NM, 95-101.

Experiment    T1B  RAOB  DWL  DWL   DWL        DWL   Description
Name               WIND       SHOT  Rep_error  SCAN
                                    (m/s)
---------------------------------------------------------------------------------
1B            Y    Y     N    -     -          -     All existing data including T1B
NTV           N    Y     N    -     -          -     Deny T1B from 1B
1BNWIN        Y    N     N    -     -          -     Deny R-Wind from 1B
1BNTMP        Y    Y     N    -     -          -     Deny R-Temp from 1B
NTVNWIN       N    N     N    -     -          -     Sfc data and R-Temp only
1BDex1dr1     Y    Y     Y    D     1          Y     Best in scan
1BDex1cr7     Y    Y     Y    C     7          Y     Worst within scan
1BDex4dr1     Y    Y     Y    D     1          N     Best in non-scan
1BDex4cr7     Y    Y     Y    C     7          N     Worst in non-scan
Dex4cr7       N    Y     Y    C     7          N     Worst case of DWL added to NTV
Dex1dr1       N    Y     Y    D     1          Y     Best DWL added to NTV
Dex4cr7NWIN   N    N     Y    C     7          N     Worst case of DWL added to NTVNWIN
Dex1dr1NWIN   N    N     Y    D     1          Y     Best case of DWL added to NTVNWIN



Table 1. Experiments described in this paper. All other conventional data, including RAOB temperature, ACARS data, CMVs, etc., are included in all experiments.







CTL       Level    SH (80S-20S)  TROP (20S-20N)  NH (20N-80N)
1B        200 hPa      3.4            3.6            2.3
          500 hPa      2.8            3.1            2.5
          850 hPa      1.9            2.0            2.1
NTV       200 hPa      4.1            3.9            2.7
          500 hPa      3.3            3.3            2.6
          850 hPa      2.2            2.1            2.1
NTVNWIN   200 hPa      5.1            4.4            2.7
          500 hPa      3.8            3.4            3.0
          850 hPa      2.5            2.4            2.2



Table 2. RMSE(CTL-NR) for zonal wind (U) at 200, 500, and 850 hPa, averaged from February 14 to March 6, 1993.





Row  CTL      EXP                                SH (80S-20S)             TROP (20S-20N)           NH (20N-80N)
                                                 (200/500/850 hPa)        (200/500/850 hPa)        (200/500/850 hPa)
1    1B       NTV (deny T1B)                     -0.67 / -0.56 / -0.30    -0.24 / -0.22 / -0.062   -0.017 / -0.085 / -0.0013
2    1B       1BNWIN (deny R-Wind)               -0.23 / -0.17 / -0.12    -0.43 / -0.60 / -0.34    -0.45 / -0.40 / -0.22
3    1B       1BNTMP (deny R-Temp)               0.018 / 0.038 / -0.0091  0.056 / 0.28 / 0.0027    0.0029 / -0.042 / -0.015
4    1B       1BDex1dr1 (add DWL)                0.88 / 0.69 / 0.45       1.3 / 1.3 / 0.69         0.24 / 0.30 / 0.33
5    1B       1BDex1cr7 (add DWL)                0.91 / 0.48 / 0.21       1.0 / 0.72 / 0.24        0.31 / 0.23 / 0.095
6    1B       1BDex4dr1 (add DWL)                0.35 / 0.29 / 0.16       0.54 / 0.49 / 0.19       0.12 / 0.12 / 0.089
7    1B       1BDex4cr7 (add DWL)                0.25 / 0.12 / 0.045      0.34 / 0.13 / 0.036      0.086 / 0.029 / 0.011
8    NTV      1B (add T1B)                       0.67 / 0.56 / 0.30       0.24 / 0.22 / 0.062      0.017 / 0.085 / 0.0013
9    NTV      Dex4cr7 (add worst DWL)            0.89 / 0.47 / 0.18       0.56 / 0.22 / 0.050      0.10 / 0.059 / 0.014
10   NTV      1BDex4cr7 (add T1B and worst DWL)  0.92 / 0.68 / 0.34       0.58 / 0.35 / 0.098      0.10 / 0.11 / 0.013
11   NTV      Dex1dr1 (add best DWL)             1.8 / 1.4 / 0.93         1.6 / 1.2 / 0.71         0.22 / 0.42 / 0.31
12   NTVNWIN  NTV (add R-Wind)                   1.0 / 0.49 / 0.29        0.71 / -0.35 / 0.055     0.30 / 0.36 / 0.031
13   NTVNWIN  1BNWIN (add T1B)                   1.5 / 0.88 / 0.47        0.36 / -0.33 / 0.046     -0.13 / 0.042 / -0.19
14   NTVNWIN  Dex4cr7NWIN (add worst DWL)        1.7 / 0.71 / 0.35        0.71 / -0.35 / 0.055     0.065 / 0.033 / -0.16
15   NTVNWIN  Dex1dr1NWIN (add best DWL)         2.9 / 1.9 / 1.1          2.2 / 1.7 / 1.0          0.49 / 0.55 / 0.23



Table 3. Data impact measured by RMSE(CTL-NR) - RMSE(EXP-NR) for zonal wind. If the value is positive, EXP is closer to the NR than CTL: data added in EXP have a positive impact, or data denied in EXP have a negative impact. If the value is negative, the data denied in EXP have a positive impact. The period used is February 14 to March 6, 1993, except for row 11, which uses February 14-19. The three values in each cell are for 200, 500, and 850 hPa.



Row  EXP1  EXP2         SH (80S-20S)      TROP (20S-20N)    NH (20N-80N)
                        (200/500/850 hPa) (200/500/850 hPa) (200/500/850 hPa)
1    1B    Dex4cr7      2.8 / 2.4 / 1.3   2.5 / 2.5 / 1.2   1.3 / 1.3 / 0.67
2    NTV   Dex4cr7NWIN  3.6 / 2.9 / 1.5   2.6 / 2.6 / 1.2   1.3 / 1.4 / 0.67
3    NTV   Dex1dr1NWIN  3.7 / 3.3 / 2.0   3.4 / 3.2 / 1.8   1.9 / 2.4 / 1.5
4    1B    NTV          3.6 / 2.9 / 1.5   2.6 / 2.6 / 1.2   1.3 / 1.4 / 0.67

Table 4. Time- and area-averaged RMS difference of zonal wind between experiments EXP1 and EXP2. The period used is February 14 to March 6, 1993. The three values in each cell are for 200, 500, and 850 hPa.