P1.13



Calibration and Error Sensitivity Tests for NPOESS/OSSE





Michiko Masutani,* John C. Woollen,* Stephen J. Lord,*

Joseph Terry,+ Thomas J. Kleespies,# John C. Derber,* Robert Atlas+



*NOAA/NWS/NCEP/EMC, Camp Springs, MD, +NASA/GSFC, Greenbelt, MD

#NOAA/NESDIS, Camp Springs, MD



http://www.emc.noaa.gov/research/osse



1. INTRODUCTION



The future National Polar-orbiting Operational Environmental Satellite System (NPOESS) is scheduled to fly during the 2007-2010 period. Over the next 10 years, a considerable amount of effort must take place to define, develop, and build the suite of instruments that will comprise NPOESS. The forecast impact of current instruments can be assessed with Observing System Experiments (OSEs), in which existing observations are denied or added to a standard database of observations. The impact of future instruments, however, must be assessed with experiments using simulated observations. These experiments are known as Observing System Simulation Experiments (OSSEs) (Lord et al., 1997).

In order for OSSEs to provide credible insight into the impact of new instruments, the OSSE system must be "calibrated" by establishing that the impacts of real and simulated observations for existing instruments are similar. Data denial tests are conducted for this purpose. The data impacts are found to be sensitive to the errors assigned to the simulated observations. Methodology for adding realistic errors is being investigated and tested.







2. NATURE RUN

For the OSSE, a long integration of an atmospheric general circulation model (GCM) is required to provide a "true atmosphere" for the experiment. This is called the "nature run" (NR).

The nature run needs to be sufficiently representative of the actual atmosphere and different from the model used for the data assimilation. For calibration, observational data for existing instruments are simulated from the NR, and the forecast and analysis skill obtained with real and simulated data are compared.

For this project, the nature run was provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). A description and evaluation of the nature run are provided by Becker et al. (1996). A one-month model run was made at resolution T213 with 31 levels, starting from 5 February 1993. The version of the model used for the nature run is the same as that used for the ECMWF reanalysis.

The nature run was found to be representative of the real atmosphere, with a few exceptions (Masutani et al. 1999a, 1999b). For example, low-level marine stratocumulus required some adjustment. In addition, the sea surface temperature (SST) is fixed throughout the period of the nature run, whereas in the real SST a localized warm anomaly appeared in the southern hemisphere (SH) in late February. This difference in SST could cause some inconsistent results in OSSE calibration and verification.







Fig. 1. Schematic diagram of the OSSE calibration procedure.





Calibration Experiments

EXPID          TOVS 1B   RAOB Temp   RAOB Wind   All other conventional data
                                                 (ACARS, satellite winds,
                                                 surface data)
-----------------------------------------------------------------------------
1B (Control)      Y          Y           Y           Y
1BNTMP            Y          N           Y           Y
1BNWIN            Y          Y           N           Y
1BNTMPNWIN        Y          N           N           Y
NTV               N          Y           Y           Y
NTVNTMP           N          N           Y           Y
NTVNWIN           N          Y           N           Y
NTVNTMPNWIN       N          N           N           Y

Table 1. List of calibration experiments.



3. SIMULATION OF OBSERVATIONS



Details of the procedures used to simulate observational data are described in Masutani et al. (1999b) and Lord et al. (2001a, 2001b); these papers are available at the OSSE web site. The initial simulation uses the real observational data distribution available in February 1993, including ACARS and cloud motion vectors (CMV; Velden et al. 1998). TOVS level 1B radiance (T1B) data are simulated by NOAA/NESDIS; details are described in Lord et al. (2001a).
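
The essence of the simulation step is to evaluate the nature run at each real observation location and apply the appropriate forward model. The sketch below is a minimal illustration only (the actual NCEP/NESDIS procedure also interpolates in time and pressure and, for T1B, applies a radiative transfer model); it bilinearly interpolates a 2-D NR field to one observation location, and all names are hypothetical.

```python
import numpy as np

def simulate_obs(nr_field, lats, lons, obs_lat, obs_lon):
    """Bilinearly interpolate a 2-D nature-run field, indexed as
    (lat, lon) on ascending coordinate axes, to one observation
    location. A toy stand-in for the full simulation procedure."""
    i = np.searchsorted(lats, obs_lat) - 1   # lower-left grid indices
    j = np.searchsorted(lons, obs_lon) - 1
    wy = (obs_lat - lats[i]) / (lats[i + 1] - lats[i])
    wx = (obs_lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - wy) * (1 - wx) * nr_field[i, j]
            + (1 - wy) * wx * nr_field[i, j + 1]
            + wy * (1 - wx) * nr_field[i + 1, j]
            + wy * wx * nr_field[i + 1, j + 1])
```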

CMV based on the NR wind fields with present-day density, as well as the Atmospheric Infrared Sounder (AIRS; Goldberg et al. 2001), need to be included in the calibration in the future. AIRS is scheduled to be included in the NPOESS Preparatory Project (NPP) instrument suite and will be used as one of the advanced sounders in the calibration. AIRS will be simulated by NOAA/NESDIS and NCEP; CMV will be simulated by NASA/DAO and Simpson Weather Associates (SWA). An outline of the simulation of AIRS and CMV data is given in Masutani et al. (2001).



4. DATA ASSIMILATION SYSTEM



The data assimilation system at NCEP is based on the Spectral Statistical Interpolation (SSI) of Parrish and Derber (1992), a three-dimensional variational analysis (3-D Var) scheme. T1B is used directly in the assimilation (McNally et al. 2000; Derber and Wu 1998), and the March 1999 version of NCEP's operational Medium Range Forecast (MRF) model and data assimilation system is used for the data impact tests. Line-of-sight (LOS) winds from instruments such as a Doppler Wind Lidar (DWL) are used directly in the data assimilation.
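
For reference, the SSI minimizes the standard 3-D Var cost function (our notation; the form follows Parrish and Derber 1992):

    J(x) = \frac{1}{2} (x - x_b)^{T} B^{-1} (x - x_b) + \frac{1}{2} [y - H(x)]^{T} R^{-1} [y - H(x)]

where x_b is the background field, B the background error covariance, y the vector of observations, R the observation error covariance, and H the observation operator: for T1B a radiative transfer model, and for LOS winds a projection of the model wind onto the instrument viewing direction.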

The following upgrades to the NCEP operational data assimilation system are in progress:

- Development of situation-dependent background error covariances for global and regional systems (Purser and Parrish, 2000).

- Bias correction of the background field.

- Improved moisture background error covariances.

- Development of a cloud analysis system.



Data from NPP/NPOESS instruments, QuikSCAT observations, GPS radio-occultation observations, GIFTS, DMSP (SSM/IS), and imager radiances (MODIS, GOES, AVHRR) are all planned to be included at a later time.

5. CALIBRATION FOR OSSE



Calibrations for the OSSEs are performed for existing instruments; some initial results are presented in Lord et al. (2001b) and Masutani et al. (2001). Denial of RAOB wind, RAOB temperature, and T1B in various combinations is tested.



5.1 Procedure



A schematic diagram of the OSSE calibration procedure is given in Fig. 1. On January 1, 1993, the initial condition was provided by the NCEP/NCAR reanalysis, in which satellite-derived temperatures were used. Starting from January 1, 1993, a switch was made to the OSSE system, using the 1999 versions of the MRF and SSI, with and without T1B (Jan_1B, Jan_NTV). Both systems have a resolution of T62. The simulated experiments use initial conditions from either Jan_1B or Jan_NTV at 06Z 5 February 1993. The period between February 5 and February 13 is used as a spin-up period; other data are added or denied starting at 00Z 13 February. The data included in the experiments discussed in this paper are given in Table 1. All other data, such as surface data, ACARS, and CMV, are included in all experiments.



5.2 Geographical Distribution



The impact is measured as differences in the geographical distribution of the analysis and forecast fields. The results show generally satisfactory agreement between the real and simulated impacts (Lord et al. 2001b).

In the northern hemisphere (NH), the impact of RAOB winds (R-Wind) is slightly weaker in the simulation, and the impact of RAOB temperature (R-Temp) is slightly stronger. In the tropics in particular, there is a large impact of R-Temp in the analysis which does not increase with forecast hour. The impact of T1B is slightly larger in the simulation: in the NH, T1B has little impact over Europe and Asia but shows an impact over the Pacific in both the real and simulated analyses, with slightly larger magnitudes in the simulation but similar patterns. In the 72-hour forecast, the impact of T1B spreads over the NH with a magnitude comparable to that of R-Temp. In the SH, T1B dominates. However, even with T1B present, the RAOB data exhibit some impact, and that impact is similar in the simulated and real analyses.

The larger impact of T1B in the simulation is expected because of the lack of measurement error in the simulated data. Underestimation of the cloud effect in the simulation is another possible reason. The large analysis impact in the tropics may be related to the bias between the NCEP model and the nature run; including a bias correction in the data assimilation is being considered (Purser and Derber, 2001).

5.3 Impact on Forecast Skill





Fig. 2. Anomaly correlation skill of 72-hour forecasts of 500 hPa height fields.

Anomaly correlations of 500 hPa height fields, measuring 72-hour forecast skill, are presented in Fig. 2 for the experiments without T1B (NTV), without R-Wind (1BNWIN), and without R-Temp (1BNTMP). The forecasts are verified against the experiment with all data (CTL). In both the real and simulated experiments, 1BNWIN shows the least skill in the NH and less skill globally than 1BNTMP. Therefore R-Wind has more impact than R-Temp, in both the simulated and real systems and in both the NH and SH.
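
For concreteness, the anomaly correlation is the centered pattern correlation of the forecast and verifying anomalies taken about a climatology. A minimal, self-contained sketch follows; the array and weight names are ours.

```python
import numpy as np

def anomaly_correlation(fcst, anal, clim, weights=None):
    """Centered anomaly correlation of a forecast field against its
    verifying analysis, with anomalies defined relative to climatology.
    `weights` (e.g. cos(latitude)) yields an area-weighted score."""
    if weights is None:
        weights = np.ones_like(fcst)
    f = fcst - clim
    a = anal - clim
    f = f - np.average(f, weights=weights)   # remove weighted means
    a = a - np.average(a, weights=weights)
    cov = np.average(f * a, weights=weights)
    return cov / np.sqrt(np.average(f * f, weights=weights)
                         * np.average(a * a, weights=weights))
```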

The simulated T1B data are known to be of "better" quality than the real T1B, because various systematic errors and correlated large-scale errors have not been added to the simulation. It is therefore expected that denial of the simulated T1B would produce a larger skill reduction than denial of the real T1B. However, the opposite occurred (Fig. 2). This problem is discussed in the next section.



5.4 Anomalous SST



In the SH, T1B has the largest impact. Since the simulated T1B contains no added error, a stronger impact is expected in the simulation; however, the skill reduction in NTV is far larger in the real experiments. It is noted that a large, localized warm anomaly appears in the south Pacific at the end of February in the real SST (R-SST), whereas the SST in the NR is fixed throughout the OSSE period to that of February 5 (FEB5-SST). To test the impact of this SST variability, an assimilation using FEB5-SST with real observed data and an assimilation using R-SST with simulated data were performed. The results show that when T1B data are included, the difference in SST has minimal effect throughout the troposphere except at very low levels. Without T1B data, however, the two SSTs produce clear differences throughout the troposphere, particularly in the upper troposphere. When a localized SST anomaly is present, T1B data become very important for the SH forecast.

The large impact of T1B in the real data is therefore due to the localized anomaly in R-SST. The simulation experiment with constant SST can still produce a realistic T1B impact when the SST variability is small. These experiments clearly demonstrate that the data impact depends on the variability of the SST; indeed, in the NH, where the SST fields do not differ greatly, the simulated and real T1B show similar impacts.

5.5 Large-scale error

In order to test the sensitivity to observational error, the difference between observation and analysis (o-a) from the real data assimilation is used as the error for the simulated data. This provides a large-scale correlated error. With (o-a) error, the rejection statistics of the simulated experiments become closer to those of the real data; with random error, too few data are rejected by quality control. However, simply adding the (o-a) error reduced the skill too much. Designs for correlated observational error for T1B data, and for improving the observational error for conventional data, are being investigated.
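
A minimal sketch of the two error-assignment strategies discussed above (variable names are hypothetical): purely random noise versus recycling the (o-a) residuals of the real assimilation, which carries the desired large-scale correlated structure.

```python
import numpy as np

rng = np.random.default_rng(seed=1993)

def add_random_error(sim_obs, sigma):
    """Uncorrelated Gaussian error; in practice too few of these
    observations are rejected by quality control."""
    return sim_obs + rng.normal(0.0, sigma, size=sim_obs.shape)

def add_oma_error(sim_obs, o_minus_a):
    """Add the (o-a) residuals from the real assimilation, taken at
    the same observation locations. This matches the real rejection
    statistics better but, added unmodified, degrades forecast skill
    too much."""
    return sim_obs + o_minus_a
```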



5.6 Surface data



It was found that a large portion of the real-world surface observations lie below the model terrain of the NR; as a result, there are far fewer surface data in the simulation. It is necessary to test with equal numbers of surface data in the simulated and real experiments, so the additional surface data are being simulated by extrapolating the nature run fields downward.
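
One plausible form of such an extrapolation is sketched below, assuming a standard lapse rate and hydrostatic balance; the constants and function are ours and not necessarily the project's actual procedure.

```python
import numpy as np

G = 9.80665     # gravity (m s**-2)
RD = 287.05     # gas constant for dry air (J kg**-1 K**-1)
GAMMA = 0.0065  # standard lapse rate (K m**-1)

def extrapolate_surface(t_model, p_model, z_model, z_station):
    """Extrapolate lowest-model-level temperature (K) and pressure (Pa)
    from model height z_model (m) down to a station at z_station (m),
    assuming a constant lapse rate and hydrostatic balance."""
    dz = z_model - z_station          # positive when station is lower
    t_station = t_model + GAMMA * dz
    t_mean = 0.5 * (t_model + t_station)
    p_station = p_model * np.exp(G * dz / (RD * t_mean))
    return t_station, p_station
```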



6. SUMMARY



The results show that the simulations reproduce the major features of the impact of the real data. The error assignment needs further investigation to study the details of the impact. CMV and AIRS need to be included in the calibration to demonstrate the impact of the future observing system. The data impact is also expected to change as new features are added to the data assimilation system.

As the calibration is evaluated, we can proceed with many OSSEs to evaluate future instruments. Doppler Wind Lidar was selected as the first instrument to be tested, and the results are presented by Lord et al. (2002). The impact of future instruments needs to be interpreted in light of the results of the calibration experiments.



ACKNOWLEDGMENT



We received much assistance from the Data Services Section and Dr. Anthony Hollingsworth of ECMWF in supplying the nature run. Besides the authors, G. D. Emmitt, S. Wood, and S. Greco are also actively involved in the OSSE project through the simulation of DWL winds. Throughout this project, NOAA/NWS/NCEP, NASA/DAO, and NOAA/NESDIS staff provided much technical assistance and advice. In particular, we would like to thank W. Yang, R. Treadon, W.-S. Wu, M. Iredell, D. Keyser, W. Collins, Y. Zhu, and R. Kistler of NCEP; G. Brin, S. Bloom, and N. Wolfson of DAO; and M. Goldberg, W. Wolf, V. Kapoor, and P. Li of NESDIS. We would like to thank Evan Fishbein of JPL for providing the simulation code for AIRS data. Drs. E. Kalnay, W. Baker, J. Yoe, and R. Daley provided expert advice. We appreciate the constructive comments from members of the OSSE Review Panel. This project is sponsored by the Integrated Program Office (IPO) for NPOESS, the NOAA Office of Atmospheric Research (OAR), and the NOAA National Environmental Satellite, Data, and Information Service (NESDIS). We thank Drs. Stephen Mango, Alexander MacDonald, John Gaynor, and Jim Ellickson, and John Pereira for their support and assistance in this project.



REFERENCES

Becker, B. D., H. Roquet, and A. Stoffelen, 1996: A simulated future atmospheric observation database including ATOVS, ASCAT, and DWL. Bull. Amer. Meteor. Soc., 77, 2279-2294.

Derber, J. C. and W.-S. Wu, 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287 - 2299.

Goldberg, M. D., L. McMillin, W. Wolf, L. Zhou, Y. Qu, and M. Divakarla, 2001: Operational radiance products from AIRS. AMS Preprint volume, 11th Conference on Satellite Meteorology and Oceanography, 15-18 October 2001, Madison, Wisconsin.

Lord, S. J., E. Kalnay, R. Daley, G. D. Emmitt, and R. Atlas, 1997: Using OSSEs in the design of the future generation of integrated observing systems. AMS Preprint volume, 1st Symposium on Integrated Observation Systems, 2-7 February 1997, Long Beach, CA.

Lord, S. J., M. Masutani, J. S. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2001a: Observing System Simulation Experiments for NPOESS. AMS Preprint volume, Fifth Symposium on Integrated Observing Systems, 14-19 January 2001, Albuquerque, NM.

Lord, S. J., M. Masutani, J. S. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2001b: Observing System Simulation Experiments for NPOESS. AMS Preprint volume, 14th Conference on Numerical Weather Prediction, 30 July-2 August 2001, Fort Lauderdale, Florida, 167-171.

Lord, S. J., M. Masutani, J. S. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, and T. J. Kleespies, 2002: Impact assessment of a Doppler Wind Lidar for NPOESS/OSSE. AMS Preprint volume, Sixth Symposium on Integrated Observing Systems, January 2002, Orlando, Florida.

Masutani, M., J. C. Woollen, J. C. Derber, S. J. Lord, J. Terry, R. Atlas, S. A. Wood, S. Greco, G. D. Emmitt, and T. J. Kleespies, 1999a: Observing System Simulation Experiments for NPOESS. AMS Preprint volume, 13th Conference on Numerical Weather Prediction, September 1999, Denver, Colorado, 1-6.

Masutani, M., K. Campana, S. Lord, and S.-K. Yang, 1999b: Note on cloud cover of the ECMWF nature run used for the OSSE/NPOESS project. NCEP Office Note 427.

Masutani, M., J. S. Woollen, J. Terry, S. J. Lord, T. J. Kleespies, G. D. Emmitt, S. A. Wood, S. Greco, J. C. Derber, R. Atlas, and M. Goldberg, 2001: Calibration and initial results from the OSSEs for NPOESS. AMS Preprint volume, 11th Conference on Satellite Meteorology and Oceanography, October 2001, Madison, Wisconsin.

McNally, A. P., J. C. Derber, W.-S. Wu, and B. B. Katz, 2000: The use of TOVS level-1b radiances in the NCEP SSI analysis system. Quart. J. Roy. Meteor. Soc., 126, 689-724.

Parrish, D. F. and J. C. Derber, 1992: The National Meteorological Center's spectral statistical interpolation analysis system. Mon. Wea. Rev., 120, 1747 - 1763.

Purser, R. J. and J. C. Derber, 2001: Unified treatment of measurement bias and correlation in variational analysis with consideration of the preconditioning problem. AMS Preprint volume, 14th Conference on Numerical Weather Prediction, July 2001, Fort Lauderdale, Florida, 467-470.

Purser, R. J. and D. F. Parrish, 2000: A Bayesian technique for estimating continuously varying statistical parameters of a variational assimilation. NCEP Office Note 429. (Also submitted to Meteor. Appl. Phys.)

Velden, C. S., C. M. Hayden, S. J. Nieman, W. P. Menzel, S. Wanzong, and J. S. Goerss, 1997: Upper-tropospheric winds derived from geostationary satellite water vapor observations. Bull. Amer. Meteor. Soc., 78, 173-195.

Velden, C. S., T. L. Olander and S. Wanzong, 1998: The impact of multispectral GOES-8 wind information on Atlantic tropical cyclone track forecasts in 1995. Part 1: Dataset methodology, description and case analysis. Mon. Wea. Rev., 126, 1202-1218.