6.2



Global OSSE at NCEP



Michiko Masutani1#, Stephen J. Lord1, John S. Woollen1+, Weiyu Yang1+

Haibing Sun2%, Thomas J. Kleespies2, G. David Emmitt3, Sidney A. Wood3

Bert Katz1+, Russ Treadon1, John C. Derber1, Steven Greco3, Joseph Terry4+



1NOAA/NWS/NCEP/EMC, Camp Springs, MD

2NOAA/NESDIS, Camp Springs, MD

3Simpson Weather Associates, Charlottesville, VA

4NASA/GSFC, Greenbelt, MD

#RS Information Systems

+Science Applications International Corporation

%QSS Group, Inc.



http://www.emc.noaa.gov/research/osse




1. INTRODUCTION



The future National Polar-orbiting Operational Environmental Satellite System (NPOESS) is scheduled to be launched during the 2008-2018 period. Over the next 10 years, considerable effort must be devoted to defining, developing and building the suite of instruments that will comprise NPOESS and its forerunner, the NPOESS Preparatory Project (NPP). The forecast impact of these future instruments must be assessed with experiments using simulated observations. These experiments are known as Observing System Simulation Experiments (OSSEs) (Arnold and Dey 1986, Lord et al. 1997, Atlas 1997).

An OSSE system has been constructed through a collaboration between the National Centers for Environmental Prediction (NCEP), the NASA Data Assimilation Office (DAO), Simpson Weather Associates (SWA), and the National Environmental Satellite, Data, and Information Service (NESDIS). NCEP's global OSSE provides boundary conditions for mesoscale OSSEs at the NOAA Forecast Systems Laboratory (Weygandt et al. 2004). By using OSSEs, current operational data assimilation systems can be prepared to handle new data in time for the launch of new satellites. Preparations include handling the volume of future data and developing databases, data processing (including formatting) systems, and quality control systems. All of this development will accelerate the operational use of data from future instruments.

To date, the major effort in this project has been to develop a simulated prototype Doppler wind lidar (DWL) data set. SWA has simulated line-of-sight (LOS) winds using its Lidar Simulation Model (LSM). Bracketing sensitivity experiments have been performed for various technology-neutral DWL concepts to bound the potential impact (Emmitt 1999, Emmitt et al. 2000b). Scanning and various data sampling strategies were tested with these experiments. The impacts of DWL on analyses and forecasts were presented in Masutani et al. (2002c).



2. EVALUATION AND ADJUSTMENT OF THE NATURE RUN



The Nature Run (NR), which serves as the true atmosphere for the OSSEs, needs to be sufficiently representative of the real atmosphere and different from the model used for data assimilation. In the calibration phase, observational data for existing instruments are simulated from the NR, and the forecast and analysis skill for real and simulated data are then compared.

For this project, the NR was provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The description and evaluation of the NR are provided by Becker et al. (1996). A one-month model run was made at T213 resolution with 31 levels, starting on 5 February 1993. The version of the model used for the NR is the same as that used for the ECMWF 15-year reanalysis (ERA-15).

The NR period was found to be relatively neutral with respect to ENSO, and the tropical intraseasonal oscillation was decaying during the period. A comparison of cyclone activity between the NR and the ECMWF reanalysis was performed by NASA DAO. The number of cyclones in the ECMWF analysis is about 10% higher than in the NR, which is within natural variability. The distribution of cyclone tracks is very realistic.

Sea surface temperature (SST) is fixed at its 00Z 5 February value throughout the NR period. The effect of the constant SST on the data has been evaluated, and it is shown that an OSSE with constant SST gives a valid data impact as long as SST variability is small in reality.

Cloud evaluation is particularly important for the assessment of the DWL. DWL data can be retrieved only if the DWL shots reach the target and reflections from the target are able to return to the satellite. Clouds are important targets for a DWL, but they also interfere with the DWL shots. Therefore, large differences in the NR cloud amount will affect the sampling of simulated data. Realistic clouds are also necessary for generating realistic cloud-track winds from geostationary platforms, and the cloud distribution also affects the simulation of radiance data.

In general, the NR total cloud cover agrees with observational estimates except over the polar regions. Over the whole globe, the high-level cloud cover (HCC) appeared larger than the satellite-observed estimate. The low-level cloud cover (LCC) over the ocean is less than observed, and the amount of LCC over snow is too high. After careful investigation, we found that, owing to the lack of reliable observations, there is no strong evidence for an overestimation of HCC or of polar cloud by the NR. However, the underestimation of low-level stratocumulus over the oceans and the overestimation over snow are clearly identifiable problems, and adjustments were applied (Masutani et al. 1999).

Since satellite-based estimates have difficulty sensing LCC, the Warren ground-based climatology for stratus and stratocumulus (Hahn et al. 1996) and the NR vertical velocity are used for the adjustment. At low levels, the Warren cloud climatology is added where there is rising motion, and the LCC is divided by 1.5 where there is snow cover over land. This adjustment made the cloud distribution much more realistic. Fig. 1 shows that after the adjustment the LCC-free area is much smaller and areas with moderate cloud cover increase over the ocean. This U-shaped distribution agrees with ground-based observations.
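To make the adjustment concrete, the following sketch applies the two rules above on a model grid: the Warren climatology is added where the NR shows rising motion, and the LCC is divided by 1.5 over snow-covered land. The array names, the sign convention for vertical velocity, and the cap at total cover are assumptions of this illustration, not the code actually used for the NR adjustment.

```python
import numpy as np

def adjust_lcc(lcc_nr, warren_lcc, omega, snow_cover):
    """Illustrative low-level cloud cover (LCC) adjustment.

    lcc_nr     : NR low-level cloud cover fraction (0-1)
    warren_lcc : Warren stratus/stratocumulus climatology fraction (0-1)
    omega      : NR low-level vertical velocity (Pa/s); omega < 0 means rising motion
    snow_cover : boolean mask, True where land is snow covered
    """
    lcc = lcc_nr.copy()

    # Add the Warren climatology where the NR shows rising motion,
    # capping the result at total (overcast) cover.
    rising = omega < 0.0
    lcc[rising] = np.minimum(lcc[rising] + warren_lcc[rising], 1.0)

    # Reduce the over-estimated LCC over snow-covered land.
    lcc[snow_cover] = lcc[snow_cover] / 1.5

    return lcc
```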



3. SIMULATION OF OBSERVATIONS



3.1 Simulation of conventional data



The initial simulation of conventional data, done by NASA/DAO, uses the real observational data distributions available in February 1993, including ACARS (automated aircraft) reports and cloud motion vectors (CMV; Atlas and Terry 2002). In this initial simulation, random error was added and the NR surface height was used to simulate the surface data. As a result, these surface data may have an exaggerated positive impact on the results. Furthermore, the use of random error alone is known to produce an overly positive impact on forecast skill because systematic error (bias) is lacking.

Simulations using real orography and a formulation of systematic error have been conducted by NCEP, with more realistic results. The difference between observation and analysis (O-A) for each observation was computed from the real analysis at each observation time, and these O-A values were added to the simulated data for that time. The O-A value from the real analysis includes representativeness errors (RE) that come from subgrid-scale structures. Because the NR is a model integration, it contains no structure on scales smaller than its resolution of about 50 km, so these RE are absent from the NR data, whereas real data contain small-scale errors due to subgrid-scale structures.

This is particularly true for surface data. The NR uses envelope orography, which is on average higher than the real orography and much smoother. Observations lying between the real orography and the NR orography are missing, and such data are a main source of RE in the real world. This lack of RE gives the surface data a larger influence and produces a better analysis with conventional data only, leaving less room for an additional impact from future instruments.
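A minimal sketch of this error assignment is given below, assuming the real observation and the corresponding real analysis value are already available at the observation time and location; the function name, interface, and the scalar coefficient (tuned in Section 5.4) are illustrative assumptions, not the operational code.

```python
import numpy as np

def add_oa_error(sim_obs, real_obs, real_analysis_at_obs, coef=1.0):
    """Add the observation-minus-analysis (O-A) value from the real analysis,
    scaled by a tunable coefficient, to the otherwise errorless simulated
    observation.  The O-A term carries both representativeness error and
    large-scale correlated error."""
    oa = np.asarray(real_obs) - np.asarray(real_analysis_at_obs)
    return np.asarray(sim_obs) + coef * oa
```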



3.2 Simulation of DWL data

The simulation of DWL data includes efforts using DWL performance models, atmospheric circulation models and atmospheric optical models (Emmitt 1999, Emmitt et al. 2000b). The instrument parameters were provided by the engineering community. Scanning and sampling requirements were provided by the science community and define various instrument scenarios. These scenarios were initially tested by examining the sensitivity of the analyses to the various scenarios. A candidate DWL concept is then chosen for a full OSSE, and an impact study is conducted and evaluated by a technology-neutral group such as NCEP.

Bracketing OSSEs are being performed for various DWL concepts to bound the potential impact. Later OSSEs will be performed for more specific instruments. The following "technology-neutral" observation coverage and measurement error characterizations will be explored: a DWL which senses PBL and low clouds (DWL-PBL); an instrument that is sensitive to upper tropospheric clouds (DWL-upper); a combination of the previous instruments (DWL-hybrid); scan and non-scan versions (DWL-scan, DWL-nonscan); and distributed and clustered sampling strategies.



3.3 Simulation of TOVS and AIRS radiances

TOVS level 1B radiance data (TOVS) were simulated by NOAA/NESDIS, and strategies to include correlated error in the TOVS simulation were presented by Kleespies and Crosby (2001). The radiation scheme used in the simulation is R-TOVS, which is different from the OPTRAN scheme used in the data assimilation.

AIRS radiances, along with those from AMSU and HSB, have been simulated for the NR period. Thus, the capability to simulate data from the next generation of advanced sounders has been achieved. The AIRS simulation package used was originally developed by Evan Fishbein of JPL. The simulation (i.e., forward calculation) is based on radiative transfer code developed by Larrabee Strow (UMBC). The package was modified by Walter Wolf to generate thinned radiance data sets in BUFR format; it is identical to the one providing AIRS data to NWP centers in near-real time, which was funded by the NPOESS IPO and implemented by Mitch Goldberg (NESDIS). Further details of this simulation are described in Kleespies et al. (2003).



3.4 Simulation of cloud motion vectors



For the DWL calibration and the initial OSSEs, cloud motion vectors (CMVs) are simulated at the observed data locations (based on the observed cloud cover and satellite data from 1993). For a more realistic evaluation, the present-day density of CMVs at the NR cloud locations is being simulated by SWA (O'Handley et al. 2001) and NASA/DAO (Atlas and Terry 2002). A satellite-view cloud fraction of 5% to 25% is assumed to indicate a potential tracer. Slow bias and image registration error will be included. Error statistics will be obtained from the NOAA/NESDIS Office of Research and Applications Forecast Products Development Team (NESDIS 2002).
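As a simple illustration of the tracer criterion, the sketch below masks grid points whose satellite-view cloud fraction lies in the assumed 5-25% range; the function name and interface are hypothetical, and the SWA/DAO simulation applies additional steps (tracking, slow bias, registration error) not shown here.

```python
import numpy as np

def potential_tracer_mask(cloud_fraction, lo=0.05, hi=0.25):
    """Return True at grid points whose satellite-view cloud fraction
    falls in the assumed 5-25% range for potential CMV tracers."""
    cf = np.asarray(cloud_fraction)
    return (cf >= lo) & (cf <= hi)
```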



4. DATA ASSIMILATION SYSTEM



The global data assimilation system at NCEP is based on the Spectral Statistical Interpolation (SSI) of Parrish and Derber (1992), which is a three-dimensional variational analysis (3-DVAR) scheme. TOVS 1B radiance data are used (McNally et al. 2000, Derber and Wu 1998). The March 1999 version of NCEP's operational Medium Range Forecast (MRF) model and data assimilation system has been used for the data impact tests so far. Line-of-sight (LOS) winds from instruments such as a DWL are used directly instead of wind retrievals. Note that some data assimilation systems instead use temperatures retrieved from satellite radiances and horizontal winds retrieved from DWL data. Retrieving horizontal winds from DWL LOS measurements requires a satellite system capable of taking measurements from at least two different directions at approximately the same time; data from a non-scanning DWL therefore cannot be used in data assimilation unless LOS winds are assimilated directly.
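For illustration, a simple LOS forward operator projects the model horizontal wind onto the lidar viewing azimuth, which is why a single viewing direction constrains only one wind component. The sketch below neglects the vertical wind component and assumes a particular azimuth convention; it is not the SSI observation operator itself.

```python
import numpy as np

def los_wind(u, v, azimuth_deg):
    """Project the horizontal wind (u, v) onto the line of sight.
    Azimuth is assumed to be measured clockwise from north; the
    vertical wind contribution is neglected in this sketch."""
    az = np.deg2rad(azimuth_deg)
    return u * np.sin(az) + v * np.cos(az)

# Recovering both u and v requires LOS measurements from at least two
# distinct azimuths at nearly the same time and place, whereas direct
# assimilation of LOS winds does not.
```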

A major upgrade to NCEP's operational system occurred in late 2002 and includes:



A new version of the radiative transfer model to accommodate high-resolution radiance data

Improved treatment of the bias correction for radiance data

Ability to accommodate more recent instruments (AIRS, DWL)

LOS winds added as an observed variable

Inclusion of precipitation assimilation

Adjustments for higher-resolution models

A comprehensive diagnostic tool for radiance assimilation



This version is being adapted for the OSSE system and will be used for assimilating advanced sounder data (from AIRS, CrIS, etc.). The details of the SSI and the upgrade are posted at http://wwwt.emc.ncep.noaa.gov/gmb/gdas.

The inclusion of new instruments requires a major revision of the SSI to accommodate the large volume of data and the increased spectral resolution of the new sounding instruments. Various coefficients need to be re-evaluated as the new version of the radiative transfer model is introduced. OSSEs will continue using this new system. AIRS data evaluation and other work need to be conducted with the 2002 operational data assimilation system, and selected calibrations and impact tests need to be repeated. In the future, the NCEP data assimilation will be upgraded to include a cloud analysis.



5. CALIBRATION FOR OSSE



5.1 Procedure



Calibrations for the OSSEs were performed on existing instruments. Denials of RAOB winds, RAOB temperatures, and TOVS, in various combinations, were tested. The period from 1 January 1993 to 5 February 1993 was used for spin-up from the reanalysis to the 1999 data assimilation system. The period between 5 February and 13 February was used as a spin-up from the real-data analysis to the simulated analysis for the control experiments. Other data are added or denied starting at 00Z 13 February 1993.



5.2 Geographical distribution



First, the impact was measured as the geographical distribution of the time-averaged root-mean-square error (RMSE) between the analysis and forecast fields (Lord et al. 2001). The results show generally satisfactory agreement between real and simulated impacts. In the Northern Hemisphere (NH), the impact of RAOB winds is slightly weaker in the simulation and the impact of RAOB temperatures is slightly stronger. In the tropics in particular, there is a large impact from RAOB temperatures in the analysis which does not increase with forecast hour. The impact of TOVS is slightly larger in the simulation. In the NH, TOVS has little impact over Europe and Asia but has an impact over the Pacific for both real and simulated analyses; the magnitudes are slightly larger in the simulation but the patterns are similar. In the 72-hour forecast, the impact of TOVS spreads out over the NH and shows a magnitude similar to that of RAOB temperatures. In the Southern Hemisphere (SH), TOVS dominates. However, even with TOVS, RAOB data exhibit some impact, and their impacts are similar between the simulated and real analyses. The larger impact of TOVS in the simulation is expected because of the lack of measurement error in the simulated data; underestimation of the cloud effect in the simulation is another possible reason for the large impact. The large analysis impact on tropical temperature may be related to the bias between the NCEP model and the NR. Inclusion of a bias correction in the data assimilation is being considered (Purser and Derber 2001), and this will change the impact.
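For reference, these impact maps can be thought of as time-averaged RMS differences computed point by point. A minimal sketch, assuming the fields are stored as (time, lat, lon) arrays and ignoring any area weighting:

```python
import numpy as np

def time_mean_rms_diff(field_a, field_b):
    """Time-averaged RMS difference between two fields (e.g. an experiment
    and its verification), returning one value per grid point."""
    a = np.asarray(field_a)
    b = np.asarray(field_b)
    return np.sqrt(np.mean((a - b) ** 2, axis=0))
```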



5.3 Impact on forecast skill



Anomaly correlation (AC) skill of the 72-hour forecast 500 hPa height fields for experiments without TOVS (NTV), experiments with TOVS but without RAOB winds (1BNWIN), and experiments with TOVS but without RAOB temperatures (1BNTMP) is presented in Masutani et al. (2001). Forecast skill is verified against experiments with all data (CTL). For both real and simulated experiments, 1BNWIN shows the least skill in the NH and less skill globally than 1BNTMP. Therefore, RAOB winds have more impact than RAOB temperatures in both the simulated and real cases, in both the NH and the SH.

The simulated TOVS data should be of better quality than the real TOVS data because various systematic errors and correlated large-scale errors have not been added to the simulation. Therefore, denial of the simulated TOVS would be expected to reduce skill more than denial of the real TOVS. However, in the SH the impact of the real TOVS is much larger than in the simulation. This is due to the variable SST in the real data and the constant SST in the simulation. These results suggest that when SST has large variability, the impact of TOVS becomes more important.



5.4 Adjustment of error for the simulated data



The problems in the original simulated data were noted in Section 3.1. In order to improve the simulated data, simulations using real orography and a formulation of systematic error have been conducted by NCEP.

To test the effect of the systematic error, the O-A for each observation was computed from the real analysis at each observation time and added to the errorless simulated data for that time. The O-A value from the real analysis includes RE that come from subgrid-scale structures; these RE are absent from the NR data because the NR is a model integration. The O-A values also add a large-scale correlated error.

With the O-A error, the rejection statistics of the simulated experiments become closer to those from real data; with random error alone, too few data are rejected by quality control. The coefficient applied to the O-A values is evaluated through the impact of surface data, and the optimum coefficient is between 1.0 and 2.0. Further improvement of the systematic error will be carried out throughout the project.



5.5 Summary



The results show that the simulations reproduce the major features of the impact seen in the real data. Error assignment requires further investigation. The data impact is also expected to change when new features are added to the data assimilation system. CMV and AIRS need to be included to demonstrate their impact, and the impact of the future observing system needs to be evaluated with CMV and AIRS present. Since there were no real AIRS data in 1993, the impact of simulated AIRS data has to be compared with that of current data.



6. ASSESSMENT OF DOPPLER WIND LIDAR (DWL) IMPACT



6.1 Overview of the results

Many experiments have been done to illustrate the impact of conventional and DWL data for the first several days. Selected sets of experiments were then extended to the whole NR period, with forecasts also being performed. The impact of DWL is assessed using the anomaly correlation (AC) with the NR at various spatial scales (Fig. 2) and by synoptic analysis of case studies (Fig. 3). Time-averaged geographical distributions and time series of RMS error are also studied. Consensus among the different measures of skill is examined for the assessment.
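A minimal, unweighted form of the AC can be sketched as below; latitude weighting, the choice of climatology, and the spectral filtering used to isolate the total (wavenumber 1-20) and synoptic (wavenumber 10-20) bands discussed in Section 6.2 are omitted here and would be needed in practice.

```python
import numpy as np

def anomaly_correlation(forecast, truth, climatology):
    """Unweighted anomaly correlation of a forecast field against the
    nature-run 'truth', both expressed as anomalies from a climatology."""
    fa = np.asarray(forecast) - np.asarray(climatology)
    ta = np.asarray(truth) - np.asarray(climatology)
    return np.sum(fa * ta) / np.sqrt(np.sum(fa ** 2) * np.sum(ta ** 2))
```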

In the NH, skill at the global scale is mostly achieved by the existing (conventional and TOVS) data; therefore, the impact of DWL at synoptic scales is most important. The skill for zonal wind (U) and temperature (T) comes mainly from planetary scales, while the skill for meridional wind (V) comes from the synoptic scale. Therefore, the impact of DWL at the synoptic scale is much clearer for V than for U or T (Fig. 2). The advantage of DWL scanning is clear from Fig. 2; particularly in the NH, it is very difficult to expect a significant impact without scanning. At 850 hPa, the skill of DWL-PBL is better than that of DWL-upper in the analysis. However, after 48-72 hours the forecast with DWL-upper becomes better. This is observed for various variables and at various latitudes, indicating that upper-level data are much more important than low-level data beyond 48 hours.

Figure 3 demonstrates the impact on a particular synoptic event in terms of the difference from the NR. The improvement in the analysis is not that significant, but it becomes much more significant in the 48-hour forecast. TOVS 1B data by themselves do not show much improvement in the forecast, and neither does the non-scanning DWL; their differences from the NR are similar to those of a forecast with conventional data only. However, when both TOVS and the non-scanning DWL are used, the forecast improvement is as large as with the best DWL. A longer NR would provide better examples to demonstrate the various impacts.

In the tropics, DWL shows a large positive impact in most of the configurations tested; even the non-scanning DWL has more impact than TOVS. The positive impact is reduced with forecast time: the large positive impact in the analysis from the best DWL decreases by half beyond the 72-hour forecast. In the SH, any DWL has more impact than TOVS. With TOVS and DWL together, the impact is larger than with TOVS alone but slightly less than with DWL alone. The cause of this small negative impact of TOVS on DWL is being investigated.



6.2 Role of systematic error



The impact of DWL also depends on the errors in the data used in the CTL runs. Experiments with conventional data, with and without the O-A error, are conducted together with either the best DWL or the non-scanning DWL. The results are presented for the upper troposphere (200 hPa) and lower troposphere (850 hPa), and for the total scale (wavenumbers 1-20) and the synoptic scale (wavenumbers 10-20).

The results show that systematic error such as the O-A error significantly increases the forecast impact at the large scale. At the synoptic scale, where the impact is already significant without the O-A error, changes in impact due to the additional systematic error are rather small. However, even with the O-A error in the CTL, the impact of the non-scanning DWL is much smaller than that of the best DWL.



6.3 Further evaluation of the results



The sensitivity to the RE of DWL (RE-DWL) has been tested. Ideally, RE-DWL should be a function of various parameters such as the number of shots per measurement, height, and latitude. In this evaluation, however, the RE-DWL is kept the same for all LOS observations. The results show that the analyses with DWL are closest to the NR when RE-DWL is between 1.0 and 2.0; if RE-DWL is too small, the DWL data force the analysis away from the NR. The RE for TOVS is also being tested. Further investigation of random error and of the balance of weights within the data assimilation will follow. These results will provide a valuable evaluation for real data assimilation.



7. DISCUSSION AND STRATEGIES OF OSSE

Much research has shown that wind information has a much stronger impact on weather forecasts than temperature information (Arnold and Dey 1986, Halem and Dlouhy 1984). The results from the NCEP OSSE support these findings in many ways. If a DWL provides three-dimensional wind data, it would produce a fundamental advance in the prediction of weather (Baker et al. 1995). Another advantage of a DWL is its ability to measure the wind directly, whereas extracting temperature information from radiance data involves radiative transfer models and many other complicated processes. On the other hand, a space-based DWL is a costly instrument, and careful evaluation through OSSEs is important. Once OSSE systems are developed, they can be used to evaluate other instruments with relatively little effort.

It is a challenging task to evaluate the realism of the impacts from OSSEs because of uncertainties in the OSSE itself: the difference between the NR and the real atmosphere, the process of simulating the data, and the estimation of observational errors all affect the results. The evaluation metrics also affect the conclusions.

NCEP's OSSE has demonstrated that carefully conducted OSSEs are able to provide useful recommendations which influence the design of future observing systems. Consistency in results is important. Some results may be optimistic and some pessimistic, but it is important to be able to evaluate the sources of errors and uncertainties. As more information is gathered, we can perform more credible OSSEs. If the results are inconsistent, the cause of the inconsistency needs to be investigated carefully; if the inconsistencies are not explained, interpretation becomes difficult.

A separate OSSE effort was conducted by NASA (Atlas et al. 2003). There are some important differences between the NASA and NCEP OSSEs. The resolutions of the NRs are similar, but NASA's NR is for summer while NCEP's is for winter; NASA's OSSE can therefore evaluate hurricane forecasts, while NCEP's OSSE is suited to winter storm forecast scenarios. In the NCEP OSSE, DWL data are simulated and assimilated as LOS winds, and level 1B radiance data are simulated and assimilated using a different radiative transfer model. In contrast, NASA uses temperatures interpolated from the NR to the locations of the retrieved temperatures as proxy sounding data, and uses horizontal wind components (U and V) at the DWL locations. The NR cloud was evaluated and adjusted for NCEP's OSSE. The results of NASA's OSSE in the NH showed a much larger impact for DWL. The effects of the differences between the two OSSEs on their results are yet to be evaluated.

Currently, NCEP's OSSE system is being transferred to a more powerful computing environment to accommodate data assimilation using a higher-resolution model and AIRS data. AIRS data have already been simulated by NESDIS (Kleespies et al. 2003), and the simulation of Cross-track Infrared Sounder (CrIS) data has been started. Simulation of more realistic DWL configurations has been proposed by SWA. Sensitivity to the quality of the simulated observations and to the data quality assumed in the analysis will be investigated. Research related to adaptive observing strategies and some fundamental issues in the design of observing networks will be conducted, to estimate the upper bound of the forecast impact from various observing strategies, if the necessary support is provided.

ACKNOWLEDGMENT



We received much assistance from the Data Services Section and Dr. Anthony Hollingsworth of ECMWF in supplying the nature run. Throughout this project, NOAA/NWS/NCEP, NASA/DAO and NOAA/NESDIS staff provided much technical assistance and advice. In particular, we would like to thank K. Campana, S.-K. Yang, P. Van Delst, R. Kistler and Y. Tahara of NCEP; R. Atlas, G. Brin, S. Bloom and N. Wolfson of DAO; and V. Kapoor, P. Li, W. Wolf and M. Goldberg of NESDIS. Drs. E. Kalnay, W. Baker, J. Yoe and R. Daley provided expert advice. We appreciate the constructive comments from members of the OSSE Review Panel. This project is sponsored by the Integrated Program Office (IPO) for NPOESS, the NOAA Office of Atmospheric Research (OAR), and the NOAA National Environmental Satellite, Data, and Information Service (NESDIS). We thank Drs. Stephen Mango, Alexander MacDonald, John Gaynor, Jim Ellickson and John Pereira for their support and assistance in this project.



REFERENCES



Arnold, C. P., Jr. and C. H. Dey, 1986: Observing-systems simulation experiments: Past, present, and future. Bull. Amer., Meteor. Soc., 67, 687-695.

Atlas, R., 1997: Atmospheric observation and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan, 75, 111-130.

Atlas, R., G. D. Emmitt, J. Terry, E. Brin, J. Ardizzone, J. C. Jusem, and D. Bungato, 2003: Recent observing system simulation experiments at the NASA DAO. AMS preprint volume for the Seventh Symposium on Integrated Observing Systems, 9-13 February 2003, Long Beach, CA.

Baker, W.E., G.D. Emmitt, F. Robertson, R.M. Atlas, J.E. Molinari, D.A. Bowdle, J. Paegle, R.M. Hardesty, R.T. Menzies, T.N. Krishnamurti, R.A. Brown, M.J. Post, J.R. Anderson, A.C. Lorenc and J. McElroy, 1995: Lidar-measured winds from space: An essential component for weather and climate prediction. Bull. Amer. Meteor. Soc., 76, 869-888.

Becker, B. D., H. Roquet, and A. Stoffelen, 1996: A simulated future atmospheric observation database including ATOVS, ASCAT, and DWL. Bull. Amer. Meteor. Soc., 77, 2279-2294.

Derber, J. C. and W.-S. Wu, 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287 - 2299.

Emmitt, G. D., 1999: Expanded Rationale for the IPO/NOAA Bracketing OSSEs. http://www.emc.ncep.noaa.gov/research/osse/swa/DWLexp.htm

Emmitt, G. D., 2000a: Systematic errors in simulated Doppler wind lidar observations. http://www.emc.ncep.noaa.gov/resarch/osse/swa/sys_errors.htm

Emmitt, G. D., S. A. Wood, S. Greco, and L. Wood, 2000b: Bracketing DWL Coverage OSSEs. Simpson Weather Associates.

Goldberg, M. D., L. McMillin, W. Wolf, L. Zhou, Y. Qu, and M. Divakarla, 2001: Operational radiance products from AIRS. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, 15-18 October 2001, Madison, Wisconsin, 555-558.

Hahn, C. J., S. G. Warren, and J. London, 1996: Edited synoptic cloud reports from ships and land stations over the globe, 1982-1991. Environmental Sciences Division Publication No. 4367, NDP-026B.

Halem, M. and R. Dlouhy, 1984: Observing system simulation experiments related to space-borne lidar wind profiling. Part 1 Forecast impact of highly idealized observing systems. AMS preprint volume for the conference on Satellite Meteorology/Remote Sensing and Applications. June 25-29, 1984, Clearwater, Florida, 272-279.

Kleespies, T. J. and D. Crosby, 2001: Correlated noise modeling for satellite radiance simulation. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, October 2001, Madison, Wisconsin, 604-605.

Kleespies, T. J., H. Sun, W. Wolf, M. Goldberg, 2003: AQUA Radiance computations for the Observing systems simulation experiments for NPOESS. Proc. American Meteorological Society 12th Conference on Satellite, Long Beach, CA, 9-13 February 2003.

Lord, S. J., E. Kalnay, R. Daley, G. D. Emmitt, and R. Atlas, 1997: Using OSSEs in the design of the future generation of integrated observing systems. AMS preprint volume, 1st Symposium on Integrated Observation Systems, Long Beach, CA, 2-7 February 1997.

Lord, S.J., M. Masutani, J. S. Woollen, J. C. Derber, R. Atlas, J. Terry, G. D. Emmitt, S. A. Wood, S. Greco, T. J. Kleespies, 2001: Observing System Simulation Experiments for NPOESS, Presentation at the 14th Conference on Numerical Weather Prediction, 30 July-2 August 2001, Fort Lauderdale, Florida.

Masutani, M., K. Campana, S. Lord, and S.-K. Yang, 1999: Note on Cloud Cover of the ECMWF nature run used for OSSE/NPOESS project. NCEP Office Note No. 427.

Masutani, M., J. S. Woollen, J. Terry, S. J. Lord, T. J. Kleespies, G. D. Emmitt, S. A. Wood, S. Greco, J. C. Derber, R. Atlas, and M. Goldberg, 2001: Calibration and Initial Results from the OSSEs for NPOESS. AMS preprint volume for the 11th Conference on Satellite Meteorology and Oceanography, October 2001, Madison, Wisconsin, 696-699.

Masutani, M., J. S. Woollen, S. J. Lord, J. Terry, T. J. Kleespies, J. C. Derber, and R. Atlas, 2003a: Calibration and Error Sensitivity Tests for NPOESS/OSSE. AMS preprint volume, Sixth Symposium on Integrated Observation Systems, 13-17 January 2002, Orlando, FL, 71-76.

Masutani, M., J. S. Woollen, S. J. Lord, J. C. Derber, G. D. Emmitt, T. J. Kleespies, J. Terry, H. Sun, S. A. Wood, S. Greco, R. Atlas, M. Goldberg, J. Yoe, W. Baker, C. Velden, W. Wolf, S. Bloom, G. Brin, and C. O'Handley, 2002b: Progresses and future plans for Observing System Simulation Experiments for NPOESS. AMS preprint volume for the 15th Conference on Numerical Weather Prediction, 12-16 August 2002, San Antonio, TX, 53-56.

Masutani, M., J. S. Woollen, S. J. Lord, J. C. Derber, G. D. Emmitt, S. A. Wood, S. Greco, R. Atlas, J. Terry, T. J. Kleespies, and H. Sun, 2002c: Impact assessment of a Doppler wind lidar for NPOESS/OSSE. AMS preprint volume for the 15th Conference on Numerical Weather Prediction, 12-16 August 2002, San Antonio, TX, 346-349.

McNally, A. P., J. C. Derber, W.-S. Wu, and B. B. Katz, 2000: The use of TOVS level-1 radiances in the NCEP SSI analysis system. Quart. J. Roy. Meteor. Soc., 126, 689-724.

NESDIS/Office of Research and Applications. Forecast Products Development Team 2002: High Density Winds Match Statistics http://orbit35i.nesdis.noaa.gov/goes/winds/html/tseries.html

O'Handley, C., G. D. Emmitt, and S. Greco, 2001: Simulating Cloud Motion Vectors From Global Circulation Model Data For Use in OSSEs: A Preliminary, But Useful, Algorithm For Application to Current NASA/NOAA OSSE Projects. Simpson Weather Associates. http://www.swa.com/cloudtrack/cloudmotionwinds.htm

Parrish, D. F. and J. C. Derber, 1992: The National Meteorological Center's spectral statistical interpolation analysis system. Mon. Wea. Rev., 120, 1747 - 1763.

Purser, R. J. and J. C. Derber, 2001: Unified treatment of measurement bias and correlation in variational analysis with consideration of the preconditioning problem. AMS preprint volume for the 14th Conference on Numerical Weather Prediction, July 2001, Fort Lauderdale, Florida, 467-470.

Purser, R. J. and D. F. Parrish, 2000: A Bayesian technique for estimating continuously varying statistical parameters of a variational assimilation. NCEP Office Note 429. (To appear in Meteor. Appl. Phys.)

Purser, R. J., D. F. Parrish, and M. Masutani, 2001: Meteorological observational data compression; an alternative to conventional "Super-Obbing". NCEP Office Note 430.

van Delst, P., J. Derber, T. Kleespies, L. McMillin, and J. Joiner, 2000: NCEP Radiative Transfer Model. Proceedings of the 11th International ATOVS Study Conference, Budapest, Hungary, 20-26 September 2000.

van Delst, P., Y. Tahara, J. Derber, T. Kleespies, and L. McMillin, 2002: NCEP Radiative Transfer Model Status. Proceedings of the 12th International ATOVS Study Conference, Lorne, Australia, 27 February-6 March 2002.

Velden, C. S., C. M. Hayden, S. J. Nieman, W. P. Menzel, S. Wanzong, and J. S. Goerss, 1997: Upper-tropospheric winds derived from geostationary satellite water vapor observations. Bull. Amer. Meteor. Soc., 78, 173-195.

Wood, S. A., G. D. Emmitt, and S. Greco, 2001: The challenges of assessing the future impact of space-based Doppler Wind Lidars while using today's global and regional atmospheric models. AMS preprint volume for the Fifth Symposium on Integrated Observing Systems, 14-19 January 2001, Albuquerque, NM, 95-101.

Weygandt, S. S., S. E. Koch, S. G. Benjamin, T. W. Schlatter, A. Marroquin, J. R. Smart, B. Rye, A. Belmonte, M. Hardesty, G. Feingold, D. M. Barker, Q. Zhang, and D. Devenyi, 2004: Potential Forecast Impacts from Space-based Lidar Winds: Results from a Regional Observing System Simulation Experiment. The Eighth Symposium on Integrated Observing Systems, 11-15 January 2004, Seattle, WA.