NPOESS Advisory Committee for OSSEs

Committee Report on 6-7 March 2003 Presentations by NPOESS OSSE Team

 

Introduction

 

The NPOESS Observing System Simulation Experiments (OSSE) Advisory Committee provides technical oversight and scientific guidance to the investigators involved in the NPOESS OSSE project.  These investigators have requested input from a committee representing the potential users of the polar-orbiting satellite data forthcoming from the future NPOESS satellites’ sensor suites.  The Advisory Committee has been asked to convene as necessary to review progress on development and implementation of the OSSE system, and report on the progress to the NPOESS Integrated Program Office.  This offering constitutes the fifth in a series of such reports.

The fifth committee meeting was held at the National Oceanic and Atmospheric Administration (NOAA) National Weather Service headquarters building in Silver Spring, MD on 6-7 March 2003.  In attendance were the following committee members: Akira Kasahara, T. N. Krishnamurti, Jeff McQueen, Jan Paegle, Edwin Eloranta, and chairperson Donald Norquist. At the meeting, the committee heard presentations from the NPOESS OSSE project investigators (hereafter referred to as the OSSE Team). This meeting included an additional half-day to allow the committee to discuss the information presented and prepare a committee report. The following discussion summarizes the information provided by the OSSE Team presentations and the recommendations given by the committee.

 

A. Summary of Presentations

 

Steve Lord, the NPOESS OSSE team leader, gave an overview of issues that are currently of importance to the OSSE project. Computing resources for this project are limited because funding was not provided in the FY03 budget for a “backup” computing system for NCEP that was targeted for use in satellite data assimilation projects within the Joint Center for Satellite Data Assimilation (JCSDA). This forces any such work to be done on the recently upgraded Silicon Graphics mainframe that was originally purchased for the conduct of the NPOESS OSSEs but must now be shared by the JCSDA projects. Prospects for acquiring the new computing system for the JCSDA in FY04 are uncertain. Funding for the NPOESS OSSE project from the NPOESS Integrated Program Office was recently reduced to zero for FY03; therefore, technically, no more OSSE work tailored to providing guidance for the implementation and use of the NPOESS sensor suite can continue. Some funds for simulating infrared sounders are available through July 2003, but there is some uncertainty regarding the continuation of such funding. NPOESS OSSE project management resources have become more limited, due to increasing administrative demands on the OSSE team leader from other responsibilities.

Steve mentioned that there are other projects relying on the NPOESS OSSE data assimilation system, and specified The Observing System Research and Predictability Experiment (THORPEX) as an example. The primary factors governing the impact of observations on an analysis are the observation density, the observation quality, and the model and analysis resolution. Steve then gave a brief presentation on the Joint Center for Satellite Data Assimilation (JCSDA). Several activities have occurred in the evolution of this organization since the last Advisory Committee meeting. These include instituting a budget, expanding partnerships (AFWA and FNMOC), releasing and reviewing responses to an Announcement of Opportunity, and creating office space at NCEP for a group of some 25 positions representing the participating organizations. Three co-directors have been named, representing NCEP, the NOAA Office of Oceanic and Atmospheric Research (OAR) and NASA. The goals of the JCSDA focus on preparation of an infrastructure to efficiently and effectively use the vast volume of environmental satellite data expected in the NPOESS era for improved environmental prediction. Steve also reviewed recent changes in the NCEP Global Data Assimilation System (GDAS), which included assimilation of rainfall rates inferred from the TRMM and SSM/I satellite sensors; assimilation of QuikSCAT data; an increase in the model top to 0.3 hPa to include more infrared sounder channels; use of AMSU data over land; and improved thinning of satellite data for assimilation. These changes are relevant to the global OSSEs because the NCEP GDAS is the designated system to be used in all JCSDA projects, including any OSSEs.
In response to questions, Steve acknowledged that the radiative transfer model used in the data assimilation system could also play a role in the global forecast model, and reviewed the positive impact observed from the use of “adaptive” observations in real atmosphere observing system experiments conducted over the past five years. These issues have implications for the design of future global OSSEs.

Committee recommendations: (1) Given the termination of funding of the global OSSE project by the sponsor, the NPOESS IPO, the OSSE team should produce a comprehensive written report on all activities paid for by IPO funding. Rather than focusing on the limitations of the project, the report should emphasize the strengths of the results and their implications for the proposed NPOESS sensor suite. The sponsor needs to see these results to get an advance look at what to expect and at what recommendations to make as to the final sensor suite configuration. (2) Set up a meeting to present a distilled, strong presentation of the OSSE project results to the appropriate levels of NPOESS decision-makers to ensure that these results are considered in NPOESS decisions. (3) Determine whether it is possible to continue the global OSSE effort under the auspices of the JCSDA, so that any future funding can be channeled through this structure. Consider a proposal to the JCSDA to maintain the OSSE project software so that outsiders can propose projects to continue global OSSE studies. The Committee suggests that a science program with specific objectives be submitted to the JCSDA at the same time. Emphasize as a primary justification for this project the need for an infrastructure for the assimilation of the NPOESS sensor products as they become reality on the operational satellite.

 

Tom Kleespies of NOAA/NESDIS presented an update on the simulation of radiance observations from the advanced sounding sensors under consideration for the NPOESS sensor suite. Having obtained assistance on this project from Dr. Haibing Sun since January 2002, Tom was able to present considerable progress since the last Advisory Committee meeting. Infrared sensors for which simulated observations are required are ATOVS, AIRS and CrIS. Microwave sensor observations to be simulated are ATMS and CMIS. Simulation status for the sensors: ATOVS – done; AIRS – nearly done; CrIS – in progress; ATMS – just starting; CMIS – future. Tom described the assumptions made in the AIRS simulations, which included use of a community radiative transfer model and a surface properties generator. The AQUA satellite now in orbit provided the earth-location fields of view used in the simulations. A different radiative transfer model is to be used in the assimilation system for the OSSE impact experiments. Issues Tom cited included the inconsistency of cloud assumptions between the radiance and Doppler wind lidar (DWL) observation simulations; there are plans to reconcile the two approaches in the near future. There is also a difference between the CrIS field of regard and that of the other sensors that needs to be accommodated in the simulations.

Committee recommendations: (1) Tom was not able to show comparisons of simulated and observed radiance profiles for the AIRS. The Committee has requested that those be provided so that profile characteristics can be compared. (2) Since the realistic simulation of these sensors is vital to the reliability of the OSSE impacts, cloud impacts on radiance profiles should be handled in a manner similar to how they would be on the operational satellite.

 

George (Dave) Emmitt of Simpson Weather Associates, Inc. presented two talks, the first of which was a briefing on the simulation of cloud and water vapor motion winds from the OSSE nature run data set. He described the objectives of this task as simulating and investigating the realism of cloud and water vapor motion vectors as they might be derived from a sequence of satellite images. Dave described the criteria used for determining suitable target clouds for the simulation from the T213 nature runs. To assess realism, the simulated wind vectors were compared with those derived from GOES satellite imagery. His conclusion was that the simulation algorithm produces realistic cloud motion winds from the nature run for use in the global OSSEs. Low level vectors were found to be somewhat problematic, which was attributed to less reliable clouds in the nature run below 700 hPa.

In his second talk, Dave presented his latest work on the simulation of DWL observations for the global OSSEs. Unlike the previous work, which focused on “bracketing OSSEs” that gave bounds for impact, the newest set of simulations was based on realistic DWL systems. Factors adding realism in these simulations included a lower satellite orbit altitude, flow-dependent observation errors, inclusion of systematic error, and aerosol- and attenuation-dependent distributions of the simulated observations. A collection of realistic DWL sensor scenarios was considered that included single technologies, either coherent or direct detection, and a hybrid of the two. The next step is to actually produce simulated observations from selected DWL scenarios that include systematic errors, fewer observations and greater cloud effects than were used in the bracketing OSSE simulations. Dave called for more input from the diverse DWL community involving both technologies, and reiterated the need to include cloud and water vapor motion winds in the OSSEs.

Committee recommendations: (1) It is important that lidar wind impacts for fieldable DWL systems be assessed before any recommendations for specific systems are made. There is a need for larger DWL community participation in this project. (2) Preparing simulated observation data sets from the nature run should be done in a manner consistent with the way the observations are to be used with the observation operator in the global data assimilation system. (3) Following from (1), there is a need for even-handedness in the designs used for the realistic coherent and direct detection DWL systems. Compare what is proposed with what exists in order to fine-tune the feasible technologies and to constrain the envelope of what is feasible. (4) Make sure the global aerosols assumed conform to what is known about global aerosol distributions; see, for example, http://www.lrz-muenchen.de/~uh234an/www/radaer/gads.html.

 

Jack Woollen of NCEP/EMC gave a presentation on his most recent work on determining empirical observational errors and applying them to simulated observations used in OSSEs. He broke the observation down into components of true value, instrument error, representativeness error and analyzed error, the last of which he described as the observation content erroneously analyzed. He proposed to use the observation – analysis difference as a way of capturing the systematic error in the analyzed fields. Jack showed observation – analysis differences for forty years of RAOB winds and temperatures using the NCEP-NCAR reanalysis, which indicated that analysis systematic error has declined over that period of time. He proposed using the differences computed over the OSSE nature run period (5 Feb – 7 Mar 1993) as a basis for applying observational errors to the “perfect” observations extracted from the nature run data set. He discussed the issues involved in compatibility between the real topography and the model topography in determining how to apply the nature run data in computing the differences. Results were shown for OSSEs using simulated observations laden with various multiples of the observation – analysis difference, which he compared with the impacts of real observations on real analyses and forecasts. In the Northern Hemisphere, perfect observations showed little difference in impact on forecasts relative to the real case. In the Southern Hemisphere he showed that perfect observations could improve the forecasts with respect to the real observations. His conclusions were: using observation – analysis differences to supply random and systematic errors empirically to simulated observations for OSSEs seems to be a viable concept; forecast results suggest that it may be optimal to assign different multiples of observation – analysis differences in different regions of the globe; and it may be possible to improve the estimate of systematic error.
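As a rough illustration of the methodology described above (not the OSSE Team's actual code), the error assignment can be sketched in Python. The function name, array layout, and the simple Gaussian treatment of the random component are assumptions made for illustration only:

```python
import numpy as np

def add_empirical_errors(perfect_obs, obs_minus_analysis, multiple=1.0, rng=None):
    """Add empirically derived errors to 'perfect' observations from a nature run.

    perfect_obs        : array of observations extracted from the nature run
    obs_minus_analysis : array of historical observation - analysis differences,
                         shape (n_samples, n_obs), one column per observation site
    multiple           : scaling applied to the differences, mirroring the
                         experiments that tested various multiples of O - A
    """
    rng = np.random.default_rng() if rng is None else rng
    # Time-mean difference serves as the systematic (bias-like) component.
    systematic = obs_minus_analysis.mean(axis=0)
    # Departures from the time mean set the amplitude of the random component.
    random_std = obs_minus_analysis.std(axis=0)
    random_part = rng.normal(0.0, random_std)
    return perfect_obs + multiple * (systematic + random_part)

# Example: RAOB-like winds with synthetic O - A statistics.
rng = np.random.default_rng(0)
perfect = np.array([10.0, 12.0, 15.0])           # m/s, from the nature run
o_minus_a = rng.normal(0.5, 1.0, size=(40, 3))   # 40 historical samples
simulated = add_empirical_errors(perfect, o_minus_a, multiple=1.0, rng=rng)
```

With `multiple=0` the perfect observations pass through unchanged, which makes the scaling experiments described above easy to reproduce in a controlled way.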

Committee recommendations: (1) Observation – analysis can be interpreted as the portion of the observation value that did not get included in the analysis. The implication is that the rest of the observation value is truth and contributes this truth to the analysis. If the “perfect observation” extracted from the nature run represents this truth component, then adding an estimate of the non-analyzed component contributes the part of the simulated error that should be excluded by the analysis in the assimilations done in the OSSEs. It is not clear, then, why observation – analysis should be thought of as a lower limit of the observation error; for the purposes of analysis, it should be considered the whole of the observation error. This has implications for the use of multiples of the observation – analysis difference in the OSSEs. Perhaps it is when the observation – analysis differences are averaged over time, forming the basis for the “systematic” error, that the magnitude is reduced. In any case, the fact that it is necessary to use such multiples to get data impacts comparable to those in real observing system experiments dictates that the fundamental principles used in this work should be more fully examined. If this interpretation is incorrect, the explanation of this approach should be made clearer, specifying what information derived from observation – analysis is really being contributed to the simulated observations. (2) Once the principles used in this approach are shored up, consideration should be given to applying a similar methodology to other observation types besides RAOBs. Such simulated errors for satellite data should be compared with errors directly simulated in the satellite radiance computations.

 

Michiko Masutani of NCEP/EMC gave a presentation on the OSSEs conducted to date. All OSSEs presented used the NCEP global data assimilation system that was operational in March 1999, with a T62 spectral truncation and 28 model levels. The calibration of the OSSEs was done by comparing the impact of denying certain types or combinations of types of observations from both the real data observing system experiments (OSEs) and the simulated data OSSEs. They found that RAOB winds have more impact than RAOB temperatures in both the OSEs and the OSSEs. Removal of simulated TOVS radiances in the OSSEs had considerably less impact than doing the same with the real TOVS radiances in the OSEs. This was credited to the fact that sea surface temperature was held constant in the nature runs and allowed to vary in the OSEs; ocean areas are where TOVS gives the greatest contribution to the analysis. Overall there was good agreement between the trends in the OSEs and the OSSEs.

A second topic covered was the formulation of simulated observation errors. OSSE results are expected to be sensitive to the way observation errors are included in the simulations. To date only random errors have been included in the simulated observations; there is a recognized need to include systematic large-scale errors. The observation – analysis methodology described in Jack Woollen’s presentation is a candidate for applying systematic errors. It was noted that errors in real surface data, due largely to local siting effects, are larger than those of simulated surface observations. This underestimate of simulated error may mask or diminish the impact of simulated observations from other observing systems. A component of the observation – analysis difference is the representativeness error of the observations (since the analysis more closely represents the spatial scales of the numerical model). However, this component of the error depends on the meteorological conditions, whereas the time-averaged observation – analysis difference is by definition systematic in nature. Observation – analysis difference information was used to simulate errors in surface observations and upper air observations for a limited set of impact experiments in which surface data with varying error assignments were withheld from the analysis. The impact on forecast accuracy (500 hPa anomaly correlation) of adding errors to simulated surface observations was less than that of adding errors to upper air data. The best agreement of impacts between simulated and real experiments resulted from between 0.5 and 2 times the observation – analysis difference.

Finally, Michiko presented results from a series of “bracketing OSSEs” that used a range of DWL wind simulation scenarios, ranging from full tropospheric line-of-sight (LOS) soundings (clouds permitting) to a non-scanning DWL that provides LOS soundings along a single track. Impacts of the use of these simulated wind observations for the various scenarios, with conventional observations and with both conventional observations and TOVS, were shown in the form of anomaly correlations for the northern hemisphere extratropics, the tropics and the southern hemisphere extratropics. The nature of the impacts in each of these regions was discussed. Generally, the more complete the set of DWL soundings, the greater the impact. TOVS showed some impact with the lesser amounts of DWL winds but no or negative impact with the fullest set of DWL winds. A notable result is that with the full set of DWL winds the forecast skill in the southern hemisphere became similar to that of the northern hemisphere. Comments given include: results need to be verified using further experiments that include varying errors in the simulated observations; upgrades in the assimilation system may alter the impacts; the vastly greater data density of the DWL simulations may have a greater effect on impact due to the design of the background field error covariances in the assimilation system; and DWL experiments could be used to calibrate the impacts from other simulated and real observations such as cloud motion winds and satellite radiances. Plans for future OSSEs include: experiments involving simulated observations from the AIRS sensor; improved simulation of TOVS and AIRS through reconciliation of cloud distributions in the nature run with assumptions used in the DWL simulations; and OSSEs with realistic DWL scenarios, with combinations of DWL and cloud motion winds, and with DWL plus AIRS.

Committee recommendations: (1) There is a difference between an overestimate of forecast skill and an overestimate of observational impact. In fact, if the nature run and assimilating models are too much alike, it is likely that the forecast skill (where the nature run acts as a reference) will be overestimated. This in turn could lead to an underestimate of observational impact. This effect should be quantified as a way of determining the overall accuracy of the OSSEs. (2) There is a concern about the metrics being used to evaluate lidar wind impacts. Instead of using impact metrics over large sections of the globe, which may cause some of the effects to get washed out, it may be better to measure the impacts in limited areas in cases of disturbed conditions. Answer the question: how much impact do DWL winds have on the prediction of weather features of interest (e.g., mesoscale lows with strong convection, polar vortices, tropical storms)? (3) In fraternal twin OSSEs, comparing model with model (the nature run) causes an artificial reduction in the resulting anomaly correlation due to differences in the systematic errors of the two models. One model may move a storm too fast and the other too slow, which elevates the correlation error relative to evaluating the forecast against an analysis that has the storm in its actual location. The model differences may thus be deflating the computed anomaly correlations, which should be normalized for this effect. (4) One measure of progress in data assimilation is how realistically the divergent wind component can be analyzed. Because the magnitude of the divergent wind is small compared to the rotational wind, it has been difficult to analyze accurately. Yet the accuracy of the divergent wind is very important for forecasting active weather disturbances. Thus, we anticipate that wind observations such as those from a DWL will contribute greatly to improving the quality of divergent wind analyses. One question in this regard is: how realistic is the divergent wind field in the nature run? Unless this information is available and systematic differences are taken into account, the evaluation of the impact of wind observations on the analyses will be biased. A study of the quality of the divergent wind, as well as of the rotational wind, in the nature run data set should be conducted in comparison with real analyses to identify errors in the nature run that can be passed into the OSSE analyses through the simulated observations. (5) How realistically is the moisture field reproduced in the nature run? Analysis of the moisture field is strongly dependent on the treatment of the hydrological cycle, particularly cumulus parameterization, in the analysis-forecast model. Thus, systematic errors in moisture analyses produced by different models are significant. This point must be kept in mind in evaluating the impact of new observing systems through OSSE techniques.
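For reference, the anomaly correlation metric discussed above can be sketched as follows. This is a generic textbook formulation, not the specific verification code used by the OSSE Team; the climatology field is an assumed input:

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Centered anomaly correlation between a forecast field and its verifying
    field (the nature run in an OSSE, or an analysis in a real experiment).

    All arguments are arrays of the same shape, e.g., a 500 hPa height field
    on a latitude-longitude grid.
    """
    # Remove climatology, then remove the area means of the anomalies.
    f_anom = forecast - climatology
    v_anom = verification - climatology
    f_anom = f_anom - f_anom.mean()
    v_anom = v_anom - v_anom.mean()
    denom = np.sqrt((f_anom ** 2).sum() * (v_anom ** 2).sum())
    return float((f_anom * v_anom).sum() / denom)

# A forecast identical to its verification scores 1.0 up to rounding.
clim = np.zeros((10, 10))
field = np.random.default_rng(0).normal(size=(10, 10))
ac = anomaly_correlation(field, field, clim)
```

The normalization suggested in recommendation form above would amount to adjusting this score for the portion of the anomaly mismatch attributable to systematic differences between the two models, rather than to the observations under test.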

 

Bob Atlas of the NASA Data Assimilation Office did not attend the meeting, but was scheduled to present a talk on their activities in global OSSEs. Viewgraphs were obtained after the meeting and are reviewed briefly here. They have generated a 3½-month-long global nature run using their finite-volume community climate model. Simulated global fields of standard meteorological variables are available on 36 vertical levels at six-hour intervals during the period 11 September – 31 December 1999. While it is not clear from the viewgraphs, it appears that data assimilation experiments using the finite-volume data assimilation system (thus, an identical twin experiment?) were conducted from 11 September through 31 October. Calibration experiments involved both real and simulated conventional, TOVS, cloud track wind and QuikSCAT observations. OSSEs are to be conducted with simulated DWL winds, AIRS radiances, and sea surface winds from the ADEOS-II satellite. Lidar wind OSSEs are currently being conducted for a variety of observation scenarios for a four-day assimilation period. Five-day forecasts to be run from the end of that period will be compared with the nature run to assess impact from the simulated observations. New regional metrics focused on various weather scenarios of interest have been developed and will be applied to the forecast evaluation. Significant impacts from lidar winds are expected in the OSSEs.

Committee recommendations: (1) Whereas there is utility in independent assessment of NPOESS sensor impact such as that planned here, there is no evidence that the NASA project is focused on estimating the impact of sensors planned for the NPOESS satellite. There seems to be a parting of the ways between the NASA and NCEP/NESDIS OSSE efforts; this may be a result of funding reductions from the NPOESS IPO. The Advisory Committee needs more information on the role that the NASA effort has in the NPOESS OSSEs, and on whether this has changed from the original charter for the OSSE team. Is NASA no longer a team member, and is it carrying out a separate OSSE project? (2) The Advisory Committee needs more information on the NASA OSSE project design, especially with regard to the apparent identical twin approach that is being used.

 

John Derber of NCEP/EMC gave the final presentation, on monitoring of real-time AIRS data with the NCEP global data assimilation system. A new version of the OPTRAN radiative transfer code is operating in the assimilation system; the original version was considered too cumbersome to apply to the 281 channels of the AIRS sensor, and the current version has been streamlined for greater efficiency. John described the AIRS data monitoring effort, showing rates of channel reception as the observations are obtained from the sensor aboard the AQUA satellite. New quality control procedures are in place that are based on estimated cloud height using the first contaminated channel. He described the NCEP monitoring web site. Assimilation of AIRS radiances has been conducted for a single case, and differences in the analyses with and without AIRS were shown. Future plans for assimilation of AIRS data include transferring the system to a new computer, completing development of a bias correction, continuing monitoring and single-analysis-step assessment, beginning parallel testing and evaluation of AIRS plus AMSU-A and -B data assimilation, and refining the AIRS and AMSU quality control and bias correction procedures and repeating the experiments.

Committee recommendations: (1) Focus evaluation of real AIRS impacts on significant weather events in limited areas. (2) Once data assimilation procedures for AIRS and AMSU are in place, conduct AIRS + AMSU both with and without realistic DWL winds to determine sensitivity of lidar wind impacts to the presence of infrared and microwave radiances in the assimilation.

 

B. Advisory Committee General Comments

 

            In looking at the comments the Committee made at the last NPOESS Advisory Committee meeting (May 22-23, 2001), we noted a number of recommendations that were acted upon and a number that were not. One that we feel was not addressed was the issue of comprehensive reporting to the funding agency. In the previous report the committee referred to “management’s and funding agency concern about rate of progress.” It was suggested in the current meeting that one possible reason the NPOESS IPO is no longer funding the OSSE Team effort was the perception of going so far with so little to show. The Committee feels that if the recommendation for a distilled presentation to the IPO at intervals along the way had been followed, the IPO would at least have seen that progress was being made and that careful attention to detail requires time. The Committee recommended that the OSSE team “present pertinent information on progress to the immediate contact persons at the IPO and other funding agencies…on a continuing basis.” The Committee senses there is a reluctance to do this because of concern about communicating wrong impressions about the results. As stated above, emphasis should be placed on the strengths of the project’s results without neglecting a summary of the caveats. Priority should be given to both written and oral comprehensive reports to the IPO by the OSSE team. Perhaps when the IPO sees the quality of the work done, it may rethink its decision about future funding.

            Another recommendation given at the last meeting was the simplification of the calibration experiments. This was addressed in the design of the most recent calibration experiments presented, though not in the presentation of results. While the Committee’s recommendation of “adding incrementally increasing numbers of randomly located observations, without regard to simulating current observing systems” was not followed, the design used did isolate the effect of the current observing systems in a way that the effect of each (over the whole globe) could be determined. The Committee reiterates its call to make the calibration results easy for the sponsor to understand and appreciate.

            Another recommendation made at the previous meeting was to “compare the spectral analyses from the synthetic environment with that of the corresponding components of the real experiments.” This was not acted upon and in fact was suggested again at the present meeting. There is general concern that the nature run may or may not be realistic enough to be used to construct simulated observations for OSSEs. One way to answer this question is to perform spectral analyses on nature run data sets for all analysis-forecast variables, including temperature, geopotential height, vorticity and divergence (not horizontal wind components), and compare them with those from corresponding analysis data. This too is a high-priority recommendation that should be acted on to assess the realism of the nature run data used in the OSSEs.
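One possible form such a comparison could take is sketched below. This is a simplified illustration using zonal Fourier spectra on a latitude-longitude grid; a full treatment of T213 spectral data would use spherical harmonics, and the field names here are stand-ins:

```python
import numpy as np

def zonal_power_spectrum(field):
    """Mean power spectrum along the zonal (longitude) axis of a lat-lon field.

    field : 2-D array (nlat, nlon). Returns power per zonal wavenumber,
            averaged over latitude. A complete comparison would use spherical
            harmonics; this 1-D Fourier version illustrates the idea.
    """
    coeffs = np.fft.rfft(field, axis=1)
    power = (np.abs(coeffs) ** 2).mean(axis=0)
    return power / field.shape[1] ** 2

# Compare nature-run and analysis spectra wavenumber by wavenumber;
# large departures of the ratio from 1 flag scales where the nature
# run may be unrealistic. Random fields stand in for real data here.
rng = np.random.default_rng(0)
nature = rng.normal(size=(64, 128))    # stand-in for a nature-run vorticity field
analysis = rng.normal(size=(64, 128))  # stand-in for the corresponding analysis
ratio = zonal_power_spectrum(nature) / zonal_power_spectrum(analysis)
```

Applied per variable (temperature, geopotential height, vorticity, divergence), such spectra would make systematic scale-dependent differences between the nature run and real analyses directly visible.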

            A recommendation that was not carried out, due to a lack of sufficient computing resources, was “demonstrating the OSSE infrastructure utility at higher spatial resolution.” A continuing concern is that the rather coarse T62 spectral resolution is out of date, and it is unknown how much of an effect this will have on the OSSE results. If the JCSDA is provided with more up-to-date computing resources, an increase in the spectral resolution of the data assimilation system should be pursued. After all, the nature run has a T213 spectral resolution, so there is plenty of room for increasing the data assimilation system resolution.

            The Committee grants that, as put in the previous report, “the OSSE Team should consider each suggestion and determine which should be done, and which have the most payoff for the least amount of investment of time and energy.” However, some of the most important suggestions were not acted on, and as a result the Committee feels the project accomplishments have not received the attention they deserve. The last report ended with the following pointed recommendation: “The team needs to emphasize a ‘Look what we can do for you’ sales pitch to the NPOESS sensor teams and get their support for the OSSE Team’s active role in the sensor design and selection process. This is the best guarantee for continued future funding.” While it may be too late to resurrect discontinued direct funding from the IPO, it is still necessary for the importance of the OSSE Team’s work to be heard if it is to be continued under the auspices of the JCSDA.