This work is distributed under the Creative Commons Attribution 4.0 License.
A description of Model Intercomparison Processes and Techniques for Ocean Forecasting
Abstract. The worldwide availability of numerical simulations providing past ocean state estimates or future forecasts at multiple scales opens new challenges in assessing their realism and predictive capacity through intercomparison exercises. This requires a considerable effort in designing and implementing a proper assessment of model performance, as already demonstrated by the atmospheric community, which pioneered such activities. Historically, the ocean community only recently launched dedicated actions aimed at identifying robust patterns in eddy-permitting simulations: these required the definition of modelling configurations, the execution of dedicated experiments, including the storage of their outputs, and the implementation of evaluation frameworks. Starting from this baseline, numerous initiatives, such as CLIVAR for climate research and GODAE for operational systems, have arisen and are actively promoting best practices through specific intercomparison tasks aimed at demonstrating the efficient use of the Global Ocean Observing System and of operational capabilities, sharing expertise, and increasing the scientific quality of the numerical systems. Examples, such as ORA-IP and the Class 4 near-real-time GODAE intercomparison, are introduced and discussed, together with ways forward for making this kind of analysis more systematic for monitoring the ocean state in operations.
Status: open (until 02 Dec 2024)
RC1: 'Comment on sp-2024-39', Anonymous Referee #1, 10 Nov 2024
This paper, I believe, is intended as a brief review of past efforts and methods to compare ocean state and ocean forecast products that have been developed within the international community. While this is a useful objective, I find that the current version of the paper has many problems, especially if read on its own. I understand that it would form one chapter of a larger report, but I do suggest that someone should review the report as a whole.
The paper does reference other “Chapters” and sometimes non-existent “sections” (lines 29, 61, 66, 84, 100). These need to be properly checked.
The text also has lots of acronyms and other notation that will mean nothing to a wider audience, e.g. ET/OOFS, Class 4, L4 products, “go-no-go”. I appreciate these may appear in other chapters, but they should at least be defined here or cross-referenced when first mentioned.
The AMIP concept is rightly introduced and is a valuable one, but it then seems odd not to mention OMIP, and finally CMIP. The evolution of objectives towards defining the actual ocean state should then refer to the success of ERA and other atmospheric reanalyses. Emphasising the different objectives of GODAE and CLIVAR in comparing reanalyses for the ocean could then be explained and would follow naturally: state estimation and forecasting as different applications.
Section 3.1 should start by properly justifying the value of observation-space verification. The issue of comparisons against independent versus non-independent data should be discussed. It would also be useful to summarise more clearly here the different kinds of metrics and their usefulness.
Section 3.2 is a good place to discuss the value of ensemble comparisons, but it does not really discuss the value of an ensemble product. Are biases in individual products reduced in this way? The Uotila et al. (2019) polar comparison is an example of this. L154: the top right panel seems wrong? Include a reference in the Fig. 1 legend.
Section 3.3: it would be nice if this issue of regional studies were brought more up to date; the references are all rather old. This and the following two sections are very brief.
Section 3.4: this section is so brief that it in no way lives up to the promise of its long heading. What are the key points?
Section 3.5: again very brief, and the title seems to promise a forward look, but this is entirely absent.
I would say this paper needs more attention before it should be published. It needs more careful reading through, and some thought should be given to making it more accessible to a wider audience, since State of the Planet provides “expert-based assessments of academic findings curated for a wider audience to support decision making”.
Citation: https://doi.org/10.5194/sp-2024-39-RC1
RC2: 'Comment on sp-2024-39', Anonymous Referee #2, 15 Nov 2024
This article presents a historical and exhaustive review of the different intercomparison exercises carried out in the ocean forecasting community, including ocean reanalysis intercomparison exercises. Methods and outcomes are discussed.
The paper is well written, but I would suggest a few improvements to make it easier to understand for readers outside the operational forecasting community. There are many acronyms that should be defined the first time they appear in the text (GODAE, CLIVAR, OOFS, CMEMS, ORA-IP, ETOOFS) and some “internal vocabulary” that could be explained in common words, such as Class 1, Class 2 and Class 3.
The addition of a concluding section highlighting the challenges and opportunities arising from, for example, ensemble approaches for analysis and forecasts, higher-resolution systems with increased data volumes to handle, and intercomparison methods based on machine learning would make the paper more impactful, even if these are mentioned previously in the different sections.
The problem of the double penalty when comparing products at different resolutions is not mentioned. This could be done in Section 2.2, when discussing representativity, or in Section 3.1 with the Class 4 metrics. It is also related to the regridding approach used in some intercomparison projects.
I found Section 3.2 confusing. First, the ensemble approach relates to a forecast ensemble from the same system, but then the ensemble relates to an ensemble of forecasts coming from different OOFSs. Intercomparison exercises will offer more opportunities if they involve ensemble forecasts, with a possible comparison of the different spread characteristics.
In Section 3, describing intercomparison exercises in different contexts, I would suggest adding the UN Decade SynObs project, which intercompares Observing System Experiments (OSEs) to assess the impact of diverse ocean observing systems on different OOFSs.
Line-by-line comments
l.62: I would suggest adding “ocean operational forecasting system” to differentiate it from other operational ocean products based only on observations, and from line 49, which also deals with the first intercomparison but for ocean reanalyses.
l.80: the spread of the ensemble is also used as an uncertainty estimate. In the atmospheric community it may more often be seen as a reference (verification against the analysis), which is less common in the oceanographic community.
l.92: OOFS or OOFSs?
l.106 to 109: You may refine the potential use of those emerging methods. This could also be addressed in a conclusion section.
l.114: to compare discrete observations?
l.131: scales and processes matter even for observations, especially remotely sensed ones, which are the result of complex processing.
l.159: the definition and examples of Class 1 metrics should be given, for example: daily 2D and 3D model fields on a common grid.
Figure 1: legends are very small.
l.176: can you describe in a few words the tools developed?
Citation: https://doi.org/10.5194/sp-2024-39-RC2