The NOAA Coral Reef Report Card - Reflections on the report card process

Heath Kelsey ·
29 April 2016
Environmental Report Cards | Science Communication | Applying Science

Caroline Donovan and I facilitated a mini-workshop in Charleston, South Carolina this week to advance the NOAA Coral Reef Monitoring Program Report Card Pilot projects in American Samoa and Florida. The meeting went very well – we had some difficult things to work out, and everyone came together to do just that. Most importantly, we finalized the list of indicators that will be included in these report cards, and developed the key messages for the Coral, Fish, Climate, and Communities and Management groups.

This was an important milestone for this project. For a variety of reasons, it has been a struggle to identify indicators and assessment methods that were acceptable to both the American Samoa and Florida Reef Tract workgroups. This is important because the indicators chosen in these regions will also be used for all of the remaining coral reef monitoring jurisdictions. After an intense 90-minute breakout session on Thursday, each group came together to develop the best possible, albeit imperfect, list of indicators and benchmark approaches.

NOAA Coastal Services Center. Credit: coast.noaa.gov

This struggle, and the way it was resolved, got me thinking more deeply about our report card process. This project has a couple of unique features that are relevant to current directions in our report card work. One of those directions is the need to select indicators that will be useful in multiple settings. In this project, the differences between the systems, and the pressures acting on them, make it difficult to choose one set of indicators that will work well in both locations:

  • The American Samoa reefs are larger and more diverse, with different coral, fish, and human communities.
  • American Samoa has a much smaller population and less intense human impacts than Florida.
  • Some data are collected differently in the two regions, some data are available in one region but not the other, and historical data availability varies between regions and indicators.

To account for some of these differences, we began the project working under the assumption that although the categories of indicators would be the same (e.g., Coral Health), the specific metrics used to measure them might differ slightly in each region. We've been discussing this approach for several other projects as well, and the coral reef report cards seemed like a good test case. This approach allows the flexibility to choose the metrics most relevant in each region and accommodates differences in the way data are collected by different jurisdictions.

Heath reminding everyone of the report card process.

For instance, an indicator for Coral Health in Florida could be calculated from percent cover, where long-term data sets and historical percent cover estimates come from random sites (so they provide a reliable representation of overall condition). In American Samoa, however, the coral cover data (until 2015) were from fixed, not random, sites, and less is known about historical cover estimates. Data from these sites might be very useful for evaluating trends, but they are not representative of the overall status of coral cover in the region.

The option of using indicator categories was appealing in that we could use data and metrics in a way that accounted for these differences. But there was concern that report card results would not be directly comparable between regions, and because comparability is an important objective of the report card project, we forged ahead with the challenge of choosing a common suite of indicators. Our partners rose to that challenge, even leading the breakout groups themselves. In the end, a common set of indicators was chosen, with only slight differences to account for temporary data availability issues. Everyone seems to accept this outcome, and I am hopeful that this will be the final and complete list of indicators that we use in the report card.

Rusty Brainard (NOAA) responding to a comment.

This project also made me reflect on the journey we take our partners on as we help them develop their report cards. In many of our projects, participants at the first meetings have some doubts about how the process will work and concerns about the value of the report card product. These doubts and concerns are almost always resolved as we go through the process: we work out indicators, agree on ways to analyze data meaningfully, and try out ways to communicate the stories the data reveal. In the end, the value of both the product and the process becomes clear. This report card project seems to fit that pattern. We've had challenges, but with persistence and continued discussion, we work them out. I'm happy to see that as we've progressed, participants are more active than ever, even leading breakout sessions to work through some difficult issues. This is great, because it means that folks are not just accepting the report card, but are able to champion the process.

The journey and the struggles that we encounter along its path are relevant to other report card projects and initiatives. In our Basin Report Card Initiative partnership with WWF, we are creating a suite of tools to enable local watershed organizations to develop report cards on their own, anywhere in the world. The goal is a good one: it's about democratizing the process - anyone can create a report card if they have the proper tools. But there is more to the process than just having the tools; it is about having the confidence to know that every setback can be overcome. Every project has difficult times - when agreement seemingly can't be reached, a key stakeholder won't participate, or an important data provider won't release data. These are real challenges that can be difficult to overcome. Without the confidence that comes with experience in working through the process many times, it seems possible that some report card projects won't get past issues like this.

The climate indicator work group discussing indicators and key messages.

Therefore, I suggest three potential options to assist in overcoming these types of challenges. Some of these ideas are already in the works, but I think it's worth specifically calling them out here:

  • We need to be cheerleaders. Every system and region is unique and every project will have challenges that will seem like they are also unique and therefore insurmountable. But persistence and creativity can overcome almost anything. Report card leaders need to know and believe that their problems can be solved.
  • We should encourage and facilitate a "reach one, teach one" concept. Practitioners who have created their own report cards will almost certainly have encountered challenges in the process; they should support additional groups as those groups work through their own.
  • Similarly, we should encourage and facilitate a community of practice for report card projects. We should create and support a web-based forum for sharing ideas, challenges, and solutions. The forum can also be a good way for our team of experienced report card facilitators to provide support and guidance from afar.

It is possible that some level of more direct, on-the-ground involvement from an experienced report card developer may frequently be required to move some projects along. Time will tell, but the tools we are developing now should go a long way toward helping groups help themselves.

About the author

Heath Kelsey

Heath Kelsey has been with IAN since 2009, as a Science Integrator, Program Manager, and, since 2019, Director. His work focuses on helping communities become more engaged in socio-environmental decision making. He has over 10 years of experience in stakeholder engagement, environmental and public health assessment, indicator development, and science communication. He has led numerous ecosystem health and socio-environmental health report card projects globally, in Australia, India, the South Pacific, Africa, and throughout the US. Dr. Kelsey received his MSPH (2000) and PhD (2006) from The University of South Carolina Arnold School of Public Health. He is a graduate of St Mary's College of Maryland (1988). He was also a Peace Corps Volunteer in Papua New Guinea from 1995-1998.


