Quality Control Monitoring
Last modified on Mar 05, 2019
The need for quality control varies, but consistency from year to year is important, because gradual deterioration in accuracy of age estimations can lead to errors in a stock assessment (Lai & Gunderson, 1987; Tyler et al., 1989; Bradford, 1991).
Consistency must be monitored through time to ensure that:
- age estimations of individual readers do not "drift" over time, introducing bias relative to earlier determinations; and
- age estimations by different readers are comparable.
Campana (2001) argued that a QC program based on a reference collection offers cost savings because no secondary age reader is required, although a secondary reader may still be desired for contingency purposes. We disagree. How will discrepancies be discovered unless primary and secondary age readers are compared in some way? Once a discrepancy is found, comparison with a reference collection is needed to determine which reader has drifted off course; but if a reader is never compared with another and is essentially conducting a self-comparison, how will errors be found? Readers tend to be confident in their own determinations, so an external mechanism is needed to identify problems.
A reference collection is still important, because it provides a level of error detection that cannot be matched by simply re-aging a previous year's samples. For example, precision remained high for Nova Scotia haddock (Campana, 1995) based on age bias graphs and re-aging of prior years' samples, yet these tests failed to detect a gradual, seven-year "drift" in age estimations. The bias was uncovered only when the reader aged the reference collection samples.
Practicing on samples from the previous year is common at many locations; however, this measures precision only. Re-aging only a recent sample, whether done by a second reader or not, reduces the ability to detect a gradual shift in age estimations (accuracy). A test for consistency also tests accuracy when it includes known-age samples from the reference collection.
Including a random subsample of a reference collection in a production sample (Campana, 1997; Heifetz et al., 1999), possibly as digital images, makes it more likely that changes in an age reader's criteria will be detected. Results can be analyzed using age bias graphs and age matrices; an absence of bias indicates that the same aging criteria were applied. Performing these additional reads also generates more consensus-aged scales to add to the reference collection. Sample sizes of 100 fish from the reference collection plus 100 fish from a recent production run would provide reasonable statistical power (Campana, 2001).
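The age matrix and bias check described above can be sketched in a few lines. This is a minimal illustration, not an implementation from any of the cited papers; the function names and the small paired data set are hypothetical.

```python
def age_matrix(ref_ages, test_ages, max_age):
    """Cross-tabulate reference ages (rows) against a reader's ages (cols).

    A concentration of counts on the diagonal indicates agreement;
    counts consistently above or below it indicate bias.
    """
    m = [[0] * (max_age + 1) for _ in range(max_age + 1)]
    for r, t in zip(ref_ages, test_ages):
        m[r][t] += 1
    return m

def mean_bias(ref_ages, test_ages):
    """Average (reader minus reference) difference; nonzero suggests drift."""
    diffs = [t - r for r, t in zip(ref_ages, test_ages)]
    return sum(diffs) / len(diffs)

# Hypothetical mixed QC read: reference-collection ages vs. one reader's ages
ref = [2, 3, 3, 4, 5, 5, 6]
read = [2, 3, 4, 4, 5, 6, 6]
M = age_matrix(ref, read, max_age=6)
bias = mean_bias(ref, read)  # positive: reader tends to age older than reference
```

In practice the matrix would be inspected alongside an age bias plot, and a persistent nonzero mean difference would trigger a closer look at the reader's criteria.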
An age-structured population analysis will underestimate strong year-classes and overestimate weak year-classes unless the catch-at-age data are statistically corrected, because aging error smears fish from abundant year-classes into adjacent, less abundant age groups, dampening differences in age frequencies. Older age groups are more affected: at a given CV, aging error spreads a true age across more age groups at older ages than at younger ages. Thus, it is critical to examine precision and bias in age estimation data (Campana, 2001).
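The dampening effect can be demonstrated with a toy misclassification matrix. The error rates and catch numbers below are invented for illustration only; they show how a strong year-class is diluted into its neighbours when ages are misassigned.

```python
# Misclassification matrix: row = true age, col = assigned age.
# Each row sums to 1; a hypothetical ~10% spillover to adjacent ages.
err = [
    [0.90, 0.10, 0.00, 0.00],
    [0.05, 0.90, 0.05, 0.00],
    [0.00, 0.05, 0.90, 0.05],
    [0.00, 0.00, 0.10, 0.90],
]
true_catch = [100, 1000, 100, 100]  # strong year-class at age index 1

# Observed catch-at-age after aging error is applied
observed = [
    sum(true_catch[i] * err[i][j] for i in range(4)) for j in range(4)
]
# observed[1] falls below 1000: the strong class is underestimated,
# while the weak neighbouring classes gain fish and are overestimated.
```

Total catch is conserved; only its distribution across ages is distorted, which is exactly the pattern statistical correction of catch-at-age data is meant to reverse.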
There is little documentation of quality control programs associated with estimating Chinook salmon scale ages (Morison et al., 1998). Thus, we recommend the following QC protocol:
- Develop a reference collection consisting of known- or consensus-aged structures;
- Periodically age a random subsample of a reference collection mixed with recently-aged structures. This ensures that readers do not inadvertently change criteria during the QC test;
- Evaluate results using the CV (Campana et al., 1995), which is simple to implement and effective for testing both short- and long-term precision; and
- For every sample, have a second reader age a minimum of 20% of the scales, compare the age estimations, and calculate the CV. If the CV is <10%, continue; if the CV is >10%, have the second reader age the entire sample. If high CVs persist, retraining may be required. For small samples (<100), have the second reader age the entire sample.
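The CV test in the protocol above can be sketched as follows. The per-fish CV formula (standard deviation across readers divided by the mean, expressed as a percent) follows the approach of Campana et al. (1995); the function names, the 10% threshold variable, and the example data are ours.

```python
import math

def fish_cv(ages):
    """Percent CV of one fish's age estimates across readers."""
    r = len(ages)
    mean = sum(ages) / r
    if mean == 0:
        return 0.0
    var = sum((a - mean) ** 2 for a in ages) / (r - 1)
    return 100.0 * math.sqrt(var) / mean

def mean_cv(paired_ages):
    """Average the per-fish CVs over the 20% second-read subsample."""
    cvs = [fish_cv(a) for a in paired_ages]
    return sum(cvs) / len(cvs)

# Hypothetical (reader 1, reader 2) ages for a 20% subsample
pairs = [(3, 3), (4, 4), (5, 4), (2, 2), (6, 6)]
cv = mean_cv(pairs)
full_second_read = cv > 10.0  # escalate to a full second read if CV exceeds 10%
```

Here a single one-year disagreement among five fish yields a mean CV of about 3%, well under the 10% trigger, so no full second read would be required.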
Bradford, M.J. 1991. Effects of ageing errors on recruitment time series estimated from sequential population analysis. Canadian Journal of Fisheries and Aquatic Sciences 48: 555-558.
Campana, S.E. 2001. Accuracy, precision and quality control in age determination, including a review of the use and abuse of age validation methods. Journal of Fish Biology 59: 197-242.
Campana, S.E., Annand, M.C., and McMillan, J.I. 1995. Graphical and statistical methods for determining the consistency of age determinations. Transactions of the American Fisheries Society 124: 131-138.
Campana, S.E., Thorrold, S.R., Jones, C.M., Gunther, D., Tubrett, M., Longerich, H., Jackson, S., Halden, N.M., Kalish, J.M., Piccoli, P., de Pontual, H., Troadec, H., Panfili, J., Secor, D.H., Severin, K.P., Sie, S.H., Thresher, R.E., Teesdale, W.J., and Campbell, J.L. 1997. Comparison of accuracy, precision, and sensitivity in elemental assays of fish otoliths using the electron microprobe, proton-induced X-ray emission, and laser ablation inductively coupled plasma mass spectrometry. Canadian Journal of Fisheries and Aquatic Sciences 54: 2068-2079.
Morison, A.K., Robertson, S.G., and Smith, D.C. 1998. An integrated system for production fish aging: image analysis and quality assurance. North American Journal of Fisheries Management 18: 587-598.
Richards, L.J., Schnute, J.T., Kronlund, A.R., and Beamish, R.J. 1992. Statistical models for the analysis of ageing error. Canadian Journal of Fisheries and Aquatic Sciences 49: 1801-1815.