Review: update on performance assessment and monitoring systems for ophthalmologists
Review Article


Heather G. Mack ORCID logo

Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia

Correspondence to: Heather G. Mack, B Med Sc, MBBS, MBA, PhD, FRANZCO, FRACS. Centre for Eye Research Australia, University of Melbourne, 2/232 Victoria Parade, East Melbourne, VIC 3002, Australia. Email: hmack@eyesurgery.com.au.

Abstract: Performance assessment and monitoring systems are the process by which ophthalmologists demonstrate they are fit to continue practicing. Revalidation is the process by which licensed doctors are legally required to demonstrate to medical regulators that they are up to date and fit to practice in order to maintain their medical license, and began in the United Kingdom (UK) in 2012 following well-publicized events. Recertification is a different process which began in North America and demonstrates doctors have maintained competence, without involvement of external stakeholders. Arguments for and against the concept of performance assessment are reviewed. Stakeholders in performance assessment, whether part of the process or not, are doctors themselves, peers and other health care providers, patients, hospitals and health systems, medical regulators, and health policy makers. Early models of performance assessment included portfolios, credit accumulation, and closed-book examinations. Models are in evolution with the value of formative rather than summative assessments, and the need for methods of assessing procedural competency increasingly recognized. Emerging methods of performance assessment of practicing ophthalmologists include use of performance indicators, simulation, and workplace-based assessments. Alternate methods of assuring the public by improved detection of outliers are discussed. Despite weak evidence supporting performance assessments in demonstrating continuing competency to practice, the process is gradually being accepted worldwide. Further research into optimum methods of performance assessment and monitoring processes is required, along with cost-benefit analysis.

Keywords: Performance assessment; revalidation; recertification; maintenance of competency; relicensure


Received: 21 March 2023; Accepted: 27 November 2023; Published online: 08 January 2024.

doi: 10.21037/aes-23-21


Introduction

Background

The need for ongoing performance assessment of medical practitioners while they are practicing, as distinct from during training, was first recognized by the American Board of Medical Specialties (ABMS) in the 1940s, when it proposed time-limited recertification, and confirmed in a recent meta-analysis which demonstrated that physician performance decreases with time elapsed since initial training (1). Since first proposed, performance assessment and monitoring processes have evolved differently across the globe owing to differing history and context, with revalidation in the United Kingdom (UK) and recertification in North America the commonest methods.

Medical revalidation is the process by which licensed doctors are legally required to demonstrate to medical regulators that they are up to date and fit to practice in order to maintain their medical license. Revalidation was developed by the UK General Medical Council (GMC) (2) and has been adopted in other jurisdictions such as Australia.

In parallel, recertification was developed in the USA and Canada, and is a different process by which medical practitioners renew their professional credentials, typically USA Board Certification. Recertification implies but does not directly demonstrate fitness to practice, and thus differs from revalidation.

Continuing professional development (CPD) is a documented self-directed process, which includes reflective learning, development goals and incorporates both formal and informal learning (3,4). CPD is typically self-directed towards self-identified knowledge gaps. However, accuracy of these self-assessments is recognized to be poor (5-9). Revalidation and recertification incorporate some CPD activities, but are intended as a broader demonstration of competency. CPD can include audit and other measures of performance, but is self-directed and does not involve external stakeholders such as employers and patients.

Rationale and knowledge gap

Performance assessment and monitoring systems are complex. Knowledge gaps exist regarding effectiveness of the various methods of performance assessment which are emerging globally, and perspectives of stakeholders.

Objective

Characteristics of performance assessment and monitoring systems vary widely between countries (10). This brief review provides a historical overview, discusses the need for performance assessment of practicing ophthalmologists, outlines the commonest current practices (revalidation and recertification) and what is known of stakeholders' perspectives, and reviews some future trends in performance assessment and monitoring systems for ophthalmologists. It is not intended to cover the detailed requirements of all jurisdictions. Where possible, studies pertaining to ophthalmology are cited, but much of the reviewed material relates to practicing physicians and surgeons rather than ophthalmologists.


Overview of performance assessment and monitoring systems

Historical background of recertification and revalidation

Recertification started formally in Canada in 1969 when the College of Family Physicians started certification of its members and required recertification every 5 years. In the USA the ABMS suggested time-limited certification in 1940, and in 1969 the American Board of Family Practice decided that all its certifications would be valid for only 7 years. Most of the other Boards then introduced time-limited certification and a recertification process. The American Board of Ophthalmology (ABO) introduced certification limited to 10 years, with the earliest method requiring ophthalmologists to re-sit a closed-book examination every 10 years to maintain Board Certification. Current requirements for ABO recertification [also known as maintenance of certification (MOC)] include practice-based learning and improvement, patient care and procedural skills, medical knowledge (learnt through CPD activities), interpersonal and communication skills, systems-based practice, and professionalism (11). Recertification remains self-directed and learner-driven, focused on everyday practice and grounded in educational principles, and does not include other stakeholders such as patients and employers. Cordovani et al. (12) highlight that if recertification processes are to demonstrate competency of practicing physicians, then challenges include defining the competencies of practicing physicians (as distinct from trainees), improving self-assessments, and improving physicians' engagement and motivation.

The different concept of revalidation developed in the UK after multiple publicized adverse patient outcomes, particularly related to cardiac surgery in Bristol (13), and Dr. Harold Shipman, a General Practitioner (GP) thought to be a serial killer with at least 215 patient victims (14). In this context, in 2010 the GMC stated 'the purpose of revalidation is to assure patients and the public, employers and other healthcare professionals that licensed doctors are up-to-date and fit to practice', culminating in all UK doctors being required to participate in revalidation from 2012. Revalidation involves external stakeholders including patients, employers, service providers, peers, other healthcare professionals, and policy makers. Supporting information during revalidation includes CPD activities, quality improvement activity, significant events, feedback from patients, feedback from colleagues (peer review), and complaints and compliments (15).

Arguments for and against performance assessment and monitoring systems for practicing ophthalmologists

Board certification (and presumably other forms of formal post-graduate medical training and recognition) is thought to improve quality of patient care, but the evidence is modest (16,17). A recent meta-analysis demonstrated that physician performance decreases with time elapsed since initial training (1). There is weak evidence that undertaking MOC is associated with improved quality of care. Norcini et al. (18) recently demonstrated improved mortality for patients with acute myocardial infarction or congestive heart failure whose physicians had board certification, with further improvement in those whose physicians had undertaken MOC. On the other hand, Xu et al. (19) found no difference in surgical complications between surgeons who did and those who did not undertake MOC activities. Between 1992 and 2012, ophthalmologists who did not maintain board certification had a higher risk of disciplinary license actions (20). Proponents of ongoing performance assessment argue that it builds public trust in doctors, but this is difficult to quantify. Maintained competency of ophthalmologists is important for health system capacity, given the very high costs to taxpayers and insurers of maintaining hospital and health care systems. Clinical quality is a significant component of high-performing health care delivery (21).

The strongest argument against performance monitoring is that the evidence that it has led or will lead to improved patient outcomes is very weak. Cited studies (1,16-20) require very large datasets to demonstrate differences in grouped patient outcomes, and cannot be extrapolated to performance of individual ophthalmologists. Evaluating revalidation processes in Kirkpatrick terms (22), probably the highest level that could be realistically assessed is ‘change in professional behavior’ where doctors can indicate improvement through reflection.

Another problem with performance assessment is defining competencies of practicing ophthalmologists, noting that practitioners typically develop special interests and narrow their scope of practice over time. The baseline against which individual performance is to be assessed is challenging to define. Competency-based CPD is being developed as a first step (23).

Revalidation gained momentum globally following its introduction in the UK in 2012, driven by a need to regain community trust in doctors. Dr. Shipman was not unique, and doctors as serial killers are recognized in the literature, but the likelihood of repeat 'serial euthanasia' killings by large numbers of doctors appears low (24). Revalidation may thus be seen as addressing a problem of very low probability.

Performance assessment and monitoring systems may not detect underperforming doctors, or those with high numbers of unexplained patient deaths. Dr. Shipman was commended in his practice audit 9 months prior to his arrest (25).

Performance assessment and monitoring systems usually require establishment of administrative processes to review data submitted by practitioners (e.g., UK Responsible Officers). These processes can be costly, particularly in jurisdictions where the costs are borne by practitioners rather than government-funded bodies (e.g., the Medical Board of Australia passes costs on to doctors on a cost-recovery basis). No cost-benefit studies are available.

Lastly, external performance assessment can be considered at odds with the concept of professional self-regulation through life-long, self-directed learning in CPD programs. Revalidation in particular can be seen as a 'command and control' system, with conflict between the perspectives of doctors and regulators. There is limited scope to personalize assessment, which is summative rather than formative. The stakes are high, and failure results in punitive actions such as forced retirement rather than supportive remediation. Revalidation also sits poorly with episodic learning, in which practitioners have intense and quiet years of professional learning; instead it requires a uniformly paced process of learning that fits externally mandated time periods.


Stakeholders' perspectives in performance assessment

The UK revalidation process formally involves external stakeholders including patients, employers, hospital and health care systems, peers, other healthcare professionals, policy makers, and regulators. These stakeholders are also involved in other types of performance assessment. There is limited evidence and/or information about most of these perspectives.

Doctors undergoing assessment

A recent scoping review (26) reported that doctors in the USA, UK, Canada, Australia, New Zealand, and Ireland found performance review a good idea in theory, but its objectives difficult to achieve in practice. Common barriers were time, complexity of requirements, and lack of flexibility in addressing doctors' personal and professional circumstances. Older doctors found the process less beneficial than younger cohorts (27). Perhaps related to this, introduction of revalidation in the UK led to greater numbers of doctors ceasing clinical practice (28). Doctors have been found to be more motivated to participate in revalidation by regulatory requirements than by patient expectations (29).

Peer doctors and other health professionals

In the UK, peer assessment is a component of revalidation for medical practitioners and nurses (30). Peers can probably identify 'good' or 'bad' doctors, and this might be the most useful component of revalidation, but accurate written feedback can be difficult to obtain and to quantify. A recent review concluded that evidence supports the introduction and use of peer review processes as a quality improvement tool, noting that cost is a barrier to implementation (31).

Patients

Patients are best served by a system which allows transparency on whether doctors are competent and fit to practice. This information can be difficult to find and is not well represented on regulators' websites. A recent move by the Medical Board of Australia to record unsubstantiated patient complaints online is controversial, seen as the pendulum swinging too far toward transparency while reducing natural justice for practitioners. In the UK, patients are involved in monitoring and assessing medical performance, but a recent review found this involvement is limited, variable, and primarily achieved through patient feedback and complaints (32).

Hospitals, health care systems, service providers

Hospitals and service providers are best served by competent doctors performing at their peak, delivering standard of care, in a cost-effective manner, with minimized variation in service quality. However, it can be argued that there has been no increase in system-wide health outcomes despite the introduction of performance assessment systems in the USA and similar developed countries (33). Analysis in the UK of the cohort of doctors who left practice following implementation of revalidation in 2012 found that they did not appear to have provided lower-quality clinical care than those who continued to practice, meaning the process did not selectively remove underperformers; nor was there improvement in system-wide quality of care following introduction of revalidation (28).

A recent review of employed doctors in the UK found that health care organizations have become intermediaries in the relationship between doctors and regulators, resulting in reduction in doctors’ autonomy and making them more accountable to and reliant on the organizations that employ them (34).

Regulators

Regulators require revalidation to be both effective and simple to administer for large numbers of doctors (for example, the UK registers approximately 45,000 doctors, and the ABO has over 10,000 board-certified ophthalmologists participating in the continuous certification process). The USA MOC process has been criticized as monetized, with the primary reason for its establishment being the financial well-being of the boards themselves (35).

Policy makers

Health policy is defined by the World Health Organization as the decisions, plans, and actions that are undertaken to achieve specific healthcare goals within a society. Examples include the National Institute for Health and Care Excellence in the UK, a provider of evidence-based recommendations for health and care in England, and Medicare in the USA, a health insurance program for people aged 65 years and older. Optimal health policy requires doctors to be practicing optimally. Similar to healthcare organizations and medical regulators, there is ongoing tension between policy makers and the medical profession (36).


Early models of performance assessment

Early models of performance assessment include portfolio-based data collection, credit accumulation, and examination-based models. These models are not mutually exclusive; for example, the ABO Continuing Certification Program in 2023 requires accumulation of 50 American Medical Association (AMA) Physician Recognition Award (PRA)™ credits over 2 years, completion of one activity that qualifies as patient safety, meeting the annual passing standard of the Quarterly Questions, and completion of two activities from the Improvement in Medical Practice menu (37).

Portfolio

A portfolio is a purposeful collection of information. Some of the components of portfolios are shown in Table 1; portfolio contents vary between jurisdictions. The process of collection is thought to be a reflective activity. Portfolios are commonly used in medical schools, and increasingly to record CPD activities.

Table 1

Contents of portfolios can include

• Personal statement of goals
• Personal strengths and weaknesses in cognitive, affective and metacognitive performance
• Assessment of personal learning style
• Plans for addressing learning needs
• Self-questions with reflections, plans and responses compiled during learning experiences
• Self-evaluation of performance, with self-generated and other sources of data
• Patient surveys
• Audit of procedural results

Pros of portfolios are that they can be personalized to cover all competencies relevant to the individual, can include outcomes, and can incorporate feedback from patients and peers. Potential disadvantages are that they can be time-consuming to compile, the process does not guarantee that reflection occurs, administration is difficult because each individual's portfolio must be examined, and there may be associated medico-legal risk. Dr. Hadiza Bawa-Garba was a pediatric registrar in the UK who was found guilty of manslaughter and struck off the medical register following the death of one of her patients, after training encounter notes made by one of her consultants were used by the prosecution. The erasure was subsequently overturned on appeal (38), and the GMC has since called for reflective statements to be legally privileged so that courts will not be able to compel doctors to produce them (39).

Credit accumulation

Credit-based systems developed from traditional continuing medical education lectures, with points accumulated on the basis of 1 notional hour of educational time. Each jurisdiction has developed its own method of accrediting points/credits and/or activity providers [e.g., AMA PRA Category 1 Credit™, European Union of Medical Specialists-European Accreditation Council for Continuing Medical Education (UEMS-EACCME®) points]. Ophthalmologists collect points/credits towards a pre-determined annual requirement.

Advantages of credit-based systems are that points are relatively easy to accumulate and the method is uncomplicated for regulators to review. Disadvantages are that activities are frequently of low educational value, time spent does not necessarily reflect learning or reflection, and many systems do not include review of outcomes.

Closed-book examination-based

The best example of examination-based recertification is Board Recertification in the USA. Advantages are that examinations verify knowledge, cover the full contemporary curriculum, and are the easiest form of revalidation to administer. Disadvantages are that the process is summative only and may not reflect the practitioner's individual practice profile or interests. Further, the American Board of Internal Medicine examinations were found not to reflect conditions seen in routine practice by general internists (40). The examination process is also criticized for allowing an industry to emerge around examination preparation.

The ABMS in 2019 recommended revision of the board certification process to replace ineffective strategies for recertification (i.e., infrequent high-stakes examinations) with meaningful strategies that strengthen professional self-regulation and simultaneously engender public trust (41).


Emerging models of performance assessment

Following the ABMS 2019 report (41), Boards are increasingly recognizing the need to combine both summative (assessment of lifetime learning) and formative (quiz) assessment. Procedural specialties, including the American Board of Surgery (42) and the American Board of Obstetrics and Gynecology (43) have begun developing processes for assessment of surgical performance. Emerging methods of performance assessment of practicing ophthalmologists include use of performance indicators, simulation and workplace-based assessments.

Monitoring using reliable performance indicators was developed for GPs (44), but can also be used to measure the performance of ophthalmologists. Performance indicators may monitor outcomes or processes. Examples include the proportion of procedures in which the posterior capsule is unintentionally damaged during cataract surgery (outcome), and the proportion of glaucoma patients receiving a full ophthalmic workup of intraocular pressure measurement, central corneal thickness measurement, gonioscopy, visual fields, and fundus assessment, with a target of 100% (process). A survey of hospital staff in the UK in 2015 found doctors were willing to engage with performance measurement and to manage and monitor their peers, but management approaches were thought to be insufficient (45).
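As a minimal sketch of how an outcome indicator of this kind might be operationalized, the snippet below computes a surgeon's posterior capsule rupture (PCR) rate and flags it against a benchmark using approximate 3-sigma binomial control limits, in the style of a funnel plot. The function name, the 2% benchmark, and the case numbers are all illustrative assumptions, not figures from the reviewed literature; real indicator schemes would also require case-mix/risk adjustment as discussed below.

```python
from math import sqrt

def pcr_indicator(complications: int, procedures: int,
                  benchmark: float = 0.02, z: float = 3.0) -> dict:
    """Flag one surgeon's PCR rate against a benchmark rate using
    approximate 3-sigma binomial control limits (funnel-plot style).
    The 2% benchmark and z = 3 threshold are illustrative only."""
    rate = complications / procedures
    # Binomial standard error at the benchmark rate for this caseload;
    # wider limits for smaller caseloads avoid flagging noise as outliers.
    se = sqrt(benchmark * (1 - benchmark) / procedures)
    upper = benchmark + z * se  # upper control limit
    return {"rate": rate, "upper_limit": upper, "outlier": rate > upper}

# Illustrative figures only: 9 PCR events in 250 cataract procedures
result = pcr_indicator(9, 250)
```

The caseload-dependent control limit is the key design point: a raw rate comparison would unfairly flag low-volume surgeons, whereas the funnel-plot approach widens the tolerance as the number of procedures falls.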

Concerns regarding the use of performance data include difficulty in distinguishing the performance of the individual from that of the system in which they work; for example, Kovács et al. (46) demonstrated in general practice that observed variability in performance between GPs arose partly from patients' demographic characteristics and education and from the location of the practice, with less than 20% of observed variance attributable to the GPs themselves. Further concerns include the need for risk stratification of patients, the risk of poor-quality data, the risk of attribution errors, and unintended consequences of using performance data, such as distortion of activity to meet clinical or financial targets. The public release of performance data is contentious. Finally, there is scant evidence that use of performance data has resulted in improvement in quality of care (47).

Simulation-based education is accepted in ophthalmology training programs, although the evidence supporting its adoption is weak (48,49); it is another potential method of assessing ophthalmologist performance in practice for recertification or revalidation (12,50). Simulation has also been suggested as a means of assessing the performance of aging surgeons, identifying those who may require remediation or retirement (51). Simulation can be offered in different modalities including standardized patients, mannequins, and theatre-based simulation. It is ideal for learning new surgical skills and demonstrating competency in simulated medical emergencies and surgical complications, but may also be used to develop and assess non-surgical skills. Simulation has the advantages of a safe learning environment and the ability to be customized to the ophthalmologist's learning needs and scope of practice.

Workplace assessment, similarly becoming useful in trainee assessment (52), is another potential tool for recertification or revalidation. Workplace assessment involves observing physicians as they perform their daily tasks, often over a period of time, and may be used to determine whether they meet the required level of competency and whether further training is required. It can be used to assess non-surgical skills as well as procedural skills. Evidence supporting its implementation is limited (53). Recognized disadvantages include the complexity of data, difficulty in standardization between raters, and its labor-intensive nature (54).


Assuring the public by detecting performance outliers

Alternative models of ensuring doctor competency focus on identifying underperforming outliers, who can then be remediated, rather than requiring the majority of competently performing practitioners to prove their fitness to practice.

One method of identifying underperforming doctors is through reporting mechanisms used by patients and peers, and/or litigation. Vexatious complaints in this setting are increasing and lead to considerable distress for practitioners; processes must be established to allow speedy dismissal of vexatious claims (55).

Another method involves using risk factors identified while practicing. These include recognized risk factors of age (1), solo practice, prescribing patterns, development of a medical disability, and failure on re-sitting recertification examinations (56). Similarly, risk factors identified during training (as opposed to during practice) can be a useful indicator: poor performance in communication skills in Canadian licensing examinations has been associated with later complaints to medical regulatory authorities (57).

Alternative methods of identifying doctors requiring remediation show promise, but come with significant challenges (58). Medical competence is not a dichotomy but, more probably, a spectrum. A nuanced approach is required that supports doctors early with excellent remediation practices rather than punishing them later in their careers. Remediation processes must be readily available, effective, and supported by a solid evidence base, which is currently lacking (59). Finally, if remediation is linked to license revalidation, great care will be needed to ensure timely natural justice for doctors identified using a risk/statistical approach.


Strengths and limitations of this review

A strength of this review is its novelty of approach. Weaknesses include single authorship, difficulty in locating original scientific studies of performance assessment (as opposed to opinion pieces), and differing and rapidly changing regulatory requirements across countries. Although many stakeholders are involved, revalidation and recertification inherently apply to individuals; interdisciplinary team performance and improvement is not assessed, although this might be the best measure of care delivery in health care systems.


Conclusions

Performance assessment and monitoring systems for practicing ophthalmologists are being adopted and revised worldwide, although the evidence regarding the various stakeholders and supporting the value of current mechanisms is weak. Current processes are high-stakes summative assessments, and the consequences of failure are punitive. Performance indicators, simulation, and workplace-based assessments are emerging as tools to assist in demonstrating competency, and alternative methods that reduce the burden on practicing doctors require consideration. Future rigorous studies demonstrating the value of current revalidation mechanisms, including cost-benefit analysis, are required.


Acknowledgments

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the Guest Editors (Drs. Karl Golnik, Yip Chee Chew, Gabriela Palis, and Meena Swaminathan) for the series “Improving Teaching Skills in Ophthalmology” published in Annals of Eye Science. The article has undergone external peer review.

Peer Review File: Available at https://aes.amegroups.com/article/view/10.21037/aes-23-21/prf

Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at https://aes.amegroups.com/article/view/10.21037/aes-23-21/coif). The series “Improving Teaching Skills in Ophthalmology” was commissioned by the editorial office without any funding or sponsorship. H.G.M. has received $20,000 grant from Retina Australia for a survey of people with inherited retinal disease in 2021, participated in advisory board for Luxturna launched in Australia from 2020 to 2022, and served as unpaid board member of Retina Australia and Vision 2020 Australia, which both ceased in 2023. The author has no other conflicts of interest to declare.

Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med 2005;142:260-73. [Crossref] [PubMed]
  2. Carter S. Government announces start of revalidation. BMJ 2012;345:e7092. [Crossref] [PubMed]
  3. Davis N, Davis D, Bloch R. Continuing medical education: AMEE Education Guide No 35. Med Teach 2008;30:652-66. [Crossref] [PubMed]
  4. Filipe HP, Silva ED, Stulting AA, et al. Continuing professional development: best practices. Middle East Afr J Ophthalmol 2014;21:134-41. [Crossref] [PubMed]
  5. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991;66:762-9. [Crossref] [PubMed]
  6. Woolliscroft JO, TenHaken J, Smith J, et al. Medical students' clinical self-assessments: comparisons with external measures of performance and the students' self-assessments of overall performance and effort. Acad Med 1993;68:285-94. [Crossref] [PubMed]
  7. Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. Acad Med 2005;80:S46-54. [Crossref] [PubMed]
  8. Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA 2006;296:1094-102. [Crossref] [PubMed]
  9. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. J Pers Soc Psychol 1999;77:1121-34. [Crossref] [PubMed]
  10. Sehlbach C, Govaerts MJ, Mitchell S, et al. Doctors on the move: a European case study on the key characteristics of national recertification systems. BMJ Open 2018;8:e019963. [Crossref] [PubMed]
  11. American Board of Ophthalmology. ABO Continuing Certification. Accessed 30 August 2023. Available online: https://abop.org/maintain-certification/
  12. Cordovani L, Wong A, Monteiro S. Maintenance of certification for practicing physicians: a review of current challenges and considerations. Can Med Educ J 2020;11:e70-80. [PubMed]
  13. Walshe K, Offen N. A very public failure: lessons for quality improvement in healthcare organisations from the Bristol Royal Infirmary. Qual Health Care 2001;10:250-6. [Crossref] [PubMed]
  14. Gunn J. Dr Harold Frederick Shipman: an enigma. Crim Behav Ment Health 2010;20:190-8. [Crossref] [PubMed]
  15. General Medical Council. Guidance for doctors: requirements for revalidation and maintaining your licence. Accessed 30 August 2023. Available online: https://www.gmc-uk.org/-/media/gmc-site/registration-and-licensing/guidance_for_doctors_requirements_for_revalidation_and_maintaining_your_licence.pdf
  16. Norcini J, Lipner R, Kimball H. The certification status of generalist physicians and the mortality of their patients after acute myocardial infarction. Acad Med 2001;76:S21-3. [Crossref] [PubMed]
  17. Brennan TA, Horwitz RI, Duffy FD, et al. The role of physician specialty board certification status in the quality movement. JAMA 2004;292:1038-43. [Crossref] [PubMed]
  18. Norcini JJ, Weng W, Boulet J, et al. Associations between initial American Board of Internal Medicine certification and maintenance of certification status of attending physicians and in-hospital mortality of patients with acute myocardial infarction or congestive heart failure: a retrospective cohort study of hospitalisations in Pennsylvania, USA. BMJ Open 2022;12:e055558. [Crossref] [PubMed]
  19. Xu T, Mehta A, Park A, et al. Association Between Board Certification, Maintenance of Certification, and Surgical Complications in the United States. Am J Med Qual 2019;34:545-52. [Crossref] [PubMed]
  20. Sheth BP, Schnabel SD, Comber BA, et al. Relationship Between the American Board of Ophthalmology Maintenance of Certification Program and Actions Against the Medical License. Am J Ophthalmol 2023;247:1-8. [Crossref] [PubMed]
  21. Ahluwalia SC, Damberg CL, Silverman M, et al. What Defines a High-Performing Health Care Delivery System: A Systematic Review. Jt Comm J Qual Patient Saf 2017;43:450-9. [Crossref] [PubMed]
  22. Kirkpatrick DL. Evaluating training programs: the four levels. San Francisco, CA: Berrett-Koehler; 1994.
  23. Lockyer J, Bursey F, Richardson D, et al. Competency-based medical education and continuing professional development: A conceptualization for change. Med Teach 2017;39:617-22. [Crossref] [PubMed]
  24. Kinnell HG. Serial homicide by doctors: Shipman in perspective. BMJ 2000;321:1594-7. [Crossref] [PubMed]
  25. Fitzpatrick M. Auditing deaths. Lancet 2003;362:586. [Crossref] [PubMed]
  26. Wiese A, Galvin E, Merrett C, et al. Doctors' attitudes to, beliefs about, and experiences of the regulation of professional competence: a scoping review protocol. Syst Rev 2019;8:213. [Crossref] [PubMed]
  27. Hopayian K, Sherifi J. GP appraisal: an evaluation of generational differences on the utility of GP appraisal. Educ Prim Care 2020;31:371-6. [Crossref] [PubMed]
  28. Gutacker N, Bloor K, Bojke C, et al. Does regulation increase the rate at which doctors leave practice? Analysis of routine hospital data in the English NHS following the introduction of medical revalidation. BMC Med 2019;17:33. [Crossref] [PubMed]
  29. Wiese A, Galvin E, O'Farrell J, et al. Doctors' maintenance of professional competence: a qualitative study informed by the theory of planned behaviour. BMC Health Serv Res 2021;21:419. [Crossref] [PubMed]
  30. General Medical Council. Colleague and patient feedback for revalidation. Accessed 6 September 2023. Available online: https://www.gmc-uk.org/registration-and-licensing/managing-your-registration/revalidation/revalidation-resources/collecting-colleague-and-patient-feedback-for-revalidation
  31. Tang S, Bowles A, Minns Lowe C. Peer Review Processes for Quality Improvement in Health Care Settings and Their Implications for Health Care Professionals: A Meta-Ethnography. J Contin Educ Health Prof 2022;42:115-24. [Crossref] [PubMed]
  32. Lalani M, Baines R, Bryce M, et al. Patient and public involvement in medical performance processes: A systematic review. Health Expect 2019;22:149-61. [Crossref] [PubMed]
  33. Woolf SH, Schoomaker H. Life Expectancy and Mortality Rates in the United States, 1959-2017. JAMA 2019;322:1996-2016. [Crossref] [PubMed]
  34. Tazzyman A, Bryce M, Ferguson J, et al. Reforming regulatory relationships: The impact of medical revalidation on doctors, employers, and the General Medical Council in the United Kingdom. Regul Gov 2019;13:593-608. [Crossref] [PubMed]
  35. Schwartz Z, Lieberman MR, Siegel DM. The evolving maintenance of certification process: update on the financial status of the medical boards. Dermatol Online J 2020;26:13030/qt93p2d3n2.
  36. Salter B. Governing UK medical performance: a struggle for policy dominance. Health Policy 2007;82:263-75. [Crossref] [PubMed]
  37. American Board of Ophthalmology. Getting Started. Accessed March 2023. Available online: https://abop.org/maintain-certification/getting-started/
  38. The British Medical Journal. The Bawa-Garba case. Accessed March 2023. Available online: https://www.bmj.com/bawa-garba
  39. Iacobucci G. New guidance on reflective practice to be published in wake of Bawa-Garba case. BMJ 2018;361:k2225. [Crossref] [PubMed]
  40. Gray B, Vandergrift J, Lipner RS, et al. Comparison of Content on the American Board of Internal Medicine Maintenance of Certification Examination With Conditions Seen in Practice by General Internists. JAMA 2017;317:2317-24. [Crossref] [PubMed]
  41. American Board of Medical Specialties. Continuing board certification: vision for the future commission – final report. Accessed March 2023. Available online: https://www.abms.org/wp-content/uploads/2020/11/commission_final_report_20190212.pdf
  42. Pradarelli JC, Pavuluri Quamme SR, Yee A, et al. Surgical coaching to achieve the ABMS vision for the future of continuing board certification. Am J Surg 2021;221:4-10. [Crossref] [PubMed]
  43. Orlando MS, Greenberg CC, Pavuluri Quamme SR, et al. Surgical coaching in obstetrics and gynecology: an evidence-based strategy to elevate surgical education and promote lifelong learning. Am J Obstet Gynecol 2022;227:51-6. [Crossref] [PubMed]
  44. Sibthorpe B, Gardner K. A conceptual framework for performance assessment in primary health care. Aust J Prim Health 2007;13:96-103. [Crossref]
  45. Trebble TM, Carder C, Paul M, et al. Determining doctors' views on performance measurement and management of their clinical practice. Future Hosp J 2015;2:166-70. [Crossref] [PubMed]
  46. Kovács N, Pálinkás A, Sipos V, et al. Factors Associated with Practice-Level Performance Indicators in Primary Health Care in Hungary: A Nationwide Cross-Sectional Study. Int J Environ Res Public Health 2019;16:3153. [Crossref] [PubMed]
  47. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008;148:111-23. [Crossref] [PubMed]
  48. Thomsen AS, Subhi Y, Kiilgaard JF, et al. Update on simulation-based surgical training and assessment in ophthalmology: a systematic review. Ophthalmology 2015;122:1111-1130.e1. [Crossref] [PubMed]
  49. Lee R, Raison N, Lau WY, et al. A systematic review of simulation-based training tools for technical and non-technical skills in ophthalmology. Eye (Lond) 2020;34:1737-59. [Crossref] [PubMed]
  50. Ross BK, Metzner J. Simulation for Maintenance of Certification. Surg Clin North Am 2015;95:893-905. [Crossref] [PubMed]
  51. Frazer A, Tanzer M. Hanging up the surgical cap: Assessing the competence of aging surgeons. World J Orthop 2021;12:234-45. [Crossref] [PubMed]
  52. The Tripartite Alliance. Work-based assessment: a practical guide. 2014. Accessed 3 September 2023. Available online: https://www.surgeons.org/-/media/Project/RACS/surgeons-org/files/becoming-a-surgeon-trainees/work-based-assessment-a-practical-guide.pdf?rev=64c62242e777411eb43be8ac781dfa4a&hash=DCEE633AC11B7EE63975DF1A6948C99A
  53. Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Acad Med 2021;96:S164-74. [Crossref] [PubMed]
  54. The American Board of Pediatrics. Workplace Assessment Summary. Summary Report. 2015. Accessed 3 September 2023. Available online: https://www.abp.org/sites/abp/files/pdf/fotc-workplace-assessments-summary.pdf
  55. Medical Board of Australia. New Code of Conduct: sections on vexatious complaints added. Accessed March 2023. Available online: https://avant.org.au/news/new-code-of-conduct-section-on-vexatious-complaints-added/
  56. Jones AT, Kopp JP, Malangoni MA. Recertification Exam Performance in General Surgery is Associated With Subsequent Loss of License Actions. Ann Surg 2020;272:1020-4. [Crossref] [PubMed]
  57. Tamblyn R, Abrahamowicz M, Dauphinee D, et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA 2007;298:993-1001. [Crossref] [PubMed]
  58. Price T, Archer J. UK Policy on Doctor Remediation: Trajectories and Challenges. J Contin Educ Health Prof 2017;37:207-11. [Crossref] [PubMed]
  59. Price T, Brennan N, Wong G, et al. Remediation programmes for practising doctors to restore patient safety: the RESTORE realist review. Southampton: NIHR Journals Library; 2021.
doi: 10.21037/aes-23-21
Cite this article as: Mack HG. Review: update on performance assessment and monitoring systems for ophthalmologists. Ann Eye Sci 2024;9:3.