Future Trends in Competency-Based Assessment
Published: 18 August 2019
Competency-based assessment tools have long been accepted as the standard for assessing a practitioner’s ability to perform tasks safely and well.
Yet despite their popularity, there are growing concerns about their suitability to demonstrate ‘real competency’ in unsupervised clinical practice (Franklin and Melville 2015).
For many, the question that now needs to be asked is ‘what should the competency model look like moving forward?’
Wright (2005) suggests that all assessment validation methods should embrace three common domains of learning.
Wright also highlights the importance of having a competency model that not only has clearly defined principles but that is also responsive, dynamic and adaptable to an evolving healthcare system. Specifically, Wright (2005) recommends that:
Yet despite the popularity of this model, Deere (2017) flags current research that reveals a gap in understanding regarding the intended proper application of the Wright model and asks if educators need to think more broadly about how competency is assessed.
Harden (2002), like Wright (2005), also offers a succinct summary of the key criteria needed for effective outcomes based learning. They include:
Whilst recognising the value that traditional competency-based assessment has to offer, Harden (2007) also argues for a movement away from a process-based approach, focused on teaching and learning methods, towards a model where the emphasis falls on the learning outcomes of the educational experience. In other words, focusing on what students learn, and making sure it is learnt well, should be of greater importance than when and how students learn.
Franklin and Melville (2015) are also among a growing number of educators who suggest that competency assessment tools run a serious risk of being little more than a ‘quick-fix’ means of assessment.
To remedy this, they suggest an alternative approach that moves away from ‘tick-box’ assessments towards a more ‘patient-centred’ competency model. In their view, this approach not only increases the reliability and validity of competency assessments but also allows for greater recognition of the knowledge, skills and experience of individual practitioners, and demonstrates ‘real-life’ competency.
In recent years, there has been a significant move away from the use of a traditional syllabus towards outcome-based education (OBE), yet significant challenges remain. For example, although it is relatively easy to make learning outcomes explicit, the use of specified outcomes as a basis for decisions about curriculum design and development is often ignored.
In particular, Harden (2007) outlines three patterns of behaviour that can limit the potential of competency based assessment:
In asking the question ‘are educators fulfilling the promise of competency-based education?’, Touchie and Ten Cate (2015) suggest that using entrustable professional activities (EPAs) could be one way to overcome some of the challenges and weaknesses inherent in traditional competency-based training.
Entrustable professional activities have been described as ‘units of professional practice, to be entrusted to the unsupervised execution by a trainee once he or she has attained sufficient specific competence’ (Ross 2015). As Ten Cate (2013) comments, EPAs are now gaining popularity as a way of facilitating the transition from theory into practice by offering an added level of flexibility based on the level of supervision required by trainees.
In general terms, competencies are descriptors of practitioners, whereas EPAs are descriptors of work. As Ten Cate (2013) explains, EPAs are not designed to be an alternative for competencies but simply offer a way of translating competencies into clinical practice.
To use EPAs as a valid mode of assessment, entrustment decisions need to be made that involve clinical skills and abilities, as well as more general aspects of competence such as the practitioner’s ability to acknowledge their limitations and to ask for help when needed.
Making entrustment decisions for unsupervised practice requires observed proficiency, usually on multiple occasions.
In practice, entrustment decisions are affected by four groups of variables:
Entrustment decisions can occur either on an ad hoc basis during routine clinical care, or in a more structured format involving a formal acknowledgement that the student has passed a threshold that allows for decreased supervision.
Of course, it could be argued that all modes of assessment have their downsides, and that is true of EPAs as well. The terminology surrounding entrustable activities is still relatively new, and most of the current literature remains descriptive and uncritically enthusiastic. But as Ross (2015) reminds us, EPAs are just another way of describing a ‘curriculum’, which means there is a vast amount of literature to draw on.
Perhaps a more significant disadvantage is that individual learners and clinical teachers will be tempted to focus on the achievement of specified competencies and EPAs to the exclusion of everything else. The emphasis may even be placed on the achievement of minimal competence rather than on excellence.
Another risk that is particular to the use of EPAs is that of inadequate supervision and the potential negative impact this may have on patient safety. That said, the EPA model offers clinical educators a valuable new way of thinking about what students learn and how they learn it. As with all assessment tools, it should be used thoughtfully, with care, and with a full appreciation of its strengths as well as its limitations (Ross 2015).