Refining Competency-Based Assessment
Published: 28 November 2018
Driven by the need for greater accountability and the ability to support students learning at their own pace, competency-based assessment and learning have been features of healthcare education for the past 50 years.
Gone are the days when simply completing a training course was considered enough to ensure proficiency. Today, clinical competence is given equal weight alongside academic knowledge, leaving many educators asking whether traditional forms of competency assessment are still the best way to verify knowledge.
In the view of Franklin and Melville (2015) competence is:
“The ability to perform a work role to a defined standard, with reference to real working environments, that ideally includes a person's ability to demonstrate their cognitive knowledge, skills, behaviours and attitudes in any given situation.”
Nehrir et al. (2016), offer a simpler definition, suggesting that competency is:
“The ability to perform the task with desirable outcomes under the varied circumstances of the real world.”
However, they also point out several weaknesses in the use of language around competency-based assessment. For example, definitions of competence often vary by profession and country, and whilst the terms competence and competency are often used interchangeably, ‘competency’ should strictly be used to describe a skill, whereas ‘competence’ is the practitioner’s ability to perform that skill.
Assessment is an integral part of the learning process. No matter what form the assessment tool takes, it should always be measured against intended learning outcomes and enable appropriate assessment of what has been understood.
To be effective, an assessment tool needs to meet several criteria.
Norcini et al. (2011) suggest that good assessment tools should demonstrate validity and coherence, reproducibility, consistency, feasibility and equivalence. They also suggest that perhaps the most important criterion of effective assessment is the catalytic effect: the ability of the assessment itself to enhance and support learning.
However, as they go on to point out, not all of the suggested criteria apply equally well to all situations, and with this in mind they recommend that the criteria also take account of the context in which the assessment is used.
Taking a ‘one-size-fits-all’ approach poses a significant limitation to competency-based assessment within the clinical environment, the key limitation being its failure to factor in a practitioner’s differing levels of knowledge, skills and experience (Franklin and Melville 2015).
Wright (2005) echoes these concerns, making the point that no one’s job stays the same over time and suggesting that assessment should be a fluid, ongoing process. Not only should assessment help to identify the skills needed to do the job in the present; assessment tools should also be able to adapt and respond as role requirements evolve over time.
Reflecting on this need for assessment tools to be both dynamic and responsive, Wright (2005) likens the assessment process to sailing: to move ahead in the desired direction, there must be both the willingness and the ability to adjust the sails.
While competency-based assessment tools can provide useful key performance indicators, taking a ‘one-size-fits-all’ approach can reduce the validity of the assessment.
In other words, by using the same tool for practitioners of different skill levels, or even for different members of a multidisciplinary team within a given clinical area, much of the potential value of competency assessment is lost.
As Franklin and Melville (2015) point out, a good assessment tool should be both intra-reliable, in that results can be reproduced by the same assessor, and inter-reliable, in that the same results can be reproduced by a different assessor.
They also warn that when competency assessments are estranged from the ‘real-life’ clinical environment they inevitably become less valid and reliable.
Given the limitations of the tick-box approach so often a feature of competency-based assessments, Franklin and Melville (2015) strongly recommend moving to a ‘patient-centred’ competency model as a way of adding greater reliability and validity to the assessment process.
This, they argue, would allow for recognition of the knowledge, skills and experience of individual practitioners, as well as demonstrating ‘real-life’ competency.
Zasadny and Bull (2015) also argue that traditional forms of competency assessment are flawed by both ambiguity and inconsistency.
To counter this difficulty, many assessors favour measuring competency over a ‘continuum-of-time’ rather than at a ‘snapshot-in-time’.
Franklin and Melville (2015) suggest this strategy allows both the ‘art’ and the ‘science’ of the practitioner’s skills to be assessed. It is also a practical way of ensuring that the assessment reflects ‘real-life’ clinical practice under a variety of circumstances.
Is it time to reform the 50-year-old customs and traditions of competency-based training? Many educationalists would argue that yes, it is.
Whilst workplace-based learning and assessment tools are essential elements of all healthcare education programs (Ossenberg et al. 2016), it’s clear that more research is urgently needed into their use.
New approaches are now needed that embrace different levels of practitioner knowledge and experience, making this form of assessment more robust and relevant for modern-day use.
Alongside this is the ever-present challenge for educators and assessors to think creatively about how to adapt existing tools to maximise their relevance and reliability within a rapidly evolving healthcare system.