Refining Competency-Based Assessment

CPD Time: 4m

Published: 28 November 2018

Driven by the need for greater accountability and the ability to support students learning at their own pace, competency-based assessment and learning has been a feature of healthcare education for the past 50 years.

Gone are the days when simply completing a training course was considered enough to ensure proficiency. Today, clinical competence is given equal weight to academic knowledge, leaving many educators asking whether traditional forms of competency assessment are still the best means of verification.

Defining Competency

In the view of Franklin and Melville (2015), competence is:

“The ability to perform a work role to a defined standard, with reference to real working environments, that ideally includes a person's ability to demonstrate their cognitive knowledge, skills, behaviours and attitudes in any given situation.”

Nehrir et al. (2016) offer a simpler definition, suggesting that competency is:

“The ability to perform the task with desirable outcomes under the varied circumstances of the real world.”

However, they also point out several weaknesses in the language used around competency-based assessment. For example, definitions of competence often vary by profession and country, and whilst the terms ‘competence’ and ‘competency’ are often used interchangeably, ‘competency’ should strictly be used to describe a skill, whereas ‘competence’ is the practitioner’s ability to perform that skill.

Principles of Assessment

Assessment is an integral part of the learning process. No matter what form the assessment tool takes, it should always be measured against the intended learning outcomes and enable appropriate assessment of what has been understood.

To be effective, an assessment tool needs to:

  • Ensure alignment between the teaching aims and the intended learning outcomes;
  • Be reliable, fair and equitable, with marking criteria sufficiently robust to create reasonable parity between the judgements of different assessors;
  • Help the student to benchmark their current level of knowledge or skills, as well as identify areas for improvement and future learning;
  • Be appropriately timed within a given unit of study;
  • Be sufficiently challenging and rigorous to motivate further learning; and
  • Be efficient and manageable, taking account of resources such as the available time and space for the assessment to occur.

Criteria for Competency Assessment

Norcini et al. (2011) suggest that good assessment tools should meet criteria such as validity and coherence, reproducibility, consistency, feasibility and equivalence. They also suggest that perhaps one of the most important criteria of effective assessment is the catalytic effect: in other words, the ability of the assessment itself to enhance and support learning.

However, as they go on to point out, not all of the suggested criteria apply equally well to all situations. With this in mind, they recommend that the criteria should also take account of:

  • The perspectives of patients and the public;
  • The intimate relationship between assessment, feedback, and continued learning;
  • Systems of assessment; and
  • Accreditation systems.

Going Beyond a ‘One Size Fits All’ Form of Assessment

Taking a ‘one size fits all’ approach poses a significant limitation to competency-based assessment within the clinical environment. The key limitation is the failure to factor in the differing levels of a practitioner’s knowledge, skills and experience (Franklin and Melville 2015).

Wright (2005) reflects these concerns, making the point that no one’s job stays the same over time and suggesting that assessment should be a fluid and ongoing process. Not only should assessment help to identify the skills needed to do the job in the present, but assessment tools should also be able to adapt and respond to future changes as role requirements evolve.

Reflecting on this need for assessment tools to be both dynamic and responsive, Wright (2005) likens the assessment process to sailing: to move ahead in the desired direction, there must be both the willingness and the ability to adjust the sails.

Intra- and Inter-reliability

While competency-based assessment tools can provide useful key performance indicators, taking a ‘one size fits all’ approach can reduce the validity of the assessment.

In other words, by using the same tool for practitioners of different skill levels, or even for different members of a multidisciplinary team within a given clinical area, much of the potential value of competency assessment is lost.

As Franklin and Melville (2015) point out, a good assessment tool should be both intra-reliable, in that results can be reproduced by the same assessor, and inter-reliable, in that the same results can be reproduced by a different assessor.

They also warn that when competency assessments are estranged from the ‘real-life’ clinical environment, they inevitably become less valid and reliable.

Moving Away From ‘Tick-Box’ Assessments

Given the limitations of the tick-box approach that is so often a feature of competency-based assessments, Franklin and Melville (2015) strongly recommend moving to a ‘patient-centred’ competency model as a way of adding greater reliability and validity to the assessment process.

This, they argue, would allow for recognition of the knowledge, skills and experience of individual practitioners, as well as for the demonstration of ‘real-life’ competency.

Continuum of Time Versus a Snapshot of Time

Zasadny and Bull (2015) also argue that traditional forms of competency assessment are flawed by both ambiguity and inconsistency.

To counter this difficulty, many assessors favour measuring the level of competency over a ‘continuum of time’ rather than at a ‘snapshot in time’.

It’s a strategy that Franklin and Melville (2015) suggest also allows both the ‘art’ and the ‘science’ of the practitioner’s skills to be assessed. It’s also a practical way of ensuring that the assessment reflects ‘real-life’ clinical practice under a variety of circumstances.

Is it Time to Reform Competency-Based Assessment?

Is it time to reform the 50-year-old customs and traditions of competency-based training? Many educationalists would argue that yes, it is.

Whilst workplace-based learning and assessment tools are essential elements of all healthcare education programs (Ossenberg et al. 2016), it’s clear that more research is urgently needed into their use.

New approaches are now needed that embrace different levels of practitioner knowledge and experience, making this form of assessment more robust and relevant for modern-day use.

Alongside this is the ever-present challenge for educators and assessors to think creatively about how to adapt existing tools to maximise their relevance and reliability within a rapidly evolving healthcare system.

References

  • Franklin, N. and Melville, P. (2015) 'Competency assessment tools: An exploration of the pedagogical issues facing competency assessment for nurses in the clinical environment', Collegian, 22(1), pp. 25-31 [Online]. Available at: https://www.sciencedirect.com/science/article/pii/S1322769613001108 (Accessed: 12.11.18).
  • Nehrir, B., Vanaki, Z., Nouri, J. M. et al. (2016) 'Competency in Nursing Students: A Systematic Review', International Journal of Travel Medicine and Global Health, 4(1), pp. 3-11 [Online]. Available at: http://www.ijtmgh.com/article_33021_12a0e6bc3d69e52799218e3f83250786.pdf (Accessed: 15.11.18).
  • Norcini, J., Anderson, B., Bollela, V. et al. (2011) 'Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference', Medical Teacher, 33(2), pp. 206-214 [Online]. Available at: https://www.tandfonline.com/doi/abs/10.3109/0142159X.2011.551559?journalCode=imte20 (Accessed: 12.11.18).
  • Ossenberg, C., Dalton, M. and Henderson, A. (2016) 'Validation of the Australian Nursing Standards Assessment Tool (ANSAT): A pilot study', Nurse Education Today, 36, pp. 23-30 [Online]. Available at: https://www.sciencedirect.com/science/article/pii/S0260691715002828 (Accessed: 12.11.18).
  • Wright, D. K. (2005) The Ultimate Guide to Competency Assessment in Health Care, 3rd edn., USA: Creative Health Care Management.
  • Zasadny, M. F. and Bull, R. M. (2015) 'Assessing competence in undergraduate nursing students: The Amalgamated Students Assessment in Practice model', Nurse Education in Practice, 15(2), pp. 126-133 [Online]. Available at: https://www.sciencedirect.com/science/article/pii/S1471595315000049 (Accessed: 12.11.18).

Author

Anne Watkins
Anne is a freelance lecturer and medical writer at Mind Body Ink. She is a former midwife and nurse teacher with over 25 years’ experience working in the fields of healthcare, stress management and medical hypnosis. Her background includes working as a hospital midwife, Critical Care nurse, lecturer in Neonatal Intensive Care, and as a Clinical Nurse Specialist for a company making life support equipment. Anne has also studied many forms of complementary medicine and has extensive experience in the field of clinical hypnosis. She has a special interest in integrating complementary medicine into conventional healthcare settings and is currently an Associate Tutor, lecturing in Health Coaching and Medical Hypnosis at Exeter University in the UK. As a former Midwife, Anne has a natural passion for writing about fertility, pregnancy, birthing and baby care. Her recent publications include The Health Factor, Coach Yourself To Better Health and Positive Thinking For Kids. You can read more about her work at www.MindBodyInk.com.