Rethinking self-assessment (Part 1)

Self-assessment is everywhere. It is the essential key to personal development, the underpinning rationale of curriculum development, the main indicator for measuring achievement, the political foundation of recognition, the clandestine enigma of accreditation.

abbreviation potpourri

Instruments are designed at high speed – from self-assessment forms to personal development plans, from self-perception inventories to competence improvement maps – resulting in a cacophony of abbreviations that seems only a little shy of setting new records.

A rigorous evaluation of these instruments – looking at aims, scopes and approaches as well as usage, usefulness and impact – is as much missing as a painstaking analysis of underlying frameworks and tacit assumptions.

It is clear already, however, that the entire assortment of self-assessment instruments fails to respond to some key questions, among them:

  • In the absence of quality standards, what do you measure yourself against?
  • In the absence of external expertise for validation, how exactly should recognition and accreditation come about?
high ambitions, little value?

Even leaving aside all political intentions and inconspicuous ambitions in relation to validation, recognition and accreditation, I have trouble finding value in any of these instruments for their most palpable purpose – self-assessment.

Take whichever you want – SAF, SPI, CIM, PDP – they all start from yourself as a trainer and educator. Not yourself as a trainer and educator in a particular project or context, but rather yourself as a trainer and educator in life. Through this inherent claim of being universally relevant and the resulting decontextualisation, the self-assessment process loses most of its value for me.

Universal? Impossible!

Let me pick three quandaries to exemplify and justify my defiance:

Firstly, this approach implies that there is a potentially agreeable set of competences for non-formal educators. It assumes that there is a combination of knowledge, skills and attitudes that, once mastered, makes for a non-formal educator of tolerable, decent or outstanding quality.

Secondly, this approach implies that there is a universally acceptable scale along which any set of competences could and should be measured. It assumes that there is a common understanding of what it means to be moderately or exceptionally competent or incompetent in a specific area.

Thirdly, this approach implies that educators are generally aware of what specific competences entail before they have fully mastered them. It assumes that there is sufficient understanding of knowledge, skills and attitudes required to achieve basic or advanced levels of proficiency.

crumbling assumptions

Research can prove what common sense and practical experience tell us: none of this is true, none of these assumptions hold, they crumble at first sight. And yet we continue to invent and re-invent self-assessment tools, defeated before we start by their envisaged universality…

What then, you ask, might a useful self-assessment instrument look like?

A very good question indeed :)

I will gladly take on the challenge to develop some ideas for alternative tools in the second part of this mini-series, but let’s first leave some time for your questions and ideas, your criticism and feedback. Fire away!


3 responses to “Rethinking self-assessment (Part 1)”

  1. Paul Kloosterman

    Took me some time to come up with a reaction to this. But still the first, it seems! My problem was that I didn’t get excited when reading this, not like ‘yes! this helps in further thinking’.

    My first problem is the picture that is painted. ‘Instruments are developed at high speed’. I don’t want to go into a discussion about the definition of ‘high speed’, but the first time I worked in a training course with a Self Perception Inventory and a Personal Development Plan was in 2001/2002, so 8-9 years ago. Since that time it has become a kind of common practice to use these instruments in training courses for/of trainers. In these years both the Personal Development Plan and the Self Perception Inventory were changed and improved, and new versions came out, some more complex and some simpler. They were discussed a lot amongst trainers and with participants. I think they even got better! Since the beginning a need has been expressed for other and more instruments that could help people to assess themselves. ‘Not only instruments where you have to fill in a form!’ New tools were developed, but already existing methods were also found. The field still expresses a need for more instruments, especially after the introduction of the Youthpass in the Youth in Action programme.
    Not so much ‘high speed’ for me, anyway.

    ‘A rigorous evaluation of these instruments… is as much missing….’ A wonderful sentence, but not very respectful to those people (trainers and participants) who have been working with these instruments and have been evaluating their experiences in order to improve the tools. I will be the first one to support further analysis, evaluation and research on this, but the last one to say that only analysis and research reports guarantee improvement of quality.
    ‘The absence of quality standards’. If that means quality standards agreed on in the European educational field, signed and stamped, I agree. They are absent. But I would not wait for that agreement to start (and continue) developing better self-assessment instruments. We are lucky in our field that many people in different constellations have been thinking about quality standards. The good thing is that they even wrote their thoughts down. It’s no problem at all to offer participants in training courses material which gives them the possibility to assess themselves against quality criteria. It’s not that the field has no clue what the job is about.
    ‘Research can prove what common sense and practical experience tell us: none of this is true, none of these assumptions hold, they crumble at first sight.’
    First of all, I get a bit scared of research that can prove that ‘none of this is true’. Especially when this is written after three statements following an assumption that I find quite doubtful.
    I would rather be in favour of research that joins the forces of those people who would like to assist learners in the best possible way to take responsibility for their own learning.
    So… it’s above all the first part of your last sentence that I’m supporting: taking on the challenge to develop some ideas for alternative tools!

  2. Andreas Karsten

    Yes indeed, we have worked with self-assessment instruments in European youth training for roughly ten years – as trainers and participants, as learners and evaluators. In all those ten years, I have never felt like ‘yes! this helps in further thinking’ either…

    I have spent much time and many discussions trying to figure out why that is so. Not only have I found that I am not alone in doubting the usefulness of the current self-assessment strategy as a learner, but also that I am not alone in doubting its usefulness as a trainer and educator.

    Disrespect? Please.
    Bringing these discussions out in the open is anything but a sign of disrespect ‘to those people (trainers and participants) who have been working with these instruments and have been evaluating their experiences in order to improve the tools’. I don’t want my intentions to be misunderstood or my thoughts to be misconstrued that way.

    That a rigorous evaluation of these instruments is still missing – after ten years of educational practice and several educational evaluations taking a critical stance on the self-assessment approach – that is a sign of disrespect.

    Complementarity of embedded and external analysis
    It’s not about analysis and research being the only tools that guarantee quality, but it is their absence that delays discussion and hinders development. There is a huge difference between trainers and participants involved in particular educational processes working on improving a specific instrument’s usability, and a more general approach that reviews the entire range of instruments and tests them for consistency, verifiability and rigour.

    Obviously, not much would happen if trainers and participants chose to ignore and push aside self-assessment instruments, and did not work on improving them for their practice. But that is not the element of further development that is currently absent, as you rightly pointed out.

    It would, in other words, be a most welcome sign of respect for all the work done if a study were commissioned – similar to Fennes/Otten and Ohana/Otten, possibly – which, at long last, picks up some of the many open questions around self-assessment.

    Time versus speed
    I recognise that some of the instruments have been around for some time, and quite a few of them were ‘discussed a lot amongst trainers and with participants.’

    But somehow there is never enough time for this, is there? The development of these instruments is always embedded in the work of a team, or in the context of a session… I have myself been part of too many discussions that had to be cut off – and I have had to cut off some myself.

    Not once has there been enough time to engage in sufficiently long discussions around self-assessment – discussions not exclusively aimed at further developing one specific instrument, and leading to questions or conclusions that were written up and shared widely.

    Quality standards
    I agree that the field has some clues as to what the job is about, and some good writing exists on quality standards, in particular for European youth trainers. They are not agreed yet, and there is plenty of discussion around what particular standards may or may not mean, but yes: enough material to provide context.

    Scary research
    Hehe, the sentence sounds a bit scary when taken out of context:

    Research can prove what common sense and practical experience tell us: none of this is true, none of these assumptions hold, they crumble at first sight.

    But within its context, it’s unfortunately not wrong:

    Firstly, there is research showing that knowledge, skills and attitudes are not enough to capture the competence of a person, for example in the framework of the OECD’s DeSeCo Project – in other words: a trainer with a perfectly acceptable set of knowledge, skills and attitudes may still be a crappy trainer, even though the self-assessment indicates otherwise.

    Secondly, there are books full of discussions around the dilemma of numerical scales: while the scales suggest that the difference between levels 8 and 9 is the same for everyone, the individual judgments of respondents are miles apart – in other words: five trainers who assess their competence in the area of writing at 8 (very good) will very likely display pretty diverse writing skills.

    Thirdly, there is evidence already in the evaluations of our own courses in our own field that educators and learners are often not aware of what a specific competence entails when doing their self-assessment – in other words: the more people learn about a competence, the more self-critical and sceptical their assessment tends to become.

    New instruments
    Yes, indeed, what a challenge! I hope to find the time soon to polish some of my ideas around alternative instruments…

  3. Quote by Peter Drucker

    “Knowledge has to be improved, challenged, and increased constantly, or it vanishes.” – Peter Drucker