Despite its widespread and frequent use, the term ‘soft skill’ is surprisingly hard to define.
As a bucket term, ‘soft skills’ refers to a wide range of social-emotional competencies, often presented in opposition to ‘hard skills’: the technical proficiencies required for specific tasks. The definition of soft skills has variously been linked to ideas such as Goleman’s Emotional Intelligence (EQ), social and situational agility, or simply an individual’s ‘character’ or ‘personality’. Given the difficulty of even defining what a soft skill is, it should come as no surprise that measuring these critical capabilities is far from straightforward.
Traditional Approaches to Soft Skill Measurement
Perhaps the most ubiquitous method of measuring soft skills is the employee survey. Whilst self-report surveys are a low-cost, low-effort method that can quickly generate large amounts of data, the value of that data is often limited; generally, you can expect to collect more noise than signal. Research suggests that much of the data from self-report surveys is unreliable: responses are easy to fake, and employees will often present themselves in a more favourable light than is actually the case. Even with the provision of anonymity to encourage honesty, the reliability of surveys remains questionable.
Whilst the variety of data sources inherent in 180- and 360-degree feedback may yield higher-quality results than surveys alone, the same fundamental problems apply. Honest responses are not guaranteed, as there is no way of eliminating malicious, inaccurate or biased perspectives. Furthermore, the process of collecting 360-degree feedback is time-consuming and laborious, and as such is ill-suited to regular, real-time assessment of soft skills.
The deeper limitation of both survey and 180/360-degree feedback approaches, however, is that the data they generate is entirely anecdotal and subjective. In a world where hard, quantifiable metrics such as sales numbers, client satisfaction and growth drive business decisions, such soft approaches are simply not reliable or objective enough to be of real value.
Machine Learning and Monitoring Software
Recently, machine-learning-driven monitoring software has entered the market as a proposed solution. These tools collect data on employee behaviour from a variety of sources: semantic monitoring and analysis of texts and emails, facial recognition and tracking through device cameras, and website browsing and social media activity. This data is fed into an algorithm that identifies behavioural patterns which can be correlated with other performance metrics. The result is a flow of data and analysis that is continuous, real-time, automatic and seemingly objective.
However, even machine-driven approaches are not free of bias. Developers may inadvertently build their own preconceptions into their algorithms, and the datasets on which those algorithms are trained may themselves be fundamentally flawed or skewed. For example, a 2018 MIT study of bias in facial recognition software found that maximum error rates when identifying ‘lighter-skinned males’ were 0.8%, rising to 34.7% for ‘darker-skinned females’. This stark disparity stemmed from the fact that the datasets on which the algorithms were trained consisted largely of lighter-skinned subjects. Furthermore, because the algorithms are self-learning, which data they weight most heavily can become obscure even to their own developers. Despite the objectivity you might associate with machine-learning-driven processes, they are often anything but objective.
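To make the dataset problem concrete, here is a minimal, hypothetical sketch (invented for illustration, not drawn from the MIT study or any real product): a simple one-dimensional threshold classifier is trained to minimise overall error on data where one group supplies 90% of the samples. The aggregate accuracy looks excellent, yet the error rate for the under-represented group is dramatically worse.

```python
# Toy illustration (hypothetical data): a classifier tuned to minimise OVERALL
# error on an imbalanced dataset can look accurate in aggregate while failing
# badly on the under-represented group.

# 100 samples as (feature, true_label, group): 90 from group A, 10 from group B.
data = (
    [(2.0, 1, "A")] * 45 + [(-1.0, 0, "A")] * 45 +   # majority group
    [(-2.0, 1, "B")] * 5 + [(-3.0, 0, "B")] * 5      # minority group
)

def errors(threshold):
    """Count misclassifications when predicting 1 iff feature > threshold."""
    return sum((x > threshold) != bool(y) for x, y, _ in data)

# "Train" by picking the threshold with the lowest total error on pooled data.
candidates = sorted({x for x, _, _ in data})
midpoints = [(a + b) / 2 for a, b in zip(candidates, candidates[1:])]
best = min(midpoints, key=errors)

# Per-group error rates under the globally "optimal" threshold.
rates = {}
for group in ("A", "B"):
    rows = [(x, y) for x, y, g in data if g == group]
    rates[group] = sum((x > best) != bool(y) for x, y in rows) / len(rows)
    print(f"group {group}: error rate {rates[group]:.0%}")
```

With 90 of the 100 samples coming from group A, the chosen threshold achieves 95% overall accuracy while misclassifying half of group B: exactly the kind of disparity that aggregate metrics conceal.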
So, what’s the solution?
USTEER is a new approach to soft skill development and measurement, born of our extensive experience of the challenges of tracking soft skill improvement. We set out to identify the function of the mind that actually regulates a person’s social-emotional interactions; we call this cognitive steering. We found that cognitive steering was not being effectively assessed by traditional surveys, so we designed a technology that could measure it. Our founder, Dr Simon Walker, also led the work that defined this as a uniquely human skill, which is why we refer to soft skills as human skills. We tested this technology rigorously in both business and education settings, with more than 70,000 participants over 10 years, in order to build a platform that is both robust and intuitive. We launched USTEER only when we were certain we had a solution that could genuinely help businesses today.
To find out more about how USTEER can help your organisation measure and develop crucial human soft skills, visit www.usteer.io/solution