I recently spoke with a third-year Physics student (integrated Masters) at the University of Leeds about her experience of an online graduate recruitment test process for a summer internship. She told me that she had passed the verbal and logical reasoning tests, but somehow did not pass the numerical reasoning test.
“How is that possible? You’re surely well ahead of the standard presented by that test?” I asked.
She explained that, despite scoring 81 in her second-year Maths 3 exam, she grew anxious because of the speed at which she had to work. Accustomed to an environment in which she takes great care to calculate extremely complex equations correctly, she found the idea of sacrificing attention for speed in a brief online test alien, largely because its purpose seemed unclear. She is not alone in this: a fourth-year integrated Masters student in Physics, also from the University of Leeds, failed this test as well.
To me, this does not point to an inherent flaw in the people taking the test, but rather raises a bigger question: do these online recruitment tests really allow the best candidates to progress?
In this article, I will explore the problems that I find with the online recruitment tests given by many companies to graduates applying for their positions, and offer a potential solution.
I will first look at the biases found in these tests, covering gender and cost. I will explore why an otherwise competent person might fail these exams, and what their score really says about the company: namely, its desire to set an unwanted and somewhat irrelevant precedent. I will also offer a personal perspective, and highlight how implausible the conclusions drawn about qualifying candidates through this medium can be. I will then look at the educational backgrounds of actuaries in the UK, and link this to the hiring process. Finally, I will offer a potential solution to the aforementioned issues.
Before moving on, I should mention that companies use various systems to administer their online tests, including SHL, Kenexa, Cubiks and Saville. The tests discussed in this article were administered by SHL, although those from providers such as SHL and Kenexa are known to be very similar.
Section 1: Gender Bias
Studies (Buck et al., 2002; Leaver and van Walbeek, 2006) have found that women typically perform worse than men in multiple-choice questions, while women tend to perform better than men at essay-style questions (du Plessis and du Plessis, 2009). However, online recruitment tests such as the SHL-administered test are only multiple-choice. While there have been papers written on bias in Graduate Record Examinations in the US (Miller and Stassun, 2014), I have not found any such study on the type of online recruitment tests that many major companies administer to their graduates. If nothing else, this article is a call for further studies on these tests, in an attempt to collate empirical evidence so as to clarify the matter of potential gender bias.
Section 2: Cost Bias
Most of the companies that administer these online tests do provide a free mock exam prior to the marked exam. To practise further, there are independent revision aids available online, starting from £30 and covering the numerical reasoning, verbal reasoning and logical reasoning tests.
An advantage is therefore available to those willing to pay for it, which creates a clear bias towards candidates with more disposable wealth. It is hard to lay this at the door of the hiring company, since the firms selling revision aids are independent of it; perhaps, though, the hiring company is complicit by inaction. There is a way to combat this, and I believe it important in order to level the playing field: offer multiple free practice tests on the company's own website, so that personal wealth is no longer part of the equation. It could be objected that further free tests would demean the value of the marked exam. However, one must first ascertain the true value of that exam before anything else.
Section 3: Setting an (unwanted) precedent
Often when a candidate completes their set of online tests, they will be provided with written feedback. The aforementioned third-year MPhys student has kindly shared her feedback, quoted below.
Your performance on this test was below average when compared to the comparison group you were compared against. This suggests that understanding of interpreting numerical data and mathematical calculations is likely to be an area of development for you, on the basis of your performance on this test.
Ideas to help improve your skills
You may be interested in things you can do to help improve these skills.
Developing your skills is something that requires considerable time and effort. As well as reviewing the practical tips below, think about the opportunities you have in your everyday life to challenge yourself in this skill area. How often do you deal with numerical data and mathematical calculations? How can you gain more exposure to this type of information?
This is the feedback presented to a student with mathematical knowledge that she will take forward to PhD study. Despite being on target for a first-class degree, she has been told that her performance was below average. Failing a test is not the problem in itself; rather, the company's insistence on setting its own standard borders on arrogance, especially if that standard excludes certain excellent candidates.
If these tests were truly rigorous and a good (or at least good enough, for the purposes of securing a job) measure of aptitude and potential, they would surely supersede the testing that students undergo in their SATs, GCSEs, A-levels or university entrance exams. By saying "you have to pass our test in order to be considered for a role", the company is effectively circumventing the very educational system on which some of its requirements are predicated.
This type of test (a timed multiple-choice exam) must be scrutinised further, because while the questions may be representative of the kind of work an actuary does, at least when first in training, the way they are presented is not. Each exam consists of roughly twenty questions with four or five answer choices per question, and a time limit that is only slightly longer (in minutes) than the number of questions. In working life, an actuary will have deadlines to meet, but nothing quite as strenuous as this test supposes. Nor will answers be presented as multiple choice; the actuary will have to work them out for him- or herself. The exams of the Institute and Faculty of Actuaries are not multiple choice – they are written exams of three hours in length, much like A-level mathematics or university exams. For a role that involves modelling the variables of complex scenarios, this type of multiple-choice exam seems a poor representation of an actuary's working life.
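To put that time pressure in concrete terms, here is a quick back-of-the-envelope calculation. The exact figures vary by provider; the numbers below are illustrative assumptions consistent with the description above (twenty questions, a 22-minute limit):

```python
# Illustrative figures only: the article describes roughly twenty questions
# with a time limit (in minutes) slightly longer than the question count.
questions = 20
time_limit_minutes = 22

# Average time available per question, in seconds.
seconds_per_question = time_limit_minutes * 60 / questions
print(f"{seconds_per_question:.0f} seconds per question")
```

Barely over a minute per question, for data-interpretation problems that in a university setting would be worked through carefully and checked.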
Section 4: Why excellent candidates fail such tests
The aforementioned third-year student became anxious in the test because of the time limit. For the past two years she has worked in an environment where complex calculations are computed over lengthy periods, so the time constraints of the online exam felt unnecessary to her. The questions in the numerical reasoning test were of a level she attained and surpassed some years ago; since then she has not dealt with problems involving interest rate calculations or currency exchanges. Had she been given time to revise these types of questions, she would no doubt have passed. But most companies require candidates to complete all of the exams (which can amount to two hours in total) within three days. Expecting a full-time Physics student to find the time for this on top of studies that are certainly extensive is quixotic.
Section 5: A personal perspective
As previously mentioned, I took the same online recruitment tests that the aforementioned students took. I took them three years ago, after graduating from the University of Leeds. I studied Music, and my mathematical expertise extends no further than A-level mathematics; when I took the online tests, I had not studied mathematics for three years. Yet I passed, and the two university-level Physics students did not. The company told those two students to improve in several areas, while my mathematical ability was deemed fine. That someone who has not done A-level mathematics for three years can pass an exam that university Physics students cannot is worrying, especially for a financial role. It highlights, at least to me, a major problem with the standard of the exam and with what exactly is being tested.
Section 6: The background of successful actuaries
For this article, we took a random sample of two hundred actuaries working throughout the UK, irrespective of location. Of this sample, 67% were men and 33% women. Only five of the actuaries sampled (2.5%) had a first degree in anything other than a STEM subject or economics. Yet graduate job requirements specify that a minimum of a 2:1 degree in any subject is sufficient to be considered. While it is recognised that a larger sample is needed to draw definite conclusions, with 97.5% of successful applicants holding first degrees in STEM subjects or economics, it is perhaps redundant to suggest that candidates with any degree need be considered. Further, if job descriptions specified that candidates with first degrees in STEM subjects or economics would be preferred, it might eradicate the need for a numerical reasoning test (or any such test) altogether.
Section 7: A possible solution
It is fair to say that firms that administer these tests are not presenting themselves in the best way to potential candidates. While candidates that pass the tests and move through the recruiting process may be happy to ultimately secure a job, a discerning candidate must consider why they are being treated as something of a commodity. In essence, the need for online assessments prior to any human contact takes the human element out of graduate recruitment – an element that is present in professional recruitment.
In professional recruitment processes, a candidate will usually receive a first phone call, and thereby get a chance to speak with an employee of the company. The human element is accounted for, and both the company and the candidate can gauge impressions from the call and decide whether they wish to continue. Companies treat professionals as such, but have a tendency to keep graduate candidates at arm's length. It is clear why: demand from candidates is high, while the supply of jobs is relatively low. Nevertheless, that is not how anyone wishes to be treated.
My suggestion is, firstly, to ensure that studies are carried out on graduate online tests to assess their validity and potential bias, and that the results are made publicly available. If the tests are found to be lacking – as I believe is likely – I would suggest abandoning them, and instead specifying exactly the type of candidate that will be considered. A sufficient quantitative background should be the standard for actuarial positions, not a 20-minute online test; after all, a three- or four-year degree course can undoubtedly assess potential and aptitude better than a short, timed test can. If a candidate has the requisite qualifications, or is studying a degree suited to their chosen career path, then I advise speaking with them. Gauge their character and motivation through human contact. The time investment will always be worth it, and will better promote the company's brand.
Section 8: Final thoughts
I have spoken with a lot of actuaries. Some are very easy to speak with, others are somewhat more introverted. Such is the wonderful diversity of humankind. I think it fair to say that, within the actuarial profession, to find someone that is technically exceptional and that can communicate this to a layperson in an easy manner is a special thing.
I can’t help but think that, had the company spoken with the candidate who has been my case study throughout this article, it would probably have seen just how much potential she has. The human element must remain for business success.
Leo Charlton
Consultant – Aston Charles
The post-script: Aston Charles is a specialist insurance recruitment company placing professionals across various fields (claims, broking and actuarial, to name three) in the UK and continental Europe. We believe that the key to lowering attrition rates and ensuring company growth lies in partnering with recruiters that you can trust, that understand your business, and that can identify the right people for you. We are confident in our ability to understand a company’s recruitment needs, and provide a consultative service to our clients for the duration of our working relationship. Should you wish to discuss how we can help you with your staffing requirements, please email us at info@astoncharles.co.uk. We look forward to speaking with you.