
It is always a good idea to test a candidate’s intellectual ability, particularly when the job in question requires a certain amount of brainpower. But why? And if we do so, which tests should we use? And even more importantly, where do we draw the line?
I’d like to run you through three cases that illustrate the strategic importance of intelligence tests at three companies. You will better understand how intelligence works after reading this article – and hopefully apply intelligence testing during selection processes.

What exactly is intelligence?

To start with, let’s define what we mean by intelligence or cognitive capacity. Definitions of intelligence are wide-ranging:
“Intelligence is the performance level related to people of the same age.”
“Intelligence is the ability to apply knowledge and experience in problem solving.”
“Intelligence is a psychological trait with many different functions, such as the possibility to detect similarities and differences in observation processes, or to negotiate unfamiliar territory, or to reason, to make plans, to see through and resolve problems, to think in abstraction, to understand and conceive ideas and language, to register information in your memory and then reproduce it, to learn from experience.”
“Intelligence is that which is measured by an intelligence test.”

See here for more information about the General Intelligence Test.

Pieter Drenth (emeritus Professor of Psychology at the Vrije Universiteit Amsterdam) describes the notion of intelligence as follows: “Intelligence is a conglomeration of innate capabilities, processes and skills, such as:

  • abstract, logical and consistent reasoning powers,
  • the discovery, establishment and ability to see through cross connections,
  • problem solving,
  • discovering guidelines in seemingly disorderly material,
  • solving new tasks with existing know-how,
  • the ability to be agile and nimble in new situations,
  • the ability to learn independently without direct and comprehensive instruction.”

And I agree with Pieter’s definition. Intelligence is not simply one thing, but a mix of various things: the ability to perform a range of cognitive tasks well. Let me take sport as a metaphor. If you’re good at sport, you’re generally good at a number of sports.

A large employment agency has tested its account managers for decades. Functional analysis indicated that verbal and problem-solving cognitive skills mattered more than the ability to add and subtract. Thinking was concrete rather than abstract. The problems were not so difficult in themselves, but it was important to be able to sum up a situation quickly and under time pressure. The intelligence test therefore consisted of subtests that examined problem-solving capacities, plus verbal tests. The criterion was an overall score (G-score) at the 25% HBO level. This bar was maintained even in times of scarcity, when it was difficult to find suitable people. Candidates with solid social skills – but who came in below the 25% benchmark – were rejected.

This employment agency was the first in its segment to select strictly on the basis of intelligence. The agency won market share year in, year out, eventually becoming market leader in its segment. Noteworthy was the fact that market share grew in a shrinking market but much less in a growth market. The reason could be that a better average problem-solving ability relative to competitors played a more important role in tough market conditions than in a growth market. A logical conclusion, I’d say.

It works the same way with cognitive tasks. If you’re intelligent you can perform all kinds of cognitive tasks well – more so with similar cognitive tasks than with cognitive tasks that have less to do with one another. If you’re a good tennis player, you can probably handle a baseball bat pretty well too. If you’re a good runner, you’re probably good at long jumping too. But a single measure for intelligence overlooks the fact that there are various sorts of cognitive tasks.

Intelligence: a sample of all possible cognitive tasks

Broadly speaking, there is a correlation of approximately 0.50 between various intelligence tests. The results of one cognitive test thus explain about 25% of the variance in the results of another. That is more or less the case with all intelligence tests. Let’s assume there are a large number of different cognitive tasks. You could then see an intelligence test consisting of various subtests as a sample of all possible cognitive tasks. You can then ask yourself how large the sample of these tasks should be in order to arrive at an overall score. That is actually what an intelligence test tries to do: offer a series of different cognitive tasks and compare people’s scores in order to see how someone scores relative to a group of other people.
Back to my sports metaphor. You want to put together a general ‘sports’ test to determine the Sport Quotient (SQ), comparable to IQ. You take a sample of all sports and you develop a programme consisting of a ball sport, an athletics component and a few more items. The performance with regard to each sport is then compared to the performance of others. And the result is an SQ. But what does an SQ actually tell you?
Should you wish to build a team – let’s say a football team – it’s a good idea to take a general sports test first. So you do a kind of decathlon with 100 people and you select those who performed the best in the test. Generally speaking this is a much better approach than putting a team together at random. But would it be better to use a sports test that has been specifically designed for football? Probably!

Why do intelligence tests predict successful functioning?

Let’s consider the question of why intelligence is the best predictor of successful functioning, with a correlation of between .30 and .50. This relationship has been repeatedly demonstrated by scientific research (see for example Schmidt & Hunter, 1998). You can consider a job as a bundle of cognitive tasks, some of which are precise, others more diffuse. The 100 metre sprint is very precise. A football match is diffuse, because you have to do various things well to win: sprinting is but one of them. Most functions have diffuse tasks; you listen to a conversation and you need to extract the essence. You’re faced with a problem that you’ve not dealt with before – and it calls for a creative solution. You read this article and try to understand it and eventually translate it into your daily work. In addition, of course, a job entails more than simply cognitive tasks. Consider social tasks, or motor tasks that you can basically do on automatic pilot.

Functional analysis of ICT people indicated that all-round intelligence was the prime factor. At issue was the need to absorb new information quickly and distil the bigger picture from a number of difficult storylines. The assignment was not always crystal clear, so in order to draw the bigger picture it was essential to find solutions for the component parts. Precision, however, was key when executing the programming work. People had to work accurately for long consecutive hours, and tests in the form of flowcharts were incorporated in the test programme.
It was interesting that originally the bar was set at 60%, implying that 60% of the candidates were rejected early in the selection procedure due to their level of intelligence. That piled the pressure on the recruitment department. The ‘new hires’ followed a training programme of roughly a year. Evaluation showed that nobody dropped out and that everybody achieved good results. That led to a decision to reduce the norm to 40% relative to HBO level. And that percentage works well, even in a turbulent market. This criterion was simply better suited to the organisation, the function and the labour market.

A correlation of approximately .40 indicates that part of the performance results from intelligence, but only a part. Motivation, management, environment and social skills also have their place. Intelligence predicts performance because a function consists partially of cognitive tasks.

Is .40 a lot or a little?

Sometimes I hear that a correlation of .40 means that only 16% of the variation in performance can be explained by intelligence. And in the event of a correlation of .30, only 9%! A correlation needs to be squared to calculate the explained variance, and that explains why the number goes down so quickly. And they say – because this is so low – that you don’t have to take intelligence tests in staff selection so seriously. But is that the case? Isn’t it true that when an organisation performs 5% better than the competition – year in, year out – it has every reason to be very pleased with itself? I agree it may not seem a lot, but an explained variance of 10% or so is meaningful. And that’s why it is a good idea to select staff on the basis of intelligence.
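The squaring step above is easy to check yourself. A minimal sketch in Python, using the correlations quoted in this article (.30, .40 and .50):

```python
# Explained variance is simply the square of the correlation coefficient.
def explained_variance(r: float) -> float:
    """Share of performance variance accounted for by a predictor correlating r."""
    return r * r

for r in (0.30, 0.40, 0.50):
    # prints 9%, 16% and 25% respectively
    print(f"correlation {r:.2f} -> {explained_variance(r):.0%} of variance explained")
```

This is why the percentages drop so quickly: halving the correlation quarters the explained variance.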

Which intelligence test should you apply?

Let’s go back to our sport metaphor. Determine first of all who should play which sport. Is it a highly focused sport, or more diffuse? Is it a decathlon or one specific game? Is it a marathon or a sprint? In order to determine the ability test to be applied, it makes sense to examine the cognitive tasks that belong to the function. I always take the top 3 or top 5 cognitive tasks as my point of departure. An accountant, for example, has to process precise statistical material, draw conclusions, consult complex fiscal literature and so on. A software engineer has to understand the customer’s story and make it tangible, apply abstract analysis and carry out a series of cognitive tasks with precision. Selecting, from all available capacity tests, the ones best suited to this challenge is a tall order. But one way to do it is to ask yourself two questions:

  • How is the information made available? Is it in words, or in statistics, or in tables? A lawyer, for example, absorbs information primarily from the written or spoken word, an accountant through facts and figures.
  • Is the solution embedded in the material itself as a tangible element, or does a new opinion need to be formed? A sum may be numerically tangible, but completing a number series is numerically abstract.

Working in this way makes it relatively easy to establish a workable test battery. Four or five subtests – that in terms of material and level of abstraction are aligned with the cognitive tasks of a function – jointly provide a solid prediction of performance. It does not make sense to take many more tests, because they are likely to be less relevant. Moreover, the impact of a sixth test on the overall score is typically negligible.
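The diminishing impact of extra subtests can be illustrated with the classic formula for the validity of a composite of k parallel subtests. This is a sketch under simplifying assumptions, not a description of any specific battery: each pair of subtests is assumed to intercorrelate 0.5 (the figure quoted earlier), which under a single-factor model means each subtest loads √0.5 on the general factor.

```python
import math

# Assumption: subtests intercorrelate 0.5, so each loads sqrt(0.5) on the factor.
LOADING = math.sqrt(0.5)

def composite_validity(k: int, lam: float = LOADING) -> float:
    """Correlation of the summed score of k parallel subtests with the general factor."""
    inter = lam * lam  # implied correlation between any two subtests (here 0.5)
    return k * lam / math.sqrt(k + k * (k - 1) * inter)

for k in range(1, 7):
    print(f"{k} subtest(s): composite correlates {composite_validity(k):.3f} with g")
```

In this sketch, going from five to six subtests raises the composite’s correlation with the general factor by little more than 0.01 – which is exactly why a sixth test adds so little to the overall score.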

Back to intelligence theory

Spearman developed the notion of a G-factor, a general intelligence factor, with specific factors existing alongside it. This fits the correlations observed between scores on various intelligence tests. If you apply factor analysis to those correlations, one large factor takes shape that explains most of the test variance. In addition, other smaller factors materialize. Spearman called these S-factors, specific factors. They correspond with the type of test, as described above. The result is a verbal, numerical or graphic factor, although this depends on the tests applied and the traits of the group that has undergone them. Others contend that there are more independent factors that determine intelligence, such as ‘fluid’ intelligence and ‘crystallized’ intelligence. Fluid intelligence is used to address problems that you have not dealt with before, while crystallized intelligence pertains to the application of knowledge and experience. Most of the intelligence tests used in selection processes attempt to measure fluid intelligence – and that is logical. After all, you don’t want to measure the knowledge somebody already has or can apply; you want to know how creatively someone can absorb new information. There’s much more we could say on this subject, but I’ll leave it at this for now as far as our discussion of intelligence theory is concerned.
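How “one large factor takes shape” can be seen in a toy example – a sketch with made-up numbers, not data from any real test battery. Take four subtests that all intercorrelate 0.5 and look at the eigenvalues of their correlation matrix: one dominant component emerges, with the rest trailing far behind.

```python
import numpy as np

# Toy correlation matrix: four subtests, every pair correlating 0.5.
R = np.full((4, 4), 0.5)
np.fill_diagonal(R, 1.0)

# Sort eigenvalues from largest to smallest.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
share = eigenvalues[0] / eigenvalues.sum()

print("eigenvalues:", np.round(eigenvalues, 2))  # one large, three small
print(f"first factor explains {share:.1%} of the total variance")
```

Here the first component accounts for well over half the variance on its own – the G-factor – while the remaining small components play the role of Spearman’s S-factors.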

Where do we draw the line?

As already indicated, the relationship between intelligence and performance on the job is not one-to-one. People who score low on intelligence tests can still perform to general satisfaction. My point is simply this: the lower the score, the smaller the chance of this happening. You need to ask yourself at which score the chance of solid performance becomes acceptable. Would that be at an average HBO level, or should we be considering a high HBO level only? An ICT company can draw the line at an HBO level of 40%. A level of 25% was acceptable to the employment agency, because social skills there were considered key. But a top-notch firm of solicitors may set the bar at 75% of WO level. That implies, in short, that 75% of the graduates are rejected on the basis of intelligence alone! That is an enormous number. They can only justify that because almost all graduates want to work there. But not every organisation can bask in such splendour. So the line is drawn depending on the function, the company and the labour market. We’d be delighted to think along with you!

Conclusion: take a test!

If you truly wish to make a strategic difference in the market, then it is a good idea to apply intelligence tests or ability tests in the selection procedure. These can have an immense impact on the organisation and its market position.

Want to try the General Intelligence Test? Request a demo for the Test-Toolkit.