So, I applied for a new job a few weeks back and was sent some tests meant to measure cognitive skills. I think I did okay on most of them, except for one that claimed to test logical reasoning by giving a 3x3 matrix of shapes and figures where you had to fill in one blank space. I'm pretty sure this one test destroyed me: out of 12 questions I could only discern a single pattern (and even then, I wasn't too sure). This was for an accounting job, of all things.
Now, I've never been particularly good at these sorts of abstract visual puzzles, but what exactly is this kind of test meant to measure? I'm going to chalk this job up as a loss, but I'm a little wary that this test managed to destroy me so thoroughly. I've never seen anything like it in a job application setting before, so I'm actually kind of concerned about how these kinds of tests get used, since it seems so removed from what the job actually entails.
Doing those 3x3 puzzles and tangrams is not a great measure of anything other than your ability to do those puzzles.
Hi! I'm going to put on my Psych Nerd hat.
It sounds like you're describing something like this: [image: an example Raven's-style 3x3 matrix puzzle]
This sort of thing is based on an IQ test from the 1930s called Raven's Progressive Matrices.
When used in a clinical or academic setting, they're useful because they're a form of IQ test that is nonverbal, fairly culturally agnostic, and shows good reliability and validity. ("Reliability" here means that they produce consistent results over time, and "validity" means that they correspond well with other IQ tests.) They test for visual pattern recognition, and they seem to correlate well with general fluid intelligence.
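To put numbers on those two words: both are usually reported as correlation coefficients. Here's a minimal sketch in Python; all the scores below are invented purely for illustration, and real studies use large, standardized samples.

```python
# Minimal sketch: "reliability" and "validity" as correlation coefficients.
# All scores are hypothetical, invented purely for illustration.
import numpy as np

# The same six (imaginary) people take the RPM twice, a month apart.
rpm_first  = np.array([42, 55, 38, 60, 47, 51])
rpm_retest = np.array([44, 53, 40, 58, 49, 50])

# The same people also take a different, established IQ test.
other_iq = np.array([101, 118, 95, 124, 108, 112])

# Test-retest reliability: does the test give consistent results over time?
reliability = np.corrcoef(rpm_first, rpm_retest)[0, 1]

# Convergent validity: does it agree with other tests of the same construct?
validity = np.corrcoef(rpm_first, other_iq)[0, 1]

print(f"test-retest reliability r = {reliability:.2f}")
print(f"convergent validity    r = {validity:.2f}")
```

Correlations close to 1.0 on both counts are what people mean when they call the RPM reliable and valid.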
However, before you kick yourself for getting stymied by an IQ test, there are some things you should know. First off, the "progressive" in the name Raven's Progressive Matrices means that you're supposed to start from simple puzzles and progress to harder ones. The progressive nature of the test is a bit like the difficulty curve in a video game: even if you are generally capable of tackling the higher difficulty levels, it still helps to ease into them if you're flexing mental muscles you don't use very often. Throwing 12 questions at you cold (and it sounds like they may have been the harder ones) is dirty pool.
In fact, one of the criticisms of the RPM is that cultural exposure to the RPM, and to visual puzzles similar to it (the card game "Set", for example), has trained us to get better at it. In other words, the RPM isn't testing general fluid intelligence any more; it's just testing how good you are at visual puzzles like the RPM. It's tautological.
If that sounds familiar, it might be because a very common general criticism of IQ tests is that they aren't actually testing anything other than your ability to do IQ tests. Does your performance on the RPM mean anything at all about how good an accountant you are? Does your performance on any IQ test mean anything at all about how good an accountant you are? ¯\_(ツ)_/¯
As psych nerds go, most fall somewhere on a spectrum from 1 to 7, where 1 is "General intelligence is real and IQ tests are good at measuring it!" and 7 is "General intelligence is a superstition and IQ tests are just measuring your aptitude with IQ tests." Speaking just for myself, I'm a little closer to the 1 position, but weakly; maybe I'm a 2.5 or 3. But we shouldn't pretend any of this is anything less than deeply controversial. Entire tomes have been written by psychologists arguing each side.
Now I'm taking off my psych nerd hat and putting on my corporate troublemaker hat.
Charitably speaking, some HR departments use these tests because they think they're a good way to overcome implicit biases in hiring. I think that's a laudable goal. I can only speculate what this particular HR department's motivation was, but if that was their goal, throwing 12 RPM questions at you was an absurdly clumsy way to go about it.
But if I'm being uncharitable, I think some companies use brain teasers like this as a form of negging. They want you to feel stupid, because then you'll be less likely to negotiate for a higher salary. Is anybody sitting around rubbing their hands together like a Scooby Doo villain explicitly thinking that? Probably not. But they might have noticed - even subconsciously - that candidates come out of these tests a little more submissive and a little less aggressive in negotiations, which incentivizes them to keep using them.
the "no true scotch man" fallacy.
A lot of it is because the Big Four are constantly flooded with applications for their grad schemes, so they need some metric to narrow them down.
Then the small and mid-tier firms ape the practice, because for a lot of them the first approach to doing anything is to consider what the Big Four do.
I just thought it was because accountants generally hate doing interviews, and HR managers often don't know what to ask or what to test applicants with.
Yeah, I got the impression that this was more of a "weed out" sort of thing. It was a federal government job that likely has far more applicants than positions, so using an exceptionally hard but technically scientifically supported metric can chop off 90%+ of the applicants right off the bat in a defensible way, I suppose. Sucks, but whatever.
(full disclosure: I'm a chartered accountant)
It depends on the nature of the job within the sector, to be fair. Bookkeeping is not a major part of the profession anymore - there are still bookkeepers, but they're generally not expected to be accountants. For example, I haven't had to prepare double-entry journals for about four years.
In audit or any of the adjacent disciplines (e.g. tax, forensic accountancy), a lot of what you spend your time doing is analysing the outputs of other people's poorly documented processes and systems to understand the result, and/or analysing the system or process itself. It's probably not as far removed from programming as you think (and, indeed, many accountants are heavy users of Python and R for analytics - the firm where I work has an R user group). A sketch of the sort of check I mean follows below.
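To make that concrete, here's a minimal sketch in Python with pandas; the journal data and column names are invented for illustration.

```python
# Hypothetical sketch: scanning someone else's journal output for entries
# that don't balance. Data and column names are invented for illustration.
import pandas as pd

journal = pd.DataFrame({
    "entry_id": [1001, 1001, 1002, 1002, 1003, 1003],
    "account":  ["Cash", "Revenue", "Inventory", "Payables", "Cash", "Rent"],
    "debit":    [500.0, 0.0, 250.0, 0.0, 0.0, 310.0],
    "credit":   [0.0, 500.0, 0.0, 250.0, 300.0, 0.0],
})

# In double-entry bookkeeping, the debits and credits within each journal
# entry must net to zero; anything else needs investigating.
sums = journal.groupby("entry_id")[["debit", "credit"]].sum()
sums["net"] = sums["debit"] - sums["credit"]
unbalanced = sums[sums["net"] != 0]

print("Entries that don't balance:")
print(unbalanced)
```

Most of the job is less about writing the script and more about working out why entry 1003 is out by 10 in the first place.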
In private practice and the corporate and public sectors there's an expectation that accountants are general purpose business consultants and advisors, so a large part of the job is problem solving and options appraisal.
Amusingly enough, I've been a senior programmer for over a decade (as a contractor), and I'm looking to go into accounting mostly for the stability of earnings. While I can appreciate the sideways measurement of looking for details in chaos, looking for the underlying meaning is the most important way for me to tease issues out of a data set (whether code or a set of financials). Abstracting all of that into a set of nonsense shapes and patterns pretty much removes all referential meaning from the situation. I guess I'm just not an abstract thinker.
I really wouldn't be too hard on yourself. Jumping into the difficult end of the RPM is like starting Portal on the last puzzle. You need the earlier puzzles to warm up your brain and get used to thinking in a certain way.
the "no true scotch man" fallacy.
I've done the reverse - you're trading frustration for boredom.
Good trade if you're looking for stability.