I recently came through a stint of unemployment in the tech sector and think it would be helpful to share my experience. I'm in my 40s with a long track record of success in my industry and my role, my specific skill set is hard to find, and there are plenty of open positions being advertised for it, so initially I wasn't worried.

The first few companies where I applied, and thought I was an extremely good fit, rejected me immediately without even a conversation. At first I was bemused: "If they don't even want to interview a candidate who is an exact match for what they're looking for, it's their loss." But as the rejections kept coming, even from a company where I knew the CEO personally and had spoken as a keynote speaker at their annual conference seven years in a row, I became very concerned.

I talked to a hiring manager at a company I used to work for. He said he gets 1,000 applications a day for any open position, and practically every single resume is a perfect match for the job qualifications, which means that whom he chooses to interview is a lottery. But, he said, when he does actually interview the candidates, it's clear that many of them have vastly mischaracterized their skills and experience to match the job description, with no regard for the truth. And of course, because he's selecting the applicants who match perfectly, he's filtering out the more honest candidates who might have only 85% of the desired skills.

Eventually, I did get a generous offer from one of the first companies where I actually landed an interview, I suspect because once we were talking it became obvious that the skills and experience I claimed were genuine. But it took two months and certainly wasn't pleasant.
Why is the cover picture of this a guy wearing a Bowser backpack?
It’s Robin Hanson. I posted him because he advances the signalling theory of education and does funny things like this.
There needs to be a greater cost to making an application, preferably one related to the job itself. There was a time when Silicon Valley companies used programming-based bounty hunts to gatekeep applications; I'm not sure why they stopped. It's not practical for every job, but there are enough ways to work around this problem that I'm convinced companies either have a vested interest in keeping it like this or at least not enough of a motive to prompt a change.
Nice article; I just didn't get this one passage.
"It turns out the inter-rater reliability of supervisor ratings is 0.52, meaning that “latent job performance” only explains about 50% of the variation in supervisor ratings (no need to square as it is a latent factor, not a direct path"
Specifically, the square part.
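One way to unpack the "no squaring" point in the quoted passage, under the standard single-factor reading of inter-rater reliability (my gloss, not necessarily the author's own derivation): each standardized rating loads on latent performance with the same loading λ, so the correlation between two raters is already a squared path.

```latex
% Single-factor model: two supervisors rating the same worker (ratings standardized)
R_j = \lambda P + e_j, \qquad j = 1, 2
% Inter-rater reliability is the correlation between the two ratings:
\operatorname{corr}(R_1, R_2) = \lambda^2 = 0.52
% Variance in any one rating explained by latent performance:
\lambda^2 = 0.52 \approx 50\%, \qquad \lambda = \sqrt{0.52} \approx 0.72
```

So the reliability coefficient already equals the share of rating variance attributable to latent performance, which is presumably why no further squaring is applied.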
Your estimate of IQ validity is wrong. It's 0.65 for IQ alone and 0.78 when an integrity test is added.
Using only IQ for hiring would add about $2.6T/yr. to US productivity, given a 0.3 validity for current selection methods and productivity standard deviations (as a fraction of salary) ranging from 0.35 at $50k to 0.65 at $200k-$250k and 0.85 above $250k.
See: Schmidt & Oh.
A Rasch measure of intelligence is a measure of the difficulty of problems one can solve and of the probability that one can solve any given problem, and IQ and Rasch measures are convertible, so there is no way IQ has as low a validity for predicting productivity (0.2 to 0.3) as you claim.
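For context, the Rasch model the comment refers to puts person ability θ and item difficulty b on the same logit scale, so the probability of solving any given item follows directly; the claim that this scale converts to IQ is the commenter's, not something shown here.

```latex
% Rasch (one-parameter logistic) model: probability that a person with ability \theta
% solves an item of difficulty b
P(\text{correct} \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}
% Equivalently, the log-odds of solving the item is the ability-difficulty gap:
\log \frac{P}{1 - P} = \theta - b
```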
The case for signaling over human capital is not as strong as you make out here. Even the sheepskin effect can be explained with HC, as Nick Huntington-Klein has pointed out (see the sketch below):
“What schools actually *do* when they allow you to continue in your education is, effectively, measure what you’ve learned and see if it passes some minimum standard. If you don’t, you drop out. We end up with those failing the (lax) minimum populating the dropout years. They’ve learned little, so they earn little. In the final year, you see everyone who passes the minimum, whether they learned just enough or WAY MORE than enough. The final year contains a wide range of big learners, so on average there’s a big jump in earnings that year.
Intuitive, based on literal actions people perform, and a totally-HC explanation of sheepskin effects.”
I have written more about the limitations of Caplan’s *The Case Against Education* here:
https://infovores.substack.com/p/attention-caplanites-school-is-less
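To make the quoted mechanism concrete, here is a toy simulation (every parameter invented) of a world with no signaling at all: pay tracks accumulated learning, and the only thing school does is require a lax per-year learning minimum to continue. Mean earnings by highest year completed still jump at the final year, because the completion group mixes barely-adequate and very fast learners while each dropout year is selected for slow ones.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200_000     # students (made-up)
YEARS = 4       # program length (made-up)
MINIMUM = 0.6   # lax per-year learning minimum needed to advance (made-up)

ability = rng.lognormal(0.0, 0.5, size=N)   # persistent differences in learning speed
completed = np.zeros(N, dtype=int)          # highest year successfully completed
learning = np.zeros(N)
enrolled = np.ones(N, dtype=bool)

for year in range(1, YEARS + 1):
    gained = np.clip(ability + rng.normal(0, 0.3, size=N), 0, None)
    learning[enrolled] += gained[enrolled]   # you learn whether or not you pass
    passed = enrolled & (gained >= MINIMUM)
    completed[passed] = year
    enrolled = passed                        # missing the minimum means dropping out

# Pure human capital: pay tracks learning; the credential itself is worth nothing.
earnings = 30_000 + 15_000 * learning

for y in range(YEARS + 1):
    grp = completed == y
    print(f"completed {y} year(s): mean earnings ~ ${earnings[grp].mean():,.0f} (n={grp.sum():,})")
```

The per-year increments among dropouts are modest, then the final-year group shows a much larger jump, which is the sheepskin pattern without any signaling value attached to the diploma.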
Great post, as usual.
I think you are right on most things; however, I don't believe the AI argument. See this short, skeptical post (by a labor economist addressing the aforementioned paper): https://forklightning.substack.com/p/shorting-the-ai-jobs-apocalypse
Much of the AI evidence is confounded by the macro-environment. It is extremely unwise to make inferences like this during the 2019-to-now period, which saw a huge change in available capital (see the toy sketch below). Just look at the tech job market overall:
https://substack.com/@josephpolitano/note/c-153310337
Additionally, the returns to AI for tech workers seem highly contested within the field. I work in data, and there seems to be no consensus among data scientists and SWEs on the utility of LLMs.
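To make the confounding worry concrete, here is a toy example (every series invented) in which tech employment responds only to capital conditions, AI adoption merely trends upward over the same window, and a naive regression of employment on AI adoption still produces a large negative coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)

months = 72                      # stylized monthly index, roughly 2019-2024
t = np.arange(months)

# Latent macro driver: cheap capital early, sharp tightening later (invented shape).
capital = np.where(t < 36, 1.0, 1.0 - 0.05 * (t - 36))

# AI adoption simply trends up over the same window, independent of hiring decisions.
ai_adoption = t / months + rng.normal(0, 0.02, months)

# True model: tech employment responds to capital conditions only; the AI effect is ZERO.
employment = 100 + 20 * capital + rng.normal(0, 1.0, months)

# Naive regression of employment on AI adoption alone
naive = np.polyfit(ai_adoption, employment, 1)[0]

# Regression that also includes the capital series
X = np.column_stack([np.ones(months), ai_adoption, capital])
controlled = np.linalg.lstsq(X, employment, rcond=None)[0][1]

print(f"naive AI coefficient:    {naive:6.1f}  (looks like AI kills jobs)")
print(f"controlling for capital: {controlled:6.1f}  (true effect is 0 by construction)")
```

The point is not that AI has no effect, only that any series moving with the 2021-2023 capital cycle will look like it destroys tech jobs unless the macro driver is accounted for.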