Your resume scores higher when you use the same AI as the employer's screening tool.

A paper from Jiannan Xu, Gujie Li, and Jane Yi Jiang documents what's happening. When LLMs screen resumes, they pick ones they wrote themselves 67% to 82% of the time. Candidates using the same LLM as the evaluator are 23% to 60% more likely to make the shortlist. Business roles like sales and accounting show the worst gaps.

An employer uses GPT-4 to screen applications. A candidate who used GPT-4 to write their resume has a massive advantage over someone who wrote theirs by hand. Workday, Greenhouse, and iCIMS are already building generative AI into their screening pipelines. The models powering these systems are the same ones candidates use to polish applications.

It's a feedback loop. The evaluator recognizes its own patterns and rewards them. Hiring managers see a high match score. They have no way to know if it reflects actual qualifications or just stylistic alignment.
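A toy sketch of that mechanism, with everything invented for illustration: the evaluator is stood in for by a fixed set of stylistic phrases it tends to generate, and the "match score" rewards resumes that happen to share them. Two equally qualified candidates then score differently based purely on which tool wrote the text.

```python
# Hypothetical illustration, not the paper's method: EVALUATOR_STYLE stands in
# for the phrasing an evaluator LLM tends to produce itself. A resume drafted
# with the same model shares those phrases; a hand-written one may not.
EVALUATOR_STYLE = {"spearheaded", "leveraged", "cross-functional",
                   "results-driven", "streamlined"}

def match_score(resume_words, qualifications):
    # Invented scoring rule: base qualification score plus a bonus
    # for each evaluator-style phrase the resume contains.
    style_overlap = len(set(resume_words) & EVALUATOR_STYLE)
    return qualifications + 2 * style_overlap

same_model_resume = ["spearheaded", "leveraged", "cross-functional", "python"]
hand_written_resume = ["built", "maintained", "shipped", "python"]

# Identical qualifications (10), different writing tools.
print(match_score(same_model_resume, 10))    # 16: 3 style phrases matched
print(match_score(hand_written_resume, 10))  # 10: no stylistic alignment
```

The point of the sketch is that the score gap is produced entirely by stylistic alignment, which is exactly what a hiring manager looking at the final number cannot see.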

Interventions targeting how LLMs recognize their own output can cut the algorithmic bias by more than half. But companies have to actually implement these fixes, and most don't know the problem exists.

Hiring is becoming a test of which AI you used, not what skills you have.