If you have spent any time reading resume advice online, you have almost certainly encountered a handful of statistics that show up everywhere. "75% of resumes are rejected by ATS." "Recruiters spend 6 seconds on your resume." "There are 250 applicants per job posting." These numbers get passed from blog to blog, cited by career coaches, repeated in LinkedIn posts, and printed on the marketing pages of resume optimization tools.
We wanted to know where these numbers actually come from. So we tried to trace each one back to its original source. In most cases, we could not find one. The statistics that underpin an entire industry of resume advice are, in many cases, unsourced, misquoted, or extrapolated far beyond what the original data supports.
Rather than simply pointing this out and moving on, we decided to do something more useful: test the claims against real data. We analyzed 58 ATS score artifacts from Ajusta's production scoring engine, 24 complete before-and-after optimization cycles, 48 job descriptions, and 22 base resumes. The dataset is not enormous, and we will be upfront about that. But it is real production data from real people applying to real jobs, and every number in this article can be traced back to a specific, reproducible calculation.
Here is what we found when we held the industry's favorite statistics up to actual data.
All figures in this article come from Ajusta's production ATS scoring engine (deterministic-v2-semantic scorer). The dataset includes 58 score artifacts, 24 before/after optimization pairs, 48 job descriptions across multiple industries, and 22 distinct base resumes. Scores are computed from five weighted components: keywords (40%), skills (25%), education (15%), experience (10%), and contextual fit (10%).
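For readers who want the mechanics, here is a minimal sketch of how a weighted five-component score like this can be computed. The component names and weights come from the article; the function itself is our illustration, not Ajusta's production code.

```python
# Weights from the scoring model described above.
WEIGHTS = {
    "keywords": 0.40,
    "skills": 0.25,
    "education": 0.15,
    "experience": 0.10,
    "contextual": 0.10,
}

def ats_score(components: dict[str, float]) -> float:
    """Combine per-component scores (each 0-100) into a weighted total."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# A resume sitting at our dataset's pre-optimization component means:
print(round(ats_score({
    "keywords": 26.1, "skills": 31.0, "education": 75.1,
    "experience": 81.6, "contextual": 72.2,
}), 1))  # 44.8 -- close to the article's pre-optimization mean of 46.2
```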
We are transparent about the sample size because we think honesty about limitations is more valuable than inflated authority. These are real numbers from real users, not projections.
Stat 1: "75% of resumes are rejected by ATS"
"Up to 75% of resumes are rejected by ATS before a human ever sees them." Sometimes cited as 70%, sometimes 80%, occasionally 98%. Often attributed to vague references like "a 2013 study" or "industry research."
Where does this number come from?
We spent a considerable amount of time trying to trace this statistic to its origin. We could not find one. The number appears to have entered circulation through blog posts by resume service companies and career coaching sites, where it was cited without a primary source. Over time, it got repeated so often that it acquired an air of established fact. Some articles attribute it to "a study," but the study itself does not appear to exist in any academic database or industry report we could locate.
The closest thing to a real source we found was a Jobvite survey that discussed recruiter screening rates, but that survey measured human behavior, not ATS filtering. The 75% figure appears to be what researchers sometimes call a "zombie statistic": a number with no traceable origin that persists because it is useful to the people who repeat it.
What our data actually shows
Across our 58 score artifacts, 77.6% of resume-job combinations scored below 75 out of 100. And 29.3% scored below 50. The median pre-optimization score was 39.
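These distribution figures are simple to reproduce from a list of scores. A minimal sketch (the scores passed in below are a toy stand-in for the 58 real values, which we are not publishing individually):

```python
from statistics import median

def distribution_summary(scores: list[float]) -> dict[str, float]:
    """Share of scores under each threshold, plus the median."""
    n = len(scores)
    return {
        "below_75": sum(s < 75 for s in scores) / n,  # 77.6% in our data
        "below_50": sum(s < 50 for s in scores) / n,  # 29.3% in our data
        "median": median(scores),                     # 39 in our data
    }

print(distribution_summary([25, 37, 39, 46, 51, 70, 74, 88]))  # toy input
```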
[Chart: Score distribution across 58 resume-job pairs]
So the surface-level number is not far off. Most resumes do score poorly against their target jobs before optimization. But there is a critical problem with how the original claim is framed: it uses the word "rejected."
ATS systems do not operate on a binary pass/fail basis. They rank candidates on a continuous score. A resume that scores 39 out of 100 is not "rejected" in the way the statistic implies. It is ranked lower than a resume that scores 75. Whether a recruiter ever sees it depends on how many applicants there are, what the other scores look like, and how far down the list the recruiter scrolls. There is no universal cutoff line.
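To make the ranking-versus-rejection distinction concrete, here is a toy sketch. The cutoff is not a property of the ATS; it emerges from applicant volume and how deep the recruiter reads (all names and numbers below are hypothetical):

```python
# Hypothetical applicant pool: (candidate, ATS score out of 100).
applicants = [("A", 39), ("B", 75), ("C", 62), ("D", 88), ("E", 51)]

# The ATS ranks candidates on a continuous score; it does not reject.
ranked = sorted(applicants, key=lambda pair: pair[1], reverse=True)

# Whether the 39 is ever seen depends on how far down the list
# the recruiter scrolls, not on any universal cutoff.
REVIEW_DEPTH = 3
print([name for name, _ in ranked[:REVIEW_DEPTH]])  # ['D', 'B', 'C']
```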
The underlying concern is real: most unoptimized resumes score poorly against their target jobs. Our median pre-optimization score of 39/100 confirms that. But the specific number is fabricated, and the framing is misleading. ATS systems rank; they do not "reject." Calling it a 75% rejection rate misrepresents how these systems work.
Stat 2: "Recruiters spend 6-7 seconds on a resume"
"Recruiters spend an average of 6-7 seconds looking at a resume before deciding whether to move forward." Variants say 6 seconds, 7 seconds, or 7.4 seconds.
Where does this number come from?
This one actually has a real source: an eye-tracking study commissioned by TheLadders, a job search platform. The study was published in 2018 and involved 30 recruiters whose eye movements were tracked while reviewing resumes. The 7.4-second figure specifically referred to the initial scan, not the total time spent reviewing a resume.
There are a few things worth noting about this study that rarely get mentioned when the statistic is cited. First, the sample size was 30 people. That is not nothing, but it is not the large-scale industry finding it is usually presented as. Second, the study was commissioned by a job search platform as a marketing piece, not published in a peer-reviewed journal. Third, and most importantly, the 7.4 seconds measured the initial scan phase. The study itself acknowledged that recruiters who found a resume interesting would spend considerably more time on it.
The career advice industry took a narrow finding from a small marketing study and turned it into a universal law.
What our data adds to this
We cannot directly test recruiter scan times because our data measures ATS scoring, not human reading behavior. But our data does illuminate a related question that matters more: what determines whether a recruiter sees your resume at all?
In our dataset, the average pre-optimization resume scored 46.2 out of 100 (median: 39). After optimization, the average score rose to 73.6 (median: 74). The resumes that score 39 are the ones sitting at the bottom of the ATS ranking. The ones scoring 74 are near the top. If a recruiter only reviews the top 20 candidates, the 6-second scan question is irrelevant for the resumes that never surface.
The study is real but limited (n=30, marketing-commissioned, initial scan only). The bigger issue is that the statistic distracts from the more important question. Before worrying about how long a recruiter looks at your resume, you need to worry about whether your resume ranks high enough to be seen at all.
Stat 3: "The average job posting receives 250 applications"
"The average corporate job posting receives 250 applications." Sometimes presented as "250 resumes per opening" or "250+ applicants compete for each role."
Where does this number come from?
Glassdoor has cited a figure in this range, and some HR publications have reported similar averages. LinkedIn's data suggests 150-200 for popular postings. The problem is not that the number is fabricated. The problem is that it is a meaningless average of a wildly variable distribution.
A niche engineering role at a mid-size company in a specific city might receive 25 applicants. A remote entry-level marketing coordinator role at a recognizable brand might receive 1,400 or more. Averaging these together produces a number that describes neither situation accurately.
What our data suggests
We do not track application volumes per posting, so we cannot directly verify or challenge this number. But our job description data offers one relevant observation. Across our 48 job descriptions, the number of extracted keywords per posting ranged from 11 to 59, with a mean of 30.9. The jobs with more extracted requirements tend to be more specialized roles, and more specialized roles generally attract fewer, more targeted applicants. The jobs with fewer requirements tend to be more general, and general roles attract more volume.
The "250" figure treats all jobs as interchangeable. They are not. A more honest framing would be: application volume varies enormously depending on role specificity, company visibility, location, and whether the position allows remote work. Presenting a single average as if it applies to your specific situation is not useful.
Not clearly fabricated, but misleading as presented. The real range is roughly 25 to 1,400+ depending on the role. A single average collapses that variation into a number that describes no one's actual experience.
Stat 4: "Keywords are all that matters for ATS"
"ATS systems are basically keyword matchers. If you don't have the right keywords, nothing else matters." This is not a single statistic but a persistent industry narrative that shapes most resume optimization advice.
Where does this narrative come from?
This belief is driven partly by how early ATS systems worked (crude keyword matching was a bigger factor in older systems) and partly by commercial incentive. Keyword matching is the easiest thing for a resume optimization tool to sell. "We found 12 missing keywords" feels concrete and actionable. "Your skills alignment is low" is harder to package.
What our data actually shows
This is where our data is most revealing, and the answer is more nuanced than either side of the debate usually admits.
Our scoring engine weights five components: keywords (40%), skills (25%), education (15%), experience (10%), and contextual fit (10%). Keywords carry the largest single weight. But when we looked at what pre-optimization resumes actually earn from each component, the picture flipped.
[Chart: Where pre-optimization scores actually come from]
In 22 pre-optimization resumes, keywords contributed only 18.3% of the total earned score. The other four components delivered 81.7%.
Why the discrepancy between the 40% weight and the 18.3% contribution? Because most people's resumes already do reasonably well on experience, education, and contextual factors. Those components have high baseline scores (experience mean: 81.6, education mean: 75.1, contextual mean: 72.2). Keywords and skills are where people score poorly (keyword mean: 26.1, skills mean: 31.0). The weight is high, but the performance is low, so the actual contribution to the total score is small.
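The gap between weight and contribution falls straight out of the arithmetic. Here is a sketch using the component means above; note that the article's 18.3% figure averages the keyword share resume by resume, which gives a slightly different number than computing one share from pooled means, as this sketch does:

```python
WEIGHTS = {"keywords": 0.40, "skills": 0.25, "education": 0.15,
           "experience": 0.10, "contextual": 0.10}
MEANS = {"keywords": 26.1, "skills": 31.0, "education": 75.1,
         "experience": 81.6, "contextual": 72.2}

# Points each component actually contributes to the earned total.
earned = {c: WEIGHTS[c] * MEANS[c] for c in WEIGHTS}
total = sum(earned.values())  # ~44.8

for c in WEIGHTS:
    print(f"{c}: weight {WEIGHTS[c]:.0%}, share of earned score "
          f"{earned[c] / total:.1%}")
# Keywords come out around 23% of earned points here despite the 40%
# weight -- low performance shrinks the actual contribution.
```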
Now here is the flip side, and it is important. When we looked at which component was the primary driver of low scores, keywords won in every single case.
[Chart: Primary deficit driver for resumes scoring below 50]
Among 15 pre-optimization resumes that scored below 50, keywords were the single largest source of lost points in 100% of cases.
On average, keywords accounted for 29.6 points of lost score (52.7% of the total deficit), followed by skills at 17.9 points (32.0%). Experience, education, and contextual factors together accounted for only 15.3% of the deficit.
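"Deficit" here means the weighted points a component leaves on the table relative to a perfect 100. A minimal sketch of how the primary deficit driver can be identified (illustrative, not the production implementation):

```python
WEIGHTS = {"keywords": 0.40, "skills": 0.25, "education": 0.15,
           "experience": 0.10, "contextual": 0.10}

def deficits(components: dict[str, float]) -> dict[str, float]:
    """Weighted points lost per component, relative to a perfect 100."""
    return {c: WEIGHTS[c] * (100 - score) for c, score in components.items()}

# Example: a resume with zero keyword matches but solid experience.
d = deficits({"keywords": 0, "skills": 31.0, "education": 75.1,
              "experience": 81.6, "contextual": 72.2})
print(max(d, key=d.get))  # 'keywords' -- the primary deficit driver
```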
Eight out of 22 pre-optimization resumes had a keyword score of literally zero. No keyword matches at all. These resumes still scored between 25 and 37 overall, carried entirely by their experience, education, and contextual scores. That tells you keywords are not "all that matters." But it also tells you that without keywords, you are starting from a very low ceiling.
The analogy that felt most honest to us: keywords are not the whole car, but they are the engine. If the engine is missing, the quality of your transmission, brakes, and tires does not get you very far. But if you only fix the engine and ignore everything else, you still will not win the race.
Keywords are not "all that matters," but they are the single biggest failure mode. In our data, they were the primary deficit driver in 100% of low-scoring resumes and accounted for 52.7% of total lost points. The narrative is reductive, but it is not baseless. A more accurate statement would be: "Keywords are the most common reason resumes score poorly, but they are one of five factors that determine your final score."
Stat 5: "ATS rejects resumes because of formatting"
"Your resume can be rejected by ATS due to bad formatting. Avoid tables, columns, graphics, and unusual fonts." Sometimes paired with specific warnings about headers, footers, or text boxes.
Where does this come from?
This is industry folk wisdom that has some historical basis. Older ATS systems, particularly those from the early 2010s, used OCR-style parsing that genuinely struggled with complex layouts, tables, and graphics. If the system could not read your resume, it could not score it. Resume template companies and "ATS-friendly format" services built entire businesses around this fear.
What our data shows
Our five-component scoring model has no formatting component. Scores are determined by keywords (40%), skills (25%), education (15%), experience (10%), and contextual fit (10%). Across 58 score artifacts, formatting is never cited as a factor in the score or as a reason for a low score. The system evaluates content alignment, not visual presentation.
In our optimization plans, the system classifies each section of a resume as either "eligible for modification" or "protected." On average, 56% of resume sections are protected and left untouched during optimization. The changes that actually move the score are content changes: adding missing keywords, aligning skill descriptions, matching terminology to the job posting. Not reformatting.
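To illustrate the eligible/protected split, a plan might look something like the toy structure below. The section names and the dictionary format are our invention for illustration, not the actual plan schema:

```python
# Hypothetical optimization plan: section -> modification status.
plan = {
    "summary": "eligible",
    "skills": "eligible",
    "experience": "eligible",
    "education": "protected",
    "certifications": "protected",
    "contact": "protected",
    "awards": "protected",
}

protected = sum(status == "protected" for status in plan.values())
print(f"{protected / len(plan):.0%} of sections protected")  # 57% here
```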
There is an important distinction that the industry often blurs: the difference between parseability and scorability. If an ATS cannot parse your resume at all (because it is an image-only PDF or uses encoding the parser does not support), then yes, you have a formatting problem. But once the content is successfully parsed, formatting has no effect on scoring. These are two different things, and conflating them leads to people spending hours on template selection when their real problem is content alignment.
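The parseability half of this is cheap to check yourself. A minimal sketch using the pypdf library (assuming a PDF resume; an image-only PDF will extract little or no text):

```python
from pypdf import PdfReader  # pip install pypdf

def is_parseable(pdf_path: str, min_chars: int = 200) -> bool:
    """Rough check: can a text-layer parser actually read this resume?"""
    reader = PdfReader(pdf_path)
    text = "".join(page.extract_text() or "" for page in reader.pages)
    return len(text.strip()) >= min_chars

# False -> you have a parseability (formatting) problem.
# True  -> your score is about content alignment, not formatting.
print(is_parseable("resume.pdf"))
```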
Mostly a myth for modern ATS. Formatting affects whether the system can read your resume (parseability), but once parsed, it has zero effect on scoring (scorability). The industry conflates these two things, leading candidates to obsess over templates when their actual problem is content mismatch.
What optimization actually looks like in the data
Having debunked or qualified the industry's favorite statistics, we want to show what actually happens when a resume goes through a data-driven optimization process. Not in theory, but in our 24 measured before-and-after pairs.
The average improvement was 27.4 points. The largest single improvement was 49 points (from 25 to 74). The smallest was 3 points (from 70 to 73, a resume that was already scoring well).
Where did the improvement come from? Almost entirely from two components:
[Chart: Average component change after optimization (24 pairs)]
Keywords improved by an average of 61 points. Skills improved by 20. Experience, education, and contextual fit barely moved, because they were already high. The optimization added an average of 18.7 keywords per resume and made 8.5 content changes.
After optimization, 62% of the resumes that had started below 50 crossed that threshold, and 71% crossed the 70 mark. The average processing time was 16.4 seconds per resume.
What we think this means
The resume advice industry has a sourcing problem. Statistics get invented, misquoted, or stretched beyond recognition, and then they circulate until nobody questions them anymore. This is not unique to the resume space (health and fitness advice has the same issue), but it does real harm when it leads people to focus on the wrong things during a job search.
Based on our data, here is what we think job seekers should actually focus on:
Keyword alignment is the single biggest lever.
Not keyword stuffing, but genuine alignment between the terminology in your resume and the terminology in the job posting. In our data, 36% of pre-optimization resumes had zero keyword matches. That is an enormous gap that no amount of formatting, template selection, or clever phrasing can overcome. We dug deeper into this in our analysis of 830 keywords from 48 job descriptions, where we found that 71% of keywords are unique to a single posting.
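A quick way to gauge this yourself is a crude token overlap between the posting and your resume. Real keyword extraction is more sophisticated than this, but the sketch shows the idea:

```python
import re

STOPWORDS = {"and", "or", "the", "a", "an", "to", "of", "in", "with", "for"}

def terms(text: str) -> set[str]:
    """Lowercased word tokens, minus trivial stopwords."""
    return {w for w in re.findall(r"[a-z][a-z+#.]*", text.lower())
            if w not in STOPWORDS}

def keyword_overlap(resume: str, posting: str) -> float:
    """Share of the posting's terms that also appear in the resume."""
    wanted = terms(posting)
    return len(wanted & terms(resume)) / len(wanted) if wanted else 0.0

print(keyword_overlap("Python developer with AWS experience",
                      "Seeking Python engineer, AWS and Docker required"))
# 0.33 -- two of the posting's six distinctive terms appear in the resume
```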
Skills matter almost as much as keywords.
Skills were the second-largest deficit driver at 32% of total lost points. And our data shows that the most commonly missing skills are soft skills like "adaptability," "entrepreneurial spirit," and "collaboration," not technical skills. Many candidates never think to include these.
Do not spend hours on formatting.
As long as your resume is parseable (plain text or straightforward PDF, no image-only pages, standard encoding), the format is not what is holding you back. Content alignment is.
Be skeptical of any resume advice that cites a statistic without linking to the original source.
If someone tells you "75% of resumes are rejected by ATS," ask them where that number comes from. If they cannot point you to a specific study with a methodology section, treat the claim with appropriate caution.
Our dataset is 58 score artifacts, 24 optimization pairs, 48 job descriptions, and 22 base resumes. That is enough to identify patterns, but it is not a 10,000-resume study. We also acknowledge that our scoring model is one implementation of ATS scoring, not a universal standard. Different ATS platforms weight components differently.
We chose to publish this analysis anyway because we believe transparent, small-sample data is more valuable than unsourced large-number claims. Every figure in this article is traceable to a specific calculation on a specific dataset. If you want to challenge any of these numbers, you can, and we think that is how it should work.
Want to see how your resume actually scores?
Ajusta's scoring engine uses the same five-component model described in this article. See your real breakdown, not a vague percentage.
Check your ATS score