
Can text bullets be biased?

The deepest patterns of bias are only revealed through big data

Recently I published an analysis of 1,100 tech resumes and how men and women choose to present themselves. Men’s resumes are shorter, make heavier use of bulleted lists, and include less personal detail, while women’s resumes tend to include more narrative content and longer prose. Even where men and women have similar professional backgrounds, their resumes appear quite different.

I was struck by how strongly the patterns of gendered language in resumes correspond to the patterns that Textio has found in job descriptions. How you write your job listing changes the gender mix of candidates who will apply for your job — and the same patterns that men and women use in their resumes show up in how they evaluate your job listing.

Bias in tech is a hot topic right now, and there is a growing body of research and products to help tech leaders who are concerned about it. They all work a little differently, but most of what’s out there is based on qualitative research.

The best qualitative research on gender bias in tech hiring has identified about a hundred common words in tech job ads that show male bias, meaning that job listings containing those words may not attract many women to apply. This research has done a lot to raise awareness, and it’s a great start.

But that’s what it is: a start. At some point it’s not enough to posit that men like the word “ninja” and women prefer to talk about “collaboration.” If we’re going to build a product that offers guidance on something as important as bias, we need stronger quantitative data to back it up.

Many factors go into how we calculate gender bias at Textio. Because our approach is statistical and based on large data sets, we identify not just dozens but many thousands of specific phrases that contribute to gendered response patterns. But beyond specific word choice, there are important structural elements that matter too.
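The post doesn’t spell out Textio’s actual method, but as a rough illustration of how a statistical approach differs from a hand-curated word list, one could score each phrase by how disproportionately it appears in listings that drew a male-skewed applicant pool versus a female-skewed one. The function and data below are hypothetical, not Textio’s implementation:

```python
from collections import Counter
from math import log

def phrase_log_odds(male_skew_docs, female_skew_docs):
    """Score phrases by how much more often they appear in listings
    whose applicant pool skewed male vs. skewed female.
    Positive scores suggest male bias, negative suggest female bias.
    Add-one smoothing keeps unseen phrases from dividing by zero."""
    male_counts = Counter(p for doc in male_skew_docs for p in doc)
    female_counts = Counter(p for doc in female_skew_docs for p in doc)
    male_total = sum(male_counts.values())
    female_total = sum(female_counts.values())
    vocab = set(male_counts) | set(female_counts)
    scores = {}
    for phrase in vocab:
        p_male = (male_counts[phrase] + 1) / (male_total + len(vocab))
        p_female = (female_counts[phrase] + 1) / (female_total + len(vocab))
        scores[phrase] = log(p_male / p_female)
    return scores

# Toy example with pre-extracted phrase lists (hypothetical data):
male_skew = [["rock star", "aggressive", "fast-paced"],
             ["ninja", "aggressive"]]
female_skew = [["collaborative", "supportive", "fast-paced"],
               ["collaborative", "mentorship"]]
scores = phrase_log_odds(male_skew, female_skew)
```

Run over thousands of listings with real outcomes data, a score like this surfaces far more phrases than any qualitative word list, including ones no one would think to nominate by hand.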

Across industries, job listings that use bulleted lists for one third of their content are the most popular with both men and women. People scan a listing for just a few seconds before deciding whether to engage further. In that brief scan, they rely heavily on the listing’s visual silhouette as a cue. Nothing changes the visual silhouette of a standard job listing more than the balance of bulleted content and prose.

While using bullets for a third of the job listing works best for everyone, the impact of including more or less than that ideal amount varies by gender. Men are much more likely to engage with a job that has more than a third bulleted content than women are. Women are much more likely to engage with a job that has less than a third bulleted content than men are. This is true not just in tech, but also in many other industries.
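As a back-of-the-envelope way to check a listing against that one-third benchmark, you could estimate the bulleted share of a posting by counting the characters on lines that start with a bullet marker. This is a crude heuristic of my own, not how Textio measures it:

```python
def bullet_fraction(listing: str) -> float:
    """Rough estimate of how much of a job listing is bulleted content:
    the share of non-blank characters on lines that begin with a
    bullet marker. A crude proxy for the listing's visual silhouette."""
    bullet_markers = ("-", "*", "•")
    bullet_chars = total_chars = 0
    for line in listing.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        total_chars += len(stripped)
        if stripped.startswith(bullet_markers):
            bullet_chars += len(stripped)
    return bullet_chars / total_chars if total_chars else 0.0

# Hypothetical listing: part prose, part bullets.
listing = """We build developer tools for data teams.
- 5+ years of Python
- Experience with distributed systems
You'll join a collaborative group that values mentorship."""
share = bullet_fraction(listing)
```

A listing whose `share` drifts far above one third would, by the pattern described here, tend to skew its responses male; far below, female.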

The ideal is the same for both genders, but near the edges of the ideal range, men’s and women’s preferences differ. And it’s striking that their preferences align with how they represent themselves in their own resumes: men gravitate more to bulleted lists of facts, and women gravitate more to narrative prose.

There is no way we would have figured this out without analyzing documents at scale: not just relying on qualitative impressions, but measuring job listings and outcomes data from thousands of different companies to find the patterns that work.

Bias word lists generated from qualitative research are a great conversation starter. But patterns of bias run deep, below the level of consciousness, and it’s nearly impossible to spot them all without real machine learning models.

We work in technology. Isn’t it time we used technology to solve this problem?


Topics: Diversity, Uncategorized, Women In Tech