Simply click/hover the gender bias icon in the byline (next to the author details).
First tab: gender bias gauge to quickly see if the content leans too far in any direction.
Second tab: list of feminine words including extracted sentences.
Third tab: list of masculine words including extracted sentences.
Compare gender bias across all content
On your Confluence homepage, look for “Gender Bias+” in the sidebar (or under “Apps”).
This will open a table with a list of all content across your Confluence instance.
From here you can quickly sort and filter by:
Feminine, masculine or neutral bias.
Strong bias or some bias.
Your Confluence spaces.
Is the gender bias score public or private?
Private. Only logged-in users will see the byline dropdown and the gender bias page.
Is the calculation automatic?
Yep. The in-browser calculation is run automatically in the background when you:
visit a page/blog without a score.
create a new page or blog.
edit a page or blog.
How is the gender bias calculated?
The algorithm identifies words within your page or blog post from a list of 120+ gendered words.
It then tallies separate totals of identified feminine words and masculine words.
The raw “score” is the difference between these two word totals.
We also derive a qualitative score: bias free, neutral bias, some bias, or strong bias.
Definition of bias terms “strong”, “some” and “neutral”
Bias free: no gendered words found.
Neutral bias: total feminine words = total masculine words.
Some bias:
FEWER than 10 total masculine/feminine words found.
or, the difference between totals is LESS than 200%.
Strong bias:
one total is 0 while the other is GREATER than or equal to 10.
or, the difference between totals is GREATER than 200%.
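The rules above can be sketched in code. This is a rough illustration only, not the app’s actual implementation: the exact meaning of “difference between totals” is an assumption here (the gap between the two counts as a percentage of the smaller count), as is the order in which overlapping rules are checked.

```python
def classify_bias(feminine: int, masculine: int) -> str:
    """Classify gender bias from gendered-word counts (illustrative sketch)."""
    if feminine == 0 and masculine == 0:
        return "bias free"
    if feminine == masculine:
        return "neutral bias"
    low, high = sorted((feminine, masculine))
    if low + high < 10:   # fewer than 10 gendered words overall
        return "some bias"
    if low == 0:          # one side absent while the other is >= 10
        return "strong bias"
    # Assumed reading of "difference between totals": percentage gap
    # relative to the smaller count.
    diff_pct = (high - low) / low * 100
    return "strong bias" if diff_pct > 200 else "some bias"

print(classify_bias(0, 0))   # bias free
print(classify_bias(5, 5))   # neutral bias
print(classify_bias(4, 7))   # some bias
print(classify_bias(2, 12))  # strong bias
```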
Do I need to change gendered words?
Nope. The discovery of gendered words in content is not necessarily a bad thing. The goal of this app is to help you quickly identify if gender bias in your content leans strongly in any particular direction.
If content is tagged with “strong bias”, you may want to read it, identify the particular sentences flagged, and edit them to bring the score back down to “some bias”.
Why does it think X is gendered?
There will be occasional edge cases where a word used in a non-gendered context is extracted as a “gendered word”. For example, the word “commit” might be used in the context of “committing code”, but the app will still extract it and count it as a feminine word. Understanding context within text remains one of the genuinely hard problems in computer science.