ResearchPod

Genderly: Language, Bias, and Representation in Film Criticism

What can film reviews tell us about gender bias in the movie industry?

Dr Wael Khreich from the American University of Beirut explores this question with Genderly, a custom-built AI tool that analyses the language of 17,000 professional reviews. His findings reveal that female-led films are far more likely to be judged through a biased lens—subtly and overtly reinforcing stereotypes. This research sheds light on how language shapes perception, influences careers, and contributes to broader societal inequalities.

Read the original research: doi.org/10.1371/journal.pone.0316093

Hello and welcome to ResearchPod! Thank you for listening and joining us today.

In this episode, we look at new research on gender bias in the film industry. It comes from the American University of Beirut and is led by Wael Khreich, Assistant Professor of Machine Learning. Dr Khreich specialises in detecting and mitigating bias in artificial intelligence, and acknowledges the support of his collaborators: Jad Doughman, his research assistant; the master’s students who played a key role in shaping the project; and the Suliman S. Olayan School of Business (OSB).

Gender bias is associated with social role theory – the idea that gender stereotypes originate from the roles and behaviours traditionally assigned to men and women. For example, men have historically been cast as breadwinners and leaders with power and agency, while women have generally been regarded as passive, nurturing caretakers and homemakers.

Dr Khreich’s study analyses the language of 17,000 film reviews using ‘Genderly’, an AI-powered bias detection tool developed specifically for this purpose. The analysis was informed by additional information about the gender of each film’s lead actors, writers and directors.

The findings confirm that many critics review films through a gendered lens, and that reviews reflect significant bias against female-led films. Other research has shown that this kind of bias can seriously affect female actors’, writers’ and directors’ careers and finances, as well as their well-being. It also has implications for box office performance, and for the industry – and society – as a whole. 

Film-making is a traditionally male-dominated industry, both in front of and behind the camera. 

This doesn’t just mean that women are less likely to find jobs as actors, writers or directors, or that the women who are employed have to fight harder than men to be heard, or to find project backing. 

It also means that women in film typically earn less than men, and are more likely to experience the psychological and physical effects of discrimination. It’s known, for example, that gender bias can lead to heightened levels of stress and increased cardiovascular activity, both of which can be harmful to health.

At a wider level, the lack of female representation on- and off-screen has societal consequences. It represents a loss of human potential and deprives us of storytelling from a female perspective. Gender inequality results in a non-inclusive society and can help to perpetuate a culture of toxic masculinity and violence.

Until now, research has tended to measure gender bias in film by considering metrics such as the difference in approval ratings between male- and female-led films. Dr Khreich’s study breaks new ground by examining how the language of film reviews contributes to gender discrimination by reinforcing outmoded stereotypes of male and female behaviour.

The researchers began by constructing a dataset of 17,000 professional English-language reviews of 2,500 films, most of them American or British. The reviews were taken from the Movie Review Dataset, which was already available thanks to earlier research on natural language processing. The films were made between the 1900s and the 2020s, with the majority dating from the 1990s.

The dataset was enriched by information about the films’ lead actors, writers and directors taken from the Open Movie Database web service.
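
For listeners curious what that enrichment step might look like in practice, here is a minimal sketch using the public OMDb web API. The API key, title matching and field selection are illustrative assumptions; the researchers’ exact pipeline is not described in this episode.

```python
# Minimal sketch: enriching a review dataset with cast/crew metadata
# from the OMDb web service. The API key and example title are
# placeholders; the researchers' exact matching logic may differ.
import requests

OMDB_URL = "http://www.omdbapi.com/"
API_KEY = "YOUR_OMDB_KEY"  # placeholder

def fetch_film_metadata(title: str, year: str | None = None) -> dict:
    """Return director, writer and actor fields for a film title."""
    params = {"apikey": API_KEY, "t": title, "type": "movie"}
    if year:
        params["y"] = year
    data = requests.get(OMDB_URL, params=params, timeout=10).json()
    if data.get("Response") != "True":
        return {}
    return {
        "title": data["Title"],
        "year": data["Year"],
        "director": data["Director"],
        "writer": data["Writer"],
        "actors": data["Actors"],  # comma-separated cast list
    }

print(fetch_film_metadata("Alien", "1979"))
```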

That’s a lot of data! Reviewing and analysing it manually would have been time-consuming and subjective, and existing machine learning and natural language processing systems could not detect the subtleties of language the research demanded. The team’s first task was therefore to develop an AI-powered gender bias detection system – ‘Genderly’ – to meet the challenge.

Genderly identifies language that is sexist, and either implicitly or explicitly prejudicial against women, according to five types of gender bias: generic pronouns, benevolent or hostile sexism, occupational bias, exclusionary bias, and idiomatic or semantic bias.

For example, using the pronoun ‘he’ or the possessive ‘his’ to refer generically to a person of any gender – as in ‘everyone should do his best’ – privileges men above everyone else. Dr Khreich cites the linguist Ann Bodine, who observed that historically ‘human beings were to be considered male, unless proved otherwise’!

Another type of gender bias, sexism, comes in two forms. Benevolent sexism appears to be positive but is demeaning because it reinforces outdated beliefs – for example, saying that someone is ‘really smart – for a girl’. Hostile sexism is negative from the outset and rests on stereotypes that reflect traditional assumptions of male superiority – for example, an unsubstantiated generalisation such as ‘women are incompetent’.

Occupational bias relates to the hierarchy of social roles – for example, the notion that professors, doctors and scientists are male, while teachers, nurses and secretaries are female. Allied to this is exclusionary bias, which surfaces in words such as ‘chairman’ instead of the gender-neutral ‘chairperson’, or ‘policeman’ instead of ‘police officer’.

Last but not least, Genderly looks at semantics and the way language is used in figures of speech. In American English, for example, a woman might be described as a ‘cute cookie’, demeaning her by likening her to food. And while a single man might simply be said to be a ‘bachelor’, a single woman might pejoratively be described as an ‘old maid’.
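
As a concrete illustration of that five-way taxonomy, the toy sketch below encodes the categories and flags the two that crude surface cues can catch. It is purely illustrative: Genderly itself is a trained AI model, not a keyword matcher.

```python
# Hypothetical sketch of Genderly's five-way bias taxonomy. Genderly
# itself is a trained AI model; this toy rule-based tagger illustrates
# the categories, not the actual detection method.
GENDERLY_LABELS = {
    "generic_pronoun":   "everyone should do his best",
    "sexism":            "benevolent ('smart, for a girl') or hostile ('women are incompetent')",
    "occupational_bias": "assuming doctors are male and nurses female",
    "exclusionary_bias": "'chairman' instead of 'chairperson'",
    "semantic_bias":     "'old maid' for a single woman vs 'bachelor' for a man",
}

EXCLUSIONARY_TERMS = {"chairman", "policeman", "spokesman"}

def toy_tag(sentence: str) -> list[str]:
    """Flag the two categories that simple surface cues can catch."""
    tokens = sentence.lower().split()
    tags = []
    # Very rough proxy for a generic masculine pronoun.
    if "everyone" in tokens and any(t in ("he", "his") for t in tokens):
        tags.append("generic_pronoun")
    if any(t in EXCLUSIONARY_TERMS for t in tokens):
        tags.append("exclusionary_bias")
    return tags

print(toy_tag("Everyone should do his best"))      # ['generic_pronoun']
print(toy_tag("The chairman opened the meeting"))  # ['exclusionary_bias']
```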

Annotators labelled the language used to train Genderly according to these five types of gender bias. The training sentences were drawn from a variety of online sources built on user-generated content, including Quora, X (formerly Twitter), and QuoteMaster.
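
The exact model behind Genderly is not detailed in this episode, but a minimal baseline for the same train-on-annotated-text workflow might look like the sketch below. The model family, toy sentences and labels are all assumptions made purely for illustration.

```python
# Baseline sketch of training a bias classifier on annotated sentences.
# Genderly is described only as "AI-powered"; the model family below
# (TF-IDF + logistic regression) is an assumption used to illustrate
# the workflow, not the paper's actual architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated sentences drawn from sources such as
# Quora, X (formerly Twitter) and QuoteMaster.
sentences = [
    "Everyone should do his best",
    "She is really smart, for a girl",
    "Women are incompetent",
    "The chairman opened the meeting",
    "The film was released in 1994",
]
labels = [
    "generic_pronoun",
    "benevolent_sexism",
    "hostile_sexism",
    "exclusionary_bias",
    "no_bias",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)
print(model.predict(["The policeman helped everyone do his best"]))
```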

Genderly helped the research team confirm a significant bias against women in the film reviews that formed the basis of the study.

For example, reviews of female-led films contained 149% more hostile sexism and negative attitudes towards women than reviews of male-led films. They also showed 44% more benevolent sexism, involving patronising or stereotypical remarks about women.
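
To unpack what those percentages mean: ‘149% more’ implies roughly 2.49 times the male-led rate. The short sketch below works through the arithmetic with an invented base rate.

```python
# What "149% more hostile sexism" means in relative terms: if reviews
# of male-led films contain hostile-sexism flags at some base rate,
# reviews of female-led films contain roughly 2.49x that rate. The
# base rate below is invented purely to illustrate the arithmetic.
base_rate_male = 0.010                                 # hypothetical flags per sentence
rate_female_hostile = base_rate_male * (1 + 1.49)      # 149% more
rate_female_benevolent = base_rate_male * (1 + 0.44)   # 44% more

print(f"hostile:    {rate_female_hostile:.4f} (2.49x the male-led rate)")
print(f"benevolent: {rate_female_benevolent:.4f} (1.44x the male-led rate)")
```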

Interestingly, the kind of sexism exhibited differed across film genres. Hostile sexism was more likely to feature in reviews of Romances, and benevolent sexism in Family and Music film reviews.

The better news was that generic pronouns were not found to be a particular problem: in reviews, ‘he’ or ‘she’ generally referred to a specific male or female character. And when dehumanising language was used, it usually related to the film’s content rather than indicating bias on the critic’s part.

Analysis of the supporting data on the films’ lead actors, writers and directors confirmed how male-centric cinema is. Female representation has improved over the decades covered by the research, but women remain substantially under-represented in the industry: the study found that just 28% of lead actors were female, along with 14% of writers and 9% of directors.

Some genres, such as Sci-Fi and Westerns, were entirely dominated by male lead actors, whereas Film-Noir and Thrillers had a majority of female leads. No genre had majority female representation among writers or directors, although 40% of Romances did have female writers.

Dr Khreich’s study highlights the prevalence of gender bias in the American and British film industry. It confirms that critics consciously or unconsciously judge films differently, based on the gender of those involved. Not only can this deter women from entering the film industry, it also shapes audience perceptions and helps to perpetuate societal biases against women.

This matters because language shapes our perceptions and influences the way we think about people’s roles in society. Biased language also affects how people feel about themselves and about their worth as human beings. As a result, it has real-world consequences for careers, livelihoods and well-being.

Further research is needed on a more diverse sample of reviews, covering films from beyond the American and British industries. It should also consider non-binary and gender-diverse identities, as well as representations of other marginalised groups, including people of colour. Last but not least, it would be interesting to study the gender of entire casts and crews – not just lead actors, writers and directors.

Genderly has demonstrated its merits as an AI-powered gender bias detection tool, and its developers hope that it will be used beyond the film industry. The aim is to continuously improve it, so that the tool can detect and help mitigate bias in language in general – not just in film reviews. 

Dr Khreich argues that cinema needs a substantial shift to become a more equitable industry – fairer and more considerate to all those involved, and more representative of society.

Professional film reviewers have the power to help drive change by adopting more balanced language and supporting diverse storytelling. In this way, they can help dismantle prejudicial narratives that reinforce gender biases, and instead point the way to a richer, more inclusive cinematic landscape.

As Dr Khreich concludes, by examining the language of film reviews, the study highlights the need to raise awareness of gender bias in the film industry – and, even more importantly, the need for reform.

That’s all for this episode – thanks for listening. Links to the original research can be found in the show notes. And, as always, don’t forget to stay subscribed to ResearchPod for more of the latest science!

See you again soon.