The web is a huge virtual square, where we spend much of our time and, just as in the ‘real’ world, we meet people, exchange opinions and tell stories. But alas, just like in the street or in the square, in the great piazza of the web you also come across episodes of violence.
A study published last March by the European Added Value Unit (Eava) of the European Parliament's Research Service highlighted the fact that cyber violence affects women (usually as victims of stalking, threats and other types of harassment) far more than men.
Undoubtedly, the pandemic has worsened the phenomenon, because people’s social life has largely moved online. However, despite the significant increase in digital interactions in the last year, no national or international plan has been drawn up to combat this particular form of violence, which is increasingly invasive, although less recognized than offline attacks.
It takes time to collect data, study the phenomenon, find solutions to counteract it and protect women and girls who are in real need of help.
An important piece of research
A project by Arianna Muti, a student from Bologna, who graduated in ‘Language, society and communication’, goes some way to doing just that. She has always been concerned about the problem of misogyny, and so she decided to use her degree thesis as a way of tackling the issue.
Arianna developed an algorithm that classifies Italian content on Twitter and detects whether tweets are misogynistic and/or aggressive. “The idea came from computer science professor Elisabetta Fersini, who together with a team of researchers launched a competition called Automatic Misogyny Identification, where participants were invited to develop a computational model to identify misogyny and aggression in tweets posted in Italian. I chose to participate because the topic has always been one that I feel strongly about. In fact, several times, especially on Facebook, I have had to respond to misogynistic comments, trying to explain why a particular comment was so offensive,” says Arianna.
She undertook her study with researcher Alberto Barrón-Cedeño and under the supervision of Professor Fabio Tamburini. She explained how the algorithm works: “It is a classifier and, as such, classifies a tweet into one of three categories: misogynist, non-misogynist, or aggressive misogynist. It is not yet being used automatically on Twitter; therefore, to date it is not able to independently find tweets online. The user must submit a tweet for it to be classified.”
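The article does not show Muti's actual model, but the interface she describes — a tweet in, one of three labels out — can be illustrated with a toy sketch. Everything below is invented for illustration: the Naive Bayes approach, the English placeholder texts and the tiny training set are assumptions, not the real system, which was trained on Italian tweets.

```python
# Toy sketch of a three-way tweet classifier (Naive Bayes over bag-of-words).
# NOT Muti's model: approach, texts and labels here are invented placeholders.
import math
from collections import Counter, defaultdict

LABELS = ("non-misogynist", "misogynist", "aggressive misogynist")

class TweetClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)   # label -> word frequencies
        self.label_counts = Counter()             # label -> number of examples
        self.vocab = set()

    def train(self, texts, labels):
        for text, label in zip(texts, labels):
            self.label_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def classify(self, tweet):
        """Return the label maximising log P(label) + sum of log P(word|label)."""
        words = tweet.lower().split()
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                # add-one (Laplace) smoothing so unseen words don't zero out
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = TweetClassifier()
clf.train(
    ["lovely sunset over the harbour tonight",
     "women are too emotional to be managers",
     "shut up you worthless woman or else"],
    ["non-misogynist", "misogynist", "aggressive misogynist"],
)
print(clf.classify("lovely evening at the harbour"))  # prints "non-misogynist"
```

State-of-the-art systems for this task rely on pretrained language models rather than word counts, but the user-facing behaviour is the same: a submitted tweet is assigned exactly one of the three categories.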
Not just a goal, but also a starting point
Thanks to the Italian student’s ground-breaking work, researchers now have important data available and, from this, further insights are being gained. “Looking at the tweets that were used to train and test the model, I can say with some confidence that the women most affected are those with the greatest public exposure. They tend to experience numerous individual attacks, some complete with hashtags, while it is far less common to see aggressive or misogynistic abuse aimed at women in general.
“Many tweets refer to the outward appearance of women (body shaming), but there are also tweets that degrade women in the public eye.”
Arianna Muti’s research fits with other pieces of the jigsaw provided by similar studies. One example is the research study published by Amnesty International Italy, The Hate Barometer, which found that the most frequent online offenses are body-shaming attacks directed mostly, though not exclusively, at women.
In particular, the Amnesty research notes that in the last year there has been a ‘shift in hatred’, which no longer regards just a woman’s body as fair game for derision and insult, but also the professional life of women, as if their choice to be economically independent was somehow unacceptable.
The work of the Bologna student also matches the conclusions of the European Parliament’s researchers, namely that women are far more vulnerable to cyber violence than men. Research by Vox (the Italian Observatory on Rights) shows similar findings, noting how hate tweets in general have decreased in the last year, while those against women have increased.
The research is progressing, and the algorithm designed by Arianna Muti has reached an important milestone: the model was presented as part of the ‘Evalita’ initiative, a project dedicated to the development of automatic systems for natural language processing in Italian, and it came out top in the Automatic Misogyny Identification section, with an accuracy of around 76 per cent, beating even Google’s research team.
The algorithm was declared best performer of all the entrants, and this means the opportunity is now there to improve and expand it. Arianna is aware of this, and she doesn’t want to waste a minute in improving her work!
“I intend to do a doctorate on the identification of misogyny to increase the model’s performance, limiting the number of false positives. On top of that, I would also like to work on memes, thus recognizing misogyny and aggression in a multimodal context (text + image).”
Undoubtedly, ‘making images speak’ to flush out misogyny is a new path that could bring significant results, but it is only one of many possible developments.
Despite the limits, there are new paths for development
Her research focused on Twitter, ‘because thanks to the 280-character limit it is easier to identify misogyny,’ explains the Bologna student. But she is open to possible developments of the project, using it on other social networks and establishing how useful it might be to experiment with a similar solution on other platforms. The hope is that such tools might help ensure that the web could be protected from offensive content by acting on multiple channels and therefore with greater effectiveness. According to the Eava research, it is likely that online violence among adolescents doesn’t stop as they grow up, but continues to increase, especially due to the growing use of social media; thus the work of finding solutions is more essential than ever.
Arianna explains: “In theory it is possible to replicate a solution like this on other platforms, but the result isn’t guaranteed. To make the algorithm work on other types of text, such as comments or Facebook posts, we would have to remove the 280-character limit and retrain the model to work on longer texts. I aim to give it a try, but it is important to note that with longer texts there is more margin for error.”
Clearly, the project has within itself possibilities for growth and further development: but first, the limits imposed by technology and research itself must be overcome. For example, to date the algorithm is not yet implemented automatically on Twitter and can’t be used if you don’t have the skills to run it. However, the code is available on GitHub, a hosting service for software projects, which augurs well.
It may be some time before the algorithm becomes widely operational, but understanding what the offensive tweets look like and to whom they are addressed is an important step, not only for research but also for the wider community, which needs to become more aware of this phenomenon of cyber violence against women. Anything which helps to keep the issue in the public eye is important, so that perpetrators can be identified more easily and reported to the relevant authorities.
“The major limitation we face,” explains the developer, “is the high number of false positives, or tweets that are classified as misogynistic or aggressive misogynist when in reality they are not. This happens due to the presence of certain words in a tweet – for example ‘whale’. Since the word whale has appeared in many misogynistic contexts, the algorithm has learned that whale is misogynistic, so it classifies most tweets containing this word as misogynistic. In fact, they may be quite innocent references to sea creatures!
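The “whale” failure mode described above can be sketched with a purely lexical scorer. The weights below are invented, but they show the mechanism: a word that happened to co-occur with offensive training examples acquires a high learned score, dragging an entirely innocent tweet over the decision threshold.

```python
# Illustrative sketch of a lexical false positive. The per-word scores are
# hypothetical stand-ins for weights a model might learn from skewed data.
learned_weights = {
    "whale": 2.1,        # seen mostly in insults during training
    "stupid": 1.8,
    "documentary": -0.5,
    "beach": -0.7,
}

def misogyny_score(tweet: str) -> float:
    """Sum learned word scores; a positive total means 'flagged as misogynistic'."""
    return sum(learned_weights.get(w, 0.0) for w in tweet.lower().split())

innocent = "watched a documentary about a whale at the beach"
print(misogyny_score(innocent) > 0)  # prints True: the innocent tweet is flagged
```

Because the model sees only word statistics, not meaning, the remedy lies in richer context modelling rather than in hand-tuning individual word lists.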
“Another problem comes when a misogynistic comment is made in the form of an ironic compliment … for example: ‘For a woman you drive very well’. This phrase, clearly misogynist in tone, is difficult for the algorithm to recognize. So, we need to work on this and find solutions that can automatically recognize and report offensive content, limiting cases of misunderstanding or error.”
Many of the algorithm’s limits are not insurmountable; they reflect the early stage of the work and point to possibilities for development. What is needed is time, study and trust in the project’s potential.
A study that helps the community
Misogyny is a problem, as is violence, both offline and online. Arianna Muti’s algorithm raises awareness of this social scourge and helps it to be identified and combatted with the tools available, even though they are limited at this stage.
Growing awareness of the problem of online violence against women is leading to a greater sense that things are changing for the better; research continues and as Muti’s work demonstrates, if there is a will, there is a way. And if the way doesn’t exist, it can be created. This does not only mean designing algorithms or using advanced technological skills, rather, she believes, it means putting oneself to the test, with one’s own life skills and sensitivities.
Each person can contribute actively and constructively on many levels, with the common goal of stemming this rampant phenomenon, which has become destructive for so many women.
And surely the end result will be a growth in empathy?
Federica Carla Crovella graduated in Literature in Turin, Italy. She wrote a research thesis related to gender literature. She specialises in Piedmontese culture and news, and writes about gender issues in her blog, www.sciveredidonne.blogspot.com
English translation by Ronnie Convery