Facing allegations of bias, Twitter is phasing out its image cropping algorithm after deeper research into public accusations that the system treated people inequitably.
The company said on Wednesday that not everything on Twitter is a good candidate for an algorithm, and in this case, “how to crop an image is a decision best made by people”.
Twitter started using a saliency algorithm in 2018 to crop images and improve consistency in the size of photos in users’ timelines.
However, people on Twitter noted instances where its model chose white individuals over Black individuals in images and male-presenting images over female-presenting images.
Users also identified instances where the crop focused on a woman’s chest or legs as a salient feature.
The micro-blogging platform tested the model on a larger dataset to determine whether these cases reflected a systematic flaw.
According to Rumman Chowdhury, Director, Software Engineering at Twitter, an algorithmic decision “doesn’t allow people to choose how they’d like to express themselves on the platform, resulting in representation harm”.
In March this year, Twitter began testing a new way to display standard aspect ratio photos in full on iOS and Android — meaning without the saliency algorithm crop.
“The goal of this was to give people more control over how their images appear while also improving the experience of people seeing the images in their timeline. After getting positive feedback on this experience, we launched this feature to everyone,” Chowdhury said.
In October 2020, the company heard feedback from people that its image cropping algorithm didn’t serve all people equitably.
Twitter found after several months of testing that in comparisons of men and women, there was an 8 per cent difference from demographic parity in favor of women.
“In comparisons of Black and white individuals, there was a 4 per cent difference from demographic parity in favor of white individuals. In comparisons of Black and white men, there was a 2 per cent difference from demographic parity in favor of white men,” the findings showed.
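To make the figures above concrete, a demographic-parity gap of this kind is simply the difference in the rates at which the crop favors each group. The following is a minimal sketch with entirely hypothetical data and group labels, not Twitter’s methodology or numbers:

```python
# Hedged sketch: computing a demographic-parity gap for a binary
# crop-selection outcome. All data and group names are hypothetical.

def parity_gap(selections):
    """selections: dict mapping each of two groups to a list of 0/1
    outcomes (1 = the crop favored a member of that group).
    Returns the difference in selection rates, group_a minus group_b."""
    (g1, o1), (g2, o2) = selections.items()
    rate1 = sum(o1) / len(o1)
    rate2 = sum(o2) / len(o2)
    return rate1 - rate2

# Illustrative numbers only.
gap = parity_gap({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # selected 7 of 10 times
    "group_b": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],  # selected 5 of 10 times
})
print(f"{gap:+.0%}")  # difference from demographic parity
```

A positive gap means the crop favored `group_a` more often; the percentages Twitter reported can be read the same way.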
“We’re concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform. Saliency also holds other potential harms beyond the scope of this analysis, including insensitivities to cultural nuances,” Twitter admitted.
The company said it is working on further improvements to media on Twitter that build on this initial effort, and it hopes to roll them out to everyone soon.