Madison Reporter

Wednesday, May 15, 2024

Popular Social Media Apps' AI Analysis of User Photos Introduces Bias and Errors

Jennifer Mnookin Chancellor | Official website

Digital privacy and security engineers at the University of Wisconsin-Madison, led by Kassem Fawaz, have uncovered concerning issues with the artificial intelligence systems used by popular social media platforms such as TikTok and Instagram. The researchers found that these AI systems, which extract personal and demographic data from user images, can misclassify users and introduce errors and biases into the platforms.

The team's findings, to be presented at the IEEE Symposium on Security and Privacy in San Francisco in May 2024, shed light on the implications of using AI to analyze user photos. According to PhD student Jack West, who worked on the project alongside PhD student Shimaa Ahmed and Fawaz, machine learning can now run directly on users' devices, which allowed the team to take a deeper look into the AI vision models and the data they collect and process.

The researchers examined the vision models of TikTok and Instagram and found discrepancies in how accurately they recognize demographic differences and age. TikTok's model, for instance, often made mistakes when classifying individuals under 18, sometimes labeling younger individuals as older. Instagram's model, meanwhile, categorized over 500 different concepts from photos, including age, gender, time of day, and facial features.

Ahmed noted that while Instagram classified images by age more accurately than TikTok, it exhibited biases against certain groups. West highlighted that both platforms analyze photos as soon as a user selects them, storing the resulting data locally on the device for unknown purposes. If such data is used for age or identity verification, the researchers believe the vision models need improvement to reduce bias and deliver fair, accurate digital services for all users.

The study, whose authors also include Maggie Bartig and Professor Suman Banerjee of the University of Wisconsin-Madison and Lea Thiemt of the Technical University of Munich, underscores the importance of addressing bias and errors in the AI technology used by popular social media apps.
