How social media spreads false information
From sock puppet accounts to scam ads, social media can spread misinformation to thousands, if not millions, of people at once. Unfortunately, social media algorithms treat any interaction as a signal to show the content to more people.
Angry reactions on Facebook or comments calling a post out as false only help the poster reach more people. This is because the algorithm only measures whether something is popular; it can't tell whether information is false. That's why users should report false information rather than engage with it.
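The point above can be illustrated with a minimal sketch. This is a hypothetical scoring function, not any platform's real algorithm: it simply adds up every kind of engagement, which is why an angry reaction or a corrective comment boosts a post just as much as a like.

```python
# Hypothetical engagement-based ranking score (illustrative only).
# Every interaction type counts the same; the score has no notion
# of whether the engagement was approval or criticism.

def reach_score(likes, angry_reactions, comments, shares):
    """Return a simple popularity score: all engagement is equal."""
    return likes + angry_reactions + comments + shares

# An accurate post with modest, positive engagement...
honest_post = reach_score(likes=50, angry_reactions=2, comments=5, shares=3)

# ...versus a false post that provokes angry reactions and
# comments calling it out.
false_post = reach_score(likes=10, angry_reactions=40, comments=30, shares=5)

# The false post scores higher, so it reaches more people.
print(false_post > honest_post)
```

Under this (assumed) model, arguing with a false post in the comments actively helps it spread, which is why reporting is the better response.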
How echo chambers spread misinformation
‘Echo chambers’ is a term for the experience of only seeing one type of content: the more someone engages with a piece of content, the more likely they are to see similar content.
So, if a child interacts with an influencer spreading misogyny, they will be shown more similar content. If they interact with that content, they see more still, and so on, until misogynistic content is nearly all they see.
When an algorithm creates an echo chamber, the user only sees content that supports their existing view. This makes it difficult for them to hear other perspectives and widen their worldview, so when challenged, they may become defensive and more likely to spread hate.
How design impacts the way misinformation spreads
A Risky-by-Design case study from the 5Rights Foundation identifies the following design features as also contributing to the spread of misinformation online.