
What is misinformation?

Learn about fake news and its impact on children

With so many sources of information online, some children might struggle to make sense of what is true. In this guide, learn about misinformation, what it looks like and how it impacts children’s wellbeing and safety online.


Quick tips
4 quick things to know about misinformation

'Fake news' is a popular term for false information and news online, but it's not the preferred one. It's more appropriate to use 'misinformation' and 'disinformation':

  • Misinformation is false information spread by people who think it’s true.
  • Disinformation is false information spread by people who know it’s false.

Mis/disinformation is an online harm and can impact children’s:

  • Mental health
  • Physical wellbeing
  • Future finances
  • Views towards other people

It can also lead to mistrust and confusion about the information they come across online.

Misinformation comes in different forms and might look like:

  • Social media hoaxes
  • AI adverts
  • Phishing emails
  • Popular videos
  • Sponsored posts

Misinformation is hard for children to spot, as they might not yet have the skills to fact-check. It can spread on social media, through satirical news websites, via parody videos and in other spaces.

Learn more about the forms it can take.

Insights from Ofcom:

  • 32% of 8-17-year-olds believe that all or most of what they see on social media is true
  • 70% of 12-17s said they were confident they could judge whether something was real or fake
  • Nearly a quarter of those children were unable to do so in practice

This mismatch between confidence and ability could leave these children exposed to harm. On a more positive note, 48% of those who said they were confident were also able to judge correctly in practice. Read Ofcom’s 2023 research in full.


Learn about misinformation

Misinformation is false information that is spread by people who think it’s true. This is different from ‘fake news’ and disinformation.

Fake news refers to websites that share mis- or disinformation. This might be via satire sites like The Onion, but it also refers to sites pretending to be trustworthy news sources.

Sometimes, people use the term ‘fake news’ to discredit true information. As such, it’s better to use more general terms such as ‘misinformation’ and ‘disinformation’.

Disinformation is false information that a person or group spreads online while knowing it’s false. Generally, they do this with a specific intention, usually to influence others to believe their point of view.

7 types of mis/disinformation

UNICEF identifies 7 main types of mis- and disinformation, all of which can impact children.

Satirical content and parodies can spread misinformation. This is misleading information that is not intended to harm. Creators of the content know the information is false, but share it for humour. However, if people misunderstand the intent, they might spread it as true.

Clickbait for views can mislead users. This is content where the headline, visuals or captions don’t match the actual content. This is often clickbait to get more views on a video, visits to a page or engagement on social media.

Intentionally misleading content can create anger. People might share information in a misleading way to frame an event, issue or person in a particular light. An example is when an old photo is used in a recent social media post. It might spread outrage or fear until the photo receives the right context.

Giving fake context can cause unnecessary outrage. Fake context is when information is shared with incorrect background information.

A lighthearted example is a popular photo of young director Steven Spielberg posing and smiling with a large dead animal. Many people were outraged at what they believed was his hunting of an endangered animal. However, the correct context was that he was on the set of Jurassic Park, posing with a prop triceratops.

Usually, someone spreading disinformation will ‘alter’ the context of information. The intention is to convince people of their belief or viewpoint.

Impersonation can cause harm in many ways. This is when a person, group or organisation pretends to be another person or source. Imposter content can trick people into:

  • Sending money
  • Sharing personal information
  • Further spreading misinformation

True information that’s altered is hard to notice. Manipulated content is real information, images or videos that are altered or changed in some way to deceive others. Some deepfakes are an example of such content.

Completely false information can lead to harm. Fabricated content is disinformation created without any connection to truth. Its overall intention is to deceive and harm. Fabricated content can quickly become misinformation.

How does misinformation spread online?

From social media to news, misinformation can spread all over the world in an instant.

For children, misinformation and disinformation often look very convincing. This is especially true with the popularity of generative AI and the ability to create deepfakes.

Learn more about using artificial intelligence tools safely.

Artificial intelligence can help scammers create convincing ads and content that tricks people. Unfortunately, unless reported (and sometimes even when reported), these ads can reach millions of people quickly.

While misinformation is nothing new, the internet means it can spread much more quickly and reach many more people.

How social media spreads false information

From sock puppet accounts to scam ads, social media can help spread misinformation to thousands, if not millions, of people at once. Unfortunately, social media algorithms mean that any interaction helps the content reach more people.

Angry reactions on Facebook or comments calling a post out as false only help the poster reach more people. This is because the algorithm only understands whether something is popular or not. It can’t tell if information is false; that’s why users should report false information rather than engage with it.
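
To make this concrete, here is a minimal Python sketch of engagement-based ranking. It is an illustration built on invented post data, not any platform’s real code, but it shows why an angry reaction counts the same as a like:

```python
# Minimal sketch of engagement-based ranking (illustrative only).
# Every interaction counts as a positive signal, whatever its intent.

def engagement_score(post):
    return (post["likes"] + post["angry_reactions"]
            + post["comments"] + post["shares"])

posts = [
    {"id": "holiday_photo", "likes": 120, "angry_reactions": 2,
     "comments": 10, "shares": 5},
    {"id": "false_claim", "likes": 30, "angry_reactions": 400,
     "comments": 250, "shares": 90},
]

# Ranking by raw engagement puts the false post first, even though most
# of its interactions were people objecting to it. Reporting, by contrast,
# feeds a separate moderation signal instead of boosting reach.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```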

How echo chambers spread misinformation

‘Echo chambers’ is a term used to describe the experience of only seeing one type of content. Essentially, the more someone engages with the content, the more likely they are to see similar content.

So, if a child interacts with an influencer spreading misogyny, they will see more similar content. If they interact with that content, then they see more, and so on. This continues until all they see is content around misogyny.

When an algorithm creates an echo chamber, the user only sees content that supports their existing view. As such, it’s really difficult for them to hear other perspectives and widen their worldview. This means that, when challenged, they can become more defensive and are more likely to spread hate.
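
This feedback loop can be sketched in a few lines of Python. The toy model below, with made-up topics and weights, is an assumption for illustration rather than a real recommender, but it shows how repeated engagement narrows what a user sees:

```python
import random

# Toy echo-chamber model (illustrative only, not a real recommender).
# Each interaction raises a topic's weight, so similar content is shown
# more often, which invites more interaction, and so on.

topics = ["sport", "music", "gaming", "misogyny"]
interest = {topic: 1.0 for topic in topics}  # start with no preference

def recommend():
    # Pick a topic with probability proportional to current interest.
    return random.choices(topics, weights=[interest[t] for t in topics])[0]

random.seed(42)
for _ in range(300):
    topic = recommend()
    interest[topic] += 1.0  # engaging makes the algorithm show more of it

# After many rounds, one topic tends to dominate the simulated feed.
total = sum(interest.values())
print({t: f"{interest[t] / total:.0%}" for t in topics})
```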

How design impacts the way misinformation spreads

In a Risky-by-Design case study from the 5Rights Foundation, the following design features also contribute to misinformation spreading online.

Recommendations favour popular creators. Content creators who have a large following and spread misinformation have a wider reach. This is largely due to recommendation algorithms that favour already-popular accounts.

Many platforms are overrun with bots. Bots and fake profiles (or sock puppet accounts) may exist solely to spread misinformation. They can also manipulate information or make the source of disinformation harder to trace. It’s also often difficult for users to successfully report fake or hacked accounts.

Algorithms can create echo chambers or “a narrowing cycle of similar posts to read, videos to watch or groups to join.” Additionally, some content creators who spread misinformation also post about less harmful interests. So, the algorithm might recommend this harmless content to users such as children. Children then watch these new creators and eventually see the misinformation.

For example, self-described misogynist Andrew Tate also shared content relating to finance and flashy cars. This content might appeal to a group of people who don’t agree with misogyny. For instance, our research shows that boys are more likely than girls to see content from Andrew Tate on social media. However, both girls and boys are similarly likely to see content about Andrew Tate on social media.

Not all content labels are clear. Subtle content labels, such as those identifying something as an ad or a joke, are often easy to miss. More obvious labels could help children accurately navigate potential misinformation online.

Autoplay makes accidental viewing easy. When a video or audio track that a child chooses finishes, many apps automatically start playing a new one by design. As such, children might accidentally engage with misinformation that then feeds into the algorithm. Most platforms allow you to turn off this feature.

Apps that hide content can support misinformation. Content that gets shared and then quickly removed is harder to fact-check. It spreads misinformation because it doesn’t give viewers the chance to check if it’s true. Children might engage with this type of content on apps like Snapchat where disappearing messages are the norm.

Algorithms cannot assess trending content. Algorithms can identify which hashtags or topics are most popular and share them with more users. However, they can’t tell whether a trending topic contains misinformation. So, it’s up to the user to make this judgement, which many children might struggle with.
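
As a rough sketch (again with invented example posts, not any platform’s actual trending logic), trend detection can be as simple as counting hashtags, and counting says nothing about truth:

```python
from collections import Counter

# Illustrative sketch: trend detection as simple hashtag counting.
# The example posts are invented; a false claim can trend like anything else.
recent_posts = [
    "#miraclecure really works!",      # false claim
    "Try #miraclecure today",          # false claim
    "#halftermfun ideas for kids",
    "#miraclecure changed my life",    # false claim
]

hashtags = Counter(
    word for post in recent_posts
    for word in post.split() if word.startswith("#")
)

# The most frequent hashtag 'trends', true or not; only a person can
# judge the content behind it.
print(hashtags.most_common(1))  # [('#miraclecure', 3)]
```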

Misinformation can easily reach many. When sharing content directly, many apps and platforms suggest a ready-made list of people. This makes it easy to share misinformation with a large group of people at once.

What impact can fake news have on young people?

Nearly all children are now online, but many of them do not yet have the skills to assess information online.

Half of the children surveyed by the National Literacy Trust admitted to worrying about fake news. Additionally, teachers in the same survey noted an increase in issues around anxiety and self-esteem, and a general skewing of worldviews.

Misinformation can impact children in a number of ways. These could include:

  • Scams: falling for scams could lead to data breaches, financial loss, impacts on credit score and more.
  • Harmful belief systems: if children watch content that spreads hate, this can become a part of their worldview. This could lead to mistreatment of people different from them or even lead to radicalisation and extremism.
  • Dangerous challenges or hacks: some videos online might promote dangerous challenges or ‘life hacks’ that can cause serious harm. These hacks are common in videos from content farms.
  • Confusion and distrust: if a child becomes a victim of dis- or misinformation, they might struggle to trust new information. This can lead to distrust, confusion and possibly anxiety, depending on the extent of the misinformation.

Research into misinformation and fake news

Below are some figures showing how misinformation can affect children and young people.

79%

According to Ofcom, 79% of 12-15-year-olds feel that news they hear from family is ‘always’ or ‘mostly’ true.

28%

28% of children aged 12-15 use TikTok as a news source (Ofcom).

60%

6 in 10 parents worry about their child ‘being scammed/defrauded/lied to/impersonated’ by someone they didn’t know.

40%

Around 4 in 10 children aged 9-16 said they experienced the feeling of ‘being unsure about whether what I see is true’. This was the second most common experience after ‘spending too much time online’.

68%

NewsWise from the National Literacy Trust helped children develop their media literacy skills. Over the course of the programme, the proportion of children able to accurately assess news as true or false increased from 49.2% to 68%. This demonstrates the importance of teaching media literacy.

Featured misinformation articles

Help children become critical thinkers and avoid harm from misinformation with these resources.