The Online Safety Act and AI Summit: Impacts on children’s digital lives

A young child uses a tablet with online safety icons surrounding it.

In this blog, we reflect on recent developments related to the Online Safety Act and artificial intelligence, looking ahead to what happens next for children’s digital safety.

Recent changes in online safety

As the old saying goes, you wait ages for a bus and then two come along at the same time.

Those of us working in online safety certainly know the feeling after a busy fortnight in which two key moments came to pass:

  • the Online Safety Bill received Royal Assent and thus became law;
  • the UK took centre stage by hosting the global AI Safety Summit.

What is the Online Safety Act?

What started as a ‘green paper’ (a range of ideas for new government policy) in 2017 later became a more concrete ‘white paper’ in 2019. That led to a full draft of the new proposals, the Online Safety Bill, which was produced in 2021. But this was not the end of its evolution, with many changes and revisions made as the draft law wound its way through Parliament.

The Online Safety Act is a landmark piece of legislation with the potential to transform children’s online experiences. Under the Act, platforms will have a much greater responsibility to keep children safe:

  • by identifying and anticipating risks, and putting systems and processes in place to address them; and
  • by preventing children from accessing content which is wholly inappropriate for them.

Specific harms which platforms will need to address include eating disorder content, self-harm and suicide content, pornography, and bullying.

How will the Online Safety Act impact families?

The Online Safety Act will not remove all risk from the internet, nor is it a perfect piece of legislation. For example, Internet Matters would have liked to see greater support for parents.

Nevertheless, it is the result of six years of intense scrutiny and an unprecedented degree of cross-party collaboration. For the first time, parents should expect age-appropriate services by default and rigorous age checks. It could make a huge difference – with the right implementation.

What role will Internet Matters play as the law is implemented?

Internet Matters – and the wider online safety sector – still have a role to play. The baton has now passed to Ofcom, the new online safety regulator, who have an immense amount of work to do to flesh out the detail of the new regulatory regime.

For example, Ofcom will begin by looking at how platforms should tackle online child sexual abuse material (CSAM) – a topic explored in our recent research into online misogyny and sharing of sexual images. We look forward to continuing our close engagement with Ofcom, sharing our research insights to champion the voices of children and their parents.

How Government is addressing artificial intelligence

On 1-2 November, the UK played host to the hotly anticipated global AI Safety Summit. For the past year, barely a day has gone by without AI making the headlines, whether for good or bad. The purpose of the Prime Minister’s summit was to bring nations together to reflect on these developments, and to discuss the long-term benefits and risks across many areas of life – from health to defence, business to democracy, to name just a few.

It is welcome that policymakers and industry leaders worldwide are thinking ahead to the big problems AI might pose. After all, the past two decades have shown us what happens when there isn’t enough reflection on the social impacts of new technology, with many children experiencing harm online. However, this summit is the start of a journey, not the end.

As the conversation about AI unfolds, we’d like to see two things:

  • A much greater focus on the impact of AI on the lives of children and families, not just businesses, the economy and the country as a whole.
  • Consideration of, and action on, opportunities and issues in the short term, not just thinking ahead to the long term.

Artificial intelligence in education

AI is already beginning to reshape what – and how – children learn. For example, schools have been promised personalised AI assistants to help with lesson planning, and many teachers and children have taken to using ChatGPT to support learning. Developments like these raise very real and immediate questions about fairness, equality, the curriculum and more. They have also prompted more existential debates about the purpose of education and how to equip children for an AI-driven future.

How is Internet Matters tackling AI safety?

Schools, parents and children need answers on how to approach AI now. For this reason, Internet Matters is conducting original research to find out more about how families think and feel about this technology, particularly in the context of children’s education. We will be sharing our findings in the New Year, publicly and with key decision-makers, including the Department for Education – so watch this space.

Supporting resources

Internet Matters is passionate about championing families’ interests with those making the big decisions that affect children’s safety online. But we are equally passionate about providing hands-on, practical advice, so that parents can do their bit to protect children too.

Explore the following resources for more information on artificial intelligence and keeping children safe online.
