Natalie: Hello and welcome to Safeguarding Soundbites, with me Natalie.
Colin: And me, Colin. If you haven’t tuned in before, this is the podcast that brings you all the latest safeguarding news, advice and updates.
Natalie: But today’s episode is an extra special edition as it’s our bumper summer special.
Colin: That’s right, Natalie. We’re going to be catching up on the top stories from the last month, including our regular social media updates, and the latest news about UNESCO’s suggested ban on smartphones, AI-generated child sexual abuse imagery, and of course, the Online Safety Bill.
Natalie: Let’s start off with social media where there’s been a major change over on Twitter…
Colin: [ahem], do you mean over on….X?
Natalie: Oh gosh, this is going to be difficult! But yes, Elon Musk has announced that the social media platform formerly known as Twitter is going to be called X from now on. As of recording, the logo has been changed to an X and there’s no more Larry the bird!
Colin: I’m sorry, Larry the who?
Natalie: The bird! Did you not know the Twitter bird logo was called Larry?
Colin: I didn’t! You learn something new every day.
Natalie: Every day’s a school day!
Colin: It is indeed. So it’s now called X, is that confirmed?
Natalie: It seems to be! We’ll keep you listeners up to date with any more changes in the future. And I’ve also got a quick update for you about Threads.
Colin: Threads is Meta’s competitor version of Twitter – sorry, X – yes?
Natalie: Yes. Launched last month, in July, it saw instant success with over 100 million users signing up in the first few days. However, it looks like that success may have been short-lived, as it’s been reported that the platform’s daily active user numbers have dropped by around 70% since launch.
Colin: Oof. Do you think it’s…hanging by a thread?!
Natalie: I’m going to ignore that pun, Colin! I think it’s too early to tell and, as X keeps making changes, we don’t know how that might impact people’s decisions to hop onto an alternative. We do know that Meta plans to add Threads to the fediverse, which is going to be a grouping of several social media apps, including Mastodon.
Colin: Natalie, before we go any further – for those who are super confused right now…we have X, we have Threads, now we have the fediverse. Can you just explain in simple terms what the fediverse is?
Natalie: Basically, the fediverse is made up of thousands of different social media servers that can communicate with each other. Rather than your standard social media platforms like Instagram and Twitter, right now it’s made up of less mainstream platforms like Mastodon, PeerTube and Lemmy, and it also includes platforms for blogging, podcasting and so on. The important part is that users can all communicate with each other, as if they were all on a single social media network.
The concern there is that a recent study found a high volume of child sexual abuse material on Mastodon – 112 instances of known child sexual abuse material across 325,000 posts.
Colin: Wow. And Mastodon is a decentralised social media platform – which means it’s not just one server, one place, hosting everything; people can set up their own servers. So essentially there’s less oversight on what’s being hosted and posted there.
Natalie: Exactly. And a server might only have one moderator – one person to check what’s being posted there. So if Threads joins this fediverse, it means people can cross-communicate between apps. People on Threads can chat to people on Mastodon. Which, from a safeguarding point of view, is worrying. But Threads isn’t currently part of the fediverse, it’s just a plan. So another ‘watch this space!’
Colin: And watch it we shall! Or listen to it anyway, here on Safeguarding Soundbites!
Natalie: Our first story today is a controversial one. A United Nations report calling for a worldwide ban on smartphones in classrooms has sparked quite the debate. According to UNESCO, this move is aimed at improving learning outcomes and protecting children from online bullying. Interestingly, around one in six countries have already taken steps to ban smartphones in schools, with the Netherlands being the latest addition, planning to enforce the ban from January 2024 and covering not only phones but tablets and smartwatches too.
Colin: Oh, I can just imagine children and young people’s reactions to that news… But closer to home, parents in Greystones in Ireland are joining forces to ban smartphones for primary school children. The concern there is all about the influence of social media and unrestricted internet access on young minds.
Natalie: Rachel Harper, the headteacher of St. Patrick’s Primary School who led the campaign, has expressed her worries about kids being exposed to inappropriate content and facing peer pressure to own a smartphone.
Colin: Yes, and this was further highlighted in UNESCO’s report, which claims that excessive mobile phone use is linked to poorer educational performance for children and also negatively affects their ‘emotional stability’.
Natalie: That’s right. It’s quite the hot topic, isn’t it? On one hand, there’s the push for a ban to minimise distractions and potential harm, but on the other hand, we can’t ignore the fact that technology is an integral part of children’s lives outside the classroom. We need to equip them with the right skills to use their smartphones responsibly.
Colin: Indeed, Natalie. It’s also essential to explore alternative ways to address online bullying. Removing smartphones from the equation may help in the school day, but it doesn’t address the root cause of bullying or the fact that it can happen on various devices and platforms.
Natalie: Absolutely, our motto is ‘educate, empower and protect’ for a reason. Educating children about online safety including reporting and addressing bullying is what empowers them to have more positive experiences online and ultimately, that’s what protects them from harm.
Colin: You’re spot on, Natalie. It’s a multi-faceted challenge, and there’s no one-size-fits-all solution. It’s all about finding a balance and making sure we educate and empower children and young people to be aware of the risks, make safer choices and harness the benefits that the digital world can bring.
Natalie: Exactly! And it’s interesting that the call to ban smartphones came with a strong message about putting a “human-centered vision” of education first. Technology, including AI, has immense potential, but it should never overshadow the importance of human interaction in learning.
Colin: AI always finds its way into the conversation, doesn’t it? As we continue to explore this issue and gather more insights, it’ll be fascinating to see how schools and policymakers navigate the complexities.
Natalie: Absolutely, Colin. And we want to hear from our listeners too! What are your thoughts on the smartphone ban? Reach out to us through our social media channels or drop us an email. You can find all our contact information on ineqe.com.
Colin: That’s right! Your voices matter, and we’d love to include your perspectives in our future episodes. So don’t be shy, share your thoughts with us!
Colin: Moving on now to a story that highlights the darker side of artificial intelligence…a BBC investigation has found that AI is being used to create and sell child sexual abuse material. By using AI software designed to generate images, predators are able to create realistic materials. The Internet Watch Foundation has also confirmed their own investigations found evidence of AI-generated child sexual abuse imagery. They have called on the Prime Minister, as well as AI companies, to do more to prevent the abuse of AI tools and protect users from the spread of this type of content.
Natalie: Wow – and to be clear, Colin…even though it’s AI-generated, this type of content is still illegal, there’s no ‘grey area’ around this because it’s AI?
Colin: Yes, AI-generated images of child sexual abuse are illegal in the UK. And the Internet Watch Foundation also very rightly pointed out that this isn’t a victimless crime – having this type of imagery out there can a) normalise it and b) make it harder to spot when real children are in danger.
Natalie: It’s frightening, actually. Especially as AI imagery becomes more realistic. So, what’s the solution?
Colin: Well, firstly, there need to be changes in how AI models work so that, quite simply, this type of content cannot be created. There also needs to be enough development in technology so that AI-generated content can be identified easily. And, of course, the platforms and websites that predators use to share and sell this content must continuously improve their systems to identify it, remove it quickly and address the people accessing it. Secondly, the growth in CSAM is also directly linked to the failure of the government to ensure there are meaningful deterrents. We need to remember this is about people’s behaviour, not the technology – it’s people telling the technology what to create.
Natalie: Thank you, Colin, for those important points and reminders… Now, that actually brings me on to our next story, which is an update on the Online Safety Bill. A loophole, apparently discovered by the news outlet The Telegraph, has now been closed. The loophole meant that, whilst the senior management of tech companies could be held criminally responsible for persistent failures to protect children from online harms like content on eating disorders and self-harm, it didn’t cover child sexual abuse material.
Colin: That’s a big oversight. So they’ve closed that loophole now?
Natalie: Yes, exactly. The government have now added a new amendment to the bill so that tech executives will be held to account if they fail to tackle child sexual exploitation and abuse content.
Colin: And was there another amendment? Something to do with algorithms?
Natalie: Yes. So basically the Online Safety Bill, for anyone listening who’s not sure, is designed to regulate how tech companies and services like social media platforms safeguard their users – it puts laws in place that mean they have to protect users from harmful and illegal content. That has previously only applied to content like videos, images, messages and so on, but this other new amendment will now hold those same companies to account for algorithms that push users towards harmful content too.
Colin: So algorithms work by suggesting content to a user based on what they’ve liked or viewed previously, or what someone with a similar user profile has liked before – someone the same age, for example.
Natalie: Yes, and so the issue has been that once a user has, for example, looked at harmful eating disorder content on their social media account, that algorithm will continue suggesting more content about eating disorders to them.
Colin: And if it’s something a young person is struggling with – an eating disorder, for example – every time they go on social media, that content is there again. And it might feel like there’s no escape from seeing it.
Natalie: Exactly. So we will see what happens with this new amendment and how it all works out.
And one last Online Safety Bill update – in England and Wales, sharing deepfake intimate imagery is now going to be illegal. Deepfakes are images, sounds, videos or similar that have been manipulated by a computer to superimpose someone’s face, body or voice onto something else. Sharing this type of content that has been manipulated without the consent of the person or persons in them will now be against the law.
The same amendment will also remove the existing requirement to prove that perpetrators shared sexual images or films in order to cause distress, which will make it easier to charge and convict them.
Colin: It sounds like some positive steps are being made.
Natalie: It does! And speaking of good news, Colin…
Colin: Yes! It’s time for our safeguarding success story of the week!
Natalie: Take it away!
Colin: So this episode’s safeguarding success story is that the UK games industry has announced plans to restrict access to loot boxes in games for children. Loot boxes are basically those in-game purchases that assign the contents at random. So, for example, you buy a loot box and you might get something really valuable or a really rare item, like limited edition clothes for your game character or loads of experience points. Or you might get something that’s common or just not that exciting.
Natalie: So it’s a bit like gambling?
Colin: That’s the concern – and you’re certainly introducing gambling as a concept to children. So these new plans, which are a set of guidelines, are a step towards protecting children and young people from that.
Natalie: Do we know what any of the new guidelines are?
Colin: Yes, so there are eleven of them, but I’ll just outline the principles – they include preventing anyone under 18 from getting access to loot boxes without the permission of a parent or carer. There are also plans to launch a public information campaign to raise awareness.
Natalie: To quote you a few minutes ago, sounds like positive steps!
Colin: Indeed! Well, that is everything from us today. We won’t be back again next week…
Natalie: Boo!
Colin: But we will be back again next month…
Natalie: Hooray!
Colin: And, of course, you can keep up to date on everything we’re doing and any safeguarding updates on our Safeguarding Apps, like the Safer Schools App and by signing up to the safeguarding hub on our website ineqe.com.
Natalie: And you can also follow us on social media by searching for Safer Schools or Ineqe Safeguarding Group. Until next time, goodbye and…
Both: Stay safe!