Tyla: Hello and welcome to another episode of Safeguarding Soundbites! My name’s Tyla.
Natalie: And I’m Natalie. And this is the podcast that brings you all the week’s safeguarding stories, news and alerts.
Tyla: It is and –
Natalie: – sorry to interrupt you Tyla but is that…is that the sound of sleighbells?!
Tyla: It might be – because it’s officially December!
Natalie: Woohoo! I’m going straight home to get my Christmas tree up.
Tyla: Very keen, Natalie. But shall we finish the podcast first?
Natalie: I suppose we should! Especially as this week, we’ll be talking about some really interesting topics, like the latest Meta revelations, what new research is saying about internet use and mental health, and the very worrying way young people are abusing AI.
Tyla: Okay, let’s start off with social media news and this week, as you mentioned, Natalie, it’s all about the latest shocking accusations against Meta. A new report released as part of an ongoing court case in America has unveiled previously redacted company documents – and the content is pretty damning.
Natalie: Yep, the company documents are from a 2020 Meta presentation and show that the company designed its products to purposely exploit young people’s impulsive behaviour, susceptibility to peer pressure and the underestimation of risks. Other documents also indicate that the social media giant was well aware that their platforms Facebook and Instagram are popular with children under 13 – despite this age group not being allowed to use the service due to age restrictions.
Tyla: In fact, one Facebook safety executive allegedly implied in an email that cracking down on younger users could hurt the business.
Natalie: It’s shocking stuff, isn’t it Tyla?
Tyla: It is. And of course, this is all part of an ongoing court case – we haven’t seen or verified the documents ourselves, and Meta has said in a statement that the complaint “misrepresents its work to make the online experience safe for teens”…but this doesn’t look good at all for Meta.
Natalie: And this is just the start of the court case too, so there’s potential for more to come out. The case has been brought by…I think it’s 33? Yeah, 33 states across America against Meta for collecting the data of under-13s without parental consent and contributing to the youth mental health crisis by knowingly and deliberately designing features that are addictive.
Tyla: We’ll keep an eye on this legal case, but I think it’s a very good example of why we can’t rely solely on social media platforms, or any online platforms, to be responsible for safeguarding our children and young people. And while we don’t think every online platform is some big-bad-wolf coming to get our children, obviously, we need to be accountable ourselves too – is my child spending too much time on this app? Is this young person in my care more likely to give in to peer pressure and participate in risky online behaviour? What apps is my 10-year-old using? Which online videos are they engaging with?
Natalie: Absolutely. What’s that saying? ‘If the product is free, you are the product?!’ I think that’s well worth remembering sometimes in relation to social media platforms. It’s free to use but what’s the real cost? Our data?! Our time because we’re hooked by design which is more time to be advertised to?! All of this applies to young users also. Anyway, I’m getting deep into it now! Let’s move on…
Tyla: Well, this is actually a great story to move onto next because new research has shown that Gen Z are turning away from social media, saying that shunning it has helped their mental health. This is according to research from phone company HMD Global, who found that Gen Z-ers – those born between 1996 and 2010 – are the most likely to say social media impacts their wellbeing. They also found a quarter of respondents felt anxious due to the noise of constant notifications.
Natalie: That’s interesting because I think there’s this assumption or stereotype that Gen Z-ers are the most glued to social media.
Tyla: So true! It is interesting that young people are thinking about how social media usage might impact their mental health. And it’s not necessarily about removing themselves entirely from social media but about recognising that over-use can affect how we feel, like knowing that constant notifications can create anxiety.
Natalie: There was actually another study out this week that I wanted to talk about, and it’s related! It’s from the Oxford Internet Institute and their research is saying that there is no “smoking gun” to link the internet with psychological harm.
Tyla: Right, okay.
Natalie: So this was a study based on data from two million people aged 15 to 89 in 168 countries.
Tyla: Wow, that’s a big study.
Natalie: Yes, it’s reportedly one of the largest ever studies on mental health and internet use. And they’ve found that in the last two decades of increasing online connectivity, there have only been very minor shifts in global mental health.
Tyla: That’s really interesting.
Natalie: However, it’s worth noting a couple of things. Firstly, the researcher, or professor, has engaged in “unpaid consultations” with Meta. Secondly, he acknowledged that research on the topic is contested and hampered by methodological shortcomings. And finally, although Meta can trace responses to mental health surveys against users’ behaviour on the platform, including the content people are engaging with, this hard data is rarely made available to researchers…
Tyla: Like how much time people spend on Meta, for example? Okay, thanks, Natalie.
Tyla: Moving on now to the frankly disturbing reports of children using AI technology to create indecent imagery of other children. The UK Safer Internet Centre has warned they have received reports from schools that pupils are using artificial intelligence (AI) image generators to make what legally amounts to child abuse material. They say that while only a small number of reports have been made, action must be taken before the problem grows.
Natalie: Really shocking. And the thing is, as you said, legally these images are defined as child sexual abuse material, but the young people creating them might not even be aware of that.
Tyla: Which is another layer to the problem – we need to work together to ensure that firstly, young people understand that using image generators to create indecent images is absolutely not okay. But also, that it’s illegal and there may be criminal consequences for them.
Natalie: Sharing deepfake image-based abuse is now illegal under the Online Safety Act 2023. In America and Australia, we have seen reports of these manipulated explicit images of young people being shared online. Schools should support victims who have had their images manipulated in this way.
Tyla: Absolutely. And I know the UK Safer Internet Centre has called upon teachers and parents to start working together on an approach, so we really hope to see some good, collaborative action beginning there.
Natalie: Our next story comes from Scotland, where a mum has spoken out, pleading with people to stop sharing online videos of violence in schools. Vicky Donald, whose daughter’s attack was filmed and uploaded to social media, has spoken of the anguish victims feel when video of the incident is shared on social media. The disturbing trend sees victims being violently attacked whilst filmed, with the video later being uploaded online. Mrs Donald said that having the video reappear on social media left her daughter in a dark place.
Tyla: We’ve seen these types of incidents a few times now in the news. As this mum has said, it must be really hard… even though the bruises might have long since healed, you’re having to relive it every time it pops up online.
Natalie: Exactly. The Scottish government seems to be taking action, they’re about to have their third summit focusing on tackling violence in schools so we hope that we can see some positive steps being made in the future.
Tyla: And finally, our safeguarding success story of the week! This week, Northern Ireland joined England, Scotland and Wales in making upskirting and cyber-flashing illegal. Upskirting is taking photographs of, or observing, underneath a person’s clothes without their consent. And cyber-flashing is sending explicit photographs or videos online or through technology without permission from the recipient. Convictions can result in up to two years in prison and up to 10 years on the Sex Offenders Register. Legislative updates include offences relating to the upload of intimate images or videos online, or threatening to share private sexual images without consent. The legislation also includes four new offences against those who masquerade as a child in order to groom. Upskirting has been illegal in Scotland since 2010 and in England and Wales since 2019; cyber-flashing has been illegal in Scotland since 2010 and in England and Wales since 2022.
Natalie: Great news. [pause] I think that’s all from us then for this week, Tyla?
Tyla: I think so – and don’t you have a Christmas tree to get on with?!
Natalie: I sure do! Time to battle the tangle of fairy lights…
Tyla: Good luck! For our listeners, remember you can find us on social media by searching for INEQE Safeguarding Group.
Natalie: We’ll be back next week with more safeguarding news and updates. Until then…