Hello and a warm welcome to Safeguarding Soundbites. This is your podcast for all the weekly safeguarding news, plus your place for finding out what the INEQE geeks have been up to.
Over 120,000 people have signed a petition calling for an inquiry into Ofsted following the death of headteacher Ruth Perry. Ruth took her own life while waiting for the results of an Ofsted inspection. The petition calls for an inquiry into the inspection that took place at the primary school and for a review of the report itself, which refers to Ruth Perry's suicide in a way many have called callous and heartless. Three teachers' unions have asked Ofsted to put a hold on inspections this week.
New research from the University of Oxford has shown that viewing self-harm images online can trigger young people to hurt themselves. The researchers reviewed existing studies, all of which found harmful effects, including the escalation of self-harm and young people developing a self-harm identity. The connections young people made with others online often triggered the urge to self-harm. The head of child safety online policy at the NSPCC said the research shows it's vital that social media companies "get their houses in order". He went on to say that the Online Safety Bill will help protect young users from harmful content, as the bill will place responsibility on tech companies to stop children and young people seeing content that poses a risk of significant harm. If you're worried about someone in your care viewing self-harm imagery or want to learn more, search our website for 'self-harm and peer support'.
Taking a look now at this week's social media news, over on TikTok, there's been an update to their community guidelines. Although most of the changes involve slight tweaks to existing policies, there is one new addition: a section setting out rules around posting AI-generated and manipulated content. Users must now disclose when realistic-looking media has been created or changed by AI, through the use of stickers or captions saying things like 'fake', 'not real' or 'altered'. So-called synthetic media of any real private individual is now banned – in other words, no using AI to manipulate photos or videos of a private, non-famous person.
The use of AI to manipulate, or change, photos, videos and other media has become a major concern. As AI advances, we've seen incidents of it being misused to create harmful and inappropriate material, such as deepfakes being used to create sexual abuse imagery. Any move towards differentiating between real and manipulated media is positive, but as manipulation becomes harder to detect, we wonder if social media moderation can keep up with telling fact from fiction.
Keeping with the popular theme of AI, Google have announced the launch of their AI chatbot, named Bard. The competitor to ChatGPT has been rolled out this week to over 18s and uses Google's search engine to answer queries and questions from users. The popularity of chatbots has exploded over the past few months, with people using the artificial intelligence to help with everything from writing emails to solving maths problems and creating code for websites. But with reports of ChatGPT giving users age-inappropriate advice and even guidance on how to commit major crimes, our online safety experts have taken a closer look at the safeguarding risks of AI chatbots. We have also created a handy shareable, all about AI chatbots. Keep an eye out for those next week at ineqe.com and oursaferschools.co.uk.
You might have heard of Andrew Tate in the news recently. Currently in prison in Romania amid a sex trafficking case, Tate had become a 'success coach' influencer known for his controversial opinions. He has been called out for spreading misogynistic and harmful viewpoints. In spite of this, he has become a popular figure, in particular with young males, perhaps in part due to his 'self-made man' persona and his image of being 'strong'. To assist teachers and safeguarding professionals, we've created a shareable explaining who Andrew Tate is, why young people might be drawn to him and what actions you can take when concerned. You'll find that on our website and in our app.
An investigation into reports of inappropriate sex education classes at a school on the Isle of Man has found the claims were inaccurate. The original allegations claimed a sex education class taught by a drag queen contained graphic teachings about sex and gender reassignment surgery. The reports also claimed that a child was told to leave the class after they said there were only two genders. The classes were paused after the rumours spread on social media, but the investigation found no evidence to support the claims and concluded the reports were inaccurate.
Finally, the government have announced the date for the first test of the new emergency alert system. On the 23rd of April, there will be a national test of the system, which will send alerts to mobile phones and tablets in the case of emergencies such as floods, fires and extreme weather. Alerts will sound an alarm, vibrate the device and read the warning aloud, and the system will be used by the emergency services, relevant government departments and public bodies. Domestic violence charity Refuge have created a guide for those who may need to turn these warnings off, such as people who keep a secret phone for safety reasons.
That's all from me this week – join me again next time for more news and advice. In the meantime, you can follow us on social media by searching for 'INEQE Safeguarding Group'. If you've enjoyed today's episode, please consider sharing it with your colleagues, friends and family, so we all have the info we need to help keep the children and young people in our care safer online. Thanks for listening, bye!