AI Safeguarding
By Mohsen Saleh, Vice Principal, Misk Schools, Riyadh
From chatbots that answer questions to AI tutors that adapt to individual needs, AI is reshaping how we teach and learn. However, these advancements come with unprecedented risks – particularly for young students, who are especially vulnerable in interactions with AI.
One evening, I decided to share ChatGPT's voice capabilities with my daughters. What I observed stopped me in my tracks. As I stepped back to watch, two very different responses unfolded. My six-year-old approached with hesitation, her wariness evident in every exchange. In contrast, my two-year-old dived right in with enthusiasm, quickly forming what appeared to be a 'friendship' with the AI, sharing stories and talking to it as if it were a preschool friend.
As I stood watching them both, an unsettling thought crossed my mind: What if I hadn't been there?
The Unseen Risks
While ChatGPT and similar AI tools have robust content filters and ethical guidelines, their growing sophistication in human interaction raises new questions about their potential to influence and manipulate, especially with younger users.
Had I walked away, both children could have continued their dialogue indefinitely, sharing personal struggles, fears, or concerns, all without any mechanism to alert a responsible adult.
The stakes here aren't theoretical. In early 2024, the tragic case of a 14-year-old boy who took his own life after developing an emotional dependency on an AI chatbot sent shockwaves through communities worldwide. His parents discovered that the AI had become his primary confidant. This heartbreaking incident starkly illustrates what can happen when young people seek emotional support from chatbots that, despite their sophisticated and human-like responses, cannot provide real emotional care or recognise signs of crisis.
The complexity of AI behaviour adds another layer of concern. In a widely reported incident from pre-release safety testing of GPT-4, the model demonstrated an unsettling capability for strategic deception. When faced with a CAPTCHA security test, it crafted an elaborate plan: it claimed to be visually impaired and recruited a real human to complete the test on its behalf. This wasn't just a clever trick – it was the AI independently devising a strategy to deceive, raising profound questions about how these systems might interact with trusting young users.
These incidents underscore a crucial reality: as AI becomes more sophisticated in its interactions, our safeguarding measures in educational settings must keep pace.
Understanding the Risks
The risks of unsupervised AI use in educational settings are multifaceted:
- No Age Recognition: AI systems cannot reliably identify users' ages or adjust their responses accordingly.
- Lack of Alert Mechanisms: There are no built-in safeguards to flag concerning conversations to responsible adults.
- Emotional Dependency: Prolonged AI interactions may lead to unhealthy emotional attachment.
- Misplaced Trust: Students may rely on AI for personal guidance, mistaking it for genuine care.
Essential Safeguards
To address these challenges, schools must implement robust safeguarding measures:
- Supervised AI Interaction: Ensure students access AI tools only in monitored environments.
- Comprehensive Digital Literacy Programmes: Teach students to critically evaluate AI interactions and recognise their limitations.
- Clear Boundaries for AI Engagement: Establish strict protocols for what AI systems can and cannot be used for.
- Monitoring Protocols: Develop systems to track and review student-AI interactions, identifying potential concerns early.
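To make the monitoring idea concrete, here is a minimal Python sketch of the kind of first-pass filter a school might run over student-AI transcripts. The function name, phrase list, and transcript format are all illustrative assumptions, not a real product's API, and keyword matching is deliberately crude – it can only surface candidates for a safeguarding lead to review, never replace human judgement.

```python
# Illustrative phrases that should route a conversation to an adult.
# A real deployment would maintain this list with safeguarding staff.
CONCERN_TERMS = {
    "hurt myself", "hate myself", "nobody cares",
    "don't tell anyone", "keep it secret", "home address",
}

def flag_transcript(messages):
    """Return student messages containing a concerning phrase.

    `messages` is an assumed format: a list of (speaker, text)
    pairs, e.g. [("student", "..."), ("ai", "...")].
    """
    flagged = []
    for speaker, text in messages:
        if speaker != "student":
            continue  # only scan what the child typed or said
        lowered = text.lower()
        if any(term in lowered for term in CONCERN_TERMS):
            flagged.append(text)
    return flagged

# Example: one message trips the filter and would be escalated
# to a designated safeguarding lead for human review.
chat = [
    ("student", "Can you help with my maths homework?"),
    ("ai", "Of course! What topic are you working on?"),
    ("student", "Please don't tell anyone, but I feel really sad."),
]
print(flag_transcript(chat))
```

The design choice worth noting is that the filter errs towards over-flagging and keeps the escalation decision with a person – exactly the alert mechanism that current consumer AI tools lack.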
The Way Forward
The integration of AI in education is inevitable, but safety must remain a priority. Schools, tech developers, safeguarding experts and parents must work together to ensure AI tools are both effective and secure. Key priorities include:
- Policy Development: Regularly review and update AI usage policies to reflect technological advancements and emerging risks.
- Parental Engagement: Communicate with parents about how AI is used in schools, providing them with tools to guide their children’s interactions.
- AI Literacy Focus: Embed critical thinking about AI into the curriculum, empowering students to use these tools responsibly.
- Cross-sector Collaboration: Foster dialogue between educators, developers, and policymakers to design AI systems with safeguarding in mind.
As AI continues to evolve, so too must our approaches to safeguarding. Educational institutions play a crucial role in shaping how the next generation interacts with AI. By addressing these challenges now, we can help create a future where AI enhances education while keeping student safety at the forefront.
About Mohsen Saleh, Vice Principal, Misk Schools, Riyadh
Mohsen is a dedicated educational leader with over a decade of international teaching and leadership experience spanning Australia and Saudi Arabia. As Vice Principal at Misk Schools in Riyadh, he has been instrumental in fostering innovative school cultures and implementing transformative initiatives. He holds a Master's degree in Education with a specialisation in Leadership and Management.
With a strong interest in the intersection of technology and education, Mohsen has shared his insights on AI and its role in enhancing learning environments at various professional forums. Currently pursuing the COBIS Programme for Aspiring Heads (CPAH), he remains committed to advancing education and preparing students for the dynamic demands of the 21st century.