Why Reducing AI Harm Requires More Than Tech Firms’ Empty Promises

AI enters classrooms faster than safeguards can follow
Artificial intelligence is becoming deeply embedded in education systems across Hong Kong, mainland China and other parts of the world. From AI-powered tutoring tools to automated grading systems and personalised learning platforms, the technology is reshaping how students learn and how teachers teach. Yet this rapid adoption is exposing serious risks, particularly for children and young users, that voluntary commitments from technology companies alone cannot address.
While many firms promote their products as safe and responsible, the reality inside classrooms often reveals gaps between promise and practice. AI systems are being deployed faster than oversight mechanisms can adapt, leaving educators and families to manage consequences after the fact.
The limits of self-regulation by tech companies
Most major AI developers claim they prioritise safety, fairness and child protection. These assurances are typically expressed through internal guidelines, ethical charters and content moderation policies. However, self-regulation has inherent limits, especially when commercial incentives reward rapid expansion and user engagement.
In educational settings, these limits become more visible. AI tools trained on large datasets may reproduce bias, generate misleading information or expose students to inappropriate content. When problems emerge, responsibility is often diffuse, with companies pointing to terms of service while schools lack the expertise or authority to intervene effectively.
Children face unique and underestimated risks
Young users are particularly vulnerable to AI-related harm. Unlike adults, children may struggle to distinguish between accurate information and AI-generated errors. They are also more susceptible to over-reliance on automated systems, which can undermine critical thinking and independent learning.
Privacy risks are another major concern. Educational AI platforms often collect sensitive data about students' behaviour, performance and preferences. Without strong safeguards, this data can be misused, poorly secured or retained far longer than necessary, creating long-term risks that extend beyond the classroom.
Education systems are not neutral testing grounds
One of the most troubling trends is the treatment of schools as experimental environments for emerging technologies. AI tools are often introduced with limited pilot testing, placing the burden of discovery on teachers and students rather than developers.
Educators are expected to integrate these systems while managing curriculum demands, leaving little time to evaluate potential harm. When issues arise, schools may lack clear guidance on accountability, creating uncertainty about whether responsibility lies with the institution, the vendor or regulators.
Why parents cannot shoulder the burden alone
Parents are frequently told to monitor their children's technology use, but this expectation is unrealistic in an AI-driven learning environment. Many AI tools operate invisibly in the background, embedded in school platforms and homework systems.
Without transparency about how AI systems function, parents cannot meaningfully assess risks or intervene effectively. Placing responsibility solely on families ignores the scale and complexity of modern educational technology.
Policymakers must move beyond reactive measures
Government responses to AI harm have often been reactive, addressing problems only after public controversy emerges. In the education sector, this approach leaves students exposed during critical developmental years.
Effective protection requires proactive regulation. Clear standards for child safety, data protection and transparency must be established before AI systems are widely deployed. This includes requirements for independent testing, age-appropriate design and meaningful oversight mechanisms.
Shared responsibility is the only viable path
Reducing AI harm in education demands coordinated action. Tech companies must move beyond abstract commitments and accept enforceable obligations. Policymakers need to set clear rules and invest in regulatory capacity. Schools require training and resources to evaluate AI tools critically. Parents should be included in decision-making, not treated as afterthoughts.
This shared responsibility model recognises that no single actor can manage the risks alone. AI systems operate across institutional boundaries, and accountability must do the same.
Building trust through action, not rhetoric
Trust in educational technology will not be built through promises or marketing language. It will depend on whether AI systems demonstrably protect young users while supporting genuine learning outcomes.
As AI continues to shape education, the choice is clear. Societies can rely on voluntary assurances and accept recurring harm, or they can establish robust frameworks that place children’s wellbeing at the centre of innovation. Reducing AI harm is not a technical challenge alone. It is a governance challenge, and it requires action that goes far beyond empty promises.

