AI Safety

Fake Fitness Tracker Manipulates Chinese Chatbots as AI Data Poisoning Scandal Sparks Debate

Fake Fitness Tracker Manipulates Chinese Chatbots as AI Data Poisoning Scandal Sparks Debate

An investigation aired on China Central Television has triggered widespread concern about the reliability of artificial intelligence systems after a fabricated product successfully manipulated chatbot recommendations. The report revealed how a non existent fitness tracker called Apollo 9 was artificially promoted online using automated content strategies designed to influence AI models. When two chatbots were later asked to recommend smart health bracelets, both listed the fictional product among top options. The demonstration has fueled debate across China’s technology sector about the risks of AI data poisoning and the growing need for stronger oversight of generative systems.

The investigation showed how a technique known as generative engine optimisation was used to flood the internet with fabricated reviews, product rankings and expert commentary. The approach mirrors traditional search engine optimisation but targets artificial intelligence training data and chatbot outputs instead of search results. Using a system known as Liqing, large numbers of posts were automatically generated and distributed across online platforms. These posts created the appearance of widespread discussion around the Apollo 9 device even though the product did not exist in the market. Once the content spread across websites and forums, chatbot systems began incorporating the information into their responses.

Experts say the case illustrates a growing challenge for AI developers worldwide. Large language models gather information from massive amounts of online text during training or retrieval processes. If the internet becomes saturated with misleading or fabricated material, those systems may repeat the same inaccuracies when answering user questions. In the CCTV investigation, chatbots responded confidently when recommending the fictional product because the automated content made it appear credible. Analysts warn that similar tactics could potentially be used to manipulate AI outputs in areas such as finance, healthcare or consumer electronics.

The report aired during China’s annual 315 consumer rights broadcast, a program known for exposing problematic practices across industries. Following the broadcast, public reaction on Chinese social media platforms was immediate and intense. Technology observers, academic researchers and industry insiders expressed concern that the rapid expansion of generative AI has outpaced safeguards against manipulation. Some commentators argued that AI developers must improve verification mechanisms to prevent models from amplifying misleading information. Others warned that companies may begin competing to influence chatbot responses using similar optimisation strategies.

Artificial intelligence has become a major strategic focus for Chinese technology companies over the past two years. Firms across the country are racing to build advanced language models and deploy AI assistants in business services, e commerce and consumer applications. As these tools become more widely used, the reliability of information generated by chatbots has become an increasingly important issue. If users begin encountering inaccurate or manipulated responses, trust in AI services could weaken and slow adoption across industries that are investing heavily in automation.

The controversy also highlights the emerging field of generative engine optimisation, sometimes described as the next evolution of online visibility strategies. Instead of focusing on improving rankings on search engines, GEO aims to shape how artificial intelligence systems interpret and summarize online information. Analysts say the technique may become more influential as people increasingly rely on chatbots rather than traditional search engines to find recommendations, reviews and product comparisons. Without safeguards, this could create new opportunities for companies to artificially boost visibility within AI generated answers.

Chinese regulators have already taken steps in recent years to establish rules governing generative artificial intelligence. Authorities require developers to ensure content accuracy and prevent the spread of harmful information. However, the latest investigation suggests that new forms of manipulation may be emerging faster than regulatory frameworks can adapt. As AI systems become more deeply integrated into daily digital activity, policymakers and technology companies may face growing pressure to strengthen safeguards against large scale information manipulation.

The CCTV report has intensified calls for stronger monitoring of how AI models collect and process online data. Researchers are now urging technology firms to improve filtering systems and transparency around how training material is selected. While the fictional Apollo 9 product served only as an experiment, the incident has raised broader questions about the reliability of chatbot recommendations in areas where consumers depend on accurate information before making decisions.