China’s cyber regulator has released draft regulations aimed at tightening oversight of artificial intelligence systems that mimic human personalities and engage users in emotional interactions.
The proposed rules, open for public comment, target AI products and services offered in China that simulate human traits such as personality, thinking patterns, and communication styles across text, images, audio, video, or other formats. The draft reflects Beijing’s push to manage the rapid expansion of consumer-facing AI by bolstering safety and ethical standards.
Under the regulations, AI service providers would be obliged to caution users against excessive use and intervene if users show signs of addiction. Firms would also need to take responsibility for safety throughout the product lifecycle by implementing systems for algorithm review, data security, and protection of personal information.
The draft emphasizes mitigating psychological risks by requiring providers to monitor users’ emotional states and levels of dependence, with intervention expected if extreme emotions or addictive behaviour are detected. Additionally, the rules set clear “red lines” for content and conduct, prohibiting AI from generating material that could threaten national security, spread rumours, or promote violence or obscenity.
The move underscores China’s broader effort to ensure that AI development aligns with its safety, ethical, and societal guidelines as AI technologies become increasingly integrated into daily life.