China has issued new rules prohibiting the use of artificial intelligence (AI) and virtual reality technology by digital video and audio providers to create “fake news.”
The regulation, released on Friday by China's cyberspace authority, said online video and audio service providers and their users are not allowed to use new technologies such as deep learning and virtual reality to create, distribute or broadcast “fake news.”
“Fake news” can mean anything from an honest mistake to a parody or a deliberate misrepresentation of facts.
The rules come into effect on 1 January 2020. Failure to comply will be treated as a criminal offense, the Cyberspace Administration of China (CAC) said, without providing details of the punishments.
The rules also require that videos and audio created using AI or virtual reality technologies carry clear labels warning consumers.
They place particular emphasis on the risks of “deepfakes” — software-altered media that appears real but depicts events or statements that never occurred.
The cyberspace authority warned that deepfake technology could “disrupt the social order, violate the interests of the people, create political risks and have a negative impact on national security and social stability.”
Concerns about deepfakes have grown since the 2016 U.S. election campaign, which, according to U.S. investigations, saw widespread use of online disinformation.
Earlier this year, China’s top legislative body said it was considering outlawing deepfake technology.
The Chinese face-swapping app Zao, which lets users convincingly superimpose their own image onto characters in films and television shows, fueled a heated debate in September over the misuse of deepfake technology.