Chinese citizens enjoy a level of digital convenience in their daily lives that is unmatched by any other nation. Many exalt these conditions as signs of China’s burgeoning power and modernization, bringing long-sought prestige to a tech sector that was for decades typecast as ‘copycats’. Now, thanks to pioneering work in perception AI and advances in computer vision, facial and speech recognition are becoming increasingly ubiquitous in China. While many foreign nations balk at the widespread use of computer vision technology, citing invasions of biometric privacy, the narrative in China is generally different. Privacy concerns are often set aside to accommodate even greater levels of convenience, and even more valuable data collection. But is this narrative really that pervasive? Do Chinese people have any reservations about implementing computer vision tech in their schools, stores, and public life?
Recently, a group of 27 Chinese technology companies began developing national industry standards for the use of facial recognition. Leading the group’s effort is SenseTime, a Hong Kong-based AI firm specializing in computer vision, which announced on WeChat: “Nowadays, face scanning has become the daily ‘norm’ for the general public to experience innovation and enjoy convenience. However, the wide application of facial recognition in different fields has also led to a series of problems such as identity theft and fraud resulting from a lack of regulation on technical accuracy, as well as security risks stemming from a lack of regulation on facial data collection, storage, and usage.”
The plan to develop this framework was at the behest of the Chinese National Information Security Standardization Technical Committee, which is responsible for policy decisions around information technology.
Meanwhile, the issue has not gone unaddressed by Chinese netizens. On microblogging platform Weibo, many users have decried the lack of privacy in the internet age. One netizen wrote, “There is no privacy at all in the Internet era. Mobile phone data is basically circulated between various apps. To some extent, people have become prey.” Many posts echo this sentiment: that big data has invaded people’s privacy, especially where internet behavior is concerned. A short film circulating on Weibo depicts a dystopian world in which strangers can access the details of one’s personal life. However, most of these concerns revolve around internet data, along with payment and purchase history.
The collection of biometric data raises another level of concern entirely. Recently, Professor Guo Bing from Zhejiang Sci-Tech University, a season-ticket holder at the Hangzhou Safari Park, filed a lawsuit against the park for requiring facial recognition for entrance. Previously, the park had accepted fingerprints, but it upgraded the system to mandate facial scans. Following Professor Guo’s lawsuit, the park offered a modest compromise, giving visitors the choice between submitting a fingerprint or a face scan.
Professor Guo is not alone in his concerns. In fact, Weibo users note that “many places are now forcibly collecting personal information”. One user’s post describing their fears of “future risks” related to the collection of biometric data has received 12,000 likes.
Just last month, the Chinese government released new regulation targeting the proliferation of deepfakes: realistic synthetic videos in which a person can be made to say or do whatever the creator wants. The technology is particularly dangerous when used to falsify the comments or actions of political leaders. The new law takes effect in 2020 and requires any deepfake material to carry a clear disclosure that the content was created using AI or VR technology.
According to the South China Morning Post, the Cybersecurity Administration of China said, “With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people’s interests, creating political risks and bringing a negative impact to national security and social stability.”
Chinese application ZAO allowed users to swap faces with celebrities, provoking concerns over facial recognition technology and biometric data privacy. In response to the outcry, Alipay, the mobile payment platform that pioneered facepay, assured the public that its technology uses 3D scanning and is specifically designed to detect simulated faces in order to enhance security. This response shows that the Chinese tech sector is at least willing to fight tech with tech to placate concerned users.
See more: Debates Over ZAO and FaceApp Usher in the Era of Surveillance Capitalism
Earlier in 2019, Alipay launched its first “Biometrics User Privacy and Information Security Protection Initiative”. Part of the initiative stipulates that biometric data collection should be limited to the minimum required, to prevent misuse. Following the Chinese government’s two sessions meeting earlier this year, the government released new regulations protecting private data: one was a consultation draft of the “Data Security Protection Law”, while the other was the “App Collection Personal Information Specification”. However, the legal framework around the collection, security, and use of biometric data in China is still very underdeveloped, with scarce legally binding precedent.
While China uses its robust computer vision capabilities to make citizens’ daily lives more convenient and law enforcement more consistent, concern remains among citizens that the technology carries inherent risks and that more needs to be done in terms of security and regulation. The technology’s reach is steadily expanding among China’s urban population, and before long biometric data collection will be ubiquitous in both the private and state sectors. For society to embrace what can be an incredibly powerful and useful technology, regulatory frameworks may require further development to prevent the misuse of this sensitive data.