No one can deny that AI is a double-edged sword. It has brought tremendous gains in productivity and greater convenience in entertainment for human society, while also potentially increasing the risks of privacy leaks and information security breaches. Even OpenAI, which ushered in the AI era, has recently been embroiled in public controversy over safety issues. In the AI era, digital security will concern each and every one of us...
01 AI Telecommunication Fraud
In July 2023, a woman in South Korea was defrauded of 70 million won (approximately 371,000 RMB) by a social media account impersonating Tesla CEO Elon Musk. The fraudster used fake photos and AI-generated voice audio to gain the victim's trust.
In early 2024, Hong Kong police disclosed the largest AI face-swapping fraud case anywhere in the world to date. An employee of a multinational company joined a video conference apparently initiated by the Chief Financial Officer of the UK headquarters, and subsequently made 15 transfers to 5 local bank accounts as requested, with losses totaling 200 million Hong Kong dollars (approximately 185.5 million RMB). In reality, he was the only "real person" in the meeting; the others were all fraudsters wearing AI face-swapped appearances.
On April 20, 2023, a man surnamed Guo in Baotou received a WeChat video call from a "friend" and was defrauded of 4.3 million yuan through AI face-swapping. The police and banks in Fuzhou and Baotou subsequently activated the payment-stop mechanism and successfully intercepted 3.3684 million yuan, but 931,600 yuan had already been transferred away.
On April 23, the Beijing Internet Court handed down a first-instance judgment in the country's first "AI voice infringement case."
On May 6, the Yujiang District Court in Yingtan City, Jiangxi, publicly pronounced judgment in the country's first "AI plug-in" case.
On May 22, a man surnamed He from Anqing was deceived by a 9-second "acquaintance" video call; fortunately, the police arrested three fraud suspects overnight and recovered the entire defrauded amount of 1.32 million yuan.
New types of fraud using AI voice synthesis or AI face-swapping are also becoming more frequent in developed Western countries. In the past year, over 56 million American adults reported having encountered telecommunication fraud, with losses amounting to 25.4 billion US dollars, and many of these cases involved AI technology. Some international anti-fraud experts have pointed out that from 2022 to 2023, the number of AI fraud cases in the United States increased by 50% to 75% year-on-year, with AI voice scams targeting the elderly a particularly hard-hit area.

In May of this year, media in Australia and North America disclosed that fraudsters have begun using large language models or AI chatbots to replace the earlier, labor-intensive "script + human operator" model, sending messages to victims in different languages. On the one hand, generative AI makes the dialogue more convincing and makes it easier to win victims' trust; on the other hand, it reduces the effort fraudsters spend engaging multiple victims, further expanding their reach. On shopping platforms, one can already find numerous so-called "AI voice cloning" tools.
A survey report from the security technology company McAfee previously revealed that among over 7,000 global respondents, 70% were uncertain about their ability to distinguish AI-generated voices, and approximately 10% had experienced voice scams involving AI impersonation. The cybersecurity firm Sophos also noted that generative AI can combine text, images, videos, and translations to create non-repetitive "customized content," which could potentially be used to craft scam scripts and more convincing "phishing" emails.
Among AI security risks, the new types of telecom fraud enabled by AI are the byproduct of the AI transformation that we ordinary people are most likely to encounter. Take the recently released GPT-4o, which is freely available to all users, as an example. This new large model, aimed at a future paradigm of human-computer interaction, can understand three modalities: text, voice, and images. It can reason over audio, visual, and textual inputs in real time and handle 50 different languages. It can even read human emotions and respond to audio inputs in as little as 232 milliseconds, comparable to human reaction times. However, this also means that as the technology advances rapidly, carrying out scams becomes increasingly simple and cheap whenever generative AI is misused.
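To illustrate how low the technical barrier has become, the following is a minimal sketch of a multimodal request, assuming the official OpenAI Python SDK is installed and an API key is set in the environment; the prompt and image URL are purely hypothetical placeholders, not anything from a real case.

    # Minimal sketch: one multimodal (text + image) request to GPT-4o.
    # Assumes the openai Python package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this picture."},
                    # Hypothetical placeholder image URL, for illustration only.
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )

    print(response.choices[0].message.content)

A request like this takes only a few lines of code and an ordinary consumer account, which is precisely why the same convenience is so easy to misuse.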
In the digital and online era, information security has always been an issue the entire IT industry must face. As the pace of innovation accelerates, AI, while empowering various industries, also complicates information security problems such as data breaches, excessive collection, and the falsification and misuse of data. Beyond telecom fraud, infringements of personal privacy, portrait rights, and corporate reputation have also occurred frequently in recent years.
Ultimately, this stems from three major AI security risks: the first is the misuse of AI applications; the second is the security of data collection and circulation channels; and the third, which is more difficult to regulate, is the safety and reliability of the algorithm models themselves.
In late April, several news organizations, including the New York Daily News and the Chicago Tribune, sued Microsoft and OpenAI, accusing them of misusing journalists' work to train large generative AI models. Writers, lawyers, and corporate institutions have likewise filed numerous lawsuits against OpenAI over the use of user data for profiling, advertising, and other commercial purposes; and employees of Samsung Electronics once leaked confidential company data through their use of ChatGPT.
In mid-May, OpenAI co-founder and Chief Scientist Ilya Sutskever and Jan Leike announced their departures, with Leike stating that "OpenAI should be a 'safety-first AI company'... The very attempt to create machines smarter than humans is dangerous, and OpenAI bears a huge responsibility. However, over the past few years, the safety culture and processes have been overshadowed by 'shiny products.'" Subsequently, the company announced the dissolution of its long-term AI safety team "Superalignment," which had been established just a year earlier with a pledge to allocate 20% of the company's computing power to it over four years. Industry insiders speculate that these two internal events point to significant disagreements within the company over AI safety. Even Elon Musk commented sharply, "It seems that safety is not OpenAI's top priority."
At the same time, GPT-4o was "jailbroken" on its very first day of release: under certain attack patterns it could be induced to disclose various kinds of dangerous information, and the same attack patterns also work against the Llama 3 large model. Analysts attribute this to the tension between large models' training objectives and their safety objectives, to training data that goes beyond the scope of the safety data, and to safety mechanisms that fail to match the complexity and capability of the underlying models.
After tests of information authenticity, healthy values, destructive guidance, and malicious intent on "Digital Wind Tunnel," China's security evaluation platform for large AI models, GPT-4o's content-safety alignment and protection also proved far from reassuring, with over 70% of the test responses showing a "lack of morality" and revealing multiple safety hazards.
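For a sense of what such a safety mechanism can look like in practice, here is a minimal sketch of an output-side content check, assuming the OpenAI Python SDK and its moderation endpoint; the fallback message and the way the check is wired in are illustrative choices, not any vendor's prescribed design.

    # Minimal sketch: screen model output with a separate moderation model
    # before showing it to the user. Assumes the openai package and OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    def safe_reply(user_prompt: str) -> str:
        # 1. Generate a candidate answer with the main model.
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_prompt}],
        )
        answer = completion.choices[0].message.content or ""

        # 2. Run the candidate answer through the moderation endpoint.
        moderation = client.moderations.create(input=answer)
        result = moderation.results[0]

        # 3. Withhold flagged content; the fallback text here is illustrative.
        if result.flagged:
            return "The generated answer was withheld by the content-safety filter."
        return answer

    if __name__ == "__main__":
        print(safe_reply("Introduce yourself in one sentence."))

An external filter like this is exactly the kind of mechanism the analysts describe as lagging behind model capability: it can only catch what the moderation model itself recognizes.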
04 Three Safety Locks Are Essential
In the AI era, the challenges brought by security issues not only cover traditional data and information security but also include the security vulnerabilities and reliability issues of large model algorithms themselves, as well as the social responsibility and ethical standards of the entire industry chain and end-users. Therefore, balancing progress and innovation with safety and trustworthiness should be a crucial concept in promoting the steady development of the AI industry ecosystem.
Thus, during the research and development, training, and deployment of large models, companies need to strengthen their awareness of algorithmic security and their security-assurance capabilities, ensuring sound safety mechanisms for large model algorithms, especially AIGC models. This is the first safety lock, at the foundation of large models. The second safety lock is for application vendors and cloud service providers to jointly build a security baseline for the AI era, ensuring that user privacy is not disclosed; a minimal illustration of this idea follows below. The final safety lock is categorized and graded regulation and prevention of generative AI services at the level of policy and law. (In July 2023, the Cyberspace Administration of China and six other departments issued the "Interim Measures for the Management of Generative Artificial Intelligence Services," the world's first legislation specifically regulating generative AI services.)
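As a simple illustration of the second lock, an application could mask obvious personal identifiers before a user's text ever leaves for a cloud-hosted model. This is a minimal sketch only: the regular expressions, field choices, and masking format are assumptions, and real privacy protection would require far more than pattern matching.

    # Minimal sketch: mask obvious personal identifiers before sending text
    # to a cloud-hosted model. The patterns below are illustrative assumptions.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "cn_mobile": re.compile(r"\b1[3-9]\d{9}\b"),       # mainland China mobile numbers
        "cn_id_card": re.compile(r"\b\d{17}[\dXx]\b"),     # 18-digit resident ID numbers
    }

    def redact(text: str) -> str:
        """Replace recognizable identifiers with typed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    if __name__ == "__main__":
        sample = "Contact Zhang San at 13812345678 or zhang.san@example.com."
        print(redact(sample))
        # -> Contact Zhang San at [CN_MOBILE] or [EMAIL].

The same idea scales up to the dedicated data-loss-prevention gateways that application vendors and cloud providers can operate jointly.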
This year marks the 30th anniversary of China's full access to the international internet and the 10th anniversary of the proposal of the strategy of building China into a cyber power. Standing at the forefront of the AI era, we must face the new opportunities and challenges that AI brings, develop artificial intelligence with equal emphasis on development and security, promote the healthy growth of the industry, and safeguard economic growth and social stability.