
Friday, September 6, 2024

'Use AI to counter AI': Experts call for upgraded technology and systems to fight AI-powered cybercrimes amid deepfake scandal


AI technologies Photo: VCG

Experts are calling for attention and countermeasures to prevent cybercriminals from abusing new technologies such as artificial intelligence (AI)-powered deepfakes, amid growing concern over the issue around the world.


Numerous chat rooms suspected of creating and distributing deepfake pornographic material made from doctored photos of ordinary women and female service members have reportedly been discovered on the messaging app Telegram recently, with many of the victims and perpetrators known to be teenagers, The Korea Times reported last week.

Telegram has removed certain deepfake pornographic content from its platform and apologized for its response to digital sex crimes, the Yonhap News Agency reported on Tuesday, citing South Korea's media regulator.

The issue has sparked outrage among South Korean netizens, and the anger soon spread to China after some South Korean users brought the matter to Chinese social media platforms.

But this is just the tip of the iceberg in Telegram's deepfake porn scandal. On August 28, a court in Paris charged Pavel Durov, the 39-year-old Russian billionaire and founder of Telegram, with complicity in the spread of images of child sexual abuse, along with a litany of other alleged violations on the messaging app.

While Durov responded mockingly to the charges by changing his Twitter handle to "Porn King," scientists, governments and regulators around the world view the case as an urgent alert to strengthen measures against cybercrimes powered by new technologies.

Deepfake refers to technology that uses a form of AI called deep learning to fabricate images of events that never happened, hence the name.

The core principle of deepfake technology is to animate 2D photos using specific image recognition algorithms, or to implant a person's face from a photo into a moving video, The Beijing News reported, citing industry observer Ding Jiancong.
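
As an illustration only, the sketch below shows the shared-encoder, per-identity-decoder autoencoder idea that early face-swap deepfakes were built on; the layer sizes, image resolution and variable names are assumptions for illustration, not any particular tool's design.

```python
# Illustrative sketch of the autoencoder idea behind early face-swap
# deepfakes: one shared encoder learns a common face representation,
# and a separate decoder per identity reconstructs faces of that identity.
# A "swap" encodes person A's face and decodes it with person B's decoder.
# All layer sizes and the 64x64 resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# During training, faces of A are reconstructed via encoder -> decoder_a and
# faces of B via encoder -> decoder_b. After training, a swap is simply:
face_of_a = torch.rand(1, 3, 64, 64)     # placeholder 64x64 RGB face crop
swapped = decoder_b(encoder(face_of_a))  # A's face rendered by B's decoder
```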

More recently, voice synthesis has also been incorporated into the concept of deepfakes. As large AI models have matured in recent years, some AI image generation models, in pursuing greater realism, have inadvertently become accomplices in AI face-swapping or AI-generated nudity, Ding said.

For instance, the well-known large model Stable Diffusion was once used to build a one-click nudity feature that became widespread. Although the related functions were later modified to curb such behavior, the open-source nature of the technology had already opened a "Pandora's box" that is difficult to close again, Ding warned.
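
As one concrete example of such a curbing measure, the open-source diffusers implementation of Stable Diffusion runs a safety checker by default and reports whether each generated image was flagged; the sketch below assumes the standard diffusers API, and the model checkpoint name and CUDA hardware are illustrative assumptions.

```python
# Sketch: generate an image with the diffusers Stable Diffusion pipeline
# and read the safety checker's verdict. Checkpoint name and GPU usage
# are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

result = pipe("a portrait photo of a person in a park")
image = result.images[0]
flagged = result.nsfw_content_detected[0]  # True if the checker blanked the output
print("NSFW flagged:", flagged)
```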

Apart from deepfake-enabled crime, new technologies bring two other types of risk, Xiao Xinguang, chief software architect at Chinese cybersecurity company Antiy, told the Global Times.

First, new technologies will escalate traditional threats and risks. For example, in cyberattacks aimed at stealing information or in targeted ransomware campaigns, AI can significantly assist throughout the entire attack process, from discovering attack vectors more efficiently to automating attack activities, according to Xiao.

Second, the infrastructure of the new technologies will itself become a target of exploitation. Large model platforms are becoming new hubs for information assets, and the entry points of large model applications are becoming newly exposed surfaces that are vulnerable to attack, Xiao said.

Xiao believes that as AI technology advances, it is unrealistic to stop people from using AI to generate fake videos or images; it would be more effective to strictly regulate the dissemination of such technology.

Xiao's view was echoed by Zhou Hongyi, founder and chairman of 360 Security Technology. Speaking about the threats posed by AI at a forum held in North China's Tianjin Municipality on Wednesday, Zhou said that "we must use AI to counter AI."

"AI technology is profoundly affecting various industries, bringing opportunities for the development of new productive forces, but also bringing many new security challenges. It is necessary to reshape security with AI and to create security large models and reshape security products with specialized large model methodologies, which will reform the security industry," Zhou said.

Strict regulations and laws are also necessary. AI platforms should review the content that is uploaded and generated, and users should be required to register under their real names. There should also be severe crackdowns on tools and websites that support illegal activities, experts noted.



Thursday, December 7, 2023

Beware of AI-driven crimes



KUALA LUMPUR: When one speaks about artificial intelligence (AI), the first thing that comes to mind might be the T-800 terminator sent from the future to eliminate Sarah Connor in the 1984 movie The Terminator.

Since then, other movies and series have also depicted the potential use of AI in analysing human behaviour in a bid to predict and stop crimes from occurring.

The year is now 2023, however, and with the latest developments in technology, the threat of AI being used for more sinister purposes is already at our doorstep.


Deepfakes, voice spoofing and financial market manipulation could all become the future of crime when syndicates start using AI in their operations.

Recently, a deepfake video of a Malaysian leader promoting a get-rich-quick scheme has been circulating on social media.

Federal Commercial Crime Investigation Department (CCID) director Comm Datuk Seri Ramli Mohamed Yoosuf said the video is a prime example of how AI could be misused.

“The promises made in that video are too good to be true, which means it is most definitely an investment scam and there is no way that a political leader would promote such a thing. It is absurd.

“While the video here is about gaining quick wealth, there are other aspects that can similarly be manipulated, especially in politics and social engineering.


“AI is already here and because of this, it is important not only for the police but also the public to be aware of the potential risks AI could pose in the near future.

“Today’s world has shown how AI is increasingly taking over tasks and roles previously done by humans.

“While some of us might know about the advancements in this field, others might still be unaware of the potential risks that follow the swift advancements made in AI,” he told The Star recently.

Comm Ramli said that, based on current predictions, AI could be used by syndicates in illicit activities against Malaysians as early as the middle of 2024.

“Once this occurs, everything in the financial sector, every service that has gone online, could face the risk of being infiltrated.

“AI could be used in the creation of algorithms that are capable of hacking computer systems, while other algorithms could also be used to analyse data and manipulate results which gives it the potential to be used to influence or cripple financial markets,” he said.

He added that AI could also be used in advanced video and audio manipulation that can lead to potential identity theft and the creation of deepfake videos.

“In this scenario, the possibilities are limitless as crime syndicates could use deepfake images, videos or voices to dupe people and organisations.

“They could use such deepfakes in bogus kidnap-for-ransom cases, where they trick families into believing they have kidnapped a loved one, while some could even use it to create lewd or pornographic images of victims that could in turn be used to blackmail them,” he said.

He added that by creating convincing false identities through photographs or videos, syndicates could pose as a person to ask for money or trick victims into thinking that a family member is in danger.

Comm Ramli explained that deepfakes could also be used to spread propaganda and fake news, which could lead to public anxiety.

“There are indicators that AI could be used to perpetrate economic crimes.

“AI scientists are also talking about quantum computing that will enable decryption, which in turn could render all binary encryption technology that is currently in place useless.

“We have been keeping up with the latest news on the use of AI in crime in the region and are aware of an instance in Hong Kong earlier this year where a syndicate allegedly used AI deepfake technology in the application of loans,” he said.

Media in Hong Kong reported on Aug 25 that police there uncovered a syndicate which used eight stolen identity cards to make 90 loan applications and 54 bank account registrations.

In what was considered the first case of its kind there, deepfake methods were used at least 20 times to imitate those pictured in the identity cards and trick facial recognition programmes.

Six people were arrested in connection with the case.
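
For context on what such systems compare, below is a hedged sketch of a basic static face-verification check using the open-source face_recognition library; the file names are hypothetical. A single-frame embedding match like this is exactly the kind of check a convincing deepfake frame can defeat, which is why production systems typically add liveness tests on top.

```python
# Hedged sketch of the static face comparison an eKYC flow might run,
# using the open-source face_recognition library. File names are
# hypothetical. A single-frame match like this can be fooled by a
# convincing deepfake frame, hence the need for liveness checks
# (blink/turn prompts, depth or texture analysis) in real systems.
import face_recognition

id_photo = face_recognition.load_image_file("id_card_photo.jpg")
selfie = face_recognition.load_image_file("applicant_selfie.jpg")

id_encoding = face_recognition.face_encodings(id_photo)[0]
selfie_encoding = face_recognition.face_encodings(selfie)[0]

# Lower distance = more similar; the library's default match tolerance is 0.6.
distance = face_recognition.face_distance([id_encoding], selfie_encoding)[0]
is_match = face_recognition.compare_faces([id_encoding], selfie_encoding)[0]
print(f"distance={distance:.3f}, match={is_match}")
```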

Comm Ramli said that while there have not been any reported commercial crime cases involving the direct use of AI so far, that does not mean it will not become a problem in the future.

“This is why the public needs to be prepared and be in the know of such things.

“The best weapons the public will have against AI are knowledge and awareness.

“If the public in general are aware of how AI can be used, they will be extra cautious and not be easily duped by syndicates employing such tactics,” he said.

It is important to note that conventional online scams such as love scams, parcel scams and Macau scams are committed by real-life people without the use of AI.

In such scams, a person is directly involved, posing as a law enforcement officer, courier service provider or even a potential lover through phone calls, email or social media.