Between Real and Fake, What Is Real?
  • Sang Lim Hyeji, Lee Hwang Hayoung
  • Published 2021.05.03 09:51

PHOTO FROM FREEPIK

 

Technology never stops developing, and we draw strength from it to live better lives. However, like a double-edged sword, the infiltration of scientific techniques into our daily lives may actually be harming humanity. SMT looks at how technology both aids and hurts people's everyday modern lives.
 

PHOTO FROM ASIAECONOMY

 

A double-edged sword

AI, artificial intelligence, has received a great deal of attention in society and is considered the way to the future. In particular, with the emergence of AlphaGo, the world's interest in AI has deepened. AI is cutting-edge computer programming that models human intelligence, using logical thinking, learning, and judgment much as a human would. In other words, AI mimics the intellectual activities performed by humans, and it forms a huge field of computer science and information technology that studies human thinking, learning, behavior, and self-development. Currently, AI is being utilized in a variety of areas such as neural networks, pattern recognition, natural language recognition, image processing, and robotics. AI has also made great progress in deep learning. In the beginning, AI consisted of rules programmed into computers, but it has grown to collect data through the Internet; AI is now a learning machine that analyzes big data and learns from it. Advances in deep learning have gone hand in hand with advances in hardware technology and the advent of new mathematical modeling methods and algorithms. Deep learning is also being applied to the manufacturing and service industries through convergence, and to everyday life in areas such as self-driving cars, Facebook, and Google. To learn more about AI, look up The Sookmyung Times Issue No. 366.
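The shift described above, from rules written by hand to systems that learn from data, can be pictured with a small sketch. The example below is purely illustrative and is not from the article: it assumes the scikit-learn library and invents a tiny spam-filtering dataset to contrast a fixed, hand-coded rule with a model that infers its own rule from labeled examples.

# Illustrative only: a hand-coded rule versus a model that learns a rule
# from data, mirroring the shift from rule-based AI to machine learning.
# The dataset and threshold are made up for this sketch.
from sklearn.tree import DecisionTreeClassifier

# Toy "emails", each described by (number of suspicious words, number of links)
emails = [[0, 0], [1, 0], [5, 3], [7, 4], [0, 1], [6, 5]]
labels = [0, 0, 1, 1, 0, 1]  # 0 = normal, 1 = spam

# Early, rule-based approach: a human writes the rule explicitly.
def rule_based_filter(email):
    suspicious_words, links = email
    return 1 if suspicious_words > 3 else 0

# Learning-based approach: the model infers its rule from the labeled examples.
model = DecisionTreeClassifier()
model.fit(emails, labels)

new_email = [[4, 2]]
print("Rule-based decision:", rule_based_filter(new_email[0]))
print("Learned decision:   ", model.predict(new_email)[0])

Either approach can label the new example, but only the learned model adjusts automatically as more data arrives, which is the property that lets deep learning scale with big data.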
Using the deep learning capabilities of AI, Deepfake technology has emerged. Like CG in movies, it uses artificial intelligence to synthesize a person's face or body parts onto existing footage, combining digital technology and AI with conventional synthesis techniques to create fake videos. Deepfake technology is developing day by day, and it is becoming more and more difficult to distinguish Deepfake images from original ones. Because the two are so hard to tell apart, a number of crimes have occurred. In particular, celebrities, who appear in large numbers of videos, can fall victim to Deepfake pornography. Sex crimes caused by Deepfakes are not limited to celebrities; ordinary people are also becoming victims. Because these videos can synthesize the faces of ordinary people into pornographic images, the technology is frequently used to produce material posted on the Internet, such as on Twitter and porn sites.1) Because of the seriousness of this problem, a petition titled "Please strongly punish the illegal video creating platform 'Deepfake' causing female celebrities to suffer" was posted on the national petition board on January 12. Just one day after the petition was posted, it had been signed by 163,000 people. Beyond this, additional problems are arising from the abuse of AI, and its targets range from celebrities to ordinary people.
 

PHOTO FROM THE STATE OF DEEPFAKES

 

What you must know about Deepfake

Victims continue to suffer from AI abuse and Deepfakes, and direct victims of Deepfakes are numerous. According to "The State of Deepfakes", a report released by a Dutch cybersecurity research company, 96% of Deepfake videos are pornographic, and 25% of those feature Korean female entertainers. In other words, a quarter of all Deepfake pornography targets Korean women. Many of these female entertainers do not even realize that Deepfake videos of them exist or where the videos have been uploaded, because Deepfake videos are made quickly and with great precision. According to SBS News, one female group had Deepfake videos posted about its members even though only two to three months had passed since their debut. During an interview, a Deepfake expert said, "It is difficult to find clues to the creator of Deepfake videos because there are many expert video creators."2) In the end, because such material is produced so rapidly, it is incredibly difficult for viewers to know whether they are watching a Deepfake video or an authentic one. Moreover, the situation is not confined to celebrities. Cases of "acquaintance humiliation", in which individuals' profile photos and videos are used to sexually harass women they know, have increased because of Deepfakes.
Deepfake videos spread rapidly on social media, in various online communities, and on pornographic sites, so they are highly accessible. Not only can Deepfake videos be easily found on Twitter and Instagram, but they are also shared frequently. Some accounts buy and sell videos through specific keyword searches, and some accounts can be accessed by users of any age as long as the purchaser has money. In addition to harming the victims, this also harms child users. Indeed, over the past year, as remote lessons were adopted to stop the spread of COVID-19, the number of teachers suffering from the creation of Deepfake videos has increased. According to a survey of 8,435 teachers conducted by the KFTU (Korea Federation of Teacher Unions) this year, 7,836 respondents (92.9%) said they feared violation of their portrait rights during remote classes, and 651 respondents (7.7%) said they had already fallen victim to portrait rights violations. Furthermore, children who come into contact with Deepfake materials from an early age learn to accept them without pondering the ethics involved. This negatively affects children's emotional development and makes it harder to reduce Deepfake crimes in the future.
Currently, crimes related to Deepfake materials are only addressed with punishment after the fact. Criminals can be punished under the Information and Communication Network Law and the special law on sexual violence crimes that took effect in June of last year, but in reality it is difficult to remove videos and stop their spread once they have been uploaded to the Internet. As mentioned above, victims often do not even know where or how the videos are being spread. Even when criminals are found and punished, the punishment is not strong enough. One woman who fell victim to a Deepfake video reported the crime to the police, but the police were unable to identify the culprit because the video had been posted on an overseas website. The police simply told her not to post her photos on social media; rather than finding the criminal, they held the victim accountable. In another case, a victim directly identified the person responsible for the Deepfake videos and filed a complaint naming that person, yet no action was taken. In the case from the SBS interviews mentioned above, in which the woman directly sued the criminal and the number of victims totaled 14, the criminal was released on probation because he was a first-time offender. The victim said, "It was frustrating. I am very angry that he is able to live well and carelessly." The assailant's attitude toward the crime was even more brazen. In an interview with the production team, he said, "I didn't do it alone. I was asked by someone I met on the Internet to create it after receiving a picture. It was not a lot of money, so I suffered a loss creating it." As this case shows, the creators of Deepfake videos are not receiving punishments that match the severity of their crimes.
 

SCREENSHOT OF CHEONGWADAE

 

Problems must be solved

Clear standards of punishment are needed to prevent the new and increasing crimes related to science and technology. As mentioned earlier, it is difficult to stop Deepfake crimes with only the current level of punishment. Professor Lee Jaekyung of Konkuk University said, "There should be punishments for the possession and viewing of Deepfake videos. This may be criticized as unrealistic and excessive, but Deepfake video creators are killing individuals' personhood, so stricter standards like those applied to child abuse criminals are needed. The fundamental solution is to forcibly control the demand for Deepfakes."3) Lee acknowledges that current levels of punishment are insufficient. As of yet, the government has not implemented any special measures against Deepfake-related crimes other than the Special Law on Sexual Violence, which took effect in June of last year. In January of this year, a national petition on the CheongWaDae board demanded, "Please punish the illegal video platform 'Deepfake' as it torments female entertainers." In response, CheongWaDae said, "We promise to improve social perceptions so that social vigilance increases and people know that digital sex crimes are serious crimes." This response means the government will strengthen punishments for such sexual crimes.
However, punishment alone is not enough to reduce the number of crimes committed by Deepfake video creators. In addition, SNS companies and portal sites need to distinguish originals from Deepfake videos and restrict access to Deepfake materials. 'Deept', a team of students majoring in Cyber Security at Ewha Womans University, recently developed an automatic system to detect Deepfake videos. Whenever someone uploads a video suspected of being a Deepfake to a website, the program tells the site whether it is fake or not. The system is expected to help distinguish Deepfakes from original videos on social media and in online communities. Instagram is also currently running a system that uses AI to automatically delete fake posts, and it claims it can immediately identify Deepfake materials. However, Deepfake technology continues to advance, making it harder for AI to distinguish fake from original materials. Nevertheless, the current situation needs detection technology like Deept's program and Instagram's AI system, as the sketch below illustrates.
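The kind of upload check described above can be pictured with a short sketch. Everything below is a hypothetical illustration rather than the Deept team's or Instagram's actual system: the frame reading uses the OpenCV library, the file name upload.mp4 is invented, and fake_probability() is a placeholder that a real deployment would replace with a trained Deepfake classifier.

# Hypothetical sketch of an upload hook that screens a video for Deepfakes.
# The real Deept and Instagram systems are not public; the detector below is
# only a placeholder standing in for a trained classifier.
import cv2  # OpenCV, assumed available for reading video frames


def fake_probability(frame) -> float:
    """Placeholder: a real system would run a trained Deepfake detector here."""
    return 0.0  # this toy version always answers "authentic"


def screen_upload(video_path: str, threshold: float = 0.5, sample_every: int = 30) -> bool:
    """Return True if the video should be flagged as a suspected Deepfake."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:  # sample about one frame per second at ~30 fps
            scores.append(fake_probability(frame))
        index += 1
    capture.release()
    # Flag the upload if the average sampled score crosses the threshold.
    return bool(scores) and sum(scores) / len(scores) >= threshold


if __name__ == "__main__":
    if screen_upload("upload.mp4"):
        print("Suspected Deepfake: hold the post for review before publishing.")
    else:
        print("No Deepfake signal detected by this (toy) screen.")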
Another way to stop Deepfake crimes is to prevent producers from making pornography with Deepfake technology in the first place. If the producers of Deepfake videos can be identified, Deepfake-related crimes will decrease. Microsoft has made it possible for content producers to add digital hashes and certificates to their content. A browser extension or other form of authentication checks these, informs users whether the content is authentic rather than Deepfake material, and provides details about the content's producer. In other words, it is now more difficult to produce obscene materials using Deepfakes without being traced. Citizens also need to hold themselves to high ethical standards to stop Deepfake video creation; people need to consider the pain of the victims and the spillover effect of video sharing. However, moral sensitivity alone will not prevent crimes. Therefore, effort must be made at the institutional level to prevent Deepfake crimes in advance.
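The "digital hashes and certificates" idea can also be sketched in miniature. The snippet below is not Microsoft's actual system, which relies on certificates and public-key signatures attached to content as metadata; it only illustrates the underlying principle with Python's standard hashlib and hmac modules and a made-up producer key: hash the content when it is published, sign the hash, and let the viewer's software verify both later.

# Toy illustration of content provenance: hash the content, attach a signed
# hash, and verify it on the viewer's side. Real systems use certificates and
# public-key signatures; the shared secret key here is a simplification.
import hashlib
import hmac

PRODUCER_KEY = b"example-producer-secret"  # hypothetical key, for illustration only


def publish(content: bytes) -> dict:
    """Producer side: compute a hash of the content and sign that hash."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "digest": digest, "signature": signature}


def verify(package: dict) -> bool:
    """Viewer side: recompute the hash and check the signature still matches."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["digest"] and hmac.compare_digest(expected, package["signature"])


original = publish(b"original newsroom video bytes")
print(verify(original))   # True: untouched content still verifies

tampered = dict(original, content=b"face-swapped video bytes")
print(verify(tampered))   # False: altered content no longer matches its signed hash

Because any alteration breaks the match between the content and its signed hash, a viewer can tell that a clip claiming to come from a given producer has been tampered with, which is what makes anonymous Deepfake distribution harder.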
 

PHOTO FROM FREEPIK

 

We want a better world for all

Science is developing faster every day, and it will continue to develop even faster in the future. In time, the technologies accessible to people will be mind-boggling, far beyond today's imagination and expectations. As more and more technologies become available, society must use them wisely so that there are no more victims. Society needs to establish proper laws and campaign for proper ethics so that life is not dominated by technology.

 

1) Lee Jinuk, "'Free from Synthesizing' Why Do People Make Deepfake Pornography", Moneytoday, January 25, 2021

2) Kim Hyojung, "[What To Know That] From K-Pop Stars to Ordinary People Who Suffer From 'Deepfake Sexual Exploitation'...Protecting Victims, 'The Right Function of Deepfake'", SBS & SBS Digital News Lab, February 28, 2021

3) Heo Misun, "DeepFake Wins Popular Culture Headlines, Forced Abuse," Bridgenews, January 15, 2021

 

Sang Lim Hyeji / Editor-in-Chief
smt_lhj@sookmyung.ac.kr
Lee Hwang Hayoung / Woman Section Editor
smt_lhy@sookmyung.ac.kr

