Dr Mehmet Yildiz


Artificial Intelligence & Society

Let’s Talk About the Dangers of Deep Fake Technologies Causing Stress to People.

A viewpoint and an informed approach to dealing with a double-edged sword that poses critical risks for individuals and society.

Photo by KoolShooters on Pexels

Time to Re-Examine Deep Fake Technologies with Firmer Measures

Imagine your friends calling you one day about a viral video showing your face, your naked body, and your voice in a malicious post. You don’t remember posting such a video.

And you cannot believe the shocking behavior and appalling words attributed to you. You pinch yourself, thinking it is a nightmare, but you are wide awake and find yourself victimized in cyberspace.

Welcome to the deep fake world!

This is no longer a hypothetical situation; it has happened to several people who found themselves victimized on porn sites and popular social media platforms. Thus, concerns about deep fake technologies are not trivial matters; they require serious consideration and corrective action.

Simple video editing tools have been used to produce fake videos for years. With a bit of diligence, we usually notice the falseness of those superficial edits and don’t take them too seriously.

However, artificial intelligence technologies, commonly known as “deep fake” tools, now make synthetic videos increasingly difficult to distinguish from real ones.

This article briefly introduces deep fake technologies with key use cases, highlights major concerns, and offers a few practical solution considerations based on my experience in the emerging technology domains.

An Overview of Deep Fake Technology

Deep fake audio and video technologies can be used separately or together, and there are different use cases for each approach. Combining them, however, can create more convincing end products.

Within the computer vision field, deep fake videos use artificial intelligence technologies including machine learning, deep learning, neural networks, generative adversarial networks, and autoencoders.
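To make the autoencoder idea more concrete, here is a minimal, hedged sketch in Python using PyTorch (an assumption; the article does not prescribe a framework). It only illustrates the widely published shared-encoder, two-decoder structure behind many face-swap experiments, with random placeholder tensors instead of real data.

```python
# A conceptual sketch only (assumption: PyTorch). One shared encoder learns a
# common face representation; each identity gets its own decoder. Random
# tensors stand in for aligned face crops; no real data pipeline is included.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training objective (sketch): each identity is reconstructed by its own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch for identity A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder batch for identity B
loss = (nn.functional.l1_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.l1_loss(decoder_b(encoder(faces_b)), faces_b))

# The "swap": a frame of identity A is encoded, then decoded with B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(loss.item(), swapped.shape)  # -> torch.Size([8, 3, 64, 64])
```

Real projects add face detection, alignment, and far larger networks and datasets; the point here is only how a shared representation makes the swap possible.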

I identified valuable use cases and valid concerns when conducting design thinking workshops for deep fake technology solutions. The concerns revolve around safety, security, reputational damage, fake news, mass deception, and blackmail resulting in financial loss.

Non-consensual use of footage of politicians, executives, and celebrities is the most common abuse. These public figures can be strategically targeted on social media, and their fake videos might even leak into traditional media if initial diligence is not applied.

While it is challenging to change the actual meaning of a video with basic tools, AI technologies make it much easier to mimic human traits, such as facial expressions and emotions recognized from photos and videos. Several research institutes have created remarkable samples using deep learning techniques.

A simple technique is slightly reducing or increasing the speed of speech in a video. Even this basic technique can confuse viewers, giving an altered impression of the speaker, for example making a healthy person appear impaired. We have seen examples of such altered videos on YouTube.

Editing and rearranging the text behind audio files with speech fonts and markup is straightforward and quick using software packages. Proprietary tools are usually expensive, but some tools in the public domain are freely available to anyone.

Some social media sites have tools to identify fake videos by comparing them with the originals. Unfortunately, by the time such videos are discovered, the damage of misinformation has already occurred, and compensation is usually not possible.
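As an illustration of the comparison idea, here is a minimal sketch assuming the open-source Pillow and imagehash Python packages; the file names and the threshold are made up for the example, and real platforms use far more sophisticated pipelines.

```python
# A minimal sketch of frame comparison via perceptual hashing. Near-identical
# frames produce similar hashes, so a large Hamming distance between a suspect
# frame and the known original can flag manipulation for human review.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_frame.png"))  # placeholder file names
suspect = imagehash.phash(Image.open("suspect_frame.png"))

distance = original - suspect   # Hamming distance between the two 64-bit hashes
if distance > 10:               # threshold is an assumption; tune per use case
    print(f"Frames differ significantly (distance={distance}); flag for review.")
else:
    print(f"Frames look consistent (distance={distance}).")
```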

There are also tools to turn a robotic performance into a natural one in video games and movies; modulation is one technique used to achieve this. From a technical perspective, the more footage available and the larger the training dataset, the more convincing the outcome can be.

While deep fake applications and tools were initially produced by proprietary technology companies, many open-source communities now produce them. For example, there are more than 80 open-source deep fake projects on GitHub.

Deep fake technology can be beneficial for various purposes, such as education and training. It can also be ideal for empowering disabled people to share their content effectively. For example, we know that Stephen Hawking shared his valuable messages via computerized tools as he had lost his voice due to ALS.

Movies and video games can be made more attractive with the contributions of deep fake technologies. For example, inserting a character that would be logistically impossible to film into a movie or game is a value proposition in the media and entertainment industry.

Business leaders in international organizations can share their content in multiple languages using their own faces. For example, the CEO of a large corporation can deliver the same speech in various countries in the local audience’s native language. Softening a foreign accent in speech is also a valuable use case for multinational teams.

And educators use these types of videos for inspiration and motivation in the teaching and learning process.

Major Concerns of Deep Fake Technology

My purpose in this section is not to scare people but to give a more balanced perspective on the pros and cons of deep fake video and audio technologies. The key point is that while we take advantage of these technologies, we also need to be mindful of their risks and take precautions proactively and collaboratively.

Ethics and law are critical aspects of emerging technologies like artificial intelligence. Even though there are many concerns about AI, I only touch on the deep fake video and audio technologies in this post. I plan to post more on various aspects of ethical and legal matters related to AI.

Consumers love deep fake videos and audio for various reasons, such as education and entertainment. Entertaining and educational videos are usually well received by the public and hence have the potential to go viral with the power of social media.

You have probably seen Barack Obama’s face voiced by Jordan Peele in this 2018 video. Peele, an actor, comedian, and filmmaker, produced it using artificial intelligence to warn about the future of fake news.

Major concerns about deep fake technologies relate to ethics and law. Fraud tops the list across industries. For example, tampering with medical imagery is a critical concern in healthcare.

In addition, considering the power of cryptocurrencies and social engineering, deep fake videos can be used to blackmail public figures, wealthy people, and business organizations.

When these powerful techniques fall into the wrong hands, such as those of terrorists, the impact can be significant. The fear is valid because AI tools are available to the masses. Sophisticated and organized misuse of AI could even threaten our existence on this globe. I will touch on this critical point in another article.

Visual and audio features can be easily manipulated using AI. Speed adjustment is one of the most commonly used techniques, and such manipulations can easily change the context of content.

When AI enters the equation in video editing and distribution, it induces reasonable concerns and heightens fears.

I added an audio version of an article at the end. A reader sent me an interesting private message when she heard the female voice, saying, “I thought you were a man from your profile picture and name.” While this message gave me giggles at the time, it also raised concerns, as people can easily be confused about a speaker’s identity and gender in audio files.

We use audio files to create podcasts. A media company offered me a free trial to convert my blog posts to podcasts, and I was impressed by the efficiency of turning them into podcasts with a few clicks. This is an excellent opportunity for bloggers who aspire to publish their text materials as podcasts.
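To illustrate how little effort such a conversion takes, here is a minimal sketch using the open-source pyttsx3 text-to-speech package (an assumption; it is not the commercial service that offered the trial, and the text and file name are placeholders).

```python
# A minimal text-to-speech sketch with pyttsx3, which uses the operating
# system's built-in voices. A few lines turn a blog post's text into an audio
# file, which illustrates both the convenience and the risk discussed here.
import pyttsx3

blog_text = "This is the body of a blog post that will become a podcast episode."

engine = pyttsx3.init()
engine.setProperty("rate", 170)               # speaking rate in words per minute
engine.save_to_file(blog_text, "episode.wav") # queue synthesis to an audio file
engine.runAndWait()                           # blocks until the file is written
```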

But this opportunity also brings the risk of blog posts being turned into podcasts with altered content, beyond the control of the bloggers. For example, someone can copy a blog post, make minor edits, and turn it into a podcast. This simple use case might have profound implications, such as breaching copyright and causing financial harm to bloggers and writers.

Significant concerns about deep fake videos and audio include fraud, damaging organizations, defaming people, eroding public trust in governments, weakening journalism, and endangering public safety. All of these points are possible, and none of them is a trivial matter.

Even though the technology is still at a nascent stage, we expect mobile technologies to make these tools more accessible. Soon anyone will be able to create deep fake videos and audio files using smartphone applications. With such mass usage, it may become too arduous to control the situation.

We know that freedom of expression is a double-edged sword and so is the current situation of deep fake technologies. While AI technologies bring many benefits to society, they also pose many critical risks to our safety, security, and quality of life.

Governments are slow in addressing deep fake issues, but there are some promising acts from state governments in the US and from the government of China. While some media companies are also slow in addressing the issues, at least Twitter and Facebook have created some policies.

Twitter took some measures, as communicated in its official blog post titled “Building rules in public: Our approach to synthetic & manipulated media”.

In summary, “You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.” The tweet Twitter posted about this policy in February 2020 received 1,996 quote tweets, which may give you an idea of how various Twitter users reacted.

Facebook developed some measures in 2020. For example, the post Enforcing Against Manipulated Media, by Monika Bickert, Vice President of Global Policy Management, pointed out that Facebook was strengthening its policy toward misleading, manipulated videos identified as deep fakes.

Facebook also partnered with industry leaders and academic experts in 2020 to create the Deepfake Detection Challenge (DFDC) to accelerate the development of new ways to detect deep fake videos. “By creating and sharing a unique new dataset of more than 100,000 videos, the DFDC has enabled experts from around the world to come together, benchmark their deepfake detection models, try new approaches, and learn from each other.”
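For readers curious about what such detection models look like, here is a minimal, hedged sketch (assuming PyTorch and torchvision) of the common frame-level approach: a standard image classifier is fine-tuned to score face crops as real or fake. It is only a skeleton of the kind of models benchmarked in challenges like the DFDC, not a competition entry.

```python
# Frame-level deepfake detection sketch: fine-tune an off-the-shelf classifier
# to output a single "real vs. fake" score per face crop. Random tensors stand
# in for a real dataset of aligned face crops extracted from videos.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                        # in practice, start from pretrained weights
model.fc = nn.Linear(model.fc.in_features, 1)    # one logit: probability the frame is fake

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.rand(16, 3, 224, 224)             # placeholder face crops
labels = torch.randint(0, 2, (16, 1)).float()    # 1 = fake, 0 = real

logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference time, per-frame probabilities are usually averaged into a video-level score.
video_score = torch.sigmoid(logits).mean().item()
print(f"loss={loss.item():.3f}, video fake-probability={video_score:.2f}")
```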

According to InfoSecurity Magazine, “YouTube became the first major social media platform to remove a deep fake video of Speaker of the United States House of Representatives Nancy Pelosi. The clip had been slowed down to make it appear that Pelosi was slurring her words, causing her to come across as almost drunk enough to start a conga line.”

What Can We Do?

As a technology professional, I recommend a holistic approach to address the concerns. Wearing my solution and enterprise architect hats, I see a role for each actor in the lifecycle of deep fake solutions. The key actors in the solution lifecycle are analysts, developers, architects, project managers, governance bodies, and consumers.

While a technology and design focus is critical for providers, awareness and educational readiness for consumers are also essential. Deep fake concerns can be added to security and safety toolkits in the workplace and at home for personal Internet use.

Since the distribution of fake videos and audio files has significant adverse effects, one of the most critical roles and responsibilities relates to social and traditional media companies. They can take effective and automated measures to reduce risks.

For example, as most targets are reputable public figures, an AI system could mark these videos, apply a screening process, and keep the risky ones in triage before publishing them. This approach is low-hanging fruit for social media organizations, considering they already have built-in AI capabilities, including infrastructure and applications.
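As a sketch of what such a triage step could look like, here is a purely hypothetical Python example; every function name in it is a placeholder I made up, not a real platform API.

```python
# Hypothetical triage sketch. All callables passed in are placeholders a
# platform would supply. It only illustrates the flow described above: uploads
# that both depict a known public figure and score high on a deepfake detector
# are held for human review instead of being published immediately.
def triage_upload(video, detect_fake_probability, matches_public_figure,
                  hold_for_review, publish, threshold=0.7):
    fake_probability = detect_fake_probability(video)
    if matches_public_figure(video) and fake_probability >= threshold:
        hold_for_review(video, reason=f"possible deep fake (score {fake_probability:.0%})")
    else:
        publish(video)
```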

And journalists play a critical role in this process too. I can’t go into details here, but at a high level, before using any sensational video or audio in traditional media, journalists need an informative and comprehensive verification checklist.

As responsible consumers, we need to be mindful while consuming video and audio files. For example, before jumping to a conclusion about a shocking piece, we can question and cross-check the validity of an artifact using online sources.

We should encourage users to report these videos, and service providers need to monitor reported cases promptly. As responsible users, we should follow up and request acknowledgment of reported cases if no action is taken.

From a future and sustainability perspective, we have some emerging tools and validation processes. Developing algorithms to detect deep fake videos is a viable solution. In addition, one of the tools is Blockchain technology with digital signatures. I introduced Blockchain in this article: The Blockchain: Trust without Trusting.
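To show the digital-signature part of that idea, here is a minimal sketch assuming the Python cryptography package; the video bytes are a placeholder, and anchoring the signature on a blockchain is out of scope here.

```python
# Provenance sketch: the original publisher signs a hash of the video file, so
# anyone can later verify that a copy is byte-for-byte identical to what was
# signed. Any altered copy fails verification.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the original video file..."  # placeholder content
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)

# Later, a platform or viewer verifies a copy against the published signature.
try:
    public_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("Signature valid: the file matches the original publisher's copy.")
except InvalidSignature:
    print("Signature invalid: the file was altered after signing.")
```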

The second tool is collaborative damage control using ethical hacking processes by providers, distributors, and consumers. I introduced ethical hacking in this article: Ethical Hacking: Critical Role & Responsibilities of Ethical Hackers in Digital Transformation Initiatives.

Here is a paper highlighting various social and legal challenges that regulators and society will face. The paper touches upon the potential role of online content dissemination platforms and governments in addressing deep fakes. The authors offer three proposals: one from a technology point of view and two from a regulatory angle, aimed at restraining harmful deep fake outcomes.

Conclusions

We suffered a lot from the adverse effects of manipulated photos over the last two decades. Now, similar concerns regularly arise around fake videos and voice recordings, affecting our mental health and well-being.

While deep fake technologies bring many opportunities to artists, entrepreneurs, scientists, technologists, and educators, they also pose serious ethical, legal, and financial concerns for the public. With a forward-thinking approach, we need to take precautions before the situation goes out of control.

Since this is a societal issue, dealing with deep fake videos and audio files requires a collective effort. It might not be possible to mitigate the risks through our actions entirely in the short term, but we can significantly reduce them and address pressing concerns with concerted efforts in the longer term.

As deep fake videos use sophisticated artificial intelligence tools and processes, they can be convincing even to experienced journalists, informed consumers, and technologists.

Therefore, we need to educate all types of consumers, create policies, develop sustainable solutions, and constantly monitor activities to prevent harm from deep fake technologies.

Here is another story regarding our ethical concerns in technology.

Thank you for reading my perspectives. I wish you a healthy and happy life.
