Dr Mehmet Yildiz

Summary

The author, a proponent of AI, shares their negative experience with a content platform's AI system that labeled their content as "dangerous and illegal," causing frustration and discouragement.

Abstract

The author, who has worked in the technology industry for 40 years and is a proponent of AI, shares a recent experience with a content platform's AI system that labeled their content "dangerous and illegal." The content in question was a chapter summary from one of their best-selling books about improving sleep quality and defeating insomnia; it had gone viral on multiple platforms and brought them many loyal readers, and it did not suggest the use of any illegal drugs or unapproved nutritional supplements. When the author followed the appeal process, the AI system blocked the appeal and returned an obstinate remark, leaving the author considering having their attorneys write a letter for the redress of this defamation of their character.

Opinions

  • The author is a proponent of AI but has had a negative experience with a content platform's AI system.
  • The author believes the "dangerous and illegal" label is a mistake that stems from the AI system's lack of context awareness.
  • The author finds the blocked appeal and the system's obstinate response unacceptable and is considering having their attorneys seek redress for the defamation of their character.
  • The author has had similar experiences with AI algorithms on other social media sites, losing an account and all the followers gained over a decade.
  • The author argues that start-up companies must be extra careful not to depend solely on AI, as such incidents can cost them their business.
  • The author notes that humans can understand context and intention, but AI systems cannot yet.

Artificial Intelligence — Content Strategy

Detrimental Effects of Immature AI Systems on Content Platforms

Relying solely on artificial intelligence to manage a content platform is one of the worst business practices of the century, especially for start-up companies.

Photo by Somchai Kongkamsri from Pexels

Why Platforms Shouldn’t Solely Rely on Artificial Intelligence for Content Assessment

The capabilities of artificial intelligence (AI) excite many of us both within and outside of technology circles.

AI tools and sophisticated robots bring us many benefits, especially by undertaking tedious tasks humans cannot handle well and do not enjoy.

I am a proponent of AI, having worked in the technology industry for 40 years. I even invited society to be friends with AI on the cusp of the AI revolution.

Unfortunately, however, misuse of AI is common and particularly harmful to content development, distribution, and marketing platforms.

The biggest problem with AI’s use by content platforms is that AI still does not possess context awareness. This causes many issues for the platforms.

The most serious issue is the defaming of talented writers and bloggers whose inspiring content gets mislabeled as illegal and potentially harmful to society.

This insult committed by the AI system is not a trivial issue. It happened to me on a content platform and discouraged me from contributing even though I want to reach out to a broader audience and desire my content, especially hard-learned lessons, to benefit many people.

Several collaborating writers and bloggers contacted me to share their frustration and annoyance with punitive warnings issued to them based on misleading AI findings.

I want to share my recent sad and annoying story of being a victim of an artificial intelligence system belonging to a new content platform.

My Sad Story of a Platform's AI Labeling My Content Dangerous

I shared a chapter summary from one of my best-selling books on a platform, highlighting key lessons I learned to improve my sleep quality and defeat insomnia.

This piece went viral on multiple platforms and brought me many loyal readers. But unfortunately, this particular platform’s AI system labeled the content “dangerous and illegal.”

How can sharing my sleep improvement technique be harmful to society? Millions of people suffer from insomnia and sleep deprivation. I never recommend my lifestyle habits to others.

I simply narrate my experience and provide explanations supported by peer-reviewed scientific papers that have not been written for the benefit of their sponsoring businesses.

I did not suggest the use of any illegal drugs or unapproved nutritional supplements. The only supplement I mentioned for sleep was magnesium, linking the term to the National Institutes of Health (NIH) website. Scientific papers confirm that magnesium is associated with 300 metabolic pathways, several of which affect sleep.

I understand mistakes happen, and AI is not perfect. Therefore, I followed the appeal process explaining my situation and position. However, this specific AI system prevented me from appealing, saying I had already appealed it, which I had not. The catch-22 happened when the support team advised me to appeal via the portal.

I did so, spending my time only to receive the same obstinate remark from the AI system. How would you feel if you were in my situation? I feel like having my attorneys write them a letter for the redress of this defamation of my character.

Another insightful research article on the neurological effects of COVID-19, which I also repurposed on Medium, was found “dangerous and illegal.”

I am grateful that the support team of that platform understood the AI error and apologized for the inconvenience in that specific situation. Humans can understand the context and intention, but AI systems cannot yet.

In addition, I repurposed this research article a few days ago, and many readers appreciated it. It was going viral. The visibility of my content pleases me because it might be valuable to many people, especially when it is helpful for society.

However, as I suspected might happen, the worst-case scenario occurred yesterday. The platform's support team is on vacation, so the platform depends entirely on AI to manage the publishing and distribution of content.

Three of my high-impact articles were trending toward going viral. I gained many followers and subscribers within 24 hours, and some readers found my website and contacted me to learn more about these specific articles. The articles received over half a million impressions within a day.

Unfortunately, the immature AI system of the platform flagged these trending stories as harmful material and delisted them.

So, now readers cannot see the stories. I received an email informing me of a strike, with reasons for each article. One said I could not recommend supplements without qualification; the others simply cited dangerous and harmful content.

Ironically, the article did not contain a single word about using a supplement. Instead, it mentioned Vitamin B1 deficiency as the primary cause of Korsakoff syndrome. The AI presumably mistook the term Vitamin B1 for a supplement recommendation by the author. Another article, containing a condensed literature review on BDNF (Brain-Derived Neurotrophic Factor), also received a punitive strike.
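The platform's actual moderation system is not public, but the failure mode described here is exactly what a context-blind keyword filter produces. As a purely hypothetical sketch (the term list and function below are illustrative, not the platform's code), such a filter flags a factual statement about a vitamin deficiency just as readily as a genuine supplement recommendation:

```python
# Hypothetical sketch of a context-blind keyword filter.
# The term list and logic are illustrative assumptions, not the
# platform's actual system.

SUPPLEMENT_TERMS = {"vitamin b1", "magnesium", "melatonin"}

def naive_flag(text: str) -> bool:
    """Flag any text that merely mentions a supplement term,
    with no awareness of how the term is used."""
    lowered = text.lower()
    return any(term in lowered for term in SUPPLEMENT_TERMS)

# A factual statement about a deficiency disease is flagged the same
# as an explicit supplement recommendation:
factual = "Vitamin B1 deficiency is the primary cause of Korsakoff syndrome."
promo = "Take this magnesium supplement every night to cure insomnia."

assert naive_flag(factual) == naive_flag(promo) == True
```

A context-aware check would need to distinguish mentioning a substance from recommending it, which is precisely the distinction the author argues these systems lack.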

Worst of all, due to these two AI-assigned strikes, the platform not only disabled the viral stories from public viewing but also suspended my account for a week.

Because of the suspension, I cannot update the articles, appeal, or publish new content on this platform. I was so happy drafting multiple articles during this holiday period. Instead, the platform killed all my enthusiasm and deprived my readers of my content.

I link the original versions of the articles at the end of this post so that you can understand my frustration. These articles have no harmful content and are thoroughly backed by links to papers in peer-reviewed scientific journals.

Interestingly, some of my subscribers said these articles were eye-opening, as they had not come across such valuable content on this journalistic platform before. It was a pleasant surprise for them. But the juvenile AI system killed this value proposition for the company by insulting me and depriving its valuable readers.

I published these stories on two other platforms, which graciously appreciated them and distributed them to topics. But the AI system of this new start-up flagged them as harmful material.

Unfortunately, since the platform’s support team seems to be on vacation now, I cannot appeal and get the AI-assigned punishment strikes removed.

Maybe established companies like Facebook, Twitter, or LinkedIn can get away with these types of problems as their AI algorithms are much more sophisticated and improved.

However, start-up companies must be extra careful not to depend solely on AI. Such unacceptable incidents are risky enough to cost them their business.

I wonder how a start-up company can afford such a terrible situation for its business. By the way, I am not the only creator affected by these issues; many friends who post on this platform were affected by similar situations.

As mentioned before, AI algorithms adversely affected me on several social media sites. I shared some interesting examples in an article titled Immaturity of AI in Social Networks: As an artificial intelligence professional, I became a victim of AI in social media.

The ramifications of these immature AI systems were so bad that I lost my account and all the followers I had gained over a decade. In addition, the appeal system did not work on one of them, as the support team did not receive my multiple inquiries. Accepting a situation beyond my control, I created a new account in 2020.

As an editor on various public and private platforms, I am sensitive to addressing harmful materials in content and social media platforms. Despite my sensitivity and rigor, ironically, my content was flagged as dangerous and harmful. It is challenging to stomach this insult caused by faulty and infantile AI systems.

More ironically, I read some content that should not be available to society on the same platform. Unfortunately, the undeveloped AI system did not detect and alert human editors to that inappropriate and even immoral content.

For example, some comments hosted on this particular platform depict overt racism and sexism, encourage drug use, and even condemn governments for helping citizens adopt healthy behaviors.

I don’t want to give the platform’s name as my aim is not to embarrass it. My purpose is to show how detrimental dependence on AI can be for content platforms.

I am very tolerant of mistakes by humans and machines. We all make mistakes. That’s the only way to learn, but we have to correct our errors so they do not repeat.

Humans can understand context and fix mistakes. Artificial intelligence machines do not yet have this capability, and they harm relationships on platforms.

Final Words and Takeaways

I have written thousands of articles, papers, and blog posts, and several books. In the last decade, my content has never been labeled harmful or dangerous on any commercial or academic platform. But the AI system of this new platform caused me grief for the first time.

As a creator, this AI-generated situation adversely impacted my morale and enthusiasm like many other affected writers.

While providing valuable information for my readers, I also used this story as a therapeutic avenue for my mental health. Having served on ethics committees in various organizations, I firmly believe that my well-researched content is valuable to society.

Here are the three articles flagged as harmful and dangerous content, which caused my account to be suspended for a week due to two strikes given by the AI system of the platform yesterday.

These articles are distributed to topics on two other great platforms which understand the value of their creators and readers.

Why do you think the AI labeled these three articles as harmful and dangerous on a platform that depends solely on automated systems? What harm could these articles do to society?

Thank you for reading my perspectives. Despite these challenges, I am grateful for my supportive readers on multiple platforms and delighted to serve them.

Related Articles

Artificial Intelligence Does Not Concern Me, but Artificial Super-Intelligence Frightens Me

Societal Impact and Benefits of Artificial Intelligence Tools

How Technology Can Be Racist

Immaturity of AI in Social Networks

Why Sophia Is So Special and What It Means to Society

Time to Re-Examine Deep Fake Technologies with Firmer Measures

Dementia: Perspectives on Korsakoff’s Syndrome & Vitamin B1 Deficiency

Unbearable Feeling of Anhedonia: How Can We Enjoy Life Again?

Rewiring Your Brain by Activating BDNF & β-Hydroxybutyrate

I also author books on technology and artificial intelligence. I would like to link to the chapters of my book titled “On the Cusp of the Artificial Intelligence Revolution”.

Chapters of the Book “On the Cusp of the Artificial Intelligence Revolution” by Dr. Mehmet Yildiz

Please click on the bold hypertext to find relevant chapters.

Thank you for reading and providing feedback.

Introduction: Purpose of the book

Chapter 1: How to be friends with artificial intelligence and look at it from a fresh perspective

Chapter 2: Technologies Contributing to Artificial Intelligence Solutions — An overview of machine learning systems and solutions

Chapter 3: Artificial Intelligence Applications & Common Business Use Cases

Chapter 4: Societal Impact and Benefits of Artificial Intelligence Tools

Chapter 5: The Significance of Quantum Computing for the Future of Artificial Intelligence

Chapter 6: Practical Use of Artificial Intelligence in Oncology & Genetics: How AI and deep neural networks contribute to cancer & genomics research

Chapter 7: Business Values of AI For Organizations & Consumers

Chapter 8: Fundamentals of Cognitive Computing for Artificial Intelligence

I will add more chapters to ILLUMINATION Book Chapters so that members can read the book free on this platform.

Sample Health and Well-Being Stories

I also write about health. Here are some sample stories.

Brain Health, Brain Atrophy, Anxiety, Dementia, Depression, Bipolar, Schizophrenia, Heart Disease, Strokes, Type II Diabetes, Fatty Liver Disease, Metabolic Syndrome, Liver Cancer, Immunotherapy, Dysautonomia, Lungs Health, Pancreas Health, Kidneys Health, NCDs, Infectious Diseases, Cardiovascular Health, Neonatal Disorders, Skin Health, Dental Health, Bone Health, Leaky Gut, Leaky Brain, Brain Fog, Nervous Breakdown, Autoimmune Conditions, Chronic Inflammation, Insulin Resistance, Elevated Cortisol, Leptin Resistance, Anabolic Resistance, Cholesterol, High Triglycerides, Metabolic Disorders, and Major Diseases.

Here are some sample stories about nutrients.

Boron, Urolithin, taurine, citrulline malate, biotin, lithium orotate, alpha-lipoic acid, n-acetyl-cysteine, acetyl-l-carnitine, CoQ10, NADH, TMG, creatine, choline, digestive enzymes, magnesium, hydrolyzed collagen, nootropics, pure nicotine, activated charcoal, Vitamin B12, Vitamin B1, Vitamin D, Vitamin K2, Omega-3 Fatty Acids, and other nutrients that might help to improve metabolism and mental health.

About the Author

I am a technologist, postdoctoral researcher, author of several books, editor, and digital marketing strategist with four decades of industry experience.

I write articles on Medium, NewsBreak, and Vocal Media. On Medium, I established the ILLUMINATION, ILLUMINATION-Curated, ILLUMINATION'S MIRROR, ILLUMINATION Book Chapters, Technology Hits, SYNERGY, and Readers Hope publications, supporting 16,500+ writers.

Thank you for subscribing to my content. I share my health and well-being stories in my publication, Euphoria. If you are new to Medium, you may join by following this link. You may also join my seven publications on Medium as a writer by requesting access via this weblink. I write about health because it matters; I believe health is all about homeostasis. I share important life lessons from people in my professional and social circles.

You can find more information about my professional background.

If you are a writer, you can join Medium, Vocal Media, and NewsBreak as a writer and monetize your content while inspiring a large audience. Repurposing your content on these platforms can save you time and increase your income.

