Critique of the AI Revolution
Recently I decided to see what all the hoopla is about with ChatGPT. I downloaded the AIPRN plugin for Chrome and gave it a task. The results were pedestrian at best, paralleling the slow decline in written language and comprehension among the general public.

I prompted the AI to write an article about my photography collection and the platform it is on. The system seemed to know what ClickaSnap.com is, describing it quite accurately and with glowing praise.
My user account, Lens4anEye, was completely misinterpreted. The article was written in the first person plural, as in "At Lens4anEye we strive…" etc. The references made it seem I was an entire team that would identify, select and promote the work of exceptional, highly qualified photographers around the globe.
A random person presented with the content thus created would have no idea that most of the text was "boilerplate" language, there only to fill column space.
I was reminded of my 5th-grade writing assignments, where I was supposed to use at least three references to write a factual paper on some subject. I always liked to write about natural resources; hence my collection of The Natural Resources of… Brazil, Argentina, Colombia, Spain et al. My "training sources" were the Encyclopaedia Britannica, Compton's and maybe National Geographic. My biggest concerns were meeting the length requirement and changing the wording sufficiently so as not to appear to have merely copied. I was never concerned with sounding repetitive or wordy.
That half century ago was a time for learning to write at all. A certain level of reading comprehension was required to meet the assignment with both content and grammar. I suspect I could obtain an equally pedestrian report using ChatGPT with the prompt "summarize the natural resources of Brazil in a 5-page report." The "sounding like a 5th-grade boy" part would be optional.
The great success of AI text generation is predicated on the reader having a low comprehension level, or at least one comparable to the AI's writing skill.
I'm thinking AI would completely fail at writing comedy, puns, double entendres, satire and parody. For that you need not a computer with Artificial Intelligence but one with Artificial Cunning.
A great portion of our lives today is 2-dimensional. Strawberries have a multitude of compounds which create the taste of strawberries. Most things which merely resemble strawberries contain only the one or two dominant flavoring compounds, presenting a rendering similar to strawberries. It is a 2-dimensional product which the vast majority of people take for strawberry.
The same process applies to bananas, apples, oranges, lemons, et al. Bottled juices are primarily grape juice, sugar and water no matter which flavor is printed on the label.
Everything is going artificial. Even the spokes-faces and talking heads are getting animated. It is the realm of the film "Looker," in which a high-tech company, Digital Matrix, scanned actors and actresses so they could be digitally placed into sets. That was 1981.
Why pay the likes of Tucker Carlson tens of millions per year when they can composite a face and voice to be the avatar of political misinformation for cents on the dollar, with no residuals?
Anthropomorphized cats, dogs, pigs, chickens, tunas, bears and tigers make greeeeat role models and representatives for marketing purposes. The sooner an avatar replaces a human in the media the better the bottom line can be.
Action-movie "actors" are barely actors at all. Most of the action is CGI, and faces (if at all) are pasted onto the heads of motion-capture figures. Voiceovers are akin to the dubbing directors use when making films for multiple language markets.
The marketability of the resulting film is based on the expectations and acceptance of the audience. Putting a Jennifer Lawrence or a Henry Cavill into such a movie serves only as a box-office draw; their acting skills are far better used in other genres.
When CGI is employed to make realistic scenes possible to film, it is deployed in its proper realm. When it is used to make the impossible possible, it moves into another place altogether. I do not begrudge the fantasy universes created on film for what they are: escapist entertainment. But when those same methods are employed to alter people's perception of reality, they take society to a dangerous place.
Some people, both juvenile and adult, can get caught up and lost in the altered reality of CGI and now its AI-augmented incarnation, the deep fake. Once, the seeming reality of fakery was limited to entertainment; one had to go to the theater to be mesmerized by the sights and sounds of artificial reality. Now that the greater portion of our information gathering is accomplished via video and audio feeds in our hands, continually, the line is blurred.
So pervasive is the altered reality of our information acquisition via electronic media that a fair portion of the population is convinced they actually live in a simulation. Their ability to discriminate between the real world and the make-believe ones is compromised.
For some people the world in their view screens is just as real as nature, if not more so. It might have something to do with desirability: what they see and what they have are so disparate that they opt for the artificial.
The preference for a manufactured world stems from the lack of critical thinking, which used to be taught starting in elementary school. Now children learn that their opinions and beliefs are more important than facts and reality. What they feel about themselves tops what everyone else sees.
Rightly or wrongly, everyone now can present however they wish. The number of scantily clad female warrior avatars is far out of proportion to the number of females at the game controllers.
While technologies strive to blur the boundary between the real world and their creations, it is the education and acceptance of the viewer which drives the direction of content. Japan and nearby national economies are flooded with cartoonish decor and corporate logos. Streets are lined with neon and LED-panelized messaging for the people to consume. Adults dress in Hello Kitty logo-wear and pastel colors and carry plastic accessories. One can only imagine the western influence the new Barbie® movie will have on these economies.
All the synthetic landscapes are to some extent AI-driven. A director tells the creative team "this is what I want" and sets about making it happen. Through feedback loops the results get honed and retooled until the director is satisfied. A lot of creative human minds make the scenes happen. They add the depth and nuance which makes the final product a success or leaves it a failure.
Leaving the generative process to digital algorithms alone will lead to flat, pedestrian outcomes which will not stand up to scrutiny.
The problem is that with lowered expectations the mundane will be sufficient. In the cartoon shorts of my youth, characters moved their entire bodies in fluid motion; it took 7,200 hand-painted cels to make a 5-minute short. Today characters remain stationary while their hands or lips move. Some animations are composed solely of static frames, with meta-movement provided by flashy backgrounds or by alternately enlarging and shrinking the frame.
While the animation style suffers greatly, the people watching don't care, since they are still getting the story. AI-generated imagery will easily pass the "acceptability test" because the standard is so low.
The perception of quality and respectability is far more important than actual quality. Otherwise, why are there knockoffs of Rolex watches and Gucci bags?
AI systems are quite capable of generating an entire book on command. However, would it be worth reading? What percentage of those people picking up the book would finish reading it? And would they realize it was produced by an algorithm?
So many paperbacks are formulaic in the first place; they should be easy to replicate. Pop songs are no different: four chords and a theme. Even human writers have a difficult time being truly different and creative.
In one experiment I saw, a person took four hit country songs, stacked them in a digital editing application, then picked one line after another from among the four, producing a reasonably coherent fifth song from the parts of the others. It made sense at various points while at others it was nonsensical.
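That cut-up procedure is simple enough to sketch in a few lines of Python. The round-robin selection rule below is my assumption (the original experimenter likely picked lines by ear), and the "lyrics" are invented placeholders:

```python
def cut_up(songs):
    """Build a 'fifth song' by round-robin selection: take line 1 from
    song 1, line 2 from song 2, and so on, wrapping around the stack."""
    stacked = [song.splitlines() for song in songs]
    length = min(len(lines) for lines in stacked)
    return "\n".join(stacked[i % len(stacked)][i] for i in range(length))

# Four stand-in "songs" (invented filler, not real lyrics), one line per row.
songs = [
    "dusty road at dawn\nshe left without a word\nmy truck broke down\nraise a glass tonight",
    "neon sign still hums\nhe never wrote her back\nthe radio plays low\ngone before the light",
    "boots out on the porch\nmama said be strong\nwhiskey on my breath\nfields of summer gold",
    "train rolls through the pines\nold dog by the door\nher picture in my hand\nstars over the farm",
]
print(cut_up(songs))
```

Line by line the output hops from source to source, which is exactly why the result reads as sensible in places and nonsensical in others.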
Back around 1969 a girl in my computer science class wrote a poetry generator she called "Semi-sense With Rhythm and Rhyme." It produced four-line stanzas until it was shut down. They always rhymed, and the lines contained the requisite number of syllables. It sometimes sounded lucid and profound. Most of the time it was merely garbage and spew.
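A toy version of such a generator is easy to reconstruct. This sketch is my own invention (the 1969 program's vocabulary and rhyme scheme are unknown to me): it fills an AABB stanza from phrase lists pre-grouped by rhyme and pre-counted for syllables, which is why the output always scans and rhymes while meaning anything only by accident:

```python
import random

# Toy vocabulary: four-syllable openers and four-syllable endings,
# grouped by end sound. A real generator would need a far larger list.
OPENERS = ["the silver moon", "a lonely crow", "my weary heart", "the winter wind"]
RHYMES = {
    "ight": ["glows in the night", "fades out of sight",
             "burns cold and bright", "takes sudden flight"],
    "ay":   ["waits for the day", "drifts far away",
             "has gone astray", "kneels down to pray"],
}

def stanza(rng=random):
    """One four-line stanza, rhyme scheme AABB: each line is an opener plus
    an ending drawn from a single rhyme group, eight syllables per line."""
    lines = []
    for group in rng.sample(list(RHYMES), 2):        # two rhyme groups: AA, BB
        for ending in rng.sample(RHYMES[group], 2):  # two distinct endings each
            lines.append(f"{rng.choice(OPENERS)} {ending}")
    return "\n".join(lines)

print(stanza())
```

Because rhyme and meter are guaranteed by construction, every stanza sounds like verse; whether it sounds lucid or like garbage and spew is left entirely to chance.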
The AI models have gotten better but still lack the warmth and depth of a human mind. That may well come in time. Until then we have low standards of acceptability.
AI sees time differently than humans do. We count the hours, days, weeks and years until each of us leaves this reality. The AI is not burdened by such temporality. Even if it has no sentience at all, it can process information and issue results which span millennia. Consider if a person asked it, "Is there a way to reverse climate change and avoid pre-extinction conditions?" With homage to Isaac Asimov, the reply might be "insufficient data to answer at this time." "Then work on it."
An AI system just might be able to ingest enough data to solve our dilemma. We might not like the answer it reaches, but the Principle of Imminent Collapse suggests that the results of change, and of curtailing it, will settle into a New Equilibrium. We might not like those new conditions and limitations, but we will have had a chance at the comfortable lives we so cherish.
Fear of technologies is misplaced fear of the human mind. Every installation of a powerful artificial mind, sentient or not, brings with it the notion that it may in reality be no better than ourselves. What could be worse than decision making that is far faster than ours and completely logical?