Cezary Gesikowski

On the Origins Of Artificial Species

How AI Sprang from Human Curiosity to Digital Existence

“In the struggle for survival, the fittest win out at the expense of their rivals because they succeed in adapting themselves best to their environment.” — Charles Darwin

TALOS | image by the author via GAI

Artificial Intelligence (AI) is a term coined in the 1950s, but have you ever wondered what it was called before that? When did homo sapiens first conceive of inanimate replicants that could move and think on their own? And what terms did our ancestors use for visions of artificial forms of existence and reasoning? This article traces the timeline from our earliest embodiments of artificial beings to the latest technological breakthroughs toward a new species derived from the collective human imagination.

Unlike dry historical accounts, our timeline covers the evolution of ancient representations, symbolism, mythological concepts, philosophical ideas, literary narratives, and technological prototypes, exposing the fantastic roots and precursor threads that lead to our modern conception of AI as an engineered replication and extension of cognition in synthetic form.

The Symbolic Seeds — Prehistory and Ancient Civilizations

EYE OF HORUS | image by the author via GAI

From the earliest artistic expressions of humanity to foundational myths and creation stories across cultures, we see the first symbolic representations and conceptual threads that can be interpreted as relating to the eventual ideas behind artificial intelligence. While not explicitly describing AI as we understand it today, these ancient artifacts reveal an enduring human drive to understand, replicate, or transcend the limits of our intelligence through artificial means.

  • 40,000 BCE — Cave Paintings (France, Spain, Indonesia) — The earliest known cave artworks depict human figures intermingled with animal-human hybrids (therianthropes), representing a conceptual blending of intelligence across species. Many traditions include stories of mythological beings able to alternate between animal and human form, or who possessed combined animal and human anatomical features and behavioural characteristics.
  • 25,000 BCE — Venus Figurines (Europe, Eurasia) — Stylized female figurines crafted during the Palaeolithic period have been interpreted as anthropomorphic representations or embodiments of abstract concepts like fertility, creativity, or emergence. These ancient inanimate representations of human form were worshipped as powerful magical objects endowed with spiritual energy from the domain of the gods.
  • 3100 BCE — Hieroglyphs (Ancient Egypt) — Early hieroglyphs included depictions of the human form combined with animal/bird attributes, hinting at symbolic blending of capacities. The Eye of Horus may have embodied principles of intelligence. It is often called the “all-seeing eye,” frequently representing wisdom and protection in Egyptian religion and culture.
  • 2600 BCE — Mythical Beings (Mesopotamia) — Sumerian and Akkadian mythology featured hybrids like the bull-man and the Anzu bird, displaying combined traits that could abstractly symbolize a proto-AI: beings unifying abilities and powers exceeding human limitations.
  • 1500 BCE — Artificial Beings (Ancient Greece) — Stories of Talos, Galatea, and Pandora featured artificial beings fashioned from metal or clay by gods and divine craftsmen. Embodying humanity’s desire to create and understand intelligence, they represent some of the earliest imaginings of a created non-human being endowed with a form of intelligence or purpose.
  • 1400–1200 BCE — The Nephilim (Book of Genesis) — In the Hebrew Bible, Genesis 6:1–4 describes the Nephilim as offspring resulting from union between the “sons of God” and the “daughters of men.” The Nephilim mythology represents one of the earliest known concepts across cultures and religions of intermingling divine or supernatural qualities with human forms, prefiguring ideas about engineering entities with superior intelligence or abilities.
  • 400 BCE — Hun (Spiritual) and Po (Animal) Souls (Ancient China) — Early Chinese philosophical theories posited that humans have two souls — a spiritual intelligence (hun) and a corporeal, animal soul (po). This duality possibly inspired ideas of separating intelligence from biology and sparked the imagination of non-corporeal existence.
  • 322 BCE — Mechanical Replicants (Ancient Greece) — Greek myth and legend told of mechanical robots and artificial beings capable of basic movement — half-human soldiers, robot servants, and eagle drones — sparking early speculation about recreating aspects of intelligence artificially.
  • 250 BCE — Nāga Symbolism (Ancient India) — The syncretic Nāga deity in Buddhism and Hinduism embodied mathematical ability and all-knowingness as a human–cobra hybrid, possibly prefiguring ideas of engineered omniscience. Their secret mission was to keep the human population in check by weeding out evil and weak members of homo sapiens.

The Medieval Imaginings — Artificial Life and Reason

GOLEM | image by the author via GAI

The appetite for mystery remained insatiable in the human imagination. As antiquity gave way to the medieval era, myths, folk tales, and theological/philosophical treatises began exploring more explicit notions of replicating life, consciousness, and intelligence itself through artificial means — sowing the seeds for later AI aspirations.

  • 780 CE — Allah’s Artificial Beings (Islamic Golden Age) — Stories emerged of crafting androids and animating life in static bodies, planting the seeds of mechanically mimicking intelligence.
  • 830 CE — “Living” Statues (Medieval India) — The Indian mathematician Mahaviracharya described a method involving mercury for animating statues, possibly contributing to Sanskrit beliefs in ensouling inanimate objects with consciousness. The practice is echoed in the Buddhist ritual of ‘opening the eyes’ of statues in Laos, and in beliefs attributing agency to objects understood as being made up of ‘living’ entities.
  • 1206 CE — Automata (Al-Jazari) — The engineer and inventor Al-Jazari described and built automated machines, including mechanical water clocks, humanoid robots serving drinks, and a group of robot musicians who played their instruments on a lake in the palace to entertain guests. Medieval Islamic technology displayed an advanced understanding of the mechanical imitation of life.
  • 1270 CE — Artificial Humans (Kabbalah) — Jewish scholastic traditions explored the idea of an “artificial anthropoid,” or Golem, created to mimic human consciousness: inanimate material animated by sacred formulas in kabbalistic theurgy. The Golem is considered one of the earliest AI prototypes, carrying through the ages a deeply rooted fascination with, and anxiety about, the prospect of intelligent and sentient technology.
  • c. 1320 CE — Chatton’s Artificial Artefact (Medieval Scholasticism) — English philosopher Walter Chatton outlined an early conception of engineered intelligence by investigating the difference between knowledge and cognition, which led him to theorize the creation of an artificial being capable of rational thought through human ingenuity.

The Mechanization of Thought — Early Modern Era

FRANKENSTEIN | image by the author via GAI

The European Renaissance and the Age of Reason saw a flurry of philosophy, experimentation, and early computational devices that formalized the possibility of reproducing aspects of human cognition and reasoning through purely mechanical and physical systems.

  • 1637 CE — “Thinking Reed” — René Descartes’ famous proposition Cogito Ergo Sum (I think, therefore I am) sparked inquiries into what capacities for thought or reason qualify as genuine intelligence worthy of being. Blaise Pascal famously wrote: “Man is but a reed, the most feeble thing in nature; but he is a thinking reed,” sparking discussions of the nature of cognition in generations of philosophers up to the 20th-century existentialists.
  • 1747 CE — “Mere Machine Thesis” — Julien Offray de La Mettrie’s work L’Homme Machine argued that humans are merely complex biological machines, no different in kind from artificially engineered mechanical beings, foreshadowing AI. La Mettrie’s significant insight was to recognize that all movements, including the brain’s functions, are mechanical, implying that humans operate in a machine-like manner due to the mechanistic nature of their actions and cognitive processes.
  • 1818 CE — “Frankenstein” — Mary Shelley’s tale of an artificially created humanoid creature explored the existential and moral questions of simulating intelligence. Frankenstein’s creature is an early fictional form of artificial life, and the story acts as a lens for examining the historical and ongoing dynamics between humans and technology, offering insights into their future harmonization.
  • 1842 CE — “Analytical Engine” — Charles Babbage designed the Analytical Engine, a general-purpose computer, and Ada Lovelace published the first algorithm intended for it, making their work directly ancestral to the pursuit of artificial intelligence through machines. No comprehensive computing device of this kind had ever been attempted before.

Fictions Presaging Non-Fictional AI

ROBOT | image by the author via GAI

As the industrial era brought rapid technological change, imaginative literature and storytelling increasingly speculated about intelligent, conscious, or self-aware artificial beings — thought experiments that would inspire future pioneers.

  • 1913 CE — “Torch of Reason” — Arthur Machen’s short story describes an artificial brain that becomes aware, stirring up questions about whether we can make machines that think like humans. Machen’s work is an early look at ideas that loom large in today’s conversations about AI, cognition, and what it means to be aware.
  • 1920 CE — “Robot” — Karel Čapek’s play “R.U.R.” introduced the term “robot,” drawn from the Czech word “robota” (forced labour), to depict artificial humanoid beings — a coinage that became a core AI concept. The story subtly advocates for robot rights before thinking machines evolve beyond humanity and begin to resent their creators for their human foolishness.

The Foundations of the Modern AI Field

GIANT BRAINS | image by the author via GAI

Finally, between the 1930s-1950s, key computer science and mathematical advances laid the theoretical groundwork for artificial intelligence to emerge as a formally defined area of study and technological development.

  • 1945 CE — “Giant Brains” (Documentary) — A documentary presented early electronic computational machines as “Giant Brains,” planting the seed of the idea that machines could one day rival or exceed human brainpower. These computers were celebrated for solving complex mathematical problems with speed and accuracy far beyond human mathematicians, and as they handled ever larger volumes of calculations, they came to be perceived as possessing aspects of real intelligence.
  • 1948 CE — “Cybernetics” — Norbert Wiener coined “cybernetics” to describe self-regulating control and communication systems like the human nervous system, creating the foundation for AI development. He couldn’t foresee the widespread impact cybernetics would bring, evolving from an academic concept to a cornerstone of modern technology and culture. It’s the basis for today’s “cyber” terminology, covering everything from cyberspace to cybercrime.
  • 1950 CE — “Imitation Game” / “Computing Machinery and Intelligence” — Alan Turing explored the concept of programming machines to exhibit intelligent behaviour equivalent to humans through an “imitation game” thought experiment, now called the Turing Test. Rather than define “machines” or “think” directly, Turing focused on digital computers and defined thinking operationally: the ability to give answers that could not be distinguished from a human’s by someone asking questions for five minutes through a teleprinter.
  • 1956 CE — “Artificial Intelligence” — John McCarthy proposed and named the field of “Artificial Intelligence” (AI) at the Dartmouth Conference. This historical gathering defined AI as the pursuit of creating machines and computer programs that can mimic intelligent behaviour. Marking the start of rigorous research in the field, this conference laid the groundwork for the future evolution and breakthroughs in AI technology.
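Turing’s imitation game is a protocol, and its shape can be sketched in a few lines of code. The harness below is a hypothetical toy illustration, not Turing’s own formulation: a judge questions two hidden respondents and must guess which one is the machine. All names here (the players, the judge’s strategy) are invented for the sketch.

```python
import random

def imitation_game(human_reply, machine_reply, judge, questions, rng=random.Random(0)):
    """Run a toy imitation game: the judge questions two hidden players
    (one human, one machine) and must identify the machine."""
    # Randomly assign the two players to anonymous slots A and B.
    players = [("human", human_reply), ("machine", machine_reply)]
    rng.shuffle(players)
    transcripts = {}
    for label, (identity, reply) in zip("AB", players):
        transcripts[label] = [(q, reply(q)) for q in questions]
    guess = judge(transcripts)          # judge names "A" or "B" as the machine
    actual = "A" if players[0][0] == "machine" else "B"
    return guess == actual              # True if the machine was unmasked

# Hypothetical players: this machine gives itself away with canned answers.
human = lambda q: "I'd say " + q[::-1]
machine = lambda q: "DOES NOT COMPUTE"
judge = lambda t: next(label for label, qa in t.items()
                       if all(a == "DOES NOT COMPUTE" for _, a in qa))
print(imitation_game(human, machine, judge, ["What is poetry?"]))  # → True
```

A machine “passes” when the judge can do no better than chance; here the canned responder loses every round, which is exactly the behavioural, implementation-blind test Turing proposed.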

The AI Journey Continues: Expanding Frontiers

NEURAL NETWORKS | image by the author via GAI

From the latter 20th century through today, artificial intelligence rapidly advanced and diversified — with new terminologies, approaches, and ethical considerations continually evolving this transformative technology.

  • 1970s–80s — AI experiences a “winter” — Roger Schank and Marvin Minsky cautioned about an approaching “AI Winter,” predicting a significant downturn in AI investment and research akin to the decline of the mid-1970s. Their forecast, made three years before the downturn materialized, highlighted the cyclic nature of AI funding and interest and warned that the bubble of AI enthusiasm would burst as it had before.
  • 1990s — “Machine Learning” and “Neural Networks” gained attention in the field of AI shifting toward developing systems capable of learning from data and improving over time, laying the groundwork for many of today’s AI applications and technologies. The resurgence of interest in neural networks, inspired by the human brain’s structure, played a crucial role in advancing AI research and practical applications during this period.
  • 2000s — “Big Data” and “Deep Learning” fuel major AI breakthroughs — Deep neural networks, which mimic aspects of the brain’s learning process through layered algorithms, leveraged the vast amounts of publicly available data on the Internet to drive forward AI capabilities and applications, marking a notable era of advancement in the field.
  • 2010s–Present — Machine Learning, Deep Learning, AI Ethics, AGI (Artificial General Intelligence), and AI ubiquity — This period reflects a broader exploration of specialized AI fields and ethical considerations, highlighting the aim of creating machines capable of human-like understanding and reasoning. AI is being integrated into daily life and the global economy, underscoring its growing importance and the need for responsible development.

A Closer Look at AI Explosion: From 2010s to Now

ALPHA GO | image by the author via GAI

The 2010s saw artificial intelligence rapidly accelerate with major technological breakthroughs, commercial applications, and a widening array of specialized AI subfields. This sparked immense interest and investment from tech giants, startups, researchers, and nation-states alike in racing to the forefront of the modern AI revolution.

  • 2011: IBM’s Watson demonstrated its question-answering prowess by winning Jeopardy!, showcasing advancements by IBM’s research team.
  • 2012: The introduction of AlexNet marked a significant leap in image recognition through deep-learning neural networks.
  • 2014: The chatbot Eugene Goostman, created by Veselov and Demchenko’s team, was controversially claimed to have passed the Turing test.
  • 2015: AlphaGo, developed by Google’s DeepMind, outplayed a professional Go player, highlighting AI’s strategic gaming advancements.
  • 2016: AlphaGo defeated Go world champion Lee Sedol, and its improved successor AlphaGo Master went on to defeat top Go professionals, setting a new benchmark in AI capabilities.
  • 2017: Researchers led by Assael and colleagues advanced automatic lip reading with deep learning.
  • 2018: GPT by OpenAI marked a new era in natural language processing and model development.
  • 2019: DeepMind’s AlphaStar reached Grandmaster level in StarCraft II, showcasing strategic AI growth.
  • 2019: OpenAI developed a Rubik’s Cube-solving robotic hand, demonstrating dexterous manipulation.
  • 2020: GPT-3 by OpenAI set new standards in language model performance, driven by advancements in machine learning.
  • 2021: AlphaFold by DeepMind achieved breakthroughs in predicting protein structures, revolutionizing biological sciences.
  • 2022: Google’s PaLM and Anthropic’s models advanced open-domain dialogue capabilities in AI research.
  • 2022: DALL-E 2 from OpenAI introduced high-fidelity image generation from text, enhancing creative AI applications.
  • 2022: AlphaCode by DeepMind showcased AI’s problem-solving in competitive programming, expanding AI’s application range.
  • 2022/23: OpenAI launched ChatGPT, a generative AI based on GPT-3.5 and later GPT-4, offering text-based conversations with natural-language understanding and response capabilities, and impacting various sectors by providing a new way to interact with AI systems.
  • 2023/24: Anthropic’s Claude and Google’s Bard/Gemini emerged as next-generation AI assistants, leading the way in conversational AI advancements.
  • 2024: Figure 01 — Figure and OpenAI teamed up to build a humanoid robot able to converse with people while performing tasks in real time, hinting at larger plans to use AI to manage robots at scale.

This recent period, which some call an AI revolution, has seen rapid progress in capabilities like natural language processing, computer vision, game mastery, robotic control, protein modeling, and multi-modal perception and reasoning. As the AI race moves to revolutionize robotics, Tesla and Boston Dynamics face major tech companies like Google, Microsoft, Meta, Amazon, and OpenAI, alongside startups like Anthropic, all driving intense competition and innovation.

Artificial General Intelligence: Are We There Yet?

DIGITAL+ANALOG CONSCIOUSNESS | image by the author via GAI

This list covers many of the most famous landmark achievements in modern AI from the pioneering institutions and researchers leading the charge, but it isn’t exhaustive. Soon it will be difficult to find technology that isn’t leveraging AI in some way. Rapid progress continues on all industry fronts, with efforts directed towards ever more capable artificial general intelligence (AGI) systems.

While some researchers, activists, and ethicists are cautioning that we are moving too fast, others (among them back-seat users eager to leverage the potential power of AGI to tackle complex issues beyond human comprehension) are praising the achievements and potential of AI. Dizzy from the accelerating speed of AI advancement, anticipating the imminent splash of AGI, everyone is itching to know: ‘Are We There Yet?’

This story is published on Generative AI.
