Of two “Original Sins”

In her book Empire of AI, investigative journalist Karen Hao writes: “The promise propelling AI development is encoded in the technology’s very name.” She relates how, in 1956, twenty scientists gathered at Dartmouth College to form a new discipline to study the question “Can machines think?”. “They came from fields such as mathematics, cryptography, and cognitive science and needed a new name to unify them. John McCarthy, the Dartmouth professor who convened the workshop, initially used the term automata studies to describe the pursuit of machines capable of automatic behavior. When the research didn’t attract much attention, he cast about for a more attractive phrase. He settled on the term artificial intelligence.” [i]

Anthropomorphizing automata

“The name artificial intelligence,” Hao continues, “was thus a marketing tool from the very beginning, the promise of what the technology could bring embedded within it. Intelligence sounds inherently good and desirable, sophisticated and impressive; something that society would certainly want more of; something that should deliver universal benefit. The name change did the trick.” Hao cites Cade Metz, a longtime chronicler of AI, who called this rebranding “the original sin of the field” and the source of “so much of the hype and peril that now surround the technology”. In essence, that original sin comes down to the “casual anthropomorphizing” of non-human automata, which makes it possible for AI developers to describe their automata’s functions in terms of “learning”, “reading” and “creating”, as if their software “acts” or is capable of “acting” and “inferring” just like humans.

A fata morgana

In Webster’s College Dictionary, an automaton is defined as “a mechanical figure or contrivance constructed to act as if by its own motive power; a robot; a person or animal that acts in a monotonous, routine manner, without active intelligence; a mechanical device, operated electronically, that functions automatically, without continuous input from an operator; anything capable of acting automatically or without an external motive force”. [ii] When a humanoid automaton is defined as a person acting without active intelligence, we can be sure that no machine-based automaton functions with active intelligence either. All that automata such as AI can do is show a semblance of intelligence. Artificial Intelligence is a fata morgana, a hallucination, because the term embodies a contradictio in terminis. To be sure, automata do exist and they “do” as programmed. They look intelligent but lack any and all intelligence.

Artificial General Intelligence

To keep the AI bubble from bursting, the AI Elite is upping the ante by promising that AI will soon be turned into Artificial General Intelligence, or AGI. AGI will complete the “anthropomorphization” of AI, since it is supposed to match and even outsmart human intelligence. In fact, the coining of the term AGI cleverly obfuscates the fact that AI still functions without intelligence. We’re being told that all that is needed to make AI function with intelligence, i.e. to turn AI into AGI, is more and more and more data fed into ever more sophisticated computing power installed in many more data centers. This will take trillions of dollars in capital, but the investment is hyped as extremely wise because it supports the development of a new breed of automata that are advertised as capable of applying, all by themselves (without human instructions), what they were originally trained to do on specific tasks to completely new, unspecified (“general”) tasks that they were never trained in. This is something that humans are quite capable of, because humans can understand data and use their intelligence to judge its relevance and applicability in solving all sorts of problems.

Kepler, Newton and AI

Whether all these drummed-up trillions will eventually go up in smoke depends on providing a definitive answer to the question of whether AI systems can be “trained” to understand what they’re doing and “generally” apply that understanding in solving new problems they were never asked to solve. And so, a Harvard University / MIT team designed an elegant test to check whether AI systems are capable not only of predicting that the sun will rise, but also of explaining why it rises every morning. They trained an AI system to do what Johannes Kepler (1571 – 1630) did: predict the elliptical movement of the planets around the sun. Then, they checked whether their “Keplerian” AI model was also capable of doing what Isaac Newton (1643 – 1727) did: explain that the planets’ trajectories are determined by the laws of gravity. Kepler could predict that an apple will fall; Newton explained why it falls. In the words of the researchers, they checked whether what they called an AI foundation model can uncover a “deeper domain understanding” that explains the real “world model” that underlies the “foundation model”, much like how Kepler’s predictions of the movement of planets led to Newton’s discovery of the laws of gravity. [iii]
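By way of illustration only, here is a deliberately simplified sketch in Python of that kind of probe. It is not the researchers’ code (they work with transformer-based foundation models); the simulator, the linear predictor and all names such as simulate_orbit and GM are invented here. The idea it mimics: fit a model purely to predict orbital trajectories (the “Kepler” step), then check whether the accelerations implied by its predictions follow Newton’s inverse-square law (the “Newton” probe).

```python
# Toy sketch (not the authors' setup): fit a purely predictive model on
# simulated orbits, then probe whether its predictions imply GM / r^2.
import numpy as np

GM = 1.0     # gravitational parameter (arbitrary units, assumed here)
dt = 0.01    # integration time step

def simulate_orbit(r0, v0, steps=2000):
    """Integrate a 2-D Keplerian orbit with symplectic Euler; return states [x, y, vx, vy]."""
    pos, vel, states = np.array(r0, float), np.array(v0, float), []
    for _ in range(steps):
        acc = -GM * pos / np.linalg.norm(pos) ** 3   # ground-truth Newtonian force
        vel = vel + dt * acc
        pos = pos + dt * vel
        states.append(np.concatenate([pos, vel]))
    return np.array(states)

# Training data: (state_t -> state_{t+1}) pairs from several orbits.
orbits = [simulate_orbit([1.0 + 0.1 * i, 0.0], [0.0, 1.0 - 0.05 * i]) for i in range(5)]
X = np.vstack([o[:-1] for o in orbits])
Y = np.vstack([o[1:] for o in orbits])

# "Kepler" step: a one-step predictor fitted by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("one-step prediction error:", np.mean((X @ W - Y) ** 2))

# "Newton" probe: do the model's implied accelerations follow GM / r^2?
test = simulate_orbit([1.3, 0.0], [0.0, 0.85])          # held-out orbit
pred_next = test[:-1] @ W
implied_acc = (pred_next[:, 2:] - test[:-1, 2:]) / dt   # predicted change in velocity
r = np.linalg.norm(test[:-1, :2], axis=1)
newton_acc = GM / r ** 2
print("force-law mismatch:", np.mean((np.linalg.norm(implied_acc, axis=1) - newton_acc) ** 2))
```

Whether the implied accelerations line up with GM / r² is exactly the question being probed. The actual study asks it of far more powerful models and, as the next section relates, finds that accurate trajectory prediction does not mean the force law has been recovered.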

Predicting isn’t explaining

The Harvard/MIT team concluded that while AI models are quite capable of making accurate case-specific (“Keplerian”) predictions by aligning readily accessible data, they fail to “encode” the generally applicable “world model” of Newton’s laws. This is precisely the limitation that precludes the realization of AGI. [iv] To be sure, AI is capable of producing duplicates of world models, but these duplicates lack any and all understanding of the laws and principles that underlie and “move” the world model. Put differently, an AI system can predict but is incapable of explaining its prediction, which means that it cannot predict beyond what it was tasked to predict on the basis of the data put into it. Putting in more data may enhance the similarity with the “world model”, but it won’t create one grain of understanding of the forces, laws and postulates that orchestrate and drive the predicted events or patterns. To form a “world model”, one needs to encode in the model an explanatory set of functions that describe not just what will happen but why the world works that way.
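The distinction can be made concrete with a standard piece of textbook physics, added here purely for illustration: Kepler’s third law is a regularity that can be fitted to observations, while Newton’s law of gravitation explains why that regularity holds, because the former follows from the latter.

```latex
% Kepler's third law: an observed regularity ("what happens")
T^2 \;=\; \frac{4\pi^2}{GM}\,a^3
% Newton's law of gravitation: the explanation ("why it happens")
F \;=\; \frac{GMm}{r^2}
% For a circular orbit of radius a, equating gravitational and
% centripetal force recovers Kepler's law from Newton's:
\frac{GMm}{a^2} \;=\; m\left(\frac{2\pi}{T}\right)^{2} a
\quad\Longrightarrow\quad
T^2 \;=\; \frac{4\pi^2}{GM}\,a^3
```

A model that has merely fitted the first equation can reproduce orbital periods; only a model that encodes the second can say why those periods are what they are, which is roughly the sense in which the researchers speak of a “world model”.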

World models

Now that it has become clear that AI is incapable of producing an accurate model of the laws that govern the world of physics, we had better abandon all dreams of using AI to produce a “world model” that encompasses the living world and, ultimately, the world of Man. If we were to place our trust in automata to discover who we are and to guide us in our “pursuit of happiness”, we would tread on dangerous ground. Not only because the substitution of artificial for human intelligence will dehumanize and robotize us, but because we would have placed our trust in machine-based systems that cannot even produce a correct model of the physical world.

The original sins

The “original sin” of the field of AI was no more than combining the words “artificial” and “intelligence” to create the fallacy that automata can function with or “produce” intelligence. The “original sin” in the field of Man was of a different order. It was described in the authentic Greek version of the Gospel by the verb “hamartanō”, which means “to miss the mark.” In his book The Mark, the Scottish psychologist, writer and teacher Maurice Nicoll (1884 – 1953) defined Man’s Mark as the point from where inner evolution starts, as the right place within oneself, the place of “I”, which is, according to the New Testament, the place where Man’s highest self can experience “one-ness” with God Whose name is “I am”. [v] This is the “one-ness” that was “lost and found” by the Prodigal Son, who, when he told his Heavenly Father that he had sinned, spoke the Greek word “hēmarton” (“ἥμαρτον”): “I have missed the Mark”. As the story goes, he then divulged that he had found the Mark in “the direction of (‘eis’ / ‘εἰς’) the heaven (‘ton ouranon’ / ‘τὸν οὐρανὸν’) and under the eyes (‘enōpion’ / ‘ἐνώπιόν’) of thee (‘sou’ / ‘σου’).” [vi] The word “eis” specifically indicates an inward direction or movement. The Heaven and the Father’s likeness are inside the Prodigal Son, not outside. Heaven is the Kingdom of Heaven, the attainment of which is the key message laid down in the Gospels. It is the highest possible spiritual level of Man.

+ + + + + + + + +

[i] Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI; Karen Hao; Penguin Press, New York, 2025; pp. 89-90.

[ii] Random House Kernerman Webster’s College Dictionary, 2010.

[iii] What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models; Keyon Vafa, Peter G. Chang, Ashesh Rambachan and Sendhil Mullainathan (Harvard University and MIT); 14 August 2025; https://doi.org/10.48550/arXiv.2507.06952

[iv] For an accessible explanation of the Harvard/MIT study, see: AI Models Are Not Ready to Make Scientific Discoveries; Alberto Romero; The Algorithmic Bridge; July 15, 2025.

[v] The Mark; Maurice Nicoll; Watkins Publishing, Somerset, England; First published 1954.

[vi] Luke 15:18.