Icarus aflame

The Misapplication of AI

We should be genuinely worried about the hubris of the captains of AI.

I’m not talking about disingenuous marketing fluff designed to lure venture capital, like GitHub Copilot Workspace’s promise this week of “leveraging a second brain…as a conduit to help extend the economic opportunity and joy of building software to every human on the planet.” I mean the earnest aspirations of engineers-turned-CEOs who now set the agenda for these companies.

Having finally shipped a viable, general-purpose product thanks to large language models like the ones behind ChatGPT, these founders often speak in quiet, confident tones about the next global challenge they plan to outsmart. Listen carefully, and behind this unpretentious persona you’ll hear the disturbing assumption that expertise acquired in one specialty transfers easily to other domains. While observers in the humanities have condemned their cavalier disregard for artists and writers, AI standard-bearers often stumble when venturing into the hard sciences as well.

The poster boy for this breezy tech hubris is soft-spoken Elon Musk, who in his most recent earnings call declared Tesla an AI company rather than a car manufacturer. Yet against the advice of his engineers, Musk dropped radar and ultrasonic sensors for self-driving Teslas in favor of cameras alone. His logic? Humans drive just fine by combining what our eyes perceive with the squishy neural networks in our own heads.

Unfortunately for Tesla’s “Cybercab” dreams, neuroscientists who study human vision will tell you that the human visual system differs from a camera in countless ways, from binocular vision to aggregated retinal receptors to G-force proprioception. Signal processing can try to emulate some of this bodily prowess, but anyone who’s tried to create depth maps from 2D images will tell you it’s a hit-or-miss proposition for many edge cases.
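To see why, consider one classic way of recovering depth from ordinary images: stereo block matching. The sketch below is my own illustration, not anything Tesla or the article describes, and the file names and camera parameters are hypothetical. It works tolerably well on textured scenes but returns holes or wild estimates on flat walls, glare, rain, and low light, exactly the edge cases that matter on the road.

```python
# Sketch: depth from a pair of 2D images via stereo block matching (OpenCV).
# Illustrative only -- file names and camera parameters are hypothetical.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: slides a small window along epipolar lines to find matches.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity. Textureless or glary regions
# yield zero/invalid disparities, leaving holes or wildly wrong depths.
focal_px, baseline_m = 700.0, 0.12   # assumed focal length (px) and camera baseline (m)
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]
```

Learning-based monocular depth models fail in subtler ways, but the underlying task, inferring 3D structure from 2D projections, remains ill-posed.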

Even engineers who’ve made significant contributions to science can be caught drinking their own Kool-Aid. DeepMind cofounder Demis Hassabis deserves credit for applying AI to the protein-folding problem, but he loses his intellectual footing when venturing into other sciences.

In his TED interview on “Unlocking the Secrets of Nature and the Universe,” Hassabis confesses that particle physicists like Richard Feynman were his childhood heroes. But when he describes his ultimate dream of using AI to conduct experiments at the Planck scale to “understand the fundamental nature of reality,” he seems to be forgetting, or perhaps never bothered to learn, that physicists like Feynman have established a hard limit on science’s ability to probe subatomic reality.

As a consequence of well-established laws like the Heisenberg Uncertainty Principle, concepts like mass, particle, and even measurement break down at Planck dimensions. I fail to see how a starry-eyed engineer with access to 10,000 Nvidia chips in a Santa Clara data center is going to break fundamental limits validated by a century of experiments from actual quantum physicists.
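For readers who want the reasoning behind that limit, here is a back-of-the-envelope sketch of the standard argument (my gloss, not the article’s): by the uncertainty principle, resolving a smaller distance demands a more energetic probe, and at the Planck length that probe’s energy curls spacetime into a black hole as large as the region you were trying to measure.

```latex
% Uncertainty principle: resolving \Delta x requires momentum/energy of order
\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
E \;\sim\; \frac{\hbar c}{\Delta x}.

% At the Planck length
\ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6\times 10^{-35}\ \mathrm{m},

% the probe energy reaches the Planck energy, whose Schwarzschild radius
r_s \;=\; \frac{2GE}{c^{4}} \;\sim\; \ell_P

% already spans the region being probed: push harder and you create a black
% hole, not a measurement.
```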

Maybe we shouldn’t be surprised when AI experts cosplay as neuroscientists, physicists, or even artists, given their long-term gamble on AGI (artificial general intelligence). Since 2022, AGI has served the same function that the metaverse did in 2020 and the blockchain in 2018: aspirational bait for tech investors to put money on The Next Big Thing. Indeed, the premise of AGI based on large language models is that expertise gathered in one domain (language simulation) can be extrapolated to literally *any* domain. Unfortunately for AGI enthusiasts, the frequent missteps of AI leaders would seem to undermine the notion that knowing enough about one topic makes you an expert in all of them.

Image by Stable Diffusion (Leonardo Anime XL)
