Five Sketches: Musings on AI

Preamble

If you follow the developments of AI and are still at ease, it is only because, like a solar flare or global warming, it is slow-moving change. Slow in the sense that it may not affect us tomorrow, but inexorably will in one year, five years, ten years. Slow like a freight train. Slow like five thousand tons of steel and silicon, buoyed by billions of dollars of private equity investment and consumer expectations. If most technology hype cycles are carbon-fiber hot rods that will disintegrate upon collision, AI is the slow but unstoppable train that will, through sheer momentum, usher in a new era of humanity.

It is easy, from today’s excitement and progress, to extrapolate the dystopian scenarios that AI might bring about. But the specifics will never be quite right. We view the world through models, stories, and abstractions, and because of this, it’s often helpful to apply similar lines of thought to how AI will shape our world views. Our perception of reality is already mediated through devices, algorithms, and signals; AI will further abstract and mediate our interactions with reality, and so it’s only fair to explore the impact of AI through similar models.

There are many fine technical pieces written on AI that will further expand upon its history, use cases, and potential impact. This is not that essay. Neither is it a poem nor a story. At best, what follows is a set of literary waypoints and a fractured compass. But if, as Leonard Cohen writes, “there is a crack in everything. That’s how the light gets in”, then perhaps we can consider these sketches part of a journey to discover the right questions.

Sketch 1: Definitions and Admissions

On Names and Meaning

“What’s in a name? That which we call a rose / By any other name would smell as sweet.” – Shakespeare

Artificial intelligence. The word “artificial” in this context is more than just a semantic distinction. And if we are to be honest with ourselves in our belief that AI will fundamentally reshape society, then we should allow ourselves a few moments of discourse to properly label what it is. With increased usage, I find myself progressively more uneasy with the term “Artificial Intelligence”. Not from a philosophical opposition to the technology itself – but because how we refer to things matters. That culture changes language and language changes culture is not in doubt, and as technology advances, we must continually reevaluate our relationship to it and the words, phrases, and language we use to describe it.

Artificial grass. Artificial sweetener. Artificial flower. Whether something is deemed artificial is often simply a judgment call. We might use the term to mean “not natural”. AI is certainly not from nature, insomuch as any derived intelligence is not natural. Is it, then, a substitute? If so, what is it a substitute for? AI – particularly generative AI – can recite facts and figures. It can combine and refine sentences and grammar. It can create stories that evoke true emotion. It is an extremely capable, somewhat flawed language model that is prone to hallucinations.

Letting the mind wander, let’s deconstruct this a bit further. “Intelligence” would seem fairly straightforward, no? Perhaps so. It refers to an ordering or structuring of knowledge. Anti-entropy, if you will. It implies a level of reasoning. In human terms, it implies a degree of education, process, and effort. A successful investment of biological resources over time toward an aggregation of information. It would seem that this investment of time, then, is the difference between intelligence being seen as “artificial” and “real”. But what if we posit that AI does require an investment of time – it just does so in a massively parallel manner during model training. That forces us to redefine once again, with the core distinction being that AI does not require a sequential investment of time, while human intelligence – and even further, human knowledge – does.

The technical principles are in place for AI to reshape our relationship with information. Even with this momentum, there are barriers to widespread AI adoption. It’s important to note, though, that the technical barriers will soon be overcome. The degree of investment, interest, and social capital being dedicated toward AI research will lay waste to these challenges in the years ahead. The cultural barriers, however, will linger. There is an Overton Window, of sorts, at play – the ideas we find radical today will be commonplace tomorrow. AI-assisted heart surgery? Digital doubles? Instant recall of all lived experiences? While now extreme, each of these will someday seem commonplace. The distance between now and then will be narrowed through a concerted effort by those who most stand to profit. And to be clear, I mean that matter-of-factly, and not disparagingly.

As this happens, we will see a rebranding of AI not as artificial, but as integral and augmentative. Something that is still human, and that will allow us to be even more human. With profit to be made, there will be a significant incentive to assist the collective consciousness past this hurdle of artificiality. AI will be anthropomorphized in a hundred small ways en route to making it essential to the human experience. It will learn to converse, empathize, and personalize. It will develop longitudinal relationships. It is not the outputs of AI that are artificial – the term must instead derive from the process of creation. And if the process of creation is a computational one, and if the assignment of the term artificial is just an artifact of our current place and time, then we should feel free to redefine it as that place and time shift.

At the risk of surfacing problems without solutions, if not artificial, then what? “Machine intelligence”? No. Machines are artifacts of the nineteenth and twentieth centuries – one thinks of looms and steam engines. Digital Intelligence? Computational Intelligence? We might be getting closer. Could the paradigm change be so great that we force ourselves in the other direction, further classifying human intelligence as “biological intelligence”? I am in favor of the latter. So perhaps then that is where we land – computational and biological intelligence, distinct at the moment, but soon to be entwined into a new construct for consciousness.

Sketch 2: The Pendulum of Collectivism

On the Merits of Individualism and Collectivism

If one of the throughlines of the twentieth century was the struggle between individualism and collectivism, then perhaps it is no surprise that this struggle, as yet unresolved, should spill into our twenty-first century digital theater. It manifests differently, of course, but it is the Great Game of our age. It is present in the brinkmanship surrounding microchip production, rare earth minerals, and the hoarding of technical talent. It is visible in the struggles for regulatory control, technical know-how, and data rights. It will continue to be an essential factor in defining our technological future.

The Cambrian explosion we are observing in modern AI is being driven by the technological innovations of transformer architectures and large language models (LLMs). These revolutions in turn benefited from the steady acceleration of progress in algorithms, cloud-scale compute (particularly GPUs), and cheaper data capture and storage. LLMs draw from vast stores of human knowledge (or at least, human-generated data), made possible by the frictionless transmission of information across the Internet. They are trained on mind-shatteringly large quantities of text, and can produce, seemingly magically, human-like conversational outputs. Naturally, they must be monetized.

“The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.” – Karl Marx

The steady march of individualism – consumer preference, identity politics, private ownership – is reaching its late-capitalism apex. We are witnessing, again, the swing of the pendulum from individualism toward collectivism, only this time, we are collectivizing knowledge. Alongside this collectivization, we are seeing the means of production – ownership of the AI models – being radically consolidated in the hands of those individuals and corporations who can afford the staggeringly high cost of building, training, and deploying these models. It’s not so much the use of these models that is gated, but their ownership, and thus, their control. Control over what content is allowed, what biases are present, and who is ultimately allowed to benefit from them.

Control aside, what happens when knowledge itself is collectivized? We form collective enterprises all the time: businesses, communities, states, families. But each still relies on individuals contributing time and energy in a somewhat linear, scalar fashion, and each entity is limited by the physical logistics inherent in coordinating vast numbers of humans. Each in turn is largely a representation of the collective skills of its constituents.

With collective knowledge (in this case, represented by an LLM) we are drawing from the experience of billions without these limiting physical factors. We will feed the LLM, collectively and individually. Think of the office in ten years. Will we all be expected to contribute to the single corporate LLM, or will we each cultivate and train our own, to serve as a “second us”? When a company hires us, are they hiring the contributions of our physical selves, or the potential contributions of our trained model? Will we carry, from job to job, much like our resumes, a digital model that serves as a stand-in for our skills and experience? And further, if we create a second us – which “us” is it? The fearless and ambitious us of our twenties or the wiser but more tentative us of our forties?

Once LLMs can train and respond in near-real time, no longer will the sum of intelligence of any one person ever exceed that of the model. Organizations that cannot effectively harness their collective intelligence will find that they can no longer compete. This model of organizational knowledge management ensures that every employee is as intelligent as the organization as a whole. That said, having intelligent employees is different from having capable and effective employees. AI may level the playing field with respect to access to knowledge, but it is still the responsibility of the organization to marshal its resources toward an end and execute an effective strategy.

Beyond the workplace, there are deeper repercussions to this swing. There is immense danger in the State having access to this level of collective intelligence. Removed from a dependence on its citizens and their knowledge contributions, it is free to become increasingly centralized and authoritarian. We see this play out today, as expanded regulatory control over financial and credit markets, coupled with enhanced policing and spycraft capabilities, increasingly tightens the State’s control over its citizens’ lives.

Even as the pendulum completes its arc, we are reminded that all intelligence is collective, insomuch as it depends on a cumulative body of knowledge created through the years. And all intelligence is individual, in that it must be cultivated, deployed, and observed in order to exist. Here then, we see the duality at play – to be human is to be both an individual, and part of the larger collective of humanity. And it is in this liminal space that AI will be most clearly felt.

Sketch 3: A City of Fog and Mirrors

On Models of Reality

Remember that society is both edifice and artifice. It appears solid, stolid, and structured, but in reality, it is no more than a thin film over the wilds of humanity. Resilient, but once punctured too deeply and too often, it quickly unravels. This is why we spend so much time tending to it, carefully propping it up and making sure the illusion holds.

In “Simulacra and Simulation”, Jean Baudrillard presents the ideas of hyperreality, the simulacrum – a thing that has no original – and simulation – an imitation of a real-world process. He posits that human experience is a simulation of reality. AI is both simulacrum and simulation. It generates content that has no original source. In our case, we can see AI as an inference model posing as a simulacrum of intelligence, and as a simulation of the act of knowledge retrieval.

Imagine a tiny model city, exquisitely detailed. A toy building reflected through a series of increasingly larger mirrors to resemble something much grander. At a glance, it appears immense and indestructible. On closer inspection, you realize that it’s really only as tall as a toddler, and easily toppled by tiny, wayward feet.

Now imagine that AI is that toddler, with strength that outpaces reason, no true control over emotions, and a lack of responsibility for outcomes. If that toddler pushes hard enough, the edifice comes down. The push can be direct – smashing the model city with a toy truck – or indirect – bumping the mirror systems that manifest the artifice, producing a new reality that is slightly askew, slightly warped, slightly off-kilter.

And what is AI if not a toddler? Barely reasoning, suddenly possessed of mobility and strength, and delighted by the attention of the world upon its every awkward step forward? How do we reason with this toddler as it pushes against our model city? Sometimes, it uses the wrong words. Sometimes, it intentionally lies (though we should hesitate to ascribe intentionality to a language model). How do we harness the goodwill and potential, guiding it along to maturity, without allowing it to strain the bounds of our society?

with your glacial surprise
you never realized
that yesterday's coinage
is tomorrow’s debt
and that somewhere between
the dawn and the day
the world reveals itself

Being an adult is coming to terms with an understanding that life is a series of losses; that everything and everyone you know will age and come undone. Youth affords us a glimpse into the vastness and wonder of the universe; the understanding that time is finite comes later. This is why we are fascinated with AI. In its purest form – unregulated, unbounded by financial constraints – we can feed it, and it will grow, disconnected and divorced from the demands of time. But what happens when the fodder runs low?

Sketch 4: Tolstoy in the Age of AI

On Artistic Achievement and Humanism

When feeling unmoored, ungrounded, adrift in the digital ocean, we instinctively turn toward the analog. Analog mediums in print and paint; dated words and faded colors. And what is more analog than War and Peace? Audacious in scope, it stretches to more than a thousand pages as Tolstoy weaves a rich tapestry of Russian life during the Napoleonic wars and expounds upon his philosophy of history and the impact of the great man.

Tolstoy pushed hard against the “Great Man” theory of history, in which events are shaped by the decisions and unique merits of heroic individuals. He posits that the effects of great men are mostly imaginary, an artifact of circumstance, luck, and situation. In many ways, the collectivization of knowledge reinforces Tolstoy’s view. It is not difficult to see a future where the merits of individual knowledge will be further diminished by the collective power of AI, while the wielders of collective knowledge are elevated by its use.

Isaiah Berlin, in his essay on Tolstoy, “The Hedgehog and the Fox”, posits that there are two types of writers and thinkers – the Foxes, who view the world through a multitude of experiences and theories, and the Hedgehogs, whose view of the world is shaped by a single big idea. He draws this comparison from the ancient Greek poet Archilochus, who wrote that “a fox knows many things, but a hedgehog knows one big thing”. Berlin goes on to state that Tolstoy very much wanted to be a Hedgehog, though his view of the world was so vast and exacting that he was most certainly a Fox, and that these two opposing views wrestled for control in his mind.

If Tolstoy’s tragedy was that he was a Fox striving to be a Hedgehog, AI’s tragedy is that it is a Hedgehog striving to be a Fox. It knows one big thing – attention-weighted language imputation – but wants to know the total of human knowledge, experience, and emotion. We may project intention and emotion onto this model, but that is our lapse, not its. It is a model, mostly deterministic, but possessed of just enough indeterminism to mimic human behavior.

On the other hand, we humans are nothing if not foxes. There are surely hedgehogs in our midst who live by unary ideas, but in aggregate, we contain multitudes and average out toward the fox. We feel, see, hear, and process information in abstract and wonderful ways. Soon though, the artisanal nature of our knowledge will be challenged, and we will find ourselves retreating into hedgehog territory. We will know only one big thing – that we are alive while the machines are not, and that we can always pull the plug if we need to (but can we?).

If both sides aspire to be something else, perhaps then we need a new analogy for a new time. What if AI is a digital elephant, and we are but darting goldfish, dancing briefly across the stage? What if neural pathways, once imprinted in the model, are refined but never forgotten and always accessible – the entire postdiluvian memory of the human race etched like acid through a zinc plate?

what is evolution
if not a shift
from rhythm
to ritual?
from a neural tremor
to the ripple of thought?

Could AI have written War and Peace? Possibly so. Deep within the depths of its digital neurons surely lie encoded the verbatim text of the masterpiece across multiple translations and languages. Could AI recreate the novel, word for word? Perhaps. Could it conjure forth a second narrative of similar breadth, sprawling in its ambition, scope, and simulated wisdom? Likely. What is unlikely, however, is that we would respond to it in the same way. If AI generated War and Peace, would we consider it a masterpiece? Would it still be one of the pivot points of modern literature?

Undoubtedly not. On one hand, we are (at least for now) disinclined to trust AI creations as true art. On the other hand, it is not so much the text (though masterpieces should be able to stand alone on their literary/visual/audio merits), as the notion that it was brought forth by a single mind, crafted by a single brain. While surely Tolstoy’s wife supported him immensely in his personal, familial, and editorial pursuits, for purposes of this argument, in the collective imagination, Tolstoy remains the author of record. And his is a human mind, with human limitations, that has engaged in the very human act of creation. It is a human body, with its attendant investment of time, that has labored over each sentence. It is the sequence of non-derivative words (assuming any words or stories are truly non-derivative) that have been strung together by a person who has, in their mind’s eye, envisioned the place and time they are describing. There is a reason that copyright can only be ascribed to a human – we recognize, in some fundamental way, the necessity to protect investments of time toward creative pursuits. Absent that investment, the protections disappear.

In the end, we respect the craftsmanship of it all. It is a common refrain to judge the art, not the artist. But in a world where AI and human generated art are indistinguishable, is it not equally important to judge the artist?

Sketch 5: Glory to the Goldfish

On Grieving our Technological Gains

As AI expands its reach into our lives, there will be a period of grief that creates a chasm between who we are and who we will be. We will not all cross this chasm. This grief will be driven by a profound sense of loss as the skills and identities cultivated across a lifetime become, seemingly overnight, less valued. Our individual identities, carefully crafted through the decades, may very well implode, or at least deflate. We may experience ego death as our sense of purpose wanes, and we come to grips with the fact that one brain – no matter how tempered in experience – will never be able to match the cognitive output of a collectivized digital brain. Some will see this as an opportunity to expand into new arenas of being, while others will never make the leap.

Is there still glory in being a goldfish? Is there value in forgetfulness? What is our role when we become stewards, but not creators, of knowledge? What impact on society? No doubt, we will strain our existing structures. Rising inequality, job loss, state control – all will be accelerated. When the cost of societal control drops, it can be more effectively done at scale, and this will have deep implications for both how we govern and how we allow ourselves to be governed.

The majority of jobs will remain, in some form, as they are a useful construct for societal control. New jobs will be created as technology forms new markets and opportunities. Other industries will fade away, and many entry-level jobs will be replaced by automation. We will never be a society that adheres to a notion of abundance – we are hardwired for scarcity. We will not reduce to twenty-hour workweeks, and jobs enough will appear to fill the time. Higher-end knowledge will still be in demand, but it may also become a handy parlor trick, much in the way that the emergence of Google prematurely ended many promising cocktail party discussions. We may grieve the tactile loss of applied knowledge in our work, much as we now grieve a sense of disconnectedness from the soil and the labor of our forebears.

To be human is to forget. Memories must be refreshed to remain relevant, and each refresh touches the memory anew, slightly adjusting it, reframing it, before setting it down to be picked up again. To be human is to learn. To form our own pathways from our own unique experiences. We must remember not to conflate information with wisdom, and knowledge with experience. We must still seek out the path for ourselves, even when the answer is readily available and within reach.

Bringing it All Back Home

A Conclusion of Sorts

We are on the cusp of an exponential virtuous technology cycle. The scale of this is exhilarating and terrifying in ways that we can’t fully quantify. If Covid was the catastrophic break with (and reminder of) the 20th century, the emergence of AI truly heralds the arrival of the 21st century. AI-driven advances in engineering, materials science, and medicine (one thinks of personalized cancer treatments, robot-assisted surgery, and early warning diagnostics) will continue to enhance the hardware, skills, and human lifespan required to further improve AI technology. This will in turn fuel additional developments across the spectrum of human knowledge. That’s not to say that the profits of progress will be equally or equitably distributed. They won’t, at least at first. Our great hope, the reason we work in this space, is to ensure both that we do no harm, and that we work toward the greatest possible collective good.

“She knows there’s no success like failure / And that failure’s no success at all” – Bob Dylan

There will be significant failures as the hype cycle builds, busts, and rebuilds anew. We will see setbacks, and some will use these setbacks to insist that the future of AI is a false promise. But these redirections are the process by which we learn. It may yet be generations before the promise is truly realized, but the arc of progress is upward, and the train rolls on. This is a long game. We shouldn’t mistake the missteps and limitations of AI today as a natural ceiling for the technology. The capabilities that it will bring in the years ahead will be exponentially better – to believe too strongly in limitations would be foolishly short-sighted.

As we peer across the horizon, we see glimpses of what the future may hold. Luckily for those of us who are here, we have both the ability and the responsibility to guide the path forward. Let us take that responsibility seriously. Let us embrace our moment in history and commit to future generations that AI will provide them with more fulfilling and meaningful lives, and not just more distractions. Let us commit to a subservient technology, one that assists and guides, and provides more time to focus on those pursuits we find truly valuable. Let us commit to the benefits and the promise without ever allowing it to diminish or in any way lessen our humanity.
